SlideShare a Scribd company logo
1 of 14
Download to read offline
A Whole Lot of Ports:
Juniper Networks QFabric
     System Assessment

                   March 2012
Juniper QFabric System Assessment

       Executive Summary
       Juniper Networks commissioned Network Test to assess the performance, interoperability, and usability
       of its QFabric System, a converged switch fabric for cloud and large data center applications tested with
       1,536 10-Gbit/s Ethernet ports.

       Even at this unprecedented scale – by far the largest ever in a public switch test – this project loaded the
       QFabric System to only one-quarter of its maximum capacity of 6,144 10-Gbit/s Ethernet ports.

       Using industry-standard RFC benchmarks representing the most rigorous possible test cases, engineers
       stress-tested QFabric System performance in terms of unicast and multicast throughput and latency
       with separate events for Layer 2 and Layer 3 traffic. Engineers also assessed interoperability, a key
       consideration when adding QFabric technology incrementally into existing data center networks, and
       evaluated device management.

       Key results from QFabric System testing include the following:

            All QFabric System components operated as a single device, simplifying network management
             and reducing operational complexity
            Throughput for Layer 2 traffic was virtually identical in store-and-forward and cut-through
             modes, with rates approaching the maximum channel capacity for most frame sizes
            QFabric forwarding delay is low and consistent across all tests (less than 5 microseconds for all
             frames sizes up to 512 bytes) when offered loads below the throughput rate
            Average latency for Layer 3 traffic is 10 microseconds or less for most frame sizes tested
            Multicast throughput was close to line rate in all tests, regardless of frame length, with the
             system moving traffic at speeds of up to 15.3 terabits per second
            Multicast average latency was low and consistent in all tests, never exceeding 4 microseconds
            The QFabric System successfully interoperated with Cisco Nexus 7010 and Cisco Catalyst 6506-E
             switch/routers when using common data center protocols such as link aggregation, OSPF equal
             cost multipath, and BGP

       The remainder of this document discusses the test results in more detail. Besides presenting the test
       results, each section describes the test objective and procedure, as well as its meaning for network
       architects and network managers.




       The QFabric System Test Bed
       Figure 1 shows the test bed used in this project, encompassing 1,536 10-Gbit/s Ethernet edge ports; 128
       redundant 40-Gbit/s fabric uplinks; and an out-of-band gigabit Ethernet management network. From a
       network management perspective, all the various QFabric System components operated as a single
       device. Engineers used one Junos configuration file to define all interfaces and protocols. The
2




       configuration syntax is identical to Junos on other Juniper platforms. Even in very large cloud and data
Page
Juniper QFabric System Assessment

center applications, the QFabric System allows the entire data center network infrastructure to be
managed as a single entity.




Figure 1: The QFabric System Test Bed

The test bed illustrates how the QFabric System takes the key pieces of a conventional modular switch
and separates them into individual physical components, while still managing the entire system as one
entity. The QFabric System’s three components are:

        the QFabric Director (QFX3100), which performs control-plane functions analogous to the
         Routing Engine module in Juniper switches and routers
        the QFabric Interconnect (QFX3008), which ties together switch ports in the same way as a
         modular switch’s backplane
        the QFabric Nodes (QFX3500), which are analogous to line cards in a modular switch

In this test, Juniper used 32 QFabric Nodes, each with 48 10-Gbit/s Ethernet ports and four 40-Gbit/s
fabric uplink ports.

Building a 1,536-port test bed is a massive undertaking. The test bed fully occupied four standard 42U
                                                                                                         3
                                                                                                         Page




racks – two apiece for the Juniper QFabric System components and the Spirent TestCenter traffic
Juniper QFabric System Assessment

       generator/analyzers, which used Spirent’s 32-port HyperMetrics dX modules. Because of QFabric’s
       modular design, the QFabric Nodes could have been physically dispersed – for example, with one in
       each of 32 racks scattered throughout a data center.

       The QFabric System used standard structured cabling, with 3-meter direct-attached copper (DAC)
       cables between Spirent test ports and the Juniper QFabric System and MPO multimode OM3 fiber
       cables between the QFabric Nodes and the QFabric Interconnect devices. The out-of-band management
       network used copper gigabit Ethernet ports, tied together with a pair of Juniper EX4200 switches using
       Virtual Chassis technology (not shown in figure).

       As large as this test bed was, it is nowhere near the limit of QFabric scalability. A pair of QFX3008
       Interconnects can accommodate up to 128 top-of-rack switches, for a total of 6,144 10-Gbit/s Ethernet
       ports in a single system.

       The high-level goals for this project were to assess QFabric System performance, interoperability, and
       usability. Performance tests covered throughput and delay for unicast and multicast traffic, using the
       Spirent test tool to run RFC benchmarks across 1,536 10-Gbit/s Ethernet ports. Interoperability tests
       sought to verify that the QFabric System would work with switches/routers from Cisco Systems running
       common data center protocols such as link aggregation, OSPF equal cost multipath, and BGP. There
       were no separate usability tests, but engineers verified during all other tests that all QFabric System
       components were capable of being managed as a single system, reducing operational complexity.




       Unicast Throughput
       The primary goal of the unicast throughput tests was to determine how fast the QFabric System
       moved traffic with zero frame loss.

       QFabric allows network architects to build blocking or nonblocking switch fabrics. In this case, the
       overall system was 3:1 oversubscribed for unicast traffic (with 48 10-Gbit/s Ethernet ports on each
       QFabric Node [480 Gbit/s of capacity in each direction] attached to the interconnect nodes via four 40-
       Gbit/s fabric uplink ports [160 Gbit/s of capacity in each direction]).

       As defined in RFCs 1242 and 2544, throughput is the maximum rate at which a system can forward
       traffic with zero loss. With a 3:1 oversubscription, this meant the QFabric System’s channel capacity was
       essentially one-third that of the QFabric Nodes. (This document uses the term “channel capacity” to
       describe the maximum transmission rate a system will support; RFC 4689 also refers to the same
       concept as “forwarding capacity.”) Engineers repeated these tests with both Layer 2 Ethernet and Layer
       3 IP traffic, in all cases using a fully meshed traffic pattern, the most stressful possible test case.

       With a fully meshed pattern, the Spirent test instruments attached to all ports offered traffic destined to
       all other ports. This is far more stressful on the switching fabric than port-pair tests and better describes
4
Page




       the limits of system performance. Network architects and managers have a reasonable expectation of
Juniper QFabric System Assessment

being able to send traffic between any arbitrary sets of ports in a system, not just selected port pairs.
With results from a fully meshed traffic test, users can be confident they are truly seeing the limits of
system performance.

In the Layer 2 tests, engineers configured the Spirent TestCenter traffic generator to offer raw Ethernet
frames with pseudorandom MAC addresses as described in RFC 4814. These frames had no IP or other
upper-layer headers, forcing the QFabric System’s hashing algorithms to make path selection decisions
based solely on destination and source MAC addresses.

This Ethernet-only configuration is meaningful for data centers with large, flat Layer 2 domains, such as
those carrying non-IP storage traffic. Other common use cases for large, flat Layer 2 domains in the data
center include virtualization, where large broadcast domains ensure seamless migration of virtual
machines between physical hosts; and converged data centers, where data and non-IP storage traffic
shares a single high-speed fabric for transport.

The QFabric System has the ability to forward traffic in both store-and-forward and cut-through modes,
with the latter sometimes used in data centers with latency-sensitive applications. Test engineers
conducted separate throughput tests in each mode. Throughput for Layer 2 traffic was virtually
identical in store-and-forward and cut-through modes, with rates approaching channel capacity for
most frame sizes. Rates for 64- and 9,216-byte jumbo frames were slightly lower than the channel
capacity rate. Figure 2 presents the results of throughput testing.




                                                                                                            5
                                                                                                            Page




Figure 2: Unicast Throughput
Juniper QFabric System Assessment

       Engineers then moved on to throughput tests involving IP traffic. Here, engineers configured each edge
       port on the QFabric Nodes to use a different IP subnet, with IP networks assigned in sequential order
       (e.g., port 1 used 11.1.1.0/24, port 2 used 11.1.2.0/24, and so on). Test traffic contained IP and UDP
       headers, with the latter containing random source and destination port numbers. In the Layer 3 tests,
       the QFabric System’s hashing algorithms used a combination of IP and UDP data to select interconnect
       forwarding paths. In these Layer 3 tests, engineers configured the QFabric Nodes to use their default
       store-and-forward mode.

       When handling IP traffic, the QFabric System exhibited higher throughput with smaller (64- and 128-
       byte) frames and equivalent throughput for medium-sized 256-byte frames compared with Layer 2
       results.

       Layer 3 throughput rates for frame lengths of 512 bytes and above were lower than those in the Layer 2
       tests. During these tests, engineers observed some unevenness in traffic distribution on the links
       between QFabric Nodes and QFabric Interconnect devices.

       Given different traffic content, IP throughput may well change. More randomness in IP addressing
       might lead to higher Layer 3 throughput; conversely, less randomness in UDP port numbers might
       reduce throughput. Defining a one-size-fits-all model of IP test traffic was definitely not a goal of this
       project. The traffic parameters described here represent one model of network traffic; other models
       may result in different throughput rates.




       Unicast Forwarding Delay and Latency
       Delay is a critical consideration in the data center. With traffic increasingly moving between servers a
       few meters apart rather than across the global Internet, every microsecond can have an impact on
       application performance. For some applications such as video, voice, and high-frequency trading, delay
       is even more important than throughput.

       In the context of network device benchmarking, the term latency differs somewhat from its colloquial
       definition. As discussed in RFC 2544, latency describes delay at, and only at, the throughput rate. As
       such, it’s really a description of buffering capacity. RFC 4689 introduces a complementary metric,
       forwarding delay, that describes delay at any load, not just the throughput rate.

       Network Test used both metrics in this project – latency at the throughput rate, and forwarding delay
       for loads below the throughput rate. Because no production network operates at 100 percent utilization
       for any significant duration, forwarding delay is useful in understanding delays under common operating
       conditions. Latency is important in describing system limits, in this extreme case when all 1,536 ports
       are loaded to the maximum zero-drop rate.
6
Page
Juniper QFabric System Assessment

To get a more complete picture of the QFabric System’s load vs. delay curve, Network Test conducted
measurements of forwarding delay by offering fully meshed traffic at 5, 10, 15, and 20 percent of
theoretical line rate, as well as at the throughput rate. (Remember that with the 3:1 oversubscription of
the QFabric Nodes, channel capacity is equivalent to 33.3 percent of the theoretical maximum.)

Figure 3 presents measurements from the Layer 2 store-and-forward tests. QFabric forwarding delay
was much lower in step tests than delays observed at the throughput rate. Forwarding delay is less
than 5 microseconds for frame sizes up to 512 bytes with loads of up to 20 percent of theoretical line
rate. At the throughput rate – the maximum stress point where all 1,536 ports are fully loaded and
switch buffers are filled to capacity – latency is still less than 10 microseconds for 64- and 128-byte
frames, and never exceeds 40 microseconds for any frame length.




Figure 3: Forwarding Delay in Layer 2 Store-and-Forward Mode



To get a sense of how heavy a load the QFabric System handled, the inset box in Figure 3 shows
throughput rates for 1,024-byte frames, the midpoint of all sizes tested. Forwarding delay was
essentially identical for fully meshed, 1536-port loads of up to 3.0 Tbit/s, and increased only at the
throughput rate approaching 5.0 Tbit/s.
                                                                                                            7
                                                                                                            Page




Network Test also measured forwarding delay and latency with the QFabric System configured in Layer 3
store-and-forward mode. As in the Layer 2 tests, forwarding delay was less than 5 microseconds for
Juniper QFabric System Assessment

       frame sizes up to 512 bytes at intended loads of up to 20 percent of line rate, and delays were
       consistent across various intended loads. Figure 4 presents the Layer 3 forwarding delay measurements
       with the QFabric System configured in store-and-forward mode.




       Figure 4: Forwarding Delay in Layer 3 Store-and-Forward Mode

       The inset box in Figure 4 presents the rates used to obtain these measurements. Here again, forwarding
       delay was essentially identical for fully meshed, 1536-port loads of up to 3.0 Tbit/s, and increased only
       at the throughput rate of more than 4.4 Tbit/s.

       As noted, Network Test also measured latency with the QFabric System configured in cut-through mode.
       RFC 1242 requires that cut-through latency use a different measurement method than with store-and-
       forward devices.1 It’s improper to do direct comparisons between the two, so the results for cut-through
       latency are presented separately here, in Figure 5.




       1
         With cut-through devices, RFC 1242 requires measurement using a first-in, first-out (FIFO) method; in contrast, the RFC requires a last-in, first
8




       out (LIFO) method when measuring the latency of store-and-forward devices. Measurements from these two methods will differ by at least the
Page




       time it takes to put each frame on the wire (sometimes known as serialization delay or frame insertion time).
Juniper QFabric System Assessment




Figure 5: Layer 2 Cut-Through Latency

In the cut-through tests, average latency at the throughput rate appears slightly higher than in the store-
and-forward tests, but this is due mainly to differences in the measurement methods involved. Network
Test was unable to complete load-vs.-delay measurements in Layer 2 cut-through mode, as in the store-
and-forward tests, so results shown here are only for the throughput (maximum zero-drop) rate. The
inset box shows the load used in these tests for 1,024-byte frames, the midpoint of frame sizes tested.




Layer 2 Multicast Performance
With the highest throughput and lowest latency of all tests in this project, IP multicast traffic really
showed off the potential of the QFabric System. That’s important for high-bandwidth applications that
use multicast, such as streaming media, telepresence, and videoconferencing. The multicast
performance results also validate Juniper’s claim that the QFabric Interconnect component is
nonblocking, no matter how many edge ports are involved.
                                                                                                              9




To assess multicast performance, Network Test used the standard throughput and latency benchmarks
                                                                                                              Page




defined in RFC 3918. As with the unicast tests, these benchmarks determine the highest forwarding rate
Juniper QFabric System Assessment

       with zero packet drops, and then measure latency at that rate. The QFabric Nodes ran in their default
       store-and-forward mode for these tests.

       Engineers forced the maximum amount of multicast replication by configuring the Spirent TestCenter
       traffic generator to emulate one transmitter and 1,535 receivers, all subscribed to the same 100
       multicast groups. The QFabric System used IGMPv2 snooping to keep track of multicast state.

       Multicast throughput was close to line rate in all tests, regardless of frame length, with the system
       moving traffic at speeds of up to 15.3 terabits per second. The minor difference between observed
       rates and the theoretical maximum is explained by clocking differences between the QFabric Nodes and
       the Spirent TestCenter traffic generator. To compensate for these clocking differences, engineers
       reduced the intended load to 99.99 percent of nominal line rate. Figure 7 shows throughput for IP
       multicast traffic.




       Figure 6: Multicast Throughput

       Multicast latency was low and consistent across all frame sizes tested. Low, deterministic latency is
       essential for video and voice applications, both of which often use IP multicast. In these tests, multicast
       average latency never exceeded 4 microseconds, even when forwarding traffic on all ports at the
10




       maximum zero-drop rate. Figure 6 summarizes multicast latency measurements.
Page
Juniper QFabric System Assessment




Figure 7: Layer 2 Multicast Latency




Integrating the QFabric System Into Existing Data Centers
Although the QFabric System introduces a new approach to data center network design, it is not a rip-
and-replace technology. To verify Juniper’s claim of interoperability with existing devices, Network Test
used three common protocols to link a QFabric System with other devices. These protocols were IEEE
802.3ad link aggregation; OSPF equal cost multipath (OSPF ECMP); and BGP. For all three protocols,
Network Test verified interoperability between the QFabric System and two other switch/routers: A
Cisco Nexus 7010 and a Cisco Catalyst 6506-E.

All interoperability tests used the same topology, with two physical links on the Juniper and Cisco
devices load-sharing traffic. In the link aggregation case, the two physical links formed one logical link,
negotiated using the link aggregation control protocol (LACP). In the OSPF ECMP case, engineers
established separate OSPF adjacencies on each pair of physical interfaces, with the same routing metric
used for each.
                                                                                                              11




For both link aggregation and OSPF ECMP, the Juniper QFabric System correctly interoperated with
                                                                                                              Page




both Cisco switches. After every test, test engineers checked the counters on each interface linking the
Juniper QFabric System Assessment

       Juniper and Cisco devices and observed roughly equivalent numbers of frames forwarded on each
       physical interface.

       Besides supporting link aggregation and OSPF ECMP, Network Test also verified that the Juniper
       QFabric System can interoperate with other routers using BGP, an important consideration when a
       QFabric System exchanges routing information about the global Internet. Once again, these tests were
       conducted using the Cisco Nexus 7010 and Cisco Catalyst 6506-E.

       The BGP tests used both the external and internal variations of BGP (eBGP and iBGP). Here, engineers
       configured the Spirent TestCenter traffic generator and one interface on a Cisco device to use eBGP,
       with the Spirent and Cisco devices using different autonomous system numbers (ASNs). The rest of the
       test bed – including the Juniper QFabric System and the remaining Cisco and Spirent TestCenter
       interfaces – used iBGP, as would be the case when BGP distributes routing information within a single
       autonomous system. Test engineers also used link aggregation to bond two interfaces between the
       Juniper QFabric System and Cisco devices into single logical interfaces. Figure 9 shows the configuration
       used to verify BGP interoperability.
12




       Figure 8: BGP Interoperability Test Bed
Page
Juniper QFabric System Assessment

As in the previous tests, the Juniper QFabric System successfully established BGP sessions and
forwarded all routing and data traffic. Test engineers verified BGP interoperability both by observing
frame counters on the Spirent TestCenter test instrument and interface counters on the Juniper QFabric
and Cisco devices. The Juniper QFabric System demonstrated BGP interoperability in a test bed
running both eBGP and iBGP, an important consideration when data center networks must exchange
routing information about the global Internet.




Conclusion
These tests validated the performance of the Juniper QFabric System on an unprecedented scale. In
this largest public switch test ever conducted, engineers built a QFabric test bed comprised of 1,536 10-
Gbit/s Ethernet edge ports. In tests involving a fully meshed pattern of all 1,536 edge ports – the most
stressful test case possible – QFabric moved Ethernet frames at rates approaching the channel
capacity. Further, Layer 2 average latency is 5 microseconds or less for frame sizes up to 512 bytes at
intended loads of up to 20 percent of line rate.

Multicast throughput was higher still, since there’s no oversubscription involved. The multicast tests
validated Juniper’s claim that its QFabric Interconnect component is nonblocking, in this case moving
traffic at more than 15 Tbit/s. Multicast latency also is low and consistent, with average latency never
exceeding 4 microseconds.

A final set of tests examined interoperability between the Juniper QFabric System and other
switch/routers using common data center protocols such as link aggregation, OSPF equal cost multipath,
and BGP. Interoperability between the Juniper QFabric System and Cisco Systems equipment was
successful in all cases, demonstrating that QFabric technology can be added incrementally without the
need to replace existing data center devices.

Finally, for all the complexity of this test bed – again, the largest used in any public switch test – the
QFabric System was managed as one single device. The implication for network managers is twofold:
First, this means the QFabric System helps reduce operational complexity by presenting far fewer
devices to be managed. Second, even in very large cloud and data center applications, the QFabric
System allows the entire data center network infrastructure to be managed as a single entity.
                                                                                                             13
                                                                                                             Page
Juniper QFabric System Assessment

       Appendix A: About Network Test
       Network Test is an independent third-party test lab and engineering services consultancy. Our core
       competencies are performance, security, and conformance assessment of networking equipment and
       live networks. Our clients include equipment manufacturers, large enterprises, service providers,
       industry consortia, and trade publications.


       Appendix B: Hardware and Software Releases Tested
       This appendix describes the software versions used on the test bed. All tests were conducted in
       November-December 2011 at Juniper’s headquarters facility in Sunnyvale, CA, USA.

       Component                                        Version
       Juniper QFabric System (including QFX3100        Junos 11.3X30.9 (performance tests); Junos
       QFabric Director, QFX3008 QFabric                11.3I20111015_1217 (interoperability tests)
       Interconnect, and QFX3500 QFabric Node top-
       of-rack switches)
       Cisco Nexus 7010                                 NX-OS 5.2(1)

       Cisco Catalyst 6506-E                            IOS 12.2(33)SXI4a
       Spirent TestCenter                               3.90.0293.0000


       Appendix C: Disclaimer
       Network Test Inc. has made every attempt to ensure that all test procedures were conducted with the
       utmost precision and accuracy, but acknowledges that errors do occur. Network Test Inc. shall not be
       held liable for damages which may result for the use of information contained in this document. All
       trademarks mentioned in this document are property of their respective owners.




       Version 2012022700. Copyright © 2011-12 Network Test Inc. All rights reserved.

       Network Test Inc.
       31324 Via Colinas, Suite 113
       Westlake Village, CA 91362-6761
       USA
       +1-818-889-0011
14




       http://networktest.com
       info@networktest.com
Page




       2000464-001-EN Feb 2012

More Related Content

What's hot

11.signal strength based congestion control in manet
11.signal strength based congestion control in manet11.signal strength based congestion control in manet
11.signal strength based congestion control in manetAlexander Decker
 
4..[26 36]signal strength based congestion control in manet
4..[26 36]signal strength based congestion control in manet4..[26 36]signal strength based congestion control in manet
4..[26 36]signal strength based congestion control in manetAlexander Decker
 
Backbone nodes based stable routing for mobile ad hoc networks
Backbone nodes based stable routing for mobile ad hoc networksBackbone nodes based stable routing for mobile ad hoc networks
Backbone nodes based stable routing for mobile ad hoc networksambitlick
 
Protocol based QoS Estimation of OFDM-WIMAX Network
Protocol based QoS Estimation of OFDM-WIMAX NetworkProtocol based QoS Estimation of OFDM-WIMAX Network
Protocol based QoS Estimation of OFDM-WIMAX Networkidescitation
 
A New Data Link Layer Protocol for Satellite IP Networks
A New Data Link Layer Protocolfor Satellite IP NetworksA New Data Link Layer Protocolfor Satellite IP Networks
A New Data Link Layer Protocol for Satellite IP NetworksNiraj Solanki
 
Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...
Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...
Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...IJECEIAES
 
477818. paper(92-97)ctts زیلایی
477818. paper(92-97)ctts زیلایی477818. paper(92-97)ctts زیلایی
477818. paper(92-97)ctts زیلاییmorteza zilaie bouri
 
Performance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdf
Performance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdfPerformance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdf
Performance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdfIUA
 
UAN (Underwater Acoustic Network)
UAN (Underwater Acoustic Network)UAN (Underwater Acoustic Network)
UAN (Underwater Acoustic Network)Prof Ansari
 
Implementation_and_Analysis_of_the_6LoWPAN.pdf
Implementation_and_Analysis_of_the_6LoWPAN.pdfImplementation_and_Analysis_of_the_6LoWPAN.pdf
Implementation_and_Analysis_of_the_6LoWPAN.pdfIUA
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Recital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless networkRecital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless networkiosrjce
 
Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...
Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...
Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...Narendra Singh Yadav
 
Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...
Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...
Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...Vinicius Palhano de Siqueira
 
MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...
MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...
MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...VLSICS Design
 

What's hot (19)

11.signal strength based congestion control in manet
11.signal strength based congestion control in manet11.signal strength based congestion control in manet
11.signal strength based congestion control in manet
 
4..[26 36]signal strength based congestion control in manet
4..[26 36]signal strength based congestion control in manet4..[26 36]signal strength based congestion control in manet
4..[26 36]signal strength based congestion control in manet
 
Backbone nodes based stable routing for mobile ad hoc networks
Backbone nodes based stable routing for mobile ad hoc networksBackbone nodes based stable routing for mobile ad hoc networks
Backbone nodes based stable routing for mobile ad hoc networks
 
Protocol based QoS Estimation of OFDM-WIMAX Network
Protocol based QoS Estimation of OFDM-WIMAX NetworkProtocol based QoS Estimation of OFDM-WIMAX Network
Protocol based QoS Estimation of OFDM-WIMAX Network
 
A New Data Link Layer Protocol for Satellite IP Networks
A New Data Link Layer Protocolfor Satellite IP NetworksA New Data Link Layer Protocolfor Satellite IP Networks
A New Data Link Layer Protocol for Satellite IP Networks
 
Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...
Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...
Fairness Comparison of TCP Variants over Proactive and Reactive Routing Proto...
 
477818. paper(92-97)ctts زیلایی
477818. paper(92-97)ctts زیلایی477818. paper(92-97)ctts زیلایی
477818. paper(92-97)ctts زیلایی
 
Performance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdf
Performance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdfPerformance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdf
Performance-Evaluation-of-RPL-Routes-and-DODAG-Construction-for-IoTs .pdf
 
UAN (Underwater Acoustic Network)
UAN (Underwater Acoustic Network)UAN (Underwater Acoustic Network)
UAN (Underwater Acoustic Network)
 
Implementation_and_Analysis_of_the_6LoWPAN.pdf
Implementation_and_Analysis_of_the_6LoWPAN.pdfImplementation_and_Analysis_of_the_6LoWPAN.pdf
Implementation_and_Analysis_of_the_6LoWPAN.pdf
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
94
9494
94
 
Recital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless networkRecital Study of Various Congestion Control Protocols in wireless network
Recital Study of Various Congestion Control Protocols in wireless network
 
Kr2518691873
Kr2518691873Kr2518691873
Kr2518691873
 
H018214754
H018214754H018214754
H018214754
 
Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...
Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...
Performance Comparison of AODV and DSDV Routing Protocols for Ad-hoc Wireless...
 
Chapter4[one.]
Chapter4[one.]Chapter4[one.]
Chapter4[one.]
 
Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...
Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...
Standards based npu-switchfabricdevicesfornext-generationmulti-serviceplatfor...
 
MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...
MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...
MINIMALLY BUFFERED ROUTER USING WEIGHTED DEFLECTION ROUTING FOR MESH NETWORK ...
 

Similar to A Whole Lot of Ports: Juniper Networks QFabric System Assessment

Network Test: EX8200 Virtual Chassis Performance and Scale
Network Test: EX8200 Virtual Chassis Performance and ScaleNetwork Test: EX8200 Virtual Chassis Performance and Scale
Network Test: EX8200 Virtual Chassis Performance and ScaleJuniper Networks
 
Cisco Router Throughput
Cisco Router ThroughputCisco Router Throughput
Cisco Router Throughput3Anetwork com
 
Communication Performance Over A Gigabit Ethernet Network
Communication Performance Over A Gigabit Ethernet NetworkCommunication Performance Over A Gigabit Ethernet Network
Communication Performance Over A Gigabit Ethernet NetworkIJERA Editor
 
Performance Evaluation of Ipv4, Ipv6 Migration Techniques
Performance Evaluation of Ipv4, Ipv6 Migration TechniquesPerformance Evaluation of Ipv4, Ipv6 Migration Techniques
Performance Evaluation of Ipv4, Ipv6 Migration TechniquesIOSR Journals
 
Juniper Networks: Virtual Chassis High Availability
Juniper Networks: Virtual Chassis High AvailabilityJuniper Networks: Virtual Chassis High Availability
Juniper Networks: Virtual Chassis High AvailabilityJuniper Networks
 
Achieving congestion diversity in multi hop wireless mesh networks
Achieving congestion diversity in multi hop wireless mesh networksAchieving congestion diversity in multi hop wireless mesh networks
Achieving congestion diversity in multi hop wireless mesh networksieeeprojectschennai
 
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...Communication Systems & Networks
 
IBM 40Gb Ethernet - A competitive alternative to Infiniband
IBM 40Gb Ethernet - A competitive alternative to InfinibandIBM 40Gb Ethernet - A competitive alternative to Infiniband
IBM 40Gb Ethernet - A competitive alternative to InfinibandAngel Villar Garea
 
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual Network
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual NetworkPerformance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual Network
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual NetworkjournalBEEI
 
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMSFPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMSIAEME Publication
 
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMSFPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMSIAEME Publication
 
LTE premiere
LTE premiereLTE premiere
LTE premiereBP Tiwari
 
Speed5G Workshop London presentation of the Speed5G project approach and achi...
Speed5G Workshop London presentation of the Speed5G project approach and achi...Speed5G Workshop London presentation of the Speed5G project approach and achi...
Speed5G Workshop London presentation of the Speed5G project approach and achi...Klaus Moessner
 
A STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGS
A STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGSA STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGS
A STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGSijscmcj
 
Analysis and Survey of FPGA Based PROFIBUS Board
Analysis and Survey of FPGA Based PROFIBUS BoardAnalysis and Survey of FPGA Based PROFIBUS Board
Analysis and Survey of FPGA Based PROFIBUS BoardIJAEMSJORNAL
 
Performance evaluation of Frame Slotted.pdf
Performance evaluation of Frame Slotted.pdfPerformance evaluation of Frame Slotted.pdf
Performance evaluation of Frame Slotted.pdfYAAKOVSOLOMON1
 
40G 100G gigabit ethernet technology overview
40G 100G gigabit ethernet technology overview40G 100G gigabit ethernet technology overview
40G 100G gigabit ethernet technology overviewMapYourTech
 

Similar to A Whole Lot of Ports: Juniper Networks QFabric System Assessment (20)

Network Test: EX8200 Virtual Chassis Performance and Scale
Network Test: EX8200 Virtual Chassis Performance and ScaleNetwork Test: EX8200 Virtual Chassis Performance and Scale
Network Test: EX8200 Virtual Chassis Performance and Scale
 
Cisco Router Throughput
Cisco Router ThroughputCisco Router Throughput
Cisco Router Throughput
 
Communication Performance Over A Gigabit Ethernet Network
Communication Performance Over A Gigabit Ethernet NetworkCommunication Performance Over A Gigabit Ethernet Network
Communication Performance Over A Gigabit Ethernet Network
 
Performance Evaluation of Ipv4, Ipv6 Migration Techniques
Performance Evaluation of Ipv4, Ipv6 Migration TechniquesPerformance Evaluation of Ipv4, Ipv6 Migration Techniques
Performance Evaluation of Ipv4, Ipv6 Migration Techniques
 
N017147679
N017147679N017147679
N017147679
 
Juniper Networks: Virtual Chassis High Availability
Juniper Networks: Virtual Chassis High AvailabilityJuniper Networks: Virtual Chassis High Availability
Juniper Networks: Virtual Chassis High Availability
 
Achieving congestion diversity in multi hop wireless mesh networks
Achieving congestion diversity in multi hop wireless mesh networksAchieving congestion diversity in multi hop wireless mesh networks
Achieving congestion diversity in multi hop wireless mesh networks
 
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
 
IBM 40Gb Ethernet - A competitive alternative to Infiniband
IBM 40Gb Ethernet - A competitive alternative to InfinibandIBM 40Gb Ethernet - A competitive alternative to Infiniband
IBM 40Gb Ethernet - A competitive alternative to Infiniband
 
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual Network
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual NetworkPerformance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual Network
Performance Analysis of VXLAN and NVGRE Tunneling Protocol on Virtual Network
 
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMSFPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
 
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMSFPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
FPGA IMPLEMENTATION OF PRIORITYARBITER BASED ROUTER DESIGN FOR NOC SYSTEMS
 
LTE premiere
LTE premiereLTE premiere
LTE premiere
 
Speed5G Workshop London presentation of the Speed5G project approach and achi...
Speed5G Workshop London presentation of the Speed5G project approach and achi...Speed5G Workshop London presentation of the Speed5G project approach and achi...
Speed5G Workshop London presentation of the Speed5G project approach and achi...
 
A STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGS
A STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGSA STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGS
A STUDY OF QOS 6LOWPAN FOR THE INTERNET OF THINGS
 
i pv6
i pv6i pv6
i pv6
 
Analysis and Survey of FPGA Based PROFIBUS Board
Analysis and Survey of FPGA Based PROFIBUS BoardAnalysis and Survey of FPGA Based PROFIBUS Board
Analysis and Survey of FPGA Based PROFIBUS Board
 
A018120105
A018120105A018120105
A018120105
 
Performance evaluation of Frame Slotted.pdf
Performance evaluation of Frame Slotted.pdfPerformance evaluation of Frame Slotted.pdf
Performance evaluation of Frame Slotted.pdf
 
40G 100G gigabit ethernet technology overview
40G 100G gigabit ethernet technology overview40G 100G gigabit ethernet technology overview
40G 100G gigabit ethernet technology overview
 

More from Juniper Networks

Why Juniper, Driven by Mist AI, Leads the Market
 Why Juniper, Driven by Mist AI, Leads the Market Why Juniper, Driven by Mist AI, Leads the Market
Why Juniper, Driven by Mist AI, Leads the MarketJuniper Networks
 
Experience the AI-Driven Enterprise
Experience the AI-Driven EnterpriseExperience the AI-Driven Enterprise
Experience the AI-Driven EnterpriseJuniper Networks
 
How AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WANHow AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WANJuniper Networks
 
Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.Juniper Networks
 
Are you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devicesAre you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devicesJuniper Networks
 
Stop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WANStop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WANJuniper Networks
 
Securing IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic ApproachSecuring IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic ApproachJuniper Networks
 
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?Juniper Networks
 
Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?Juniper Networks
 
Juniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCOJuniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCOJuniper Networks
 
SDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider OrganizationSDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider OrganizationJuniper Networks
 
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's PerspectiveNavigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's PerspectiveJuniper Networks
 
vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks Juniper Networks
 
NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud Juniper Networks
 
Juniper SRX5800 Infographic
Juniper SRX5800 InfographicJuniper SRX5800 Infographic
Juniper SRX5800 InfographicJuniper Networks
 
Infographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer SatisfactionInfographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer SatisfactionJuniper Networks
 
Infographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning FastInfographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning FastJuniper Networks
 
High performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computingHigh performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computingJuniper Networks
 

More from Juniper Networks (20)

Why Juniper, Driven by Mist AI, Leads the Market
 Why Juniper, Driven by Mist AI, Leads the Market Why Juniper, Driven by Mist AI, Leads the Market
Why Juniper, Driven by Mist AI, Leads the Market
 
Experience the AI-Driven Enterprise
Experience the AI-Driven EnterpriseExperience the AI-Driven Enterprise
Experience the AI-Driven Enterprise
 
How AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WANHow AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WAN
 
Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.
 
SD-WAN, Meet MARVIS.
SD-WAN, Meet MARVIS.SD-WAN, Meet MARVIS.
SD-WAN, Meet MARVIS.
 
Are you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devicesAre you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devices
 
Stop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WANStop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WAN
 
Securing IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic ApproachSecuring IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic Approach
 
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
 
What's Your IT Alter Ego?
What's Your IT Alter Ego?What's Your IT Alter Ego?
What's Your IT Alter Ego?
 
Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?
 
Juniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCOJuniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCO
 
SDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider OrganizationSDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider Organization
 
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's PerspectiveNavigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
 
vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks
 
NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud
 
Juniper SRX5800 Infographic
Juniper SRX5800 InfographicJuniper SRX5800 Infographic
Juniper SRX5800 Infographic
 
Infographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer SatisfactionInfographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer Satisfaction
 
Infographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning FastInfographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning Fast
 
High performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computingHigh performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computing
 

Recently uploaded

Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 

Recently uploaded (20)

Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 

A Whole Lot of Ports: Juniper Networks QFabric System Assessment

  • 1. A Whole Lot of Ports: Juniper Networks QFabric System Assessment March 2012
  • 2. Juniper QFabric System Assessment Executive Summary Juniper Networks commissioned Network Test to assess the performance, interoperability, and usability of its QFabric System, a converged switch fabric for cloud and large data center applications tested with 1,536 10-Gbit/s Ethernet ports. Even at this unprecedented scale – by far the largest ever in a public switch test – this project loaded the QFabric System to only one-quarter of its maximum capacity of 6,144 10-Gbit/s Ethernet ports. Using industry-standard RFC benchmarks representing the most rigorous possible test cases, engineers stress-tested QFabric System performance in terms of unicast and multicast throughput and latency with separate events for Layer 2 and Layer 3 traffic. Engineers also assessed interoperability, a key consideration when adding QFabric technology incrementally into existing data center networks, and evaluated device management. Key results from QFabric System testing include the following:  All QFabric System components operated as a single device, simplifying network management and reducing operational complexity  Throughput for Layer 2 traffic was virtually identical in store-and-forward and cut-through modes, with rates approaching the maximum channel capacity for most frame sizes  QFabric forwarding delay is low and consistent across all tests (less than 5 microseconds for all frames sizes up to 512 bytes) when offered loads below the throughput rate  Average latency for Layer 3 traffic is 10 microseconds or less for most frame sizes tested  Multicast throughput was close to line rate in all tests, regardless of frame length, with the system moving traffic at speeds of up to 15.3 terabits per second  Multicast average latency was low and consistent in all tests, never exceeding 4 microseconds  The QFabric System successfully interoperated with Cisco Nexus 7010 and Cisco Catalyst 6506-E switch/routers when using common data center protocols such as link aggregation, OSPF equal cost multipath, and BGP The remainder of this document discusses the test results in more detail. Besides presenting the test results, each section describes the test objective and procedure, as well as its meaning for network architects and network managers. The QFabric System Test Bed Figure 1 shows the test bed used in this project, encompassing 1,536 10-Gbit/s Ethernet edge ports; 128 redundant 40-Gbit/s fabric uplinks; and an out-of-band gigabit Ethernet management network. From a network management perspective, all the various QFabric System components operated as a single device. Engineers used one Junos configuration file to define all interfaces and protocols. The 2 configuration syntax is identical to Junos on other Juniper platforms. Even in very large cloud and data Page
  • 3. Juniper QFabric System Assessment center applications, the QFabric System allows the entire data center network infrastructure to be managed as a single entity. Figure 1: The QFabric System Test Bed The test bed illustrates how the QFabric System takes the key pieces of a conventional modular switch and separates them into individual physical components, while still managing the entire system as one entity. The QFabric System’s three components are:  the QFabric Director (QFX3100), which performs control-plane functions analogous to the Routing Engine module in Juniper switches and routers  the QFabric Interconnect (QFX3008), which ties together switch ports in the same way as a modular switch’s backplane  the QFabric Nodes (QFX3500), which are analogous to line cards in a modular switch In this test, Juniper used 32 QFabric Nodes, each with 48 10-Gbit/s Ethernet ports and four 40-Gbit/s fabric uplink ports. Building a 1,536-port test bed is a massive undertaking. The test bed fully occupied four standard 42U 3 Page racks – two apiece for the Juniper QFabric System components and the Spirent TestCenter traffic
  • 4. Juniper QFabric System Assessment generator/analyzers, which used Spirent’s 32-port HyperMetrics dX modules. Because of QFabric’s modular design, the QFabric Nodes could have been physically dispersed – for example, with one in each of 32 racks scattered throughout a data center. The QFabric System used standard structured cabling, with 3-meter direct-attached copper (DAC) cables between Spirent test ports and the Juniper QFabric System and MPO multimode OM3 fiber cables between the QFabric Nodes and the QFabric Interconnect devices. The out-of-band management network used copper gigabit Ethernet ports, tied together with a pair of Juniper EX4200 switches using Virtual Chassis technology (not shown in figure). As large as this test bed was, it is nowhere near the limit of QFabric scalability. A pair of QFX3008 Interconnects can accommodate up to 128 top-of-rack switches, for a total of 6,144 10-Gbit/s Ethernet ports in a single system. The high-level goals for this project were to assess QFabric System performance, interoperability, and usability. Performance tests covered throughput and delay for unicast and multicast traffic, using the Spirent test tool to run RFC benchmarks across 1,536 10-Gbit/s Ethernet ports. Interoperability tests sought to verify that the QFabric System would work with switches/routers from Cisco Systems running common data center protocols such as link aggregation, OSPF equal cost multipath, and BGP. There were no separate usability tests, but engineers verified during all other tests that all QFabric System components were capable of being managed as a single system, reducing operational complexity. Unicast Throughput The primary goal of the unicast throughput tests was to determine how fast the QFabric System moved traffic with zero frame loss. QFabric allows network architects to build blocking or nonblocking switch fabrics. In this case, the overall system was 3:1 oversubscribed for unicast traffic (with 48 10-Gbit/s Ethernet ports on each QFabric Node [480 Gbit/s of capacity in each direction] attached to the interconnect nodes via four 40- Gbit/s fabric uplink ports [160 Gbit/s of capacity in each direction]). As defined in RFCs 1242 and 2544, throughput is the maximum rate at which a system can forward traffic with zero loss. With a 3:1 oversubscription, this meant the QFabric System’s channel capacity was essentially one-third that of the QFabric Nodes. (This document uses the term “channel capacity” to describe the maximum transmission rate a system will support; RFC 4689 also refers to the same concept as “forwarding capacity.”) Engineers repeated these tests with both Layer 2 Ethernet and Layer 3 IP traffic, in all cases using a fully meshed traffic pattern, the most stressful possible test case. With a fully meshed pattern, the Spirent test instruments attached to all ports offered traffic destined to all other ports. This is far more stressful on the switching fabric than port-pair tests and better describes 4 Page the limits of system performance. Network architects and managers have a reasonable expectation of
  • 5. Juniper QFabric System Assessment being able to send traffic between any arbitrary sets of ports in a system, not just selected port pairs. With results from a fully meshed traffic test, users can be confident they are truly seeing the limits of system performance. In the Layer 2 tests, engineers configured the Spirent TestCenter traffic generator to offer raw Ethernet frames with pseudorandom MAC addresses as described in RFC 4814. These frames had no IP or other upper-layer headers, forcing the QFabric System’s hashing algorithms to make path selection decisions based solely on destination and source MAC addresses. This Ethernet-only configuration is meaningful for data centers with large, flat Layer 2 domains, such as those carrying non-IP storage traffic. Other common use cases for large, flat Layer 2 domains in the data center include virtualization, where large broadcast domains ensure seamless migration of virtual machines between physical hosts; and converged data centers, where data and non-IP storage traffic shares a single high-speed fabric for transport. The QFabric System has the ability to forward traffic in both store-and-forward and cut-through modes, with the latter sometimes used in data centers with latency-sensitive applications. Test engineers conducted separate throughput tests in each mode. Throughput for Layer 2 traffic was virtually identical in store-and-forward and cut-through modes, with rates approaching channel capacity for most frame sizes. Rates for 64- and 9,216-byte jumbo frames were slightly lower than the channel capacity rate. Figure 2 presents the results of throughput testing. 5 Page Figure 2: Unicast Throughput
  • 6. Juniper QFabric System Assessment Engineers then moved on to throughput tests involving IP traffic. Here, engineers configured each edge port on the QFabric Nodes to use a different IP subnet, with IP networks assigned in sequential order (e.g., port 1 used 11.1.1.0/24, port 2 used 11.1.2.0/24, and so on). Test traffic contained IP and UDP headers, with the latter containing random source and destination port numbers. In the Layer 3 tests, the QFabric System’s hashing algorithms used a combination of IP and UDP data to select interconnect forwarding paths. In these Layer 3 tests, engineers configured the QFabric Nodes to use their default store-and-forward mode. When handling IP traffic, the QFabric System exhibited higher throughput with smaller (64- and 128- byte) frames and equivalent throughput for medium-sized 256-byte frames compared with Layer 2 results. Layer 3 throughput rates for frame lengths of 512 bytes and above were lower than those in the Layer 2 tests. During these tests, engineers observed some unevenness in traffic distribution on the links between QFabric Nodes and QFabric Interconnect devices. Given different traffic content, IP throughput may well change. More randomness in IP addressing might lead to higher Layer 3 throughput; conversely, less randomness in UDP port numbers might reduce throughput. Defining a one-size-fits-all model of IP test traffic was definitely not a goal of this project. The traffic parameters described here represent one model of network traffic; other models may result in different throughput rates. Unicast Forwarding Delay and Latency Delay is a critical consideration in the data center. With traffic increasingly moving between servers a few meters apart rather than across the global Internet, every microsecond can have an impact on application performance. For some applications such as video, voice, and high-frequency trading, delay is even more important than throughput. In the context of network device benchmarking, the term latency differs somewhat from its colloquial definition. As discussed in RFC 2544, latency describes delay at, and only at, the throughput rate. As such, it’s really a description of buffering capacity. RFC 4689 introduces a complementary metric, forwarding delay, that describes delay at any load, not just the throughput rate. Network Test used both metrics in this project – latency at the throughput rate, and forwarding delay for loads below the throughput rate. Because no production network operates at 100 percent utilization for any significant duration, forwarding delay is useful in understanding delays under common operating conditions. Latency is important in describing system limits, in this extreme case when all 1,536 ports are loaded to the maximum zero-drop rate. 6 Page
  • 7. Juniper QFabric System Assessment To get a more complete picture of the QFabric System’s load vs. delay curve, Network Test conducted measurements of forwarding delay by offering fully meshed traffic at 5, 10, 15, and 20 percent of theoretical line rate, as well as at the throughput rate. (Remember that with the 3:1 oversubscription of the QFabric Nodes, channel capacity is equivalent to 33.3 percent of the theoretical maximum.) Figure 3 presents measurements from the Layer 2 store-and-forward tests. QFabric forwarding delay was much lower in step tests than delays observed at the throughput rate. Forwarding delay is less than 5 microseconds for frame sizes up to 512 bytes with loads of up to 20 percent of theoretical line rate. At the throughput rate – the maximum stress point where all 1,536 ports are fully loaded and switch buffers are filled to capacity – latency is still less than 10 microseconds for 64- and 128-byte frames, and never exceeds 40 microseconds for any frame length. Figure 3: Forwarding Delay in Layer 2 Store-and-Forward Mode To get a sense of how heavy a load the QFabric System handled, the inset box in Figure 3 shows throughput rates for 1,024-byte frames, the midpoint of all sizes tested. Forwarding delay was essentially identical for fully meshed, 1536-port loads of up to 3.0 Tbit/s, and increased only at the throughput rate approaching 5.0 Tbit/s. 7 Page Network Test also measured forwarding delay and latency with the QFabric System configured in Layer 3 store-and-forward mode. As in the Layer 2 tests, forwarding delay was less than 5 microseconds for
  • 8. Juniper QFabric System Assessment frame sizes up to 512 bytes at intended loads of up to 20 percent of line rate, and delays were consistent across various intended loads. Figure 4 presents the Layer 3 forwarding delay measurements with the QFabric System configured in store-and-forward mode. Figure 4: Forwarding Delay in Layer 3 Store-and-Forward Mode The inset box in Figure 4 presents the rates used to obtain these measurements. Here again, forwarding delay was essentially identical for fully meshed, 1536-port loads of up to 3.0 Tbit/s, and increased only at the throughput rate of more than 4.4 Tbit/s. As noted, Network Test also measured latency with the QFabric System configured in cut-through mode. RFC 1242 requires that cut-through latency use a different measurement method than with store-and- forward devices.1 It’s improper to do direct comparisons between the two, so the results for cut-through latency are presented separately here, in Figure 5. 1 With cut-through devices, RFC 1242 requires measurement using a first-in, first-out (FIFO) method; in contrast, the RFC requires a last-in, first 8 out (LIFO) method when measuring the latency of store-and-forward devices. Measurements from these two methods will differ by at least the Page time it takes to put each frame on the wire (sometimes known as serialization delay or frame insertion time).
  • 9. Juniper QFabric System Assessment Figure 5: Layer 2 Cut-Through Latency In the cut-through tests, average latency at the throughput rate appears slightly higher than in the store- and-forward tests, but this is due mainly to differences in the measurement methods involved. Network Test was unable to complete load-vs.-delay measurements in Layer 2 cut-through mode, as in the store- and-forward tests, so results shown here are only for the throughput (maximum zero-drop) rate. The inset box shows the load used in these tests for 1,024-byte frames, the midpoint of frame sizes tested. Layer 2 Multicast Performance With the highest throughput and lowest latency of all tests in this project, IP multicast traffic really showed off the potential of the QFabric System. That’s important for high-bandwidth applications that use multicast, such as streaming media, telepresence, and videoconferencing. The multicast performance results also validate Juniper’s claim that the QFabric Interconnect component is nonblocking, no matter how many edge ports are involved. 9 To assess multicast performance, Network Test used the standard throughput and latency benchmarks Page defined in RFC 3918. As with the unicast tests, these benchmarks determine the highest forwarding rate
  • 10. Juniper QFabric System Assessment with zero packet drops, and then measure latency at that rate. The QFabric Nodes ran in their default store-and-forward mode for these tests. Engineers forced the maximum amount of multicast replication by configuring the Spirent TestCenter traffic generator to emulate one transmitter and 1,535 receivers, all subscribed to the same 100 multicast groups. The QFabric System used IGMPv2 snooping to keep track of multicast state. Multicast throughput was close to line rate in all tests, regardless of frame length, with the system moving traffic at speeds of up to 15.3 terabits per second. The minor difference between observed rates and the theoretical maximum is explained by clocking differences between the QFabric Nodes and the Spirent TestCenter traffic generator. To compensate for these clocking differences, engineers reduced the intended load to 99.99 percent of nominal line rate. Figure 7 shows throughput for IP multicast traffic. Figure 6: Multicast Throughput Multicast latency was low and consistent across all frame sizes tested. Low, deterministic latency is essential for video and voice applications, both of which often use IP multicast. In these tests, multicast average latency never exceeded 4 microseconds, even when forwarding traffic on all ports at the 10 maximum zero-drop rate. Figure 6 summarizes multicast latency measurements. Page
  • 11. Juniper QFabric System Assessment Figure 7: Layer 2 Multicast Latency Integrating the QFabric System Into Existing Data Centers Although the QFabric System introduces a new approach to data center network design, it is not a rip- and-replace technology. To verify Juniper’s claim of interoperability with existing devices, Network Test used three common protocols to link a QFabric System with other devices. These protocols were IEEE 802.3ad link aggregation; OSPF equal cost multipath (OSPF ECMP); and BGP. For all three protocols, Network Test verified interoperability between the QFabric System and two other switch/routers: A Cisco Nexus 7010 and a Cisco Catalyst 6506-E. All interoperability tests used the same topology, with two physical links on the Juniper and Cisco devices load-sharing traffic. In the link aggregation case, the two physical links formed one logical link, negotiated using the link aggregation control protocol (LACP). In the OSPF ECMP case, engineers established separate OSPF adjacencies on each pair of physical interfaces, with the same routing metric used for each. 11 For both link aggregation and OSPF ECMP, the Juniper QFabric System correctly interoperated with Page both Cisco switches. After every test, test engineers checked the counters on each interface linking the
  • 12. Juniper QFabric System Assessment Juniper and Cisco devices and observed roughly equivalent numbers of frames forwarded on each physical interface. Besides supporting link aggregation and OSPF ECMP, Network Test also verified that the Juniper QFabric System can interoperate with other routers using BGP, an important consideration when a QFabric System exchanges routing information about the global Internet. Once again, these tests were conducted using the Cisco Nexus 7010 and Cisco Catalyst 6506-E. The BGP tests used both the external and internal variations of BGP (eBGP and iBGP). Here, engineers configured the Spirent TestCenter traffic generator and one interface on a Cisco device to use eBGP, with the Spirent and Cisco devices using different autonomous system numbers (ASNs). The rest of the test bed – including the Juniper QFabric System and the remaining Cisco and Spirent TestCenter interfaces – used iBGP, as would be the case when BGP distributes routing information within a single autonomous system. Test engineers also used link aggregation to bond two interfaces between the Juniper QFabric System and Cisco devices into single logical interfaces. Figure 9 shows the configuration used to verify BGP interoperability. 12 Figure 8: BGP Interoperability Test Bed Page
  • 13. Juniper QFabric System Assessment As in the previous tests, the Juniper QFabric System successfully established BGP sessions and forwarded all routing and data traffic. Test engineers verified BGP interoperability both by observing frame counters on the Spirent TestCenter test instrument and interface counters on the Juniper QFabric and Cisco devices. The Juniper QFabric System demonstrated BGP interoperability in a test bed running both eBGP and iBGP, an important consideration when data center networks must exchange routing information about the global Internet. Conclusion These tests validated the performance of the Juniper QFabric System on an unprecedented scale. In this largest public switch test ever conducted, engineers built a QFabric test bed comprised of 1,536 10- Gbit/s Ethernet edge ports. In tests involving a fully meshed pattern of all 1,536 edge ports – the most stressful test case possible – QFabric moved Ethernet frames at rates approaching the channel capacity. Further, Layer 2 average latency is 5 microseconds or less for frame sizes up to 512 bytes at intended loads of up to 20 percent of line rate. Multicast throughput was higher still, since there’s no oversubscription involved. The multicast tests validated Juniper’s claim that its QFabric Interconnect component is nonblocking, in this case moving traffic at more than 15 Tbit/s. Multicast latency also is low and consistent, with average latency never exceeding 4 microseconds. A final set of tests examined interoperability between the Juniper QFabric System and other switch/routers using common data center protocols such as link aggregation, OSPF equal cost multipath, and BGP. Interoperability between the Juniper QFabric System and Cisco Systems equipment was successful in all cases, demonstrating that QFabric technology can be added incrementally without the need to replace existing data center devices. Finally, for all the complexity of this test bed – again, the largest used in any public switch test – the QFabric System was managed as one single device. The implication for network managers is twofold: First, this means the QFabric System helps reduce operational complexity by presenting far fewer devices to be managed. Second, even in very large cloud and data center applications, the QFabric System allows the entire data center network infrastructure to be managed as a single entity. 13 Page
  • 14. Juniper QFabric System Assessment Appendix A: About Network Test Network Test is an independent third-party test lab and engineering services consultancy. Our core competencies are performance, security, and conformance assessment of networking equipment and live networks. Our clients include equipment manufacturers, large enterprises, service providers, industry consortia, and trade publications. Appendix B: Hardware and Software Releases Tested This appendix describes the software versions used on the test bed. All tests were conducted in November-December 2011 at Juniper’s headquarters facility in Sunnyvale, CA, USA. Component Version Juniper QFabric System (including QFX3100 Junos 11.3X30.9 (performance tests); Junos QFabric Director, QFX3008 QFabric 11.3I20111015_1217 (interoperability tests) Interconnect, and QFX3500 QFabric Node top- of-rack switches) Cisco Nexus 7010 NX-OS 5.2(1) Cisco Catalyst 6506-E IOS 12.2(33)SXI4a Spirent TestCenter 3.90.0293.0000 Appendix C: Disclaimer Network Test Inc. has made every attempt to ensure that all test procedures were conducted with the utmost precision and accuracy, but acknowledges that errors do occur. Network Test Inc. shall not be held liable for damages which may result for the use of information contained in this document. All trademarks mentioned in this document are property of their respective owners. Version 2012022700. Copyright © 2011-12 Network Test Inc. All rights reserved. Network Test Inc. 31324 Via Colinas, Suite 113 Westlake Village, CA 91362-6761 USA +1-818-889-0011 14 http://networktest.com info@networktest.com Page 2000464-001-EN Feb 2012