Qf deck


Published in: Technology, Business
  • Application architectures have evolved from client-server to distributed apps, causing an underlying shift in traffic patterns: newer applications increase server-to-server communication, driving more east-west traffic within the data center. Server virtualization is helping businesses gain efficiency by consolidating many physical servers into fewer high-performance virtualized servers. Of all the traffic types traversing an Ethernet network, storage has risen in prominence in recent years; it has its own unique characteristics and demands different treatment than traditional Ethernet traffic. Networking has not progressed at the same pace and is now a barrier slowing innovation in the rest of the data center.
  • High performance: eliminate STP, move from 1G to 10G in the access layer, and use 40G fabrics. Scale: grow seamlessly from tens to thousands of ports without disruption. Operational simplicity: reduce the number of managed devices and the total cost of ownership.
  • Concept of a chassis switch: line cards contain the ports and plug into a centerplane or backplane; fabric circuitry interconnects all the ports on each line card with all the ports on the other line cards. Ethernet packets come in, receive some initial processing at the ingress port, and the bits are then sprayed across the backplane in a non-blocking fashion, reassembled at the far side, and sent back out as Ethernet. What you experience as a user is "Ethernet in and Ethernet out"; you don't manage these bits. It's a very efficient transport. QFabric behaves like any chassis switch: it has distributed components but is managed as a single logical switch. In designing QFabric, Juniper has essentially taken the three basic components of a self-contained switch fabric (line cards, backplane, and Routing Engines) and broken them out into independent, standalone devices: the QF/Node, the QF/Interconnect, and the QF/Director, respectively. • QF/Node: the line cards that typically reside within a chassis switch become a high-density, fixed-configuration 1 RU edge device called the QF/Node, which provides access into and out of the fabric. The first member of the QFX Series product family, the QFX3500, not only acts as a QF/Node edge device in a QFabric architecture but can also serve as a high-performance standalone switch in highly demanding data center environments. • QF/Interconnect: similarly, the backplane of a single switch becomes the QF/Interconnect device, which connects all QF/Node edge devices in a full-mesh topology. • QF/Director: the Routing Engines embedded within a switch are externalized in the QFabric architecture as the QF/Director, which provides the control and management services for the fabric.
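The chassis-to-QFabric mapping described above can be sketched as a simple object model. This is purely illustrative: the class names, port counts, and device names are hypothetical stand-ins, not a real Juniper API; the point is that line card, backplane, and Routing Engine become standalone devices that still present one logical switch.

```python
from dataclasses import dataclass, field

@dataclass
class QFNode:
    """Externalized line card: a 1 RU fixed-configuration edge device."""
    name: str
    ports: int = 48  # illustrative port count, not a product spec

@dataclass
class QFInterconnect:
    """Externalized backplane: full-mesh links to every QF/Node."""
    name: str

@dataclass
class QFDirector:
    """Externalized Routing Engine: control and management services."""
    name: str

@dataclass
class QFabric:
    """Distributed components, managed as one logical switch (N = 1)."""
    director: QFDirector
    interconnects: list
    nodes: list = field(default_factory=list)

    def total_ports(self) -> int:
        # The fabric presents the sum of all edge ports as a single switch.
        return sum(n.ports for n in self.nodes)

fabric = QFabric(
    director=QFDirector("qf-dir-1"),
    # Multiple Interconnects for redundancy, as the notes describe.
    interconnects=[QFInterconnect("qf-ic-1"), QFInterconnect("qf-ic-2")],
    nodes=[QFNode(f"qf-node-{i}") for i in range(4)],
)
print(fabric.total_ports())  # 4 nodes x 48 ports = 192
```

Adding edge capacity is then just appending another `QFNode`; the management model (one `QFabric` object) does not change.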
  • Storage
  • Simulating data traffic representative of today's most demanding data center and cloud environments, the QFabric system showed that enterprises can scale their existing data center infrastructure without loss of performance and without adding complexity. The system demonstrated record performance, delivering multicast traffic at 15.3 terabits per second, enough bandwidth to stream 3.4 million HD movies simultaneously. Results confirmed that all 1,536 ports were easily and simply managed as one device, and the testing also validated interoperability with a variety of switches.
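The figures quoted above hang together arithmetically, and a quick back-of-envelope check makes that visible: 1,536 ports at 10 Gbps gives an aggregate wire speed just above the measured 15.3 Tbps, and dividing 15.3 Tbps by 3.4 million streams yields a plausible HD bitrate.

```python
# Sanity-check the quoted test numbers (not from the slides; simple arithmetic).
ports = 1536
port_speed_gbps = 10
aggregate_tbps = ports * port_speed_gbps / 1000
print(aggregate_tbps)  # 15.36 Tbps aggregate wire speed, consistent with 15.3 Tbps measured

streams = 3.4e6
per_stream_mbps = 15.3e12 / streams / 1e6
print(round(per_stream_mbps, 2))  # 4.5 Mbps per stream, a plausible HD movie bitrate
```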
  • A change in data center network design is needed to ensure that organizations can take full advantage of their investments in new applications, virtualization, and storage and compute resources. The most efficient way for resources to interact is for them to be no more than a single hop away from each other. It's time to break the network barriers and build a network environment that is optimized for performance and simple to operate. The QFabric architecture addresses the latency requirements of today's applications, eliminates the complexity of legacy hierarchical architectures, scales elegantly, and supports virtualization, convergence, cloud computing, and other demanding requirements of the next-generation data center.

    2. THE DATA CENTER HAS EVOLVED – BUT NOT THE NETWORK
       From (rigid, legacy model of I.T.) → To (flexible, virtualized model):
       • Applications: client/server silos → software services
       • Compute: dedicated servers → virtualized workloads
       • Storage: dedicated storage → shared storage
       • Network: layers of complexity → (unchanged)
       Copyright © 2012 Juniper Networks, Inc. www.juniper.net
    3. TRADITIONAL DATA CENTER NETWORK CHALLENGES
       • Performance: Spanning Tree disables up to 50% of bandwidth, and tiered networks are not optimized for east-west traffic
       • Scale: growth requires additional layers, adding cost and complexity
       • Management: every device is managed separately, across multiple networks
       Traditional networks: inefficient, complex, and costly to manage
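The "up to 50%" spanning-tree figure can be illustrated with simple arithmetic. The two-uplink topology below is an assumed example, not taken from the slides: spanning tree blocks one of each pair of redundant uplinks to prevent loops, while a fabric (or multipath design) keeps all links active.

```python
# Illustrative arithmetic for the "up to 50% of bandwidth" claim.
uplinks = 2                # redundant uplinks per access switch (assumed topology)
uplink_gbps = 10
stp_active_gbps = 1 * uplink_gbps          # STP forwards on only one uplink
fabric_active_gbps = uplinks * uplink_gbps  # a fabric keeps every link active
disabled_pct = 100 * (1 - stp_active_gbps / fabric_active_gbps)
print(disabled_pct)  # 50.0
```

With more than two redundant paths the blocked fraction grows further, which is why "up to 50%" is the commonly cited two-uplink case.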
    4. A REVOLUTIONARY NEW SWITCH – 3 DESIGN PRINCIPLES
       • Management plane – N=1: the operational model of a single switch
       • Control plane – federated intelligence: the only way to scale with resilience
       • Data plane – rich edge, simple core: everything is one hop away
    5. COMPARISON OF DATA CENTER NETWORK ARCHITECTURES
       Architectures compared: single switch; fabric model (rich edge, simple core); extender (simple edge, rich core); top of rack; end of row
       Criteria: performance, resiliency, management simplification, cable simplification
       COMPANY CONFIDENTIAL: DO NOT DISTRIBUTE OR COPY - Copyright © 2012 Juniper Networks, Inc.
    6. QFABRIC: EVOLVING THE SINGLE SWITCH MODEL
       • Separate the I/O modules from the fabric and replace copper traces with fiber links
       • For redundancy, add multiple Interconnect devices
       • Federated control and intelligent Nodes
       • One switch: the chassis switch's Routing Engine, fabric, and I/O modules become the QFabric Director, Interconnect, and Node
    7. QFABRIC – 2 CONFIGURATIONS
       QFX3000-G: 6,144 10GbE ports; 40G fabric; 5-microsecond performance; target markets: cloud (IaaS, SaaS), large enterprise IT DC (federal, financial services, oil & gas), HPC, grid compute; shipping since September 2011
       QFX3000-M (new): 768 10GbE ports; 40G fabric; 3-microsecond performance; target markets: mid-tier enterprise IT DC, satellite DC, container/space-constrained, HPC; shipping in June 2012
    8. QFABRIC FAMILY COMPONENTS
       • QFabric Director: QFX3100 in both QFX3000-G and QFX3000-M (same)
       • QFabric Interconnect: QFX3008-I in QFX3000-G vs. QFX3600-I (new) in QFX3000-M (different)
       • QFabric Node: QFX3500 and QFX3600 (new) in both (same)
    9. 40GE CONNECTIVITY TO QFABRIC: QFX3600 (new)
       • 16 ports of 40GbE (QSFP+) or 64 10GbE ports; 1 rack unit, space-optimized form factor
       • QFabric Node for QFX3000-G and QFX3000-M; available June 2012
       • Standalone 10GbE/40GbE ToR*; list price: $40,000
       • Full L2 and L3 support
       • Storage: DCB and FCoE support
       * In 2H2012 with a future software release
    10. NEXUS 7K/2K "FEX" VS QFX3000-M
       QFX3000-M advantages: 40G non-blocking fabric vs. 10G fabric; 74% fewer cables; 63% less space; 29% fewer devices; full Layer 2 and Layer 3; FC-FCoE gateway; seamless scaling beyond 768 ports (to QFX3000-G); predictable L2 performance; single-device management; 1-tier; flat, any-to-any at 3:1 oversubscription
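The 3:1 oversubscription figure can be checked with simple arithmetic. The port counts below are an assumption on my part (48 x 10GbE access ports and 4 x 40GbE fabric uplinks per Node, a common QFX3500 configuration); the slide itself does not state the basis for the ratio.

```python
# Hypothetical oversubscription arithmetic (assumed per-Node port counts).
access_gbps = 48 * 10   # bandwidth toward servers: 480 Gbps
uplink_gbps = 4 * 40    # bandwidth toward the Interconnects: 160 Gbps
ratio = access_gbps / uplink_gbps
print(f"{ratio:.0f}:1 oversubscription")  # 3:1 oversubscription
```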
    11. QFABRIC FAMILY SUMMARY
       • Scalability: QFX3000-M, 10s to 768 ports; QFX3000-G, 10s to 6,144 ports
       • Performance: lossless, DCB-compliant; low jitter, <3 usec (QFX3000-M) and <5 usec (QFX3000-G)
       • Simplicity: N=1; runs Junos with rich functionality
       • Storage: end-to-end FCoE, FCoE/FC gateway, and FCoE/iSCSI transit switch
       • Designed for the modern DC: seamless Layer 2 and Layer 3, flexible VLAN capability, virtualization and convergence
    12. BIGGEST EVER 10GBE NETWORK TEST
       QFabric validated by David Newman: performs, scales, interoperates, and simplifies; 4x greater than previous tests
       • 15.3 Tbps throughput at wire speed (multicast traffic)
       • Layer 2/3 unicast forwarding delay(1) <5 usec for most packet sizes; multicast latency <5 usec for all packet sizes
       • Interoperates with Cisco switches: Nexus 7000 and Catalyst 6500-E
       • 1,536 10GbE ports; full mesh with millions of traffic flows under extreme stress conditions
       • Entire data center managed as one switch
       (1) Forwarding delay is a close proxy for latency under typical loads
    13. QFABRIC UNLEASHES THE POWER OF THE EXPONENTIAL DATA CENTER
       From (rigid, legacy model of I.T.) → To (flexible, virtualized model):
       • Applications: on-premise software apps → services (performance)
       • Compute: dedicated servers → virtualized workloads (scalability)
       • Storage: dedicated storage → shared storage (manageability)
       • Network: layers of complexity → QFabric network (economics)
    14. QFABRIC: FOUNDATION FOR CUSTOMER EVOLUTION AND DATA CENTER DEMANDS
       Data center demands: 100GE performance, low latency and low jitter, virtualization at scale, cloud and multi-tenancy, storage convergence, big data
       • Architecture as a foundation for the future
       • A new level of management simplification
       QFabric's architecture and implementation provide the agility and investment protection data centers require