Fabric Arch Compet

Very low latency switch fabric for the new era of data center cloud


  1. DATA CENTER FABRIC ARCHITECTURE COMPETITIVE DIFFERENTIATORS. Sutapa Bansal, Product Marketing Manager, FSG. Juniper Confidential
  2. "In the struggle for survival, the fittest win out because they succeed in adapting themselves best to their environment." (Charles Darwin) Incumbent: the Dunkleosteus, a slow, armored prehistoric fish that ruled the world but did not adapt; now extinct. It was replaced by the fast, agile marlin, the new ruler. Copyright © 2011 Juniper Networks, Inc. www.juniper.net. Juniper Confidential
  3. AGENDA: 1. Challenges with data centers; 2. Juniper's fabric architecture; 3. Competitive: Cisco FabricPath (FabricPath architecture, TRILL protocol); 4. Competitive: Brocade VCS architecture; 5. Summary
  4. TRENDS IN THE DATA CENTER: virtualization, application architecture evolution, and data center consolidation
  5. CHANGING ROLES OF THE NETWORK. Traditional role, connecting users: North-South traffic, latency tolerant. New role, connecting devices: East-West traffic, ideally one hop away, latency sensitive. Newest role, foundation of the cloud: any-to-any connectivity for running applications
  6. CHALLENGES IN DATA CENTERS: 1. Scale to support DC consolidation; 2. High-performance networking; 3. Better user experience: faster network with low latency, lower oversubscription, and large bandwidth; 4. Virtualization support; 5. High reliability; 6. Manage complexity and reduce the cost of networking
  7. DEFINING THE IDEAL NETWORK, A FABRIC: flat, any-to-any connectivity from a single device (N=1). Data plane attributes: flat, any-to-any, low latency and jitter, lossless, L2 and L3 support. Control plane: single device, shared state. A network fabric has the performance and simplicity of a single switch and the scalability and resiliency of a network
  8. AGENDA: 1. Challenges with data centers; 2. Juniper's fabric architecture; 3. Competitive: Cisco FabricPath (FabricPath architecture, TRILL protocol); 4. Competitive: Brocade VCS architecture; 5. Summary
  9. DATA PLANE IN A SINGLE SWITCH: 1. All ports are directly connected to every other port; 2. A single "full lookup" processes packets
  10. CONTROL PLANE IN A SINGLE SWITCH. Control plane: a single consciousness; centralized shared tables hold information about all ports. Management plane: all ports are managed from a single point
  11. THE QFABRIC DATA PLANE. We separate the fabric from the I/O ports, replace the copper traces with fiber links, and add multiple devices for redundancy
  12. THE QFABRIC DATA PLANE: 1. All ports are directly connected to every other port; 2. A single "full lookup" at the ingress edge. The distributed data plane is implemented on the Interconnect and edge devices
  13. SCALING THE CONTROL PLANE. The intelligence and state are federated and distributed across the fabric; new host addresses are propagated through the Director. A federated control plane provides both scalability and resiliency
  14. SCALING THE MANAGEMENT PLANE. The Director provides a single point of management with extensive use of automation; the fabric is managed as a single switch (N=1)
  15. QFABRIC HARDWARE. Interconnect: connects all the edge devices; cannot function as a standalone device. Edge node: media-independent I/O ToR device; can run in independent or fabric mode. Director: 2 RU fixed configuration, x86-based system architecture
  16. QFABRIC VALUE PROPOSITION. High performance: low latency (<1 µs in rack, <5 µs across the fabric at maximum cable length), low jitter, flat fabric. Large scale: seamless Layer 2 and Layer 3 with scale. Full wire speed: non-blocking and lossless. Storage convergence: FCoE gateway and transit switch. Green solution: lower power, HVAC, and space
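The 3:1 oversubscription figure used throughout the deck is just the ratio of server-facing bandwidth to fabric-facing uplink bandwidth on an edge node. A minimal arithmetic sketch, assuming a hypothetical ToR with 48 x 10GE server ports and 4 x 40GE uplinks (this port mix is an assumption for illustration, not stated on the slide):

```python
# Oversubscription = server-facing bandwidth / uplink bandwidth.
# Assumed port mix: 48 x 10GE down, 4 x 40GE up (illustrative only).
server_ports, server_speed_gbps = 48, 10
uplinks, uplink_speed_gbps = 4, 40

server_bw = server_ports * server_speed_gbps   # 480 Gbps toward servers
uplink_bw = uplinks * uplink_speed_gbps        # 160 Gbps toward the fabric

ratio = server_bw / uplink_bw
print(f"{ratio:.0f}:1")  # prints "3:1", matching the deck's figure
```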
  17. COMPETITIVE LANDSCAPE (1H2011 timeframe estimate)

         Metric                   QFabric               Cisco FabricPath (N7K+N5K)   Brocade VDX
         Configuration (1)        4 chassis, 125 ToRs   24 chassis, 171 ToRs         24 chassis, 136 ToRs
         Layer 3 support (2)      Yes                   No                           No
         Networking racks (#)     2                     24                           24
         Power/port               14 W                  37 W                         54 W
         Devices to manage        1                     191                          160
         Latency (inter-rack)     <5 µs                 >35 µs                       >93 µs
         Oversubscription         3:1                   3:1                          3:1

      (1) Needs an L3 core to scale to 6K ports. (2) L3 supported only in spine/core devices.
  18. AGENDA: 1. Challenges with data centers; 2. Juniper's fabric architecture; 3. Competitive: Cisco FabricPath (FabricPath architecture, TRILL protocol); 4. Competitive: Brocade VCS architecture; 5. Summary
  19. JUNIPER QFABRIC VS. CISCO FABRICPATH: 3 COMPETITIVE AREAS: Architecture, Protocols, Hardware Components
  20. CISCO FABRICPATH ARCHITECTURE (FUTURE). Cisco's architecture is based on L2MP, which Cisco calls FabricPath; the corresponding IETF standard is TRILL. FabricPath works at Layer 2 and extends the IS-IS protocol as its control protocol. (Diagram: Nexus 7000 in the DC core, aggregation routers toward an IP+MPLS WAN, FabricPath in aggregation, Nexus 5548 and Nexus 2000 at DC access; Gigabit Ethernet, 10 Gigabit Ethernet, 10 Gigabit DCE, 4 Gb Fibre Channel, and 10 Gigabit FCoE/DCE links)
  21. CISCO FABRICPATH ARCHITECTURE: REALITY FOR A 6000-PORT CONFIGURATION. Chassis: 4 vs. 20; access devices: 84 vs. 112; links: 336 vs. 4,288; managed devices: N=1 vs. 132. Note: 3:1 oversubscription; 4,000 server ports
  22. QFABRIC RESILIENCY: CONTROL PLANE ISOLATED FROM DATA PLANE. The Director connects to the Interconnects and QFX3500 edge nodes over redundant out-of-band connections. Even when the data plane is flooded (e.g., by MAC spoofing or unknown-destination traffic), the out-of-band, dedicated control plane is not blocked, giving faster re-convergence
  23. FABRICPATH CONTROL PLANE PRONE TO ATTACKS: SHARED WITH THE DATA PATH. Control plane changes propagate over the same path as the data plane, making the design prone to attacks and less resilient: an unknown-destination traffic flood on the Nexus 5548 impacts the control plane
  24. QFABRIC: HIGH PERFORMANCE, LOW LATENCY. Predictable, consistent performance: every path through the fabric is under 5 µs (<1 µs within a rack), whether server to server, server to storage, or a stock-update multicast (231.1.1.1) to back-end application servers
  25. FABRICPATH: INCONSISTENT PERFORMANCE, HIGH LATENCY. Latency differs depending on the path: the same stock-update multicast (231.1.1.1) reaches different application servers at 14 µs vs. 35 µs, and latency across the fabric is up to 35 µs. FabricPath: unpredictable, inconsistent performance; QFabric is half the latency of a single Nexus 7000
  26. QFABRIC: SCALED MANAGEMENT PLANE. The admin manages the fabric through the Director: a single point of management, extensive use of automation, managed as a single switch (N=1)
  27. FABRICPATH: COMPLEX AND COSTLY MANAGEMENT (1000-PORT VIEW). Each device (Nexus 5548s with 6x10G uplinks) is managed separately as an individual switch; multiple touch points make management complex and drive OPEX during image upgrades, provisioning, and maintenance
  28. JUNIPER QFABRIC VS. CISCO FABRICPATH: 3 COMPETITIVE AREAS: Architecture, Protocols, Hardware Components
  29. FABRICPATH TRILL: A BRIEF OVERVIEW. TRILL is a new IETF protocol that performs Layer 2 bridging based on Layer 3 IS-IS link-state routing technology. It eliminates STP and increases link bandwidth utilization (active-N links) and fabric efficiency by allowing equal bisectional traffic, with minimal configuration burden on the user. It is a Layer 2-only technology and does not address Layer 3. TRILL is supposed to help interoperability, but this is questionable. (Diagrams: with STP, bridges B1-B4 get no multipathing and the link to B4 is blocked; TRILL RBridges RB1-RB4 allow multipath)
  30. ARE TRILL IMPLEMENTATIONS INTEROPERABLE? FabricPath frame: next-hop destination address (NHDA), next-hop source address (NHSA), CTAG Ethertype, Next_Hop.VLAN, FTAG (10 bits), TTL (6 bits), inner destination address (IDA), inner source address (ISA), rest of original frame, FCS; implementation based on FSPF. TRILL frame: outer destination address (ODA), outer source address (OSA), DTAG Ethertype, TRILL Ethertype, V/R/M/OpL/Hop_Count fields, egress nickname, ingress nickname, IDA, ISA, rest of original frame, FCS; implementation based on IS-IS. Different frame formats and proprietary extensions mean no interoperability
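The TRILL fields named on the slide (V, R, M, OpL, Hop_Count, plus the egress and ingress nicknames) pack into six bytes following the outer Ethernet header. A sketch of that layout; the field widths come from RFC 6325, not from this slide:

```python
import struct

TRILL_ETHERTYPE = 0x22F3  # assigned TRILL Ethertype (RFC 6325)

def pack_trill_header(version: int, multi_dest: int, op_len: int,
                      hop_count: int, egress_nick: int,
                      ingress_nick: int) -> bytes:
    """Pack the 6-byte TRILL header: V(2) R(2) M(1) Op-Length(5)
    Hop-Count(6), then the 16-bit egress and ingress RBridge nicknames."""
    first16 = (version << 14) | (multi_dest << 11) | (op_len << 6) | hop_count
    return struct.pack("!HHH", first16, egress_nick, ingress_nick)

hdr = pack_trill_header(version=0, multi_dest=0, op_len=0,
                        hop_count=32, egress_nick=0x0001, ingress_nick=0x0002)
assert hdr == bytes.fromhex("002000010002")  # 6 bytes on the wire
```

FabricPath's outer encapsulation (switch IDs, FTAG, TTL) is a different structure entirely, which is the core of the slide's no-interoperability point.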
  31. DATA CENTER DESIGN: L3 OR L2? Address space: L3 hierarchical vs. L2 flat. Domain size: L3 fault containment vs. L2 flexibility. Learning: L3 control plane vs. L2 data plane. Complexity: L3 convergence vs. L2 plug and play. L3 is needed for capacity planning, traffic engineering, and gateway routing; L2 is needed for VLANs anywhere. You need both L3 and L2, but TRILL is L2 only and needs an external (or internal) router to connect L2 domains
  32. QFABRIC VS. FABRICPATH. QFabric is Layer 2 and Layer 3 (1 fabric, 4 Interconnects, 84 edge nodes); FabricPath is only Layer 2 (6 core chassis plus 112 access devices). Note: 3:1 oversubscription; 4,000 server ports
  33. ECONOMICS OF TRILL-BASED DATA CENTERS. Cisco's initial pitch: L2-only traffic, position the F card, 32x10GE, $35K (about $1K per 10GE). Real deployment: L3, multicast, FCoE, or QCN requires the M card, 8x10GE, $70K (about $9K per 10GE). Cisco strategy: sell a simple core and simple edge at scale, then up-sell FabricPath and L3 cards into a rich core, and lock in
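The per-port figures on this slide are simple division of list price by 10GE port count (prices as stated on the slide; the ~$1K and ~$9K labels are rounded):

```python
# Slide's stated figures: F card $35K for 32x10GE, M card $70K for 8x10GE.
f_price, f_ports = 35_000, 32
m_price, m_ports = 70_000, 8

f_per_port = f_price / f_ports   # $1,093.75 per 10GE, rounded to ~$1K
m_per_port = m_price / m_ports   # $8,750.00 per 10GE, rounded to ~$9K

print(round(m_per_port / f_per_port))  # prints 8: the M-card path is ~8x costlier
```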
  34. QFABRIC: EFFICIENT MULTI-PATHING BY LOAD BALANCING. The Director enables efficient bandwidth utilization by spraying traffic across unequal links; multi-pathing is dynamic. (Diagram: when one link degrades to 14%, the remaining links rebalance to 28%/25% shares)
  35. FABRICPATH TRILL: INEFFICIENT LOAD BALANCING FOR MULTI-PATHING. Inefficient bandwidth utilization on the NX7K/Nexus 5548: FabricPath uses TRILL-based ECMP, so only equal-cost paths are chosen. When a link fails, traffic concentrates on the remaining equal-cost links (33% each) while the unequal link carries 0%
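The contrast drawn on slides 34-35 is between classic ECMP, where flows hash only onto minimum-cost paths so an unequal link carries nothing, and weighted spraying across all links in proportion to capacity. An illustrative sketch, not vendor code; the link names, costs, and bandwidths are invented for the example:

```python
def ecmp_pick(flow_hash: int, paths: list) -> dict:
    """Classic ECMP: choose only among the minimum-cost paths."""
    best = min(p["cost"] for p in paths)
    equal = [p for p in paths if p["cost"] == best]
    return equal[flow_hash % len(equal)]

def weighted_pick(flow_hash: int, paths: list) -> dict:
    """Weighted multipath: spray flows across all links in proportion
    to bandwidth, so an unequal link still carries its smaller share."""
    h = flow_hash % sum(p["bw"] for p in paths)
    for p in paths:
        h -= p["bw"]
        if h < 0:
            return p

# Four healthy 40G links plus one degraded 20G link at a higher cost.
links = [{"name": f"uplink{i}", "bw": 40, "cost": 10} for i in range(4)]
links.append({"name": "degraded", "bw": 20, "cost": 20})

ecmp_used = {ecmp_pick(h, links)["name"] for h in range(180)}
spray_used = {weighted_pick(h, links)["name"] for h in range(180)}
assert "degraded" not in ecmp_used   # ECMP leaves the unequal link idle
assert "degraded" in spray_used      # spraying still uses it (1/9 of flows)
```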
  36. KEY ISSUES WITH THE FABRICPATH IMPLEMENTATION. Poor performance (high latency, high jitter). Complex network management. Low reliability and control over the network. Limited or no L3 (Layer 3 as a service limits FabricPath scale; a costly solution). Limited virtualization support: small 16K MAC table, small port density at full L2/L3. Low uplink bandwidth: oversubscription
  37. JUNIPER QFABRIC VS. CISCO FABRICPATH: 3 COMPETITIVE AREAS: Architecture, Protocols, Hardware Components
  38. CISCO NEXUS FABRIC BUILDING BLOCKS. F cards: L2, with FSS. M cards: L3, no FSS. Nexus 5548 at access (ToR), Nexus 7K in the core; line cards in this architecture are capable of either L2 (with FSS) or L3 (without FSS) processing. Pros: eliminates STP, allows multi-pathing; scales to many core chassis at L2. Con: no Layer 3, so it does matter whether traffic is within or across VLANs
  39. A CLOSER LOOK AT THE NEXUS 7K WITH F AND M CARDS (for parity with QFabric: lossless, 3:1 oversubscription, and line-rate L3). The backplane bandwidth of an F card is 230G, so only 23 x 10GE ports are line rate per F card. L3 traffic must go via an M card, which is limited to 80 Gbps per card, so every F card needs 3 M cards to support line-rate L3 traffic. A N7K 16-slot chassis therefore holds 4 F cards and 12 M cards, i.e., 23 x 4 = 92 line-rate ToR-facing ports
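The slide's line-rate arithmetic can be checked directly. The figures are as stated on the slide (230 Gbps F-card backplane, 80 Gbps of L3 capacity per M card, 16 slots, 10GE ports):

```python
F_BACKPLANE_GBPS = 230   # per-slot bandwidth of an F card (slide's figure)
M_CARD_GBPS = 80         # L3 forwarding capacity of an M card (slide's figure)
PORT_GBPS = 10
SLOTS = 16

line_rate_ports_per_f = F_BACKPLANE_GBPS // PORT_GBPS   # 23 of 32 ports line rate
m_cards_per_f = -(-F_BACKPLANE_GBPS // M_CARD_GBPS)     # ceil(230/80) = 3 M cards
f_cards = SLOTS // (1 + m_cards_per_f)                  # 4 F cards fit in 16 slots
tor_facing = f_cards * line_rate_ports_per_f

print(tor_facing)  # prints 92: line-rate ToR-facing ports per 16-slot chassis
```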
  40. QFABRIC VS. FABRICPATH AT 6000 PORTS. QFabric: non-blocking, L2 & L3, with 4 Interconnects and 125 edge nodes. FabricPath: a TRILL-like "big pile" architecture with separate L3 and L2 tiers and 167 access devices. Result: 1/3 fewer devices, 2/3 less power ($300k/year), 90% less floor space, 7-10x faster, 90% fewer links, 1 vs. 193 managed devices, 1 vs. 25 admins. Note: 3:1 oversubscription; 6,000 server ports
  41. QFABRIC VS. FABRICPATH, NUMBER OF DEVICES REQUIRED: CAPEX AND COMPLEXITY. (Chart: number of devices and cost vs. server ports at 500p, 1000p, 3000p, and 6000p; Cisco FabricPath grows far faster in device count than Juniper QFabric)
  42. QFABRIC VS. CISCO NEXUS: ENVIRONMENTAL FRIENDLINESS. QFabric environmental labels: 6/6, including China RoHS, SR-3580 NEBS Level 3 (Verizon NEBS compliance), and recycled material. Cisco Nexus environmental labels: 5/6
  43. QFABRIC VS. CISCO NEXUS: ENERGY UTILIZATION. (Chart: max power and nominal power for QFabric vs. Cisco 5548 + NX7018 at L2 and at L3.) Sources for the Cisco power comparison: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html and http://www.cisco.com/en/US/docs/switches/datacenter/hw/nexus7000/installation/guide/n7k_sys_specs.html
  44. QFABRIC VS. CISCO NEXUS: TAKEAWAYS. Performance: 7x lower latency (Juniper QFabric <5 µs vs. Nexus 35 µs); highest scale (6,000 ports) with complete Layer 2 and Layer 3. Invisible IT: minimal management complexity, since QFabric has the operational simplicity of a single switch; high resiliency with no single point of failure; operational savings of 1/20th the operators/administrators vs. FabricPath. Most eco-friendly fabric: 1/3 the power per port at 1/10th the footprint of FabricPath
  45. AGENDA: 1. Challenges with data centers; 2. Juniper's fabric architecture; 3. Competitive: Cisco FabricPath (FabricPath architecture, TRILL protocol); 4. Competitive: Brocade VCS architecture; 5. Summary
  46. BROCADE VIRTUAL CLUSTER SWITCH (VCS) SOLUTION (VDX 6720-24, VDX 6720-60). No scale: a VCS fabric has only up to 10 switches. Low port utilization: the VDX 6720-60 has 60 ports, but Ethernet fabric connections take 12 ports (out of 50) on each switch. Minimal virtualization support: a 32K total MAC table is insufficient for a large-scale data center solution. Lacks basic Layer 3 and QoS features; multicast group support is very limited (only 256 groups). Management complexity: the VCS solution relies on each individual switch being managed independently
  47. BROCADE VCS SOLUTION: BRCD REFERENCE ARCHITECTURE. Core layer: MLX with MCT, 6 links per trunk (24 total), 6:1 subscription ratio in the VCS fabric. Brocade's claim: a VCS fabric has up to 10 switches; a 10-switch VCS fabric yields 312 usable ports; scale: 600 ports; latency: 600 ns. Up to 36 servers per rack, 4 racks per VCS; servers with 1/10G and 10G DCB connectivity; FCoE/iSCSI DCB storage, attached via vLAG
  48. QFABRIC VS. BROCADE VDX: VCS SCALE. Based on Brocade VDX in access and aggregation (VCS) with 450 ports, and BigIron RX-16 in the core. Brocade reality: 1. a maximum of 450 ports, of which only 315 are server ports; 2. latency >1200 ns within a switch. Based on: max 7 ToRs (VDX 6720-60), 3 aggregation devices, max 2 MCT MLXs. Max VCS scale: 450 server ports only
  49. QFABRIC VS. BROCADE VCS: SCALE, SIMPLICITY, AND MANAGEABILITY. (Charts: max scale, JNPR in the thousands of ports vs. BRCD at 600; manageability at 450 ports: 1 admin vs. 2, and 40 links/interactions vs. 290, JNPR vs. BRCD)
  50. QFABRIC VS. BROCADE VCS: PERFORMANCE. (Chart: latency in µs, BRCD vs. JNPR, with BRCD roughly 3x higher.) Based on Brocade VDX in access and aggregation (VCS) with 450 ports
  51. BROCADE VCS IMPLEMENTATION KNOCKOFFS. 1. The Brocade solution does not scale; it is limited to 600 ports. 2. No storage convergence: no FCoE-FC gateway. 3. No Layer 3: VCS does not achieve a single-layer DC. 4. Poor performance: high latency across the fabric, 15 µs*. 5. The Brocade implementation is based on FSPF, a non-standard TRILL; it cannot interoperate with other vendors' TRILL-based implementations. 6. Brocade VCS is NOT a single flat "fabric" technology; it is similar to Juniper's Virtual Chassis technology, only 3 years later. (*Assuming BigIron as the core switch because of the lack of a VCS core switch)
  52. QFABRIC VS. BROCADE VCS: TAKEAWAYS. Performance: highest scale (6,000 ports vs. 450 ports) with complete Layer 2 and Layer 3; 3x lower latency (Juniper QFabric <5 µs vs. Brocade VCS). Invisible IT: minimal management complexity, since QFabric has the operational simplicity of a single switch; resiliency with no single point of failure. Operational savings vs. Brocade: 1/2 the administrators and 1/7th the cables needed using QFabric vs. VCS
  53. AGENDA: 1. Challenges with data centers; 2. Juniper's fabric architecture; 3. Competitive: Cisco FabricPath (FabricPath architecture, TRILL protocol); 4. Competitive: Brocade VCS architecture; 5. Summary
  54. JNPR VS. CSCO: DETAILED COMPARISON FOR MULTIPLE PORT COUNTS

         Category / Metric                            500p          1000p         3000p          6000p
         Performance: latency (µs)         CSCO       34            34            34             34
                                           JNPR       5             5             5              5
         Simplicity: # of devices          CSCO       18            32            98             195
                                           JNPR       15            25            67             129
         # of admins                       CSCO       2             2             5              9
                                           JNPR       1             1             1              1
         # of cables                       CSCO       240           482           1,812          3,612
                                           JNPR       44            84            252            500
         OpEx: power & cooling ($/yr)      CSCO       $61,834       $73,409       $245,355       $489,884
                                           JNPR       $33,717       $39,971       $66,237        $105,010
         Administrative costs ($/yr)       CSCO       $340,000      $460,000      $1,000,000     $1,800,000
                                           JNPR       $200,000      $200,000      $200,000       $200,000
         Maintenance cost ($/yr)           CSCO       $41,798       $73,396       $225,288       $448,319
                                           JNPR       (need PLM input)
         CapEx: equipment costs            CSCO       $2,888,792    $3,953,584    $12,992,752    $25,950,876
                                           JNPR       (need PLM input)
         Green: CO2 emissions (lbs)        CSCO       359,760       427,105       1,427,522      2,850,233
                                           JNPR       196,174       232,559       385,378        610,969
