UC Expo 2010 - Brocade's new campus Unified Communications optimized network

  • Brocade’s Bay Area campus, located at the intersection of Hwy 237 & North 1st Street, comprises three office buildings totaling 562,000 square feet and a 6-story parking garage. The facility includes an advanced data center, R&D space, and offices for 2,000 Silicon Valley employees. Brocade’s Bay Area workforce is currently distributed across 5 buildings in San Jose and Santa Clara. Consolidating all Silicon Valley employees in new state-of-the-art, eco-friendly facilities provides Brocade with economies of scale and operational efficiencies, and offers the opportunity for closer collaboration among our employees. The campus also provides room for future expansion, with an option to construct a fourth building on the site. The campus incorporates sustainable design on many levels, from the selection of the campus site down to the materials in the carpeting. Construction of the buildings’ core and shell is designed for a Leadership in Energy and Environmental Design (LEED) Silver rating, including extensive energy monitoring, water conservation systems, and a 450 kW solar power system. The building interiors are targeted for a LEED Silver rating or higher, and are designed to provide a healthy, comfortable, and dynamic work environment.
  • Brocade NetIron MLX routers in the data center and campus core/backbone. FastIron FCX stacks as a converged network edge, serving computers, IP security cameras, Avaya IP phones, and Brocade (Motorola OEM) wireless access points.
  • Processing delay includes the time required to collect a frame of voice samples before the speech encoder can process them; the encoding itself; encryption, if applicable; packetization for transmission; and the corresponding reverse process on the receiving end, including the jitter buffer used to compensate for varying packet arrival delay. The complete end-to-end processing delay is often in the 60 ms to 120 ms range when all contributing factors are taken into account. Processing delay falls within an essentially fixed range determined by the vendor’s technology and implementation choices. However, encoding and decoding may be repeated several times if there is any inline transcoding from one codec to another (for example, during a hand-off between networks), in which case the accumulated processing delay can become disruptive.
    Serialization delay is a fixed delay required to clock a voice or data frame onto a network interface, placing the bits onto the wire for transmission. It varies with the clocking speed of the interface: a lower-speed circuit (such as a modem interface or smaller transmission circuit) has a higher serialization delay than a higher-speed circuit. It can be quite significant on low-speed links, and it occurs on every single link of a multi-hop network.
    Network delay is mostly caused by inspecting, queuing, and buffering packets, which can occur at traffic-shaping buffers (such as “leaky bucket” buffers) sometimes encountered at network ingress points, or at the router hops encountered by the packet along the way. Network delay on the Internet generally averages less than 40 ms when there is no major congestion, and modernization of routers has reduced this delay over time.
    Propagation delay is the distance traveled by the packet divided by the speed of signal propagation (i.e., the speed of light in the medium). Propagation delay on transcontinental routes is relatively small, typically less than 40 ms, but propagation delay across complex intercontinental paths can be much larger. This is especially true when satellite circuits are involved, or on very long routes such as Australia to South Africa via Europe, which might incur up to 500 ms of one-way propagation delay. Propagation delay can only be optimized by designing the shortest possible path links.
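The four delay components above can be combined into a rough one-way delay budget. The sketch below uses illustrative numbers (they are assumptions, not measurements from this network) to show how the components add up:

```python
# Rough one-way delay budget combining the four components described above.
# All numeric inputs are illustrative assumptions, not measurements.

FIBER_KM_PER_S = 200_000  # signal propagation in fiber, roughly 2/3 the speed of light

def serialization_delay_ms(frame_bytes: int, link_bps: int) -> float:
    """Time to clock one frame onto the wire; fixed per link speed."""
    return frame_bytes * 8 / link_bps * 1000

def propagation_delay_ms(distance_km: float) -> float:
    """Distance traveled divided by propagation speed."""
    return distance_km / FIBER_KM_PER_S * 1000

frame = 218         # 160 B of G.711 payload (20 ms) + RTP/UDP/IPv4/Ethernet headers
processing = 80.0   # codec + jitter buffer: middle of the 60-120 ms range
serialization = serialization_delay_ms(frame, 1_000_000_000)  # one 1 GbE hop: negligible
network = 10.0      # queuing/buffering on an uncongested path
propagation = propagation_delay_ms(4000)  # e.g. a ~4,000 km transcontinental route

total = processing + serialization + network + propagation
print(f"one-way delay budget: {total:.1f} ms")  # ≈ 110 ms
```

Even with conservative assumptions the budget lands near 110 ms, close to the commonly cited ITU-T G.114 comfort limit of 150 ms one-way, which is why the fixed processing range matters so much: it consumes most of the budget before the network adds anything.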

    1. Infrastructure & Delivery Management Theatre
       Brocade’s New Campus UC Ready Network
       Harry Petty, Director, Product Marketing
       March 2010
    2. Legal Disclaimer
       All or some of the products detailed in this presentation may still be under development and certain specifications, including but not limited to, release dates, prices, and product features, may change. The products may not function as intended and a production version of the products may never be released. Even if a production version is released, it may be materially different from the pre-release version discussed in this presentation.
       NOTHING IN THIS PRESENTATION SHALL BE DEEMED TO CREATE A WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, STATUTORY OR OTHERWISE, INCLUDING BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT OF THIRD-PARTY RIGHTS WITH RESPECT TO ANY PRODUCTS AND SERVICES REFERENCED HEREIN.
       Brocade, the B-wing symbol, BigIron, DCX, Fabric OS, FastIron, File Lifecycle Manager, IronPoint, IronShield, IronView, IronWare, JetCore, MyView, NetIron, SecureIron, ServerIron, StorageX, and TurboIron are registered trademarks, and DCFM and SAN Health are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. All other brands, products, or service names are or may be trademarks or service marks of, and are used to identify, products or services of their respective owners.
       March 11, 2010 | UC Expo | Scaling and Securing Your Microsoft OCS Investment
    3. Abstract
       Brocade’s New Campus UC Ready Network
       Network-equipment maker Brocade Communications Systems is putting the finishing touches on a new 562,000-square-foot headquarters campus in San Jose, CA.
       This session will look at the campus’s state-of-the-art converged network infrastructure and how it has been purposely designed and architected to meet the demands of Unified Communications, voice, video, and mobility services.
    4. Agenda
       • Brocade’s new campus facts
       • Network requirements & constraints
       • Network topology: core/edge/wireless
       • UC and VoIP services
       • UC network challenges
       • Brocade UC ready network
       Is your Network Infrastructure Ready for Voice and UC Services?
    5. Brocade Today
       • 4,000+ Employees
       • Publicly Traded: Nasdaq: BRCD
       • Networking Technology Specialists
       • $2 Billion+ Annual Revenue
       • Worldwide Presence
    6. Brocade’s Bay Area Locations
       Legacy
       • 5 buildings distributed across 3 locations
       • 2000+ employees
       • 3 data centers
       New Campus
       • 3 buildings, 1 location
       • 562,000 square feet
       • 2000+ employees
       • 1 data center
       New Campus Business Drivers
       • Reduce costs: economies of scale, operational efficiency, reduced power consumption, efficient space utilization
       • Boost employee productivity: eliminate cross-site commutes, foster employee collaboration and communication
    7. Campus Network Key Application Requirements
       Office Productivity
       • Intranet/internet access, SharePoint collaboration, corporate apps
       Office Communication
       • UC desktop and VoIP phone in every office/cubicle
       Video
       • Telepresence equipment in conference rooms
       • Video multicast to employee desktops (all hands …)
       Wireless Access
       • High-speed WiFi coverage throughout the campus
       • Wireless UC for roaming employees; seamless wired/wireless transition
       Data Protection
       • Daily automated backup of employees’ desktops/laptops over the network
    8. Campus Network Key Design Constraints
       Space
       • One wiring closet per floor, limited space in wiring closet
       • High density: 300+ cubicles per floor, 2+ ports per cubicle
       Power
       • Minimize power draw in wiring closets (“green” goal)
       Cooling
       • Limit AC requirements in wiring closets (“green” goal)
       Management
       • Simplify: minimize the number of “boxes” to manage individually
       • Manage wired/wireless access as one
    9. New Campus Network Topology
       [Diagram: access/edge layer on each floor of Buildings 1–3, with 2 FastIron CX stacks per floor (FCX-STK 1, FCX-STK 2) and 2 VLANs per floor with L3 links to the core; dual NetIron MLX-32 chassis form the campus backbone, with each link 2 x 10 GbE.]
    10. Access Layer: Brocade FastIron CX Switches
        Space Efficiency
        • 768 x 1 Gb ports in 16 RU of wiring closet space
        • 45% more space efficient than a chassis
        Power Efficiency
        • 44% less power than a chassis solution
        Management
        • Each stack is managed as a single entity
        Performance
        • Non-blocking, full line-rate switching
        • 8 x 10 GbE uplinks to the campus core
        Bottom Line
        • Best of both worlds: fixed-port efficiency with chassis-like management
        • 2 x FastIron CX stacks per floor (FCX-STK 1, FCX-STK 2)
        • 7 to 8 switches per stack
        • 48 x PoE ports per switch
        • Up to 28 PoE+ ports for 802.11n access points
        • Redundant power supplies
        • 128 Gbps stacking bandwidth across each stack
        • 80 Gbps of aggregated bandwidth between STK1 and STK2
        • 80 Gbps of aggregated bandwidth to the core
        • 2 VLANs per floor (voice/data)
        • L3 routing to the campus core
    11. Core Layer: Brocade MLX 32 Chassis
        Simplicity
        • “Collapsed” aggregation layer reduces the number of “boxes”
        Reliability
        • Redundant power supplies, management modules, and fabric modules
        • Redundant topology with dual chassis
        Management
        • Only two “boxes” to manage for the core
        Capacity and Performance
        • 2 x 3.2 Tbps data forwarding capacity
        • 2 x 128 x 10 GbE ports
        Bottom Line
        • Design simplicity, maximum performance
        • 2 x Brocade MLX 32 chassis form the core layer of the entire campus
        • 128 x 10 GbE ports in each chassis to connect to the access layer, the data center, and the WAN
        • L3 routing to the access layer, the data center, and the WAN
        • Redundant network topology: each chassis is connected to all access layer stacks, the data center, and the WAN
        • Voice and data VLANs to preserve traffic isolation
    12. Wireless: Brocade Mobility APs and Controllers
        Coverage
        • Brocade 7131 APs monitor and auto-adjust RF levels across APs to eliminate blind spots
        Reliability
        • Redundant wireless coverage and redundant AP controllers
        Management
        • All APs centrally managed from the controller
        Capacity and Performance
        • 802.11n speed (160 Mbps real throughput)
        • 2 x 1 Gb ports to the access layer switch
        Bottom Line
        • Seamless, high-performance wired/wireless access across the campus
        • A total of over 200 Brocade 7131 802.11n APs for the campus, with 8 to 14 on each floor
        • Brocade 5181 APs for outdoor coverage
        • APs connected to access layer switches on each floor, and powered by PoE+
        • All APs are controlled and managed by a set of 2 x Brocade Mobility RFS7000 controllers in cluster mode
        • Controllers deployed in the data center
    13. Campus UC and Voice Services
        Cost savings
        • Unified network infrastructure reduces CAPEX and OPEX by using a single IP network for both data and voice
        • VoIP is based on open standards, eliminating telephony vendor lock-in
        Improved employee productivity
        • Enables advanced voice/data services integration, such as an integrated address book and voice mail delivery via email
        • Integrated communications across applications: voice, e-mail, instant messaging, video; seamless transitions between modes and devices
        • Avaya VoIP phone in each cube
        • Avaya S8800 media servers and session managers in clustered mode
        • Media servers deployed in the SJ data center and the Broomfield data center for full redundancy
        • SIP trunks from service providers to both SJ and Broomfield
        • Microsoft OCS services deployed in the SJ data center
        • OCS integrated with Avaya VoIP services and voice messaging
    14. Key UC Network Infrastructure Challenges
        • Service quality: The network infrastructure needs to meet critical voice and video conferencing latency requirements.
        • Power delivery: The network needs to support power delivery to IP phones and other UC devices.
        • Auto-configuration: Network ports need to auto-configure when IP phones are plugged in.
        • Availability: Voice is a mission-critical application that does not tolerate downtime.
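To put the voice load in perspective, the per-call bit rate of a standard G.711 stream can be estimated. The header sizes below are standard protocol sizes; the 20 ms packetization interval is an assumed typical value, not a figure from the presentation:

```python
# Per-call bandwidth for one direction of a G.711 voice stream on Ethernet,
# headers included. 20 ms packetization is an assumed typical value.

def voip_call_bps(codec_bps: int = 64_000, packet_ms: int = 20) -> float:
    """One direction of one call, including RTP/UDP/IPv4/Ethernet overhead."""
    payload_bytes = codec_bps // 8 * packet_ms // 1000   # 160 B per packet for G.711
    overhead_bytes = 12 + 8 + 20 + 18                    # RTP + UDP + IPv4 + Ethernet (incl. FCS)
    packets_per_s = 1000 / packet_ms                     # 50 packets per second
    return (payload_bytes + overhead_bytes) * 8 * packets_per_s

rate = voip_call_bps()
print(f"{rate / 1000:.1f} kbps per call")  # 87.2 kbps
```

At under 100 kbps per direction, even hundreds of simultaneous calls consume a tiny fraction of a single 10 GbE uplink; for voice, the binding constraints are latency, jitter, and loss rather than raw bandwidth.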
    15. UC Ready Brocade Campus
        1. Service quality:
        • Over-provisioned network capacity: plenty of bandwidth across the campus to avoid contention; enough for the most demanding voice/video applications
        • Constant monitoring of service quality through sFlow-capable monitoring tools
        • QoS settings: L2 (802.1p) and L3 (DSCP) ensure voice takes precedence in case of contention (critical for VoIP traffic on WAN connections across sites)
        2. Power delivery:
        • All phones are PoE powered
        • Access layer switches can deliver maximum PoE power on all ports
        • All 802.11n access points powered through PoE+ (30 W per port)
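The L3 (DSCP) marking mentioned above can be sketched from the host side. This is a minimal illustration, not part of the presented design: a UDP socket marks its packets with DSCP EF (Expedited Forwarding), the conventional voice marking, so QoS-aware switches and routers can prioritize them. `IP_TOS` is a standard POSIX socket option; the L2 802.1p tag is normally applied by the phone or the switch, not the host.

```python
# Host-side L3 QoS marking: set DSCP EF on a UDP socket so the network
# can classify and prioritize its packets as voice traffic.
import socket

DSCP_EF = 46            # Expedited Forwarding, the conventional voice marking
TOS_EF = DSCP_EF << 2   # DSCP occupies the top 6 bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
print(hex(tos))         # typically 0xb8 on Linux
```

Marking at the source only helps if the switches and routers along the path trust and honor the DSCP value, which is exactly what the campus QoS settings above are configured to do.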
    16. UC Ready Brocade Campus
        3. Auto-configuration:
        • Devices configured automatically: LLDP-MED based phone configuration
        • Phones auto-discover the voice VLAN
        4. Availability:
        • Redundant components across the board: power supplies, management modules, fabric modules …
        • Redundant network topology: dual chassis at the network core, automatic stacking failover, stacks on each floor connected to both chassis, each core chassis connected to the WAN and the data center
        • Redundant wireless coverage (lots of APs) and redundant AP controllers
    17. THANK YOU
        For more information, please visit www.brocade.com
        © 2010 Brocade Communications Systems, Inc.
