IP Expo 2009 - When Data Centre Cabling Becomes Art

You may take it for granted, but cabling is the essential building block of the data centre. Deciding how and what to use to weave your data conduit tapestry requires knowledge. This session will help you understand and answer the following:

* Should I install copper or fibre?
* What category of copper and what type of fibre?
* Should I use a combination of both?
* If I use both, what category and grade of each?
* What's on the horizon?

Speaker notes
  • Point out the connectivity opportunities associated with each of these scenarios and explain each situation.
  • This is an example of one of those horror stories you hear about: this site has no structured cabling under the floor. Poor airflow to the servers can have a dramatic effect on equipment reliability. Many data centres provide far more cooling than they should need because the cooled air cannot effectively reach the servers in the racks, and bad cable management is often the cause.
  • You can see in these photos how a connector-to-connector deployment might appear in a data centre. The photo on the left shows 1100 modules installed in an 1100 panel in a server cabinet, ready to connect to the servers once they are installed. The photo on the right shows M-series outlets terminated and mounted in a modular patch panel at the switch end of the connection.
  • Test data maintained for 7 years. Engineered to ease installation. Rapid deployment and faster mean time to productivity. Bad components/terminations are found prior to installation (DTX1800 testing; similar to InstaPATCH). De-risks product performance and scrap: no partially used reels, no on-site theft/loss. Scalability: add additional harnesses very simply by ordering a single part number, with source/destination known. 8x faster: two people can install a harness in half an hour.
  • Centralised switching to TC cross-connect, or directly to server cabinets. Interconnect harnesses: no patching at the switch. Chassis-based switches are hot-swappable (cards/power/fans/management). On the left, a 6513 with 8 blades and 1 management card: cabling must be kept clean so a card can be swapped in 10 minutes. Cable support bar. Mock-up for Caterpillar: 6513 on the left, modelled for the DC; the third rack is a server rack. Could be done in a TR as well: retrofit opportunities.
  • You can see on this slide the Cisco Nexus 7000 switch, where the yellow arrows indicate copper ports arranged in a 6x2 configuration. With the InstaPATCH Cu solution we offer a 6x2 SwitchPack designed to mate with this configuration for the simple and fast connection of 12 copper cables with plugs at one time.
  • Cisco does have other port configurations, as displayed here, and we offer SwitchPack versions that mate with each.
  • Networking standards will not stop at 10Gb/s transmission speeds. But, obviously, the economic viability of 100GbE will depend on the transceiver options, which in turn depend on the PMD (Physical Media Dependent) options. Several PMD options are under consideration by IEEE. 100GBASE-P10 and 100GBASE-P5 utilise OM3 multimode fibre technology. 100GBASE-P10 uses parallel fibre technology consisting of 10 fibres for transmit and 10 fibres for receive; each fibre will carry 10G using existing 10GBASE-SR VCSEL technology at 850 nm. 100GBASE-P5 uses parallel fibre technology in combination with 2-channel coarse wavelength division multiplexing (CWDM): 5 fibres for transmit and 5 fibres for receive, also using existing 10GBASE-SR VCSEL technology at 850 nm, so effectively each fibre will carry 20G. The remaining PMDs use singlemode fibre technology. 100GBASE-X4 uses CWDM with four wavelengths at 25Gb/s each. 100GBASE-X10 uses DWDM with ten wavelengths at 10Gb/s each; obviously, it can reuse existing 10GBASE-L and 10GBASE-E platforms. The serial PMDs, 100GBASE-L and 100GBASE-E, are the most challenging technologies.
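As a sanity check on the arithmetic behind these options, the short sketch below (illustrative only, not part of the presentation) derives per-lane and per-fibre rates from the fibre and wavelength counts named above.

```python
# Illustrative arithmetic for the proposed 100GbE PMD options.
# The fibre/wavelength counts follow the speaker note above; the
# dictionary and names exist only for this sketch.

TOTAL_RATE_GBPS = 100

# name -> (fibres per direction, wavelengths per fibre)
pmd_options = {
    "100GBASE-P10 (parallel, OM3)":          (10, 1),
    "100GBASE-P5 (parallel + 2-ch CWDM)":    (5, 2),
    "100GBASE-X4 (singlemode, 4-wl CWDM)":   (1, 4),
    "100GBASE-X10 (singlemode, 10-wl DWDM)": (1, 10),
    "100GBASE-L/E (serial singlemode)":      (1, 1),
}

for name, (fibres, wavelengths) in pmd_options.items():
    lanes = fibres * wavelengths
    per_lane = TOTAL_RATE_GBPS / lanes    # Gb/s per optical lane
    per_fibre = TOTAL_RATE_GBPS / fibres  # Gb/s per physical fibre
    print(f"{name}: {lanes} lanes, {per_lane:.0f}G/lane, {per_fibre:.0f}G/fibre")
```

The 20G-per-fibre figure for 100GBASE-P5 falls straight out of this: 100G over 5 fibres, carried as two 10G wavelengths on each.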
  • With intelligent infrastructure, data centre tasks like adding a new email server to the LAN can be simple and quick. Server provisioning takes just three steps. Step 1: select the server port. Step 2: select the service. Step 3: iPatch Intelligent Infrastructure determines the connection points needed to provide the service. Three simple steps that make server provisioning a snap. Let's move to the next slide.

Transcript

  • 1. When Data Centre Cabling Becomes Art. David Tanis, Technical Director EMEA, 7 October 2009
  • 2. Data Centre Trends
    • Data Centre Consolidation
    • Data Centre Expansion
    • Merger / Acquisition
    • Technology Refresh/Migration (18-36 months)
      • Server and SAN Virtualisation
      • Blade server deployment
      • Enhancing SAN connectivity architecture
    • New Data Centre Build - Greenfield
    • Business Continuity/Disaster Recovery
  • 3. European Commission Code of Conduct on Data Centres (http://re.jrc.ec.europa.eu/energyefficiency/html/standby_initiative_data_centers.htm): “The aim is to coordinate activities by manufacturers, vendors, consultants, utilities and data centre operators/owners to reduce the electricity consumption in a cost effective manner without hampering the mission critical function of data centres.” Other initiatives and organisations working to increase IT efficiency include the Green Data Project.
  • 4. Data Centre Cabling Best Practices
  • 5. Cable Placement and Thermal Management
    • For every 10 °C increase above 21 °C, mean time to failure (MTTF) for active equipment is reduced by 50% (see the sketch after this list)
    • The average large data centre provides 2.7 times more cooling than necessary due to poor airflow management in racks and cabinets
    • In most data centres 60% of conditioned airflow bypasses data equipment air intakes
    • Cabling issues consistently ranked as a top contributor to poor cooling in data centres
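A minimal sketch of the halving rule quoted above, assuming a hypothetical 100,000-hour baseline MTTF at 21 °C (the baseline figure is invented for illustration):

```python
# "MTTF halves for every 10 °C above 21 °C" as a formula:
#   MTTF(T) = MTTF(21 °C) * 0.5 ** ((T - 21) / 10)
# The 100,000-hour baseline below is a placeholder, not a vendor figure.

BASELINE_TEMP_C = 21.0
BASELINE_MTTF_HOURS = 100_000.0  # hypothetical MTTF at 21 °C

def mttf_hours(intake_temp_c: float) -> float:
    """MTTF assuming a 50% reduction for every 10 °C above 21 °C."""
    excess = max(0.0, intake_temp_c - BASELINE_TEMP_C)
    return BASELINE_MTTF_HOURS * 0.5 ** (excess / 10.0)

for temp in (21, 31, 41, 51):
    print(f"{temp} °C intake -> ~{mttf_hours(temp):,.0f} hours MTTF")
```

At 41 °C intake, twenty degrees above baseline, expected life is already down to a quarter, which is why blocked underfloor airflow is more than a cosmetic problem.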
  • 6. Standards-Based Data Centre Cable Routing [diagram: alternating hot and cold aisles over a raised floor, with communications cabling routed under the floor]
  • 7. Cable Placement: Under-floor or Overhead?
    • Under-floor
      • Advantages
        • Clean appearance
        • Improved security
      • Disadvantages
        • May impede cooling airflow
        • More difficult to maintain and upgrade
    • Overhead
      • Advantages
        • Simpler maintenance and upgrade
        • Will not affect underfloor cooling
      • Disadvantages
        • Appearance
        • Decreased security (cables exposed)
  • 8. Data Centre Cabling Standards
  • 9. Data Centre Cabling by Type: 10G Distances Supported [chart: 10G reach of multimode fibre grades OM1, OM2, OM3 and OM4, and of Cat 5e, Cat 6 and Cat 6A copper, on a 100-600 m scale]
  • 10. Infrastructure Standards
    • Enterprise cabling
      • ISO/IEC 11801: Information technology – Generic cabling for customer premises
      • TIA 568B: Commercial Building Telecommunications Cabling Standard
      • EN 50173-1: Information technology – Generic cabling systems – Part 1: General requirements
    • Data centre cabling
      • Draft ISO/IEC 24764: Generic cabling for data centres
      • TIA/EIA-942: Telecommunications Infrastructure Standard for Data Centers
      • EN 50173-5: Information technology – Generic cabling systems – Part 5: Data centres
  • 11. Infrastructure Standards for the Data Centre
    • EN 50173-5 (published 2007): copper – minimum Class E; fibre – OM3 recommended, OS1; connectors – SFF (1-2 fibres), MPO (>2 fibres)
    • TIA/EIA-942 (published 2005): copper – Cat 6 recommended; fibre – OM3 recommended, OS1
    • ISO 24764 (expected 2009): copper – Class EA; fibre – OM3 recommended, OS2; connectors – LC (1-2 fibres), MPO (>2 fibres)
  • 12. Basic Data Centre Topology (source: TIA-942) [diagram: access providers enter through the Entrance Room (carrier equipment and demarcation); backbone cabling links it and the Telecom Room (office and operations centre LAN) to the Main Distribution Area (routers, backbone LAN/SAN switches, PBX); further backbone cabling runs to Horizontal Distribution Areas (LAN/SAN/KVM switches) in the computer room, and horizontal cabling runs from each Horizontal Distribution Area, optionally through a Zone Distribution Area, to the Equipment Distribution Areas (racks/cabinets)]
  • 13. Server Virtualisation Driving the Need for 10G
    • Studies show that the average usage rate of stand-alone servers is only around 20%
    • Virtualisation allows multiple applications to run on the same server
    • Provides better utilisation of servers, rack space AND power
    • The uplink must be able to handle the increased traffic load, so 10G cabling is required! (see the sketch after this list)
    Virtualisation study: annual power and cooling savings of €570 per virtualised application. Source:
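To illustrate why the uplink becomes the bottleneck, here is a rough calculation with invented consolidation numbers; only the ~20% utilisation figure comes from the slide above.

```python
# Hypothetical consolidation example: lightly loaded stand-alone 1G
# servers collapsed onto one virtualised host sharing a single uplink.
# The VM count is an assumption for illustration.

NIC_SPEED_GBPS = 1.0    # each stand-alone server had a 1G NIC
AVG_UTILISATION = 0.20  # ~20% average usage, per the slide
VMS_PER_HOST = 15       # assumed consolidation ratio

aggregate_gbps = VMS_PER_HOST * NIC_SPEED_GBPS * AVG_UTILISATION
print(f"Average aggregate load: {aggregate_gbps:.1f} Gb/s")

# Even the *average* load (3 Gb/s here) exceeds a 1G uplink, and peaks
# run well above the average, hence the case for 10G cabling to the host.
```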
  • 14. 10G: Server Adapter Price Comparison (internet selling prices, 14/7/09)
    • 10GBASE-T: £365, 100 metres
    • 10GBASE-SR: £1,058, 550 metres (OM4 fibre)
    • 10GBASE-CX4: £637, 10 metres
  • 15. 10GBASE-T Equipment Already in the Market: Transceivers, Adapters, Switches, Testers
  • 16. LAN on Motherboard 10GBASE-T Server Ports
    • LOM removes the cost barrier to adopt 10G on servers.
    • 10G Server LOM expected in 2010/2011
    • Server vendors require LOM to be backward compatible, hence LOMs should support:
      • interoperability with 100M/1G/10G switches
      • the existing RJ45 cabling infrastructure
    10G NIC
  • 17. Pre-Terminated Cabling for Data Centres
  • 18. InstaPATCH Cu Pre-Terminated Copper Harnesses Features & Benefits
    • Significant reduction in installation time
    • Factory termination / 100% tested
    • Customised design and labeling
    • Fully warranted
    • SwitchPack Plug Technology for quick switch connections
  • 19. Switch Harness: SwitchPack to Jack
    • High-density Switchport to Patch Panels
      • 6, 8, 12, 16-port versions
      • Cisco, HP, Foundry and others
      • Cat6 and Cat6A
    Switch End
  • 20. SwitchPack 6x2: SwitchPack 12 in a 6x2 configuration
  • 21. SwitchPack Examples: 8x2, 6x2, 6x1, 8x1
  • 22. From Elevation View to Installation
  • 23. Pre-Terminated Fibre Solutions
    • Fibre Cables pre-terminated with MPO connector
      • Designed to length, fibre count and type
    • All cables and modules tested in factory
    • Installation time reduced significantly
  • 24. Fibre Array Cabling and the MPO Ideally Suited for Upgrade to Parallel Transmission MPO Connectors
    • Proven technology supporting serial transmission today
    • Compliant with TIA-568B.1 AD7 Array Polarity Addendum
    • Method B (InstaPATCH Plus) (polarity sketched after this list)
      • No polarity-correcting components
      • Supports 2-fiber connectivity and parallel applications
      • Recommended in upcoming standards revisions:
      • EN 50174-2: Cabling installation – Installation Planning and Practices
      • ISO/IEC 14763-2 Installation and Test
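For reference, Method B achieves polarity without correcting components because the Type B trunk itself flips fibre positions end to end; the toy sketch below models that flip for a 12-fibre MPO and is a simplification, not vendor documentation.

```python
# Toy model of a Type B (Method B) 12-fibre MPO trunk: the key-up to
# key-up construction inverts fibre positions between the two ends,
# so position 1 arrives at position 12, 2 at 11, and so on.

FIBRES = 12

def method_b_far_end(position: int) -> int:
    """Far-end MPO position for a given near-end position (1-based)."""
    if not 1 <= position <= FIBRES:
        raise ValueError("position out of range")
    return FIBRES + 1 - position

for near in range(1, FIBRES + 1):
    print(f"near position {near:2d} -> far position {method_b_far_end(near):2d}")
```

This built-in inversion is what lets the same trunk design carry both duplex 2-fibre channels and parallel applications without polarity-correcting modules.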
  • 25. Standard Products [diagram: InstaPATCH trunk cables, array patch cords, InstaPATCH Plus DM2-24LC modules, 8 MPO adapter panels, fibre patch cords and 1000G2 IP shelves, arranged as a cross-connect in a Director rack]
  • 26. High-Density SAN Director Installation: becomes unmanageable using jumpers; stays manageable with InstaPATCH
  • 27. 40G and 100GbE
  • 28. 40 Gigabit Ethernet Targeting Next Generation Server Networks
    • Support full-duplex operation only
    • Preserve 802 Ethernet frame format and min/max frame sizes
    • Support a bit error rate of 10^-12 or better at the physical interface (PHY) (see the calculation after this list)
    • Support a MAC data rate of 40G, with PHY specifications for:
      • at least 100m on OM3 and 125m on OM4
      • at least 10m on copper cabling
      • at least 1m over a backplane
    40 Gigabit Ethernet targets servers, high-performance computing clusters, blade servers, storage area networks and network-attached storage.
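To put the 10^-12 objective in perspective, this back-of-the-envelope calculation (not from the slides) converts a BER objective and line rate into the mean time between single bit errors.

```python
# Mean time between bit errors for a given line rate and BER objective.
# Illustrative arithmetic only.

def seconds_between_errors(rate_gbps: float, ber: float = 1e-12) -> float:
    bits_per_second = rate_gbps * 1e9
    return 1.0 / (bits_per_second * ber)

for rate_gbps in (10, 40, 100):
    secs = seconds_between_errors(rate_gbps)
    print(f"{rate_gbps}G at BER 1e-12: one bit error every ~{secs:.0f} s")
# 10G -> ~100 s, 40G -> ~25 s, 100G -> ~10 s between errors on average.
```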
  • 29. 40GbE: Baseline Draft Fibre Options [diagram: a 40 Gigabit MAC connects through the 40 Gigabit Media Independent Interface (XLGMII, 4 lanes) and the 40 Gigabit Attachment Unit Interface (XLAUI, 4 lanes) to two PMDs: 40GBASE-SR4, a 4-lane parallel 850 nm PHY reaching 100 m over OM3 (QSFP, SNAP-12), and 40GBASE-LR4, a 4-wavelength CWDM PHY at ~1310 nm reaching 10 km over OS2 (CFP, MSA)]
  • 30. Parallel Systems Technology Using 850 nm VCSEL arrays for Higher Speeds
  • 31. Adding Intelligence to the Data Centre iPatch System for Intelligent Infrastructure Management
  • 32. Why Consider Intelligent Infrastructure in the DC? Meeting Today’s IT Challenges
    • Pressure to Achieve More with Less…
    • Streamlining of Workflow Processes
    • Disaster Recovery and Fault Management
    • Data Centre Growth and Manageability
    • Regulatory, Compliance and Security
      • Sarbanes-Oxley
      • HIPAA
      • ISO 17799 / ISO 27001
    • Best Practices Industry Standards
      • CoBiT, ITIL, FCAPS, ISO 20000…
  • 33. Collecting Data for the Configuration Management Database (CMDB)
    • Desktop
    • User
    • LAN
    • Voice
    • Server
    • Application
    • WAN
    • OSI Layer 1
  • 34. Mapping the Physical Layer without an Intelligent System
    • Best case: Excel spreadsheet or cable management software
      • Manually updated
      • Reliance on technician for accuracy
      • Not “real-time”
      • Must be audited for accuracy
    • Worst case
  • 35. Patch Cord Management: The “Spaghetti” Challenge
  • 36. iPatch Provides a Physical Connectivity Map [diagram: the logical server-to-switch connection mapped to its physical path through the server cabinet, ZDA and switch]
  • 37. Vision + Knowledge = Control: Change Management with Intelligent Server Deployment
    • Select server template: the template includes size, power, weight, ports and required services (LAN, SAN)
    • Select rack/cabinet: selection based on available rack units, power and maximum load
    • Schedule deployment: connectivity paths determined, electronic work orders issued
    • Deploy server: technicians guided to completion by electronic work orders
    • Automatic confirmation: correct service activation detected, service tickets closed
    A rough sketch of this flow follows below.
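A rough sketch of the three-step flow above; every class, function and port name here is invented for illustration and is not the actual iPatch API.

```python
# Invented sketch of intelligent-infrastructure provisioning: pick a
# server port and a service, and let the system derive the patching
# steps for the electronic work order. Not the real iPatch API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkOrder:
    server_port: str
    service: str
    patch_steps: List[str] = field(default_factory=list)

def plan_connection(server_port: str, service: str) -> WorkOrder:
    # Step 3: a real system would consult its connectivity database to
    # compute these hops; they are hard-coded here for illustration.
    steps = [
        f"{server_port} -> ZDA panel A, port 07",
        "ZDA panel A, port 07 -> HDA panel 3, port 12",
        f"HDA panel 3, port 12 -> {service} switch, port 4",
    ]
    return WorkOrder(server_port, service, steps)

# Step 1 (select the server port) and Step 2 (select the service):
order = plan_connection("Cabinet 14, panel 2, port 05", "email-LAN")
for step in order.patch_steps:
    print(step)  # the guided work order a technician would follow
```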
  • 38. Thank You