THE NEW DATA CENTER
FIRST EDITION

New technologies are radically reshaping the data center

TOM CLARK
Tom Clark, 1947–2010

All too infrequently we have the true privilege of knowing a friend and colleague like Tom Clark. We mourn the passing of a special person, a man who was inspired as well as inspiring, an intelligent and articulate man, a sincere and gentle person with enjoyable humor, and someone who was respected for his great achievements. We will always remember the endearing and rewarding experiences with Tom and he will be greatly missed by those who knew him.

Mark S. Detrick
© 2010 Brocade Communications Systems, Inc. All Rights Reserved.

Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Brocade Bookshelf Series designed by Josh Judd

The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas

Printing History
First Edition, August 2010
Important Notice

Use of this book constitutes consent to the following conditions. This book is supplied “AS IS” for informational purposes only, without warranty of any kind, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this book at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this book may require an export license from the United States government.

Brocade Corporate Headquarters
San Jose, CA USA
T: +01-408-333-8000
info@brocade.com

Brocade European Headquarters
Geneva, Switzerland
T: +41-22-799-56-40
emea-info@brocade.com

Brocade Asia Pacific Headquarters
Singapore
T: +65-6538-4700
apac-info@brocade.com

Acknowledgements

I would first of all like to thank Ron Totah, Senior Director of Marketing at Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers. Ron's consistent support and encouragement for the Brocade Bookshelf projects and the Brocade TechBytes Webcast series provide sustained momentum for getting technical information into the hands of our customers.

The real work of project management, copyediting, content generation, assembly, publication, and promotion is done by Victoria Thomas, Technical Marketing Manager at Brocade. Without Victoria's steadfast commitment, none of this material would see the light of day.

I would also like to thank Brook Reams, Solution Architect for Applications on the Integrated Marketing team, for reviewing my draft manuscript and providing suggestions and invaluable insights on the technologies under discussion.

Finally, a thank you to the entire Brocade team for making this a first-class company that produces first-class products for first-class customers worldwide.
About the Author

Tom Clark was a resident SAN evangelist for Brocade and represented Brocade in industry associations, conducted seminars and tutorials at conferences and trade shows, promoted Brocade storage networking solutions, and acted as a customer liaison. A noted author and industry advocate of storage networking technology, he was a board member of the Storage Networking Industry Association (SNIA) and former Chair of the SNIA Green Storage Initiative. Clark published hundreds of articles and white papers on storage networking and is the author of Designing Storage Area Networks, Second Edition (Addison-Wesley, 2003), IP SANs: A Guide to iSCSI, iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley, 2001), Storage Virtualization: Technologies for Simplifying Data Storage and Management (Addison-Wesley, 2005), and Strategies for Data Protection (Brocade Bookshelf, 2008).

Prior to joining Brocade, Clark was Director of Solutions and Technologies for McDATA Corporation and the Director of Technical Marketing for Nishan Systems, the innovator of storage over IP technology. As a liaison between marketing, engineering, and customers, he focused on customer education and defining features that ensure productive deployment of SANs. With more than 20 years' experience in the IT industry, Clark held technical marketing and systems consulting positions with storage networking and other data communications companies.

Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows that he was intelligent, quick, a voice of sanity and also sarcasm, and a pragmatist with a great heart. He was indeed the heart of Brocade TechBytes, a monthly Webcast he described as “a late night technical talk show,” which was launched in November 2008 and is still part of Brocade's Technical Marketing program.
Contents

Preface
Chapter 1: Supply and Demand
Chapter 2: Running Hot and Cold
  Energy, Power, and Heat
  Environmental Parameters
  Rationalizing IT Equipment Distribution
  Economizers
  Monitoring the Data Center Environment
Chapter 3: Doing More with Less
  VMs Reborn
  Blade Server Architecture
  Brocade Server Virtualization Solutions
    Brocade High-Performance 8 Gbps HBAs
    Brocade 8 Gbps Switch and Director Ports
    Brocade Virtual Machine SAN Boot
    Brocade N_Port ID Virtualization for Workload Optimization
    Configuring Single Initiator/Target Zoning
    Brocade End-to-End Quality of Service
    Brocade LAN and SAN Security
    Brocade Access Gateway for Blade Frames
    The Energy-Efficient Brocade DCX Backbone Platform for Consolidation
    Enhanced and Secure Client Access with Brocade LAN Solutions
    Brocade Industry Standard SMI-S Monitoring
    Brocade Professional Services
  FCoE and Server Virtualization
Chapter 4: Into the Pool
  Optimizing Storage Capacity Utilization in the Data Center
  Building on a Storage Virtualization Foundation
  Centralizing Storage Virtualization from the Fabric
  Brocade Fabric-based Storage Virtualization
Chapter 5: Weaving a New Data Center Fabric
  Better Fewer but Better
  Intelligent by Design
  Energy Efficient Fabrics
  Safeguarding Storage Data
  Multi-protocol Data Center Fabrics
  Fabric-based Disaster Recovery
Chapter 6: The New Data Center LAN
  A Layered Architecture
  Consolidating Network Tiers
  Design Considerations
    Consolidate to Accommodate Growth
    Network Resiliency
    Network Security
    Power, Space and Cooling Efficiency
    Network Virtualization
  Application Delivery Infrastructure
Chapter 7: Orchestration
Chapter 8: Brocade Solutions Optimized for Server Virtualization
  Server Adapters
    Brocade 825/815 FC HBA
    Brocade 425/415 FC HBA
    Brocade FCoE CNAs
  Brocade 8000 Switch and FCOE10-24 Blade
  Access Gateway
  Brocade Management Pack
  Brocade ServerIron ADX
Chapter 9: Brocade SAN Solutions
  Brocade DCX Backbones (Core)
  Brocade 8 Gbps SAN Switches (Edge)
    Brocade 5300 Switch
    Brocade 5100 Switch
    Brocade 300 Switch
    Brocade VA-40FC Switch
  Brocade Encryption Switch and FS8-18 Encryption Blade
  Brocade 7800 Extension Switch and FX8-24 Extension Blade
  Brocade Optical Transceiver Modules
  Brocade Data Center Fabric Manager
Chapter 10: Brocade LAN Network Solutions
  Core and Aggregation
    Brocade NetIron MLX Series
    Brocade BigIron RX Series
  Access
    Brocade TurboIron 24X Switch
    Brocade FastIron CX Series
    Brocade NetIron CES 2000 Series
    Brocade FastIron Edge X Series
  Brocade IronView Network Manager
  Brocade Mobility
Chapter 11: Brocade One
  Evolution not Revolution
  Industry's First Converged Data Center Fabric
    Ethernet Fabric
    Distributed Intelligence
    Logical Chassis
    Dynamic Services
  The VCS Architecture
Appendix A: “Best Practices for Energy Efficient Storage Operations”
  Introduction
  Some Fundamental Considerations
  Shades of Green
    Best Practice #1: Manage Your Data
    Best Practice #2: Select the Appropriate Storage RAID Level
    Best Practice #3: Leverage Storage Virtualization
    Best Practice #4: Use Data Compression
    Best Practice #5: Incorporate Data Deduplication
    Best Practice #6: File Deduplication
    Best Practice #7: Thin Provisioning of Storage to Servers
    Best Practice #8: Leverage Resizeable Volumes
    Best Practice #9: Writeable Snapshots
    Best Practice #10: Deploy Tiered Storage
    Best Practice #11: Solid State Storage
    Best Practice #12: MAID and Slow-Spin Disk Technology
    Best Practice #13: Tape Subsystems
    Best Practice #14: Fabric Design
    Best Practice #15: File System Virtualization
    Best Practice #16: Server, Fabric and Storage Virtualization
    Best Practice #17: Flywheel UPS Technology
    Best Practice #18: Data Center Air Conditioning Improvements
    Best Practice #19: Increased Data Center Temperatures
    Best Practice #20: Work with Your Regional Utilities
  What the SNIA is Doing About Data Center Energy Usage
  About the SNIA
Appendix B: Online Sources
Glossary
Index
Figures

Figure 1. The ANSI/TIA-942 standard functional area connectivity.
Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.
Figure 3. Hot aisle/cold aisle equipment floor plan.
Figure 4. Variable speed fans enable more efficient distribution of cooling.
Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling.
Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling.
Figure 7. A native or Type 1 hypervisor.
Figure 8. A hosted or Type 2 hypervisor.
Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements.
Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and 1000 IOPS.
Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts.
Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric.
Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.
Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.
Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors.
Figure 16. FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access.
Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN.
Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment.
Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays.
Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool.
Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets.
Figure 22. Leveraging classes of storage to align data storage to the business value of data over time.
Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers.
Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring and data migration.
Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.
Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.
Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator.
Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications.
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services.
Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.
Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape.
Figure 32. Using fabric ACLs to secure switch and device connectivity.
Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX.
Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions.
Figure 35. IR facilitates resource sharing between physically independent SANs.
Figure 36. Long-distance connectivity options using Brocade devices.
Figure 37. Access, aggregation, and core layers in the data center network.
Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy.
Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy efficient footprint.
Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage.
Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure.
Figure 42. Application workload balancing, protocol processing offload and security via the Brocade ServerIron ADX.
Figure 43. Open systems-based orchestration between virtualization domains.
Figure 44. Brocade Management Pack for Microsoft System Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration.
Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown).
Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown).
Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA.
Figure 48. Brocade 8000 Switch.
Figure 49. Brocade FCOE10-24 Blade.
Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Center interface.
Figure 51. Brocade ServerIron ADX 1000.
Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone.
Figure 53. Brocade 5300 Switch.
Figure 54. Brocade 5100 Switch.
Figure 55. Brocade 300 Switch.
Figure 56. Brocade VA-40FC Switch.
Figure 57. Brocade Encryption Switch.
Figure 58. Brocade FS8-18 Encryption Blade.
Figure 59. Brocade 7800 Extension Switch.
Figure 60. Brocade FX8-24 Extension Blade.
Figure 61. Brocade DCFM main window showing the topology view.
Figure 62. Brocade NetIron MLX-4.
Figure 63. Brocade BigIron RX-16.
Figure 64. Brocade TurboIron 24X Switch.
Figure 65. Brocade FastIron CX-624S-HPOE Switch.
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions.
Figure 67. Brocade FastIron Edge X 624.
Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom).
Figure 69. The pillars of Brocade VCS (detailed in the next section).
Figure 70. A Brocade VCS reference network architecture.
Preface

Data center administrators today are facing unprecedented challenges. Business applications are shifting from conventional client/server relationships to Web-based applications, data center real estate is at a premium, energy costs continue to escalate, new regulations are imposing more rigorous requirements for data protection and security, and tighter corporate budgets are making it difficult to accommodate client demands for more applications and data storage. Since all major enterprises run their businesses on the basis of digital information, the consequences of inadequate processing power, storage, network accessibility, or data availability can have a profound impact on the viability of the enterprise itself.

At the same time, new technologies that promise to alleviate some of these issues require both capital expenditures and a sharp learning curve to successfully integrate new solutions that can increase productivity and lower ongoing operational costs. The ability to quickly adapt new technologies to new problems is essential for creating a more flexible data center strategy that can meet both current and future requirements. This effort necessitates cooperation both between data center administrators and vendors and among the multiple vendors responsible for providing the elements that compose a comprehensive data center solution.

The much overused term “ecosystem” is nonetheless an accurate description of the interdependencies of technologies required for twenty-first century data center operation. No single vendor manufactures the full spectrum of hardware and software elements required to drive data center IT processing. This is especially true when each of the three major domains of IT operations (server, storage, and networking) is undergoing profound technical evolution in the form of virtualization. Not only must products be designed and tested for standards compliance and multi-vendor operability, but management between the domains must be orchestrated to ensure stable operations and coordination of tasks.

Brocade has a long and proven track record in data center network innovation and collaboration with partners to create new solutions that solve real problems while reducing deployment and operational costs. This book provides an overview of the new technologies that are radically transforming the data center into a more cost-effective corporate asset and the specific Brocade products that can help you achieve this goal.

The book is organized as follows:

• “Chapter 1: Supply and Demand” starting on page 1 examines the technological and business drivers that are forcing changes in the conventional data center paradigm. Due to increased business demands (even in difficult economic times), data centers are running out of space and power, and this in turn is driving new initiatives for server, storage, and network consolidation.

• “Chapter 2: Running Hot and Cold” starting on page 9 looks at data center power and cooling issues that threaten productivity and operational budgets. New technologies such as wet- and dry-side economizers, hot aisle/cold aisle rack deployment, and proper sizing of the cooling plant can help maximize productive use of existing real estate and reduce energy overhead.

• “Chapter 3: Doing More with Less” starting on page 17 provides an overview of server virtualization and blade server technology. Server virtualization, in particular, is moving from secondary to primary applications and requires coordination with upstream networking and downstream storage for successful implementation. Brocade has developed a suite of new technologies to leverage the benefits of server virtualization and coordinate operation between virtual machine managers and the LAN and SAN networks.

• “Chapter 4: Into the Pool” starting on page 35 reviews the potential benefits of storage virtualization for maximizing utilization of storage assets and automating life cycle management.
• “Chapter 5: Weaving a New Data Center Fabric” starting on page 45 examines the recent developments in storage networking technology, including higher bandwidth, fabric virtualization, enhanced security, and SAN extension. Brocade continues to pioneer more productive solutions for SANs and is the author or co-author of the significant standards underlying these new technologies.

• “Chapter 6: The New Data Center LAN” starting on page 69 highlights the new challenges that virtualization and Web-based applications present to the data communications network. Products like the Brocade ServerIron ADX Series of application delivery controllers provide more intelligence in the network to offload server protocol processing and provide much higher levels of availability and security.

• “Chapter 7: Orchestration” starting on page 83 focuses on the importance of standards-based coordination between server, storage, and network domains so that management frameworks can provide a comprehensive view of the entire infrastructure and proactively address potential bottlenecks.

• Chapters 8, 9, and 10 provide brief descriptions of Brocade products and technologies that have been developed to solve data center problems.

• “Chapter 11: Brocade One” starting on page 117 describes a new Brocade direction and innovative technologies to simplify the complexity of virtualized data centers.

• “Appendix A: “Best Practices for Energy Efficient Storage Operations”” starting on page 123 is a reprint of an article written by Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage Initiative (GSI).

• “Appendix B: Online Sources” starting on page 139 is a list of online resources.

• The “Glossary” starting on page 141 is a list of data center network terms and definitions.
Chapter 1: Supply and Demand

The collapse of the old data center paradigm

As in other social and economic sectors, information technology has recently found itself in the awkward position of having lived beyond its means. The seemingly endless supply of affordable real estate, electricity, data processing equipment, and technical personnel enabled companies to build large data centers to house their mainframe and open systems infrastructures and to support the diversity of business applications typical of modern enterprises. In the new millennium, however, real estate has become prohibitively expensive, the cost of energy has skyrocketed, utilities are often incapable of increasing supply to existing facilities, data processing technology has become more complex, and the pool of technical talent to support new technologies is shrinking.

At the same time, the increasing dependence of companies and institutions on electronic information and communications has resulted in a geometric increase in the amount of data that must be managed and stored. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of about 1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere. The installation of more servers and disk arrays to accommodate data growth is simply not sustainable as data centers run out of floor space, cooling capacity, and energy to feed additional hardware. The demands constantly placed on IT administrators to expand support for new applications and data are now in direct conflict with the supply of data center space and power.
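To put that growth curve in rough numbers, the short calculation below converts the figures cited above into an implied compound annual growth rate. It is an illustrative sketch only, assuming the 300-exabyte estimate refers to roughly 2009; the exact endpoint year is an assumption, not a figure from the text.

```python
# Illustrative only: implied growth rate of corporate data, using the
# figures cited above (5 EB in 2000, ~300 EB later in the decade).
# Assumption: the 300 EB estimate corresponds to roughly 2009.

start_eb, end_eb = 5, 300
years = 2009 - 2000

growth_factor = end_eb / start_eb              # 60x over the period
cagr = growth_factor ** (1 / years) - 1        # compound annual growth rate

print(f"Growth factor: {growth_factor:.0f}x")
print(f"Implied CAGR: {cagr:.0%}")             # roughly 58% per year
```

Even allowing for uncertainty in the endpoints, growth on the order of 50 to 60 percent per year means stored data roughly doubles every eighteen months, which is why simply adding servers and disk arrays cannot keep pace.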
Gartner predicted that by 2009 half of the world's data centers would not have sufficient power to support their applications. An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.

The conventional approach to data center design and operations has endured beyond its usefulness primarily due to a departmental silo effect common to many business operations. A data center administrator, for example, could specify the near-term requirements for power distribution for IT equipment, but because the utility bill was often paid by the company's facilities management, the administrator would be unaware of continually increasing utility costs. Likewise, individual business units might deploy new rich content applications resulting in a sudden spike in storage requirements and additional load placed on the messaging network, with no proactive notification of the data center and network operators.

In addition, the technical evolution of data center design, cooling technology, and power distribution has lagged far behind the rapid development of server platforms, networks, storage technology, and applications. Twenty-first century technology now resides in twentieth century facilities that are proving too inflexible to meet the needs of the new data processing paradigm. Consequently, many IT managers are looking for ways to align the data center infrastructure to the new realities of space, power, and budget constraints.

Although data centers have existed for over 50 years, guidelines for data center design were not codified into standards until 2005. The ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers focuses primarily on cable plant design but also includes power distribution, cooling, and facilities layout. TIA-942 defines four basic tiers for data center classification, characterized chiefly by the degree of availability each provides:

• Tier 1. Basic data center with no redundancy
• Tier 2. Redundant components but a single distribution path
• Tier 3. Concurrently maintainable, with multiple distribution paths but only one active
• Tier 4. Fault tolerant, with multiple active distribution paths

A Tier 4 data center is obviously the most expensive to build and maintain, but fault tolerance is now essential for most data center implementations. Loss of data access is loss of business, and few companies can afford to risk unplanned outages that disrupt customers and revenue streams. A “five-nines” (99.999%) availability target, which allows for only 5.26 minutes of data center downtime annually, requires redundant electrical, UPS, mechanical, and generator systems. Duplication of power and cooling sources, cabling, network ports, and storage, however, doubles both the cost of the data center infrastructure and the recurring monthly cost of energy. Without new means to reduce the amount of space, cooling, and power while maintaining high data availability, the classic data center architecture is not sustainable.
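The 5.26-minute figure for “five-nines” availability follows directly from the percentage itself; the quick check below is included only to illustrate the arithmetic.

```python
# Quick check of the downtime allowed by "five-nines" availability.

availability = 0.99999                      # 99.999%
minutes_per_year = 365.25 * 24 * 60         # 525,960 minutes

allowed_downtime = (1 - availability) * minutes_per_year
print(f"Allowed downtime: {allowed_downtime:.2f} minutes per year")  # ~5.26
```

The same arithmetic also shows why each additional nine is so costly: four nines (99.99%) still permits roughly 52.6 minutes of downtime per year, ten times the five-nines budget.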
Figure 1. The ANSI/TIA-942 standard functional area connectivity.

As shown in Figure 1, the TIA-942 standard defines the main functional areas and interconnecting cable plant for the data center. Horizontal distribution is typically subfloor for older raised-floor data centers or ceiling rack drop for newer facilities. The definition of primary functional areas is meant to rationalize the cable plant and equipment placement so that space is used more efficiently and ongoing maintenance and troubleshooting can be minimized. As part of the mainframe legacy, many older data centers are victims of indiscriminate cable runs, often strung reactively in response to an immediate need. The subfloors of older data centers can be clogged with abandoned bus and tag cables, which are simply too long and too tangled to remove. This impedes airflow and makes it difficult to accommodate new cable requirements.

Note that the overview in Figure 1 does not depict the additional data center infrastructure required for UPS systems (primarily battery rooms), cooling plant, humidifiers, backup generators, fire suppression equipment, and other facilities support systems. Although the support infrastructure represents a significant part of the data center investment, it is often over-provisioned for the actual operational power and cooling requirements of IT equipment. Even though it may be done in anticipation of future growth, over-provisioning is now a luxury that few data centers can afford. Properly sizing the computer room air conditioning (CRAC) to the proven cooling requirement is one of the first steps in getting data center power costs under control.

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.

The diagram in Figure 2 shows the basic functional areas for IT processing supplemented by the key data center support systems required for high availability data access. Each unit of powered equipment has a multiplier effect on total energy draw. First, each data center element consumes electricity according to its specific load requirements, typically on a 7x24 basis. Second, each unit dissipates heat as a natural by-product of its operation, and heat removal and cooling requires additional energy draw in the form of the computer room air conditioning system. The CRAC system itself generates heat, which also requires cooling. Depending on the design, the CRAC system may require auxiliary equipment such as cooling towers, pumps, and so on, which draw additional power. Because electronic equipment is sensitive to ambient humidity, each element also places an additional load on the humidity control system. And finally, each element requires UPS support for continuous operation in the event of a power failure. Even in standby mode, the UPS draws power for monitoring controls, charging batteries, and flywheel operation.

Air conditioning and air flow systems typically represent about 37% of a data center's power bill. Although these systems are essential for IT operations, they are often over-provisioned in older data centers, and the original air flow strategy may not work efficiently for rack-mount open systems infrastructure. For an operational data center, however, retrofitting or redesigning air conditioning and flow during production may not be feasible.

For large data centers in particular, the steady accumulation of more servers, network infrastructure, and storage elements and their accompanying impact on space, cooling, and energy capabilities highlights the shortcomings of conventional data center design. Additional space simply may not be available, the air flow may be inadequate for sufficient cooling, and utility-supplied power may already be at its maximum. And yet the escalating requirements for more applications, more data storage, faster performance, and higher availability continue unabated. Resolving this contradiction between supply and demand requires much closer attention to both the IT infrastructure and the data center architecture as elements of a common ecosystem.

As long as energy was relatively inexpensive, companies tended to simply buy additional floor space and cooling to deal with increasing IT processing demands. Little attention was paid to the efficiency of electrical distribution systems or the IT equipment they serviced. With energy now at a premium, maximizing utilization of available power by increasing energy efficiency is essential.

Industry organizations have developed new metrics for calculating the energy efficiency of data centers and providing guidance for data center design and operations. The Uptime Institute, for example, has formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to analyze the relationship between total power supplied to the data center and the power that is supplied specifically to operate IT equipment. The total facilities power input divided by the IT equipment power draw highlights the energy losses due to power conversion, heating/cooling, inefficient hardware, and other contributors. A SI-EER of 2 would indicate that for every 2 watts of energy input at the data center meter, only 1 watt drives IT equipment. By the Uptime Institute's own member surveys, a SI-EER of 2.5 is not uncommon.
Likewise, The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems, has proposed a Data Center Infrastructure Efficiency (DCiE) ratio that divides the IT equipment power draw by the total data center facility power. This is essentially the reciprocal of SI-EER, yielding a fractional ratio between the facilities power supplied and the actual power draw for IT processing. With DCiE or SI-EER, however, it is not possible to achieve a 1:1 ratio that would enable every watt supplied to the data center to be productively used for IT processing. Cooling, air flow, humidity control, fire suppression, power distribution losses, backup power, lighting, and other factors inevitably consume power. These supporting elements, however, can be managed so that productive utilization of facilities power is increased and IT processing itself is made more efficient via new technologies and better product design.

Although SI-EER and DCiE are useful tools for a top-down analysis of data center efficiency, it is difficult to support these high-level metrics with real substantiating data. It is not sufficient, for example, to simply use the manufacturer's stated power figures for specific equipment, especially since manufacturer power ratings are often based on projected peak usage and not normal operations. In addition, stated ratings cannot account for hidden inefficiencies (for example, failure to use blanking panels in 19" racks) that periodically increase the overall power draw depending on ambient conditions. The alternative is to meter major data center components to establish baselines of operational power consumption. Although it may be feasible to design in metering for a new data center deployment, it is more difficult for existing environments. The ideal solution is for facilities and IT equipment to have embedded power metering capability that can be solicited via network management frameworks.
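To illustrate how such metered readings roll up into the two metrics, the sketch below computes SI-EER and DCiE from a set of hypothetical kilowatt measurements; the category names and values are invented for the example and are not drawn from any survey.

```python
# Minimal sketch: deriving SI-EER and DCiE from metered power readings.
# The readings below are hypothetical values in kilowatts, for illustration only.

metered_kw = {
    "it_equipment": 500,         # servers, storage arrays, network gear
    "cooling_and_crac": 350,     # chillers, CRAC units, air handling
    "ups_and_distribution": 90,  # UPS losses, PDUs, transformers
    "lighting_and_misc": 60,
}

total_facility_kw = sum(metered_kw.values())
it_kw = metered_kw["it_equipment"]

si_eer = total_facility_kw / it_kw   # Uptime Institute: total facility power / IT power
dcie = it_kw / total_facility_kw     # The Green Grid: IT power / total facility power

print(f"Total facility power: {total_facility_kw} kW")
print(f"SI-EER: {si_eer:.2f}")       # 2.00 for these numbers
print(f"DCiE:   {dcie:.0%}")         # 50%
```

Note that how the non-IT load is subdivided has no effect on either metric; only the total facility draw and the IT equipment draw enter the calculation.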
High-level SI-EER and DCiE metrics focus on the data center's efficiency in delivering energy to power IT equipment. Unfortunately, this does not provide information on the energy efficiency or productivity of the IT equipment itself. Suppose there were two data centers with equivalent IT productivity: one drawing 50 megawatts of power to drive 25 megawatts of IT equipment would have the same DCiE as one drawing 10 megawatts to drive 5 megawatts of IT equipment. The difference in IT equipment energy efficiency could be due to a number of technology choices, including server virtualization, more efficient power supplies and hardware design, data deduplication, tiered storage, storage virtualization, or other elements. The practical usefulness of high-level metrics is therefore dependent on underlying opportunities to increase energy efficiency in individual products and IT systems. Having a tighter ratio between facilities power input and IT output is good, but lowering the overall input number is much better.

Data center energy efficiency has external implications as well. Currently, data centers in the US alone require the equivalent of more than six 1000-megawatt power plants, at a cost of approximately $3B annually. Although that represents less than 2% of US power consumption, it is still a significant and growing number. Global data center power usage is more than twice the US figure. Given that all modern commerce and information exchange is based ultimately on digitized data, the social cost in terms of energy consumption for IT processing is relatively modest. In addition, the spread of digital information and commerce has already provided environmentally friendly benefits in terms of electronic transactions for banking and finance, e-commerce for both retail and wholesale channels, remote online employment, electronic information retrieval, and other systems that have increased productivity and reduced the requirement for brick-and-mortar onsite commercial transactions.

Data center managers, however, have little opportunity to bask in the glow of external efficiencies, especially when energy costs continue to climb and energy sourcing becomes problematic. Although $3B may be a bargain for modern US society as a whole, achieving higher levels of data center efficiency is now a prerequisite for meeting the continued expansion of IT processing requirements. More applications and more data mean either more hardware and energy draw or the adoption of new data center technologies and practices that can achieve much more with far less.
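To make the point concrete, the following minimal sketch uses the megawatt figures from the example above and shows that both facilities score an identical DCiE of 0.50 even though one draws five times as much power overall:

# Both facilities from the example above yield DCiE = 0.50,
# yet the first consumes five times the absolute power.
for facility_mw, it_mw in [(50, 25), (10, 5)]:
    print(f"Input {facility_mw} MW, IT load {it_mw} MW -> DCiE = {it_mw / facility_mw:.2f}")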
What differentiates the new data center architecture from the old may not be obvious at first glance. There are, after all, still endless racks of blinking lights, cabling, network infrastructure, storage arrays, and other familiar systems, and a certain chill in the air. The differences are found in the types of technologies deployed and the real estate required to house them.

As we will see in subsequent chapters, the new data center is an increasingly virtualized environment. The static relationships between clients, applications, and data characteristic of conventional IT processing are being replaced with more flexible and mobile relationships that enable IT resources to be dynamically allocated when and where they are needed most. The enabling infrastructure, in the form of virtual servers, virtual fabrics, and virtual storage, has the added benefit of reducing the physical footprint of IT and its accompanying energy consumption. The new data center architecture thus reconciles the conflict between supply and demand by requiring less energy while supplying higher levels of IT productivity.
Chapter 2: Running Hot and Cold

Taking the heat
Dissipating the heat generated by IT equipment is a persistent problem for data center operations. Cooling systems alone can account for one third to one half of data center energy consumption. Over-provisioning the thermal plant to accommodate current and future requirements leads to higher operational costs. Under-provisioning the thermal plant to reduce costs can negatively impact IT equipment, increase the risk of equipment outages, and disrupt ongoing business operations. Resolving heat generation issues therefore requires a multi-pronged approach to address (1) the source of heat from IT equipment, (2) the amount and type of cooling plant infrastructure required, and (3) the efficiency of air flow around equipment on the data center floor to remove heat.

Energy, Power, and Heat
In common usage, energy is the capacity of a physical system to do work and is expressed in standardized units of joules (the work done by a force of one newton moving one meter along the line of direction of the force). Power, by contrast, is the rate at which energy is expended over time, with one watt of power equal to one joule of energy per second. The power of a 100-watt light bulb, for example, is equivalent to 100 joules of energy per second, and the amount of energy consumed by the bulb over an hour would be 360,000 joules. Because electrical systems often consume thousands of watts, the amount of energy consumed is expressed in kilowatt hours (kWh), and in fact the kilowatt hour is the preferred unit used by power companies for billing purposes. A system that requires 10,000 watts of power would thus consume and be billed for 10 kWh of energy for each hour of operation, or 240 kWh per day, or 87,600 kWh per year. The typical American household consumes 10,656 kWh per year.
Medium and large IT hardware products are typically in the 1000+ watt range. Fibre Channel directors, for example, range from roughly 1300 watts (Brocade) to more than 3000 watts (competing products). A large storage array can be in the 6400-watt range. Although low-end servers may be rated at ~200 watts, higher-end enterprise servers can draw as much as 8000 watts. With the high population of servers and the requisite storage infrastructure to support them in the data center, plus the typical 2x factor for the cooling plant energy draw, it is not difficult to understand why data center power bills keep escalating. According to the Environmental Protection Agency (EPA), data centers in the US collectively consume the energy equivalent of approximately 6 million households, or about 61 billion kWh per year.

Energy consumption generates heat. While power draw is expressed in watts, heat dissipation is expressed in BTU (British Thermal Units) per hour (h). One watt is approximately 3.4 BTU/h. Because BTUs quickly add up to tens or hundreds of thousands per hour in complex systems, heat can also be expressed in therms, with one therm equal to 100,000 BTU. Your household heating bill, for example, is often listed as therms averaged per day or billing period.

Environmental Parameters
Because data centers are closed environments, ambient temperature and humidity must also be considered. ASHRAE's Thermal Guidelines for Data Processing Environments provides best practices for maintaining proper ambient conditions for operating IT equipment within data centers. Data centers typically run fairly cool, at about 68 degrees Fahrenheit and 50% relative humidity. While legacy mainframe systems did require considerable cooling to remain within operational norms, open systems IT equipment is less demanding. Consequently, there has been a more recent trend to run data centers at higher ambient temperatures, sometimes disturbingly referred to as "Speedo" mode data center operation. Although ASHRAE's guidelines present fairly broad allowable ranges of operation (50 to 90 degrees Fahrenheit, 20 to 80% relative humidity), the recommended ranges are still somewhat narrow (68 to 77 degrees Fahrenheit, 40 to 55% relative humidity).
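The unit relationships described above translate directly into a few lines of code. The following minimal sketch uses the conversion factors cited in the text (3.4 BTU/h per watt, 100,000 BTU per therm); the example device, a 1300-watt director running around the clock, is simply an illustration based on the figures quoted above.

WATTS_PER_KW = 1000.0
BTU_PER_WATT_HOUR = 3.4       # one watt is approximately 3.4 BTU/h
BTU_PER_THERM = 100000.0
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts: float) -> float:
    """Energy billed per year for a device running continuously at the given draw."""
    return watts / WATTS_PER_KW * HOURS_PER_YEAR

def heat_btu_per_hour(watts: float) -> float:
    """Approximate heat dissipation that the cooling plant must remove."""
    return watts * BTU_PER_WATT_HOUR

# Illustrative example: a 1300-watt Fibre Channel director running continuously.
draw_watts = 1300
print(f"{annual_kwh(draw_watts):,.0f} kWh per year")                 # about 11,388 kWh
print(f"{heat_btu_per_hour(draw_watts):,.0f} BTU/h "
      f"({heat_btu_per_hour(draw_watts) * HOURS_PER_YEAR / BTU_PER_THERM:,.1f} therms per year)")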
Rationalizing IT Equipment Distribution
Servers and network equipment are typically configured in standard 19-inch (wide) racks, and the rack enclosures, in turn, are arranged for accessibility for cabling and servicing. Increasingly, however, the floor plan for data center equipment distribution must also accommodate air flow for equipment cooling. This requires that individual units be mounted in a rack for consistent air flow direction (all exhaust to the rear or all exhaust to the front) and that the rows of racks be arranged to exhaust into a common space, called a hot aisle/cold aisle plan, as shown in Figure 3.

Figure 3. Hot aisle/cold aisle equipment floor plan.

A hot aisle/cold aisle floor plan provides greater cooling efficiency by directing cold-to-hot air flow for each equipment row into a common aisle. Each cold aisle feeds cool air to two equipment rows, while each hot aisle collects exhaust from two equipment rows, thus enabling maximum benefit from the hot/cold circulation infrastructure. Even greater efficiency is achieved by deploying equipment with variable-speed fans.
Figure 4. Variable speed fans enable more efficient distribution of cooling.

Variable speed fans increase or decrease their spin rate in response to changes in equipment temperature. As shown in Figure 4, cold air flow into equipment racks with constant speed fans favors the hardware mounted in the lower equipment slots, nearer to the cold air feed. Equipment mounted in the upper slots is heated by its own power draw as well as by the heat exhaust from the lower tiers. Use of variable speed fans, by contrast, enables each unit to selectively apply cooling as needed, with more even utilization of cooling throughout the equipment rack.

Research done by Michael Patterson and Annabelle Pratt of Intel leverages the hot aisle/cold aisle floor plan approach to create a metric for measuring energy consumption of IT equipment. By convention, the energy consumption of a unit of IT hardware can be measured physically via metering equipment or approximated via the manufacturer's stated power rating (in watts or BTUs).

As shown in Figure 5, Patterson and Pratt incorporate both the energy draw of the equipment mounted within a rack and the associated hot aisle/cold aisle real estate required to cool the entire rack. This "work cell" unit thus provides a more accurate description of what is actually required to power and cool IT equipment and, supposing the equipment (for example, servers) is uniform across a row, provides a useful multiplier for calculating total energy consumption of an entire row of mounted hardware.
Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling.

When energy was plentiful and cheap, it was often easy to overlook the basic best practices for data center hardware deployment and the simple remedies that correct inefficient air flow. Blanking plates, for example, are used to cover unused rack or cabinet slots and thus enforce more efficient airflow within an individual rack. Blanking plates, however, are often ignored, especially when equipment is frequently moved or upgraded. Likewise, it is not uncommon to find decommissioned equipment still racked up (and sometimes actually powered on). Racked but unused equipment can disrupt air flow within a cabinet and trap the heat generated by active hardware. In raised-floor data centers, decommissioned cabling can disrupt cold air circulation, and unsealed cable cutouts can result in continuous and fruitless loss of cooling. Because the cooling plant itself represents such a significant share of data center energy use, even seemingly minor issues can quickly add up to major inefficiencies and higher energy bills.
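One simple way to apply the work cell idea is to treat each rack plus its share of the adjoining hot and cold aisles as a single unit of power and floor space, then multiply across a uniform row. The sketch below is a rough illustration under that assumption; the wattage, cooling factor, rack count, and floor area values are hypothetical and are not taken from Patterson and Pratt.

from dataclasses import dataclass

@dataclass
class WorkCell:
    """One rack plus its allotted share of hot-aisle and cold-aisle floor space."""
    it_draw_watts: float      # metered (or estimated) draw of the equipment in the rack
    cooling_factor: float     # extra power for cooling; 1.0 means cooling roughly matches IT draw
    floor_area_sqft: float    # rack footprint plus half of each adjoining aisle

    def total_watts(self) -> float:
        return self.it_draw_watts * (1 + self.cooling_factor)

# Hypothetical uniform row of 10 identical server racks.
cell = WorkCell(it_draw_watts=8000, cooling_factor=1.0, floor_area_sqft=28)
racks_in_row = 10
print(f"Row power: {cell.total_watts() * racks_in_row / 1000:.0f} kW")
print(f"Row floor space: {cell.floor_area_sqft * racks_in_row:.0f} sq ft")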
Economizers
Traditionally, data center cooling has been provided by large air conditioning systems (computer room air conditioning, or CRAC) that used CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refrigerants. Since both CFCs and HCFCs are ozone depleting, current systems use ozone-friendly refrigerants to minimize broader environmental impact. Conventional CRAC systems, however, consume significant amounts of energy and may account for nearly half of a data center's power bill. In addition, these systems are typically over-provisioned to accommodate data center growth and consequently incur a higher operational expense than is justified for the required cooling capacity.

For new data centers in temperate or colder latitudes, economizers can provide part or all of the cooling requirement. Economizer technology dates to the mid-1800s but has seen a revival in response to rising energy costs. As shown in Figure 6, an economizer (in this case, a dry-side economizer) is essentially a heat exchanger that leverages cooler outside ambient air to cool the equipment racks.

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling.

Use of outside air has its inherent problems. Data center equipment is sensitive to particulates that can build up on circuit boards and contribute to heating issues. An economizer may therefore incorporate particulate filters to scrub the external air before the air flow enters the data center. In addition, external air may be too humid or too dry for data center use. Integrated humidifiers and dehumidifiers can condition the air flow to meet operational specifications for data center use. As stated above, ASHRAE recommends 40 to 55% relative humidity.
Dry-side economizers depend on the external air supply being sufficiently cooler than the data center itself, and this may fluctuate seasonally. Wet-side economizers thus include cooling towers as part of the design to further condition the air supply for data center use. Cooling towers present their own complications, especially in more arid geographies where water resources are expensive and scarce. Ideally, economizers should leverage recyclable resources as much as possible to accomplish the task of cooling while reducing any collateral environmental impact.

Monitoring the Data Center Environment
Because vendor wattage and BTU specifications may assume maximum load conditions, using data sheet specifications or equipment label declarations does not provide an accurate basis for calculating equipment power draw or heat dissipation. An objective multi-point monitoring system for measuring heat and humidity throughout the data center is really the only means to observe and proactively respond to changes in the environment.

A number of monitoring options are available today. For example, some vendors are incorporating temperature probes into their equipment design to provide continuous reporting of heat levels via management software. Some solutions provide rack-mountable systems that include both temperature and humidity probes and monitoring through a Web interface. Fujitsu offers a fiber optic system that leverages the effect of temperature on light propagation to provide a multi-point probe using a single fiber optic cable strung throughout equipment racks. Accuracy is reported to be within a half degree Celsius and within 1 meter of the measuring point. In addition, new monitoring software products can render a three-dimensional view of temperature distribution across the entire data center, analogous to an infrared photo of a heat source.

Although monitoring systems add cost to data center design, they are invaluable diagnostic tools for fine-tuning airflow and equipment placement to maximize cooling and keep power and cooling costs to a minimum. Many monitoring systems can be retrofitted to existing data center plants so that even older sites can leverage new technologies.
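As a simple illustration of how multi-point readings might be checked against the ASHRAE recommended envelope cited earlier (68 to 77 degrees Fahrenheit, 40 to 55% relative humidity), the sketch below flags probes that fall outside that range. The probe names and readings are hypothetical; a real deployment would obtain these values from vendor-specific management interfaces rather than from a hard-coded list.

# ASHRAE recommended ranges cited in the text.
TEMP_RANGE_F = (68.0, 77.0)
HUMIDITY_RANGE_PCT = (40.0, 55.0)

# Hypothetical probe readings: (location, temperature in F, relative humidity in %).
readings = [
    ("row3-rack12-top", 81.5, 44.0),
    ("row3-rack12-bottom", 71.0, 46.0),
    ("cold-aisle-2", 69.5, 38.0),
]

for location, temp_f, rh_pct in readings:
    problems = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        problems.append(f"temperature {temp_f} F")
    if not HUMIDITY_RANGE_PCT[0] <= rh_pct <= HUMIDITY_RANGE_PCT[1]:
        problems.append(f"humidity {rh_pct}%")
    if problems:
        print(f"{location}: outside recommended range ({', '.join(problems)})")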
Chapter 3: Doing More with Less

Leveraging virtualization and blade server technologies
Of the three primary components of an IT data center infrastructure (servers, storage, and network), servers are by far the most populous and have the highest energy impact. Servers represent approximately half of the IT equipment energy cost and about a quarter of the total data center power bill. Server technology has therefore been a prime candidate for regulation via EPA Energy Star and other market-driven initiatives and has undergone a transformation in both hardware and software. Server virtualization and blade server design, for example, are distinct technologies fulfilling different goals, but together they have a multiplying effect on server processing performance and energy efficiency. In addition, multi-core processors and multi-processor motherboards have dramatically increased server processing power in a more compact footprint.

VMs Reborn
The concept of virtual machines dates back to mainframe days. To maximize the benefit of mainframe processing, a single physical system was logically partitioned into independent virtual machines. Each VM ran its own operating system and applications in isolation, although the processor and peripherals could be shared. In today's usage, VMs typically run on open systems servers, and although direct-connect storage is possible, shared storage on a SAN or NAS is the norm. Unlike previous mainframe implementations, today's virtualization software can support dozens of VMs on a single physical server. Typically, 10 or fewer VM instances are run per physical platform, although more powerful server platforms can support 20 or more VMs.
The benefits of server virtualization are as obvious as the potential risks. Running 10 VMs on a single server platform eliminates the need for 9 additional servers with their associated cost, components, and accompanying power draw and heat dissipation. For data centers with hundreds or thousands of servers, virtualization offers an immediate solution for server sprawl and ever-increasing costs.

Like any virtualization strategy, however, the logical separation of VMs must be maintained, and access to server memory and external peripherals must be negotiated to prevent conflicts or errors. VMs on a single platform are hosted by a hypervisor layer, which runs either directly on the server hardware (Type 1 or native) or on top of the conventional operating system already running on the server hardware (Type 2 or hosted).

Figure 7. A native or Type 1 hypervisor.

In a native Type 1 virtualization implementation, the hypervisor runs directly on the server hardware, as shown in Figure 7. This type of hypervisor must therefore support all CPU, memory, network, and storage I/O traffic directly without the assistance of an underlying operating system. The hypervisor is consequently written to a specific CPU architecture (for open systems, typically an Intel x86 design) and associated I/O. Clearly, one of the benefits of native hypervisors is that overall latency can be minimized as individual VMs perform the normal functions required by their applications. With the hypervisor directly managing hardware resources, it is also less vulnerable over time to code changes or updates that might be required if an underlying OS were used.
Figure 8. A hosted or Type 2 hypervisor.

As shown in Figure 8, a hosted or Type 2 server virtualization solution is installed on top of the host operating system. The advantage of this approach is that virtualization can be implemented on existing servers to more fully leverage existing processing power and support more applications in the same footprint. Given that the host OS and hypervisor layer insert additional steps between the VMs and the lower-level hardware, this hosted implementation incurs more latency than native hypervisors. On the other hand, hosted hypervisors can readily support applications with moderate performance requirements and still achieve the objective of consolidating compute resources.

In both native and hosted hypervisor environments, the hypervisor oversees the creation and activity of its VMs to ensure that each VM has its requisite resources and does not interfere with the activity of other VMs. Without proper management of shared memory tables by the hypervisor, for example, one VM instance could easily crash another. The hypervisor must also manage the software traps created to intercept hardware calls made by the guest OS and provide the appropriate emulation of normal OS hardware access and I/O. Because the hypervisor is now managing multiple virtual computers, secure access to the hypervisor itself must be maintained. Efforts to standardize server virtualization management for stable and secure operation are being led by the Distributed Management Task Force (DMTF) through its Virtualization Management Initiative (VMAN) and through collaborative efforts by virtualization vendors and partner companies.
Server virtualization software is now available for a variety of CPUs, hardware platforms, and operating systems. Adoption for mid-tier, moderate-performance applications has been enabled by the availability of economical dual-core CPUs and commodity rack-mount servers. High-performance requirements can be met with multi-CPU platforms optimized for shared processing. Although server virtualization has steadily been gaining ground in large data centers, there has been some reluctance to commit the most mission-critical applications to VM implementations. Consequently, mid-tier applications have been first in line, and as these deployments become more pervasive and proven, mission-critical applications will follow.

In addition to providing a viable means to consolidate server hardware and reduce energy costs, server virtualization enables a degree of mobility unachievable via conventional server management. Because the virtual machine is now detached from the underlying physical processing, memory, and I/O hardware, it is possible to migrate a virtual machine from one hardware platform to another non-disruptively. If, for example, an application's performance is beginning to exceed the capabilities of its shared physical host, it can be migrated onto a less busy host or one that supports faster CPUs and I/O. This application agility, which initially was just an unintended by-product of migrating virtual machines, has become one of the compelling reasons to invest in a virtual server solution. With ever-changing business, workload, and application priorities, the ability to quickly shift processing resources where they are most needed is a competitive business advantage.

As discussed in more detail below, virtual machine mobility creates new opportunities for automating application distribution within the virtual server pool and implementing policy-based procedures to enforce priority handling of select applications over others. Communication between the virtualization manager and the fabric via APIs, for example, enables proactive response to potential traffic congestion or changes in the state of the network infrastructure. This further simplifies management of application resources and ensures higher availability.
Blade Server Architecture
Server consolidation in the new data center can also be achieved by deploying blade server frames. The successful development of blade server architecture has been dependent on the steady increase in CPU processing power and on solving basic problems around shared power, cooling, memory, network, storage, and I/O resources. Although blade servers are commonly associated with server virtualization, these are distinct technologies that have a multiplying benefit when combined.

Blade server design strips away all but the most essential dedicated components from the motherboard and provides shared assets as either auxiliary special-function blades or as part of the blade chassis hardware. Consequently, the power consumption of each blade server is dramatically reduced, while power supplies, fans, and other elements are shared with greater efficiency. A standard data center rack, for example, can accommodate 42 1U conventional rack-mount servers, but 128 or more blade servers in the same space. A single rack of blade servers can therefore house the equivalent of 3 racks of conventional servers; and although the cooling requirement for a fully populated blade server rack may be greater than for a conventional server rack, it is still less than that of the 3 conventional racks that would otherwise be required.

As shown in Figure 9, a blade server architecture offloads all components that can be supplied by the chassis or by supporting specialized blades. The blade server itself is reduced to one or more CPUs and requisite auxiliary logic. The degree of component offload and availability of specialized blades varies from vendor to vendor, but the net result is essentially the same. More processing power can now be packed into a much smaller space, and compute resources can be managed more efficiently.

Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements.
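The rack-level consolidation claim above is straightforward arithmetic. The following rough sketch uses the figures cited in the text (42 1U servers versus 128 blades per rack) and the roughly 10 VMs per physical platform mentioned earlier in this chapter; it is an estimate, not a sizing tool.

CONVENTIONAL_1U_SERVERS_PER_RACK = 42
BLADE_SERVERS_PER_RACK = 128
VMS_PER_BLADE = 10   # conservative per-platform figure cited earlier in the chapter

physical_ratio = BLADE_SERVERS_PER_RACK / CONVENTIONAL_1U_SERVERS_PER_RACK
virtual_servers = BLADE_SERVERS_PER_RACK * VMS_PER_BLADE

print(f"One blade rack replaces ~{physical_ratio:.0f} racks of 1U servers")          # ~3
print(f"With virtualization: {virtual_servers} virtual servers per blade rack, "
      f"or ~{virtual_servers / CONVENTIONAL_1U_SERVERS_PER_RACK:.0f} conventional racks")  # ~30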
By significantly reducing the number of discrete components per processing unit, the blade server architecture achieves higher efficiencies in manufacturing, reduced consumption of resources, streamlined design, and reduced overall costs of provisioning and administration. The unique value-add of each vendor's offering may leverage hot-swap capability, variable-speed fans, variable-speed CPUs, shared memory blades, and consolidated network access. Brocade has long worked with the major blade server manufacturers to provide optimized Access Gateway and switch blades to centralize storage network capability, and the specific features of these products are discussed in the next section.

Although consolidation ratios of 3:1 are impressive, much higher server consolidation is achieved when blade servers are combined with server virtualization software. A fully populated data center rack of 128 blade servers, for example, could support 10 or more virtual machines per blade for a total of 1280 virtual servers. That would be the equivalent of 30 racks (at 42 servers per rack) of conventional 1U rack-mount servers running one OS instance per server. From an energy savings standpoint, that represents the elimination of over 1000 power supplies, fan units, network adapters, and other elements that contribute to higher data center power bills and cooling load.

As a 2009 survey by blade.org shows, adoption of blade server technology has been increasing in both large data centers and small/medium business (SMB) environments. Slightly less than half of the data center respondents and approximately a third of SMB operations have already implemented blade servers, and over a third in both categories have deployment plans in place. With limited data center real estate and increasing power costs squeezing data center budgets, the combination of blade servers and server virtualization is fairly easy to justify.

Brocade Server Virtualization Solutions
Whether on standalone servers or blade server frames, implementing server virtualization has both upstream (client) and downstream (storage) impact in the data center. Because Brocade offers a full spectrum of products spanning LAN, WAN, and SAN, it can help ensure that a server virtualization deployment proactively addresses the new requirements of both client and storage access. The value of a server virtualization solution is thus amplified when combined with Brocade's network technology.
To maximize the benefits of network connectivity in a virtualized server environment, Brocade has worked with the major server virtualization solutions and managers to deliver high performance, high availability, security, energy efficiency, and streamlined management end to end. The following Brocade solutions can enhance a server virtualization deployment and help eliminate potential bottlenecks:

Brocade High-Performance 8 Gbps HBAs
In a conventional server, a host bus adapter (HBA) provides storage access for a single operating system and its applications. In a virtual server configuration, the HBA may be supporting 10 to 20 OS instances, each running its own application. High performance is therefore essential for enabling multiple virtual machines to share HBA ports without congestion. The Brocade 815 (single port) and 825 (dual port, shown in Figure 10) HBAs provide 8 Gbps bandwidth and 500,000 I/Os per second (IOPS) per port to ensure maximum throughput for shared virtualized connectivity. Brocade N_Port Trunking enables the 825 to deliver an unprecedented 16 Gbps of bandwidth (3200 MBps) and one million IOPS. This exceptional performance helps ensure that server virtualization configurations can expand over time to accommodate additional virtual machines without impacting the continuous operation of existing applications.

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and one million IOPS.
The Brocade 815 and 825 HBAs are further optimized for server virtualization connectivity by supporting advanced intelligent services that enable end-to-end visibility and management. As discussed below, Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV), and integrated Quality of Service (QoS) provide powerful tools for simplifying virtual machine deployments and providing proactive alerts directly to server virtualization managers.

Brocade 8 Gbps Switch and Director Ports
In virtual server environments, the need for speed does not end at the network or storage port. Because more traffic is now traversing fewer physical links, building high-performance network infrastructures is a prerequisite for maintaining non-disruptive, high-performance virtual machine traffic flows. Brocade's support of 8 Gbps ports on both switch and enterprise-class platforms enables customers to build high-performance, non-blocking storage fabrics that can scale from small VM configurations to enterprise-class data center deployments. Designing high-performance fabrics ensures that applications running on virtual machines are not exposed to bandwidth issues and can accommodate the high-volume traffic patterns required for data backup and other applications.

Brocade Virtual Machine SAN Boot
For both standalone physical servers and blade server environments, the ability to boot from the storage network greatly simplifies virtual machine deployment and the migration of VM instances from one server to another. As shown in Figure 11, SAN boot centralizes management of boot images and eliminates the need for local storage on each physical server platform. When virtual machines are migrated from one hardware platform to another, the boot images can be readily accessed across the SAN via Brocade HBAs.
Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts.

Brocade 815 and 825 HBAs provide the ability to automatically retrieve boot LUN parameters from a centralized fabric-based registry. This eliminates the error-prone manual host-based configuration scheme required by other HBA vendors. Brocade's SAN boot and boot LUN discovery facilitate migration of virtual machines from host to host, remove the need for local storage, and improve reliability and performance.

Brocade N_Port ID Virtualization for Workload Optimization
In a virtual server environment, the individual virtual machine instances are unaware of physical ports, since the underlying hardware has been abstracted by the hypervisor. This creates potential problems for identifying traffic flows from virtual machines through shared physical ports. NPIV is an industry standard that enables multiple Fibre Channel addresses to share a single physical Fibre Channel port. In a server virtualization environment, NPIV allows each virtual machine instance to have a unique World Wide Name (WWN), or virtual HBA port. This in turn provides a level of granularity for identifying each VM attached to the fabric for end-to-end monitoring, accounting, and configuration. Because the WWN is now bound to an individual virtual machine, the WWN follows the VM when it is migrated to another platform. In addition, NPIV creates the linkage required for advanced services such as QoS, security, and zoning, as discussed in the next section.
Configuring Single Initiator/Target Zoning
Brocade has been a pioneer in fabric-based zoning to segregate fabric traffic and restrict visibility of storage resources to only authorized hosts. As a recognized best practice for server-to-storage configuration, NPIV combined with single initiator/target zoning ensures that individual virtual machines have access only to their designated storage assets. This feature minimizes configuration errors during VM migration and extends the management visibility of fabric connections to specific virtual machines.

Brocade End-to-End Quality of Service
The combination of NPIV and zoning functionality on Brocade HBAs and switches provides the foundation for higher-level fabric services, including end-to-end QoS. Because the traffic flows from each virtual machine can be identified by virtual WWN and segregated via zoning, each can be assigned a delivery priority (low, medium, or high) that is enforced fabric-wide from the host connection to the storage port, as shown in Figure 12.

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric.

While some applications running on virtual machines are logical candidates for QoS prioritization (for example, SQL Server), Brocade's Top Talkers management feature can help identify which VM applications may require priority treatment. Because Brocade end-to-end QoS is ultimately tied to the virtual machine's virtualized WWN address, the QoS assignment follows the VM if it is migrated from one hardware platform to another. This feature ensures that applications enjoy non-disruptive data access despite adds, moves, and changes to the downstream environment and enables administrators to more easily fulfill client service-level agreements (SLAs).
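To illustrate the pairing pattern behind single initiator/target zoning and per-VM QoS, the sketch below keeps a simple table that maps each virtual machine's NPIV WWN to its designated target port and a priority. All VM names, WWNs, and priorities here are hypothetical; actual zones and QoS settings are configured through the fabric's own management tools, not through a script like this.

# Hypothetical mapping of NPIV virtual WWNs to storage targets and QoS priorities.
vm_ports = {
    "vm-sql01":  {"wwn": "10:00:00:05:1e:00:00:01", "target": "20:00:00:05:1e:aa:00:01", "qos": "high"},
    "vm-web01":  {"wwn": "10:00:00:05:1e:00:00:02", "target": "20:00:00:05:1e:aa:00:02", "qos": "medium"},
    "vm-test01": {"wwn": "10:00:00:05:1e:00:00:03", "target": "20:00:00:05:1e:aa:00:03", "qos": "low"},
}

for name, port in vm_ports.items():
    # One zone per initiator/target pair keeps each VM's visibility limited to its
    # designated storage, and the priority travels with the virtual WWN when the VM migrates.
    zone_members = [port["wwn"], port["target"]]
    print(f"zone {name}_zone: {zone_members}  (QoS: {port['qos']})")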
Brocade LAN and SAN Security
Most companies are now subject to government regulations that mandate the protection and security of customer data transactions. Planning a virtualization deployment must therefore also account for basic security mechanisms for both client and storage access. Brocade offers a broad spectrum of security solutions, including LAN and WAN-based technologies and storage-specific SAN security features. For example, Brocade SecureIron products, shown in Figure 13, provide firewall traffic management and LAN security to safeguard access from clients to virtual hosts on the IP network.

Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.

Brocade SAN security features include authentication via access control lists (ACLs) and role-based access control (RBAC), as well as security mechanisms for authenticating the connectivity of switch ports and devices to fabrics. In addition, the Brocade Encryption Switch, shown in Figure 14, and the FS8-18 Encryption Blade for the Brocade DCX Backbone platform provide high-performance (96 Gbps) data encryption for data-at-rest. Brocade's security environment thus protects data-in-flight from client to virtual host as well as data written to disk across the SAN.

Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.
Brocade Access Gateway for Blade Frames
Server virtualization software can be installed on conventional server platforms or blade server frames. Blade server form factors offer the highest density for consolidating IT processing in the data center and leverage shared resources across the backplane. To optimize storage access from blade server frames, Brocade has partnered with blade server providers to create high-performance, high-availability Access Gateway blades for Fibre Channel connectivity to the SAN. Brocade Access Gateway technology leverages NPIV to simplify virtual machine addressing and F_Port Trunking for high utilization and automatic link failover. By integrating SAN connectivity into a virtualized blade server chassis, Brocade helps to streamline deployment and simplify management while reducing overall costs.

The Energy-Efficient Brocade DCX Backbone Platform for Consolidation
With 4x the performance and over 10x the energy efficiency of other SAN directors, the Brocade DCX delivers the high performance required for virtual server implementation and can accommodate growth in VM environments in a compact footprint. The Brocade DCX supports 384 ports of 8 Gbps for a total of 3 Tbps of chassis bandwidth. Ultra-high-speed inter-chassis links (ICLs) allow further expansion of the SAN core for scaling to meet the requirements of very large server virtualization deployments. The Brocade DCX is also designed to non-disruptively integrate Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) for future virtual server connectivity. The Brocade DCX is also available in a 192-port configuration (the Brocade DCX-4S) to support medium VM configurations while providing the same high availability, performance, and advanced SAN services.

The Brocade DCX's Adaptive Networking services for QoS, ingress rate limiting, congestion detection, and management ensure that traffic streams from virtual machines are proactively managed throughout the fabric and accommodate the varying requirements of upper-layer business applications. Adaptive Networking services provide greater agility in managing application workloads as they migrate between physical servers.
