THE NEW DATA CENTER
FIRST EDITION

New technologies are radically reshaping the data center

TOM CLARK
Tom Clark, 1947–2010

All too infrequently we have the true privilege of knowing a friend and colleague like Tom Clark. We mourn the passing of a special person, a man who was inspired as well as inspiring, an intelligent and articulate man, a sincere and gentle person with enjoyable humor, and someone who was respected for his great achievements. We will always remember the endearing and rewarding experiences with Tom and he will be greatly missed by those who knew him.

Mark S. Detrick
© 2010 Brocade Communications Systems, Inc. All Rights Reserved.

Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Brocade Bookshelf Series designed by Josh Judd

The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas

Printing History
First Edition, August 2010
Important Notice

Use of this book constitutes consent to the following conditions. This book is supplied “AS IS” for informational purposes only, without warranty of any kind, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this book at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this book may require an export license from the United States government.

Brocade Corporate Headquarters
San Jose, CA USA
T: +01-408-333-8000
info@brocade.com

Brocade European Headquarters
Geneva, Switzerland
T: +41-22-799-56-40
emea-info@brocade.com

Brocade Asia Pacific Headquarters
Singapore
T: +65-6538-4700
apac-info@brocade.com

Acknowledgements

I would first of all like to thank Ron Totah, Senior Director of Marketing at Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers. Ron's consistent support and encouragement for the Brocade Bookshelf projects and the Brocade TechBytes Webcast series provides sustained momentum for getting technical information into the hands of our customers.

The real work of project management, copyediting, content generation, assembly, publication, and promotion is done by Victoria Thomas, Technical Marketing Manager at Brocade. Without Victoria's steadfast commitment, none of this material would see the light of day.

I would also like to thank Brook Reams, Solution Architect for Applications on the Integrated Marketing team, for reviewing my draft manuscript and providing suggestions and invaluable insights on the technologies under discussion.

Finally, a thank you to the entire Brocade team for making this a first-class company that produces first-class products for first-class customers worldwide.
About the Author

Tom Clark was a resident SAN evangelist for Brocade and represented Brocade in industry associations, conducted seminars and tutorials at conferences and trade shows, promoted Brocade storage networking solutions, and acted as a customer liaison. A noted author and industry advocate of storage networking technology, he was a board member of the Storage Networking Industry Association (SNIA) and former Chair of the SNIA Green Storage Initiative. Clark published hundreds of articles and white papers on storage networking and is the author of Designing Storage Area Networks, Second Edition (Addison-Wesley, 2003), IP SANs: A Guide to iSCSI, iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley, 2001), Storage Virtualization: Technologies for Simplifying Data Storage and Management (Addison-Wesley, 2005), and Strategies for Data Protection (Brocade Bookshelf, 2008).

Prior to joining Brocade, Clark was Director of Solutions and Technologies for McDATA Corporation and the Director of Technical Marketing for Nishan Systems, the innovator of storage over IP technology. As a liaison between marketing, engineering, and customers, he focused on customer education and defining features that ensure productive deployment of SANs. With more than 20 years' experience in the IT industry, Clark held technical marketing and systems consulting positions with storage networking and other data communications companies.

Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows that he was intelligent, quick, a voice of sanity and also sarcasm, and a pragmatist with a great heart. He was indeed the heart of Brocade TechBytes, a monthly Webcast he described as “a late night technical talk show,” which was launched in November 2008 and is still part of Brocade's Technical Marketing program.
Contents

Preface .......................................................... xv
Chapter 1: Supply and Demand ...................................... 1
Chapter 2: Running Hot and Cold ................................... 9
  Energy, Power, and Heat ......................................... 9
  Environmental Parameters ....................................... 10
  Rationalizing IT Equipment Distribution ........................ 11
  Economizers .................................................... 14
  Monitoring the Data Center Environment ......................... 15
Chapter 3: Doing More with Less .................................. 17
  VMs Reborn ..................................................... 17
  Blade Server Architecture ...................................... 21
  Brocade Server Virtualization Solutions ........................ 22
    Brocade High-Performance 8 Gbps HBAs ......................... 23
    Brocade 8 Gbps Switch and Director Ports ..................... 24
    Brocade Virtual Machine SAN Boot ............................. 24
    Brocade N_Port ID Virtualization for Workload Optimization ... 25
    Configuring Single Initiator/Target Zoning ................... 26
    Brocade End-to-End Quality of Service ........................ 26
    Brocade LAN and SAN Security ................................. 27
    Brocade Access Gateway for Blade Frames ...................... 28
    The Energy-Efficient Brocade DCX Backbone Platform for Consolidation ... 28
    Enhanced and Secure Client Access with Brocade LAN Solutions ... 29
    Brocade Industry Standard SMI-S Monitoring ................... 29
    Brocade Professional Services ................................ 30
  FCoE and Server Virtualization ................................. 31
Chapter 4: Into the Pool ......................................... 35
  Optimizing Storage Capacity Utilization in the Data Center ..... 35
  Building on a Storage Virtualization Foundation ................ 39
  Centralizing Storage Virtualization from the Fabric ............ 41
  Brocade Fabric-based Storage Virtualization .................... 43
Chapter 5: Weaving a New Data Center Fabric ...................... 45
  Better Fewer but Better ........................................ 46
  Intelligent by Design .......................................... 48
  Energy Efficient Fabrics ....................................... 53
  Safeguarding Storage Data ...................................... 55
  Multi-protocol Data Center Fabrics ............................. 58
  Fabric-based Disaster Recovery ................................. 64
Chapter 6: The New Data Center LAN ............................... 69
  A Layered Architecture ......................................... 71
  Consolidating Network Tiers .................................... 74
  Design Considerations .......................................... 75
    Consolidate to Accommodate Growth ............................ 75
    Network Resiliency ........................................... 76
    Network Security ............................................. 77
    Power, Space and Cooling Efficiency .......................... 78
    Network Virtualization ....................................... 79
  Application Delivery Infrastructure ............................ 80
Chapter 7: Orchestration ......................................... 83
Chapter 8: Brocade Solutions Optimized for Server Virtualization ... 89
  Server Adapters ................................................ 89
    Brocade 825/815 FC HBA ....................................... 90
    Brocade 425/415 FC HBA ....................................... 91
    Brocade FCoE CNAs ............................................ 91
  Brocade 8000 Switch and FCOE10-24 Blade ........................ 92
  Access Gateway ................................................. 93
  Brocade Management Pack ........................................ 94
  Brocade ServerIron ADX ......................................... 95
Chapter 9: Brocade SAN Solutions ................................. 97
  Brocade DCX Backbones (Core) ................................... 98
  Brocade 8 Gbps SAN Switches (Edge) ............................ 100
    Brocade 5300 Switch ......................................... 101
    Brocade 5100 Switch ......................................... 102
    Brocade 300 Switch .......................................... 103
    Brocade VA-40FC Switch ...................................... 104
  Brocade Encryption Switch and FS8-18 Encryption Blade ......... 105
  Brocade 7800 Extension Switch and FX8-24 Extension Blade ...... 106
  Brocade Optical Transceiver Modules ........................... 107
  Brocade Data Center Fabric Manager ............................ 108
Chapter 10: Brocade LAN Network Solutions ....................... 109
  Core and Aggregation .......................................... 110
    Brocade NetIron MLX Series .................................. 110
    Brocade BigIron RX Series ................................... 111
  Access ........................................................ 112
    Brocade TurboIron 24X Switch ................................ 112
    Brocade FastIron CX Series .................................. 113
    Brocade NetIron CES 2000 Series ............................. 113
    Brocade FastIron Edge X Series .............................. 114
  Brocade IronView Network Manager .............................. 115
  Brocade Mobility .............................................. 116
Chapter 11: Brocade One ......................................... 117
  Evolution not Revolution ...................................... 117
  Industry's First Converged Data Center Fabric ................. 119
    Ethernet Fabric ............................................. 120
    Distributed Intelligence .................................... 120
    Logical Chassis ............................................. 121
    Dynamic Services ............................................ 121
  The VCS Architecture .......................................... 122
Appendix A: “Best Practices for Energy Efficient Storage Operations” ... 123
  Introduction .................................................. 123
  Some Fundamental Considerations ............................... 124
  Shades of Green ............................................... 125
    Best Practice #1: Manage Your Data .......................... 126
    Best Practice #2: Select the Appropriate Storage RAID Level ... 128
    Best Practice #3: Leverage Storage Virtualization ........... 129
    Best Practice #4: Use Data Compression ...................... 130
    Best Practice #5: Incorporate Data Deduplication ............ 131
    Best Practice #6: File Deduplication ........................ 131
    Best Practice #7: Thin Provisioning of Storage to Servers ... 132
    Best Practice #8: Leverage Resizeable Volumes ............... 132
    Best Practice #9: Writeable Snapshots ....................... 132
    Best Practice #10: Deploy Tiered Storage .................... 133
    Best Practice #11: Solid State Storage ...................... 133
    Best Practice #12: MAID and Slow-Spin Disk Technology ....... 133
    Best Practice #13: Tape Subsystems .......................... 134
    Best Practice #14: Fabric Design ............................ 134
    Best Practice #15: File System Virtualization ............... 134
    Best Practice #16: Server, Fabric and Storage Virtualization ... 135
    Best Practice #17: Flywheel UPS Technology .................. 135
    Best Practice #18: Data Center Air Conditioning Improvements ... 136
    Best Practice #19: Increased Data Center Temperatures ....... 136
    Best Practice #20: Work with Your Regional Utilities ........ 137
  What the SNIA is Doing About Data Center Energy Usage ......... 137
  About the SNIA ................................................ 138
Appendix B: Online Sources ...................................... 139
Glossary ........................................................ 141
Index ........................................................... 153
Figures

Figure 1. The ANSI/TIA-942 standard functional area connectivity. ... 3
Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center. ... 4
Figure 3. Hot aisle/cold aisle equipment floor plan. ... 11
Figure 4. Variable speed fans enable more efficient distribution of cooling. ... 12
Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling. ... 13
Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling. ... 14
Figure 7. A native or Type 1 hypervisor. ... 18
Figure 8. A hosted or Type 2 hypervisor. ... 19
Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements. ... 21
Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and 1000 IOPS. ... 23
Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts. ... 25
Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. ... 26
Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters. ... 27
Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape. ... 27
Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors. ... 29
Figure 16. FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access. ... 31
Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN. ... 32
Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment. ... 33
Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays. ... 36
Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool. ... 37
Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets. ... 38
Figure 22. Leveraging classes of storage to align data storage to the business value of data over time. ... 40
Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers. ... 42
Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring and data migration. ... 43
Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time. ... 47
Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery. ... 49
Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator. ... 50
Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications. ... 51
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services. ... 52
Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition. ... 54
Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape. ... 56
Figure 32. Using fabric ACLs to secure switch and device connectivity. ... 58
Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX. ... 61
Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions. ... 62
Figure 35. IR facilitates resource sharing between physically independent SANs. ... 64
Figure 36. Long-distance connectivity options using Brocade devices. ... 67
Figure 37. Access, aggregation, and core layers in the data center network. ... 71
Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy. ... 73
Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy efficient footprint. ... 75
Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage. ... 79
Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure. ... 80
Figure 42. Application workload balancing, protocol processing offload and security via the Brocade ServerIron ADX. ... 81
Figure 43. Open systems-based orchestration between virtualization domains. ... 84
Figure 44. Brocade Management Pack for Microsoft System Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration. ... 86
Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown). ... 90
Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown). ... 91
Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA. ... 92
Figure 48. Brocade 8000 Switch. ... 92
Figure 49. Brocade FCOE10-24 Blade. ... 93
Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Manager interface. ... 94
Figure 51. Brocade ServerIron ADX 1000. ... 95
Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone. ... 98
Figure 53. Brocade 5300 Switch. ... 101
Figure 54. Brocade 5100 Switch. ... 102
Figure 55. Brocade 300 Switch. ... 103
Figure 56. Brocade VA-40FC Switch. ... 104
Figure 57. Brocade Encryption Switch. ... 105
Figure 58. Brocade FS8-18 Encryption Blade. ... 105
Figure 59. Brocade 7800 Extension Switch. ... 106
Figure 60. Brocade FX8-24 Extension Blade. ... 107
Figure 61. Brocade DCFM main window showing the topology view. ... 108
Figure 62. Brocade NetIron MLX-4. ... 110
Figure 63. Brocade BigIron RX-16. ... 111
Figure 64. Brocade TurboIron 24X Switch. ... 112
Figure 65. Brocade FastIron CX-624S-HPOE Switch. ... 113
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions. ... 114
Figure 67. Brocade FastIron Edge X 624. ... 114
Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom). ... 115
Figure 69. The pillars of Brocade VCS (detailed in the next section). ... 118
Figure 70. A Brocade VCS reference network architecture. ... 122
    • PrefaceData center administrators today are facing unprecedented chal-lenges. Business applications are shifting from conventional client/server relationships to Web-based applications, data center realestate is at a premium, energy costs continue to escalate, new regula-tions are imposing more rigorous requirements for data protection andsecurity, and tighter corporate budgets are making it difficult toaccommodate client demands for more applications and data storage.Since all major enterprises run their businesses on the basis of digitalinformation, the consequences of inadequate processing power, stor-age, network accessibility, or data availability can have a profoundimpact on the viability of the enterprise itself.At the same time, new technologies that promise to alleviate some ofthese issues require both capital expenditures and a sharp learningcurve to successfully integrate new solutions that can increase produc-tivity and lower ongoing operational costs. The ability to quickly adaptnew technologies to new problems is essential for creating a more flex-ible data center strategy that can meet both current and futurerequirements. This effort necessitates cooperation between both datacenter administrators and vendors and between the multiple vendorsresponsible for providing the elements that compose a comprehensivedata center solution.The much overused term “ecosystem” is nonetheless an accuratedescription of the interdependencies of technologies required fortwenty-first century data center operation. No single vendor manufac-tures the full spectrum of hardware and software elements required todrive data center IT processing. This is especially true when each ofthe three major domains of IT operations -server, storage, and net-working-are each undergoing profound technical evolution in the formof virtualization. Not only must products be designed and tested forThe New Data Center xv
    • standards compliance and multi-vendor operability, but managementbetween the domains must be orchestrated to ensure stable opera-tions and coordination of tasks.Brocade has a long and proven track record in data center networkinnovation and collaboration with partners to create new solutions tosolve real problems and at the same time reducing deployment andoperational costs. This book provides an overview of the new technolo-gies that are radically transforming the data center into a more cost-effective corporate asset and the specific Brocade products that canhelp you achieve this goal.The book is organized as follows:• “Chapter 1: Supply and Demand” starting on page 1 examines the technological and business drivers that are forcing changes in the conventional data center paradigm. Due to increased business demands (even in difficult economic times), data centers are run- ning out of space and power and this in turn is driving new initiatives for server, storage and network consolidation.• “Chapter 2: Running Hot and Cold” starting on page 9 looks at data center power and cooling issues that threaten productivity and operational budgets. New technologies such as wet and dry- side economizers, hot aisle/cold aisle rack deployment, and proper sizing of the cooling plant can help maximize productive use of existing real estate and reduce energy overhead.• “Chapter 3: Doing More with Less” starting on page 17 provides an overview of server virtualization and blade server technology. Server virtualization, in particular, is moving from secondary to pri- mary applications and requires coordination with upstream networking and downstream storage for successful implementa- tion. 
Brocade has developed a suite of new technologies to leverage the benefits of server virtualization and coordinate operation between virtual machine managers and the LAN and SAN networks.

• “Chapter 4: Into the Pool” starting on page 35 reviews the potential benefits of storage virtualization for maximizing utilization of storage assets and automating life cycle management.
• “Chapter 5: Weaving a New Data Center Fabric” starting on page 45 examines recent developments in storage networking technology, including higher bandwidth, fabric virtualization, enhanced security, and SAN extension. Brocade continues to pioneer more productive solutions for SANs and is the author or co-author of the significant standards underlying these new technologies.

• “Chapter 6: The New Data Center LAN” starting on page 69 highlights the new challenges that virtualization and Web-based applications present to the data communications network. Products like the Brocade ServerIron ADX Series of application delivery controllers provide more intelligence in the network to offload server protocol processing and provide much higher levels of availability and security.

• “Chapter 7: Orchestration” starting on page 83 focuses on the importance of standards-based coordination between server, storage, and network domains so that management frameworks can provide a comprehensive view of the entire infrastructure and proactively address potential bottlenecks.

• Chapters 8, 9, and 10 provide brief descriptions of Brocade products and technologies that have been developed to solve data center problems.

• “Chapter 11: Brocade One” starting on page 117 describes a new Brocade direction and innovative technologies to simplify the complexity of virtualized data centers.

• “Appendix A: Best Practices for Energy Efficient Storage Operations” starting on page 123 is a reprint of an article written by Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage Initiative (GSI).

• “Appendix B: Online Sources” starting on page 139 is a list of online resources.

• The “Glossary” starting on page 141 is a list of data center network terms and definitions.
Chapter 1: Supply and Demand

The collapse of the old data center paradigm

As in other social and economic sectors, information technology has recently found itself in the awkward position of having lived beyond its means. The seemingly endless supply of affordable real estate, electricity, data processing equipment, and technical personnel enabled companies to build large data centers to house their mainframe and open systems infrastructures and to support the diversity of business applications typical of modern enterprises. In the new millennium, however, real estate has become prohibitively expensive, the cost of energy has skyrocketed, utilities are often incapable of increasing supply to existing facilities, data processing technology has become more complex, and the pool of technical talent to support new technologies is shrinking.

At the same time, the increasing dependence of companies and institutions on electronic information and communications has resulted in a geometric increase in the amount of data that must be managed and stored. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of about 1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere. The installation of more servers and disk arrays to accommodate data growth is simply not sustainable as data centers run out of floor space, cooling capacity, and energy to feed additional hardware. The demands constantly placed on IT administrators to expand support for new applications and data are now in direct conflict with the supply of data center space and power.

Gartner predicted that by 2009, half of the world's data centers would not have sufficient power to support their applications. An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.
The conventional approach to data center design and operations has endured beyond its usefulness primarily due to a departmental silo effect common to many business operations. A data center administrator, for example, could specify the near-term requirements for power distribution for IT equipment, but because the utility bill was often paid by the company's facilities management, the administrator would be unaware of continually increasing utility costs. Likewise, individual business units might deploy new rich content applications, resulting in a sudden spike in storage requirements and additional load placed on the messaging network, with no proactive notification of the data center and network operators.

In addition, the technical evolution of data center design, cooling technology, and power distribution has lagged far behind the rapid development of server platforms, networks, storage technology, and applications. Twenty-first century technology now resides in twentieth century facilities that are proving too inflexible to meet the needs of the new data processing paradigm. Consequently, many IT managers are looking for ways to align the data center infrastructure to the new realities of space, power, and budget constraints.

Although data centers have existed for over 50 years, guidelines for data center design were not codified into standards until 2005. The ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers focuses primarily on cable plant design but also includes power distribution, cooling, and facilities layout. TIA-942 defines four basic tiers for data center classification, characterized chiefly by the degree of availability each provides:

• Tier 1. Basic data center with no redundancy
• Tier 2. Redundant components but a single distribution path
• Tier 3. Concurrently maintainable, with multiple distribution paths, one of which is active
• Tier 4. Fault tolerant, with multiple active distribution paths

A Tier 4 data center is obviously the most expensive to build and maintain, but fault tolerance is now essential for most data center implementations. Loss of data access is loss of business, and few companies can afford to risk unplanned outages that disrupt customers and revenue streams. A "five-nines" (99.999%) availability that allows for only 5.26 minutes of data center downtime annually requires redundant electrical, UPS, mechanical, and generator systems. Duplication of power and cooling sources, cabling, network ports, and storage, however, both doubles the cost of the data center infrastructure and the recurring monthly cost of energy. Without new means to reduce the amount of space, cooling, and power while maintaining high data availability, the classic data center architecture is not sustainable.

[Figure 1: ANSI/TIA-942 functional areas — entrance room with carrier equipment and demarcations; main distribution area with routers, backbone LAN/SAN/KVM switches, PBX, and M13 muxes; horizontal and zone distribution areas; and equipment distribution areas of racks/cabinets, interconnected by backbone and horizontal cabling to offices, the operations center, and the telecom room.]

Figure 1. The ANSI/TIA-942 standard functional area connectivity.

As shown in Figure 1, the TIA-942 standard defines the main functional areas and interconnecting cable plant for the data center. Horizontal distribution is typically subfloor for older raised-floor data centers or ceiling rack drop for newer facilities. The definition of primary functional areas is meant to rationalize the cable plant and equipment placement so that space is used more efficiently and ongoing maintenance and troubleshooting can be minimized. As part of the mainframe legacy, many older data centers are victims of indiscriminate cable runs, often strung reactively in response to an immediate need. The subfloors of older data centers can be clogged with abandoned bus and tag cables, which are simply too long and too tangled to remove.
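The "five-nines" downtime figure quoted above is straightforward to verify; a minimal sketch of the arithmetic (availability levels here are the standard examples, not figures beyond those in the text):

```python
# Annual downtime permitted at a given availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year at the given availability (0..1)."""
    return MINUTES_PER_YEAR * (1.0 - availability)

print(round(downtime_minutes(0.99999), 2))  # "five nines": ~5.26 minutes/year
print(round(downtime_minutes(0.999), 1))    # "three nines": ~526 minutes/year
```

The jump from three to five nines cuts allowable downtime a hundredfold, which is what drives the duplicated power, cooling, and network infrastructure described above.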
This impedes airflow and makes it difficult to accommodate new cable requirements.

Note that the overview in Figure 1 does not depict the additional data center infrastructure required for UPS systems (primarily battery rooms), the cooling plant, humidifiers, backup generators, fire suppression equipment, and other facilities support systems. Although the support infrastructure represents a significant part of the data center investment, it is often over-provisioned relative to the actual operational power and cooling requirements of IT equipment. Even though it may
be done in anticipation of future growth, over-provisioning is now a luxury that few data centers can afford. Properly sizing the computer room air conditioning (CRAC) to the proven cooling requirement is one of the first steps in getting data center power costs under control.

[Figure 2: the Figure 1 functional areas supplemented by the support infrastructure — UPS battery room, backup generators, diesel fuel reserves, power distribution, cooling towers, computer room air conditioners (CRAC) and conduits, and the fire suppression system.]

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.

The diagram in Figure 2 shows the basic functional areas for IT processing supplemented by the key data center support systems required for high availability data access. Each unit of powered equipment has a multiplier effect on total energy draw. First, each data center element consumes electricity according to its specific load requirements, typically on a 7x24 basis. Second, each unit dissipates heat as a natural by-product of its operation, and heat removal and cooling requires additional energy draw in the form of the computer room air conditioning system. The CRAC system itself generates heat, which also requires cooling. Depending on the design, the CRAC system may require auxiliary equipment such as cooling towers, pumps, and so on, which draw additional power.
Because electronic equipment is sensitive to ambient humidity, each element also places an additional load on the humidity control system. And finally, each element requires UPS support for continuous operation in the event of a power failure. Even in standby mode, the UPS draws power for monitoring controls, charging batteries, and flywheel operation.

Air conditioning and air flow systems typically represent about 37% of a data center's power bill. Although these systems are essential for IT operations, they are often over-provisioned in older data centers, and the original air flow strategy may not work efficiently for rack-mount open systems infrastructure. For an operational data center, however, retrofitting or redesigning air conditioning and air flow during production may not be feasible.

For large data centers in particular, the steady accumulation of more servers, network infrastructure, and storage elements and their accompanying impact on space, cooling, and energy capabilities highlights the shortcomings of conventional data center design. Additional space simply may not be available, the air flow may be inadequate for sufficient cooling, and utility-supplied power may already be at its maximum. And yet the escalating requirements for more applications, more data storage, faster performance, and higher availability continue unabated. Resolving this contradiction between supply and demand requires much closer attention to both the IT infrastructure and the data center architecture as elements of a common ecosystem.

As long as energy was relatively inexpensive, companies tended to simply buy additional floor space and cooling to deal with increasing IT processing demands. Little attention was paid to the efficiency of electrical distribution systems or the IT equipment they serviced. With energy now at a premium, maximizing utilization of available power by increasing energy efficiency is essential.

Industry organizations have developed new metrics for calculating the energy efficiency of data centers and providing guidance for data center design and operations.
The Uptime Institute, for example, has formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to analyze the relationship between total power supplied to the data center and the power that is supplied specifically to operate IT equipment. The total facilities power input divided by the IT equipment power draw highlights the energy losses due to power conversion, heating/cooling, inefficient hardware, and other contributors. A SI-EER of 2 would indicate that for every 2 watts of energy input at the data center meter, only 1 watt drives IT equipment. By the Uptime Institute's own member surveys, a SI-EER of 2.5 is not uncommon.
Likewise, The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems, has proposed a Data Center Infrastructure Efficiency (DCiE) ratio that divides the IT equipment power draw by the total data center facility power. This is essentially the reciprocal of SI-EER, yielding a fractional ratio between the facilities power supplied and the actual power draw for IT processing. With DCiE or SI-EER, however, it is not possible to achieve a 1:1 ratio that would enable every watt supplied to the data center to be productively used for IT processing. Cooling, air flow, humidity control, fire suppression, power distribution losses, backup power, lighting, and other factors inevitably consume power. These supporting elements, however, can be managed so that productive utilization of facilities power is increased and IT processing itself is made more efficient via new technologies and better product design.

Although SI-EER and DCiE are useful tools for a top-down analysis of data center efficiency, it is difficult to support these high-level metrics with real substantiating data. It is not sufficient, for example, to simply use the manufacturer's stated power figures for specific equipment, especially since manufacturer power ratings are often based on projected peak usage and not normal operations. In addition, stated ratings cannot account for hidden inefficiencies (for example, failure to use blanking panels in 19" racks) that periodically increase the overall power draw depending on ambient conditions. The alternative is to meter major data center components to establish baselines of operational power consumption. Although it may be feasible to design in metering for a new data center deployment, it is more difficult for existing environments.
The ideal solution is for facilities and IT equipment to have embedded power metering capability that can be solicited via network management frameworks.
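The SI-EER and DCiE ratios described above are simple reciprocals of one another. A minimal sketch of the arithmetic (all power values here are illustrative):

```python
# SI-EER (Uptime Institute): total facility power / IT equipment power.
# DCiE (The Green Grid): IT equipment power / total facility power.
# The two metrics describe the same relationship from opposite directions.

def si_eer(facility_kw: float, it_kw: float) -> float:
    return facility_kw / it_kw

def dcie(facility_kw: float, it_kw: float) -> float:
    return it_kw / facility_kw

# A facility drawing 2 kW at the meter for every 1 kW of IT load:
print(si_eer(2000, 1000))  # 2.0 (2 watts in for every watt of IT)
print(dcie(2000, 1000))    # 0.5
```

Note that identical ratios say nothing about absolute consumption: lowering the total input number matters as much as tightening the ratio.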
High-level SI-EER and DCiE metrics focus on how efficiently the data center delivers energy to power IT equipment. Unfortunately, this does not provide information on the energy efficiency or productivity of the IT equipment itself. Suppose there were two data centers with equivalent IT productivity: the one drawing 50 megawatts of power to drive 25 megawatts of IT equipment would have the same DCiE as a data center drawing 10 megawatts to drive 5 megawatts of IT equipment. The IT equipment energy efficiency delta could be due to a number of different technology choices, including server virtualization, more efficient power supplies and hardware design, data deduplication, tiered storage, storage virtualization, or other elements. The practical usefulness of high-level metrics is therefore dependent on underlying opportunities to increase energy efficiency in individual products and IT systems. Having a tighter ratio between facilities power input and IT output is good, but lowering the overall input number is much better.

Data center energy efficiency has external implications as well. Currently, data centers in the US alone require the equivalent of more than 6 x 1000-megawatt power plants at a cost of approximately $3B annually. Although that represents less than 2% of US power consumption, it is still a significant and growing number. Global data center power usage is more than twice the US figure. Given that all modern commerce and information exchange is based ultimately on digitized data, the social cost in terms of energy consumption for IT processing is relatively modest.
In addition, the spread of digital information and commerce has already provided environmentally friendly benefits in terms of electronic transactions for banking and finance, e-commerce for both retail and wholesale channels, remote online employment, electronic information retrieval, and other systems that have increased productivity and reduced the requirement for brick-and-mortar onsite commercial transactions.

Data center managers, however, have little opportunity to bask in the glow of external efficiencies, especially when energy costs continue to climb and energy sourcing becomes problematic. Although $3B may be a bargain for modern US society as a whole, achieving higher levels of data center efficiency is now a prerequisite for meeting the continued expansion of IT processing requirements. More applications and more data mean either more hardware and energy draw or the adoption of new data center technologies and practices that can achieve much more with far less.
What differentiates the new data center architecture from the old may not be obvious at first glance. There are, after all, still endless racks of blinking lights, cabling, network infrastructure, storage arrays, and other familiar systems, and a certain chill in the air. The differences are found in the types of technologies deployed and the real estate required to house them.

As we will see in subsequent chapters, the new data center is an increasingly virtualized environment. The static relationships between clients, applications, and data characteristic of conventional IT processing are being replaced with more flexible and mobile relationships that enable IT resources to be dynamically allocated when and where they are needed most. The enabling infrastructure in the form of virtual servers, virtual fabrics, and virtual storage has the added benefit of reducing the physical footprint of IT and its accompanying energy consumption. The new data center architecture thus reconciles the conflict between supply and demand by requiring less energy while supplying higher levels of IT productivity.
Chapter 2: Running Hot and Cold

Taking the heat

Dissipating the heat generated by IT equipment is a persistent problem for data center operations. Cooling systems alone can account for one third to one half of data center energy consumption. Over-provisioning the thermal plant to accommodate current and future requirements leads to higher operational costs. Under-provisioning the thermal plant to reduce costs can negatively impact IT equipment, increase the risk of equipment outages, and disrupt ongoing business operations. Resolving heat generation issues therefore requires a multi-pronged approach that addresses (1) the source of heat from IT equipment, (2) the amount and type of cooling plant infrastructure required, and (3) the efficiency of air flow around equipment on the data center floor to remove heat.

Energy, Power, and Heat

In common usage, energy is the capacity of a physical system to do work and is expressed in standardized units of joules (one joule being the work done by a force of one newton moving one meter along the line of direction of the force). Power, by contrast, is the rate at which energy is expended over time, with one watt of power equal to one joule of energy per second. The power of a 100-watt light bulb, for example, is equivalent to 100 joules of energy per second, and the amount of energy consumed by the bulb over an hour would be 360,000 joules.

Because electrical systems often consume thousands of watts, the amount of energy consumed is expressed in kilowatt hours (kWh), and in fact the kilowatt hour is the preferred unit used by power companies for billing purposes. A system that requires 10,000 watts of power would thus consume and be billed for 10 kWh of energy for each hour of operation, or 240 kWh per day, or 87,600 kWh per year. The typical American household consumes 10,656 kWh per year.
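The power-to-energy arithmetic above is easy to sketch; the 10,000-watt load is the text's example, while the utility rate is a hypothetical figure added only to show how the bill accumulates:

```python
# Energy billed for a constant electrical load, in kilowatt hours.
def energy_kwh(load_watts: float, hours: float) -> float:
    """kWh consumed by a constant load over the given number of hours."""
    return load_watts / 1000 * hours

LOAD = 10_000  # watts, the example load from the text
print(energy_kwh(LOAD, 1))     # 10.0 kWh per hour
print(energy_kwh(LOAD, 24))    # 240.0 kWh per day
print(energy_kwh(LOAD, 8760))  # 87,600 kWh per year (365 days)

RATE = 0.10  # assumed $/kWh, illustrative only
print(energy_kwh(LOAD, 8760) * RATE)  # ~$8,760/year for one 10 kW load
```

Multiply that by hundreds of such loads, plus the cooling overhead discussed below, and the scale of the data center power bill becomes clear.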
Medium and large IT hardware products are typically in the 1000+ watt range. Fibre Channel directors, for example, range from as little as 1300 watts (Brocade) to more than 3000 watts (competition). A large storage array can be in the 6400-watt range. Although low-end servers may be rated at ~200 watts, higher-end enterprise servers can draw as much as 8000 watts. With the high population of servers and the requisite storage infrastructure to support them in the data center, plus the typical 2x factor for the cooling plant energy draw, it is not difficult to understand why data center power bills keep escalating. According to the Environmental Protection Agency (EPA), data centers in the US collectively consume the energy equivalent of approximately 6 million households, or about 61 billion kWh per year.

Energy consumption generates heat. While power draw is expressed in watts, heat dissipation is expressed in BTU (British Thermal Units) per hour. One watt is approximately 3.4 BTU/h. Because BTUs quickly add up to tens or hundreds of thousands per hour in complex systems, heat can also be expressed in therms, with one therm equal to 100,000 BTU. Your household heating bill, for example, often lists therms averaged per day or billing period.

Environmental Parameters

Because data centers are closed environments, ambient temperature and humidity must also be considered. The ASHRAE Thermal Guidelines for Data Processing Environments provides best practices for maintaining proper ambient conditions for operating IT equipment within data centers. Data centers typically run fairly cool at about 68 degrees Fahrenheit and 50% relative humidity. While legacy mainframe systems did require considerable cooling to remain within operational norms, open systems IT equipment is less demanding. Consequently,
Consequently,there has been a more recent trend to run data centers at higherambient temperatures, sometimes disturbingly referred to as“Speedo” mode data center operation. Although ASHRAEs guidelinespresent fairly broad allowable ranges of operation (50 to 90 degrees,20 to 80% relative humidity), recommended ranges are still somewhatnarrow (68 to 77 degrees, 40 to 55% relative humidity).10 The New Data Center
Rationalizing IT Equipment Distribution

Servers and network equipment are typically configured in standard 19" (wide) racks, and rack enclosures, in turn, are arranged for accessibility for cabling and servicing. Increasingly, however, the floor plan for data center equipment distribution must also accommodate air flow for equipment cooling. This requires that individual units be mounted in a rack for consistent air flow direction (all exhaust to the rear or all exhaust to the front) and that the rows of racks be arranged to exhaust into a common space, called a hot aisle/cold aisle plan, as shown in Figure 3.

[Figure 3: alternating cold and hot aisles between equipment rows, with air flowing from each cold aisle through the racks into a shared hot aisle.]

Figure 3. Hot aisle/cold aisle equipment floor plan.

A hot aisle/cold aisle floor plan provides greater cooling efficiency by directing cold-to-hot air flow for each equipment row into a common aisle. Each cold aisle feeds cool air to two equipment rows while each hot aisle collects exhaust from two equipment rows, thus enabling maximum benefit from the hot/cold circulation infrastructure. Even greater efficiency is achieved by deploying equipment with variable-speed fans.
[Figure 4: two server racks compared — with constant speed fans, equipment at the bottom of the rack runs cooler than equipment above it; with variable speed fans, cooling is more even across the rack.]

Figure 4. Variable speed fans enable more efficient distribution of cooling.

Variable speed fans increase or decrease their spin rate in response to changes in equipment temperature. As shown in Figure 4, cold air flow into equipment racks with constant speed fans favors the hardware mounted in the lower equipment slots, nearer to the cold air feed. Equipment mounted in the upper slots is heated by its own power draw as well as by the heat exhaust from the lower tiers. Use of variable speed fans, by contrast, enables each unit to selectively apply cooling as needed, with more even utilization of cooling throughout the equipment rack.

Research done by Michael Patterson and Annabelle Pratt of Intel leverages the hot aisle/cold aisle floor plan approach to create a metric for measuring energy consumption of IT equipment. By convention, the energy consumption of a unit of IT hardware can be measured physically via metering equipment or approximated via the manufacturer's stated power rating (in watts or BTUs).

As shown in Figure 5, Patterson and Pratt incorporate both the energy draw of the equipment mounted within a rack and the associated hot aisle/cold aisle real estate required to cool the entire rack. This "work cell" unit thus provides a more accurate description of what is actually required to power and cool IT equipment and, supposing the equipment (for example, servers) is uniform across a row, provides a useful multiplier for calculating total energy consumption of an entire row of mounted hardware.
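A minimal sketch of the work cell idea as a row-level multiplier. The rack wattage and the cooling overhead factor below are illustrative assumptions, not figures from Patterson and Pratt:

```python
# Work cell: a rack's IT power draw plus the cooling attributable to that
# rack's share of the adjacent hot/cold aisles. With uniform racks, the
# work cell becomes a simple multiplier across a row.

def work_cell_watts(rack_it_watts: float, cooling_overhead: float) -> float:
    """Total power for one rack's work cell.

    cooling_overhead is the assumed cooling energy per watt of IT load
    (e.g. 0.5 means an extra half watt of cooling per watt of IT)."""
    return rack_it_watts * (1 + cooling_overhead)

rack_watts = 20 * 300  # hypothetical: 20 servers at 300 W each
cell = work_cell_watts(rack_watts, cooling_overhead=0.5)
print(cell)       # 9,000 W per work cell
print(10 * cell)  # 90,000 W for a uniform row of 10 identical racks
```

The value of the metric is that it charges each rack for the cooling it actually demands, rather than averaging the cooling plant across the whole floor.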
[Figure 5: a work cell spans an equipment rack plus its share of the adjacent cold aisle and hot aisle.]

Figure 5. The concept of the work cell incorporates both equipment power draw and requisite cooling.

When energy was plentiful and cheap, it was often easy to overlook the basic best practices for data center hardware deployment and the simple remedies that correct inefficient air flow. Blanking plates, for example, are used to cover unused rack or cabinet slots and thus enforce more efficient airflow within an individual rack. Blanking plates, however, are often ignored, especially when equipment is frequently moved or upgraded. Likewise, it is not uncommon to find decommissioned equipment still racked up (and sometimes actually powered on). Racked but unused equipment can disrupt air flow within a cabinet and become a trap for heat generated by active hardware. In raised-floor data centers, decommissioned cabling can disrupt cold air circulation, and unsealed cable cutouts can result in continuous and fruitless loss of cooling. Because the cooling plant itself represents such a significant share of data center energy use, even seemingly minor issues can quickly add up to major inefficiencies and higher energy bills.
Economizers

Traditionally, data center cooling has been provided by large air conditioning systems (computer room air conditioning, or CRAC) that used CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refrigerants. Since both CFCs and HCFCs are ozone depleting, current systems use ozone-friendly refrigerants to minimize broader environmental impact. Conventional CRAC systems, however, consume significant amounts of energy and may account for nearly half of a data center power bill. In addition, these systems are typically over-provisioned to accommodate data center growth and consequently incur a higher operational expense than is justified for the required cooling capacity.

For new data centers in temperate or colder latitudes, economizers can provide part or all of the cooling requirement. Economizer technology dates to the mid-1800s but has seen a revival in response to rising energy costs. As shown in Figure 6, an economizer (in this case, a dry-side economizer) is essentially a heat exchanger that leverages cooler outside ambient air temperature to cool the equipment racks.

[Figure 6: outside air passes through a damper, particulate filter, and humidifier/dehumidifier before circulating across the equipment racks; a return path exhausts the warmed air.]

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling.

Use of outside air has its inherent problems. Data center equipment is sensitive to particulates that can build up on circuit boards and contribute to heating issues. An economizer may therefore incorporate particulate filters to scrub the external air before the air flow enters the data center. In addition, external air may be too humid or too dry for data center use. Integrated humidifiers and dehumidifiers can condition the air flow to meet operational specifications for data center use. As stated above, ASHRAE recommends 40 to 55% relative humidity.
Dry-side economizers depend on the external air supply temperature being sufficiently lower than that of the data center itself, and this may fluctuate seasonally. Wet-side economizers thus include cooling towers as part of the design to further condition the air supply for data center use. Cooling towers present their own complications, especially in more arid geographies where water resources are expensive and scarce. Ideally, economizers should leverage as many recyclable resources as possible to accomplish the task of cooling while reducing any collateral environmental impact.

Monitoring the Data Center Environment

Because vendor wattage and BTU specifications may assume maximum load conditions, using data sheet specifications or equipment label declarations does not provide an accurate basis for calculating equipment power draw or heat dissipation. An objective multi-point monitoring system for measuring heat and humidity throughout the data center is really the only means to observe and proactively respond to changes in the environment.

A number of monitoring options are available today. For example, some vendors are incorporating temperature probes into their equipment design to provide continuous reporting of heat levels via management software. Some solutions provide rack-mountable systems that include both temperature and humidity probes and monitoring through a Web interface. Fujitsu offers a fiber optic system that leverages the effect of temperature on light propagation to provide a multi-point probe using a single fiber optic cable strung throughout equipment racks. Accuracy is reported to be within a half degree Celsius and within 1 meter of the measuring point. In addition,
In addition,new monitoring software products can render a three-dimensionalview of temperature distribution across the entire data center, analo-gous to an infrared photo of a heat source.Although monitoring systems add cost to data center design, they areinvaluable diagnostic tools for fine-tuning airflow and equipmentplacement to maximize cooling and keeping power and cooling coststo a minimum. Many monitoring systems can be retrofitted to existingdata center plants so that even older sites can leverage newtechnologies.The New Data Center 15
Chapter 3: Doing More with Less

Leveraging virtualization and blade server technologies

Of the three primary components of an IT data center infrastructure (servers, storage, and network), servers are by far the most populous and have the highest energy impact. Servers represent approximately half of the IT equipment energy cost and about a quarter of the total data center power bill. Server technology has therefore been a prime candidate for regulation via EPA Energy Star and other market-driven initiatives and has undergone a transformation in both hardware and software. Server virtualization and blade server design, for example, are distinct technologies fulfilling different goals, but together they have a multiplying effect on server processing performance and energy efficiency. In addition, multi-core processors and multi-processor motherboards have dramatically increased server processing power in a more compact footprint.

VMs Reborn

The concept of virtual machines dates back to mainframe days. To maximize the benefit of mainframe processing, a single physical system was logically partitioned into independent virtual machines. Each VM ran its own operating system and applications in isolation, although the processor and peripherals could be shared. In today's usage, VMs typically run on open systems servers, and although direct-connect storage is possible, shared storage on a SAN or NAS is the norm. Unlike previous mainframe implementations, today's virtualization software can support dozens of VMs on a single physical server. Typically, 10 or fewer VM instances are run per physical platform, although more powerful server platforms can support 20 or more VMs.
The benefits of server virtualization are as obvious as the potential risks. Running 10 VMs on a single server platform eliminates the need for 9 additional servers, with their associated cost, components, and accompanying power draw and heat dissipation. For data centers with hundreds or thousands of servers, virtualization offers an immediate solution to server sprawl and ever-increasing costs.

Like any virtualization strategy, however, the logical separation of VMs must be maintained, and access to server memory and external peripherals must be negotiated to prevent conflicts or errors. VMs on a single platform are hosted by a hypervisor layer, which runs either directly (Type 1 or native) on the server hardware or on top of (Type 2 or hosted) the conventional operating system already running on the server hardware.

Figure 7. A native or Type 1 hypervisor.

In a native Type 1 virtualization implementation, the hypervisor runs directly on the server hardware, as shown in Figure 7. This type of hypervisor must therefore support all CPU, memory, network, and storage I/O traffic directly, without the assistance of an underlying operating system. The hypervisor is consequently written to a specific CPU architecture (for open systems, typically an Intel x86 design) and associated I/O. Clearly, one of the benefits of native hypervisors is that overall latency can be minimized as individual VMs perform the normal functions required by their applications. With the hypervisor directly managing hardware resources, it is also less vulnerable over time to code changes or updates that might be required if an underlying OS were used.
Figure 8. A hosted or Type 2 hypervisor.

As shown in Figure 8, a hosted or Type 2 server virtualization solution is installed on top of the host operating system. The advantage of this approach is that virtualization can be implemented on existing servers to more fully leverage existing processing power and support more applications in the same footprint. Given that the host OS and hypervisor layer insert additional steps between the VMs and the lower-level hardware, a hosted implementation incurs more latency than native hypervisors. On the other hand, hosted hypervisors can readily support applications with moderate performance requirements and still achieve the objective of consolidating compute resources.

In both native and hosted hypervisor environments, the hypervisor oversees the creation and activity of its VMs to ensure that each VM has its requisite resources and does not interfere with the activity of other VMs. Without proper management of shared memory tables by the hypervisor, for example, one VM instance could easily crash another. The hypervisor must also manage the software traps created to intercept hardware calls made by the guest OS and provide the appropriate emulation of normal OS hardware access and I/O. Because the hypervisor is now managing multiple virtual computers, secure access to the hypervisor itself must be maintained. Efforts to standardize server virtualization management for stable and secure operation are being led by the Distributed Management Task Force (DMTF) through its Virtualization Management Initiative (VMAN) and through collaborative efforts by virtualization vendors and partner companies.
Server virtualization software is now available for a variety of CPUs, hardware platforms, and operating systems. Adoption for mid-tier, moderate-performance applications has been enabled by the availability of economical dual-core CPUs and commodity rack-mount servers. High-performance requirements can be met with multi-CPU platforms optimized for shared processing. Although server virtualization has steadily been gaining ground in large data centers, there has been some reluctance to commit the most mission-critical applications to VM implementations. Consequently, mid-tier applications have been first in line; as these deployments become more pervasive and proven, mission-critical applications will follow.

In addition to providing a viable means to consolidate server hardware and reduce energy costs, server virtualization enables a degree of mobility unachievable via conventional server management. Because the virtual machine is detached from the underlying physical processing, memory, and I/O hardware, it is possible to migrate a virtual machine from one hardware platform to another non-disruptively. If, for example, an application's performance is beginning to exceed the capabilities of its shared physical host, it can be migrated onto a less busy host or onto one that supports faster CPUs and I/O. This application agility, initially just an unintended by-product of migrating virtual machines, has become one of the compelling reasons to invest in a virtual server solution. With ever-changing business, workload, and application priorities, the ability to quickly shift processing resources where they are most needed is a competitive business advantage.

As discussed in more detail below, virtual machine mobility creates new opportunities for automating application distribution within the virtual server pool and implementing policy-based procedures to enforce priority handling of select applications over others.
Communication between the virtualization manager and the fabric via APIs, for example, enables a proactive response to potential traffic congestion or changes in the state of the network infrastructure. This further simplifies management of application resources and ensures higher availability.
Blade Server Architecture

Server consolidation in the new data center can also be achieved by deploying blade server frames. The successful development of blade server architecture has depended on the steady increase in CPU processing power and on solving basic problems around shared power, cooling, memory, network, storage, and I/O resources. Although blade servers are commonly associated with server virtualization, these are distinct technologies that have a multiplying benefit when combined.

Blade server design strips away all but the most essential dedicated components from the motherboard and provides shared assets either as auxiliary special-function blades or as part of the blade chassis hardware. Consequently, the power consumption of each blade server is dramatically reduced, while power supplies, fans, and other elements are shared with greater efficiency. A standard data center rack, for example, can accommodate 42 conventional 1U rack-mount servers, but 128 or more blade servers in the same space. A single rack of blade servers can therefore house the equivalent of 3 racks of conventional servers; and although the cooling requirement for a fully populated blade server rack may be greater than for a conventional server rack, it is still less than for the equivalent 3 racks that would otherwise be required.

As shown in Figure 9, a blade server architecture offloads all components that can be supplied by the chassis or by supporting specialized blades. The blade server itself is reduced to one or more CPUs and the requisite auxiliary logic. The degree of component offload and the availability of specialized blades vary from vendor to vendor, but the net result is essentially the same: more processing power can be packed into a much smaller space, and compute resources can be managed more efficiently.
Figure 9. A blade server architecture centralizes shared resources (power, fans, memory, and network and storage I/O) while reducing individual blade server elements.
By significantly reducing the number of discrete components per processing unit, the blade server architecture achieves higher efficiencies in manufacturing, reduced consumption of resources, streamlined design, and reduced overall costs of provisioning and administration. The unique value-add of each vendor's offering may leverage hot-swap capability, variable-speed fans, variable-speed CPUs, shared memory blades, and consolidated network access. Brocade has long worked with the major blade server manufacturers to provide optimized Access Gateway and switch blades to centralize storage network capability; the specific features of these products are discussed in the next section.

Although consolidation ratios of 3:1 are impressive, much higher server consolidation is achieved when blade servers are combined with server virtualization software. A fully populated data center rack of 128 blade servers, for example, could support 10 or more virtual machines per blade for a total of 1280 virtual servers. That would be the equivalent of 30 racks (at 42 servers per rack) of conventional 1U rack-mount servers running one OS instance per server. From an energy savings standpoint, that represents the elimination of over 1000 power supplies, fan units, network adapters, and other elements that contribute to higher data center power bills and cooling load.

As a 2009 survey by blade.org shows, adoption of blade server technology has been increasing in both large data centers and small/medium business (SMB) environments. Slightly less than half of the data center respondents and approximately a third of SMB operations have already implemented blade servers, and over a third in both categories have deployment plans in place.
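The consolidation arithmetic above is easy to check. A minimal sketch using the chapter's own example figures (42 1U servers per rack, 128 blades per rack, 10 VMs per blade):

```python
# Consolidation math from the chapter's example figures.
rack_1u_servers = 42    # conventional 1U servers per standard rack
blades_per_rack = 128   # blade servers in the same rack footprint
vms_per_blade = 10      # modest virtualization ratio

virtual_servers = blades_per_rack * vms_per_blade
print(virtual_servers)             # 1280 virtual servers in one rack

# Equivalent conventional racks at one OS instance per 1U server
equivalent_racks = virtual_servers / rack_1u_servers
print(round(equivalent_racks))     # roughly 30 racks
```

Note that 1280 / 42 is about 30.5, so the book's "30 racks" figure is a rounded equivalence rather than an exact one.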
With limited data center real estate and increasing power costs squeezing data center budgets, the combination of blade servers and server virtualization is fairly easy to justify.

Brocade Server Virtualization Solutions

Whether on standalone servers or blade server frames, implementing server virtualization has both upstream (client) and downstream (storage) impact in the data center. Because Brocade offers a full spectrum of products spanning LAN, WAN, and SAN, it can help ensure that a server virtualization deployment proactively addresses the new requirements of both client and storage access. The value of a server virtualization solution is thus amplified when combined with Brocade's network technology.
To maximize the benefits of network connectivity in a virtualized server environment, Brocade has worked with the major server virtualization solutions and managers to deliver high performance, high availability, security, energy efficiency, and streamlined management end to end. The following Brocade solutions can enhance a server virtualization deployment and help eliminate potential bottlenecks.

Brocade High-Performance 8 Gbps HBAs

In a conventional server, a host bus adapter (HBA) provides storage access for a single operating system and its applications. In a virtual server configuration, the HBA may be supporting 10 to 20 OS instances, each running its own application. High performance is therefore essential to enable multiple virtual machines to share HBA ports without congestion. The Brocade 815 (single port) and 825 (dual port, shown in Figure 10) HBAs provide 8 Gbps bandwidth and 500,000 I/Os per second (IOPS) per port to ensure maximum throughput for shared virtualized connectivity. Brocade N_Port Trunking enables the 825 to deliver an unprecedented 16 Gbps of bandwidth (3200 MBps) and one million IOPS. This exceptional performance helps ensure that server virtualization configurations can expand over time to accommodate additional virtual machines without impacting the continuous operation of existing applications.

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and one million IOPS.
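The aggregate figures follow from simple arithmetic. A sketch of the assumed math: 8 Gbps Fibre Channel is conventionally rated at roughly 800 MBps of data throughput per direction, so the 3200 MBps figure appears to count two trunked ports running full duplex. This is an inference consistent with the text, not a Brocade-published formula.

```python
# Rough throughput arithmetic for a dual-port 8 Gbps HBA with N_Port Trunking.
mbps_per_port_one_way = 800   # ~800 MBps usable per direction on 8G Fibre Channel
ports = 2                     # the Brocade 825 is dual port
iops_per_port = 500_000

trunked_bandwidth_gbps = 8 * ports                    # 16 Gbps aggregate link rate
full_duplex_mbps = mbps_per_port_one_way * ports * 2  # both directions, both ports
total_iops = iops_per_port * ports

print(trunked_bandwidth_gbps, full_duplex_mbps, total_iops)
```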
The Brocade 815 and 825 HBAs are further optimized for server virtualization connectivity by supporting advanced intelligent services that enable end-to-end visibility and management. As discussed below, Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV), and integrated Quality of Service (QoS) provide powerful tools for simplifying virtual machine deployments and providing proactive alerts directly to server virtualization managers.

Brocade 8 Gbps Switch and Director Ports

In virtual server environments, the need for speed does not end at the network or storage port. Because more traffic is now traversing fewer physical links, building high-performance network infrastructures is a prerequisite for maintaining non-disruptive, high-performance virtual machine traffic flows. Brocade's support of 8 Gbps ports on both switch and enterprise-class platforms enables customers to build high-performance, non-blocking storage fabrics that can scale from small VM configurations to enterprise-class data center deployments. Designing high-performance fabrics ensures that applications running on virtual machines are not exposed to bandwidth issues and can accommodate the high-volume traffic patterns required for data backup and other applications.

Brocade Virtual Machine SAN Boot

For both standalone physical servers and blade server environments, the ability to boot from the storage network greatly simplifies virtual machine deployment and the migration of VM instances from one server to another. As shown in Figure 11, SAN boot centralizes management of boot images and eliminates the need for local storage on each physical server platform. When virtual machines are migrated from one hardware platform to another, the boot images can be readily accessed across the SAN via Brocade HBAs.
Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts.

Brocade 815 and 825 HBAs provide the ability to automatically retrieve boot LUN parameters from a centralized fabric-based registry. This eliminates the error-prone manual host-based configuration scheme required by other HBA vendors. Brocade's SAN boot and boot LUN discovery facilitate migration of virtual machines from host to host, remove the need for local storage, and improve reliability and performance.

Brocade N_Port ID Virtualization for Workload Optimization

In a virtual server environment, the individual virtual machine instances are unaware of physical ports, since the underlying hardware has been abstracted by the hypervisor. This creates potential problems for identifying traffic flows from virtual machines through shared physical ports. NPIV is an industry standard that enables multiple Fibre Channel addresses to share a single physical Fibre Channel port. In a server virtualization environment, NPIV allows each virtual machine instance to have a unique World Wide Name (WWN), or virtual HBA port. This in turn provides a level of granularity for identifying each VM attached to the fabric for end-to-end monitoring, accounting, and configuration. Because the WWN is bound to an individual virtual machine, the WWN follows the VM when it is migrated to another platform. In addition, NPIV creates the linkage required for advanced services such as QoS, security, and zoning, as discussed in the next section.
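The key NPIV property, a per-VM identity that survives migration, can be modeled in a few lines. This is an illustrative toy model, not an API; the VM name, host names, and WWN value are invented:

```python
# Toy model of NPIV: each VM owns a virtual WWN; migrating the VM carries
# the WWN to the new host's physical port, so fabric-side policy (zoning,
# QoS, monitoring) keyed on that WWN follows the VM automatically.
class VirtualMachine:
    def __init__(self, name, virtual_wwn):
        self.name = name
        self.virtual_wwn = virtual_wwn  # stable identity seen by the fabric
        self.host = None

    def migrate(self, host):
        """Move the VM to a new physical host; its virtual WWN is unchanged."""
        self.host = host

vm = VirtualMachine("sql01", "10:00:00:05:1e:aa:bb:cc")  # example WWN
vm.migrate("host-a")
wwn_before = vm.virtual_wwn
vm.migrate("host-b")
assert vm.virtual_wwn == wwn_before  # identity is stable across hosts
print(vm.host, vm.virtual_wwn)
```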
Configuring Single Initiator/Target Zoning

Brocade has been a pioneer in fabric-based zoning to segregate fabric traffic and restrict visibility of storage resources to authorized hosts only. As a recognized best practice for server-to-storage configuration, NPIV combined with single initiator/target zoning ensures that individual virtual machines have access only to their designated storage assets. This feature minimizes configuration errors during VM migration and extends the management visibility of fabric connections to specific virtual machines.

Brocade End-to-End Quality of Service

The combination of NPIV and zoning functionality on Brocade HBAs and switches provides the foundation for higher-level fabric services, including end-to-end QoS. Because the traffic flows from each virtual machine can be identified by virtual WWN and segregated via zoning, each can be assigned a delivery priority (low, medium, or high) that is enforced fabric-wide from the host connection to the storage port, as shown in Figure 12.

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. (Virtual Channels technology enables QoS at the ASIC level in the HBA; the default priority is Medium, and frame-level interleaving of outbound data maximizes initiator link utilization.)

While some applications running on virtual machines are logical candidates for QoS prioritization (for example, SQL Server), Brocade's Top Talkers management feature can help identify which VM applications may require priority treatment. Because Brocade end-to-end QoS is ultimately tied to the virtual machine's virtualized WWN address, the QoS assignment follows the VM if it is migrated from one hardware platform
to another. This feature ensures that applications enjoy non-disruptive data access despite adds, moves, and changes to the downstream environment, and enables administrators to more easily fulfill client service-level agreements (SLAs).

Brocade LAN and SAN Security

Most companies are now subject to government regulations that mandate the protection and security of customer data transactions. Planning a virtualization deployment must therefore also account for basic security mechanisms for both client and storage access. Brocade offers a broad spectrum of security solutions, including LAN- and WAN-based technologies as well as storage-specific SAN security features. For example, Brocade SecureIron products, shown in Figure 13, provide firewall traffic management and LAN security to safeguard access from clients to virtual hosts on the IP network.

Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.

Brocade SAN security features include authentication via access control lists (ACLs) and role-based access control (RBAC), as well as security mechanisms for authenticating the connectivity of switch ports and devices to fabrics. In addition, the Brocade Encryption Switch, shown in Figure 14, and the FS8-18 Encryption Blade for the Brocade DCX Backbone platform provide high-performance (96 Gbps) encryption for data-at-rest. Brocade's security environment thus protects data-in-flight from client to virtual host as well as data written to disk across the SAN.

Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.
Brocade Access Gateway for Blade Frames

Server virtualization software can be installed on conventional server platforms or blade server frames. Blade server form factors offer the highest density for consolidating IT processing in the data center and leverage shared resources across the backplane. To optimize storage access from blade server frames, Brocade has partnered with blade server providers to create high-performance, high-availability Access Gateway blades for Fibre Channel connectivity to the SAN. Brocade Access Gateway technology leverages NPIV to simplify virtual machine addressing and F_Port Trunking for high utilization and automatic link failover. By integrating SAN connectivity into a virtualized blade server chassis, Brocade helps to streamline deployment and simplify management while reducing overall costs.

The Energy-Efficient Brocade DCX Backbone Platform for Consolidation

With 4x the performance and over 10x the energy efficiency of other SAN directors, the Brocade DCX delivers the high performance required for virtual server implementations and can accommodate growth in VM environments in a compact footprint. The Brocade DCX supports 384 ports of 8 Gbps for a total of 3 Tbps of chassis bandwidth. Ultra-high-speed inter-chassis links (ICLs) allow further expansion of the SAN core for scaling to meet the requirements of very large server virtualization deployments. The Brocade DCX is also designed to non-disruptively integrate Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) for future virtual server connectivity.
The Brocade DCX is also available in a 192-port configuration (the Brocade DCX-4S) to support medium VM configurations while providing the same high availability, performance, and advanced SAN services.

The Brocade DCX's Adaptive Networking services for QoS, ingress rate limiting, congestion detection, and management ensure that traffic streams from virtual machines are proactively managed throughout the fabric and accommodate the varying requirements of upper-layer business applications. Adaptive Networking services provide greater agility in managing application workloads as they migrate between physical servers.
Enhanced and Secure Client Access with Brocade LAN Solutions

Brocade offers a full line of sophisticated LAN switches and routers for Ethernet and IP traffic, from Layer 2/3 to Layer 4-7 application switching. This product suite is the natural complement to Brocade's robust SAN products and enables customers to build full-featured, secure networks end to end. As with the Brocade DCX architecture for SANs, the Brocade BigIron RX, shown in Figure 15, and FastIron SuperX switches incorporate best-in-class functionality and low power consumption to deliver high-performance core switching for data center LAN backbones.

Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors.

Brocade edge switches with Power over Ethernet (PoE) support enable customers to integrate a wide variety of IP business applications, including voice over IP (VoIP), wireless access points, and security monitoring. Brocade SecureIron switches bring advanced security protection for client access into virtualized server clusters, while Brocade ServerIron switches provide Layer 4-7 application switching and load balancing. Brocade LAN solutions provide up to 10 Gbps of throughput per port and so can accommodate the higher traffic loads typical of virtual machine environments.

Brocade Industry-Standard SMI-S Monitoring

Virtual server deployments dramatically increase the number of data flows and the requisite bandwidth per physical server or blade server. Because server virtualization platforms can support dynamic migration of application workloads between physical servers, complex traffic patterns are created and unexpected congestion can occur. This complicates server management and can impact performance and availability. Brocade can proactively address these issues by integrating communication between Brocade intelligent fabric services and VM
managers. As the fabric monitors potential congestion on a per-VM basis, it can proactively alert virtual machine management that a workload should be migrated to a less utilized physical link. Because this diagnostic functionality is fine-tuned to the workflows of each VM, changes can be restricted to only the affected VM instances.

Open management standards such as the Storage Management Initiative (SMI) are the appropriate tools for integrating virtualization management platforms with fabric management services. As one of the original contributors to the SMI-S specification, Brocade is uniquely positioned to provide a truly open systems solution end to end. In addition, configuration management, capacity planning, SLA policies, and virtual machine provisioning can be integrated with Brocade fabric services such as Adaptive Networking, encryption, and security policies.

Brocade Professional Services

Even large companies that want to take advantage of the cost savings, consolidation, and lower energy consumption characteristic of server virtualization technology may not have the staff or in-house expertise to plan and implement a server virtualization project. Many organizations fail to consider the overall impact of virtualization on the data center, which in turn can lead to degraded application performance, inadequate data protection, and increased management complexity. Because Brocade technology is ubiquitous in the vast majority of data centers worldwide and Brocade has years of experience in the most mission-critical IT environments, it can provide a wealth of practical knowledge and insight into the key issues surrounding client-to-server and server-to-storage data access. Brocade Professional Services has helped hundreds of customers upgrade to virtualized server infrastructures and provides a spectrum of services, from virtual server assessments, audits, and planning to end-to-end deployment and operation.
A well-conceived and executed virtualization strategy can ensure that a virtual machine deployment achieves its budgetary goals and fulfills the prime directive: to do far more with much less.
FCoE and Server Virtualization

Fibre Channel over Ethernet is an optional storage network interconnect for both conventional and virtualized server environments. As a means to encapsulate Fibre Channel frames in Ethernet, FCoE enables a simplified cabling solution, reducing the number of network and storage interfaces per server attachment. The combined network and storage connection is provided by a converged network adapter (CNA), as shown in Figure 16.

Figure 16. FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access.

Given the more rigorous requirements for storage data handling and performance, FCoE is not intended to run on conventional Ethernet networks. In order to replicate the low latency, deterministic delivery, and high performance of traditional Fibre Channel, FCoE is best supported on a new, hardened form of Ethernet known as Converged Enhanced Ethernet (CEE), or Data Center Bridging (DCB), at 10 Gbps. Without the enhancements of DCB, standard Ethernet is too unreliable to support high-performance block storage transactions. Unlike conventional Ethernet, DCB provides much more robust congestion management and the high-availability features characteristic of data center Fibre Channel.

DCB replicates Fibre Channel's buffer-to-buffer credit flow control functionality via priority-based flow control (PFC) using 802.1Qbb pause frames. Instead of buffer credits, pause quanta are used to restrict traffic for a given period to relieve network congestion and avoid dropped frames. To accommodate the larger payload of Fibre Channel frames, DCB-enabled switches must also support jumbo frames so that entire Fibre Channel frames can be encapsulated in each Ethernet transmission.
Other standards initiatives, such as TRILL (Transparent Interconnection of Lots of Links), are being developed to enable multipathing through DCB-switched infrastructures.
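The jumbo-frame requirement above follows directly from frame-size arithmetic. The sketch below uses commonly cited approximate sizes; treat the exact per-field overhead bytes as illustrative rather than a byte-accurate rendering of the FC-BB-5 encapsulation:

```python
# Why FCoE needs jumbo (or at least mini-jumbo) Ethernet frames: a
# maximum-size Fibre Channel frame does not fit in a standard Ethernet frame.
FC_HEADER = 24          # Fibre Channel frame header
FC_MAX_PAYLOAD = 2112   # maximum FC data field
FC_CRC = 4

fc_frame = FC_HEADER + FC_MAX_PAYLOAD + FC_CRC  # 2140 bytes

# Approximate FCoE overhead: Ethernet header + VLAN tag + FCoE
# header/SOF + EOF/padding + Ethernet FCS.
fcoe_overhead = 14 + 4 + 14 + 4 + 4
fcoe_frame = fc_frame + fcoe_overhead

STANDARD_ETHERNET_MAX = 1518  # maximum standard (non-jumbo) Ethernet frame
print(fc_frame, fcoe_frame, fcoe_frame > STANDARD_ETHERNET_MAX)
```

Since the encapsulated frame exceeds 1518 bytes, every hop in the FCoE path must accept larger-than-standard Ethernet frames.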
FCoE is not a replacement for conventional Fibre Channel, but an extension of Fibre Channel over a different link-layer transport. Enabling an enhanced Ethernet to carry Fibre Channel storage data as well as other data types (for example, file data, Remote Direct Memory Access (RDMA), LAN traffic, and VoIP) allows customers to simplify server connectivity and still retain the performance and reliability required for storage transactions. Instead of provisioning a server with dual-redundant Ethernet and Fibre Channel ports (a total of four ports), servers can be configured with two DCB-enabled 10 Gigabit Ethernet (GbE) ports. For blade server installations in particular, this reduction in the number of interfaces greatly simplifies deployment and ongoing management of the cable plant.

The FCoE initiative has been developed in the ANSI T11 Technical Committee, which deals with FC-specific issues, and is included in the new Fibre Channel Backbone Generation 5 (FC-BB-5) specification. Because FCoE takes advantage of further enhancements to Ethernet, close collaboration has been required between ANSI T11 and the Institute of Electrical and Electronics Engineers (IEEE), which governs Ethernet and the new DCB standards.

Storage access is provided by an FCoE-capable blade in a director chassis (end of row) or by a dedicated FCoE switch (top of rack), as shown in Figure 17.

Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN.
In this example, the client, peer-to-peer, and block storage traffic share a common 10 Gbps network interface. The FCoE switch acts as a Fibre Channel Forwarder (FCF) and converts FCoE frames into conventional Fibre Channel frames for redirection to the fabric. Peer-to-peer or clustering traffic between servers in the same rack is simply switched at Layer 2 or 3, and client traffic is redirected via the LAN.

Like many new technologies, FCoE is often overhyped as a cure-all for pervasive IT ills. The benefit of streamlining server connectivity, however, should be balanced against the cost of deployment and the availability of value-added features that simplify management and administration. As an original contributor to the FCoE specification, Brocade has designed FCoE products that integrate with existing infrastructures so that the advantages of FCoE can be realized without adversely impacting other operations. Brocade offers the 1010 (single port) and 1020 (dual port) CNAs, shown in Figure 18, at 10 Gbps DCB per port. From the host standpoint, the FCoE functionality appears as a conventional Fibre Channel HBA.

Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment.

The Brocade 8000 Switch provides top-of-rack connectivity for servers with 24 ports of 10 Gbps DCB and 8 ports of 8 Gbps Fibre Channel. The Fibre Channel ports support trunking for a total of 64 Gbps of bandwidth, while the 10 Gbps DCB ports support standard Link Aggregation Control Protocol (LACP). Fibre Channel connectivity can be made directly to storage end-devices or to existing fabrics, enabling greater flexibility in allocating storage assets to hosts.
Chapter 4: Into the Pool

Transcending physical asset management with storage virtualization

Server virtualization achieves greater asset utilization by supporting multiple instances of discrete operating systems and applications on a single hardware platform. Storage virtualization, by contrast, provides greater asset utilization by treating multiple physical platforms as a single virtual asset or pool. Consequently, although storage virtualization does not provide a comparable direct benefit in terms of reduced footprint or energy consumption in the data center, it does enable a substantial benefit in productive use of existing storage capacity. This in turn often reduces the need to deploy new storage arrays, and so provides an indirect benefit in terms of continued acquisition costs, deployment, management, and energy consumption.

Optimizing Storage Capacity Utilization in the Data Center

Storage administrators typically manage multiple storage arrays, often from different vendors, each with its own unique characteristics. Because servers are bound to Logical Unit Numbers (LUNs) in specific storage arrays, high-volume applications may suffer from over-utilization of storage capacity while low-volume applications under-utilize their storage targets.
Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays.

As shown in Figure 19, the uneven utilization of storage capacity across multiple arrays puts some applications at risk of running out of disk space, while neighboring arrays still have excess idle capacity. This problem is exacerbated by server virtualization, since each physical server now supports multiple virtual machines with additional storage LUNs and a more dynamic utilization of storage space. The hard-coded assignment of storage capacity on specific storage arrays to individual servers or VMs is too inflexible to meet the requirements of the more fluid IT environments characteristic of today's data centers.

Storage virtualization solves this problem by inserting an abstraction layer between the server farm and the downstream physical storage targets. This abstraction layer can be supported on the host, the storage controller, within the fabric, or on dedicated virtualization appliances.
Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool.

As illustrated in Figure 20, storage virtualization breaks the physical assignment between servers and their target LUNs. The storage capacity of each physical storage system is now assigned to a virtual storage pool from which virtual LUNs (for example, LUNs 1 through 6 at the top of the figure) can be created and dynamically assigned to servers. Because the availability of storage capacity is now no longer restricted to individual storage arrays, LUN creation and sizing is dependent only on the total capacity of the virtual pool. This enables more efficient utilization of the aggregate storage space and facilitates the creation and management of dynamic volumes that can be sized to changing application requirements.

In addition to enabling more efficient use of storage assets, the abstraction layer provided by storage virtualization creates a homogeneous view of storage. The physical arrays shown in Figure 20, for example, could be from any vendor and have proprietary value-added features. Once LUNs are created and assigned to the storage pool, however, vendor-specific functionality is invisible to the servers. From the server perspective, the virtual storage pool is one large generic storage system. Storage virtualization thus facilitates sharing of storage capacity among systems that would otherwise be incompatible with each other.
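The pooling idea can be sketched in a few lines of code. This is a minimal illustration with invented names (StoragePool, create_lun), not any vendor's interface: capacity from several arrays is merged into one free-space pool, and a virtual LUN is carved from that pool, spanning arrays if no single array can hold it.

```python
class StoragePool:
    """Aggregate the free capacity of several physical arrays."""

    def __init__(self, arrays):
        # arrays: mapping of array name -> free capacity in GB
        self.free = dict(arrays)
        self.virtual_luns = {}          # lun_id -> list of (array, GB) extents

    def total_free(self):
        return sum(self.free.values())

    def create_lun(self, lun_id, size_gb):
        """Allocate a virtual LUN; extents may span multiple arrays."""
        if size_gb > self.total_free():
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        # Fill from the emptiest-first? Here: largest free space first.
        for array in sorted(self.free, key=self.free.get, reverse=True):
            take = min(self.free[array], remaining)
            if take:
                self.free[array] -= take
                extents.append((array, take))
                remaining -= take
            if remaining == 0:
                break
        self.virtual_luns[lun_id] = extents
        return extents

pool = StoragePool({"Array A": 500, "Array B": 200, "Array C": 800})
print(pool.create_lun("LUN 1", 900))   # spans Array C and Array A
print(pool.total_free())               # 600 GB still free in the pool
```

Note that a 900 GB LUN succeeds even though no single array has 900 GB free, which is exactly the benefit the text describes: sizing depends only on the total capacity of the pool.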
As with all virtualization solutions, masking the underlying complexity of physical systems does not make that complexity disappear. Instead, the abstraction layer provided by virtualization software and hardware logic (the virtualization engine) must assume responsibility for errors or changes that occur at the physical layer. In the case of storage virtualization specifically, management of backend complexity centers primarily on the maintenance of the metadata mapping required to correlate virtual storage addresses to real ones, as shown in Figure 21.

Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets.

Storage virtualization proxies virtual targets (storage) and virtual initiators (servers) so that real initiators and targets can connect to the storage pool without modification using conventional SCSI commands. The relationship between virtual and real storage LUN assignment is maintained by the metadata map. A virtual LUN of 500 GB, for example, may map to storage capacity spread across several physical arrays. Loss of the metadata mapping would mean loss of access to the real data. A storage virtualization solution must therefore guarantee the integrity of the metadata mapping and provide safeguards in the form of replication and synchronization of metadata map copies.
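A simple way to picture the metadata map is as an extent table per virtual LUN. The structure and names below are hypothetical, chosen only to illustrate the translation the text describes: a block address on a virtual LUN resolves to a (physical array, physical LUN, offset) triple, and a gap in the map means the data is unreachable.

```python
# Each extent: (virtual_start_gb, size_gb, array, physical_lun, physical_start_gb)
METADATA_MAP = {
    "vLUN 1": [
        (0,   300, "Array A", "LUN 8",  0),
        (300, 200, "Array C", "LUN 43", 0),
    ],
}

def resolve(virtual_lun, offset_gb):
    """Translate a virtual address into its real location."""
    for v_start, size, array, phys_lun, p_start in METADATA_MAP[virtual_lun]:
        if v_start <= offset_gb < v_start + size:
            return array, phys_lun, p_start + (offset_gb - v_start)
    # A lost or corrupted mapping makes the real data inaccessible,
    # which is why the map itself must be replicated and synchronized.
    raise LookupError("address not mapped")

print(resolve("vLUN 1", 150))   # ('Array A', 'LUN 8', 150)
print(resolve("vLUN 1", 450))   # ('Array C', 'LUN 43', 150)
```

The 500 GB virtual LUN here spans two arrays, mirroring the example in the text, and the LookupError path shows why metadata integrity is the critical dependency.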
As a data center best practice, creation of storage pools from multiple physical storage arrays should be implemented by storage class. High-end RAID arrays contribute to one virtual pool; lower-performance arrays should be assigned to a separate pool. Aggregating like assets in the same pool ensures consistent performance and comparable availability for all virtual LUNs and thus minimizes problematic inconsistencies among disparate systems. In addition, there are benefits in maintaining separate classes of virtualized storage systems for applications such as lifecycle management, as discussed in the next section.

Building on a Storage Virtualization Foundation

Storage virtualization is an enabling technology for higher levels of data management and data protection and facilitates centralized administration and automation of storage operations. Vendor literature on storage virtualization is consequently often linked to snapshot technology for data protection, replication for disaster recovery, virtual tape backup, data migration, and information lifecycle management (ILM). Once storage assets have been vendor-neutralized and pooled via virtualization, it is easier to overlay advanced storage services that are not dependent on vendor-proprietary functionality. Data replication for remote disaster recovery, for example, no longer depends on a vendor-specific application and licensing but can be executed via a third-party solution.

One of the central challenges of next-generation data center design is to align infrastructure to application requirements. In the end, it's really about the upper-layer business applications, their availability and performance, and safeguarding the data they generate and process. For data storage, aligning infrastructure to applications requires a more flexible approach to the handling and maintenance of data assets as the business value of the data itself changes over time.
As shown in Figure 22, information lifecycle management can leverage virtualized storage tiers to pair the cost of virtual storage containers to the value of the data they contain.

Providing that each virtual storage tier is composed of a similar class of products, each tier represents different performance and availability characteristics, burdened cost of storage, and energy consumption.
Class of storage       Tier 1    Tier 2    Tier 3    Tier 4
Burdened cost per GB   10x       4x        1x        0.5x
Value of data          High      Moderate  Low       Archive

Figure 22. Leveraging classes of storage to align data storage to the business value of data over time.

By migrating data from one level to another as its immediate business value declines, capacity on high-value systems is freed to accommodate new active transactions. Business practice or regulatory compliance, however, can require that migrated data remain accessible within certain time frames. Tier 2 and 3 classes may not have the performance and 99.999% availability of Tier 1 systems but still provide adequate accessibility before the data can finally be retired to tape. In addition, if each tier is a virtual storage pool, maximum utilization of the storage capacity of a tier can help reduce overall costs and more readily accommodate the growth of aged data without the addition of new disk drives.

Establishing tiers of virtual storage pools by storage class also provides the foundation for automating data migration from one level to another over time. Policy-based data migration can be triggered by a number of criteria, including frequency of access to specific data sets, the age of the data, flagging transactions as completed, or appending metadata to indicate data status. Reducing or eliminating the need for manual operator intervention can significantly reduce administrative costs and enhance the return on investment (ROI) of a virtualization deployment.
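A policy engine of this kind can be surprisingly small. The sketch below uses the migration criteria named above (access frequency, data age, transaction completion), with thresholds invented purely for illustration; a real ILM policy would be tuned to business and compliance requirements.

```python
def migration_policy(dataset):
    """Return the tier a data set should occupy under a simple policy.

    dataset keys: age_days, accesses_per_day, transaction_complete.
    Thresholds (365 days, 1 access/day, 90 days) are illustrative only.
    """
    if dataset["transaction_complete"] and dataset["age_days"] > 365:
        return "Tier 4"            # archive class, cheapest per GB
    if dataset["accesses_per_day"] < 1:
        return "Tier 3"            # cold but still online
    if dataset["age_days"] > 90:
        return "Tier 2"            # aging but active
    return "Tier 1"                # hot, high-value data

print(migration_policy(
    {"age_days": 30, "accesses_per_day": 50, "transaction_complete": False}))
# Tier 1
print(migration_policy(
    {"age_days": 400, "accesses_per_day": 0, "transaction_complete": True}))
# Tier 4
```

Running such a policy on a schedule, rather than by operator decision, is what delivers the administrative savings and ROI the text describes.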
Centralizing Storage Virtualization from the Fabric

Although storage virtualization can be implemented on host systems, storage controllers, dedicated appliances, or within the fabric via directors or switches, there are trade-offs for each solution in terms of performance and flexibility. Because host-based storage virtualization is deployed per server, for example, it incurs greater overhead in terms of administration and consumes CPU cycles on each host. Dedicated storage virtualization appliances are typically deployed between multiple hosts and their storage targets, making it difficult to scale to larger configurations without performance and availability issues. Implementing storage virtualization on the storage array controllers is a viable alternative, providing that the vendor can accommodate heterogeneous systems for multi-vendor environments. Because all storage data flows through the storage network, or fabric, however, fabric-based storage virtualization has been a compelling solution for centralizing the virtualization function and enabling more flexibility in scaling and deployment.

The central challenge for fabric-based virtualization is to achieve the highest performance while maintaining the integrity of metadata mapping and exception handling. Fabric-based virtualization is now codified in an ANSI/INCITS T11.5 standard, which provides APIs for communication between virtualization software and the switching elements embedded in a switch or director blade. The Fabric Application Interface Standard (FAIS) separates the control path to a virtualization engine (typically external to the switch) and the data paths between initiators and targets.
As shown in Figure 23, the Control Path Processor (CPP) represents the virtualization intelligence layer and the FAIS interface, while the Data Path Controller (DPC) ensures that the proper connectivity is established between the servers, storage ports, and the virtual volume created via the CPP. Exceptions are forwarded to the CPP, freeing the DPC to continue processing valid transactions.
Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers.

Because the DPC function can be executed in an ASIC at the switch level, it is possible to achieve very high performance without impacting upper-layer applications. This is a significant benefit over host-based and appliance-based solutions. And because communication between the virtualization engine and the switch is supported by standards-based APIs, it is possible to run a variety of virtualization software solutions.

The central role a switch plays in providing connectivity between servers and storage, and the FAIS-enabled ability to execute metadata mapping for virtualization, also creates new opportunities for fabric-based services such as mirroring or data migration. With high performance and support for heterogeneous storage systems, fabric-based services can be implemented with much greater transparency than alternate approaches and can scale over time to larger deployments.
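The CPP/DPC split follows a classic fast-path/slow-path pattern, sketched below with invented class names (this is a conceptual model, not the FAIS API): the data path forwards I/Os from a cached lookup table at full speed, and anything it cannot resolve is punted as an exception to the control path, which owns the authoritative metadata.

```python
class ControlPathProcessor:
    """Slow path: owns the authoritative virtual-to-physical map."""

    def __init__(self, metadata):
        self.metadata = metadata

    def handle_exception(self, vlun):
        # e.g. reload a mapping after a pool change or error
        return self.metadata[vlun]

class DataPathController:
    """Fast path: line-rate forwarding from a cached map."""

    def __init__(self, cpp):
        self.cpp, self.cache = cpp, {}

    def forward(self, vlun, block):
        if vlun not in self.cache:
            # Exception: a cache miss is forwarded to the CPP, so the
            # DPC can keep processing valid transactions at line rate.
            self.cache[vlun] = self.cpp.handle_exception(vlun)
        array, lun = self.cache[vlun]
        return f"{array}/{lun}@{block}"

cpp = ControlPathProcessor({"vLUN 1": ("Array A", "LUN 8")})
dpc = DataPathController(cpp)
print(dpc.forward("vLUN 1", 4096))   # Array A/LUN 8@4096
```

In hardware, the DPC role maps to the switch ASIC and the CPP to an external engine, which is why the common case never pays the cost of the slow path.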
Brocade Fabric-based Storage Virtualization

Engineered to the ANSI/INCITS T11 FAIS specification, the Brocade FA4-18 Application Blade provides high-performance storage virtualization for the Brocade 48000 Director and Brocade DCX Backbone.

NOTE: Information for the Brocade DCX Backbone also includes the Brocade DCX-4S Backbone unless otherwise noted.

Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring, and data migration.

As shown in Figure 24, compatibility with both the Brocade 48000 and Brocade DCX chassis enables the Brocade FA4-18 Application Blade to extend the benefits of Brocade energy-efficient design and high bandwidth to advanced fabric services without requiring a separate enclosure. Interoperability with existing SAN infrastructures amplifies this advantage, since any server connected to the SAN can be directed to the FA4-18 blade for virtualization services. Line-speed metadata mapping is achieved through purpose-built components instead of relying on the general-purpose processors that other vendors use.
The virtualization application is provided by third-party and partner solutions, including EMC Invista software. For Invista specifically, a Control Path Cluster (CPC) consisting of two processor platforms attached to the FA4-18 provides high availability and failover in the event of link or unit failure. Initial configuration of storage pools is performed on the Invista CPC and downloaded to the FA4-18 for execution. Because the virtualization functionality is driven in the fabric and under configuration control of the CPC, this solution requires no host middleware or host CPU cycles for attached servers.

For new data center design or upgrade, storage virtualization is a natural complement to server virtualization. Fabric-based storage virtualization offers the added advantage of flexibility, performance, and transparency to both servers and storage systems as well as enhanced control over the virtual environment.
Chapter 5: Weaving a New Data Center Fabric

Intelligent design in the storage infrastructure

In the early days of SAN adoption, storage networks tended to evolve spontaneously in reaction to new requirements for additional ports to accommodate new servers and storage devices. In practice, this meant acquiring new fabric switches and joining them to an existing fabric via E_Port connection, typically in a mesh configuration to provide alternate switch-to-switch links. As a result, data centers gradually built very large and complex storage networks composed of 16- or 32-port Fibre Channel switches. At a certain critical mass, these large multi-switch fabrics became problematic and vulnerable to fabric-wide disruptions through state change notification (SCN) broadcasts or fabric reconfigurations. For large data centers in particular, the response was to begin consolidating the fabric by deploying high-port-count Fibre Channel directors at the core and using the 16- or 32-port switches at the edge for device fan-out.

Consolidation of the fabric brings several concrete benefits, including greater stability, high performance, and the ability to accommodate growth in ports without excessive dependence on inter-switch links (ISLs) to provide connectivity. A well-conceived core/edge SAN design can provide optimum pathing between groups of servers and storage ports with similar performance requirements, while simplifying management of SAN traffic. The concept of a managed unit of SAN is predicated on the proper sizing of a fabric configuration to meet both connectivity and manageability requirements. Keeping the SAN design within rational boundaries, however, is now facilitated with new standards and features that bring more power and intelligence to the fabric.
As with server and storage consolidation, fabric consolidation is also driven by the need to reduce the number of physical elements in the data center and their associated power requirements. Each additional switch means additional redundant power supplies, fans, heat generation, cooling load, and data center real estate. As with blade server frames, high-port-density platforms such as the Brocade DCX Backbone enable more concentrated productivity in a smaller footprint and with a lower total energy budget. The trend in new data center design is therefore to architect the entire storage infrastructure for minimal physical and energy impact while accommodating inevitable growth over time. Although lower-port-count switches are still viable solutions for departmental, small-to-medium-size business (SMB), and fan-out applications, Brocade backbones are now the cornerstone for optimized data center fabric designs.

Better Fewer but Better

Storage area networks substantially differ from conventional data communications networks in a number of ways. A typical LAN, for example, is based on peer-to-peer communications with all endpoints (nodes) sharing equal access. The underlying assumption is that any node can communicate with any other node at any time. A SAN, by contrast, cannot rely on peer-to-peer connectivity since some nodes are active (initiators/servers) and others are passive (storage targets). Storage systems do not typically communicate with each other (with the exception of disk-to-disk data replication or array-based virtualization) across the SAN. Targets also do not initiate transactions, but passively wait for an initiator to access them.
Consequently, storage networks must provide a range of unique services to facilitate discovery of storage targets by servers, restrict access to only authorized server/target pairs, zone or segregate traffic between designated groups of servers and their targets, and provide notifications when storage assets enter or depart the fabric. These services are not required in conventional data communication networks. In addition, storage traffic requires deterministic delivery, whereas LAN and WAN protocols are typically best-effort delivery systems.

These distinctions play a central role in the proper design of data center SANs. Unfortunately, some vendors fail to appreciate the unique requirements of storage environments and recommend what are essentially network-centric architectures instead of the more appropriate storage-centric approach. Applying a network-centric design to storage inevitably results in a failure to provide adequate safeguards for storage traffic and a greater vulnerability to inefficiencies, disruption, or poor performance. Brocade's strategy is to promote storage-centric SAN designs that more readily accommodate the unique and more demanding requirements of storage traffic and ensure stable and highly available connectivity between servers and storage systems.

A storage-centric fabric design is facilitated by concentrating key corporate storage elements at the core, while accommodating server access and departmental storage at the edge. As shown in Figure 25, the SAN core can be built with high-port-density backbone platforms. With up to 384 x 8 Gbps ports in a single chassis or up to 768 ports in a dual-chassis configuration, the core layer can support hundreds of storage ports and, depending on the appropriate fan-in ratio, thousands of servers in a single high-performance solution. The Brocade DCX Backbone, a 14U chassis with eight vertical blade slots, is also available as the 192-port, 8U Brocade DCX-4S with four horizontal blade slots, with compatibility for any Brocade DCX blade. Because two or even three backbone chassis can be deployed in a single 19" rack or adjacent racks, real estate is kept to a minimum. Power consumption of less than a half watt per Gbps provides over 10x the energy efficiency of comparable enterprise-class products. Doing more with less is thus realized through compact product design and engineering power efficiency down to the port.

Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.
In this example, servers and storage assets are configured to best meet the performance and traffic requirements of specific business applications. Mission-critical servers with high-performance requirements, for example, can be attached directly to the core layer to provide the optimum path to primary storage. Departmental storage can be deployed at the edge layer, while still enabling servers to access centralized storage resources. With 8 Gbps port connectivity and the ability to trunk multiple inter-switch links between the edge and core, this design provides the flexibility to support different bandwidth and performance needs for a wide range of business applications in a single coherent architecture.

In terms of data center consolidation, a single-rack, dual-chassis Brocade DCX configuration of 768 ports can replace 48 x 16-port or 24 x 32-port switches, providing a much more efficient use of fabric address space, centralized management, and microcode version control, and a dramatic decrease in maintenance overhead, energy consumption, and cable complexity. Consequently, current data center best practices for storage consolidation now incorporate fabric consolidation as a foundation for shrinking the hardware footprint and its associated energy costs.
In addition, because the Brocade DCX 8 Gbps port blades are backward compatible with 1, 2, and 4 Gbps speeds, existing devices can be integrated into a new consolidated design without expensive upgrades.

Intelligent by Design

The new data center fabric is characterized by high port density, compact footprint, low energy costs, and streamlined management, but the most significant differentiating features compared to conventional SANs revolve around increased intelligence for storage data transport. New functionality that streamlines data delivery, automates data flows, and adapts to changed network conditions both ensures stable operation and reduces the need for manual intervention and administrative oversight. Brocade has developed a number of intelligent fabric capabilities under the umbrella term of Adaptive Networking services to streamline fabric operations.

Large complex SANs, for example, typically support a wide variety of business applications, ranging from high-performance and mission-critical to moderate-performance requirements. In addition, storage-specific applications such as tape backup may share the same infrastructure as production applications. If all storage traffic types were treated with the same priority, the potential would exist for congestion and disruption of high-value applications impacted negatively by the
traffic load of moderate-value applications. Brocade addresses this problem via a quality of service mechanism, which enables the storage administrator to assign priority values to different applications.

Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.

As shown in Figure 26, applications running on conventional or virtualized servers can be assigned high, medium, or low priority delivery through the fabric. This QoS solution guarantees that essential but lower-priority applications such as tape backup do not overwhelm mission-critical applications such as online transaction processing (OLTP). It also makes it much easier to deploy new applications over time or migrate existing virtual machines, since the QoS priority level of an application moderates its consumption of available bandwidth. When combined with the high performance and 8 Gbps port speed of Brocade HBAs, switches, directors, and backbone platforms, QoS provides an additional means to meet application requirements despite fluctuations in aggregate traffic loads.

Because traffic loads vary over time and sudden spikes in workload can occur unexpectedly, congestion on a link, particularly between the fabric and a burdened storage port, can occur. Ideally, a flow control mechanism would enable the fabric to slow the pace of traffic at the source of the problem, typically a very active server generating an atypical workload. Another Adaptive Networking service, Brocade ingress rate limiting (IRL), proactively monitors the traffic levels on all links and, when congestion is sensed on a specific link, identifies the
initiating source. Ingress rate limiting allows the fabric switch to throttle the transmission rate of a server to a speed lower than the originally negotiated link speed.

Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator.

In the example shown in Figure 27, the Brocade DCX monitors potential congestion on the link to a storage array and proactively reduces the rate of transmission at the server source. If, for example, the server HBA had originally negotiated an 8 Gbps transmission rate when it initially logged in to the fabric, ingress rate limiting could reduce the transmission rate to 4 Gbps or lower, depending on the volume of traffic to be reduced to alleviate congestion at the storage port. Thus, without operator intervention, potentially disruptive congestion events can be resolved proactively, while ensuring continuous operation of all applications.

Brocade's Adaptive Networking services also enable storage administrators to establish preferred paths for specific applications through the fabric and the ability to fail over from a preferred path to an alternate path if the preferred path is unavailable. This capability is especially useful for isolating certain applications such as tape backup or disk-to-disk replication to ensure that they always enter or exit on the same inter-switch link to optimize the data flow and avoid overwhelming other application streams.
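The throttling behavior can be modeled as stepping an initiator down through the standard link speeds while congestion persists. This is an illustrative sketch of the logic only (the decision policy is invented, not Brocade firmware), matching the 8-to-4 Gbps example above.

```python
# Step down through standard FC link speeds when congestion is sensed.
SUPPORTED_RATES_GBPS = [8, 4, 2, 1]

def apply_irl(current_rate, congested):
    """Return the new ingress rate for an initiator.

    On congestion, throttle one step below the current rate; at the
    lowest supported rate there is nothing further to shed.
    """
    if not congested:
        return current_rate   # restoring the negotiated rate is a
                              # separate policy decision, omitted here
    idx = SUPPORTED_RATES_GBPS.index(current_rate)
    return SUPPORTED_RATES_GBPS[min(idx + 1, len(SUPPORTED_RATES_GBPS) - 1)]

rate = 8                                  # negotiated at fabric login
rate = apply_irl(rate, congested=True)
print(rate)                               # 4: throttled below 8 Gbps
rate = apply_irl(rate, congested=True)
print(rate)                               # 2: congestion persists
```

The key property is that the remedy is applied at the ingress (the offending server), not at the congested storage port, so other flows through the fabric are unaffected.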
Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications.

Figure 28 illustrates a fabric with two primary business applications (ERP and Oracle) and a tape backup segment. In this example, the tape backup preferred path is isolated from the ERP and Oracle database paths so that the high volume of traffic generated by backup does not interfere with the production applications. Because the preferred path traffic isolation zone also accommodates failover to alternate paths, the storage administrator does not have to intervene manually if issues arise in a particular isolation zone.

To more easily identify which applications might require specialized treatment with QoS, rate limiting, or traffic isolation, Brocade has provided a Top Talkers monitor for devices in the fabric. Top Talkers automatically monitors the traffic pattern on each port to diagnose over- or under-utilization of port bandwidth.
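A traffic isolation zone can be pictured as a per-application routing table with a preferred ISL and an alternate for failover. The structure below is hypothetical (the zone and ISL names echo Figure 28 but are invented for illustration); the point is that failover is automatic, requiring no operator intervention.

```python
# Each application's traffic is pinned to a preferred inter-switch link.
ZONES = {
    "Backup": {"preferred": "ISL 3", "alternate": "ISL 1"},
    "ERP":    {"preferred": "ISL 1", "alternate": "ISL 2"},
    "Oracle": {"preferred": "ISL 2", "alternate": "ISL 1"},
}

def route(app, up_links):
    """Pick the ISL for an application given the set of links that are up."""
    zone = ZONES[app]
    if zone["preferred"] in up_links:
        return zone["preferred"]
    if zone["alternate"] in up_links:
        return zone["alternate"]      # automatic failover, no manual step
    raise RuntimeError("no path available")

print(route("Backup", {"ISL 1", "ISL 2", "ISL 3"}))  # ISL 3
print(route("Backup", {"ISL 1", "ISL 2"}))           # ISL 1 (failover)
```

With backup pinned to its own ISL, its bulk traffic cannot crowd the ERP and Oracle paths, which is exactly the isolation the figure describes.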
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services.

Applications that generate higher volumes of traffic through the fabric are primary candidates for Adaptive Networking services, as shown in Figure 29. This functionality is especially useful in virtual server environments, since the deployment of new VMs or migration of VMs from one platform to another can have unintended consequences. Top Talkers can help indicate when a migration might be desirable to benefit from higher bandwidth or preferred pathing.

In terms of aligning infrastructure to applications, Top Talkers allows administrators to deploy fabric resources where and when they are needed most. Configuring additional ISLs to create a higher-performance trunk, for example, might be required for particularly active applications, while moderate-performance applications could continue to function quite well on conventional links.
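At its core, a top-talkers report is a ranking of flows by observed throughput. The sketch below invents the sample data and names (it is not the Fabric OS feature itself) to show how such a ranking surfaces the candidates for QoS, rate limiting, or traffic isolation.

```python
# Observed throughput per (host, fabric port) over a sample interval.
observed_mb_s = {
    ("Backup host", "Port 14"): 720,
    ("ERP host",    "Port 20"): 310,
    ("Oracle host", "Port 56"): 540,
    ("Dev host",    "Port 3"):  12,
}

def top_talkers(samples, n=3):
    """Return the n busiest flows, highest throughput first."""
    return sorted(samples, key=samples.get, reverse=True)[:n]

for flow in top_talkers(observed_mb_s):
    print(flow, observed_mb_s[flow])
# The backup, Oracle, and ERP flows surface first; the nearly idle
# dev host never appears, so it needs no special treatment.
```

The same ranking, run continuously, is what flags an over-utilized port as a candidate for an additional trunked ISL or a VM migration.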
Energy Efficient Fabrics

In the previous era of readily available and relatively cheap energy, data center design focused more on equipment placement and convenient access than on the power requirements of the IT infrastructure. Today, many data centers simply cannot obtain additional power capacity from their utilities or are under severe budget constraints to cover ongoing operational expense. Consequently, data center managers are scrutinizing the power requirements of every hardware element and looking for means to reduce the total data center power budget. As we have seen, this is a major driver for technologies such as server virtualization and consolidation of hardware assets across the data center, including storage and storage networking.

The energy consumption of data center storage systems and storage networking products has been one of the key focal points of the Storage Networking Industry Association (SNIA) in the form of the SNIA Green Storage Initiative (GSI) and Green Storage Technical Working Group (GS TWG). In January 2009, the SNIA GSI released the SNIA Green Storage Power Measurement Specification as an initial document to formulate standards for measuring the energy efficiency of different classes of storage products. For storage systems, energy efficiency can be defined in terms of watts per megabyte of storage capacity. For fabric elements, energy efficiency can be defined in watts per gigabytes/second of bandwidth. Brocade played a leading role in the formation of the SNIA GSI, participates in the GS TWG, and leads by example in pioneering the most energy-efficient storage fabric products in the market.

Achieving the greatest energy efficiency in fabric switches and directors requires a holistic view of product design so that all components are optimized for low energy draw. Enterprise switches and directors, for example, are typically provisioned with dual-redundant power supplies for high availability.
From an energy standpoint, it would be preferable to operate with only a single power supply, but business availability demands redundancy for failover. Consequently, it is critical to design power supplies that have at least 80% efficiency in converting AC input power into DC output to service switch components. Likewise, the cooling efficiency of fan modules and the selection and placement of discrete components for processing elements and port cards all add to a product design optimized for high performance and low energy consumption. Typically, for every watt of power consumed for productive IT processing, another watt is required to cool the equipment. Dramatically lowering the energy consumption of fabric switches and directors therefore has a dual benefit in terms of reducing both direct power costs and indirect cooling overhead.

The Brocade DCX achieves an energy efficiency of less than a watt of power per gigabit of bandwidth. That is 10x more efficient than comparable directors on the market and frees up available power for other IT equipment. To highlight this difference in product design philosophy, in laboratory tests a fully loaded Brocade director consumed less power (4.6 Amps) than an empty chassis from a competitor (5.1 Amps). The difference in energy draw of two comparably configured directors would be enough to power an entire storage array. Energy-efficient switch and director designs have a multiplier benefit as more elements are added to the SAN. Although the fabric infrastructure as a whole is a small part of the total data center energy budget, it can be leveraged to reduce costs and make better use of available power resources. As shown in Figure 30, power measurements on an 8 Gbps port at full speed show the Brocade DCX advantage.

Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.
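The watts-per-gigabit metric used above is straightforward to compute. The sketch below is illustrative only: the 384-port, 8 Gbps, 2,500 W director configuration and the 208 V supply voltage are assumptions for the sake of the arithmetic, not measured Brocade or competitor figures.

```python
# Sketch of the watts-per-gigabit efficiency metric described above.
# All input figures are illustrative assumptions, not measured values.

def watts_per_gbps(total_watts: float, ports: int, gbps_per_port: float) -> float:
    """Energy efficiency of a switch: watts consumed per Gbps of bandwidth."""
    return total_watts / (ports * gbps_per_port)

def amps_to_watts(amps: float, volts: float) -> float:
    """Approximate power draw from measured current at a given supply voltage."""
    return amps * volts

# Hypothetical fully loaded director: 384 ports at 8 Gbps drawing 2,500 W.
eff = watts_per_gbps(2500, 384, 8)
print(f"{eff:.2f} W/Gbps")   # comes out well under 1 W per Gbps

# The lab comparison quoted in the text is in amps; at an assumed 208 V feed:
print(f"{amps_to_watts(4.6, 208):.0f} W vs {amps_to_watts(5.1, 208):.0f} W")
```

At the assumed voltage, the 0.5 A difference between the two chassis is roughly 100 W of continuous draw, before counting the matching cooling overhead noted above.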
Safeguarding Storage Data

Unfortunately, SAN security has been a back-burner issue for many storage administrators, due in part to several myths about the security of data centers in general. These myths (listed below) are addressed in detail in Roger Bouchard's Securing Fibre Channel Fabrics (Brocade Bookshelf) and include assumptions about data center physical security and the difficulty of hacking into Fibre Channel networks and protocols. Given that most breaches in storage security occur through operator error and lost disks or tape cartridges, however, threats to storage security are typically internal, not external, risks.

SAN Security Myths

• SAN Security Myth #1. SANs are inherently secure since they are in a closed, physically protected environment.
• SAN Security Myth #2. The Fibre Channel protocol is not well known by hackers and there are almost no avenues available to attack FC fabrics.
• SAN Security Myth #3. You can't “sniff” optical fiber without cutting it first and causing disruption.
• SAN Security Myth #4. The SAN is not connected to the Internet, so there is no risk from outside attackers.
• SAN Security Myth #5. Even if fiber cables could be sniffed, there are so many protocol layers, file systems, and database formats that the data would not be legible in any case.
• SAN Security Myth #6. Even if fiber cables could be sniffed, the amount of data is simply too large to capture realistically and would require expensive equipment to do so.
• SAN Security Myth #7. If the switches already come with built-in security features, why should I be concerned with implementing security features in the SAN?

The centrality of the fabric in providing both host and storage connectivity provides new opportunities for safeguarding storage data.
As with other intelligent fabric services, fabric-based security mechanisms can help ensure consistent implementation of security policies and the flexibility to apply higher levels of security where they are most needed.
Because data on disk or tape is vulnerable to theft or loss, sensitive information is at risk unless the data itself is encrypted. Best practices for guarding corporate and customer information consequently mandate full encryption of data as it is written to disk or tape and a secure means to manage the encryption keys used to encrypt and decrypt the data. Brocade has developed a fabric-based solution for encrypting data-at-rest that is available as a blade for the Brocade DCX Backbone (Brocade FS8-18 Encryption Blade) or as a standalone switch (Brocade Encryption Switch).

Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape. (Servers, key management, storage arrays, and tape are shown.)

Both the 16-port encryption blade for the Brocade DCX and the 32-port encryption switch provide 8 Gbps per port for fabric or device connectivity, an aggregate 96 Gbps of hardware-based encryption throughput, and 48 Gbps of data compression bandwidth. The combination of encryption and data compression enables greater efficiency in both storing and securing data. For encryption to disk, the IEEE AES256-XTS encryption algorithm facilitates encryption of disk blocks without increasing the amount of data per block. For encryption to tape, the AES256-GCM encryption algorithm appends authenticating metadata to each encrypted data block. Because tape devices accommodate variable block sizes, encryption does not impede backup operations. From the host standpoint, both encryption processes are transparent, and due to the high performance of the Brocade encryption engine there is no impact on response time.
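The size properties that make XTS fit disk and GCM fit tape can be shown with simple arithmetic. The tag size is standard for AES-GCM; the 512-byte disk sector and 64 KB tape block sizes below are illustrative assumptions.

```python
# Back-of-the-envelope view of why AES-XTS suits fixed-size disk blocks while
# AES-GCM suits tape, as described above. The 16-byte GCM tag is standard;
# the block sizes are illustrative assumptions.

GCM_TAG_BYTES = 16  # authentication tag appended per encrypted record

def xts_ciphertext_len(plaintext_len: int) -> int:
    # XTS is length-preserving: a 512-byte sector encrypts to 512 bytes,
    # so on-disk block layout is unchanged.
    return plaintext_len

def gcm_ciphertext_len(plaintext_len: int) -> int:
    # GCM appends authenticating metadata; tape drives tolerate the growth
    # because they accept variable-length blocks.
    return plaintext_len + GCM_TAG_BYTES

print(xts_ciphertext_len(512))     # 512 -- fits back into the original sector
print(gcm_ciphertext_len(65536))   # 65552 -- original block plus 16-byte tag
```

This is why the disk path can encrypt in place while the tape path gains per-block integrity checking at the cost of a few bytes of growth.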
As shown in Figure 31, the Brocade Encryption Switch supports both fabric attachment and end-device connectivity. Within both the encryption blade and the switch, virtual targets are presented to the hosts and virtual initiators are presented to the downstream storage array or tape subsystem ports. Frame redirection, a Fabric OS technology, is used to forward traffic to the encryption device for encryption on data writes and decryption on data reads. In the case of direct device attachment (for example, the tape device connected to the encryption switch in Figure 31), the encrypted data is simply switched to the appropriate port.

Because no additional middleware is required for hosts or storage devices, this solution easily integrates into existing fabrics and can provide a much higher level of data security with minimal reconfiguration. Key management for safeguarding and authenticating encryption keys is provided via an Ethernet connection to the Brocade encryption device.

Brocade Key Management Solutions

• NetApp KM500 Lifetime Key Management (LKM) Appliance
• EMC RSA Key Manager (RKM) Server Appliance
• HP StorageWorks Secure Key Manager (SKM)
• Thales Encryption Manager for Storage

For more about these key management solutions, visit the Brocade Encryption Switch product page on www.brocade.com and find the Technical Briefs section at the bottom of the page.

In addition to data encryption for disk and tape, fabric-based security includes features for protecting the integrity of fabric connectivity and safeguarding management interfaces. Brocade switches and directors use access control lists (ACLs) to allow access to the fabric for only authorized switches and end devices. Based on the port or device's WWN, Switch Connection Control (SCC) and Device Connection Control (DCC) prevent the intentional or accidental connection of a new switch or device that would potentially pose a security threat, as shown in Figure 32.
Once configured, the fabric is essentially locked down to prevent unauthorized access until the administrator specifically defines a new connection. Although this requires additional management intervention, it precludes disruptive fabric reconfigurations and security breaches that could otherwise occur through deliberate action or operator error.

Figure 32. Using fabric ACLs to secure switch and device connectivity by blocking an unauthorized device.

For securing storage data-in-flight, Brocade also provides hardware-based encryption on its 8 Gbps HBAs and the Brocade 7800 Extension Switch and FX8-24 Extension Blade products. In high-security environments, meeting regulatory compliance standards can require encrypting all data along the entire data path, from host to the primary storage target as well as to secondary storage in disaster recovery scenarios. This capability is now available across the entire fabric with no impact on fabric performance or availability.

Multi-protocol Data Center Fabrics

Data center best practices have historically prescribed the separation of networks according to function. Creating a dedicated storage area network, for example, ensures that storage traffic is unimpeded by the more erratic traffic patterns typical of messaging or data communications networks. In part, this separation was facilitated by the fact that nearly all storage networks used a unique protocol and transport, that is, Fibre Channel, while LANs are almost universally based on Ethernet. This situation changed somewhat with the introduction of iSCSI for transporting SCSI block data over conventional Ethernet and TCP/IP, although most iSCSI vendors still recommend building a dedicated IP storage network for iSCSI hosts and storage.
Fibre Channel continues to be the protocol of choice for high-performance, highly available SANs. There are several reasons for this, including the ready availability of diverse Fibre Channel products and the continued evolution of the technology to higher speeds and richer functionality over time. Still, although nearly all data centers worldwide run their most mission-critical applications on Fibre Channel SANs, many data centers also house hundreds or thousands of moderate-performance standalone servers with legacy DAS. It is difficult to cost-justify installation of Fibre Channel HBAs into low-cost servers if the cost of storage connectivity exceeds the cost of the server itself.

iSCSI has found its niche market primarily in cost-sensitive small and medium business (SMB) environments. It offers the advantage of low-cost per-server connectivity, since iSCSI device drivers are readily available for a variety of operating systems at no cost and can be run over conventional Ethernet or (preferably) Gigabit Ethernet interfaces. The IP SAN switched infrastructure can be built with off-the-shelf, low-cost Ethernet switches. And various storage system vendors offer iSCSI interfaces for mid-range storage systems and tape backup subsystems. Of course, Gigabit Ethernet does not have the performance of 4 or 8 Gbps Fibre Channel, but for mid-tier applications Gigabit Ethernet may be sufficient, and the total cost for implementing shared storage is very reasonable, even when compared to direct-attached SCSI storage.

Using iSCSI to transition from direct-attached to shared storage yields most of the benefits associated with traditional SANs. Using iSCSI connectivity, servers are no longer the exclusive “owner” of their own (direct-attached) storage, but can share storage systems over the storage network. If a particular server fails, alternate servers can bind to the failed server's LUNs and continue operation.
As with conventional Fibre Channel SANs, adding storage capacity to the network is no longer disruptive and can be performed on the fly. In terms of management overhead, the greatest benefit of converting from direct-attached to shared storage is the ability to centralize backup operations. Instead of backing up individual standalone servers, backup can now be performed across the IP SAN without disrupting client access. In addition, features such as iSCSI SAN boot can simplify server administration by centralizing management of boot images instead of touching hundreds of individual servers.
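The performance gap between Gigabit Ethernet iSCSI and Fibre Channel noted earlier is easy to quantify in rough terms. The figures below are nominal per-direction data rates (4GFC and 8GFC are conventionally rated at 400 and 800 MB/s), and the GbE number ignores TCP/IP and iSCSI overhead, so treat the comparison as order-of-magnitude only.

```python
# Rough bandwidth comparison behind the iSCSI-versus-Fibre-Channel trade-off
# discussed above. Figures are nominal per-direction data rates and ignore
# protocol overhead, so they are order-of-magnitude estimates.

NOMINAL_MBPS = {                 # usable megabytes/second, approximate
    "1 GbE iSCSI": 1_000 / 8,    # ~125 MB/s before TCP/IP and iSCSI overhead
    "4 Gbps FC":   400,          # 4GFC is conventionally rated at 400 MB/s
    "8 Gbps FC":   800,          # 8GFC at 800 MB/s
}

for link, mbs in NOMINAL_MBPS.items():
    ratio = mbs / NOMINAL_MBPS["1 GbE iSCSI"]
    print(f"{link:12s} ~{mbs:5.0f} MB/s ({ratio:.1f}x GbE)")
```

The several-fold gap explains why GbE iSCSI is positioned for mid-tier workloads while Fibre Channel retains the mission-critical tier.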
One significant drawback of iSCSI, however, is that by using commodity Ethernet switches for the IP SAN infrastructure, none of the storage-specific features built into Fibre Channel fabric switches are available. Fabric login services, automatic address assignment, simple name server (SNS) registration, device discovery, zoning, and other storage services are simply unavailable in conventional Ethernet switches. Consequently, although small iSCSI deployments can be configured manually to ensure proper assignment of servers to their storage LUNs, iSCSI is difficult to manage when scaled to larger deployments. In addition, because Ethernet switches are indifferent to the upper-layer IP protocols they carry, it is more difficult to diagnose storage-related problems that might arise. iSCSI standards do include the Internet Storage Name Service (iSNS) protocol for device authentication and discovery, but iSNS must be supplied as a third-party add-on to the IP SAN.

Collectively, these factors overshadow the performance difference between Gigabit Ethernet and 8 Gbps Fibre Channel. Performance becomes less of an issue when iSCSI is run over 10 Gigabit Ethernet, but that typically requires a specialized iSCSI network interface card (NIC) with TCP offload, iSCSI Extensions for RDMA (iSER), 10 GbE switches, and 10 GbE storage ports. The cost advantage of iSCSI at 1 GbE is therefore quickly undermined when iSCSI attempts to achieve the performance levels common to Fibre Channel.
Even with these additional costs, the basic fabric and storage services embedded in Fibre Channel switches are still unavailable.

For data center applications, however, low-cost iSCSI running over standard Gigabit Ethernet does make sense when standalone DAS servers are integrated into existing Fibre Channel SAN infrastructures via gateway products with iSCSI-to-Fibre Channel protocol conversion. The Brocade FC4-16IP iSCSI Blade for the Brocade 48000 Director, for example, can aggregate hundreds of iSCSI-based servers for connectivity into an existing SAN, as shown in Figure 33. This enables formerly standalone low-cost servers to enjoy the benefits of shared storage while advanced storage services are supplied by the fabric itself. Simplifying tape backup operations is in itself often sufficient cost justification for iSCSI integration via gateways, and if free iSCSI device drivers are used, the per-server connectivity cost is negligible.
Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX. (FC servers, GbE switches, a Brocade director with an iSCSI blade, rack-mount 1U servers, FC storage arrays, and FC tape are shown.)

As discussed in Chapter 3, FCoE is another multi-protocol option for integrating new servers into existing data center fabrics. Unlike the iSCSI protocol, which uses Layer 3 IP routing and TCP for packet recovery, FCoE operates at Layer 2 switching and relies on Fibre Channel protocols for recovery. FCoE is therefore much closer to native Fibre Channel in terms of protocol overhead and performance, but does require an additional level of frame encapsulation and decapsulation for transport over Ethernet. Another dissimilarity to iSCSI is that FCoE requires a specialized host adapter card, a CNA that supports FCoE and 10 Gbps Data Center Bridging. In fact, to replicate the flow control and deterministic performance of native Fibre Channel, Ethernet switches between the host and target must be DCB capable. FCoE therefore does not have the obvious cost advantage of iSCSI, but does offer a comparable means to simplify cabling by reducing the number of server connections needed to carry both messaging and storage traffic.

Although FCoE is being aggressively promoted by some network vendors, the cost/benefit advantage has yet to be demonstrated in practice. In current economic conditions, many customers are hesitant to adopt new technologies that have no proven track record or viable ROI. Although Brocade has developed both CNA adapters and FCoE switch products for customers who are ready to deploy them, the market will determine if simplifying server connectivity is sufficient cost justification for FCoE adoption. At the point when 10 Gbps DCB-enabled switches and CNA technology become commoditized, FCoE will certainly become an attractive option.

Other enhanced solutions for data center fabrics include Fibre Channel over IP (FCIP) for SAN extension, Virtual Fabrics (VF), and Integrated Routing (IR). As discussed in the next section on disaster recovery, FCIP is used to extend Fibre Channel over conventional IP networks for remote data replication or remote tape backup. Virtual Fabrics protocols enable a single complex fabric to be subdivided into separate virtual SANs in order to segregate different applications and protect against fabric-wide disruptions. IR SAN routing protocols enable connectivity between two or more independent SANs for resource sharing without creating one large flat network.

Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions. (Virtual Fabrics 1, 2, and 3 are shown partitioned across the physical SAN.)
As shown in Figure 34, Virtual Fabrics is used to divide a single physical SAN into multiple logical SANs. Each virtual fabric behaves as a separate fabric entity, a logical fabric, with its own simple name server (SNS), registered state change notification (RSCN) service, and domain. Logical fabrics can span multiple switches, providing greater flexibility in how servers and storage within a logical fabric can be deployed. To isolate frame routing between the logical fabrics, VF tagging headers are applied to the appropriate frames as they are issued. The headers are then removed by the destination switch before the frames are sent on to the appropriate initiator or target. Theoretically, the VF tagging header would allow for 4096 logical fabrics on a single physical SAN configuration, although in practice only a few are typically used.

Virtual Fabrics is a means to consolidate SAN assets while enforcing separately managed units of the SAN. In the example shown in Figure 34, each of the three logical fabrics could be administered by a separate department with different storage, security, and bill-back policies. Although the total SAN configuration may be quite large, the division into separately managed logical fabrics simplifies administration while leveraging the data center's investment in SAN technology. Brocade Fabric OS supports Virtual Fabrics across Brocade switch, director, and backbone platforms.

Where Virtual Fabrics technology can be used to isolate resources on the same physical fabric, Integrated Routing (IR) is used to share resources between separate physical fabrics. Without IR, connecting two or more fabrics together would create a large flat network, analogous to bridging in LAN environments. Creating very large fabrics, however, can lead to much greater complexity in management and vulnerability to fabric-wide disruptions.
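The 4096-fabric limit mentioned above follows from the 12-bit fabric identifier carried in the VF tag. The sketch below models only that idea, a tag prepended on ingress and stripped at the destination switch, not the exact T11 VFT header layout; the 2-byte tag encoding is a simplification for illustration.

```python
# Simplified sketch of the Virtual Fabrics tagging idea described above:
# a 12-bit fabric identifier travels with each frame and is stripped at the
# destination switch. This models only the VF_ID field, not the full T11
# VFT header layout.

VF_ID_BITS = 12
MAX_FABRICS = 1 << VF_ID_BITS          # 4096 possible logical fabrics

def tag_frame(vf_id: int, frame: bytes) -> bytes:
    """Prepend the logical-fabric ID so switches can isolate frame routing."""
    if not 0 <= vf_id < MAX_FABRICS:
        raise ValueError("VF_ID must fit in 12 bits")
    return vf_id.to_bytes(2, "big") + frame

def untag_frame(tagged: bytes) -> tuple[int, bytes]:
    """Destination switch removes the tag before delivery to the end device."""
    return int.from_bytes(tagged[:2], "big"), tagged[2:]

tagged = tag_frame(42, b"SCSI write payload")
vf_id, frame = untag_frame(tagged)
print(MAX_FABRICS, vf_id, frame == b"SCSI write payload")
```

Because the tag is added and removed inside the fabric, initiators and targets never see it, which is why logical fabrics can be carved out without touching host or storage configuration.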
Figure 35. IR facilitates resource sharing between physically independent SANs. (SANs A, B, and C are connected through an IR SAN router.)

As shown in Figure 35, IR SAN routers provide both connectivity and fault isolation between separate SANs. In this example, a server on SAN A can access a storage array on SAN B (dashed line) via the SAN router. From the perspective of the server, the storage array is a local resource on SAN A. The SAN router performs network address translation to proxy the appearance of the storage array and to conform to the address space of each SAN. Because each SAN is autonomous, fabric reconfigurations or RSCN broadcasts on one SAN will not adversely impact the others. Brocade products such as the Brocade 7800 Extension Switch and FX8-24 Extension Blade for the Brocade DCX Backbone provide routing capability for non-disruptive resource sharing between independent SANs.

Fabric-based Disaster Recovery

Deploying new technologies to achieve greater energy efficiency, hardware consolidation, and more intelligence in the data center fabric cannot ensure data availability if the data center itself is vulnerable to disruption or outage. Although data center facilities may be designed to withstand seismic or catastrophic weather events, a major disruption can result in prolonged outages that put business operations or the viability of the enterprise at risk. Consequently, most data centers have some degree of disaster recovery planning that provides either instantaneous failover to an alternate site or recovery within acceptable time frames for business resumption. Fortunately, disaster recovery technology has improved significantly in recent years and now enables companies to implement more economical disaster recovery solutions that do not burden the data center with excessive costs or administration.

Disaster recovery planning today is bounded by tighter budget constraints and conventional recovery point and recovery time (RPO/RTO) objectives. In addition, more recent examples of region-wide disruptions (for example, Northeast power blackouts and hurricanes Katrina and Rita in the US) have raised concerns over how far away a recovery site must be to ensure reliable failover. The distance between primary and failover sites is also affected by the type of data protection required. Synchronous disk-to-disk data replication, for example, is limited to metropolitan distances, typically 100 miles or less. Synchronous data replication ensures that every transaction is safely duplicated to a remote location, but the distance may not be sufficient to protect against regional events. Asynchronous data replication buffers multiple transactions before transmission, and so may miss the most recent transaction if a failure occurs. It does, however, tolerate extremely long-distance replication and is currently deployed for disaster recovery installations that span transoceanic and transcontinental distances.

Both synchronous and asynchronous data replication over distance require some kind of wide area service such as metro dark fiber, dense wavelength division multiplexing (DWDM), Synchronous Optical Networking (SONET), or an IP network, and the recurring monthly cost of WAN links is typically the most expensive operational cost in a disaster recovery implementation.
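The metro limit on synchronous replication is fundamentally a speed-of-light problem: every write must complete a round trip before the host receives status, so propagation delay adds directly to write latency. The sketch below uses the common rule of thumb of about 5 microseconds per kilometer in fiber and assumes two round trips per SCSI write (command/transfer-ready, then data/status); both figures are approximations, not vendor specifications.

```python
# Why synchronous replication is metro-limited, per the discussion above.
# ~5 us/km one-way in optical fiber is a rule of thumb; the 2-round-trip
# assumption approximates a SCSI write exchange.

US_PER_KM = 5.0          # one-way propagation in optical fiber, approximate
KM_PER_MILE = 1.609

def sync_write_penalty_ms(miles: float, round_trips: int = 2) -> float:
    """Added write latency from distance alone, in milliseconds."""
    one_way_us = miles * KM_PER_MILE * US_PER_KM
    return round_trips * 2 * one_way_us / 1000.0

for miles in (10, 100, 1000):
    print(f"{miles:5d} miles: +{sync_write_penalty_ms(miles):6.2f} ms per write")
```

At 100 miles the distance penalty is a few milliseconds per write, tolerable for many applications; at 1,000 miles it grows tenfold, which is why long-haul protection shifts to asynchronous replication.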
To connect primary and secondary data center SANs efficiently, then, requires technology to optimize the use of wide area links in order to transmit more data in less time, and the flexibility to deploy long-distance replication over the most cost-effective WAN links appropriate for the application.
Achieving maximum utilization of metro or wide area links is facilitated by combining several technologies, including high-speed bandwidth, port buffers, data compression, rate limiting, and specialized algorithms such as SCSI write acceleration and tape pipelining. For metropolitan distances suitable for synchronous disk-to-disk data replication, for example, native Fibre Channel extension can be implemented up to 218 miles at 8 Gbps using Brocade 8 Gbps port cards in the Brocade 48000 or Brocade DCX. While the distance supported is more than adequate for synchronous applications, the 8 Gbps bandwidth ensures maximum utilization of dark fiber or MAN services. In order to avoid credit starvation at high speeds, the Brocade switch architecture allocates additional port buffers for continuous performance. Even longer distances for native Fibre Channel transport are possible at lower port speeds.

Commonly available IP network links are typically used for long-distance asynchronous data replication. Fibre Channel over IP enables Fibre Channel-originated traffic to pass over conventional IP infrastructures via encapsulation of Fibre Channel frames within TCP/IP. FCIP is now used for disaster recovery solutions that span thousands of miles, and because it uses standard IP services it is more economical than other WAN transports. Brocade has developed auxiliary technologies to achieve even higher performance over IP networks. Data compression, for example, can provide a 5x or greater increase in link capacity and so enable slower WAN links to carry more useful traffic. A 45 Megabits per second (Mbps) T3 WAN link typically provides about 4.5 Megabytes per second (MBps) of data throughput. By using data compression, the throughput can be increased to 25 MBps.
This is equivalent to using far more expensive 155 Mbps OC3 WAN links to achieve the same data throughput.

Likewise, significant performance improvements over conventional IP networks can be achieved with Brocade FastWrite acceleration and tape pipelining algorithms. These features dramatically reduce the protocol overhead that would otherwise occupy WAN bandwidth and enable much faster data transfers at a given link speed. Brocade FICON acceleration provides comparable functionality for mainframe environments. Collectively, these features achieve the objectives of maximizing utilization of expensive WAN services while ensuring data integrity for disaster recovery and remote replication applications.
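The T3 arithmetic above is easy to verify. The 45 Mbps to 4.5 MBps conversion implies roughly 10 line bits per payload byte once framing and protocol overhead are included; that overhead factor, and a flat 5x compression ratio (the low end of the "5x or greater" figure), are the assumptions in this sketch.

```python
# Checking the WAN throughput arithmetic above. The text's 45 Mbps -> 4.5 MBps
# conversion implies ~10 line bits per payload byte (framing plus protocol
# overhead); the 5x ratio is the low end of the quoted compression range.

BITS_PER_PAYLOAD_BYTE = 10   # ~20% overhead assumption implied by the text

def effective_mbytes_per_sec(link_mbps: float, compression: float = 1.0) -> float:
    """Usable payload throughput of a WAN link, in megabytes per second."""
    return link_mbps / BITS_PER_PAYLOAD_BYTE * compression

t3_raw = effective_mbytes_per_sec(45)        # 4.5 MBps uncompressed T3
t3_5x  = effective_mbytes_per_sec(45, 5)     # 22.5 MBps with 5x compression
oc3    = effective_mbytes_per_sec(155)       # 15.5 MBps uncompressed OC3
print(t3_raw, t3_5x, oc3)                    # 4.5 22.5 15.5
```

Note that the compressed T3 (22.5 MBps) actually exceeds an uncompressed OC3 (15.5 MBps), which is the cost argument being made: compression lets a cheaper link match or beat a more expensive one.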
Figure 36. Long-distance connectivity options using Brocade devices: Brocade DCX to DCX over DWDM, FX8-24 blades in the DCX over IP, and Brocade 7800 switches over IP.

As shown in Figure 36, the Brocade DCX and SAN extension products offer a variety of ways to implement long-distance SAN connectivity for disaster recovery and other remote implementations. For synchronous disk-to-disk data replication within a metropolitan circumference, native Fibre Channel at 8 Gbps or 10 Gbps can be driven directly from Brocade DCX ports over dark fiber or DWDM. For asynchronous replication over hundreds or thousands of miles, the Brocade 7800 and FX8-24 extension platforms convert native Fibre Channel to FCIP for transport over conventional IP network infrastructures. These solutions provide flexible options for storage architects to deploy the most appropriate form of data protection based on specific application needs. Many large data centers use a combination of extension technologies to provide both synchronous replication within metro boundaries to capture every transaction and asynchronous FCIP-based extension to more distant recovery sites as a safeguard against regional disruptions.
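The deployment rule of thumb in this section can be condensed into a simple decision: synchronous replication over dark fiber or DWDM within metro distances, asynchronous FCIP beyond. The 100-mile threshold below is the typical figure cited above; a real design would also weigh RPO/RTO targets, link cost, and the application's write-latency budget.

```python
# Sketch of the replication-mode rule of thumb described in this section.
# The 100-mile metro threshold is the typical figure cited in the text.

METRO_LIMIT_MILES = 100

def replication_mode(distance_miles: float) -> str:
    """Pick a replication transport based on site separation alone."""
    if distance_miles <= METRO_LIMIT_MILES:
        return "synchronous (native FC over dark fiber or DWDM)"
    return "asynchronous (FCIP over IP WAN)"

for d in (25, 100, 600, 3000):
    print(f"{d:5d} miles -> {replication_mode(d)}")
```

As the closing paragraph notes, large sites often deploy both: a synchronous metro copy to capture every transaction, plus an asynchronous FCIP copy at a distant site to survive regional events.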
Chapter 6: The New Data Center LAN

Building a cost-effective, energy-efficient, high-performance, and intelligent network

Just as data center fabrics bind application servers to storage, the data center Ethernet network brings server resources and processing power to clients. Although the fundamental principles of data center network design have not changed significantly, the network is under increasing pressure to serve more complex and varied client needs. According to International Data Corporation (IDC), for example, the growth of non-PC client data access is five times greater than that of conventional PC-based users, as shown by the rapid proliferation of PDAs, smart phones, and other mobile and wireless devices. This change applies to traditional in-house clients as well as external customers and puts additional pressure on both corporate intranet and Internet network access.

Bandwidth is also becoming an issue. The convergence of voice, video, graphics, and data over a common infrastructure is a driving force behind the shift from 1 GbE to 10 GbE in most data centers. Rich content is not simply a roadside attraction for modern business but a necessary competitive advantage for attracting and retaining customers. The use of multi-core processors in server platforms increases the processing power and reduces the number of requisite connections per platform, but also requires more raw bandwidth per connection. Server virtualization is having the same effect. If 20 virtual machines are now sharing the same physical network port previously occupied by one physical machine, the port speed must necessarily be increased to accommodate the potential 20x increase in client requests.

Server virtualization's dense compute environment is also driving port density in the network interconnect, especially when virtualization is installed on blade servers. Physical consolidation of network connectivity is important both for rationalizing the cable plant and for providing flexibility to accommodate mobility of VMs as applications are migrated from one platform to another. Where previously server network access was adequately served by 1 Gbps ports, top-of-rack access layer switches now must provide compact connectivity at 10 Gbps. This, in turn, requires more high-speed ports at the aggregation and core layers to accommodate higher traffic volumes.

Other trends such as software as a service (SaaS) and Web-based business applications are shifting the burden of data processing from remote or branch clients back to the data center. To maintain acceptable response times and ensure equitable service to multiple concurrent clients, preprocessing of data flows helps offload server CPU cycles and provides higher availability. Application layer (Layer 4–7) networking is therefore gaining traction as a means to balance workloads and offload networking protocol processing. By accelerating application access, more transactions can be handled in less time and with less congestion at the server front end. Web-based applications in particular benefit from a network-based hardware assist to ensure reliability and availability to internal and external users.

Even with server consolidation, blade frames, and virtualization, servers collectively still account for the majority of data center power and cooling requirements. Network infrastructure, however, still incurs a significant power and cooling overhead, and data center managers are now evaluating power consumption as one of the key criteria in network equipment selection. In addition, data center floor space is at a premium, and more compact, higher-port-density network switches can save valuable real estate.

Another cost-cutting trend for large enterprises is the consolidation of multiple data centers into one or just a few larger regional data centers. Such large-scale consolidation typically involves construction of new facilities that can leverage state-of-the-art energy efficiencies such as solar power, air economizers, flywheel technology, and hot/cold aisle floor plans (see Figure 3 on page 11). The selection of new IT equipment is also an essential factor in maximizing the benefit of consolidation, maintaining availability, and reducing ongoing operational expense. Since the new data center network infrastructure must now support client traffic that was previously distributed over multiple data centers, deploying a high-performance LAN with advanced application support is crucial for a successful consolidation strategy. In addition, the reduction of available data centers increases the need for security throughout the network infrastructure to ensure data integrity and application availability.
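The 20x figure cited earlier in this chapter is worth making concrete: consolidating 20 virtual machines onto one physical port divides its bandwidth 20 ways, which is what pushes access ports from 1 GbE to 10 GbE. The VM count and port speeds below are the chapter's illustrative numbers.

```python
# The arithmetic behind the 1 GbE -> 10 GbE shift described in this chapter:
# 20 VMs sharing one physical port divide its bandwidth 20 ways.
# Numbers are illustrative, matching the example in the text.

def per_vm_mbps(port_gbps: float, vms: int) -> float:
    """Average bandwidth available to each VM on a shared physical port."""
    return port_gbps * 1000 / vms

print(per_vm_mbps(1, 20))    # 50.0  Mbps each on a shared GbE port
print(per_vm_mbps(10, 20))   # 500.0 Mbps each after a 10 GbE upgrade
```

Fifty megabits per VM is less than many standalone servers had before consolidation; moving the shared port to 10 GbE restores headroom without adding cables.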
    • A Layered ArchitectureA Layered ArchitectureWith tens of thousands of installations worldwide, data center net-works have evolved into a common infrastructure built on multiplelayers of connectivity. The three fundamental layers common to nearlyall data center networks are the access, aggregation, and core layers.This basic architecture has proven to be the most suitable for providingflexibility, high performance, and resiliency and can be scaled frommoderate to very large infrastructures. Mission-critical General-purpose application servers application servers Access Aggregation Core External networkFigure 37. Access, aggregation, and core layers in the data centernetwork.As shown in Figure 37, the conventional three-layer network architec-ture provides a hierarchy of connectivity that enable servers tocommunicate with each other (for cluster and HPC environments) andwith external clients. Typically, higher bandwidth is provided at theaggregation and core layers to accommodate the high volume ofaccess layer inputs, although high-performance applications may alsoThe New Data Center 71
require 10 Gbps links. Scalability is achieved by adding more switches at the requisite layers as the population of physical or virtual servers and volume of traffic increases over time.

The access layer provides the direct network connection to application and file servers. Servers are typically provisioned with two or more GbE or 10 GbE network ports for redundant connectivity. Server platforms vary from standalone servers to 1U rack-mount servers and blade servers with passthrough cabling or bladed Ethernet switches. Access layer switches typically provide basic Layer 2 (MAC-based) and Layer 3 (IP-based) switching for server connectivity and often have higher-speed 10 GbE uplink ports to consolidate connectivity to the aggregation layer.

Because servers represent the highest population of platforms in the data center, the access layer functions as the fan-in point to join many dedicated network connections to fewer but higher-speed shared connections. Unless designed otherwise, the access layer is therefore typically oversubscribed in a 6:1 or higher ratio of server network ports to uplink ports. In Figure 37 on page 71, for example, the mission-critical servers could be provisioned with 10 GbE network interfaces and a 1:1 ratio for uplink. The general-purpose servers, by contrast, would be adequately supported with 1 GbE network ports and a 6:1 or higher oversubscription ratio.

Access layer switches are available in a variety of port densities and can be deployed for optimal cabling and maintenance. Options for switch placement range from top of rack to middle of rack, middle of row, and end of row. As illustrated in Figure 38, top-of-rack access layer switches are typically deployed in redundant pairs with cabling run to each racked server. This is a common configuration for medium and small server farms and enables each rack to be managed as a single entity.
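The oversubscription ratios discussed earlier are simply aggregate server-facing bandwidth divided by aggregate uplink bandwidth. A minimal sketch of the arithmetic (the port counts below are hypothetical examples, not a specific Brocade configuration):

```python
def oversubscription_ratio(server_ports, server_gbps, uplink_ports, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth at the access layer."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# 48 x 1 GbE server ports sharing 2 x 10 GbE uplinks -> 2.4:1
print(oversubscription_ratio(48, 1, 2, 10))   # 2.4
# 60 x 1 GbE server ports sharing 1 x 10 GbE uplink -> 6:1
print(oversubscription_ratio(60, 1, 1, 10))   # 6.0
```

A 1:1 ratio, as suggested for the mission-critical servers above, simply means uplink bandwidth equals the sum of the server port bandwidth.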
A middle-of-rack configuration is similar but with multiple 1U switches deployed throughout the stack to further simplify cabling. For high-availability environments, however, larger switches with redundant power supplies and switch modules can be positioned in middle-of-row or end-of-row configurations. In these deployments, middle-of-row placement facilitates shorter cable runs, while end-of-row placement requires longer cable runs to the most distant racks. In either case, high-availability network access is enabled by the hardened architecture of HA access switches.
Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy.

Examples of top-of-rack access solutions include Brocade FastIron Edge Series switches. Because different applications can have different performance and availability requirements, these access switches offer multiple connectivity options (10, 100, or 1000 Mbps and 10 Gbps) and redundant features. Within the data center, the access layer typically supports application servers but can also be used to support in-house client workstations. In conventional use, the data center access layer supports servers, while clients and workstations are connected at the network edge.

In addition to scalable server connectivity, upstream links to the aggregation layer can be optimized for high availability in metropolitan area networks (MANs) through value-added features such as the Metro Ring Protocol (MRP) and Virtual Switch Redundancy Protocol (VSRP). As discussed in more detail later in this chapter, these features replace conventional Spanning Tree Protocol (STP) for metro and campus environments with a much faster, sub-second recovery time for failed links.

For modern data centers, access layer services can also include Power over Ethernet (PoE) to support voice over IP (VoIP) telecommunications systems and wireless access points for in-house clients as well as security monitoring. The ability to provide both data and power over Ethernet greatly simplifies the wiring infrastructure and facilitates resource management.
At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-performance switches, which provide advanced routing functions and upstream connectivity to the core layer. Examples of aggregation-layer switches include the Brocade BigIron RX Series (with up to 5.12 Tbps switching capacity) with Layer 2 and Layer 3 switching and the Brocade ServerIron ADX Series with Layer 4–7 application switching. Because the aggregation layer must support the traffic flows of potentially thousands of downstream servers, performance and availability are absolutely critical.

As the name implies, the network core is the nucleus of the data center LAN and provides the top-layer switching between all devices connected via the aggregation and access layers. In a classic three-tier model, the core also provides connectivity to the external corporate network, intranet, and Internet. In addition to high-performance 10 Gbps Ethernet ports, core switches can be provisioned with OC-12 or higher WAN interfaces. Examples of network core switches include the Brocade NetIron MLX Series switches with up to 7.68 Tbps switching capacity. These enterprise-class switches provide high availability and fault tolerance to ensure reliable data access.

Consolidating Network Tiers

The access/aggregation/core architecture is not a rigid blueprint for data center networking. Although it is possible to attach servers directly to the core or aggregation layer, there are some advantages to maintaining distinct connectivity tiers. Layer 2 domains, for example, can be managed with a separate access layer linked through aggregation points. In addition, advanced service options available for aggregation-class switches can be shared by more downstream devices connected to standard access switches.
A three-tier architecture also provides flexibility in selectively deploying bandwidth and services that align with specific application requirements.

With products such as the Brocade BigIron RX Series switches, however, it is possible to collapse the functionality of a conventional multi-tier architecture into a smaller footprint. By providing support for 768 x 1 Gbps downstream ports and 64 x 10 Gbps upstream ports, consolidation of port connectivity can be achieved with an accompanying reduction in power draw and cooling overhead compared to a standard multi-switch design, as shown in Figure 39.
Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy-efficient footprint.

In this example, Layer 2 domains are segregated via VLANs, and advanced aggregation-level services can be integrated directly into the BigIron chassis. In addition, different-speed port cards can be provisioned to accommodate both moderate- and high-performance requirements, with up to 512 x 10 Gbps ports per chassis. For modern data center networks, the advantage of centralizing connectivity and management is complemented by reduced power consumption and consolidation of rack space.

Design Considerations

Although each network tier has unique functional requirements, the entire data center LAN must provide high availability, high performance, security for data flows, and visibility for management. Proper product selection and interoperability between tiers is therefore essential for building a resilient data center network infrastructure that enables maximum utilization of resources while minimizing operational expense. A properly designed network infrastructure, in turn, is a foundation layer for building higher-level network services to automate data transport processes such as network resource allocation and proactive network management.

Consolidate to Accommodate Growth

One of the advantages of a tiered data center LAN infrastructure is that it can be expanded to accommodate growth of servers and clients by adding more switches at the appropriate layers. Unfortunately, this frequently results in the spontaneous acquisition of more and more equipment over time as network managers react to increasing demand. At some point the sheer number of network devices makes the network difficult to manage and troubleshoot, increases the complexity of the cable plant, and invariably introduces congestion points that degrade network performance.
Network consolidation via larger, higher-port-density switches can help resolve space and cooling issues in the data center, and it can also facilitate planning for growth. Brocade BigIron RX Series switches, for example, are designed to scale from moderate to high port-count requirements in a single chassis for both access and aggregation layer deployment (greater than 1500 x 1 Gbps or 512 x 10 Gbps ports). Increased port density alone, however, is not sufficient to accommodate growth if increasing the port count results in degraded performance on each port. Consequently, BigIron RX Series switches are engineered to support over 5 Tbps aggregate bandwidth to ensure that even fully loaded configurations deliver wire-speed throughput.

From a management standpoint, network consolidation significantly reduces the number of elements to configure and monitor and streamlines microcode upgrades. A large multi-slot chassis that replaces 10 discrete switches, for example, simplifies the network management map and makes it much easier to identify traffic flows through the network infrastructure.

Network Resiliency

Early proprietary data communications networks based on SNA and 3270 protocols were predicated on high availability for remote user access to centralized mainframe applications. IP networking, by contrast, was originally a best-effort delivery mechanism designed to function in potentially congested or lossy infrastructures (for example, disruption due to nuclear exchange). Now that IP networking is the mainstream mechanism for virtually all business transactions worldwide, high availability is absolutely essential for day-to-day operations and best-effort delivery is no longer acceptable.

Network resiliency has two major components: the high-availability architecture of individual switches and the high-availability design of a multi-switch network.
For the former, redundant power supplies, fan modules, and switching blades ensure that an individual unit can withstand component failures. For the latter, redundant pathing through the network using failover links and routing protocols ensures that the loss of an individual switch or link will not result in loss of data access.

Resilient routing protocols such as Virtual Router Redundancy Protocol (VRRP) as defined in RFC 3768 provide a standards-based mechanism to ensure high-availability access to a network subnet even if a primary router or path fails. Multiple routers can be configured as a single virtual router. If a master router fails, a backup router automatically assumes the routing task for continued service, typically within 3 seconds of failure detection. VRRP Extension (VRRPE) is an extension of
VRRP that uses Bidirectional Forwarding Detection (BFD) to shrink the failover window to about 1 second. Because networks now carry more latency-sensitive protocols such as voice over IP, failover must be performed as quickly as possible to ensure uninterrupted access.

Timing can also be critical for Layer 2 network segments. At Layer 2, resiliency is enabled by the Rapid Spanning Tree Protocol (RSTP). Spanning tree allows redundant pathing through the network while disabling redundant links to prevent loops. If a primary link fails, conventional STP can identify the failure and enable a standby link within 30 to 50 seconds. RSTP decreases the failover window to about 1 second. Innovative protocols such as Brocade Virtual Switch Redundancy Protocol (VSRP) and Metro Ring Protocol (MRP), however, can accelerate the failover process to a sub-second response time. In addition to enhanced resiliency, VSRP enables more efficient use of network resources by allowing a link that is in standby or blocked mode for one VLAN to be active for another VLAN.

Network Security

Data center network administrators must now assume that their networks are under a constant threat of attack from both internal and external sources. Attack mechanisms such as denial of service (DoS) are today well understood and typically blocked by a combination of access control lists (ACLs) and rate-limiting algorithms that prevent packet flooding. Brocade, for example, provides enhanced hardware-based, wire-speed ACL processing to block DoS and the more sinister distributed DoS (DDoS) attacks. Unfortunately, hackers are constantly creating new means to penetrate or disable corporate and government networks, and network security requires more than the deployment of conventional firewalls.

Continuous traffic analysis to monitor the behavior of hosts is one means to guard against intrusion.
The sFlow (RFC 3176) standard defines a process for sampling network traffic at wire speed without impacting network performance. Packet sampling is performed in hardware by switches and routers in the network, and samples are forwarded to a central sFlow server or collector for analysis. Abnormal traffic patterns or host behavior can then be identified and proactively responded to in real time. Brocade IronView Network Manager (INM), for example, incorporates sFlow for continuous monitoring of the network in addition to ACL and rate-limiting management of network elements.
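The estimation behind packet sampling is straightforward: with 1-in-N sampling, each sample stands in for roughly N packets on the wire. A minimal sketch of the collector-side scaling step (an illustration of the statistics, not the sFlow wire protocol itself; the sampling rate and packet sizes below are hypothetical):

```python
def estimate_traffic(sample_count, sampling_rate, avg_packet_bytes, interval_seconds):
    """Scale sampled packet counts up to an estimate of total link traffic.

    sampling_rate is N in "1-in-N" sampling, so each received sample
    represents roughly N packets observed on the monitored link.
    """
    est_packets = sample_count * sampling_rate
    est_bits = est_packets * avg_packet_bytes * 8
    return est_packets, est_bits / interval_seconds  # (packets, bits per second)

# 2,000 samples at 1-in-512 sampling over 60 s, ~800-byte average packets
packets, bps = estimate_traffic(2000, 512, 800, 60)
print(packets, round(bps / 1e6, 1))  # 1024000 packets, ~109.2 Mbps
```

Because only a small fraction of packets is copied to the collector, this style of monitoring scales to wire speed without burdening the forwarding hardware.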
Other security considerations include IP address spoofing and network segmentation. Unicast Reverse Path Forwarding (uRPF) as defined in RFC 3704 provides a means to block packets from sources that have not already been registered in a router's routing information base (RIB) or forwarding information base (FIB). Address spoofing is typically used to disguise the source of DoS attacks, so uRPF is a further defense against attempts to overwhelm network routers. Another spoofing hazard is Address Resolution Protocol (ARP) spoofing, which attempts to associate an attacker's MAC address with a valid user IP address to sniff or modify data between legitimate hosts. ARP spoofing can be thwarted via ARP inspection, or monitoring of ARP requests to ensure that only valid queries are allowed.

For very large data center networks, risks to the network as a whole can be reduced by segmenting the network through use of Virtual Routing and Forwarding (VRF). VRF is implemented by enabling a router with multiple independent instances of routing tables, essentially turning a single router into multiple virtual routers. A single physical network can thus be subdivided into multiple virtual networks with traffic isolation between designated departments or applications. Brocade switches and routers provide an entire suite of security protocols and services to protect the data center network and maintain stable operation and management.

Power, Space, and Cooling Efficiency

According to The Server and StorageIO Group, IT consultants, network infrastructure contributes only from 10% to 15% of IT equipment power consumption in the data center, as shown in Figure 40.
Compared to server power consumption at 48%, 15% may not seem a significant number, but considering that a typical data center can spend close to a million dollars per year on power, the energy efficiency of every piece of IT equipment represents a potential savings. Closer cooperation between data center administrators and the facilities management responsible for the power bill can lead to a closer examination of the power draw and cooling requirements of network equipment and selection of products that provide both performance and availability as well as lower energy consumption. Especially for networking products, there can be a wide disparity between vendors who have integrated energy efficiency into their product design philosophy and those who have not.
Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage.

Designing for data center energy efficiency includes product selection that provides the highest productivity with the least energy footprint. Use of high-port-density switches, for example, can reduce the total number of power supplies, fans, and other components that would otherwise be deployed if smaller switches were used. Combining access and aggregation layers with a BigIron RX Series switch likewise reduces the total number of elements required to support host connectivity. Selecting larger end-of-row access-layer switches instead of individual top-of-rack switches has a similar effect.

The increased energy efficiency of these network design options, however, still ultimately depends on how the vendor has incorporated energy-saving components into the product architecture. As with SAN products such as the Brocade DCX Backbone, Brocade LAN solutions are engineered for energy efficiency and consume less than a fourth of the power of competing products in comparable classes of equipment.

Network Virtualization

The networking complement to server virtualization is a suite of virtualization protocols that enable extended sections of a shared multi-switch network to function as independent LANs (VLANs) or a single switch to operate as multiple virtual switches (Virtual Routing and Forwarding, as discussed earlier). In addition, protocols such as virtual IPs (VIPs) can be used to extend virtual domains between data centers or multiple sites over distance.
As with server virtualization, the intention of network virtualization is to maximize productive use of existing infrastructure to reinforce traffic separation, security, availability, and performance. Application separation via VLANs at Layer 2 or VRF at Layer 3, for example, can provide a means to better meet service-level agreements (SLAs) and conform to regulatory compliance requirements. Likewise, network virtualization can be used to create logically separate security zones for policy enforcement without deploying physically separate networks.

Application Delivery Infrastructure

One of the major transformations in business applications over the past few years has been the shift from conventional applications to Web-based enterprise applications. Use of Internet-enabled protocols such as HTTP (HyperText Transfer Protocol) and HTTPS (HyperText Transfer Protocol Secure) has streamlined application development and delivery and is now a prerequisite for next-generation cloud computing solutions. At the same time, however, Web-based enterprise applications present a number of challenges due to increased network and server loads, increased user access, greater application load, and security concerns. The concurrent proliferation of virtualized servers helps to alleviate the application workload issues but adds complexity in designing resilient configurations that can provide continuous access. As discussed in “Chapter 3: Doing More with Less” starting on page 17, implementing a successful server virtualization plan requires careful attention to both upstream LAN network impact as well as downstream SAN impact. Application delivery controllers (also known as Layer 4–7 switches) provide a particularly effective means to address the upstream network consequences of increased traffic volumes when Web-based enterprise applications are supported on higher populations of virtualized servers.

Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure.
As illustrated in Figure 41, conventional network switching and routing cannot prevent higher traffic volumes generated by user activity from overwhelming applications. Without a means to balance the workload between application servers, response time suffers even when the number of application server instances has been increased via server virtualization. In addition, whether access is over the Internet or through a company intranet, security vulnerabilities such as DoS attacks still exist.

The Brocade ServerIron ADX application delivery controller addresses these problems by providing hardware-assisted protocol processing offload, server workload balancing, and firewall protection to ensure that application access is distributed among the relevant servers and that access is secured. As shown in Figure 42, this solution solves multiple application-related issues simultaneously. By implementing Web-based protocol processing offload, server CPU cycles can be used more efficiently to process application requests. In addition, load balancing across multiple servers hosting the same application can ensure that no individual server or virtual machine is overwhelmed with requests. By also offloading HTTPS/SSL security protocols, the Brocade ServerIron ADX provides the intended level of security without further burdening the server pool. The Brocade ServerIron ADX also provides protection against DoS attacks and so facilitates application availability.

Figure 42. Application workload balancing, protocol processing offload, and security via the Brocade ServerIron ADX.
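Server workload balancing of the kind described here can be as simple as directing each new request to the back-end server with the fewest active connections. A minimal least-connections sketch (illustrative only; an application delivery controller implements this class of logic in hardware with many more policies, such as weights and health checks):

```python
class LeastConnectionsBalancer:
    """Direct each new request to the server handling the fewest connections."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # server -> open connection count

    def assign(self):
        # Choose the least-loaded server and account for the new connection
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # A connection closed; the server becomes a better candidate again
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["web1", "web2", "web3"])
first = lb.assign()   # all idle: any server may be chosen
second = lb.assign()  # an idle server is chosen, never the busy one
print(first, second)
```

The same idea generalizes to global server load balancing by ranking entire sites instead of individual servers, adding network response time to the selection criteria.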
The value of application delivery controllers in safeguarding and equalizing application workloads appreciates as more business applications shift to Web-based applications. Cloud computing is the ultimate extension of this trend, with the applications themselves migrating from clients to enterprise application servers, which can be physically located across dispersed data center locations or outsourced to service providers. The Brocade ServerIron ADX provides global server load balancing (GSLB) for load balancing not only between individual servers but between geographically dispersed server or VM farms. With GSLB, clients are directed to the best site for the fastest content delivery given current workloads and optimum network response time. This approach also integrates enterprise-wide disaster recovery for application access without disruption to client transactions.

As with other network solutions, the benefits of application delivery controller technology can be maintained only if the product architecture maintains or improves client performance. Few network designers are willing to trade network response time for enhanced services. The Brocade ServerIron ADX, for example, provides an aggregate of 70 Gbps Layer 7 throughput and over 16 million Layer 4 transactions per second. Although the Brocade ServerIron ADX sits physically in the path between the network and its servers, performance is actually substantially increased compared to conventional connectivity. In addition to performance, the Brocade ServerIron ADX maintains Brocade's track record of providing the industry's most energy-efficient network products by using less than half the power of the closest competing application delivery controller product.
Chapter 7: Orchestration

Automating data center processes

So far virtualization has been “the” buzzword of twenty-first century IT parlance and unfortunately has undergone depreciation due to overuse, in particular over-marketing, of the term. As with elephants and blind men, virtualization appears to mean different things to different people depending on their areas of responsibility and unique issues. For revitalizing an existing data center or designing a new one, the umbrella term “virtualization” covers three primary domains: virtualization of compute power in the form of server virtualization, virtualization of data storage capacity in the form of storage virtualization, and virtualization of the data transport in the form of network virtualization. The common denominator between these three primary domains of virtualization is the use of new technology to streamline and automate IT processes while maximizing productive use of the physical IT infrastructure.

As with graphical user interfaces, virtualization hides the complexity of underlying hardware elements and configurations. The complexity does not go away but is now the responsibility of the virtualization layer inserted between physical and logical domains. From a client perspective, for example, an application running on a single physical server behaves the same as one running on a virtual machine. In this example, the hypervisor assumes responsibility for supplying all the expected CPU, memory, I/O, and other elements typical of a conventional server. The level of actual complexity of a virtualized environment is powers of ten greater than ordinary configurations, but so is the level of productivity and resource utilization. The same applies to the other domains of storage and network virtualization, and this places tremendous importance on the proper selection of products to extend virtualization across the enterprise.
Next-generation data center design necessarily incorporates a variety of virtualization technologies, but to virtualize the entire data center requires first of all a means to harmoniously orchestrate these technologies into an integral solution, as depicted in Figure 43. Because no single vendor can provide all the myriad elements found in a modern data center, orchestration requires vendor cooperation and new open systems standards to ensure stability and resilience. The alternative is proprietary solutions and products and the implicit vendor monopoly that accompanies single-source technologies. The market long ago rejected this vendor lock-in and has consistently supported an open systems approach to technology development and deployment.

Figure 43. Open systems-based orchestration between virtualization domains.

For large-scale virtualization environments, standards-based orchestration is all the more critical because virtualization in each domain is still undergoing rapid technical development. The Distributed Management Task Force (DMTF), for example, developed the Open Virtual Machine Format (OVF) standard for VM deployment and mobility. The Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) includes open standards for deployment and management of virtual storage environments. The American National Standards Institute T11.5 work group developed the Fabric Application Interface Standard (FAIS) to promote open APIs for implementing storage virtualization via the fabric. IEEE and IETF have progressively developed more sophisticated open standards for network virtualization, from VLANs to VRF. The development of open standards and
common APIs is the prerequisite for developing comprehensive orchestration frameworks that can automate the creation, allocation, and management of virtualized resources across data center domains. In addition, open standards become the guideposts for further development of specific virtualization technologies, so that vendors can develop products with a much higher degree of interoperability.

Data center orchestration assumes that a single conductor, in this case a single management framework, provides configuration, change, and monitoring management over an IT infrastructure that is based on a complex of virtualization technologies. This in turn implies that the initial deployment of an application, any changes to its environment, and proactive monitoring of its health are no longer manual processes but are largely automated according to a set of defined IT policies. Enabled by open APIs in the server, storage, and network domains, the data center infrastructure automatically allocates the requisite CPU, memory, I/O, and resilience for a particular application; assigns storage capacity, boot LUNs, and any required security or QoS parameters needed for storage access; and provides optimized client access through the data communications network such as VLANs, application delivery, load balancing, or other network tuning to support the application. As application workloads change over time, the application can be migrated from one server resource to another, storage volumes increased or decreased, QoS levels adjusted appropriately, security status changed, and bandwidth adjusted for upstream client access. The ideal of data center orchestration is that the configuration, deployment, and management of applications on the underlying infrastructure should require little or no human intervention and instead rely on intelligence engineered into each domain.

Of course, servers, storage, and network equipment do not rack themselves up and plug themselves in.
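The policy-driven flow described above, allocating compute, assigning storage, and tuning the network from one set of defined IT policies, can be pictured as a small orchestration routine that touches each domain through its API. Everything here is hypothetical: the class and method names are illustrative stand-ins, not an actual Brocade, DMTF, or SNIA interface.

```python
from dataclasses import dataclass

@dataclass
class AppPolicy:
    """Declarative intent: what one application needs from each domain."""
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    vlan_id: int
    min_bandwidth_mbps: int

class ComputeDomain:
    def create_vm(self, policy):
        return {"vm": policy.name, "vcpus": policy.vcpus, "memory_gb": policy.memory_gb}

class StorageDomain:
    def allocate_lun(self, policy):
        return {"lun_for": policy.name, "capacity_gb": policy.storage_gb}

class NetworkDomain:
    def connect(self, policy):
        return {"vlan": policy.vlan_id, "guaranteed_mbps": policy.min_bandwidth_mbps}

def provision(policy):
    """One policy drives all three virtualization domains, with no manual steps."""
    return {
        "compute": ComputeDomain().create_vm(policy),
        "storage": StorageDomain().allocate_lun(policy),
        "network": NetworkDomain().connect(policy),
    }

plan = provision(AppPolicy("crm-app", vcpus=4, memory_gb=16,
                           storage_gb=500, vlan_id=120, min_bandwidth_mbps=1000))
print(plan["network"]["vlan"])  # 120
```

The point of the sketch is the shape of the workflow: the administrator states intent once, and each domain's API carries out its share automatically.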
The physical infrastructure must be properly sized, planned, selected, and deployed before logical automation and virtualization can be applied. With tight budgets, it may not be possible to provision all the elements needed for full data center orchestration, but careful selection of products today can lay the groundwork for fuller implementation tomorrow.

With average corporate data growth at 60% per year, data center orchestration is becoming a business necessity. Companies cannot continue to add staff to manage increased volumes of applications and data, and administrator productivity cannot meet growth rates without full-scale virtualization and automation of IT processes. Servers, storage, and networking, which formerly stood as isolated
management domains, are being transformed into interrelated services. For Brocade technology, network infrastructure as a service requires richer intelligence in the network to coordinate provisioning of bandwidth, QoS, resiliency, and security features to support server and storage services.

Because this uber-technology is still under construction, not all necessary components are currently available, but substantial progress has been made. Server virtualization, for example, is now a mature technology that is moving from secondary applications to primary ones. Brocade is working with VMware, Microsoft, and others to coordinate communication between the SAN and LAN infrastructure and various virtualization hypervisors so that proactive monitoring of storage bandwidth and QoS can trigger migration of VMs to more available resources should congestion occur, as shown in Figure 44.

Figure 44. Brocade Management Pack for Microsoft System Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration.

The SAN Call Home events displayed in the Microsoft System Center Operations Manager interface are shown in Figure 50 on page 94.
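The congestion-triggered VM migration just described amounts to a threshold policy: when a host's storage path exceeds a utilization limit, move workloads to the least-loaded alternative. A minimal sketch of that decision (hypothetical host names, utilization figures, and threshold; the actual Management Pack logic is more involved):

```python
def pick_migration_target(hosts, congested_host, threshold=0.8):
    """If a host's storage-path utilization exceeds the threshold,
    choose the least-utilized other host as the migration target."""
    if hosts[congested_host] < threshold:
        return None  # no congestion: no action needed
    candidates = {h: u for h, u in hosts.items() if h != congested_host}
    target = min(candidates, key=candidates.get)
    # Refuse to migrate onto a host that is itself near saturation
    return target if candidates[target] < threshold else None

utilization = {"esx1": 0.92, "esx2": 0.35, "esx3": 0.60}
print(pick_migration_target(utilization, "esx1"))  # esx2
```

In the real integration, the utilization figures would come from fabric monitoring (for example, Brocade DCFM and the HBA QoS engine), and the migration itself would be invoked through the hypervisor manager rather than decided locally.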
On the storage front, Brocade supports fabric-based storage virtualization with the Brocade FA4-18 Application Blade and Brocade's Storage Application Services (SAS) APIs. Based on FAIS standards, the Brocade FA4-18 supports applications such as EMC Invista to maximize efficient utilization of storage assets. For client access, the Brocade ADX application delivery controller automates load balancing of client requests and offloads upper-layer protocol processing from the destination VMs. Other capabilities such as 10 Gigabit Ethernet and 8 Gbps Fibre Channel connectivity, fabric-based storage encryption, and virtual routing protocols can help data center network designers allocate enhanced bandwidth and services to accommodate both current requirements and future growth. Collectively, these building blocks facilitate higher degrees of data center orchestration to achieve the IT business goal of doing far more with much less.
Chapter 8: Brocade Solutions Optimized for Server Virtualization

Enabling server consolidation and end-to-end fabric management

Brocade has engineered a number of different network components that enable server virtualization in the data center fabric. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The server connectivity and convergence products described in this chapter are:

• "Server Adapters" on page 89
• "Brocade 8000 Switch and FCOE10-24 Blade" on page 92
• "Access Gateway" on page 93
• "Brocade Management Pack" on page 94
• "Brocade ServerIron ADX" on page 95

Server Adapters

In mid-2008, Brocade released a family of 8 and 4 Gbps Fibre Channel HBAs. Highlights of Brocade FC HBAs include:

• Maximizes bus throughput with a Fibre Channel-to-PCIe 2.0a Gen2 (x8) bus interface with intelligent lane negotiation
• Prioritizes traffic and minimizes network congestion with target rate limiting, frame-based prioritization, and 32 Virtual Channels per port with guaranteed QoS

The New Data Center 89
• Enhances security with Fibre Channel-Security Protocol (FC-SP) for device authentication and hardware-based AES-GCM; ready for in-flight data encryption
• Supports virtualized environments with NPIV for 255 virtual ports
• Uniquely enables end-to-end (server-to-storage) management in Brocade Data Center Fabric environments

Brocade 825/815 FC HBA

The Brocade 815 (single port) and Brocade 825 (dual ports) 8 Gbps Fibre Channel-to-PCIe HBAs provide industry-leading server connectivity through unmatched hardware capabilities and unique software configurability. This class of HBAs is designed to help IT organizations deploy and manage true end-to-end SAN service across next-generation data centers.

Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown).

The Brocade 8 Gbps FC HBA also:

• Maximizes I/O transfer rates with up to 500,000 IOPS per port at 8 Gbps
• Utilizes N_Port Trunking capabilities to create a single logical 16 Gbps high-speed link

90 The New Data Center
Brocade 425/415 FC HBA

The Brocade 4 Gbps FC HBA has capabilities similar to those described for the 8 Gbps version. The Brocade 4 Gbps FC HBA also:

• Maximizes I/O transfer rates with up to 500,000 IOPS per port at 4 Gbps
• Utilizes N_Port Trunking capabilities to create a single logical 8 Gbps high-speed link

Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown).

Brocade FCoE CNAs

The Brocade 1010 (single port) and Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNAs provide server I/O consolidation by transporting both storage and Ethernet networking traffic across the same physical connection. Industry-leading hardware capabilities, unique software configurability, and unified management all contribute to exceptional flexibility.

The Brocade 1000 Series CNAs combine the powerful capabilities of storage (Fibre Channel) and networking (Ethernet) devices. This approach helps improve TCO by significantly reducing power, cooling, and cabling costs through the use of a single adapter. It also extends storage and networking investments, including investments made in management and training. Utilizing hardware-based virtualization acceleration capabilities, organizations can optimize performance in virtual environments to increase overall ROI and improve TCO even further.

The New Data Center 91
Leveraging IEEE standards for Data Center Bridging (DCB), the Brocade 1000 Series CNAs provide a highly efficient way to transport Fibre Channel storage traffic over Ethernet links, addressing the highly sensitive nature of storage traffic.

Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA.

Brocade 8000 Switch and FCOE10-24 Blade

The Brocade 8000 is a top-of-rack Layer 2 CEE/FCoE switch with 24 x 10 GbE ports for LAN connections and 8 x FC ports (with up to 8 Gbps speed) for Fibre Channel SAN connections. The Brocade 8000 provides advanced Fibre Channel services, supports Ethernet and CEE capabilities, and is managed by Brocade DCFM.

Supporting Windows and Linux environments, the Brocade 8000 Switch enables access to both LANs and SANs over a common server connection by utilizing Converged Enhanced Ethernet (CEE) and FCoE protocols. LAN traffic is forwarded to aggregation-layer Ethernet switches using conventional 10 GbE connections, and storage traffic is forwarded to Fibre Channel SANs over 8 Gbps FC connections.

Figure 48. Brocade 8000 Switch.

92 The New Data Center
The Brocade FCOE10-24 Blade is a Layer 2 blade with a cut-through, non-blocking architecture designed for use with the Brocade DCX. It features 24 x 10 Gbps CEE ports and extends CEE/FCoE capabilities to backbone platforms, enabling end-of-row CEE/FCoE deployment. By providing first-hop connectivity for access-layer servers, the Brocade FCOE10-24 also enables server I/O consolidation for servers with Tier 3 and some Tier 2 virtualized applications.

Figure 49. Brocade FCOE10-24 Blade.

Access Gateway

Brocade Access Gateway simplifies server and storage connectivity by enabling direct connection of servers to any SAN fabric, enhancing scalability by eliminating the switch domain identity and simplifying local switch device management. Brocade blade server SAN switches and the Brocade 300 and Brocade 5100 rack-mount switches are key components of enterprise data centers, bringing a wide variety of scalability, manageability, and cost advantages to SAN environments. These switches can be used in Access Gateway mode, available in the standard Brocade Fabric OS, for enhanced server connectivity to SANs.

Access Gateway provides:

• Seamless connectivity with any SAN fabric
• Improved scalability
• Simplified management
• Automatic failover and failback for high availability
• Lower total cost of ownership

The New Data Center 93
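The mechanism behind these Access Gateway benefits can be sketched conceptually: instead of joining the fabric as a switch (which consumes a domain ID), the gateway uses NPIV to register each server connection as an additional virtual N_Port on its fabric-facing uplinks. The toy model below is not FOS code; the class and method names are invented, and the 255-login limit is borrowed from the NPIV figure cited for the adapters earlier in this chapter.

```python
class AccessGatewayModel:
    """Toy model: server connections mapped through NPIV-enabled uplinks.

    Because servers are presented as NPIV logins rather than as ports on
    a new switch, the gateway adds no switch domain to the fabric, which
    sidesteps switch-to-switch interoperability concerns.
    """
    MAX_NPIV_LOGINS = 255  # assumed per-uplink NPIV login limit

    def __init__(self, uplinks):
        # uplink name -> list of server WWPNs logged in through it
        self.logins = {u: [] for u in uplinks}

    def connect_server(self, server_wwpn):
        # Balance new server logins across the least-loaded uplink.
        uplink = min(self.logins, key=lambda u: len(self.logins[u]))
        if len(self.logins[uplink]) >= self.MAX_NPIV_LOGINS:
            raise RuntimeError("no NPIV capacity left on any uplink")
        self.logins[uplink].append(server_wwpn)
        return uplink

    def fail_over(self, dead_uplink):
        # Automatic failover: re-home logins from a failed uplink onto
        # the surviving uplinks (round-robin in this sketch).
        stranded = self.logins.pop(dead_uplink)
        survivors = list(self.logins)
        for i, wwpn in enumerate(stranded):
            self.logins[survivors[i % len(survivors)]].append(wwpn)
```

In this model, losing an uplink simply re-homes its logins, which is the spirit of the "automatic failover and failback" bullet above; the real product does this at the Fibre Channel login level.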
Access Gateway mode eliminates traditional heterogeneous switch-to-switch interoperability challenges by utilizing NPIV standards to present Fibre Channel server connections as logical devices to SAN fabrics. Attaching through NPIV-enabled edge switches or directors, Access Gateway seamlessly connects servers to Brocade, McDATA, Cisco, or other SAN fabrics.

Brocade Management Pack

Brocade Management Pack for Microsoft System Center monitors the health and performance of Brocade HBA-to-SAN links and works with Microsoft System Center to provide intelligent recommendations for dynamically optimizing the performance of virtualized workloads. It provides Brocade HBA performance and health monitoring capabilities to System Center Operations Manager (SCOM), and that information can be used to dynamically optimize server resources in virtualized data centers via System Center Virtual Machine Manager (SCVMM).

It enables real-time monitoring of Brocade HBA links through SCOM, combined with proactive remediation action in the form of recommended Performance and Resource Optimization (PRO) Tips handled by SCVMM. As a result, IT organizations can improve efficiency while reducing their overall operating costs.

Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Manager interface.

94 The New Data Center
Brocade ServerIron ADX

The Brocade ServerIron ADX Series of switches provides Layer 4–7 switching performance in an intelligent, modular application delivery controller platform. The switches, including the ServerIron ADX 1000, 4000, and 10000 models, enable highly secure and scalable service infrastructures to help applications run more efficiently and with higher availability. ServerIron ADX switches use detailed application message information beyond the traditional Layer 2 and 3 packet headers, directing client requests to the most available servers. These intelligent application switches transparently support any TCP- or UDP-based application by providing specialized acceleration, content caching, firewall load balancing, network optimization, and host offload features for Web services.

The Brocade ServerIron ADX Series also provides a reliable line of defense by securing servers and applications against many types of intrusion and attack without sacrificing performance.

All Brocade ServerIron ADX switches forward traffic flows based on Layer 4–7 definitions, and deliver industry-leading performance for higher-layer application switching functions. Superior content switching capabilities include customizable rules based on URL, HOST, and other HTTP headers, as well as cookies, XML, and application content.

Brocade ServerIron ADX switches simplify server farm management and application upgrades by enabling organizations to easily remove and insert resources into the pool. The Brocade ServerIron ADX provides hardware-assisted, standards-based network monitoring for all application traffic, improving manageability and security for network and server resources. Extensive and customizable service health check capabilities monitor Layer 2, 3, 4, and 7 connectivity along with service availability and server response, enabling real-time problem detection.
To optimize application availability, these switches support many high-availability mode options, with real-time session synchronization between two Brocade ServerIron ADX switches to protect against session loss during outages.

Figure 51. Brocade ServerIron ADX 1000.

The New Data Center 95
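As a rough illustration of the Layer 4–7 decisions described above, the sketch below selects a server pool from HTTP attributes and then picks the most available healthy server in that pool. It is a generic model of content switching plus health checking, not the ServerIron ADX rule syntax; every rule, field, and pool name here is invented.

```python
def choose_server(request, rules, pools):
    """Pick a backend server for an HTTP request.

    request: dict with "host", "url", "headers"
    rules:   ordered list of (predicate, pool_name) pairs
    pools:   pool_name -> list of dicts with "name", "healthy",
             "active_connections"
    """
    # First matching content rule wins, like an ordered L7 policy.
    pool_name = next((pool for pred, pool in rules if pred(request)),
                     "default")
    # Health checks gate which servers are eligible at all.
    healthy = [s for s in pools[pool_name] if s["healthy"]]
    if not healthy:
        return None  # health checks removed every server; fail the request
    # "Most available" here means least active connections.
    return min(healthy, key=lambda s: s["active_connections"])["name"]

# Example rules: route by URL prefix and by Host header.
rules = [
    (lambda r: r["url"].startswith("/api/"), "api-pool"),
    (lambda r: r["host"] == "static.example.com", "cache-pool"),
]
```

A hardware ADC evaluates the equivalent of these predicates at wire speed and tracks health and connection counts per real server; the Python version only shows the shape of the decision.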
Chapter 9: Brocade SAN Solutions

Meeting the most demanding data center requirements today and tomorrow

Brocade leads the pack in networked storage, from the development of Fibre Channel to its current family of high-performance, energy-efficient SAN switches, directors, and backbones and advanced fabric capabilities such as encryption and distance extension. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The SAN products described in this chapter are:

• "Brocade DCX Backbones (Core)" on page 98
• "Brocade 8 Gbps SAN Switches (Edge)" on page 100
• "Brocade Encryption Switch and FS8-18 Encryption Blade" on page 105
• "Brocade 7800 Extension Switch and FX8-24 Extension Blade" on page 106
• "Brocade Optical Transceiver Modules" on page 107
• "Brocade Data Center Fabric Manager" on page 108

The New Data Center 97
Brocade DCX Backbones (Core)

The Brocade DCX and DCX-4S Backbones offer flexible management capabilities as well as Adaptive Networking services and fabric-based applications to help optimize network and application performance. To minimize risk and costly downtime, the platform leverages the proven "five-nines" (99.999%) reliability of hundreds of thousands of Brocade SAN deployments.

Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone.

The Brocade DCX facilitates the consolidation of server-to-server, server-to-storage, and storage-to-storage networks with highly available, lossless connectivity. In addition, it operates natively with Brocade and Brocade M-Series components, extending SAN investments for maximum ROI. It is designed to support a broad range of current and emerging network protocols to form a unified, high-performance data center fabric.

98 The New Data Center
Table 1. Brocade DCX Capabilities

Industry-leading performance capabilities for large enterprises:
• Industry-leading 8 Gbps per-port, full line-rate performance
• 13 Tbps aggregate dual-chassis bandwidth (6.5 Tbps for a single chassis)
• 1 Tbps of aggregate ICL bandwidth
• More than 5x the performance of competitive offerings

High scalability:
• High-density, bladed architecture
• Up to 384 8 Gbps Fibre Channel ports in a single chassis
• Up to 768 8 Gbps Fibre Channel ports in a dual-chassis configuration
• 544 Gbps aggregate bandwidth per slot plus local switching
• Fibre Channel Integrated Routing
• Specialty blades for 10 Gbps connectivity, Fibre Channel Routing over IP, and fabric-based applications

Energy efficiency:
• Energy efficiency of less than one-half watt per Gbps
• 10x more energy efficient than competitive offerings

Ultra-high availability:
• Designed to support 99.99% uptime
• Passive backplane; separate and redundant control processor and core switching blades
• Hot-pluggable components, including redundant power supplies, fans, WWN cards, blades, and optics

Fabric services and applications:
• Adaptive Networking services, including QoS, ingress rate limiting, traffic isolation, and Top Talkers
• Plug-in services for fabric-based storage virtualization, continuous data protection and replication, and online data migration

Multiprotocol capabilities and fabric interoperability:
• Support for Fibre Channel, FICON, FCIP, and IPFC
• Designed for future 10 Gigabit Ethernet (GbE), Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE)
• Native connectivity in Brocade and Brocade M-Series fabrics, including backward and forward compatibility

The New Data Center 99
Table 1. Brocade DCX Capabilities (continued)

Intelligent management and monitoring:
• Full utilization of the Brocade Fabric OS embedded operating system
• Flexibility to utilize a CLI, Brocade DCFM, Brocade Advanced Web Tools, and Brocade Advanced Performance Monitoring
• Integration with third-party management tools

Brocade 8 Gbps SAN Switches (Edge)

Industry-leading Brocade switches are the foundation for connecting servers and storage devices in SANs, enabling organizations to access and share data in a high-performance, manageable, and scalable manner. To protect existing investments, Brocade switches are fully forward and backward compatible, providing a seamless migration path to 8 Gbps connectivity and future technologies. This capability enables organizations to deploy 1, 2, 4, and 8 Gbps fabrics with highly scalable core-to-edge configurations.

Brocade standalone switch models offer flexible configurations ranging from 8 to 80 ports, and can function as core or edge switches, depending upon business requirements. With native E_Port interoperability, Brocade switches connect to the vast majority of fabrics in operation today, allowing organizations to seamlessly integrate and scale their existing SAN infrastructures. Moreover, Brocade switches are backed by FOS engineering, test, and support expertise to provide reliable operation in mixed fabrics. All switches feature flexible port configuration with Ports On Demand capabilities for straightforward scalability. Organizations can also experience high performance between switches by using Brocade ISL Trunking to achieve up to 64 Gbps total throughput.

100 The New Data Center
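The ISL Trunking figure just mentioned is simple aggregation arithmetic: a trunk group of 8 Gbps links behaves as one logical pipe. The sketch below is a toy model, not switch firmware; `trunk_capacity_gbps` and `distribute_frames` are invented names, and in the real product the frame distribution (plus the link deskew that keeps frames in order) happens in ASIC hardware.

```python
def trunk_capacity_gbps(link_speed_gbps, num_links):
    """Aggregate bandwidth of a trunk group, e.g. 8 x 8 Gbps = 64 Gbps."""
    return link_speed_gbps * num_links

def distribute_frames(frame_sizes, num_links):
    """Toy frame-level striping across trunk members.

    Each frame goes to the member link with the least queued bytes,
    keeping the links evenly loaded; returns (per-frame link choice,
    per-link byte totals).
    """
    load = [0] * num_links          # bytes queued per member link
    assignment = []
    for size in frame_sizes:
        link = load.index(min(load))
        load[link] += size
        assignment.append(link)
    return assignment, load

assert trunk_capacity_gbps(8, 8) == 64
```

Frame-level (rather than flow-level) distribution is what lets a single large data stream fill the whole trunk instead of being pinned to one member link.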
Brocade switches meet high-availability requirements, with the Brocade 5300, 5100, and 300 switches offering redundant, hot-pluggable components. All Brocade switches feature non-disruptive software upgrades, automatic path rerouting, and extensive diagnostics. Leveraging the Brocade networking model, these switches can provide a fabric capable of delivering overall system availability.

Designed for flexibility, Brocade switches provide a low-cost solution for Direct-Attached Storage (DAS)-to-SAN migration, small SAN islands, Network-Attached Storage (NAS) back-ends, and the edge of core-to-edge enterprise SANs. As a result, these switches are ideal as standalone departmental SANs or as high-performance edge switches in large enterprise SANs.

The Brocade 5300 and 5100 switches support full Fibre Channel routing capabilities with the addition of the Fibre Channel Integrated Routing (IR) option. Using built-in routing capabilities, organizations can selectively share devices while still maintaining remote fabric isolation. They include a Virtual Fabrics feature that enables the partitioning of a physical SAN into logical fabrics. This provides fabric isolation by application, business group, customer, or traffic type without sacrificing performance, scalability, security, or reliability.

Brocade 5300 Switch

As the value and volume of business data continue to rise, organizations need technology solutions that are easy to implement and manage and that can grow and change with minimal disruption. The Brocade 5300 Switch is designed to consolidate connectivity in rapidly growing mission-critical environments, supporting 1, 2, 4, and 8 Gbps technology in configurations of 48, 64, or 80 ports in a 2U chassis. The combination of density, performance, and "pay-as-you-grow" scalability increases server and storage utilization, while reducing complexity for virtualized servers and storage.

Figure 53. Brocade 5300 Switch.

The New Data Center 101
Used at the fabric core or at the edge of a tiered core-to-edge infrastructure, the Brocade 5300 operates seamlessly with existing Brocade switches through native E_Port connectivity into Brocade FOS or M-EOS environments. The design makes it very efficient in power, cooling, and rack density to help enable midsize and large server and storage consolidation. The Brocade 5300 also includes Adaptive Networking capabilities to more efficiently manage resources in highly consolidated environments. It supports Fibre Channel Integrated Routing for selective device sharing and maintains remote fabric isolation for higher levels of scalability and fault isolation.

The Brocade 5300 utilizes ASIC technology featuring eight 8-port groups. Within these groups, an inter-switch link trunk can supply up to 68 Gbps of balanced data throughput. In addition to reducing congestion and increasing bandwidth, enhanced Brocade ISL Trunking utilizes ISLs more efficiently to preserve the number of usable switch ports. The density of the Brocade 5300 uniquely enables fan-out from the core of the data center fabric with less than half the number of switch devices to manage compared to traditional 32- or 40-port edge switches.

Brocade 5100 Switch

The Brocade 5100 Switch is designed for rapidly growing storage requirements in mission-critical environments, combining 1, 2, 4, and 8 Gbps Fibre Channel technology in configurations of 24, 32, or 40 ports in a 1U chassis. As a result, it provides low-cost access to industry-leading SAN technology and pay-as-you-grow scalability for consolidating storage and maximizing the value of virtual server deployments.

Figure 54. Brocade 5100 Switch.

Similar to the Brocade 5300, the Brocade 5100 features a flexible architecture that operates seamlessly with existing Brocade switches through native E_Port connectivity into Brocade FOS or M-EOS environments.
With the highest port density of any midrange enterprise switch, it is designed for a broad range of SAN architectures, consuming less than 2.5 watts of power per port for exceptional power and cooling efficiency. It features consolidated power and fan assemblies to improve environmental performance.

102 The New Data Center

The Brocade 5100 is a cost-effective building block for standalone networks or the edge of enterprise core-to-edge fabrics.

Additional performance capabilities include the following:

• 32 Virtual Channels on each ISL enhance QoS traffic prioritization and "anti-starvation" capabilities at the port level to avoid performance degradation.
• Exchange-based Dynamic Path Selection optimizes fabric-wide performance and load balancing by automatically routing data to the most efficient available path in the fabric. It augments ISL Trunking to provide more effective load balancing in certain configurations. In addition, DPS can balance traffic between the Brocade 5100 and Brocade M-Series devices enabled with Brocade Open Trunking.

Brocade 300 Switch

The Brocade 300 Switch provides small to midsize enterprises with SAN connectivity that simplifies IT management infrastructures, improves system performance, maximizes the value of virtual server deployments, and reduces overall storage costs. The 8 Gbps Fibre Channel Brocade 300 provides a simple, affordable, single-switch solution for both new and existing SANs. It delivers up to 24 ports of 8 Gbps performance in an energy-efficient, optimized 1U form factor.

Figure 55. Brocade 300 Switch.

To simplify deployment, the Brocade 300 features the EZSwitchSetup wizard and other ease-of-use and configuration enhancements, as well as the optional Brocade Access Gateway mode of operation (supported with 24-port configurations only). Access Gateway mode enables connectivity into any SAN by utilizing NPIV switch standards to present Fibre Channel connections as logical devices to SAN fabrics. Attaching through NPIV-enabled switches and directors, the Brocade 300 in Access Gateway mode can connect to FOS-based, M-EOS-based, or other SAN fabrics.

The New Data Center 103
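The exchange-based Dynamic Path Selection described above can be modeled as a deterministic hash over the set of equal-cost paths: all frames of one Fibre Channel exchange take the same path (so they stay in order), while different exchanges spread across every available path. The sketch below is illustrative only; the actual ASIC hash is internal to the switch, so CRC32 stands in for it, and `select_path` is an invented name.

```python
import zlib

def select_path(s_id, d_id, ox_id, num_paths):
    """Pick an egress path from (source ID, destination ID, exchange ID).

    Same exchange -> same hash -> same path, preserving in-order
    delivery within the exchange; different exchanges hash across all
    equal-cost paths, balancing load fabric-wide.
    """
    key = f"{s_id:06x}:{d_id:06x}:{ox_id:04x}".encode()
    # CRC32 as a stand-in for the switch's internal hash function.
    return zlib.crc32(key) % num_paths
```

Hashing on the exchange (rather than only on source/destination, as coarser routing schemes do) is what lets a single busy server-storage pair use multiple paths at once without reordering frames inside any one exchange.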
Organizations can easily enable Access Gateway mode (see page 151) via the FOS CLI, Brocade Web Tools, or Brocade Fabric Manager. Key benefits of Access Gateway mode include:

• Improved scalability for large or rapidly growing server and virtual server environments
• Simplified management through the reduction of domains and management tasks
• Fabric interoperability for mixed-vendor SAN configurations that require full functionality

Brocade VA-40FC Switch

The Brocade VA-40FC is a high-performance Fibre Channel edge switch optimized for server connectivity in large-scale enterprise SANs. As organizations consolidate data centers, expand application services, and begin to implement cloud initiatives, large-scale server architectures are becoming a standard part of the data center. Minimizing the network deployment steps and simplifying management can help organizations grow seamlessly while reducing operating costs.

The Brocade VA-40FC helps meet this challenge, providing the first Fibre Channel edge switch optimized for server connectivity in large core-to-edge SANs. By leveraging Brocade Access Gateway technology, the Brocade VA-40FC enables zero-configuration deployment and reduces management of the network edge, increasing scalability and simplifying management for large-scale server architectures.

Figure 56. Brocade VA-40FC Switch.

The Brocade VA-40FC is in Access Gateway mode by default, which is ideal for larger SAN fabrics that can benefit from the scalability of fixed-port switches at the edge of the network. Some use cases for Access Gateway mode are:

• Connectivity of many servers into large SAN fabrics
• Connectivity of servers into Brocade, Cisco, or any NPIV-enabled SAN fabrics
• Connectivity into multiple SAN fabrics

104 The New Data Center
The Brocade VA-40FC also supports Fabric Switch mode to provide standard Fibre Channel switching and routing capabilities that are available on all Brocade enterprise-class 8 Gbps solutions.

Brocade Encryption Switch and FS8-18 Encryption Blade

The Brocade Encryption Switch is a high-performance standalone device for protecting data-at-rest in mission-critical environments. It scales non-disruptively, providing from 48 up to 96 Gbps of disk encryption processing power. Moreover, the Brocade Encryption Switch is tightly integrated with industry-leading, enterprise-class key management systems that can scale to support key lifecycle services across distributed environments.

It is also FIPS 140-2 Level 3-compliant. Based on industry standards, Brocade encryption solutions for data-at-rest provide centralized, scalable encryption services that seamlessly integrate into existing Brocade Fabric OS environments.

Figure 57. Brocade Encryption Switch.

Figure 58. Brocade FS8-18 Encryption Blade.

The New Data Center 105
Brocade 7800 Extension Switch and FX8-24 Extension Blade

The Brocade 7800 Extension Switch helps provide network infrastructure for remote data replication, backup, and migration. Leveraging next-generation Fibre Channel and advanced FCIP technology, the Brocade 7800 provides a flexible and extensible platform to move more data faster and further than ever before.

It can be configured for simple point-to-point or comprehensive multisite SAN extension. Up to 16 x 8 Gbps Fibre Channel ports and 6 x 1 GbE ports provide unmatched Fibre Channel and FCIP bandwidth, port density, and throughput for maximum application performance over WAN links.

Figure 59. Brocade 7800 Extension Switch.

The Brocade 7800 is an ideal platform for building or expanding a high-performance SAN extension infrastructure. It leverages cost-effective IP WAN transport to extend open systems and mainframe disk and tape storage applications over distances that would otherwise be impossible, impractical, or too expensive with standard Fibre Channel connections. A broad range of optional advanced extension, FICON, and SAN fabric services are available.

• The Brocade 7800 16/6 Extension Switch is a robust platform for data centers and multisite environments implementing disk and tape solutions for open systems and mainframe environments. Organizations can optimize bandwidth and throughput through 16 x 8 Gbps FC ports and 6 x 1 GbE ports.
• The Brocade 7800 4/2 Extension Switch is a cost-effective option for smaller data centers and remote offices implementing point-to-point disk replication for open systems. Organizations can optimize bandwidth and throughput through 4 x 8 Gbps FC ports and 2 x 1 GbE ports. The Brocade 7800 4/2 can be easily upgraded to the Brocade 7800 16/6 through software licensing.

106 The New Data Center
The Brocade FX8-24 Extension Blade, designed specifically for the Brocade DCX Backbone, helps provide the network infrastructure for remote data replication, backup, and migration. Leveraging next-generation 8 Gbps Fibre Channel, 10 GbE, and advanced FCIP technology, the Brocade FX8-24 provides a flexible and extensible platform to move more data faster and further than ever before.

Figure 60. Brocade FX8-24 Extension Blade.

Up to two Brocade FX8-24 blades can be installed in a Brocade DCX or DCX-4S Backbone. Activating the optional 10 GbE ports doubles the aggregate bandwidth to 20 Gbps and enables additional FCIP port configurations (10 x 1 GbE ports and 1 x 10 GbE port, or 2 x 10 GbE ports).

Brocade Optical Transceiver Modules

Brocade optical transceiver modules, also known as Small Form-factor Pluggables (SFPs), plug into Brocade switches, directors, and backbones to provide Fibre Channel connectivity and satisfy a wide range of speed and distance requirements. Brocade transceiver modules are optimized for Brocade 8 Gbps platforms to maximize performance, reduce power consumption, and help ensure the highest availability of mission-critical applications. These transceiver modules support data rates up to 8 Gbps Fibre Channel and link lengths up to 30 kilometers (for 4 Gbps Fibre Channel).

The New Data Center 107
Brocade Data Center Fabric Manager

Brocade Data Center Fabric Manager (DCFM) Enterprise unifies the management of large, multifabric, or multisite storage networks through a single pane of glass. It features enterprise-class reliability, availability, and serviceability (RAS), as well as advanced features such as proactive monitoring and alert notification. As a result, it helps optimize storage resources, maximize performance, and enhance the security of storage network infrastructures.

Brocade DCFM Enterprise configures and manages the Brocade DCX Backbone family, directors, switches, and extension solutions, as well as Brocade data-at-rest encryption, FCoE/DCB, HBA, and CNA products. It is part of a common framework designed to manage entire data center fabrics, from the storage ports to the HBAs, both physical and virtual. Brocade DCFM Enterprise tightly integrates with Brocade Fabric OS (FOS) to leverage key features such as Advanced Performance Monitoring, Fabric Watch, and Adaptive Networking services. As part of a common management ecosystem, Brocade DCFM Enterprise integrates with leading partner data center automation solutions through frameworks such as the Storage Management Initiative-Specification (SMI-S).

Figure 61. Brocade DCFM main window showing the topology view.

108 The New Data Center
Chapter 10: Brocade LAN Network Solutions

End-to-end networking from the edge to the core of today's networking infrastructures

Brocade offers a complete line of enterprise and service provider Ethernet switches, Ethernet routers, application management, and network-wide security products. With industry-leading features, performance, reliability, and scalability capabilities, these products enable network convergence and secure network infrastructures to support advanced data, voice, and video applications. The complete Brocade product portfolio enables end-to-end networking from the edge to the core of today's networking infrastructures. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The LAN products described in this chapter are:

• "Core and Aggregation" on page 110
• "Access" on page 112
• "Brocade IronView Network Manager" on page 115
• "Brocade Mobility" on page 116

For a more detailed discussion of the access, aggregation, and core layers in the data center network, see "Chapter 6: The New Data Center LAN" starting on page 69.

The New Data Center 109
Core and Aggregation

The network core is the nucleus of the data center LAN. In a three-tier model, the core also provides connectivity to the external corporate network, intranet, and Internet. At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-performance switches.

For application delivery and control, see also "Brocade ServerIron ADX" on page 95.

Brocade NetIron MLX Series

The Brocade NetIron MLX Series of switching routers is designed to provide the right mix of functionality and high performance while reducing TCO in the data center. Built with the Brocade state-of-the-art, fifth-generation, network-processor-based architecture and Terabit-scale switch fabrics, the NetIron MLX Series offers network planners a rich set of high-performance IPv4, IPv6, MPLS, and Multi-VRF capabilities as well as advanced Layer 2 switching capabilities.

The NetIron MLX Series includes the 4-slot NetIron MLX-4, 8-slot NetIron MLX-8, 16-slot NetIron MLX-16, and the 32-slot NetIron MLX-32. The series offers industry-leading port capacity and density with up to 256 x 10 GbE, 1536 x 1 GbE, 64 x OC-192, or 256 x OC-48 ports in a single system.

Figure 62. Brocade NetIron MLX-4.

110 The New Data Center
Brocade BigIron RX Series

The Brocade BigIron RX Series of switches provides the first 2.2 billion packet-per-second device that scales cost-effectively from the enterprise edge to the core, with hardware-based IP routing for up to 512,000 IP routes per line module. The high-availability design features redundant and hot-pluggable hardware, hitless software upgrades, and graceful BGP and OSPF restart.

The BigIron RX Series of Layer 2/3 Ethernet switches enables network designers to deploy an Ethernet infrastructure that addresses today's requirements with a scalable and future-ready architecture that will support network growth and evolution for years to come. The BigIron RX Series incorporates the latest advances in switch architecture, system resilience, QoS, and switch security in a family of modular chassis, setting leading industry benchmarks for price/performance, scalability, and TCO.

Figure 63. Brocade BigIron RX-16.

The New Data Center 111
Access
The access layer provides the direct network connection to application and file servers. Servers are typically provisioned with two or more GbE or 10 GbE network ports for redundant connectivity. Server platforms vary from standalone servers to 1U rack-mount servers and blade servers with passthrough cabling or bladed Ethernet switches.

Brocade TurboIron 24X Switch
The Brocade TurboIron 24X switch is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution that meets mission-critical data center Top-of-Rack (ToR) and High-Performance Cluster Computing (HPCC) requirements. An ultra-low-latency, cut-through, non-blocking architecture and low power consumption help provide a cost-effective solution for server or compute-node connectivity.

Additional highlights include:
• Highly efficient power and cooling with front-to-back airflow, automatic fan speed adjustment, and use of SFP+ and direct-attached SFP+ copper (Twinax)
• High availability with redundant, load-sharing, hot-swappable, auto-sensing/switching power supplies and a triple-fan assembly
• End-to-end QoS with hardware-based marking, queuing, and congestion management
• Embedded per-port sFlow capabilities to support scalable hardware-based traffic monitoring
• Wire-speed performance with an ultra-low-latency, cut-through, non-blocking architecture ideal for HPC, iSCSI storage, and real-time application environments

Figure 64. Brocade TurboIron 24X Switch.
Brocade FastIron CX Series
The Brocade FastIron CX Series of switches provides the new levels of performance, scalability, and flexibility required for today's enterprise networks. With advanced capabilities, these switches deliver performance and intelligence to the network edge in a flexible 1U form factor, which helps reduce infrastructure and administrative costs.

Designed for wire-speed and non-blocking performance, FastIron CX switches include 24- and 48-port models, in both Power over Ethernet (PoE) and non-PoE versions. Utilizing built-in 16 Gbps stacking ports and Brocade IronStack technology, organizations can stack up to eight switches into a single logical switch with up to 384 ports. PoE models support the emerging Power over Ethernet Plus (PoE+) standard to deliver up to 30 watts of power to edge devices, enabling next-generation campus applications.

Figure 65. Brocade FastIron CX-624S-HPOE Switch.

Brocade NetIron CES 2000 Series
Whether they are located at a central office or a remote site, the availability of space often determines the feasibility of deploying new equipment and services in a data center environment. The Brocade NetIron Compact Ethernet Switch (CES) 2000 Series is purpose-built to provide flexible, resilient, secure, and advanced Ethernet and MPLS-based services in a compact form factor.

The NetIron CES 2000 Series is a family of compact 1U, multiservice edge/aggregation switches that combine powerful capabilities with high performance and availability. The switches provide a broad set of advanced Layer 2, IPv4, and MPLS capabilities in the same device. As a result, they support a diverse set of applications in data center and large enterprise networks.
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions.

Brocade FastIron Edge X Series
The Brocade FastIron Edge X Series switches are high-performance, data center-class switches that provide Gigabit copper and fiber-optic connectivity and 10 GbE uplinks. Advanced Layer 3 routing capabilities and full IPv6 support are designed for the most demanding environments.

The FastIron Edge X Series offers a diverse range of switches that meet Layer 2/3 edge, aggregation, or small-network backbone-connectivity requirements with intelligent network services, including superior QoS, predictable performance, advanced security, comprehensive management, and integrated resiliency. It is an ideal networking platform to deliver 10 GbE.

Figure 67. Brocade FastIron Edge X 624.
Brocade IronView Network Manager
Brocade IronView Network Manager (INM) provides a comprehensive tool for configuring, managing, monitoring, and securing Brocade wired and wireless network products. It is an intelligent network management solution that reduces the complexity of changing, monitoring, and managing network-wide features such as Access Control Lists (ACLs), rate limiting policies, VLANs, software and configuration updates, and network alarms and events.

Using Brocade INM, organizations can automatically discover Brocade network equipment and immediately acquire, view, and archive configurations for each device. In addition, they can easily configure and deploy policies for wired and wireless products.

Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom).
Brocade Mobility
While once considered a luxury, Wi-Fi connectivity is now an integral part of the modern enterprise. To that end, most IT organizations are deploying Wireless LANs (WLANs). With the introduction of the IEEE 802.11n standard, these organizations can save significant capital and feel confident in expanding their wireless deployments to business-critical applications. In fact, wireless technologies often match the performance of wired networks, all with simplified deployment, robust security, and a significantly lower cost. Brocade offers all the pieces to deploy a wireless enterprise. In addition to indoor networking equipment, Brocade also provides the tools to wirelessly connect multiple buildings across a corporate campus.

Brocade offers two models of controllers: the Brocade Mobility RFS6000 and RFS7000. Brocade Mobility controllers enable wireless enterprises by providing an integrated communications platform that delivers secure and reliable voice, video, and data applications in Wireless LAN (WLAN) environments. Based on an innovative architecture, Brocade Mobility controllers provide:
• Wired and wireless networking services
• Multiple locationing technologies such as Wi-Fi and RFID
• Resiliency via 3G/4G wireless broadband backhaul
• High performance with 802.11n networks

The Brocade Mobility RFS7000 features a multicore, multithreaded architecture designed for large-scale, high-bandwidth enterprise deployments. It easily handles from 8000 to 96,000 mobile devices and 256 to 3000 802.11 dual-radio a/b/g/n access points or 1024 adaptive access points (Brocade Mobility 5181 a/b/g or Brocade Mobility 7131 a/b/g/n) per controller. The Brocade Mobility RFS7000 provides the investment protection enterprises require: innovative clustering technology provides a 12X capacity increase, and smart licensing enables efficient, scalable network expansion.
Chapter 11: Brocade One

Simplifying complexity in the virtualized data center
Brocade One, announced in mid-2010, is the unifying network architecture and strategy that enables customers to simplify the complexity of virtualizing their applications. By removing network layers, simplifying management, and protecting existing technology investments, Brocade One helps customers migrate to a world where information and services are available anywhere in the cloud.

Evolution, not Revolution
In the data center, Brocade shares a common industry view that IT infrastructures will eventually evolve to a highly virtualized, services-on-demand state enabled through the cloud. The process, an evolutionary path toward this desired end-state, is as important as reaching the end-state itself. This evolution has already started inside the data center, and Brocade offers insights on the challenges faced as it moves out to the rest of the network.

The realization of this vision requires radically simplified network architectures. This is best achieved through a deep understanding of data center networking intricacies and the rejection of rip-and-replace deployment scenarios with vertically integrated stacks sourced from a single vendor. In contrast, the Brocade One architecture takes a customer-centric approach with the following commitments:
• Unmatched simplicity. Dramatically simplifying the design, deployment, configuration, and ongoing support of IT infrastructures.
• Investment protection. Emphasizing an approach that builds on existing customer multivendor infrastructures while improving their total cost of ownership.
• High-availability networking. Supporting the ever-increasing requirements for unparalleled uptime by setting the standard for continuous operations, ease of management, and resiliency.
• Optimized applications. Optimizing current and future customer applications.

The new Brocade converged fabric solutions include unique and powerful innovations customized to support virtualized data centers, including:
• Brocade Virtual Cluster Switching™ (VCS). A new class of Brocade-developed technologies designed to address the unique requirements of virtualized data centers. Brocade VCS, available in shipping product in late 2010, overcomes the limitations of conventional Ethernet networking by applying non-stop operations, any-to-any connectivity, and the intelligence of fabric switching.

Figure 69. The pillars of Brocade VCS: Ethernet Fabric, Distributed Intelligence, Logical Chassis, and Dynamic Services (detailed in the next section).

• Brocade Virtual Access Layer (VAL). A logical layer between Brocade converged fabrics and server virtualization hypervisors that will help ensure a consistent interface and set of services for virtual machines (VMs) connected to the network. Brocade VAL is designed to be vendor agnostic and will support all major hypervisors by utilizing industry-standard technologies, including the emerging Virtual Ethernet Port Aggregator (VEPA) and Virtual Ethernet Bridging (VEB) standards.
• Brocade Open Virtual Compute Blocks. Brocade is working with leading systems and IT infrastructure vendors to build tested and verified data center blueprints for highly scalable and cost-effective deployment of VMs on converged fabrics.
• Brocade Network Advisor. A best-in-class element management toolset that will help provide industry-standard and customized support for industry-leading network management, storage management, virtualization management, and data center orchestration tools.
• Multiprotocol support. Brocade converged fabrics are designed to transport all types of network and storage traffic over a single wire to reduce complexity and help ensure a simplified migration path from current technologies.

Industry's First Converged Data Center Fabric
Brocade designed VCS as the core technology for building large, high-performance, flat Layer 2 data center fabrics to better support the increased adoption of server virtualization. Brocade VCS is built on Data Center Bridging (DCB) technologies to meet the increased network reliability and performance requirements as customers deploy more and more VMs. Brocade helped pioneer DCB through industry standards bodies to ensure that the technology would be suitable for the rigors of data center networking.

Another key technology in Brocade VCS is the emerging IETF standard Transparent Interconnection of Lots of Links (TRILL), which will provide a more efficient way of moving data throughout converged fabrics by automatically determining the shortest path between routes. Both DCB and TRILL are advances to current technologies and are critical for building large, flat, and efficient converged fabrics capable of supporting both Ethernet and storage traffic.
They are also examples of how Brocade has been able to leverage decades of experience in building data center fabrics to deliver the industry's first converged fabrics.

Brocade VCS also simplifies the management of Brocade converged fabrics by managing multiple discrete switches as one logical entity. These VCS features allow customers to flatten network architectures into a single Layer 2 domain that can be managed as a single switch. This reduces network complexity and operational costs while allowing VCS users to scale their VM environments to global topologies.
Ethernet Fabric
In the new data center LAN, Spanning Tree Protocol is no longer necessary, because the Ethernet fabric appears as a single logical switch to connected servers, devices, and the rest of the network. Also, Multi-Chassis Trunking (MCT) capabilities in aggregation switches enable a logical one-to-one relationship between the access (VCS) and aggregation layers of the network. The Ethernet fabric is an advanced multipath network utilizing TRILL, in which all paths in the network are active and traffic is automatically distributed across the equal-cost paths. In this optimized environment, traffic automatically takes the shortest path for minimum latency without manual configuration.

And, unlike switch stacking technologies, the Ethernet fabric is masterless. This means that no single switch stores configuration information or controls fabric operations. Events such as added, removed, or failed links are not disruptive to the Ethernet fabric and do not require all traffic in the fabric to stop. If a single link fails, traffic is automatically rerouted to other available paths in less than a second. Moreover, single component failures do not require the entire fabric topology to reconverge, helping to ensure that no traffic is negatively impacted by an isolated issue.

Distributed Intelligence
Brocade VCS also enhances server virtualization with technologies that increase VM visibility in the network and enable seamless migration of policies along with the VM. VCS achieves this through a distributed services architecture that makes the fabric aware of all connected devices and shares the information across those devices. Automatic Migration of Port Profiles (AMPP), a VCS feature, enables a VM's network profiles, such as security or QoS levels, to follow the VM during migrations without manual intervention.
This unprecedented level of VM visibility and automated profile management helps intelligently remove the physical barriers to VM mobility that exist in current technologies and network architectures.

Distributed intelligence allows the Ethernet fabric to be "self-forming." When two VCS-enabled switches are connected, the fabric is automatically created, and the switches discover the common fabric configuration. Scaling bandwidth in the fabric is as simple as connecting another link between switches or adding a new switch as required.
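The way traffic spreads across the fabric's active, equal-cost paths can be sketched as flow hashing. The following is a simplified illustration only: the MAC addresses, hash function, and path names are invented for this sketch, and a real fabric performs this selection in switch hardware rather than in software.

```python
import hashlib

def pick_path(src_mac: str, dst_mac: str, paths: list) -> str:
    """Map a flow to one of several equal-cost paths.

    Hashing the flow identifier keeps every packet of a given flow on
    the same path (preserving frame order), while different flows
    spread across all active links in the fabric.
    """
    flow_id = f"{src_mac}->{dst_mac}".encode()
    digest = int(hashlib.sha256(flow_id).hexdigest(), 16)
    return paths[digest % len(paths)]

paths = ["link-1", "link-2", "link-3", "link-4"]
chosen = pick_path("00:05:1e:aa:bb:01", "00:05:1e:cc:dd:02", paths)

# If the chosen link fails, the flow is simply rehashed across the
# surviving links; other flows on healthy links are unaffected.
survivors = [p for p in paths if p != chosen]
rerouted = pick_path("00:05:1e:aa:bb:01", "00:05:1e:cc:dd:02", survivors)
```

Because the hash is deterministic, a flow never reorders its own frames, yet adding a link to `paths` immediately gives new flows another path to land on, which matches the "scaling bandwidth is as simple as connecting another link" behavior described above.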
The Ethernet fabric does not dictate a specific topology, so it does not restrict oversubscription ratios. As a result, network architects can create a topology that best meets specific application requirements. Unlike other technologies, VCS enables different end-to-end subscription ratios to be created or fine-tuned as application demands change over time.

Logical Chassis
All switches in an Ethernet fabric are managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different than any other Layer 2 switch. The network sees the fabric as a single switch, whether the fabric contains as few as 48 ports or thousands of ports. Each physical switch in the fabric is managed as if it were a port module in a chassis. This enables fabric scalability without manual configuration. When a port module is added to a chassis, the module does not need to be configured, and a switch can be added to the Ethernet fabric just as easily. When a VCS-enabled switch is connected to the fabric, it inherits the configuration of the fabric and the new ports become available immediately.

The logical chassis capability significantly reduces the management of small-form-factor edge switches. Instead of managing each top-of-rack switch (or switches in blade server chassis) individually, organizations can manage them as one logical chassis, which further optimizes the network in the virtualized data center and will further enable a cloud computing model.

Dynamic Services
Brocade VCS also offers dynamic services so that you can add new network and fabric services to Brocade converged fabrics, including capabilities such as fabric extension over distance, application delivery, native Fibre Channel connectivity, and enhanced security services such as firewalls and data encryption. Through VCS, the new switches and software with these services behave as service modules within a logical chassis.
Furthermore, the new services are then made available to the entire converged fabric, dynamically evolving the fabric with new functionality. Switches with these unique capabilities can join the Ethernet fabric, adding a network service layer across the entire fabric.
The VCS Architecture
The VCS architecture, shown in Figure 70, flattens the network by collapsing the traditional access and aggregation layers. Since the fabric is self-aggregating, there is no need for aggregation switches to manage subscription ratios and provide server-to-server communication. For maximum flexibility of server and storage connectivity, multiple protocols and speeds are supported: 1 GbE, 10 GbE, 10 GbE with DCB, and Fibre Channel. Since the Ethernet fabric is one logical chassis with distributed intelligence, the VM sphere of mobility spans the entire VCS. Mobility extends even further with the VCS fabric extension Dynamic Service. At the core of the data center, routers are virtualized using MCT and provide high-performance connectivity between Ethernet fabrics, inside the data center or across data centers.

Servers running high-priority applications or other servers requiring the highest block storage service levels connect to the SAN using native Fibre Channel. For lower-tier applications, FCoE or iSCSI storage can be connected directly to the Ethernet fabric, providing shared storage for servers connected to that fabric.

Figure 70. A Brocade VCS reference network architecture. (The figure shows VCS fabrics connecting rack-mount and blade servers to virtualized core routers, with VCS fabric extension to a remote data center, Layer 4-7 application delivery, security services such as firewall and encryption, a dedicated Fibre Channel SAN for Tier 1 applications, and FC/FCoE/iSCSI/NAS storage.)
Appendix A: "Best Practices for Energy Efficient Storage Operations"

Version 1.0
October 2008
Authored by Tom Clark, Brocade, Green Storage Initiative (GSI) Chair, and Dr. Alan Yoder, NetApp, GSI Governing Board
Reprinted with permission of the SNIA

Introduction
The energy required to support data center IT operations is becoming a central concern worldwide. For some data centers, additional energy supply is simply not available, either due to finite power generation capacity in certain regions or the inability of the power distribution grid to accommodate more lines. Even if energy is available, it comes at an ever-increasing cost. With current pricing, the cost of powering IT equipment is often higher than the original cost of the equipment itself. The increasing scarcity and higher cost of energy, however, is being accompanied by a sustained growth of applications and data. Simply throwing more hardware assets at the problem is no longer viable. More hardware means more energy consumption, more heat generation, and increasing load on the data center cooling system. Companies are therefore now seeking ways to accommodate data growth while reducing their overall power profile. This is a difficult challenge.

Data center energy efficiency solutions span the spectrum from more efficient rack placement and alternative cooling methods to server and storage virtualization technologies. The SNIA's Green Storage Initiative (GSI) was formed to identify and promote energy efficiency solutions specifically relating to data storage. This document is the first iteration of the SNIA GSI's recommendations for maximizing utilization of
data center storage assets while reducing overall power consumption. We plan to expand and update the content over time to include new energy-related storage technologies as well as SNIA-generated metrics for evaluating energy efficiency in storage product selection.

Some Fundamental Considerations
Reducing energy consumption is both an economic and a social imperative. While data centers represent only ~2% of total energy consumption in the US, the dollar figure is approximately $4B annually. In terms of power generation, data centers in the US require the equivalent of six 1,000-megawatt power plants to sustain current operations. Global power consumption for data centers is more than twice the US figure. The inability of the power generation and delivery infrastructure to accommodate the growth in continued demand, however, means that most data centers will be facing power restrictions in the coming years. Gartner predicts that by 2009, half of the world's data centers will not have sufficient power to support their applications.1 An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.2 Even if there were a national campaign to build alternative energy generation capability, new systems would not be online soon enough to prevent a widespread energy deficit. This simply highlights the importance of finding new ways to leverage technology to increase energy efficiency within the data center and accomplish more IT processing with fewer energy resources.

In addition to the pending scarcity and increased cost of energy to power IT operations, data center managers face a continued explosion in data growth. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of ~1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere.
The sustained growth of data requires new tools for data management, storage allocation, data retention, and data redundancy.

1. "Gartner Says 50 Percent of Data Centers Will Have Insufficient Power and Cooling Capacity by 2008," Gartner Inc. Press Release, November 29, 2006
2. "Emerson Network Power Presents Industry Survey Results That Project 96 Percent of Today's Data Centers Will Run Out of Capacity by 2011," Emerson Press Release, November 16, 2006
The conflict between the available supply of energy to power IT operations and the increasing demand imposed by data growth is further exacerbated by the operational requirement for high-availability access to applications and data. Mission-critical applications in particular are high energy consumers and require more powerful processors, redundant servers for failover, redundant networking connectivity, redundant fabric pathing, and redundant data storage in the form of mirroring and data replication for disaster recovery. These top-tier applications are so essential for business operations, however, that the doubling of server and storage hardware elements and the accompanying doubling of energy draw have been largely unavoidable. Here too, though, new green storage technologies and best practices can assist in retaining high availability of applications and data while reducing total energy requirements.

Shades of Green
The quandary for data center managers is in identifying which new technologies will actually have a sustainable impact for increasing energy efficiency and which are only transient patches whose initial energy benefit quickly dissipates as data center requirements change. Unfortunately, the standard market dynamic that eventually separates weak products from viable ones has not had sufficient time to eliminate the green pretenders. Consequently, analysts often complain about the greenwashing of vendor marketing campaigns and the opportunistic attempt to portray marginally useful solutions as the cure to all the IT manager's energy ills.

Within the broader green environmental movement, greenwashing is also known as being "lite green" or sometimes "light green". There are, however, other shades of green. Dark green refers to environmental solutions that rely on across-the-board reductions in energy and material consumption.
For a data center, a dark green tactic would be to simply reduce the number of applications and associated hardware and halt the expansion of data growth. Simply cutting back, however, is not feasible for today's business operations. To remain competitive, businesses must be able to accommodate growth and expansion of operations.

Consequently, viable energy efficiency for ongoing data center operations must be based on solutions that are able to leverage state-of-the-art technologies to do much more with much less. This aligns to yet another shade of environmental green known as "bright green". Bright green solutions reject both the superficial lite green and the Luddite dark green approaches to the environment and rely instead on technical innovation to provide sustainable productivity and growth while steadily driving down energy consumption. The following SNIA GSI best practices include many bright green solutions that accomplish the goal of energy reduction while increasing the productivity of IT storage operations.

Although the Best Practices recommendations listed below are numbered sequentially, no prioritization is implied. Every data center operation has different characteristics, and what is suitable for one application environment may not work in another.

These recommendations collectively fall into the category of "silver buckshot" in addressing data center storage issues. There is no single silver bullet to dramatically reduce IT energy consumption and cost. Instead, multiple energy-efficient technologies can be deployed in concert to reduce the overall energy footprint and bring costs under control. Thin provisioning and data deduplication, for example, are distinctly different technologies that together can help reduce the amount of storage capacity required to support applications and thus the amount of energy-consuming hardware in the data center. When evaluating specific solutions, then, it is useful to imagine how they will work in concert with other products to achieve greater efficiencies.

Best Practice #1: Manage Your Data
A significant component of the exponential growth of data is the growth of redundant copies of data. By some industry estimates, over half of the total volume of a typical company's data exists in the form of redundant copies dispersed across multiple storage systems and client workstations. Consider the impact, for example, of emailing a 4 MB PowerPoint attachment to 100 users instead of simply sending a link to the file.
The corporate email servers now have an additional 400 MB of capacity devoted to redundant copies of the same data. Even if individual users copy the attachment to their local drives, the original email and attachment may languish on the email server for months before the user tidies their Inbox. In addition, some users may copy the attachment to their individual share on a data center file server, further compounding the duplication. And to make matters worse, the lack of data retention policies can result in duplicate copies of data being maintained and backed up indefinitely.

This phenomenon is replicated daily across companies of every size worldwide, resulting in ever-increasing requirements for storage, longer backup windows, and higher energy costs. A corporate policy for data management, redundancy, and retention is therefore an essential first step in managing data growth and getting storage energy costs
under control. Many companies lack data management policies or effective means to enforce them because they are already overwhelmed with the consequences of prior data avalanches. Responding reactively to the problem, however, typically results in the spontaneous acquisition of more storage capacity, longer backup cycles, and more energy consumption. To proactively deal with data growth, begin with an audit of your existing applications and data and begin prioritizing data in terms of its business value.

Although tools are available to help identify and reduce data redundancy throughout the network, the primary outcome of a data audit should be to change corporate behavior. Are data sets periodically reviewed to ensure that only information that is relevant to the business is retained? Does your company have a data retention policy and mechanisms to enforce it? Are you educating your users on the importance of managing their data and deleting non-essential or redundant copies of files? Are your Service Level Agreements (SLAs) structured to reward more efficient data management by individual departments? Given that data generators (i.e., end users) typically do not understand where their data resides or what resources are required to support it, creating policies for data management and retention can be a useful means to educate end users about the consequences of excessive data redundancy.

Proactively managing data also requires aligning specific applications and their data to the appropriate class of storage. Without a logical prioritization of applications in terms of business value, all applications and data receive the same high level of service. Most applications, however, are not truly mission-critical and do not require the more expensive storage infrastructure needed for high availability and performance. In addition, even high-value data does not typically sustain its value over time.
As we will see in the recommendations below, aligning applications and data to the appropriate storage tier and migrating data from one tier to another as its value changes can reduce both the cost of storage and the cost of energy to drive it. This is especially true when SLAs are structured to require fewer backup copies as data value declines.
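The tier-alignment arithmetic behind this practice can be made concrete with a toy model. All of the numbers below (copy counts per tier, watts per terabyte) are illustrative assumptions, not SNIA or vendor figures: the point is only that a tier with fewer redundant copies consumes proportionally less capacity and therefore less power.

```python
# Illustrative redundancy multipliers per storage tier:
# tier 1 keeps a local mirror plus a remote replica (3 physical copies),
# tier 2 keeps a single backup copy (2 copies),
# tier 3 keeps a single-copy archive (1 copy).
TIER_COPIES = {1: 3.0, 2: 2.0, 3: 1.0}
WATTS_PER_TB = 10.0  # hypothetical average draw for spinning storage

def storage_watts(data_tb: float, tier: int) -> float:
    """Power consumed by all physical copies of a data set on a tier."""
    return data_tb * TIER_COPIES[tier] * WATTS_PER_TB

# Migrating a 50 TB data set from tier 1 to tier 3 as its value declines:
before = storage_watts(50, 1)  # 50 TB x 3 copies x 10 W/TB = 1500 W
after = storage_watts(50, 3)   # 50 TB x 1 copy  x 10 W/TB = 500 W
```

Under these assumed multipliers, letting SLAs reduce the copy count as data ages cuts the data set's power footprint to a third without deleting any primary data.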
Best Practice #2: Select the Appropriate Storage RAID Level
Storage networking provides multiple levels of data protection, ranging from simple CRC checks on data frames to more sophisticated data recovery mechanisms such as RAID. RAID guards against catastrophic loss of data when disk drives fail by creating redundant copies of data or providing parity reconstruction of data onto spare disks.

RAID 1 mirroring creates a duplicate copy of disk data, but at the expense of doubling the number of disk drives and consequently doubling the power consumption of the storage infrastructure. The primary advantage of RAID 1 is that it can withstand the failure of one or all of the disks in one mirror of a given RAID set. For some mission-critical environments, the extra cost and power usage characteristic of RAID 1 may be unavoidable. Accessibility to data is sometimes so essential for business operations that the ability to quickly switch from primary storage to its mirror without any RAID reconstruct penalty is an absolute business requirement. Likewise, asynchronous and synchronous data replication provide redundant copies of disk data for high-availability access and are widely deployed as insurance against system or site failure. As shown in Best Practice #1, however, not all data is mission critical, and even high-value data may decrease in value over time. It is therefore essential to determine which applications and data are absolutely required for continuous business operations and thus merit more expensive and less energy-efficient RAID protection.

RAID 5's distributed parity algorithm enables a RAID set to withstand the loss of a single disk drive in a RAID set. In that respect, it offers the basic data protection against disk failure that RAID 1 provides, but only against a single disk failure and with no immediate failover to a mirrored array.
While the RAID set does remain online, a failed disk must be reconstructed from the distributed parity on the surviving drives in the set, possibly impacting performance. Unlike RAID 1, however, RAID 5 only requires one parity drive in a RAID set. Fewer redundant drives mean less energy consumption as well as better utilization of raw capacity.

By dedicating a second parity drive, RAID 6 can withstand the loss of two disk drives in a RAID set, providing higher availability than RAID 5. Both solutions, however, are more energy efficient than RAID 1 mirroring (or RAID 1+0 mirroring and striping) and should be considered for applications that do not require an immediate failover to a secondary array.
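The drive-count difference between these RAID levels is easy to quantify. The sketch below is a rough model, not vendor guidance: group size, drive capacity and the flat 10 W per drive are all assumptions chosen for illustration.

```python
# A rough sketch (not vendor guidance) of how RAID level choice drives
# drive count, and therefore power draw, for the same usable capacity.
import math

def drives_needed(usable_tb, drive_tb, level, group=8):
    """Data drives plus redundancy drives for a target usable capacity.

    RAID 1 mirrors every data drive; RAID 5 adds one parity drive per
    group of `group` data drives; RAID 6 adds two per group.
    """
    data = math.ceil(usable_tb / drive_tb)
    if level == "raid1":
        return data * 2
    parity_per_group = {"raid5": 1, "raid6": 2}[level]
    groups = math.ceil(data / group)
    return data + groups * parity_per_group

# 64 TB usable on 2 TB drives:
for level in ("raid1", "raid5", "raid6"):
    n = drives_needed(64, 2, level)
    print(f"{level}: {n} drives, ~{n * 10} W at an assumed 10 W/drive")
```

For the same 64 TB of usable capacity, RAID 1 spins roughly a third more drives than RAID 6 and nearly twice as many as RAID 5, which is the power argument made above.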
[Figure: bar chart showing that green technologies use less raw capacity to store and use the same data set, and that power consumption falls accordingly. The same workload (data, snapshots, "growth", backup, archive) shrinks from 10 TB of raw capacity on RAID 10 toward roughly 1 TB as RAID 5/6, thin provisioning, multi-use virtual clones and backups, deduplication and compression are applied in turn.]

Figure 1. Software Technologies for Green Storage, © 2008 Storage Networking Industry Association, All Rights Reserved, Alan Yoder, NetApp

As shown in Figure 1, the selection of the appropriate RAID levels to retain high availability data access while reducing the storage hardware footprint can enable incremental green benefits when combined with other technologies.

Best Practice #3: Leverage Storage Virtualization
Storage virtualization refers to a suite of technologies that create a logical abstraction layer above the physical storage layer. Instead of managing individual physical storage arrays, for example, virtualization enables administrators to manage multiple storage systems as a single logical pool of capacity, as shown in Figure 2.
[Figure: before-and-after diagram. Left: three servers each mapped directly to LUNs on individual physical arrays A, B and C. Right: the same servers attached through a SAN to a single virtualized storage pool that aggregates the LUNs of physical arrays A, B and C.]

Figure 2. Storage Virtualization: Technologies for Simplifying Data Storage and Management, T. Clark, Addison-Wesley, used with permission from the author

On its own, storage virtualization is not inherently more energy efficient than conventional storage management but can be used to maximize efficient capacity utilization and thus slow the growth of hardware acquisition. By combining dispersed capacity into a single logical pool, it is now possible to allocate additional storage to resource-starved applications without having to deploy new energy-consuming hardware. Storage virtualization is also an enabling foundation technology for thin provisioning, resizeable volumes, snapshots and other solutions that contribute to more energy efficient storage operations.

Best Practice #4: Use Data Compression
Compression has long been used in data communications to minimize the number of bits sent along a transmission link and in some storage technologies to reduce the amount of data that must be stored. Depending on implementation, compression can impose a performance penalty because the data must be encoded when written and decoded (decompressed) when read. Simply minimizing redundant or recurring bit patterns via compression, however, can reduce the amount of processed data that is stored by one half or more and thus reduce the amount of total storage capacity and hardware required. Not all data is compressible, though, and some data formats have already undergone compression at the application layer.
JPEG, MPEG and MP3 file formats, for example, are already compressed and will not benefit from further compression algorithms when written to disk or tape.
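Both halves of this point can be demonstrated with a general-purpose compressor. In the sketch below, random bytes stand in for already-compressed JPEG/MP3 content, since both look statistically random to a compressor; the payload contents are invented for illustration.

```python
# Sketch of compressible versus pre-compressed data (Best Practice #4).
# Text with recurring patterns shrinks dramatically; random bytes
# (a stand-in for already-compressed JPEG/MP3 content) do not.
import os
import zlib

text = b"backup archive snapshot " * 1000          # highly redundant
already_compressed = os.urandom(len(text))         # no redundancy left

for label, payload in [("text", text), ("jpeg-like", already_compressed)]:
    out = zlib.compress(payload, 6)
    print(f"{label}: {len(payload)} -> {len(out)} bytes "
          f"({len(out) / len(payload):.0%} of original)")
```

The redundant payload collapses to a small fraction of its size, while the random payload stays essentially the same size (or grows slightly from framing overhead), which is why recompressing media files buys nothing.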
When used in combination with security mechanisms such as data encryption, compression must be executed in the proper sequence. Data should be compressed before encryption on writes and decrypted before decompression on reads.

Best Practice #5: Incorporate Data Deduplication
While data compression works at the bit level, conventional data deduplication works at the disk block level. Redundant data blocks are identified and referenced to a single identical data block via pointers so that the redundant blocks do not have to be maintained intact for backup (virtual to disk or actual to tape). Multiple copies of a document, for example, may only have minor changes in different areas of the document while the remaining material in the copies has identical content. Data deduplication also works at the block level to reduce redundancy of identical files. By retaining only unique data blocks and providing pointers for the duplicates, data deduplication can reduce storage requirements by up to 20:1. As with data compression, the data deduplication engine must reverse the process when data is read so that the proper blocks are supplied to the read request.

Data deduplication may be done either in band, as data is transmitted to the storage medium, or in place, on existing stored data. In-band techniques have the obvious advantage that multiple copies of data never get made, and therefore never have to be hunted down and removed. In-place techniques, however, are required to address the immense volume of already stored data that data center managers must deal with.

Best Practice #6: File Deduplication
File deduplication operates at the file system level to reduce redundant copies of identical files. Similar to block-level data deduplication, the redundant copies must be identified and then referenced via pointers to a single file source.
Unlike block-level data deduplication, however, file deduplication lacks the granularity to prevent redundancy of file content. If two files are 99% identical in content, both copies must be stored in their entirety. File deduplication therefore generally provides only a 3:1 or 4:1 reduction in data volume. Rich targets such as full network-based backups of laptops may do much better than this, however.
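The pointer mechanism behind block-level deduplication can be sketched in a few lines. This is a toy model, not any vendor's engine: blocks are keyed by a SHA-256 fingerprint, and the per-file "pointers" are simply lists of fingerprints, as described above.

```python
# Minimal sketch of block-level deduplication (Best Practice #5): store
# each unique 4 KB block once, keyed by its SHA-256 fingerprint, and keep
# per-file lists of fingerprints as the pointers described in the text.
import hashlib

BLOCK = 4096
store = {}                             # fingerprint -> block bytes

def write(data):
    """Chunk, fingerprint and store data; return the file's pointer list."""
    pointers = []
    for i in range(0, len(data), BLOCK):
        blk = data[i:i + BLOCK]
        fp = hashlib.sha256(blk).hexdigest()
        store.setdefault(fp, blk)      # duplicate blocks are stored once
        pointers.append(fp)
    return pointers

def read(pointers):
    """Reverse the process: reassemble the file from its pointers."""
    return b"".join(store[fp] for fp in pointers)

# Two near-identical "documents" share almost all of their blocks.
doc1 = b"A" * BLOCK * 10
doc2 = b"A" * BLOCK * 9 + b"B" * BLOCK
p1, p2 = write(doc1), write(doc2)
print(f"logical: {len(doc1) + len(doc2)} bytes, "
      f"stored: {sum(len(b) for b in store.values())} bytes")
```

Twenty logical blocks reduce to two stored blocks here, which also illustrates why dedup ratios depend so heavily on how repetitive the data set is.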
Best Practice #7: Thin Provisioning of Storage to Servers
In classic server-storage configurations, servers are allocated storage capacity based on the anticipated requirements of the applications they support. Because exceeding that storage capacity over time would result in an application failure, administrators typically over-provision storage to servers. The result of fat provisioning is higher cost, both for the extra storage capacity itself and in the energy required to support additional spinning disks that are not actively used for IT processing.

Thin provisioning is a means to satisfy the application server's expectation of a certain volume size while actually allocating less physical capacity on the storage array or virtualized storage pool. This eliminates the under-utilization issues typical of most applications, provides storage on demand and reduces the total disk capacity required for operations. Fewer disks equate to lower energy consumption and cost, and by monitoring storage usage the storage administrator can add capacity only as required.

Best Practice #8: Leverage Resizeable Volumes
Another approach to increasing capacity utilization and thus reducing the overall disk storage footprint is to implement variable size volumes. Typically, storage volumes are of a fixed size, configured by the administrator and assigned to specific servers. Dynamic volumes, by contrast, can expand or contract depending on the amount of data generated by an application. Resizeable volumes require support from the host operating system and relevant applications, but can increase efficient capacity utilization to 70% or more.
From a green perspective, more efficient use of existing disk capacity means fewer hardware resources over time and a much better energy profile.

Best Practice #9: Writeable Snapshots
Application development and testing are integral components of data center operations and can require significant increases in storage capacity to perform simulations and modeling against real data. Instead of allocating additional storage space for complete copies of live data, snapshot technology can be used to create temporary copies for testing. A snapshot of the active, primary data is supplemented by writing only the data changes incurred by testing. This minimizes the amount of storage space required for testing while allowing the active non-test applications to continue unimpeded.
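The copy-on-write idea behind writeable snapshots can be sketched as follows. This is a toy model for illustration only, not any array's implementation: the snapshot shares the primary volume's blocks and stores just the blocks a test workload writes.

```python
# Toy copy-on-write snapshot (Best Practice #9, a sketch only): the
# snapshot shares the primary volume's blocks and stores just the
# delta blocks written during testing.

class Snapshot:
    def __init__(self, base):
        self.base = base             # primary volume: shared list of blocks
        self.delta = {}              # block index -> test-modified block

    def write(self, idx, block):
        self.delta[idx] = block      # only changed blocks consume space

    def read(self, idx):
        return self.delta.get(idx, self.base[idx])

primary = [b"live-%03d" % i for i in range(1000)]
snap = Snapshot(primary)
snap.write(7, b"test-data")          # a test writes one block

print("snapshot overhead:", len(snap.delta), "of", len(primary), "blocks")
```

A thousand-block "copy" costs one block of real capacity here, and the live volume is never touched, which is why snapshots are so much cheaper than full clones for test data.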
Best Practice #10: Deploy Tiered Storage
Storage systems are typically categorized by their performance, availability and capacity characteristics. Formerly, most application data was stored on a single class of storage system until it was eventually retired to tape for preservation. Today, however, it is possible to migrate data from one class of storage array to another as the business value and accessibility requirements of that data change over time. Tiered storage is a combination of different classes of storage systems and data migration tools that enables administrators to align the value of data to the value of the storage container in which it resides. Because second-tier storage systems typically use slower spinning or less expensive disk drives and have fewer high availability features, they consume less energy compared to first-tier systems. In addition, some larger storage arrays enable customers to deploy both high-performance and moderate-performance disk sets in the same chassis, thus enabling an in-chassis data migration.

A tiered storage strategy can help reduce your overall energy consumption while still making less frequently accessed data available to applications at a lower cost per gigabyte of storage.
In addition, tiered storage is a reinforcing mechanism for data retention policies as data is migrated from one tier to another and then eventually preserved via tape or simply deleted.

Best Practice #11: Solid State Storage
Solid state storage still commands a price premium compared to mechanical disk storage, but has excellent performance characteristics and much lower energy consumption compared to spinning media. While solid state storage may not be an option for some data center budgets, it should be considered for applications requiring high performance and for tiered storage architectures as a top-tier container.

Best Practice #12: MAID and Slow-Spin Disk Technology
High performance applications typically require continuous access to storage and thus assume that all disk sets are spinning at full speed and ready to read or write data. For occasional or random access to data, however, the response time may not be as critical. MAID (massive array of idle disks) technology uses a combination of cache memory and idle disks to service requests, only spinning up disks as required. Once no further requests for data in a specific disk set are made, the drives are once again spun down to idle mode. Because each disk drive represents a power draw, MAID provides inherent
green benefits. As MAID systems are accessed more frequently, however, the energy profile begins to approach that of conventional storage arrays.

Another approach is to put disk drives into slow-spin mode when no requests are pending. Because slower spinning disks require less power, the energy efficiency of slow-spin arrays is inversely proportional to their frequency of access.

Occasionally lengthy access times are inherent to MAID technology, so it is only useful when data access times of several seconds (the length of time it takes a disk to spin up) can be tolerated.

Best Practice #13: Tape Subsystems
As a storage technology, tape is the clear leader in energy efficiency. Once data is written to tape for preservation, the power bill is essentially zero. Unfortunately, however, businesses today cannot simply use tape as their primary storage without inciting a revolution among end users and bringing applications to their knees. Although the obituary for tape technology has been written multiple times over the past decade, tape endures as a viable archive media. From a green standpoint, tape is still the best option for long term data retention.

Best Practice #14: Fabric Design
Fabrics provide the interconnect between servers and storage systems. For larger data centers, fabrics can be quite extensive with thousands of ports in a single configuration. Because each switch or director in the fabric contributes to the data center power bill, designing an efficient fabric should include the energy and cooling impact as well as rational distribution of ports to service the storage network.

A mesh design, for example, typically incorporates multiple switches connected by interswitch links (ISLs) for redundant pathing. Multiple (sometimes 30 or more) meshed switches represent multiple energy consumers in the data center.
Consequently, consolidating the fabric into higher port count and more energy efficient director chassis and a core-edge design can help simplify the fabric design and potentially lower the overall energy impact of the fabric interconnect.

Best Practice #15: File System Virtualization
By some industry estimates, 75% of corporate data resides outside of the data center, dispersed in remote offices and regional centers. This presents a number of issues, including the inability to comply with regulatory requirements for data security and backup, duplication of server and storage resources across the enterprise, management and maintenance of geographically distributed systems and increased
energy consumption for corporate-wide IT assets. File system virtualization includes several technologies for centralizing and consolidating remote file data, incorporating that data into data center best practices for security and backup and maintaining local response time for remote users. From a green perspective, reducing dispersed energy inefficiencies via consolidation helps lower the overall IT energy footprint.

Best Practice #16: Server, Fabric and Storage Virtualization
Data center virtualization leverages virtualization of servers, the fabric and storage to create a more flexible and efficient IT ecosystem. Server virtualization essentially deduplicates processing hardware by enabling a single hardware platform to replace up to 20 platforms. Server virtualization also facilitates mobility of applications so that the proper processing power can be applied to specific applications on demand. Fabric virtualization enables mobility and more efficient utilization of interconnect assets by providing policy-based data flows from servers to storage. Applications that require first-class handling are given a higher quality of service delivery while less demanding application data flows are serviced by less expensive paths. In addition, technologies such as NPIV (N_Port ID Virtualization) reduce the number of switches required to support virtual server connections and emerging technologies such as FCoE (Fibre Channel over Ethernet) can reduce the number of hardware interfaces required to support both storage and messaging traffic. Finally, storage virtualization supplies the enabling foundation technology for more efficient capacity utilization, snapshots, resizeable volumes and other green storage solutions.
By extending virtualization end-to-end in the data center, IT can accomplish more with fewer hardware assets and help reduce data center energy consumption.

File system virtualization can also be used as a means of implementing tiered storage transparently to users through use of a global name space.

Best Practice #17: Flywheel UPS Technology
Flywheel UPSs, while more expensive up front, are several percent more efficient (typically > 97%), easier to maintain, more reliable and do not have the large environmental footprint that conventional battery-backed UPSs do. Forward-looking data center managers are increasingly finding that this technology is less expensive in multiple dimensions over the lifetime of the equipment.
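The "several percent" in Best Practice #17 compounds into real money at data center scale. The arithmetic below is a back-of-envelope sketch: the 500 kW load, the $0.10/kWh tariff and the 92% conventional-UPS efficiency are assumptions chosen for illustration; only the > 97% flywheel figure comes from the text.

```python
# Back-of-envelope sketch of Best Practice #17: UPS efficiency compounds
# into significant cost at scale. Load, tariff and the 92% conventional
# UPS efficiency are assumed values; the 97% figure is from the text.

load_kw = 500.0                    # assumed IT load behind the UPS
tariff = 0.10                      # assumed $/kWh
hours = 24 * 365

def annual_ups_loss_usd(efficiency):
    """Dollar cost of energy lost in the UPS per year at the assumed tariff."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours * tariff

battery = annual_ups_loss_usd(0.92)    # assumed double-conversion UPS
flywheel = annual_ups_loss_usd(0.97)   # flywheel UPS (> 97% per the text)
print(f"battery UPS loss: ${battery:,.0f}/yr, flywheel: ${flywheel:,.0f}/yr")
```

Under these assumptions the five-point efficiency gap cuts UPS losses by roughly two thirds, which is the "less expensive over the lifetime of the equipment" argument in miniature.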
Best Practice #18: Data Center Air Conditioning Improvements
The combined use of economizers and hot-aisle/cold-aisle technology can result in PUEs as low as 1.25. As the PUE (Power Usage Effectiveness) of a traditional data center is often over 2.25, this difference can represent literally millions of dollars a year in energy savings.

Economizers work by using outside air instead of recirculated air when doing so uses less energy. Obviously climate is a major factor in how effective this strategy is: heat and high humidity both reduce its effectiveness.

There are various strategies for hot/cold air containment. All depend on placing rows of racks front to front and back to back. As almost all data center equipment is designed to draw cooled air in the front and eject heated air out the back, this results in concentrating the areas where heat evacuation and cool air supply are located.

One strategy is to isolate only the cold aisles and to run the rest of the room at hot-aisle temperatures. As hot-aisle temperatures are typically in the 95° F range, this has the advantage that little to no insulation is needed in the building skin, and in cooler climates, some cooling is gotten via ordinary thermal dissipation through the building skin.

Another strategy is to isolate both hot and cold aisles. This reduces the volume of air that must be conditioned, and has the advantage that humans will find the building temperature to be more pleasant.

In general, hot-aisle/cold-aisle technologies avoid raised floor configurations, as pumping cool air upward requires extra energy.

Best Practice #19: Increased Data Center Temperatures
Increasing data center temperatures can save significant amounts of energy. The ability to do this depends in large part on excellent temperature and power monitoring capabilities, and on conditioned air containment strategies.
Typical enterprise-class disk drives are rated to 55° C (131° F), but disk lifetime suffers somewhat at these higher temperatures, and most data center managers think it unwise to get very close to that upper limit. Even tightly designed cold-aisle containment measures may have 10 to 15 degree variations in temperature from top to bottom of a rack; the total possible variation plus the maximum measured heat gain across the rack must be subtracted from the maximum tolerated temperature to get a maximum allowable cold
aisle temperature. So the more precisely that air delivery can be controlled and measured, the higher the temperature one can run in the "cold" aisles.

Benefits of higher temperatures include raised chiller water temperatures and efficiency, reduced fan speed, noise and power draw, and increased ability to use outside air for cooling through an economizer.

Best Practice #20: Work with Your Regional Utilities
Some electrical utility companies and state agencies are partnering with customers by providing financial incentives for deploying more energy efficient technologies. If you are planning a new data center or consolidating an existing one, incentive programs can provide guidance for the types of technologies and architectures that will give the best results.

What the SNIA is Doing About Data Center Energy Usage
The SNIA Green Storage Initiative is conducting a multi-pronged approach for advancing energy efficient storage networking solutions, including advocacy, promotion of standard metrics, education, development of energy best practices and alliances with other industry energy organizations such as The Green Grid. Currently, over 20 SNIA members have joined the SNIA GSI as voting members.

A key requirement for customers is the ability to audit their current energy consumption and to take practical steps to minimize energy use. The task of developing metrics for measuring the energy efficiency of storage network elements is being performed by the SNIA Green Storage Technical Work Group (TWG).
The SNIA GSI is supporting the technical work of the GS-TWG by funding the laboratory testing required for metrics development, formulating a common taxonomy for classes of storage and promoting GS-TWG metrics for industry standardization.

The SNIA encourages all storage networking vendors, channels, technologists and end users to actively participate in the green storage initiative and help discover additional ways to minimize the impact of IT storage operations on power consumption. If, as industry analysts forecast, adequate power for many data centers will simply not be available, we all have a vital interest in reducing our collective power requirements and making our technology do far more with far less environmental impact.
For more information about the SNIA Green Storage Initiative, link to:
http://www.snia.org/forums/green/

To view the SNIA GSI Green Tutorials, link to:
http://www.snia.org/education/tutorials#green

About the SNIA
The Storage Networking Industry Association (SNIA) is a not-for-profit global organization, made up of some 400 member companies and 7000 individuals spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market. For additional information, visit the SNIA web site at www.snia.org.

NOTE: The section "Green Storage Terminology" has been omitted from this reprint; however, you can find green storage terms in the "Glossary" on page 141.
• Appendix B: "Online Sources"

ANSI ansi.org
ASHRAE ashrae.com
Blade Systems Alliance bladesystems.org
Brocade brocade.com
Brocade Communities community.brocade.com
Brocade Data Center Virtualization brocade.com/virtualization
Brocade TechBytes brocade.com/techbytes
Climate Savers climatesaverscomputing.org
Data Center Journal datacenterjournal.com
Data Center Knowledge datacenterknowledge.com
Green Storage Initiative snia.org/forums/green
Greener Computing greenercomputing.com
IEEE ieee.org
IETF ietf.org
LEED usgbc.org/DisplayPage.aspx?CMSPageID=222
SNIA snia.org
The Green Grid thegreengrid.org
Uptime Institute uptimeinstitute.org
US Department of Energy - Data Centers www1.eere.energy.gov/industry/saveenergynow/partnering_data_centers.html
• Glossary

Data center network terminology

ACL Access control list, a security mechanism for assigning various permissions to a network device.
AES256-GCM An IEEE encryption standard for data on tape.
AES256-XTS An IEEE encryption standard for data on disk.
ANSI American National Standards Institute
API Application Programming Interface, a set of calling conventions for program-to-program communication.
ASHRAE American Society for Heating, Refrigerating, and Air Conditioning Engineers
ASIC Application-specific integrated circuit, hardware designed for specific high-speed functions required by protocol applications such as Fibre Channel and Ethernet.
Access Gateway A Brocade product designed to optimize storage I/O for blade server frames.
Access layer Network switches that provide direct connection to servers or hosts.
Active power The energy consumption of a system when powered on and under normal workload.
Adaptive Networking Brocade technology that enables proactive changes in network configurations based on defined traffic flows.
Aggregation layer Network switches that provide connectivity between multiple access layer switches and the network backbone or core.
Application server A compute platform optimized for hosting applications for other programs or client access.
ARP spoofing Address Resolution Protocol spoofing, a hacker technique for associating a hacker's Layer 2 (MAC) address with a trusted IP address.
Asynchronous Data Replication For storage, writing the same data to two separate disk arrays based on a buffered scheme that may not capture every data write, typically used for long-distance disaster recovery.
BTU British Thermal Unit, a metric for heat dissipation.
Blade server A server architecture that minimizes the number of components required per blade, while relying on the shared elements (power supply, fans, memory, I/O) of a common frame.
Blanking plates Metal plates used to cover unused portions of equipment racks to enhance air flow.
Bright green Applying new technologies to enhance energy efficiency while maintaining or improving productivity.
CEE Converged Enhanced Ethernet, modifications to conventional 10 Gbps Ethernet to provide the deterministic data delivery associated with Fibre Channel, also known as Data Center Bridging (DCB).
CFC Chlorofluorocarbon, a refrigerant that has been shown to deplete ozone.
Control path In networking, handles configuration and traffic exceptions and is implemented in software. Since it takes more time to handle control path messages, it is often logically separated from the data path to improve performance.
CNA Converged network adapter, a DCB-enabled adapter that supports both FCoE and conventional TCP/IP traffic.
CRAC Computer room air conditioning
Core layer Typically high-performance network switches that provide centralized connectivity for the data center aggregation and access layer switches.
Data compression Bit-level reduction of redundant bit patterns in a data stream via encoding. Typically used for WAN transmissions and archival storage of data to tape.
Data deduplication Block-level reduction of redundant data by replacing duplicate data blocks with pointers to a single good block.
Data path In networking, handles data flowing between devices (servers, clients, storage, and so on).
To keep up with increasing speeds, the data path is often implemented in hardware, typically in ASICs.
Dark green Addressing energy consumption by the across-the-board reduction of energy consuming activities.
DAS Direct-attached storage, connection of disks or disk arrays directly to servers with no intervening network.
DCB Data Center Bridging, enhancements made to Ethernet LANs for use in data center environments, standards developed by IEEE and IETF.
DCC Device Connection Control, a Brocade SAN security mechanism to allow only authorized devices to connect to a switch.
DCiE Data Center Infrastructure Efficiency, a Green Grid metric for measuring IT equipment power consumption in relation to total data center power draw.
Distribution layer Typically a tier in the network architecture that routes traffic between LAN segments in the access layer and aggregates access layer traffic to the core layer.
DMTF Distributed Management Task Force, a standards body focused on systems management.
DoS/DDoS Denial of service/Distributed denial of service, a hacking technique to prevent a server from functioning by flooding it with continuous network requests from rogue sources.
DWDM Dense wave division multiplexing, a technique for transmitting multiple data streams on a single fiber optic cable by using different wavelengths.
Data center A facility to house computer systems, storage and network operations.
ERP Enterprise resource planning, an application that coordinates resources, information and functions of a business across the enterprise.
Economizer Equipment used to treat external air to cool a data center or building.
Encryption A technique to encode data into a form that can't be understood so as to secure it from unauthorized access.
Often, a key is used to encode and decode the data from its encrypted format.
End of row EoR, provides network connectivity for multiple racks of servers by provisioning a high-availability switch at the end of the equipment rack row.
Energy The capacity of a physical system to do work.
Energy efficiency Using less energy to provide an equivalent level of energy service.
Energy Star An EPA program that leverages market dynamics to foster energy efficiency in product design.
Exabyte 1 billion gigabytes
FAIS Fabric Application Interface Standard, an ANSI standard for providing storage virtualization services from a Fibre Channel switch or director.
FCF Fibre Channel forwarder, the function in FCoE that forwards frames between a Fibre Channel fabric and an FCoE network.
FCIP Fibre Channel over IP, an IETF specification for encapsulating Fibre Channel frames in TCP/IP, typically used for SAN extension and disaster recovery applications.
FCoE Fibre Channel over Ethernet, an ANSI standard for encapsulating Fibre Channel frames over Converged Enhanced Ethernet (CEE) to simplify server connectivity.
FICON Fibre Connectivity, a Fibre Channel Layer 4 protocol for mapping legacy IBM transport over Fibre Channel, typically used for distance applications.
File deduplication Reduction of file copies by replacing duplicates with pointers to a single original file.
File server A compute platform optimized for providing file-based data to clients over a network.
Five-nines 99.999% availability, or 5.26 minutes of downtime per year.
Flywheel UPS An uninterruptible power supply technology using a balanced flywheel and kinetic energy to provide transitional power.
Gateway In networking, a gateway converts one protocol to another at the same layer of the networking stack.
GbE Gigabit Ethernet
Gigabit (Gb) 1000 megabits
Gigabyte (GB) 1000 megabytes
Greenwashing A by-product of excessive marketing and ineffective engineering.
GSI Green Storage Initiative, a SNIA initiative to promote energy efficient storage practices and to define metrics for measuring the power consumption of storage systems and networks.
GSLB Global server load balancing, a Brocade ServerIron ADX feature that enables client requests to be redirected to the most available and highest-performance data center resource.
HBA Host bus adapter, a network interface optimized for storage I/O, typically to a Fibre Channel SAN.
HCFC Hydrochlorofluorocarbon, a refrigerant shown to deplete ozone.
HPC High-Performance Computing, typically supercomputers or computer clusters that provide teraflop (10^12 floating point operations) levels of performance.
HVAC Heating, ventilation and air conditioning
Hot aisle/cold aisle The arrangement of data center equipment racks to optimize air flow for cooling in alternating rows.
Hot-swap The ability to replace a hardware component without disrupting ongoing operations.
Hypervisor Software or firmware that enables multiple instances of an operating system and applications (for example, VMs) to run on a single hardware platform.
ICL Inter-chassis link, high-performance channels used to connect multiple Brocade DCX/DCX-4S backbone platform chassis in two- or three-chassis configurations.
Idle power The power consumption of a system when powered on but with no active workload.
IEEE Institute of Electrical and Electronics Engineers, a standards body responsible for, among other things, Ethernet standards.
IETF Internet Engineering Task Force, responsible for TCP/IP de facto standards.
IFL Inter-fabric link, a set of Fibre Channel switch ports (Ex_Port on the router and E_Port on the switch) that can route device traffic between independent fabrics.
IFR Inter-fabric routing, an ANSI standard for providing connectivity between separate Fibre Channel SANs without creating an extended flat Layer 2 network.
ILM Information lifecycle management, a technique for migrating storage data from one class of storage system to another based on the current business value of the data.
Initiator A SCSI device within a host that initiates I/O between the host and storage.
IOPS/W Input/output operations per second per watt.
A metric for evaluating storage I/O performance per fixed unit of energy.iSCSI Internet SCSI, an IETF standard for transporting SCSI block data over conventional TCP/IP networks.iSER iSCSI Serial RDMA, an IETF specification to facilitate direct memory access by iSCSI network adapters.The New Data Center 145
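IOPS/W, like the GB/W metric cited under "Metric" later in this glossary, is a plain ratio of delivered work or capacity to power drawn. A hedged sketch of both calculations (function names are illustrative, not taken from any SNIA specification):

```python
def iops_per_watt(iops: float, watts: float) -> float:
    """Storage I/O performance delivered per watt of power drawn."""
    return iops / watts

def gb_per_watt(capacity_gb: float, watts: float) -> float:
    """Usable storage capacity provided per watt of power drawn."""
    return capacity_gb / watts

# An array sustaining 50,000 IOPS while drawing 2,500 W delivers 20 IOPS/W.
print(iops_per_watt(50_000, 2_500))  # 20.0
```

Comparing such ratios is only meaningful between products in the same storage class, which is why the glossary's "Storage taxonomy" entry ties these metrics to a classification scheme.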
ISL  Inter-switch link, Fibre Channel switch ports (E_Ports) used to provide switch-to-switch connectivity.
iSNS  Internet Simple Name Server, an IETF specification to enable device registration and discovery in iSCSI environments.
Initiator  In storage, a server or host system that initiates storage I/O requests.
kWh  Kilowatt-hour, a unit of electrical usage commonly used by power companies for billing purposes.
LACP  Link Aggregation Control Protocol, an IEEE specification for grouping multiple separate network links between two switches to provide a faster logical link.
LAN  Local area network, a network covering a small physical area, such as a home or office, or small groups of buildings, such as a campus or airport, typically based on Ethernet and/or WiFi.
Lite (or Light) green  Solutions or products that purport to be energy efficient but which have only negligible green benefits.
LUN  Logical unit number, commonly used to refer to a volume of storage capacity configured on a target storage system.
LUN masking  A means to restrict advertisement of available LUNs to prevent unauthorized or unintended storage access.
Layer 2  In networking, a link layer protocol for device-to-device communication within the same subnet or network.
Layer 3  In networking, a routing protocol (for example, IP) that enables devices to communicate between different subnets or networks.
Layer 4–7  In networking, upper-layer network protocols (for example, TCP) that provide end-to-end connectivity, session management, and data formatting.
MAID  Massive array of idle disks, a storage array that only spins up disks to active state when data in a disk set is accessed or written.
MAN  Metropolitan area network, a mid-distance network often covering a metropolitan-wide radius (about 200 km).
MaxTTD  Maximum time to data; for a given category of storage, the maximum time allowed to service a data read or write.
MRP  Metro Ring Protocol, a Brocade value-added protocol to enhance resiliency and recovery from a link or switch outage.
Metadata  In storage virtualization, a data map that associates physical storage locations with logical storage locations.
Metric  A standard unit of measurement, typically part of a system of measurements to quantify a process or event within a given domain. GB/W and IOPS/W are examples of proposed metrics that can be applied for evaluating the energy efficiency of storage systems.
Non-removable media library  A virtual tape backup system with spinning disks and shorter maximum time to data access compared to conventional tape.
NAS  Network-attached storage, use of an optimized file server or appliance to provide shared file access over an IP network.
Near online storage  Storage systems with longer maximum time to data access, typical of MAID and fixed content storage (CAS).
Network consolidation  Replacing multiple smaller switches and routers with larger switches that provide higher port densities, performance, and energy efficiency.
Network virtualization  Technology that enables a single physical network infrastructure to be managed as multiple separate logical networks, or multiple physical networks to be managed as a single logical network.
NPIV  N_Port ID Virtualization, a Fibre Channel standard that enables multiple logical network addresses to share a common physical network port.
OC3  A 155 Mbps WAN link speed.
OLTP  Online transaction processing, commonly associated with business applications that perform transactions with a database.
Online storage  Storage systems with fast data access, typical of most data center storage arrays in production environments.
Open Systems  A vendor-neutral, non-proprietary, standards-based approach for IT equipment design and deployment.
Orchestration  Software that enables centralized coordination between virtualization capabilities in the server, storage, and network domains to automate data center operations.
PDU  Power distribution unit, a system that distributes electrical power, typically stepping down the higher input voltage to voltages required by end equipment. A PDU can also be a single-inlet/multi-outlet device within a rack cabinet.
Petabyte  1000 terabytes.
PoE/PoE+  Power over Ethernet, IEEE standards for powering IP devices such as VoIP phones over Ethernet cabling.
Port  In Fibre Channel, a port is the physical connection on a switch, host, or storage array. Each port has a personality (N_Port, E_Port, F_Port, and so on), and the personality defines the port's function within the overall Fibre Channel protocol.
QoS  Quality of service, a means to prioritize network traffic on a per-application basis.
RAID  Redundant array of independent disks, a storage technology for expediting reads and writes of data to disks and/or providing data recovery in the event of disk failure.
Raised floor  Typical of older data center architecture, a raised floor provides space for cable runs between equipment racks and cold air flow for equipment cooling.
RBAC  Role-based access control, network permissions based on defined roles or work responsibilities.
Removable media library  A tape or optical backup system with removable cartridges or disks and >80 ms maximum time to data access.
Resizeable volumes  Variable-length volumes that can expand or contract depending on the data storage requirements of an application.
RPO  Recovery point objective, defines how much data is lost in a disaster.
RSCN  Registered state change notification, a Fibre Channel fabric feature that enables notification of storage resources leaving or entering the SAN.
RTO  Recovery time objective, defines how long data access is unavailable in a disaster.
RSTP  Rapid Spanning Tree Protocol, a bridging protocol that replaces conventional STP and enables an approximately 1-second recovery in the event of a primary link failure.
SAN  Storage area network, a shared network infrastructure deployed between servers, disk arrays, and tape subsystems, typically based on Fibre Channel.
SAN boot  Firmware that enables a server to load its boot image across a SAN.
SCC  Switch Connection Control, a Brocade SAN security mechanism to allow only authorized switch-to-switch links.
SI-EER  Site Infrastructure Energy Efficiency Ratio, a formula developed by The Uptime Institute to calculate total data center power consumption in relation to IT equipment power consumption.
SLA  Service-level agreement, typically a contracted assurance of response time or performance of an application.
SMB  Small and medium business, companies typically with fewer than 1000 employees.
SMI-S  Storage Management Initiative Specification, a SNIA standard based on CIM/WBEM for managing heterogeneous storage infrastructures.
SNIA  Storage Networking Industry Association, a standards body focused on data storage hardware and software.
SNS  Simple name server, a Fibre Channel switch feature that maintains a database of attached devices and capabilities to streamline device discovery.
Solid state storage  A storage device based on flash or other static memory technology that emulates conventional spinning disk media.
SONET  Synchronous Optical Networking, a WAN technology for multiplexing multiple protocols over a fiber optic infrastructure.
Server  A compute platform used to host one or more applications for client access.
Server platform  Hardware (typically CPU, memory, and I/O) used to support file or application access.
Server virtualization  Software or firmware that enables multiple instances of an operating system and applications to be run on a single hardware platform.
sFlow  An IETF specification for performing network packet captures at line speed for diagnostics and analysis.
Single Initiator Zoning  A method of securing traffic on a Fibre Channel fabric so that only the storage targets used by a host initiator can connect to that initiator.
Snapshot  A point-in-time copy of a data set or volume used to restore data to a known good state in the event of data corruption or loss.
SPOF  Single point of failure.
Storage taxonomy  A hierarchical categorization of storage networking products based on capacity, availability, port count, and other attributes. A storage taxonomy is required for the development of energy efficiency metrics so that products in a similar class can be evaluated.
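Assuming SI-EER relates total site power to IT equipment power as a simple ratio (the Uptime Institute's exact formulation may differ), the calculation can be sketched as:

```python
def si_eer(total_site_kw: float, it_equipment_kw: float) -> float:
    """Ratio of total site power to IT equipment power; 1.0 is the ideal.

    The gap above 1.0 represents power consumed by cooling, power
    distribution, and other support infrastructure rather than by IT gear.
    """
    return total_site_kw / it_equipment_kw

# A site drawing 1,800 kW overall to run 1,000 kW of IT equipment:
print(si_eer(1800.0, 1000.0))  # 1.8
```

The function name and the interpretation as a plain quotient are this sketch's assumptions; the glossary entry states only that the formula relates the two power figures.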
Storage virtualization  Technology that enables multiple storage arrays to be logically managed as a single storage pool.
Synchronous Data Replication  For storage, writing the same data to two separate storage systems on a write-by-write basis so that identical copies of current data are maintained, typically used for metro-distance disaster recovery.
T3  A 45 Mbps WAN link speed.
Target  A SCSI target within a storage device that communicates with a host SCSI initiator.
TCP/IP  Transmission Control Protocol/Internet Protocol, used to move data in a network (IP) and to move data between cooperating computer applications (TCP). The Internet commonly relies on TCP/IP.
Terabyte  1000 gigabytes.
Thin provisioning  Allocating less physical storage to an application than is indicated by the virtual volume size.
Tiers  Often applied to storage to indicate different cost/performance characteristics and the ability to dynamically move data between tiers based on a policy such as ILM.
ToR  Top of rack, provides network connectivity for a rack of equipment by provisioning one or more switches in the upper slots of each rack.
TRILL  Transparent Interconnection of Lots of Links, an emerging IETF standard to enable multiple active paths through an IP network infrastructure.
Target  In storage, a storage device or system that receives and executes storage I/O requests from a server or host.
Three-tier architecture  A network design that incorporates access, aggregation, and core layers to accommodate growth and maintain performance.
Top Talkers  A Brocade technology for identifying the most active initiators in a storage network.
Trunking  In Fibre Channel, a means to combine multiple inter-switch links (ISLs) to create a faster virtual link.
TWG  Technical Working Group, commonly formed to define open, publicly available technology standards.
Type 1 virtualization  Server virtualization in which the hypervisor runs directly on the hardware.
Type 2 virtualization  Server virtualization in which the hypervisor runs inside an instance of an operating system.
U  A unit of vertical space (1.75 inches) used to measure how much rack space a piece of equipment requires, sometimes expressed as RU (rack unit).
UPS  Uninterruptible power supply.
uRPF  Unicast Reverse Path Forwarding, an IETF specification for blocking packets from unauthorized network addresses.
VCS  Virtual Cluster Switching, a new class of Brocade-developed technologies that overcomes the limitations of conventional Ethernet networking by applying non-stop operations, any-to-any connectivity, and the intelligence of fabric switching.
VLAN  Virtual LAN, an IEEE standard that enables multiple hosts to be configured as a single network regardless of their physical location.
VM  Virtual machine, one of many instances of a virtual operating system and applications hosted on a physical server.
VoIP  Voice over IP, a method of carrying telephone traffic over an IP network.
VRF  Virtual Routing and Forwarding, a means to enable a single physical router to maintain multiple separate routing tables and thus appear as multiple logical routers.
VRRP  Virtual Router Redundancy Protocol, an IETF specification that enables multiple routers to be configured as a single virtual router to provide resiliency in the event of a link or route failure.
VSRP  Virtual Switch Redundancy Protocol, a Brocade value-added protocol to enhance network resilience and recovery from a link or switch failure.
Virtual Fabrics  An ANSI standard to create separate logical fabrics within a single physical SAN infrastructure, often spanning multiple switches.
Virtualization  Technology that provides a logical abstraction layer between the administrator or user and the physical IT infrastructure.
WAN  Wide area network, commonly able to span the globe. WAN networks commonly employ TCP/IP networking protocols.
WWN  World Wide Name, a unique 64-bit identifier assigned to a Fibre Channel initiator or target.
Work cell  A unit of rack-mounted IT equipment used to calculate energy consumption, developed by Intel.
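Because a WWN is a 64-bit value, it is conventionally displayed as eight colon-separated hexadecimal bytes. A small illustrative sketch (the helper name and the sample value are hypothetical, not drawn from any real fabric):

```python
def format_wwn(value: int) -> str:
    """Render a 64-bit World Wide Name as eight colon-separated hex bytes."""
    return ":".join(f"{(value >> shift) & 0xFF:02x}"
                    for shift in range(56, -8, -8))

print(format_wwn(0x100000051E0012AB))  # 10:00:00:05:1e:00:12:ab
```

This byte-wise rendering matches how WWNs typically appear in switch name server output and zoning configurations.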
Zettabyte  1000 exabytes.
Zoning  A Fibre Channel standard for assigning specific initiators and targets as part of a separate group within a shared storage network infrastructure.
Index

Symbols
"Securing Fibre Channel Fabrics" by Roger Bouchard 55

A
access control lists (ACLs) 27, 57
Access Gateway 22, 28
access layer 71
  cabling 72
  oversubscription 72
Adaptive Networking services 48
Address Resolution Protocol (ARP) spoofing 78
aggregation layer 71
  functions 74
air conditioning 5
air flow systems 5
ambient temperature 10, 14
American National Standards Institute T11.5 84
ANSI/INCITS T11.5 standard 41
ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers 2, 3
application delivery controllers 80
  performance 82
application load balancing 81, 85
ASHRAE Thermal Guidelines for Data Processing Environments 10
asynchronous data replication 65
Automatic Migration of Port Profiles (AMPP) 120

B
backup 59
Bidirectional Forwarding Detection (BFD) 77
blade servers 21
  storage access 28
  VMs 22
blade.org 22
blanking plates 13
boot from SAN 24
boot LUN discovery 25
Brocade Management Pack for Microsoft Service Center Virtual Machine Manager 86
Brocade Network Advisor 119
Brocade One 117
Brocade Virtual Access Layer (VAL) 118
Brocade Virtual Cluster Switching (VCS) 118
BTU (British Thermal Units) per hour (h) 10

C
CFC (chlorofluorocarbon) 14
computer room air conditioning (CRAC) 4, 14
consolidation
  data centers 70
  server 21
converged fabrics 118, 119
cooling 14
cooling towers 15
core layer 71
  functions 74
customer-centric approach 117
D
dark fiber 65, 67
Data Center Bridging (DCB) 119
data center consolidation 46, 48
data center evolution 117
Data Center Infrastructure Efficiency (DCiE) 6
data center LAN
  bandwidth 69
  consolidation 76
  design 75
  infrastructure 70
  security 77
  server platforms 72
data encryption 56
data encryption for data-at-rest 27
decommissioned equipment 13
dehumidifiers 14
denial of service (DoS) attacks 77
dense wavelength division multiplexing (DWDM) 65, 67
Device Connection Control (DCC) 57
disaster recovery (DR) 65
distance extension 65
  technologies 66
distributed DoS (DDoS) attacks 77
Distributed Management Task Force (DMTF) 19, 84
dry-side economizers 15

E
economizers 14
EMC Invista software 44, 87
Emerson Power survey 1
encryption 56
  data-in-flight 27
encryption keys 56
energy efficiency 7
  Brocade DCX 54
  new technology 70
  product design 53, 79
Environmental Protection Agency (EPA) 10
EPA Energy Star 17
Ethernet networks 69
external air 14

F
F_Port Trunking 28
Fabric Application Interface Standard (FAIS) 41, 84
fabric management 119
fabric-based security 55
fabric-based storage virtualization 41
fabric-based zoning 26
fan modules 53
FastWrite acceleration 66
Fibre Channel over Ethernet (FCoE)
  compared to iSCSI 61
Fibre Channel over IP (FCIP) 62
FICON acceleration 66
floor plan 11
forwarding information base (FIB) 78
frame redirection in Brocade FOS 57
Fujitsu fiber optic system 15

G
Gartner prediction 1
Gigabit Ethernet 59
global server load balancing (GSLB) 82
Green Storage Initiative (GSI) 53
Green Storage Technical Working Group (GS TWG) 53

H
HCFC (hydrochlorofluorocarbon) 14
high-level metrics 7
Host bus adapters (HBAs) 23
hot aisle/cold aisle 11
HTTP (HyperText Transfer Protocol) 80
HTTPS (HyperText Transfer Protocol Secure) 80
humidifiers 14
humidity 10
humidity probes 15
hypervisor 18
  secure access 19
I
IEEE
  AES256-GCM encryption algorithm for tape 56
  AES256-XTS encryption algorithm for disk 56
information lifecycle management (ILM) 39
ingress rate limiting (IRL) 49
Integrated Routing (IR) 63
Intel x86 18
intelligent fabric 48
inter-chassis links (ICLs) 28
Invista software from EMC 44
IP address spoofing 78
IP network links 66
IP networks
  layered architecture 71
  resiliency 76
iSCSI 58
  Serial RDMA (iSER) 60
IT processes 83

K
key management solutions 57

L
Layer 4–7 70
Layer 4–7 switches 80
link congestion 49
logical fabrics 63
long-distance SAN connectivity 67

M
management framework 85
measuring energy consumption 12
metadata mapping 42, 43
Metro Ring Protocol (MRP) 77
Multi-Chassis Trunking (MCT) 120

N
N_Port ID Virtualization (NPIV) 24, 28
N_Port Trunking 23
network health monitoring 85
network segmentation 78

O
open systems approach 84
Open Virtual Machine Format (OVF) 84
outside air 14
ozone 14

P
particulate filters 14
Patterson and Pratt research 12
power consumption 70
power supplies 53
preferred paths 50

Q
quality of service
  application tiering 49
Quality of Service (QoS) 24, 26

R
Rapid Spanning Tree Protocol (RSTP) 77
recovery point objective (RPO) 65
recovery time objective (RTO) 65
refrigerants 14
registered state change notification (RSCN) 63
RFC 3176 standard 77
RFC 3704 (uRPF) standard 78
RFC 3768 standard 76
role-based access control (RBAC) 27
routing information base (RIB) 78

S
SAN boot 24
SAN design 45, 46
  storage-centric design 48
security
  SAN 55
  SAN security myths 55
  Web applications 81
security solutions 27
Server and StorageIO Group 78
server virtualization 18
  IP networks 69
  mainstream 86
  networking complement 79
service-level agreements (SLAs) 27
  network 80
sFlow
  RFC 3176 standard 77
simple name server (SNS) 60, 63
Site Infrastructure Energy Efficiency Ratio (SI-EER) 5
software as a service (SaaS) 70
Spanning Tree Protocol (STP) 73
standardized units of joules 9
state change notification (SCN) 45
Storage Application Services (SAS) 87
Storage Networking Industry Association (SNIA) 53
  Green Storage Power Measurement Specification 53
  Storage Management Initiative (SMI) 84
storage virtualization 35
  fabric-based 41
  metadata mapping 38
  tiered data storage 40
support infrastructure 4
Switch Connection Control (SCC) 57
synchronous data replication 65
Synchronous Optical Networking (SONET) 65

T
tape pipelining algorithms 66
temperature probes 15
The Green Grid 6
tiered data storage 40
Top Talkers 26, 51
top-of-rack access solution 73
traffic isolation (TI) 51
traffic prioritization 26
Transparent Interconnection of Lots of Links (TRILL) 119

U
Unicast Reverse Path Forwarding (uRPF) 78
UPS systems 3
Uptime Institute 5

V
variable speed fans 12
Virtual Cluster Switching (VCS)
  architecture 122
Virtual Fabrics (VF) 62
virtual IPs (VIPs) 79
virtual LUNs 37
virtual machines (VMs) 17
  migration 86, 120
  mobility 20
Virtual Router Redundancy Protocol (VRRP) 76
Virtual Routing and Forwarding (VRF) 78
virtual server pool 20
Virtual Switch Redundancy Protocol (VSRP) 77
virtualization
  network 79
  orchestration 84
  server 18
  storage 35
Virtualization Management Initiative (VMAN) 19
VM mobility
  IP networks 70
VRRP Extension (VRRPE) 76

W
wet-side economizers 15
work cell 12
World Wide Name (WWN) 25