Attachment C.1. – Data Center Standard.doc


(Attach C.1)

State of Iowa Data Center Standard
October 8, 2009

Purpose: To provide a data center standard that protects critical computing infrastructure from risks associated with loss of power, fire, unmanaged temperature, or unauthorized access.

Overview: This standard is intended to apply to all State of Iowa data centers as defined below. The intent of this standard is to reduce risk and increase the longevity of critical network assets. Several Iowa agency network engineers conducted research and toured both government and private data centers to provide state agencies with the following data center standard practices and best practices.

Scope: For the purpose of this standard, all State of Iowa participating agencies, boards, or commissions operating a data center facility will ensure the proper management, risk mitigation, redundancy, and reliability of the following data center areas:
• Power
• Physical Security
• HVAC
• Fire Suppression
• Cable Management

Agencies will be required to comply with the provisions stated in the standard practice section of this standard no later than June 30, 2010. The Technology Governance Board (TGB) has the authority to determine entity compliance or non-compliance with this standard. Failure to comply with this standard will result in a review by the TGB.

Updates: This document will be reviewed at least every two years and updated as needed.

Definitions: Selected terms used in the Data Center Standard are defined below:
• Agency – means any agency as listed in Iowa Code Chapter 8A, Section 201, paragraph 4.
• Best Practice – a technique, method, process, or activity that is believed to be effective at delivering a particular outcome. Best practices noted in this document are viewed as recommendations, not requirements.
• Critical IT infrastructure – infrastructure supporting business services designated for restoration within 72 hours in an agency's disaster recovery plan.
• Data Center – a facility dedicated to the purpose of securing data and systems and used to house network server systems and associated components. It includes networked servers, controlled access, environmental controls such as air conditioning and fire suppression, power and electrical systems, and networking equipment. The threshold for what facilities are considered data centers is provided below:

Space Type – Typical Site Infrastructure System Characteristics

Localized data center – Typically uses under-floor or overhead air distribution systems and a few in-room computer room air conditioner (CRAC) units. CRAC units in localized data centers are more likely to be air cooled and have constant-speed fans, and are thus relatively low efficiency. Operational staff is likely to be minimal, which makes it likely that equipment orientation and airflow management are not optimized. Air temperature and humidity are tightly monitored. However, power and cooling redundancy reduce overall system efficiency.

Mid-tier data center – Typically uses under-floor air distribution and in-room CRAC units. The larger size of the center relative to those listed above increases the probability that efficient cooling, e.g., a central chilled water plant and central air handling units with variable-speed fans, is used. Staff at this size data center may be aware of equipment orientation and airflow management best practices. However, power and cooling redundancy may reduce overall system efficiency.

Enterprise-class data center – The most efficient equipment is expected to be found in these large data centers. Along with efficient cooling, these data centers may have energy management systems. Equipment orientation and extensive airflow management best practices are most likely implemented. However, enterprise-class data centers are designed with maximum redundancy, which can reduce the benefits gained from the operational and technological efficiency measures.

• Environmental Stability – refers to the controls for fire suppression, temperature, humidity, and air quality.
• Networking and data cabling – terminology pertaining to the installation and maintenance of twisted-pair and optical fiber cabling.
• Physical Security – describes both measures that prevent or deter attackers from accessing a facility, resource, or information stored on physical media, and guidance on how to design structures to resist various hostile acts.
• Power and Electrical Systems – terminology relating to reliable, conditioned power that is provided for computer and networking systems located within a data center.
• Standard Practice – a technique, method, process, or activity that is believed to be effective at delivering a particular outcome. Standard practices noted in this document are viewed as requirements, not recommendations.
• Visitor – any non-authorized state personnel, non-authorized vendors, or members of the general public using or touring State of Iowa facilities.

Data Center Standard Practices: State of Iowa data center standard practices require that:
1. The following physical security practices be implemented:
   a. Barriers shall exist that restrict access to data center rooms;
   b. Physical access shall be restricted to selected personnel, with an auditable physical security process using security card access. If a security card system is not present, room(s) shall be secured by key or keypad system. A key system shall have an audited checkout process;
   c. Access shall be restricted to employees and vendors who need to maintain equipment or infrastructure in the room(s). An escort is required for all visitors and vendors to the room(s). In addition, visitors and vendors shall be given a physical access token (badge or access device) that identifies them as non-employees;
   d. Whenever practical, critical IT infrastructure should reside inside data centers. It is not the intent to apply this standard to non-critical servers, network infrastructure, or communication assets located inside unimproved utility closets;
   e. If the site is subject to Payment Card Industry (PCI) rules and requirements, video cameras shall be used to monitor sensitive areas. Recorded video shall be retained for a minimum of three months.
2. The following environmental stability practices be implemented:
   a. Smoke detectors and sprinkler systems or clean agent (gaseous) fire suppression systems are required;
   b. Monitoring, alarming, and alerting shall be in effect in case of fire, and all fire suppression systems must be installed and maintained in accordance with local fire code;
   c. Air handling equipment must supply sufficient cooling and humidity control to meet the most restrictive cooling and humidity specifications of the equipment residing within the data center;
   d. Storage of flammable or combustible materials (e.g., wood, cardboard and corrugated paper, plastic or foam packing materials, flammable liquids or solvents) shall not be allowed in the room(s).
3. The following power and electrical system practices be implemented:
   a. All devices, including servers, networking equipment, etc., shall be protected by conditioned power and a UPS sufficient to maintain power until power is restored through commercial power or generator backup (see the sizing sketch after this list);
   b. Cabinets and racks shall be properly grounded, in accordance with existing commercial building grounding and bonding standards.
4. The following networking and data cabling practices be implemented:
   a. Data cabling shall be installed and tested in accordance with industry standards and best practices listed in the ANSI/TIA-568 family of Telecommunications Standards;
   b. Data cabling routed outside of cabinets shall be protected and contained, using solutions such as cable trays, flexible conduit, J-hooks, etc.;
   c. Data cabling routed within or between bayed cabinets shall be done in a manner so as not to inhibit air flow through the cabinet. Cabling within a cabinet shall be dressed in such a way as to enhance air flow through the cabinet;
   d. Twisted-pair and fiber panels shall be labeled, and all cables shall be labeled at both ends, including twisted-pair and fiber patch cords;
   e. Cabling, cable lengths, and terminations shall meet current BICSI cabling and termination standards.
5. Waivers to the standard may be granted using current Iowa Administrative Code Chapter 25, Section 11-25.6 (8A).
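Standard practice 3.a above requires a UPS that can carry the load until commercial or generator power returns, but leaves sizing to each agency. The following is a minimal sizing sketch, assuming a constant hypothetical IT load, usable battery capacity, and generator transfer time; it is illustrative only and is not part of the standard.

```python
# Hypothetical UPS runtime check; a sketch only, not part of the standard.
# Real sizing should use vendor runtime curves, battery-aging derates, and the
# actual measured load rather than the assumed figures below.

def ups_runtime_minutes(battery_kwh: float, it_load_kw: float,
                        inverter_efficiency: float = 0.92) -> float:
    """Estimate how long a battery bank can carry a constant IT load, in minutes."""
    draw_kw = it_load_kw / inverter_efficiency  # power actually drawn from the batteries
    return (battery_kwh / draw_kw) * 60.0

if __name__ == "__main__":
    battery_kwh = 40.0          # assumed usable battery capacity
    it_load_kw = 60.0           # assumed critical IT load
    generator_start_min = 2.0   # assumed generator start and transfer time

    runtime = ups_runtime_minutes(battery_kwh, it_load_kw)
    margin = runtime - generator_start_min
    print(f"Estimated runtime: {runtime:.1f} min; margin over generator start: {margin:.1f} min")
    if margin < 5.0:
        print("Margin under 5 minutes; consider more battery capacity or faster transfer.")
```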
Data Center Best Practices: State of Iowa data center best practices recommend that:
1. The following physical security practices be implemented:
   a. Video camera surveillance and security escorts should be considered in cases where large data centers contain sensitive information;
   b. Gates or gate-like systems should be used above dropped ceilings and below raised floors to deny access into false floor/ceiling space;
   c. Biometric identification systems and processes are recommended for access to highly sensitive areas of a data center;
   d. Where possible, mantraps should be established to segment areas of the data center, with location-based access only;
   e. Limit or avoid windows in the room(s);
   f. Food and drink should not be allowed.
2. The following environmental stability practices be implemented:
   a. Redundant (N+1) cooling is recommended, and the use of outside air should be considered to augment cooling systems and help economize cooling;
   b. A clean agent fire-suppression system such as FM-200 is recommended, where possible;
   c. Monitoring, alarming, and alerting should be in effect for temperature and humidity threshold violations and failures (a minimal monitoring sketch follows this list);
   d. Monitoring, alarming, and alerting are recommended for water detection;
   e. Blanking panels should be placed in cabinets to help direct air flow through rack-mounted devices;
   f. Temperature and humidity should be measured against the required ranges at multiple entry points on equipment racks and at the ventilation output ducts.
3. The following power and electrical system practices be implemented:
   a. The target for power availability should be 100 percent, and that target should guide decision making on UPS and power distribution;
   b. Monitoring, alarming, and alerting should be in effect for UPS threshold violations and failures, and for power or breaker failures;
   c. Room-level PDUs should be protected by room UPS;
   d. Cabinet-level PDUs should be protected by either room or cabinet UPS.
4. The following networking and data cabling practices be implemented:
   a. Data cabling installers should make a best effort to maintain neat and easily identifiable cabling systems, in order to support debugging and documentation efforts;
   b. Data cabling exterior to a cabinet should be routed through overhead cable trays, where possible, and twisted-pair and fiber cabling should be segregated within such trays;
   c. Data cabling installers should test all newly installed cables, and test results should be provided to the customer in electronic form.
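Best practice 2.c above calls for threshold-based alerting without prescribing a mechanism. The sketch below is one minimal shape such a check could take; the threshold values, sensor readings, and alert destination are hypothetical placeholders rather than anything specified by this standard.

```python
# Hypothetical environmental threshold check; a sketch only. The thresholds,
# sensor readings, and print-based "alerting" are placeholders for whatever
# monitoring hardware and notification channel an agency actually uses.

from dataclasses import dataclass
from typing import List

@dataclass
class Thresholds:
    temp_f_min: float = 64.0    # assumed alert band, tighter than equipment limits
    temp_f_max: float = 80.0
    humidity_min: float = 30.0  # percent relative humidity
    humidity_max: float = 60.0

def check_reading(temp_f: float, humidity_pct: float, t: Thresholds) -> List[str]:
    """Return alert messages for readings outside the configured band."""
    alerts = []
    if not (t.temp_f_min <= temp_f <= t.temp_f_max):
        alerts.append(f"Temperature out of range: {temp_f:.1f} F")
    if not (t.humidity_min <= humidity_pct <= t.humidity_max):
        alerts.append(f"Relative humidity out of range: {humidity_pct:.1f} %")
    return alerts

if __name__ == "__main__":
    # Example readings; in practice these would come from rack-inlet sensors.
    for alert in check_reading(temp_f=83.2, humidity_pct=22.5, t=Thresholds()):
        print("ALERT:", alert)  # stand-in for paging or ticketing integration
```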
Data Center Standard Appendices

Related Reference Materials: When implementing this standard, please reference the following materials used to create the best and standard practices:
1. Physical security practices:
   a. ANSI/BICSI-002 (released December 2009)
   b. Data Center Physical Security Checklist (SANS) (see Appendix A)
   c. 19 Ways to Build Physical Security into a Data Center (CSO) (see Appendix B)
   d. Let's get physical: Data center security (SearchCIO) (see Appendix C)
2. Environmental stability practices:
   a. Local Temperature Control in Data Center Cooling (ZDNet), http://www.hpl.hp.com/techreports/2006/HPL-2006-42.pdf
3. Power and electrical system practices:
   a. ANSI/NECA/BICSI-607 (released August 2009)
   b. J-STD-607-A
   c. Guidelines for Specification of Data Center Power Density (APC), http://www.apcmedia.com/salestools/NRAN-69ANM9_R0_EN.pdf
   d. Crash Course: Data Center Power (Power Management) (see Appendix D)
4. Networking and data cabling:
   a. ANSI/TIA/EIA-568-B series
   b. ANSI/TIA/EIA-569-B
   c. ANSI/TIA/EIA-942
   d. Siemon Network Cabling Standards Guide (Siemon) (see Appendix E)
   e. Building Industry Consulting Service International (www.bicsi.org)
5. Data Center Facility Definitions (see Appendix F)

Appendix A. Data Center Physical Security Checklist (SANS)

This checklist is not a comprehensive physical security checklist. It merely provides a reasonable starting point in regards to physical security for a data center. Always obtain written permission from proper management before performing security testing of any kind. Ensure that all the testing performed (physical penetration, fire control, social engineering) is outlined explicitly in the permission received from management. Data center management may require that a non-disclosure agreement be signed because of the potential exposure of security procedures. This checklist, as designed, only covers the physical aspects of your security setup. You will need other checklists to secure networks, operating systems, applications, and other potential targets.

Using the checklist
The checklist is broken into two sections, property and people. Property includes, but is not limited to, the building, infrastructure, servers, laptops, and data. People is further broken down into users and outsiders. Users are employees, clients, and others who need access to business data. Outsiders are those who are not directly employed by the business. Cleaning crews, security guards, and service engineers are examples of outsiders.

Property Section - Place a check by each item that passes.

1.1 Site Location

____ 1.1.1 Natural Disaster Risks
The site location SHOULD be where the risk of natural disasters is acceptable. Natural disasters include but are not limited to forest fires, lightning storms, tornadoes, hurricanes, earthquakes, and floods.

____ 1.1.2 Man-Made Disaster Risks
The site location SHOULD be in an area where the possibility of man-made disaster is low. Man-made disasters include but are not limited to plane crashes, riots, explosions, and fires. The site SHOULD NOT be adjacent to airports, prisons, freeways, stadiums, banks, refineries, pipelines, tank farms, or parade routes.

____ 1.1.3 Infrastructure
The electrical utility powering the site SHOULD have 99.9% or better reliability of service. Electricity MUST be received from two or more separate substations, preferably attached to two separate power plants. Water SHOULD be available from more than one source. Using well water as a contingency SHOULD be an option. There MUST be connectivity to more than one access provider at the site.

____ 1.1.4 Sole Purpose
A data center SHOULD NOT share the same building with other offices, especially offices not owned by the organization. If space must be shared due to cost, then the data center SHOULD NOT have walls adjacent to other offices.

1.2 Site Perimeter

____ 1.2.1 Perimeter
There SHOULD be a fence around the facility at least 20 feet from the building on all sides. There SHOULD be a guard kiosk at each perimeter access point. There SHOULD be an automatic authentication method for data center employees (such as a badge reader reachable from a car). The area surrounding the facility MUST be well lit and SHOULD be free of obstructions that would block surveillance via CCTV cameras and patrols. Where possible, parking spaces should be a minimum of 25 feet from the building to minimize damage from car bombs. There SHOULD NOT be a sign advertising that the building is in fact a data center or what company owns it.

____ 1.2.2 Surveillance
There SHOULD be CCTV cameras outside the building monitoring parking lots and neighboring property. There SHOULD be guards patrolling the perimeter of the property. Vehicles belonging to data center employees, contractors, guards, and cleaning crew should have parking permits. Service engineer and visitor vehicles should be parked in visitor parking areas. Vehicles not fitting either of these classifications should be towed.

____ 1.2.3 Outside Windows and Computer Room Placement
The site MUST NOT have windows to the outside placed in computer rooms. Such windows could provide access to confidential information via Van Eck radiation and a greater vulnerability to HERF gun attacks. The windows also cast sunlight on servers, unnecessarily introducing heat into the computer rooms. Computer rooms SHOULD be within the interior of the data center. If a computer room must have a wall along an outside edge of the data center, there SHOULD be a physical barrier preventing close access to that wall.

____ 1.2.4 Access Points
Loading docks and all doors on the outside of the building should have some automatic authentication method (such as a badge reader).
Each entrance should have a mantrap (except for the loading dock), a security kiosk, physical barriers (concrete barricades), and CCTV cameras to ensure each person entering the facility is identified. Engineers and cleaning crew requiring badges to enter the building MUST be required to produce picture ID in exchange for the badge allowing access. A log of equipment being placed in and removed from the facility must be kept at each guard desk, listing what equipment was removed, when, and by whom. Security kiosks SHOULD have access to read the badge database. The badge database SHOULD have pictures of each user and their corresponding badge. Badges MUST be picture IDs.

1.3 Computer Rooms

____ 1.3.1 Access
There SHOULD be signs at the door(s) marking the room as restricted access and prohibiting food, drink, and smoking in the computer room. There SHOULD be an automatic authentication method at the entrance to the room (such as a badge reader). Doors should be fireproof. There SHOULD be only two doors to each computer room (one door without windows is probably a violation of fire code). Access should be restricted to those who need to maintain the servers or infrastructure of the room. Access should be restricted to emergency access only during moratoriums for holidays. Service engineers MUST additionally go to the NOC to obtain badges for computer room access.

____ 1.3.2 Infrastructure
Computer rooms should be monitored by CCTV cameras. Each computer room SHOULD have redundant access to power, cooling, and networks. There should be at least an 18" access floor to provide for air flow and cable management. Computer rooms should have air filtration. Computer rooms should have high ceilings to allow for heat dispersal. (Level, 1)

____ 1.3.3 Environment
Each computer room SHOULD have a temperature between 55 and 75 degrees Fahrenheit and a humidity between 20 and 80 percent. (Safeguarding, 5:2) Environmental sensors should log the temperature and humidity of the room and report it to the NOC for monitoring and trend analysis. (Level, 1)

____ 1.3.4 Fire Prevention
There SHOULD be a Halon or other total flooding agent solution in place in each computer room. There MUST be fire extinguishers located in each computer room. There MUST be emergency power off switches inside each computer room. There MAY be respirators in computer rooms. There MUST NOT be wet pipe sprinkler systems installed.

____ 1.3.5 Shared Space
If the space is being leased, then the computer room will probably be shared space. A clause should be entered into the lease stating that competitors of the business may not have equipment located in the same computer room. Lists of clients utilizing the same room should be monitored to ensure compliance. Computer equipment in shared spaces MUST at a minimum be in a locked cabinet.

1.4 Facilities

____ 1.4.1 Cooling Towers
There MUST be redundant cooling towers. Cooling towers MUST be isolated from the data center parking lot.

____ 1.4.2 Power
There MUST at least be battery backup power on site with sufficient duration to switch over to diesel power generation. If there is no diesel backup, then there should be 24 hours of battery power. There SHOULD be diesel generators on site with 24 hours of fuel also on site. A contract SHOULD be in place to get up to a week of fuel to the facility.

____ 1.4.3 Trash
All papers containing sensitive information SHOULD be shredded on site or sent to a document destruction company before being discarded. Dumpsters SHOULD be monitored by CCTV.

____ 1.4.4 NOC
The NOC MUST have fire, power, weather, temperature, and humidity monitoring systems in place. The NOC MUST have redundant methods of communication with the outside. The NOC MUST be manned 24 hours a day. The NOC MAY monitor news channels for events which affect the health of the data center.

1.5 Disaster Recovery
____ 1.5.1 Disaster Recovery Plan
The data center MUST have a disaster recovery plan. Ensure that the plan addresses the following questions: What constitutes a disaster? Who gets notified regarding a disaster, and how? Who conducts damage assessment and decides what back-up resources are utilized? Where are backup sites located, and what is done to maintain them, on what schedule? How often and under what conditions is the plan updated? If the organization does not own the data center, what downtime does the service level agreement with the center allow? A list of people within the organization to notify MUST be maintained by the NOC of the data center, including pager, office, home, and cell numbers, and instant message names if available. How often are those people updated?

____ 1.5.2 Offsite Backup
There MUST be regular offsite backups of essential information. There must be a backup policy in place listing the procedure for restoring from backup and allowing for the scheduling of practice runs to test that the backups work (a minimal restore-verification sketch follows this checklist).

____ 1.5.3 Redundant Site
Redundant servers MAY be set up in another data center. If these are set up, then they must be tested during a "dry run" to ensure that they will switch over properly during a disaster.

People Section - Place a check by each item that passes.

2.1 Outsiders

____ 2.1.1 Guards
Security guards SHOULD submit to criminal background checks. Guards SHOULD be trained to follow and enforce physical security policy strictly (for example, ensuring that everyone in the facility is wearing a badge).

____ 2.1.2 Cleaning Staff
Cleaning crews SHOULD work in groups of at least two. Cleaning crews SHOULD be restricted to offices and the NOC. If cleaning staff must access a computer room for any reason, they MUST be escorted by NOC personnel.

____ 2.1.3 Service Engineers
Service engineers MUST log their entering and leaving the building at the entrance to the building. The NOC SHOULD log their badge exchange to access a computer room.

____ 2.1.4 Visitors
Visitors MUST be escorted at all times by the person whom they are visiting. Visitors MUST NOT be allowed access to a computer room without written approval from data center management. All visitors who enter computer rooms must sign non-disclosure agreements.

2.2 Users

____ 2.2.1 Education
Users must be educated to watch out for potential intruders who may shoulder surf or directly attempt social engineering. Users should be educated on securing workstations and laptops within the facility and laptops outside the facility, awareness of surroundings, and emergency procedures.

____ 2.2.2 Policy
All users at the facility must sign non-disclosure agreements. A physical security policy SHOULD be signed by each user and enforced by security guards.

2.3 Disaster Recovery

____ 2.3.1 Organizational Chart
An organizational chart should be maintained detailing job function and responsibility. Ideally, the org chart would also have information on which functions each worker has been cross-trained to perform.

____ 2.3.2 Job Function Documentation
It is not enough to document only what your current employees know at the moment about existing systems and hardware. All new work and all changes must be documented as well.

____ 2.3.3 Cross Training
Data center employees should be cross-trained in a number of other job functions. This allows for a higher chance of critical functions being performed in a crisis.

____ 2.3.4 Contact Information
A contact database MUST be maintained with contact information for all data center employees.

____ 2.3.5 Telecommuting
Data center employees should regularly practice telecommuting. If the data center is damaged or the ability to reach the data center is diminished, work can still be performed remotely.

____ 2.3.6 Disparate Locations
If the organization has multiple data centers, then personnel performing duplicate functions should be placed in disparate centers. This allows critical job knowledge to survive if personnel at one center are incapacitated.
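Checklist item 1.5.2 requires practice restores but does not specify how to confirm them. The following is a minimal restore-verification sketch: it compares checksums between an original directory tree and a test-restored copy. The paths are hypothetical placeholders, and real verification should also cover permissions, ownership, and application-level consistency.

```python
# Minimal restore-verification sketch: compare checksums of an original tree
# against a test-restored copy. The paths are hypothetical; real verification
# should also cover permissions, ownership, and application-level consistency.

import hashlib
from pathlib import Path
from typing import Dict, List

def tree_checksums(root: Path) -> Dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            sums[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return sums

def verify_restore(source: Path, restored: Path) -> List[str]:
    """Return a list of discrepancies between the source and restored trees."""
    src, dst = tree_checksums(source), tree_checksums(restored)
    problems = [f"missing after restore: {p}" for p in src if p not in dst]
    problems += [f"unexpected extra file: {p}" for p in dst if p not in src]
    problems += [f"content mismatch: {p}" for p in src if p in dst and src[p] != dst[p]]
    return problems

if __name__ == "__main__":
    issues = verify_restore(Path("/data/critical"), Path("/restore-test/critical"))
    print("Restore verified." if not issues else "\n".join(issues))
```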
Appendix B. 19 Ways to Build Physical Security into a Data Center (CSO)

19 Ways to Build Physical Security into a Data Center
Mantraps, access control systems, bollards and surveillance. Your guide to securing the data center against physical threats and intrusions.
by Sarah D. Scalet, CSO
November 01, 2005

There are plenty of complicated documents that can guide companies through the process of designing a secure data center, from the gold-standard specs used by the federal government to build sensitive facilities like embassies, to infrastructure standards published by industry groups like the Telecommunications Industry Association, to safety requirements from the likes of the National Fire Protection Association. But what should be the CSO's high-level goals for making sure that security for the new data center is built into the designs, instead of being an expensive or ineffectual afterthought? Read below to find out how a fictional data center is designed to withstand everything from corporate espionage artists to terrorists to natural disasters. Sure, the extra precautions can be expensive. But they're simply part of the cost of building a secure facility that also can keep humming through disasters.

1. Build on the right spot. Be sure the building is some distance from headquarters (20 miles is typical) and at least 100 feet from the main road. Bad neighbors: airports, chemical facilities, power plants. Bad news: earthquake fault lines and (as we've seen all too clearly this year) areas prone to hurricanes and floods. And scrap the "data center" sign.

2. Have redundant utilities. Data centers need two sources for utilities, such as electricity, water, voice and data. Trace electricity sources back to two separate substations and water back to two different main lines. Lines should be underground and should come into different areas of the building, with water separate from other utilities. Use the data center's anticipated power usage as leverage for getting the electric company to accommodate the building's special needs.

3. Pay attention to walls. Foot-thick concrete is a cheap and effective barrier against the elements and explosive devices. For extra security, use walls lined with Kevlar.

4. Avoid windows. Think warehouse, not office building. If you must have windows, limit them to the break room or administrative area, and use bomb-resistant laminated glass.

5. Use landscaping for protection. Trees, boulders and gulleys can hide the building from passing cars, obscure security devices (like fences), and also help keep vehicles from getting too close. Oh, and they look nice too.

6. Keep a 100-foot buffer zone around the site. Where landscaping does not protect the building from vehicles, use crash-proof barriers instead. Bollard planters are less conspicuous and more attractive than other devices.
7. Use retractable crash barriers at vehicle entry points. Control access to the parking lot and loading dock with a staffed guard station that operates the retractable bollards. Use a raised gate and a green light as visual cues that the bollards are down and the driver can go forward. In situations when extra security is needed, have the barriers left up by default, and lowered only when someone has permission to pass through.

8. Plan for bomb detection. For data centers that are especially sensitive or likely targets, have guards use mirrors to check underneath vehicles for explosives, or provide portable bomb-sniffing devices. You can respond to a raised threat by increasing the number of vehicles you check, perhaps by checking employee vehicles as well as visitors and delivery trucks.

9. Limit entry points. Control access to the building by establishing one main entrance, plus a back one for the loading dock. This keeps costs down too.

10. Make fire doors exit only. For exits required by fire codes, install doors that don't have handles on the outside. When any of these doors is opened, a loud alarm should sound and trigger a response from the security command center.

11. Use plenty of cameras. Surveillance cameras should be installed around the perimeter of the building, at all entrances and exits, and at every access point throughout the building. A combination of motion-detection devices, low-light cameras, pan-tilt-zoom cameras and standard fixed cameras is ideal. Footage should be digitally recorded and stored offsite.

12. Protect the building's machinery. Keep the mechanical area of the building, which houses environmental systems and uninterruptible power supplies, strictly off limits. If generators are outside, use concrete walls to secure the area. For both areas, make sure all contractors and repair crews are accompanied by an employee at all times.

13. Plan for secure air handling. Make sure the heating, ventilating and air-conditioning systems can be set to recirculate air rather than drawing in air from the outside. This could help protect people and equipment if there were some kind of biological or chemical attack or heavy smoke spreading from a nearby fire. For added security, put devices in place to monitor the air for chemical, biological or radiological contaminants.

14. Ensure nothing can hide in the walls and ceilings. In secure areas of the data center, make sure internal walls run from the slab ceiling all the way to subflooring where wiring is typically housed. Also make sure drop-down ceilings don't provide hidden access points.

15. Use two-factor authentication. Biometric identification is becoming standard for access to sensitive areas of data centers, with hand geometry or fingerprint scanners usually considered less invasive than retinal scanning. In other areas, you may be able to get away with less-expensive access cards.

16. Harden the core with security layers. Anyone entering the most secure part of the data center will have been authenticated at least three times, including:
   a. At the outer door. Don't forget you'll need a way for visitors to buzz the front desk.
   b. At the inner door. Separates the visitor area from the general employee area.
   c. At the entrance to the "data" part of the data center. Typically, this is the layer that has the strictest "positive control," meaning no piggybacking allowed. For implementation, you have two options:
      1. A floor-to-ceiling turnstile. If someone tries to sneak in behind an authenticated user, the door gently revolves in the reverse direction. (In case of a fire, the walls of the turnstile flatten to allow quick egress.)
      2. A "mantrap." Provides alternate access for equipment and for persons with disabilities. This consists of two separate doors with an airlock in between. Only one door can be opened at a time, and authentication is needed for both doors.
   d. At the door to an individual computer processing room. This is for the room where actual servers, mainframes or other critical IT equipment is located. Provide access only on an as-needed basis, and segment these rooms as much as possible in order to control and track access.

17. Watch the exits too. Monitor entrance and exit, not only for the main facility but for more sensitive areas of the facility as well. It'll help you keep track of who was where when. It also helps with building evacuation if there's a fire.

18. Prohibit food in the computer rooms. Provide a common area where people can eat without getting food on computer equipment.

19. Install visitor rest rooms. Make sure to include bathrooms for use by visitors and delivery people who don't have access to the secure parts of the building.
Appendix C. Let's get physical: Data center security

Let's get physical: Data center security
By Mark Brunelli, News Writer
03 Jun 2004 | SearchCIO.com

CHICAGO -- Enterprises often forget that physically securing the data center is just as important as virtually securing the information it holds, said security expert Kevin Beaver Wednesday at TechTarget's Data Center Decisions 2004 conference.

Beaver, founder and principal consultant of Principle Logic LLC of Kennesaw, Ga., gave attendees a refresher course on the 10 most common mistakes companies make when it comes to the physical layout of their precious information systems. Whether your data center is in-house or outsourced to a third party, Beaver said, always be on the lookout for these 10 serious and possibly expensive lapses in judgment:

1. Weak or missing security policies: Don't take the time to develop security policies only to put them on a shelf and forget about them. It's important to make sure security policies are effectively communicated to employees. A good security policy includes a simple introduction that conveys the purpose of the policy, the policy statement itself, and information about how compliance will be measured. It should also include information about what sanctions will be taken against those who fail to comply.

2. Poor physical access controls: To be sure that everyone entering the data center has a reason to do so, implement strong visitor sign-in procedures and then enforce those rules. If keycards are required to enter the data center, check regularly to make sure they work. Companies that have no receptionist or a distracted receptionist should consider hiring guards around the clock. "I have seen some glaring vulnerabilities in that area," Beaver said.

3. Specific security concerns: Constantly check the data center for vulnerabilities. Look to see how many access points there are and whether people tend to prop doors open. Don't leave media such as CD-ROMs and other documentation lying around. Try to make sure that wires are not exposed. For companies that outsource their data center, make sure the third party secures documentation about your infrastructure. "If anybody can reach it, they can potentially do bad things with it," Beaver said.

4. Location and layout: There is much debate over which floor of an office building is best for housing a data center. First-floor data centers are vulnerable to car crashes, while second-floor data centers may be vulnerable to fires that start below. Either way, try to be aware of where your data center resides in the building and develop disaster recovery plans accordingly.

5. Unsecured computers: Beaver said that it's important to lock screens when employees get up and walk away from their computers, and that locking screensavers are recommended. "Everybody knows that once physical access is gained all bets are off," he said.

6. Utility weakness: Beaver said to confirm that the proper fire protection policies are in place. Also, make sure there are working back-up generators or battery power in the event of an electrical outage.
7. Rogue employees: Everyone inside the data center should have a reason to be there. Don't assume someone is trustworthy just because they have gained access to the data center. To solve the problem of rogue employees, vendors and others passing through the data center, refer to internal policies or create them if necessary. Next, have some awareness training for employees. Finally, make it a human resources (HR) issue. It is HR's job to punish employees who break the rules.

8. Separation of physical and logical security: Physical and logical security should be converged into one because they are both equally important. After all, there is a lot of overlap between the two. Both require risk assessment and countermeasures to mitigate risks. And "the goal of both is to keep the bad guys out and the good guys honest," Beaver said.

9. Outsourcing all data center security responsibilities: Companies should never outsource 100% of their data centers' security responsibilities to a third-party company. Rather, Beaver said, put someone in charge of making sure the third party is properly handling your physical security, compliance and other needs.

10. No third-party security assessments and/or audits: The security of data centers is a continually evolving process. Every time a new technology is introduced, a new vulnerability appears that needs to be addressed. That is why it's important to occasionally bring in a third-party auditor or consultant. Companies that outsource data center operations should consider sending auditors to the third-party company in question. "Get somebody that has physical security and technical security experience involved," Beaver said. "It may not be the same person."

Conference attendee Bruce Peterson, vice president of systems with The ServiceMaster Co. in Downers Grove, Ill., is no stranger to physical security overhauls. His company recently implemented several new changes designed to increase security, including what he calls a "man trap." Whenever someone leaves or enters his company's data center, they have to go through two doors and swipe an access card at each one. This way the data center is never fully exposed to the outside. "If you don't have your card and you follow somebody in, you're going to get caught," Peterson said.

ServiceMaster also installed video cameras at every access point and removed motion detectors that used to open doors, because from the inside they can be easily tampered with. The company even went as far as to install chicken wire above the drop ceiling as an added measure against intrusion. "I think right now we're pretty secure," Peterson said. "I feel pretty good about it."
Appendix D. Crash Course: Data Center Power (Power Management)

Crash Course: Data Center Power (Power Management)
Powerful Designs
By Ron Anderson

Data center power usage will be the No. 1 infrastructure concern facing IT executives over the next three years, according to a Robert Frances Group research report. Five years ago, the average power requirement per rack was 1 to 3 kilowatts. With requirements for processor cycles, memory and storage continuing to grow, along with the density of the equipment packed into each rack, it's now common for a typical rack to require 5 to 7 kilowatts, with high-density blade server implementations hogging 24 to 30 kilowatts per rack. Couple this dramatic increase in power consumption with the rising price of electricity and it's clear why this issue is becoming increasingly critical.

We recently surveyed 228 Network Computing readers with infrastructure responsibilities and asked how likely it is that they'll enhance their data center's cooling and power capacity during the next year. Indeed, 37 percent of those surveyed said capacity increases will happen or are likely to happen, while another 25 percent said they are studying the issue.

To address this coming crisis in data center power requirements, you must redesign your data center or build a new one. A modular and flexible design will be the key to future-proofing your investment. And you can't afford to ignore efficiency concerns, since energy costs aren't going to decrease over the next 10 years.

Hand in Glove

We're focusing here on data center power design, but any conversation about power in the data center must include cooling. One equipment rack that consumes 24 kilowatts of power (equivalent to about 30 kVA) requires six to seven tons, or about 78,000 BTUs per hour, of cooling capacity. And that's only one part of the equation; air flow, measured in CFM (cubic feet per minute), is equally critical. The equipment in our 24-kilowatt rack will require about 3,800 CFM of air flow to maintain operating temperatures within manufacturer specifications (a worked version of this arithmetic appears below). The average data center is designed to deliver 300 CFM of air flow per rack, but the average perforated floor tile in a raised-floor data center can deliver a maximum of only 600 CFM to 700 CFM (assuming that the floor is high enough to permit that volume of air flow and that the space under the floor isn't significantly restricted by the tangle of power and data cables inhabiting that space), creating a double-edged design problem. Determining the optimum height for a data center's raised floor is a matter of striking the proper balance between air flow and weight. The higher the floor, the more air flow, but the less load the floor can bear. Thirty inches is a typical height in a raised-floor data center.

If you don't already assess equipment efficiency as a regular part of your purchasing evaluations, consider this as a critical component for your next round. The idea is to attain the lowest TCO in light of the fact that power and cooling costs are escalating.
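As a rough cross-check of the rack figures above (not taken from the article itself), the sketch below converts an assumed 24 kW rack load into heat load, cooling tons, and approximate air flow, assuming all electrical power becomes heat and a 20 degree Fahrenheit supply/return air temperature difference.

```python
# Back-of-the-envelope cooling math for a hypothetical 24 kW rack; a sketch,
# not a design tool. Assumes all electrical power is rejected as heat and a
# 20 F supply/return air temperature difference.

BTU_PER_KWH = 3412.14      # 1 kWh = 3,412.14 BTU
BTU_PER_TON_HR = 12000.0   # one ton of cooling = 12,000 BTU/hr

def cooling_load(rack_kw: float, delta_t_f: float = 20.0):
    """Return (BTU/hr, tons of cooling, approximate CFM) for a given rack load."""
    btu_hr = rack_kw * BTU_PER_KWH
    tons = btu_hr / BTU_PER_TON_HR
    cfm = btu_hr / (1.08 * delta_t_f)  # sensible-heat air flow rule of thumb
    return btu_hr, tons, cfm

if __name__ == "__main__":
    btu_hr, tons, cfm = cooling_load(24.0)
    print(f"24 kW rack: {btu_hr:,.0f} BTU/hr, {tons:.1f} tons, ~{cfm:,.0f} CFM")
    # Prints roughly 81,900 BTU/hr, 6.8 tons, and ~3,800 CFM, in line with the
    # article's six-to-seven tons and 3,800 CFM figures.
```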
Sun Microsystems' SWaP (Space, Watts and Performance) metric is a good, if self-serving, attempt by an equipment vendor to quantify efficiency's effect on your TCO for a particular piece of equipment. The metric is determined by dividing a performance value, such as operations per second, by the equipment rack space times the power consumption. If enough people become SWaP-sensitive, your next server could come with a bright yellow DOE EnergyGuide sticker listing the average yearly utility cost for operating the device, similar to the stickers affixed to new clothes washers, dryers and water heaters.

Your facilities staff will want guidance on the amount of power needed for the redesign. You'll probably be inclined to develop a response using the equipment's nameplate labels. This methodology involves adding together the wattage figures from the labels to get an idea of the amount of power you're using now, and then estimating additional power required based on projected growth. But if you follow this script, you're likely to overestimate your power requirements and spend more than you need to, because the wattage data on the nameplate shows the maximum power the equipment can draw, not the power actually required to run it. According to a recent American Power Conversion study, a fully populated IBM BladeCenter running at 100 percent utilization consumes 25 percent less power than is indicated on the nameplate: 4,050 watts versus the 5,400 watts listed. The disparity between the power value on the nameplate and the actual power consumed will be large, so don't use the nameplate to estimate your requirements.

If your data center redesign is still a year or more away, consider spending capital now on power usage-monitoring equipment. Accurate reports on existing power consumption over time are the best indicator of your actual needs and the best predictor of future growth requirements. Equipment vendors are slowly becoming aware of the need to provide actual power consumption figures, but, if you don't monitor power usage on a day-to-day basis, you'll need good contacts at your vendors to help you locate the information. Also, remember to insist on vendor-supplied actual power consumption figures in your next data center equipment RFP; the legwork you save will be your own.

Future-Proof through Modularity

It isn't likely that you'll need to deliver 24 kilowatts of power to every rack in your new or redesigned data center from day one. However, you'll probably need to deliver more power to the racks before you're able to redesign your data center yet again, so it's important to plan a power path that will enable you to respond to future business computing needs in a timely fashion.

Most data-center-class PDU (power distribution unit) and UPS vendors offer systems with modular capacity in rack-sized cabinets that can initially be outfitted to deliver 25 percent to 50 percent of total capacity. As power needs increase, additional power modules can be easily installed in the existing cabinets without adding floor space and without downtime. You must ensure that your power delivery chain, including circuits, emergency power generation and battery backup, is up to supporting the extra power requirements.

Instead of installing a fully loaded PDU-UPS combo to begin with, consider deploying two or more sparsely configured combos with an eye toward future expansion. It will initially require more floor space and capital, but your upgrade path will be clearly defined and downtime will be minimized.

An interesting product from APC, the InfraStruXure Cooling Distribution Unit, brings the same modularity to data center cooling that these modular UPSs provide on the power side. The Cooling Distribution Unit provides from one to 12 balanced, chilled water feeds to half-rack-wide, 30-kVA cooling units. Piping between the Cooling Distribution Unit and APC's InfraStruXure InRow RC data center cooling units uses seamless, flexible piping that's quick and easy to install. You can start with 30 kVA of cooling and ramp up to 360 kVA by adding half-rack-wide cooling units. Of course, your chilled water plant would need to grow as your cooling needs grow, but changes in the data center itself would be non-disruptive.

Liebert provides a similar modular approach through its X-treme Density, or XD, line of water or waterless data center cooling products. The XD line coolers can be mounted on the ceiling (XDO model) or on top of equipment racks (XDV model) to save valuable floor space. The XD line is scalable, and the company says the products can cool more than 500 watts per square foot.

AC vs. DC

There's considerable disagreement over the potential energy savings afforded by switching from AC- to DC-powered servers and storage in the data center; estimates range from 10 percent to 25 percent. The telecom industry has been using DC-powered equipment for years, so there is precedent for going this route, but significant savings must be demonstrated to propel DC power into the enterprise data center.

The driving force behind the switch is the inefficiency of today's AC power supplies. The typical AC-to-DC power supply found in most servers is 70 percent to 80 percent efficient, which means that 20 percent to 30 percent of every watt delivered to a piece of equipment produces nothing but heat. Not only do you pay for the wasted electricity to run the equipment, you also pay for the electricity to cool the heat produced by that wasted electricity. Not good. In a data center using DC power, a rectifier (equipment that converts your utility's AC grid power to DC power) typically operates at over 90 percent efficiency and doesn't need to be located in the data center. (A worked comparison of these efficiency figures appears at the end of this appendix.)

In a conversion to DC power, you'll need a rip-and-replace mentality to make the transition; all PDUs and UPSs must be replaced, for instance. And existing AC power cables would need to be replaced, as they wouldn't be large enough to carry lower-voltage, higher-amperage DC power. Racks must be retrofitted with large copper bus bars to distribute the DC power within the rack, and each system in the rack would need to be converted.

Even enterprises contemplating a new data center designed for DC power from the ground up must be aware of potential pitfalls. There are fewer vendors for DC-based equipment, so the initial costs will be higher than for comparable AC-based equipment. The added upfront costs might be offset through lower annual operating costs, but a careful cost-benefit analysis must be applied to determine the potential for long-term savings (see "Sneak Preview: Rackable Systems' C1000").
Finally, data center engineers and IT pros are AC-savvy but know relatively little about DC power, so any plan to implement a DC-powered data center should factor in retraining costs.

It is important, however, to note that vendors are working to enhance the efficiency of their AC power supplies. For example, IBM claims that its BladeCenter power supplies are 90 percent efficient. Even if you have to pay more up front for this increased efficiency, it is money well spent, since your total cost of ownership will be significantly reduced when you consider power costs over the life of the equipment. If AC efficiency improvements become a trend, DC's primary competitive advantage would become much less compelling.

If you plan to install new power and cooling infrastructure in the next year or two, you'll be on the bleeding edge by going the DC route. IBM, Hewlett-Packard and Sun all supply DC-powered systems, but choices for DC-powered hardware are more limited.

Ron Anderson is Network Computing's labs director. Write to him at randerson@nwc.com.
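The worked comparison referenced in the "AC vs. DC" section above follows. It is a sketch built on assumed numbers (a 100 kW IT load, a 75 percent efficient AC supply chain, a 92 percent efficient DC rectifier, a $0.10/kWh rate, and 0.5 W of cooling per watt of heat); none of these exact figures come from the article.

```python
# Illustrative AC vs. DC conversion-loss comparison built on assumed numbers;
# a sketch, not a substitute for a real cost-benefit analysis.

HOURS_PER_YEAR = 8760

def annual_waste_kwh(it_load_kw: float, conversion_efficiency: float,
                     cooling_overhead: float = 0.5) -> float:
    """kWh/year lost to conversion plus the assumed extra cooling to remove that heat."""
    input_kw = it_load_kw / conversion_efficiency  # power drawn upstream of conversion
    waste_kw = input_kw - it_load_kw               # dissipated as heat
    return waste_kw * (1 + cooling_overhead) * HOURS_PER_YEAR

if __name__ == "__main__":
    it_load_kw = 100.0   # assumed total IT load
    rate = 0.10          # assumed electricity price, $/kWh
    ac = annual_waste_kwh(it_load_kw, 0.75)  # mid-range AC supply efficiency from the article
    dc = annual_waste_kwh(it_load_kw, 0.92)  # rectifier efficiency cited in the article
    print(f"AC supply waste: {ac:,.0f} kWh/yr (~${ac * rate:,.0f})")
    print(f"DC rectifier waste: {dc:,.0f} kWh/yr (~${dc * rate:,.0f})")
    print(f"Difference: {ac - dc:,.0f} kWh/yr (~${(ac - dc) * rate:,.0f})")
```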
Appendix E. The '568-C Family of Standards: An Update and an Overview

The '568-C Family of Standards: An Update and an Overview
By Valerie Maguire

The ANSI/TIA-568 family of Telecommunications Standards contains the requirements for balanced twisted-pair and optical fiber cabling, which provide the foundation for the design, installation, and maintenance best practices described in BICSI's Telecommunications Distribution Methods Manual (TDMM). With the newly published '568-C.0, '568-C.1, and '568-C.3 and the almost fully finalized '568-C.2 Standards encompassing 305 pages of detailed information and containing 151 tables and 121 figures, it can be challenging to remain up to date with the latest TIA telecommunications cabling specifications. This article will help to summarize the content, enhancements, and critical revisions of this important series of Standards.

The American National Standards Institute (ANSI) mandates that subcommittees responsible for the publication of standards reaffirm, revise, or rescind their documents every 5 years. As a result, the ANSI/TIA-568 family of Standards has undergone 3 sets of revisions since the original document was published in 1991. This mandate provides an opportunity for TIA subcommittees to evaluate the document content to ensure that material is up to date, information is centralized, and duplication is reduced or eliminated.

A key outcome of the last ANSI review process was the decision to divide the three main documents that comprised the previous edition ANSI/TIA-568-B ('568-B) family of Standards into four main documents. This decision was driven by the need to have one common standard that could be used to address generic cabling needs when a specific premise standard, such as the commercial building, data center, residential, or industrial standard, does not exist. This common content applies to previously unsupported environments such as non-office areas of an airport or stadium and also serves as a repository of generic requirements that are applicable to all specific premise and component standards. The new ANSI/TIA-568-C ('568-C) family of Standards contains the following main documents:

• ANSI/TIA-568-C.0, "Generic Telecommunications Cabling for Customer Premises", published 2009
• ANSI/TIA-568-C.1, "Commercial Building Telecommunications Cabling Standard", published 2009
• ANSI/TIA-568-C.2, "Balanced Twisted-Pair Telecommunication Cabling and Components Standard", pending publication: August 2009
• ANSI/TIA-568-C.3, "Optical Fiber Cabling Components Standard", published 2008, errata issued in October 2008

The '568-C series incorporates material from '568-B.1, '568-B.2, '568-B.3, and the 18 addenda to the '568-B series, as well as necessary updates and revisions. Table 1 provides a summary of the content appearing in the four main '568-C documents. Figure 1 shows how the '568-C documents interrelate with each other and with other important TIA cabling Standards.
Key updates and changes to the '568-C documents include:

ANSI/TIA-568-C.0:
• Generic terminology has been introduced to describe cabling segments and connection points
• Category 6A has been added as a recognized media
• Optical fiber link test requirements were moved to this document
• Optical fiber link performance requirements were moved to this document
• The installation bend radius requirement for UTP and F/UTP cables has changed to "4x cable o.d." and the patch cord bend radius requirement has changed to "1x cable o.d." to accommodate larger diameter cables
• Stewardship text has been added recognizing the need to support sustainable environments and conserve fossil fuels

ANSI/TIA-568-C.1:
• Category 6A has been added as a recognized media
• 850 nm laser-optimized 50/125 µm optical fiber is recommended if multimode optical fiber is used for backbone cabling
• Category 5, 150 ohm STP, and 50 ohm and 75 ohm coaxial cabling have been removed from the list of recognized media

ANSI/TIA-568-C.2:
• Category 5e cabling is recommended for support of 100 MHz applications
• Category 5 channel performance values have been preserved in an informative annex
• Balanced twisted-pair channel and permanent link performance requirements were moved to this document
• Performance equations for individual transmission parameters are listed in a single table for all categories
• Coupling attenuation has been introduced as a parameter under study for characterizing radiated peak power generated by common mode currents for screened cables
• One laboratory test method has been defined for all categories of connecting hardware

ANSI/TIA-568-C.3:
• ISO nomenclature for optical fiber cable types (i.e., OM1, OM2, OM3, OS1, and OS2) has been added to transmission performance tables
• Recommended connector strain relief, housing, and adapter color coding has been added to support installations where color is used to identify fiber type
• Minimum OFL bandwidth for 62.5/125 µm optical fiber cable has been increased from 160 MHz·km at 850 nm to 200 MHz·km at 850 nm.
Figure 2: Comparison of '568-C.0 and '568-C.1 Terminology

An initial cause of concern and confusion for those reviewing the '568-C.0 Standard for the first time is the new terminology introduced for the functional elements that describe generic infrastructures. It's important to remember that the '568-C.0 terminology is only to be used when a specific customer premise standard defining terminology does not exist. As shown in Figure 2, the generic infrastructure topology is actually fully consistent with the commercial building topology specified in '568-C.1.

It is interesting to note that optical fiber link performance specifications are contained in '568-C.0, while balanced twisted-pair channel and permanent link specifications are contained in '568-C.2. This represents a deviation from the original '568-C series planning outline and was a cause of considerable debate in the TIA subcommittees. Ultimately, it was agreed that, since the balanced twisted-pair channel and permanent link specifications are so dependent upon the modeling configurations described in Annex J of '568-C.2, it was most logical to move the cabling specifications into '568-C.2 and keep this interdependent information together.

Another deviation from the original '568-C series planning outline was the agreement to move balanced twisted-pair field tester and field testing requirements from the '568-C.2 Standard into a standalone document (pending ANSI/TIA-1152). This carefully weighed decision supported reducing the overall page count of '568-C.2, as well as ensuring that updates or even simple reaffirmations of future revisions of the proposed ANSI/TIA-1152 Standard could be quickly addressed without the need to open the entire balanced twisted-pair cabling content of '568-C.2 for review.
Although there is always an understandable degree of trepidation and resistance to change when something new comes along, the '568-C family of Standards is a user-friendly and well-organized compilation of the critical information that RCDDs and other cabling professionals need to know to excel in their areas of expertise. With five years to go until the next ANSI review cycle, now is the time to familiarize yourself with the content of these important Standards! Copies of the standards referenced in this article may be purchased through the IHS Standards Store (www.global.ihs.com).

Table 1 – Content Overview of the '568-C Series of Telecommunications Standards

'568-C.0, "Generic Telecommunications Cabling for Customer Premises":
• Cabling System Structure: generic topology; recognized cabling
• Installation Requirements: pull tension; bend radius; cable termination; separation from power; grounding and bonding; polarity (optical fiber only)
• Optical Fiber Transmission/Test Requirements: optical fiber cabling field test instruments; multimode test considerations (e.g., mandrel wrap); link attenuation
• Annex A: Centralized Optical Fiber Cabling
• Annex B: Optical Fiber Polarity: consecutive-fiber and reverse-pair positioning for duplex systems; Method A and Method B for array systems
• Annex C: Multi-Tenant Cabling Considerations
• Annex D: Application Information
• Annex E: Optical Fiber Field Test Guidelines
• Annex F: Environmental Classifications: MICE (mechanical, ingress, climatic, and electromagnetic) conditions

'568-C.1, "Commercial Building Telecommunications Cabling Standard":
• Entrance Facilities: design; electrical protection; OSP connections
• Equipment Rooms: design; cabling practices
• Telecommunications Rooms and Enclosures: design; cross-connections and interconnections; centralized optical fiber cabling
• Backbone Cabling: star topology; length; recognized cabling
• Horizontal Cabling: topology; length; recognized cabling; bundled and hybrid cables
• Work Area: cords; open office cabling; consolidation points
• Installation
• Administration

'568-C.2, "Balanced Twisted-Pair Telecommunications Cabling and Components Standard":
• Mechanical Requirements: channels, permanent links, cords, and connectors; length; pair assembly and color code; performance marking; reliability
• Transmission Requirements: channels, permanent links, cords, and connectors; return loss, insertion loss, NEXT loss, PSNEXT loss, FEXT loss, ACRF, PSACRF, TCL, TCTL, ELTCTL, coupling attenuation, propagation delay, propagation delay skew, PSANEXT loss, average PSANEXT loss, PSAACRF, and average PSAACRF; DC loop resistance and DC resistance unbalance (the power-sum arithmetic is illustrated briefly after Table 1)
• Annex A: Connector Reliability
• Annex B: Measurement Requirements
• Annex C: Test Procedures
• Annex D: Connector Transfer Impedance Test Method
• Annex E: Connector Test Fixtures
• Annex F: Multiport Measurement Considerations
• Annex G: Installation in Higher Temperatures
• Annex H: Propagation Delay Derivations
• Annex I: Return Loss Limit Derivation
• Annex J: Modeling Configurations
• Annex K: NEXT Loss Limit Considerations
• Annex L: PSAACRF and AFEXT Loss Normalization
• Annex M: Category 5 Channel Parameters

'568-C.3, "Optical Fiber Cabling Components Standard":
• Optical Fiber Cable: inside plant, indoor-outdoor, outside plant, and drop cable; wavelength specification; attenuation, overfilled modal bandwidth-length, and effective modal bandwidth-length
• Connecting Hardware and Adapters: duplex and array; keying and fiber positions; identification
• Patch Cords and Fiber Transitions: simplex; duplex (A-to-A and A-to-B); array (Type-A, Type-B, and Type-C)
• Annex A: Connector Performance Specifications: attenuation and return loss; mechanical, temperature, humidity, impact, durability, retention, flex, and twist
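To make one of the '568-C.2 transmission parameters in Table 1 concrete: the power-sum quantities (PSNEXT, for example) combine the individual pair-to-pair measurements made against a disturbed pair on a power basis. The Python sketch below shows that combination; the sample NEXT readings are hypothetical and the function is an illustration of the power-sum arithmetic, not a conformance test taken from the standard.

import math

def power_sum_loss_db(pairwise_losses_db):
    """Combine pair-to-pair coupling losses (dB) into a power-sum value,
    e.g. PSNEXT from the three NEXT losses measured against one pair."""
    total_linear = sum(10 ** (-loss / 10.0) for loss in pairwise_losses_db)
    return -10.0 * math.log10(total_linear)

if __name__ == "__main__":
    next_losses_db = [44.2, 46.8, 45.1]  # hypothetical pair-to-pair NEXT loss readings
    print("PSNEXT ~ %.1f dB" % power_sum_loss_db(next_losses_db))

Because the individual contributions add on a power basis, the power-sum result is always lower (worse) than any single pair-to-pair value, which is why the power-sum parameters are specified separately from the pair-to-pair parameters.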
Appendix F. Data Center Facility Definitions

Data Center Typical IT Equipment and Site Infrastructure System Characteristics, by Space Type

Server closet (a): Typically conditioned through an office HVAC system. To support VoIP and wireless applications, UPS and DC power systems are sometimes included in server closets. Environmental conditions are not as tightly maintained as for other data center types. HVAC energy efficiency associated with server closets is probably similar to the efficiency of office HVAC.

Server room (a): Typically conditioned through an office HVAC system, with additional cooling capacity, probably in the form of a split system specifically designed to condition the room. The cooling system and UPS equipment are typically of average or low efficiency because there is no economy of scale to make efficient systems more cost competitive.

Localized data center (b): Typically uses under-floor or overhead air distribution systems and a few in-room computer room air conditioner (CRAC) units. CRAC units in localized data centers are more likely to be air cooled and have constant-speed fans and are thus relatively low efficiency. Operational staff is likely to be minimal, which makes it likely that equipment orientation and airflow management are not optimized. Air temperature and humidity are tightly monitored. However, power and cooling redundancy reduce overall system efficiency.

Mid-tier data center (b): Typically uses under-floor air distribution and in-room CRAC units. The larger size of the center relative to those listed above increases the probability that efficient cooling, e.g., a central chilled water plant and central air handling units with variable-speed fans, is used. Staff at this size of data center may be aware of equipment orientation and airflow management best practices. However, power and cooling redundancy may reduce overall system efficiency.

Enterprise-class data center (b): The most efficient equipment is expected to be found in these large data centers. Along with efficient center cooling, these data centers may have energy management systems. Equipment orientation and extensive airflow management best practices are most likely implemented. However, enterprise-class data centers are designed with maximum redundancy, which can reduce the benefits gained from the operational and technological efficiency measures.

(a) Note: Does not meet the definition of a data center.
(b) Note: Meets the definition of a data center.

(U.S. Environmental Protection Agency, 2007)
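For agencies that track facility inventories programmatically, footnotes (a) and (b) above reduce to a simple lookup from space type to whether the Data Center Standard applies. The Python sketch below encodes that mapping; the dictionary and function names are illustrative conveniences and are not part of the standard.

# Illustrative mapping of the Appendix F space types to whether they meet this
# standard's definition of a data center (per footnotes (a) and (b) above).
MEETS_DATA_CENTER_DEFINITION = {
    "server closet": False,                # footnote (a)
    "server room": False,                  # footnote (a)
    "localized data center": True,         # footnote (b)
    "mid-tier data center": True,          # footnote (b)
    "enterprise-class data center": True,  # footnote (b)
}

def standard_applies(space_type):
    """Return True if the given Appendix F space type meets the data center definition."""
    return MEETS_DATA_CENTER_DEFINITION.get(space_type.strip().lower(), False)

if __name__ == "__main__":
    for space_type in MEETS_DATA_CENTER_DEFINITION:
        print("%-30s in scope: %s" % (space_type, standard_applies(space_type)))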
