PMDC / SMDC Technical Overview – 24 March 2010

Speaker notes
  • New data center infrastructure decisions. This slide has no audio.
  • You need to optimize data center operations costs around energy, because energy is up to 50 percent of the facility’s lifetime operational costs. This chart highlights where the operational costs go to run the physical data center facility – this does not include the IT equipment and IT staff. Using this chart, let’s say the capital cost of the data center was about 50 million dollars, based on a range of 1,500 to 2,500 dollars per square foot for a 20,000-square-foot data center (roughly 30 to 50 million dollars). Your cost will vary depending on your cost per square foot, which needs to reflect the design point on availability, energy efficiency and other aspects. Over a 20-year lifespan you would spend 3 to 5 times the capital cost in operational costs; at a 3x rate, that is an incremental 90 to 150 million dollars to support that original data center build. So the operational cost, which many organizations have not considered, represents three to five times what will eventually be spent on the data center itself. What deserves significant focus as well is that as much as half of that operational cost – the single largest element – is energy. That assumes energy is only going to increase by 10 percent a year, which had been the historical trend until the last two years, where we have seen double-digit increases in the past year alone. These cost curves – the green bars that represent energy – could accelerate far beyond the assumptions used on this chart. The red line shows the potential impact of designing the data center for leadership energy efficiency: the impact on operational costs of moving from the average DCiE IBM has seen in the past year (43 percent) to a leadership design point of 66 percent. When you look at the electrical and mechanical systems, those are the things that draw all the energy in the data center: about 60 percent of the capital cost and more than 50 percent of the ongoing operational cost are associated with energy-related components. By focusing on energy-optimized solutions, even at the data center design and build levels, IBM can help optimize the overall cost structure for your business. The expense curve illustrated by the horizontal bar graph is based on a data center design which operates at a PUE of 2.4 or higher, typical of data center operations today. The red line illustrates the EMDC, which is designed to operate at a PUE of 1.43. That 1.43 can easily be improved through adoption of more energy-efficient technologies; however, the EMDC was not designed for a specific site, so which site-specific technologies could be exploited is ambiguous. We have therefore only taken advantage of technologies that are industry accepted and globally available regardless of site.
  • IBM’s data center family. We’re announcing a comprehensive family of data centers. What we're really signaling to the market is that IBM is moving from a custom data center design business to a more standardized, repeatable data center design business, which has huge benefits for our clients in scalability, in repeatability, in fine-tuning that solution over time, and in energy efficiency. We can put a lot more focus on the design elements that we repeat over and over again than we can if we do each and every design individually with an individual customer. We are going to continue to be in the custom-design business – that business isn't going away – but we are going to shift our portfolio toward four members of a standardized design family. The first is the one you should already be familiar with, the scalable modular data center, of which we’ve done over 43 installations in the last 12 months around the world. It’s a turnkey data center – basically a whole data center, from walls to cooling to fire prevention to security. It typically scales between 500 and 2,500 square feet (50 to 250 square meters), is generally installed from customer contract to final handover in eight to 12 weeks, and typically costs about 20 percent less than a traditional IBM data center design. Because it uses very energy-efficient components – in-row cooling, state-of-the-art UPS technology – we have found that the energy efficiency of the customer site generally improves by 15 to 30 percent. We’ve learned we can apply the same principles from these deployments in much larger data center environments. The first much larger environment that we’re announcing is the enterprise modular data center. This is for 5,000 to 20,000 square foot environments, which, based on market research, is where we believe the bulk of data centers fall. It is leadership in terms of energy efficiency: of every dollar spent on energy, 66 cents goes to drive the IT equipment – the servers, the storage, the telecommunications – and only about 33 cents is spent on the power and cooling infrastructure, even though this is at a very high level of resiliency. Because it is a standardized design, we believe deployment will be about 25 percent faster, and we're actually opening up this architecture to the rest of the market so the market can provide new enhancements, capabilities and features to this data center, in the same way the market provides many features to a standardized Ford Mustang, whether that's exhaust systems or upgrades. The third member of the family is the portable modular data center. It is in essence a data center in a shipping-container-like structure, 20 feet or 40 feet, focused on portability, meaning you could locate these in remote locations. It is a complete data center in a box with its own UPS and cooling capacity, so you don't need to put another UPS or cooling pod beside it; this data center in a container is basically self-contained. It can also be used if customers have completely run out of capability in an existing data center and need some temporary capacity while they're expanding their current facilities. This container also fits any industry-standard technology: whether the customer is an HP customer or a Dell customer or an EMC customer or a Sun customer, you can put those technologies inside of it.
I will tell you, however, that we have preconfigured these for IBM solutions as well, and we know that, for instance, if you're using iDataPlex, we can fit more processors into one of these devices – and power and cool them – than any kind of portable technology in the marketplace. The final new member of the family, the high density zone, is really targeted at clients who already have existing data centers but are running out of cooling and potentially UPS capacity and may want to install some high density servers or storage devices. This is, in essence, a plug-and-play solution: a high density zone that gives a client the opportunity to install this zone in an existing data center. All they really have to provide is the power; the zone itself is self-contained and can provide the cooling and, if required, the UPS capacity. This is done at very low cost in an existing data center – typically about 35 percent lower cost than a retrofit of the existing environment would be, and obviously with minimal disruption. So that’s the overview of our data center family. Let me now describe the enterprise modular data center in a little more detail.
  • I have been mentioning some of the benefits of the PMDC solution; the PMDC also has some very unique features that are not found in any other containerized solution. As I have mentioned, this solution provides a complete data center infrastructure, including UPS system, batteries, chiller, cooling systems, fire detection and suppression, remote monitoring and so on. It also provides a user-friendly environment which is comfortable to work in and provides adequate service area. The entire PMDC can be designed to an N+1 or 2N design and is very efficient, with a data center infrastructure efficiency of between 66% and 74%. A unique feature of this solution is the special interior lining of the container, which extends the environmental protection offered by the container itself. The lining provides additional water, humidity, fire and smoke protection, as well as protection against radio frequency interference and electromagnetic interference. The container lining also moderates temperature swings, which prevents condensation concerns within the container. And the PMDC is manufactured in a controlled environment, so you can be sure every PMDC will be consistent, of the highest quality, and tested prior to shipment. On the left side of the chart you can see a few pictures of the inside of a PMDC. Now we will look at a few sample configurations of the PMDC; please turn to chart 10.
  • Extend the life by adding cooling and UPS capacity. This highlights the high-density zone, a racking system that comes with its own UPS capability and its own cooling capability, which really is comparable to a client taking their current data center, ripping it apart, and putting extra UPS and cooling capacity in. This, in essence, can provide that extra capacity with minimal disruption. When we’ve done the cost analysis comparing the high-density zone versus a retrofit, it’s typically about one-third lower cost than a retrofit and clearly has minimal disruption. All you really need is access to floor space, access to a power source, and access to a liquid cooling source from your chillers. So this is a way customers can extend the life of their data center, even if they’re completely tapped out.
  • IBM Unveils Austria's First Green Data Center at kika/Leiner. From "Green Philosophy" to "Green IT" at kika/Leiner. ST. PÖLTEN, AUSTRIA - 31 Mar 2008: IBM (NYSE: IBM) and kika/Leiner today announced the construction of a new energy efficient "green" data center which will reduce electric power consumption by up to 40 percent. The new data center offers kika/Leiner a way to extend their environmental vision beyond traditional business areas. As kika/Leiner expands throughout central Europe and the Middle East, their need for information technology (IT) services has increased considerably. To meet this demand, the market-leading furniture retailer in Austria turned to IBM to design an energy efficient data center using new "green" technologies that are part of IBM's Project Big Green. The new data center is planned to begin operation in May. The IBM Site and Facilities Services team started out with a risk analysis, then developed a data center concept, drafted the construction plans, and as the general contractor established the entire data center infrastructure, including the electrical system, emergency power supply and the climate control system. IBM will support kika/Leiner in moving equipment to the new location and also will take over a major part of the IT operation. The building is a free-standing cube with about 1,000 square feet of IT space that fulfils all state-of-the-art technical security requirements of a data center. It is locked, has no windows, is equipped with an automatic fire-extinguishing system, and is protected against flooding. The data center does not contain any working space and entrance is restricted. Free cooling will be used in cold months, meaning the air conditioning for the data center will come directly from the cold outside air. Only on warm days will the data center be automatically cooled. "kika/Leiner perfectly combines ecology and economy," said Leo Steiner, general manager of IBM Austria. "The additional work and expense for green technology pays off within a few months, and the benefit for the environment pays off from the very first day." A separate high density computing area ensures the separation of IT equipment with higher or lower heat emissions and optimizes the cooling calculation, capacity and efficiency. This area of the data center features racks with the newest IBM BladeCenter technology. IBM BladeCenter integrates servers, networks, storage and business applications in highly efficient one-inch systems that sit in a rack like books on a shelf. IBM BladeCenter uses up to 24 percent less energy than competitive systems. IBM Cool Blue technology provides a method to control and monitor BladeCenter power and heat requirements. Hot air from the IT equipment is reduced to room temperature by water-cooled heat exchangers attached to the BladeCenter racks. The high density area covers about a third of the data center IT space and, if required, can be extended. Another third of the data center is space for conventional computing servers with low heat emissions. The last third will remain empty for future expansion. kika/Leiner centrally operates the IT for all their international locations from St. Pölten. This covers merchandise management, the compilation of electronic catalogues, e-mail traffic, time recording, the data warehouse and much more. The various furniture stores and branches in the eastern European countries and the Middle East are connected with the data center.
Local sites connect to the network via thin clients and are ready to go – an instant model for quick expansion. The new data center also contributes to increased IT security and business continuity, because the old data center serves as a backup location to the new center. IBM's partnership with kika/Leiner plays to both companies' beliefs in environmental sustainability. For example, by implementing well-directed lighting and by using energy-saving lights, kika/Leiner managed to reduce its own electric power consumption by 18 percent in Austria in 2007. In new stores in Brünn and Pilsen in the Czech Republic, a completely new lighting concept has been implemented using energy efficient lighting. Sustainability is paramount to kika/Leiner's "Grüne Linie" (Green Line). Their furniture is made with natural materials, and the company provides one of the most distinguished and best known ecological furniture trademarks in Austria. All "Grüne Linie" products are certified with internationally approved environmental seals, including the "Österreichisches Umweltzeichen" (Austrian Environmental Seal of Approval) and the "Europäische Umweltzeichen" (European Environmental Seal of Approval). Consumers are offered more transparency, and it also raises awareness of lasting products. The brand was recently re-launched and is available in 50 kika and Leiner furnishing stores.
  • Leverage IBM’s experience to help. So I really do appreciate your participation and hope you appreciate how IBM can help you save costs. There is a variety of sources, and different audiences will use different sources based on their interests. If they’re interested in the green aspects, there’s a www.ibm.com address. If they’re CIOs, there’s a focus for CIOs. If they’re a facilities organization, there are places where the facilities people can go. But I would also highlight for you, as salespeople and delivery people, that all of this information is also available on SalesOne, with many, many client references. We now have about 100 client references that are available for you to use in videos, in write-ups, in press releases, et cetera. And we’ve found that it’s very powerful to use customer references associated with your industry or your location. You know, understanding that a neighbor just across the street was able to do something kind of brings that home.
  • These kinds of data center builds are not just for our clients; we also do the same thing within IBM. You may know that we recently announced the opening of our largest green data center in North America, in Boulder, in June 2008. This chart represents some of the new green techniques used in the center. This includes not only the IT-related items, but also facility- and industry-related activities, to build a comprehensive, integrated plan. In the interest of time I’m not going to go through each one of these particular items, but I will talk to a couple of things of interest that allow us to design with the most energy efficiency. We are building the data center in modular fashion from a power density standpoint: we’re going to start at 90 watts a square foot and over time increase the power density. It is designed with that modularity and reserved capacity in mind, and allows for 2.25 times the standard power density of IT equipment, based on its watts-per-square-foot rating. This is a Level 3+ data center, and in addition to the existing square footage of data centers on the Boulder campus, it will bring the amount of data center space in that facility to over 300,000 square feet. We retrofitted an entire building on the Boulder campus: 98 percent of the original building’s shell was reused, 65 percent of materials from the original building were recycled, and 25 percent of newly purchased material came from recycled products. We are designing the space to have a low PUE energy metric, and we will continue to plan for aggressive server and storage virtualization. Due to the favorable Boulder climate, the center can switch to free or pre-cooling mode, utilizing a water economizer to dramatically reduce energy consumption 75% of the year. The center’s mechanical system design is 40 percent more efficient than one without heat exchangers for free cooling, equating to a reduction of approximately 6,550 tons of carbon dioxide emissions. Variable speed pumps and motors are installed in the cooling systems to balance capacity to actual load, further reducing energy usage and costs. The center is partially powered by alternative energy sources, with more than one million kilowatt hours per year of wind-powered electricity being purchased, resulting in a reduction of approximately two million pounds of carbon dioxide produced per year.
  • Talking points for the RTP LDC

Transcript

  • 1. March 24, 2010 – IBM’s Data Center Family™ solutions: designing data centers for maximum flexibility. Bret W. Lehman, PE, Global Offering Executive, GTS Site and Facilities Services
  • 2. New data center infrastructure decisions – data center requirements and business objectives
    • High availability¹
    • Provide required capacity¹
    • Optimize capital costs
    • Meet business and IT growth
    • Align capital and operating costs
    • Flexible to support new technology
    • Faster time to deploy
    • Reduce risk
    • Security
    • Maximize scalability
    • Maximize flexibility for technology evolution
    • Minimize capital and operational costs
    ¹ Digital Realty Trust; Emerging Trends in the data centre market, DCD London, November 2007; US EPA Study, August 2007
  • 3. Defer 40-50% of the lifecycle costs by implementing modular data centers to align business and IT requirements. Data center capital costs: 60% of costs are from mechanical/electrical systems (Power 36%, Fees 24%, Mechanical 20%, Fit-Up Costs 9%, Shell 7%, Instrumentation & Controls 4%). Source: IBM Estimates. Pay as you grow: chart of power capacity (range of kW) stepping up over years 1-10 – a modular approach aligns capacity to business need.
  • 4. Optimize lifecycle costs and reduce operational costs by up to 50% Example: One 20,000 square foot data center
    • Cumulative cost to run a data center.
    • 10% annual energy increase.
    • Data center operational costs are 3-5 times the capital costs.
    • 75% of operational costs are for energy.
    Chart: Cumulative Cost of Operations ($Millions, $0-$250M) over years 1-20 for a $50M data center, broken out into Energy Cost, Staffing, Bldg. Maint. & Mgmt. and RE Tax. Source: IBM Estimates.
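The cumulative-cost chart above can be approximated with a few lines of arithmetic. The sketch below is illustrative only: the $50M capital figure, the 10% annual energy escalation and the 20-year life are from the slide, while the first-year cost split across energy, staffing, building maintenance and real estate tax is a hypothetical assumption, chosen so the totals land in the slide's stated 3-5x-of-capital range with energy near 75%.

```python
# Illustrative model of slide 4's cumulative cost of operations.
# From the slide: $50M capital build, 10% annual energy escalation, 20-year life.
# Assumed (not from the slide): the first-year operating-cost split below.
CAPITAL = 50.0          # $M
ENERGY_GROWTH = 0.10    # 10% per year

first_year = {          # $M in year 1 (hypothetical split)
    "energy": 2.5,
    "staffing": 1.2,
    "bldg_maint_mgmt": 0.8,
    "re_tax": 0.5,
}

cumulative = energy_total = 0.0
for year in range(1, 21):
    energy = first_year["energy"] * (1 + ENERGY_GROWTH) ** (year - 1)
    flat = sum(v for k, v in first_year.items() if k != "energy")
    cumulative += energy + flat
    energy_total += energy

print(f"20-year operations: ${cumulative:.0f}M ({cumulative / CAPITAL:.1f}x capital)")
print(f"energy share: {100 * energy_total / cumulative:.0f}%")
# -> roughly $193M (~3.9x capital), with energy ~74% of operations
```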
  • 5. IBM’s Data Center Family™ – a comprehensive set of custom and standard capabilities: scalable modular data center, enterprise modular data center, portable modular data center, high density zone
    • Turnkey data center for 500-2,500 sq ft
    • Rapid deployment in 8-12 weeks
    • 20% less cost than traditional data centers
    • 15-30% improved energy efficiency
    • Standardized design starting with modules as small as 5,000 sq ft
    • Scalable density by 3X within each module; expandable raised floor by 4X with no disruption
    • Energy efficient without compromising resiliency
    • Up to 50% reduced capital and operational costs
    • Fully functional data center; multi-vendor support
    • Portable - temporary and remote data centers
    • Rapidly deploy in 12-14 weeks
    • Designed for high availability
    • Leadership energy efficiency: 77% DCiE
    • “Plug and play” infrastructure to support high density servers in existing data centers
    • Non-disruptive implementation
    • 35% lower cost than retrofitting existing data center
  • 6. Portable Modular Data Center
  • 7. Competitive Differentiators: flexibility and availability
    Global Solution
    • Available in both IEC and NEMA designs
    • Global installations easily implemented
    • Global service and support through IBM and global suppliers
    Energy Efficient Designs (see the PUE/DCiE sketch after this list)
    • Multiple cooling solutions for most energy efficient applications
    • PUE of 1.30 or better (DCiE – 77%)
    • Most energy efficient infrastructure equipment
    Flexible Designs
    • Flexible designs created to meet client requirements
    • 20-foot, 40-foot and 53-foot containers available
    • Stackable containers
    Open IT Architecture Design
    • Supports any vendor equipment, not just IBM
    • Mix and match servers, storage and network equipment
    • Supports blade servers, rack-mount servers and IBM iDataPlex servers
    • Can support non-rack mounted equipment also
    Internal Service
    • IT rack rail system enables complete front and rear rack access
    • Service and maintenance with no outside exposure
    • No shelters, buildings or covers required
    Fully Insulated Containers
    • All containers fully insulated and sealed – R34 insulation
    • All doors triple sealed with fire, water and smoke gaskets
    • Allows installation in any outdoor environment
    • All containers maintain true ISO standards
    Complete Data Center Infrastructure
    • Complete turn-key solution from single source: chiller, UPS, generator, fire systems, monitoring, etc.
    • Designed, built and tested as a single, integrated solution
    • Easy to install and relocate as a single unit
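The PUE and DCiE figures quoted in the energy-efficiency bullets above are reciprocals of each other: DCiE is the IT fraction of total facility power expressed as a percentage, and PUE is total facility power divided by IT power. A minimal sketch of the conversion:

```python
def dcie_pct(pue: float) -> float:
    """DCiE (%) = IT power / total facility power = 1 / PUE, as a percentage."""
    return 100.0 / pue

def pue(dcie_percent: float) -> float:
    """PUE = total facility power / IT power, the reciprocal of DCiE."""
    return 100.0 / dcie_percent

print(f"{dcie_pct(1.30):.0f}%")  # PUE 1.30 -> ~77% DCiE, as quoted for the PMDC
print(f"{pue(43.0):.2f}")        # 43% DCiE -> PUE ~2.33, near the 2.4+ "typical"
                                 # figure cited in the speaker notes
```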
  • 8. Technology Architectures
    • PMDC Insulation
      • R34 Insulation
      • Operating Range: -50°F to 150°F
      • RFI / EMI Attenuation: 21.5dB
      • Acoustic Isolation: 31dB
      • 80mm thick
      • Exceeds requirements of EN 1047-2 (Classification and methods of test for resistance to fire – data rooms and data containers)
        • Exposure to direct fire for 30 minutes
        • Allowable internal temperature – 70°C; humidity – 85% RH
        • PMDC tested limits: temperature – <50°C; humidity – up to 70% RH
      • Water tightness: IPX-5 (EN 60529 – Specification for degrees of protection provided by enclosures (IP code))
        • Protected against water jets – water projected at all angles through a 6.3mm nozzle at a flow rate of 12.5 liters/min at a pressure of 30 kN/m² for 3 minutes from a distance of 3 meters
      • Non-combustible according to levels established by ISO 1182 standard
  • 9. Technology Architectures
    • Types of equipment
      • UPS Systems
        • Static or flywheel UPS systems
        • N, N+1 and 2N designs available
        • Global partners: Eaton, Emerson, APC
      • Chiller Units
        • Air cooled systems
        • N, N+1 and 2N designs
        • Dual pump packages
        • Global partners: Emerson, Multi-Stack, Trane
      • Cooling Systems
        • DX, chilled water, refrigerant and natural free air cooling
        • N, N+1 and 2N
        • DX: overhead fan coil units or InRow units
        • Chilled water: RDHx, overhead fan coil units, InRow units
        • Refrigerant: RDHx, InRow units
        • Natural Free Air cooling: custom
        • Global Partners: IBM / Vette Corp. / Coolcentric, APC, Emerson/Liebert
  • 10. Technology Architectures
    • Types of equipment
      • Power Distribution
        • Bus bar or copper cable
        • N and 2N designs available
        • Global partners: Schneider Electric, Eaton, Starline, ABB
      • Rack Power Distribution
        • Basic, switched and metered power distribution
        • N and 2N designs available
        • Global Partners: APC, Eaton, Liebert, Server Technology
      • Fire detection / suppression
        • VESDA detection system
        • Gaseous suppression system (FM200, Novec 1230, Argonite, etc.)
        • Global Partners: Minimax, Fike
  • 11. Technology Architectures
    • Connectivity
      • PMDC cabling solutions supplied by:
        • IBM
        • Anixter
        • Panduit
        • Siemon
        • CommScope
      • Cabling solutions designed based on client needs:
        • Copper – minimum Cat 6
        • Fiber – OM3 10 GIG
      • IBM network engineers collaborate with partner network engineers on the latest network designs to meet client needs
  • 12. The PMDC has a broad ecosystem of infrastructure partners to provide complete and flexible solutions. Partner offerings include:
    • Data center power, cooling and monitoring solutions
    • Construction and design of modular data centers
    • Electrical components and systems for power quality
    • Reliable power, precision cooling, connectivity and embedded solutions
    • Products for wiring and communications applications
    • Network cabling solutions
    • High density cooling solutions
    • Network cabling infrastructure
    • IBM Fiber Transport System S-Line Solution
  • 13. Technology Architectures
    • Rack Layouts
      • Rack layouts depend on PMDC configuration
        • “All-in-One” design
        • Multi-container or IT-only container design
      • 20’ Containers:
        • All-in-One design – up to 5 standard 19” racks
        • Multi-container design - up to 8 standard 19” racks or up to 7 IBM iDataPlex racks
      • 40’ Containers:
        • All-in-One design - up to 8 standard 19” racks
        • Multi-container design - up to 18 standard 19” racks or up to 14 IBM iDataPlex racks
      • 53’ Containers:
        • All-in-One design - up to 12 standard 19” racks
        • Multi-container design - up to 25 standard 19” racks or up to 18 IBM iDataPlex racks
  • 14. PMDC configurations
    Single Container Solution
    • All-in-one design
      • IT equipment and infrastructure in a single container
      • Very compact solution
      • Use when space for containers is limited
      • Use when IT equipment needs are minimal
    Multi-Container Solution
    • IT Equipment Container (Server Container)
      • IT equipment, cooling, power distribution, fire suppression, remote monitoring, physical security
      • Use for maximized IT equipment installations
      • Supported by physical infrastructure container or existing building services
    • Physical Infrastructure Container (Services Container)
      • UPS/batteries, power switchboard, chiller, fire detection/suppression, cooling, monitoring
      • Designed to support IT equipment container
      • 2N or N+1 design
  • 15. IBM PMDC Standard Design Layout Examples – a starting point to a customized, flexible container design:
    • 20’ Low Density PMDC: 6kW/rack – 8 racks
    • 20’ Medium Density PMDC: 14kW/rack – 8 racks
    • 20’ High Density PMDC: 22kW/rack – 8 racks
    • 40’ Low Density PMDC: 6kW/rack – 17 racks
    • 40’ Medium Density PMDC: 14kW/rack – 17 racks
    • 40’ High Density PMDC: 22kW/rack – 17 racks
  • 16. Services Container (example)
  • 17. Mechanical System Architectures
    • Mechanical system can consist of:
          • Chiller units (chilled water cooling)
          • Condensers (DX cooling)
          • IT cooling systems
            • Rear door heat exchangers (chilled water)
            • Overhead fan coil units (chilled water or DX)
            • InRow cooling units (chilled water or DX)
          • Humidification systems
          • Dehumidification systems
          • Fresh air make-up systems
  • 18. Cooling Solutions
    • Cooling Topology-Internal
      • Internal cooling can be DX, chilled water or natural free air cooling
      • Most popular: chilled water RDHx
        • Chiller to CDU to RDHx units
  • 19. PMDC Cooling Options: Overhead Fan Coil Units
    • Overhead Fan Coil Units
      • DX or chilled water
      • Up to 17kW/rack cooling
      • Alone or with RDHx
      • Does not consume floor space
    Overhead Fan Coil Units
  • 20. PMDC Cooling Options: IBM Rear Door Heat Exchanger
    • IBM Rear Door Heat Exchanger (RDHx)
      • Chilled water
      • Up to 22kW/rack cooling
      • Alone or with overhead fan coil units
      • Only adds ~4” to the rack depth
      • Adapts to many 19” racks or iDataPlex racks
      • No moving parts, no fans, no electricity, no noise
      • IBM technology licensed to CoolCentric (Vette Corp.)
    • RDHX Cooling Performance
      • Water flow rate range = 6-15 gpm.
      • Max pressure drop for RDHX = 13 psi
      • Temperature: 18°C ± 1°C (64.4°F ± 1.8°F)
      • Fan energy: none for RDHx units
        • No electricity, no moving parts
      • Depending upon flow conditions, each RDHX can remove 100,000 BTU/hr. (30kW).
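The ~30kW (100,000 BTU/hr) figure is consistent with a simple water-side heat balance, Q = ṁ·cp·ΔT. The sketch below re-derives it from the 6-15 gpm flow range on this slide; the water-side temperature rise is an assumed value for illustration, since the slide specifies supply temperature rather than ΔT.

```python
# Water-side heat balance for an RDHx: Q = m_dot * c_p * delta_T.
# The 6-15 gpm flow range is from the slide; the ~7.6 degC water temperature
# rise is an assumption chosen for illustration.
GPM_TO_KG_S = 3.785 / 60.0   # 1 US gpm of water is ~0.063 kg/s
CP_WATER = 4.186             # kJ/(kg*K)
KW_PER_BTU_HR = 0.000293

def rdhx_kw(flow_gpm: float, delta_t_c: float) -> float:
    """Heat removed (kW) by chilled water at a given flow and temperature rise."""
    return flow_gpm * GPM_TO_KG_S * CP_WATER * delta_t_c

print(f"{rdhx_kw(15, 7.6):.1f} kW")         # ~30 kW at the top of the flow range
print(f"{100_000 * KW_PER_BTU_HR:.1f} kW")  # 100,000 BTU/hr is ~29.3 kW
```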
  • 21. PMDC Cooling Options: InRow Cooling
    • InRow Cooling
    • Air or water cooled
    • Efficient, effective close-coupled cooling
    • Takes up floorspace
    • Up to 29kW cooling in DX units (600mm wide)
    • Up to 30kW cooling in chilled water units (300mm wide)
  • 22. PMDC Cooling Configuration Examples – InRow Cooling InRow Cooling Units using Chilled Water InRow Cooling Units (DX)
  • 23. Additional System Architectures
    • Electrical Systems can consist of:
        • UPS system and batteries (static or rotary)
        • Power distribution
        • Electrical switchgear
        • Automatic Transfer Switch
        • Rack power distribution
    • Fire suppression
        • Fire suppression system designed for specific infrastructure, client, local code and insurance underwriter requirements
        • Typical suppression system uses inert gas (FM200, Novec 1230, Argonite, etc.)
        • Fire suppression system can be altered to meet client needs
    • BMS/monitoring
        • Basic PMDC monitoring typically consists of APC Netbotz, ISX Central system or similar
        • All facility points can be monitored (e.g., temperature, humidity, security, leak detection, equipment points, etc.)
        • Additional monitoring can be accomplished through Tivoli system
          • Application monitoring, power monitoring and AEM
          • Consolidated IT and facilities monitoring (availability, capacity and energy)
  • 24. Cable, water and network penetrations
    • Physical access: egress/ingress and all service utilities – all egress/ingress of power lines, chilled water lines, network, etc. via cable glands
          • Multiple cable glands can be installed
  • 25. Floor Loading
    Floor loading data
    • Max container payload (re-derived in the worked check after this list):
          • 62,400 lbs for 20’ container (390 lbs/sf infrastructure equipment)
          • 66,200 lbs for 40’ container (207 lbs/sf infrastructure equipment)
          • 3,000 lbs/rack (54,000 lbs IT load per 40’ container)
    Container Security
    • Access control at all doors
          • Numeric keypad
          • Magnetic stripe card reader
          • Biometric scanner
        • CCTV internal, focused on doors
        • Security fencing and external CCTV optional
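The lbs/sf figures quoted with the payloads above follow from dividing payload by container footprint. The quick check below assumes the standard 8-foot ISO container width (the width is not stated on the slide):

```python
# Re-derive the slide's floor-loading figures: payload / footprint = lbs/sf.
# Container lengths and payloads are from the slide; the 8 ft width is the
# standard ISO container width, assumed here.
def load_psf(payload_lbs: float, length_ft: float, width_ft: float = 8.0) -> float:
    """Distributed floor load over the container footprint, in lbs/sf."""
    return payload_lbs / (length_ft * width_ft)

print(f"{load_psf(62_400, 20):.0f} lbs/sf")  # 20' container -> 390, as quoted
print(f"{load_psf(66_200, 40):.0f} lbs/sf")  # 40' container -> ~207, as quoted
```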
  • 26. Data Center Physical Controls
    • “Mantrap” and Biometric Reader Access
      • Individual Areas
        • Fingerprint Verification
        • Dual Authentication
        • Badge-out Required (Anti-Pass-back)
      • All Doors Alarmed
        • Monitored by Site Security with Periodic Testing
    • Closed Circuit Television (CCTV)
      • Digital Cameras and Recorders with Alarm Pop-ups
      • Doors and Emergency Power Off (EPOs)
      • 24 Hour Recording / 30 Day Retention
    • Construction Requirements
      • Wall Structure (Slab-to-Slab) with No Ground Floor Windows
      • Adherence to Local Building and Fire Codes
      • Scheduled Inspections
  • 27. PMDC Production Services Container (20-ft). Photos: PMDC services container; PMDC electrical room in services container with overhead fan coil unit
  • 28. PMDC Production Unit Pictures. Photos: main electrical switchboard in services container; UPS and battery cabinet in services container
  • 29. PMDC Production Unit Pictures. Photo: PMDC chiller unit in services container
  • 30. PMDC Demo Unit Pictures. Photo: PMDC chiller unit in services container
  • 31. PMDC Server Container (20-ft). Photos: cold aisle view; hot aisle view
  • 32. Shipping, deployment, installation, movement
    • Shipping
      • PMDC is shipped globally from two locations
        • Miami, FL
        • Sylvania, GA
      • Shipped complete with all infrastructure and IT racks installed
        • Can be shipped with IT equipment, but typically not
        • Airbags and strapping system used for shipping
      • IT equipment can be installed at client location, PMDC ready for testing and start-up
  • 33. Shipping, deployment, installation, movement
    • Deployment
      • PMDC is fully designed, built and tested at manufacturing site
      • When PMDC arrives on-site, it is ready for connection and start-up
      • Installation site preparation:
        • Concrete pad or level surface prepared
        • Power, water and network sources brought to installation location
        • Rigging equipment to place container(s)
    • Installation
      • Locate container(s) on level surface
      • Connect power, water and network connections
      • Test PMDC infrastructure systems
      • Install and connect IT equipment
      • Final test entire system
  • 34. Shipping, deployment, installation, movement
    • Movement
      • Prepare new installation location
        • Concrete pad or level surface prepared
        • Power, water and network sources brought to installation location
        • Rigging equipment to place container(s)
      • Shut down IT systems
      • Disconnect power, water and network from container(s)
      • Secure racks and place airbags
      • Rigging equipment places container(s) on truck trailer
      • Move container(s)
    • Installation
      • Locate container(s) on level surface
      • Connect power, water and network connections
      • Test PMDC infrastructure systems
      • Final test entire system
  • 35. Ease of Installation – Stand-alone Solution. Diagram: utility electrical feed, water feed and network connection; chilled water and conditioned power distributed between the containers, each on a concrete pad (installation steps 1-3)
  • 36. Ease of Installation – Client Supplied Mechanical and Electrical. Diagram: network connection, chilled water and conditioned power supplied from a client-owned MEP plant to the PMDC on a concrete pad
  • 37. Standard PMDC designs (table reconstructed from the slide)
    • 2 x 20’ containers – 48kW total IT power, 8 racks at 6kW/rack, 80kVA UPS, 10 min batteries, 80kW chiller; IT cooling: Overhead Fan Coil (3x17kW); electrical room cooling: Overhead Fan Coil (2x17kW)
    • 2 x 20’ containers – 112kW total IT power, 8 racks at 14kW/rack, 160kVA UPS, 10 min batteries, 160kW chiller; IT cooling: RDHx (8 units); electrical room cooling: Overhead Fan Coil (2x17kW)
    • 2 x 20’ containers – 176kW total IT power, 8 racks at 22kW/rack, 225kVA UPS, 10 min batteries, 200kW chiller; IT cooling: RDHx (8 units); electrical room cooling: Overhead Fan Coil (2x17kW)
    • 2 x 40’ containers – 102kW total IT power, 17 racks at 6kW/rack, 120kVA UPS, 10 min batteries, 160kW chiller; IT cooling: Overhead Fan Coil (6x17kW); electrical room cooling: Overhead Fan Coil (2x17kW)
    • 2 x 40’ containers – 238kW total IT power, 17 racks at 14kW/rack, 225kVA UPS, 10 min batteries, 230kW chiller; IT cooling: RDHx (17 units); electrical room cooling: Overhead Fan Coil (2x17kW)
    • 2 x 40’ containers – 374kW total IT power, 17 racks at 22kW/rack, 450kVA UPS, 10 min batteries, 420kW chiller; IT cooling: RDHx (17 units); electrical room cooling: Overhead Fan Coil (3x17kW)
    All include:
    • Complete insulation
    • Fire detection/suppression
    • Remote monitoring (temperature, humidity, water detection, fire alarm, UPS alarm and aircon alarm)
    Optional:
    • Redundant main electrical panel (2N)
    • Redundant power distribution and cabling
    • Redundant chiller
    • Redundant chiller pumps
    • Redundant chilled water piping, valves and storage vessel
    • Engine generator with ATS
    • Redundant engine generator
    • Humidifier
    • Access control
    • CCTV
    • Lighting and emergency lighting
    Custom and single-container PMDC solutions are also available (alter number of racks, power density, redundancy level, etc.)
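Each row of the reconstructed table is internally consistent in one easily checked way: total IT power equals power density times rack count. A minimal verification of that relationship, using only values from the table:

```python
# Check the standard-design table: total IT power = kW/rack * racks.
designs = [  # (configuration, racks, kW per rack, quoted total kW)
    ("2 x 20' low density",     8,  6,  48),
    ("2 x 20' medium density",  8, 14, 112),
    ("2 x 20' high density",    8, 22, 176),
    ("2 x 40' low density",    17,  6, 102),
    ("2 x 40' medium density", 17, 14, 238),
    ("2 x 40' high density",   17, 22, 374),
]
for name, racks, kw_per_rack, quoted_total in designs:
    assert racks * kw_per_rack == quoted_total
    print(f"{name}: {racks} racks x {kw_per_rack} kW/rack = {quoted_total} kW")
```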
  • 38. Competitive Differentiators (comparison table reconstructed from the slide)
    • Complete integrated and tested infrastructure – IBM PMDC: Yes; Company H: No (purchased from others); Company S: No (purchased from others); Company R: No; Company B: No; Company V: Yes
    • Insulated – IBM PMDC: Yes; Company H: No; Company S: Some; Company R: No; Company B: Yes; Company V: Some
    • Internal service – IBM PMDC: Yes; Company H: No; Company S: No; Company R: Yes; Company B: Yes; Company V: Yes
    • Open architecture – IBM PMDC: Yes; Company H: Yes; Company S: No; Company R: No; Company B: Yes; Company V: No
    • ISO standard – IBM PMDC: Yes; Company H: No; Company S: Yes; Company R: Yes; Company B: No; Company V: No
    • Global models and service – IBM PMDC: Yes; Company H: No; Company S: No; Company R: No; Company B: No; Company V: No
    • Installation capability – IBM PMDC: Indoor/Outdoor; Company H: Indoor only; Company S: Indoor/outdoor (limited); Company R: Indoor only; Company B: ???; Company V: Outdoor only
    • Flexible designs – IBM PMDC: Yes; Company H: No; Company S: No; Company R: No; Company B: Yes; Company V: No
    • Pricing 20’ – IBM PMDC: $550,000 (IT container only); Company H: $600,000 (IT container only); Company S: $559k (IT container only); Company R: Not available; Company B: Not published; Company V: Not available
    • Pricing 40’ – IBM PMDC: $960,000 (IT container); Company H: $1.3M (IT container); Company S: Not available; Company R: Not published; Company B: Not published; Company V: Not published
  • 39. IBM provides a complete solution and can help clients implement globally: determine requirements → detailed planning / design → turnkey solution → start-up testing / site turnover
    What are your data center requirements?
    • Intended use
    • Installation location
    • Capacity
    • Power density
    • Infrastructure
    • Redundancy
    Create a design based on the requirements, defining:
    • Mechanical and electrical systems
    • Cooling systems
    • Number of containers and racks
    • Redundancy levels
    • Fire protection
    • Security systems
    Turnkey solution:
    • Installation site design
    • Site preparation
    • Electrical, mechanical and network feeds
    • PMDC integration
    • PMDC installation
    • PMDC testing
    Site turnover:
    • IT equipment relocation and migration
    • Start up / test PMDC system
    • Client training
  • 40. Installing a data center in a desert. A Middle Eastern university needed a quick data center solution in a desert location
    • Business challenge
    • Traditional data center construction would take 18 – 24 months
    • Needed immediate data processing capability
    • Desert location made on-site construction difficult
    • Solution
    • PMDC provided complete data center in shipping containers, including data center infrastructure
    • Plug-and-play installation in a remote location
    • Benefit
    • 4 weeks delivery time
    • PMDC fully tested prior to shipment
    • Avoided expensive and difficult construction of data center in a nontraditional environment
  • 41. Scalable Modular Data Center
  • 42. Scalable modular data center (SMDC) is an innovative solution for today’s data centers. A cost-effective, high quality 500-2,500 square foot (50-250 square meter) data center, the SMDC can be custom designed and installed in nearly any working environment in less time and at a lower cost than a traditional raised floor data center.
    • SMDC benefits
    • Rapid deployment: A fully functional, cost effective data center in 8-12 weeks
    • Energy efficient design: UPS systems and in-row, load variable cooling
    • High quality: complete turn-key solution from planning to install and start-up
  • 43. Scalable, modular cooling: A quick view of perimeter vs in-row cooling
    • Perimeter Cooling
      • Cools entire room and under raised floor
      • Must deliver cold air at great distances
        • Must overcome obstructions
        • Floor tiles must be strategically placed and moved/adjusted
      • Hot return air mixes with cooler room air lowering return air temperature
        • Increases room air temperature
        • Lowers delta T across coil
      • Not capable of cooling high density loads
        • Effective to 2 – 6kW/rack
    • In-Row Cooling
      • Cools only the racks and IT equipment
      • Close coupled, very short air paths
        • No obstructions to air flow
        • No raised floor or floor tiles required, all air flow at rack level
      • Very little, if any, hot/cold air mixing
        • Easy hot air containment (rack or row level)
        • Increased delta T across coil
      • Can support very high density loads
        • Up to 30kW/rack and more
  • 44. What does a typical SMDC infrastructure look like? SMDC Floor Plan (500sf / 50sm):
    • 2 rows of 5 IT equipment racks
    • 4 InRow cooling units per row
    • One UPS/batteries and PDU per row
    • Chilled water distribution unit
    (Also shown: 3D view of SMDC solution)
  • 45. How can SMDC help with high density server installations?
    Traditional Raised Floor Environment: only 8 servers, wasted rack/floor space
    • Perimeter cooling: cooling capacity ~3kW/rack
    • BladeCenter load limited to ~3kW
    • Limited to 1 BladeCenter/rack with only 8 servers
    • 7U out of 42U usable
    SMDC Environment: 84 servers (6 BladeCenter chassis per rack), no wasted rack/floor space
    • In-row cooling with containment: cooling capacity ~30kW/rack
    • BladeCenter full load ~5kW each
    • Up to 6 BladeCenters/rack with up to 14 servers each
    • Fully populated rack – 42U utilized
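The 8-versus-84-server contrast above is pure cooling arithmetic: servers per rack are limited by the rack's cooling capacity, not by rack units. A small sketch of that calculation, using the per-rack and per-chassis figures from this slide:

```python
# Rack population is limited by cooling: chassis per rack is the cooling
# capacity divided by per-chassis load, capped by available rack units.
RACK_U = 42
CHASSIS_U = 7                 # one BladeCenter chassis
SERVERS_PER_FULL_CHASSIS = 14

def chassis_per_rack(cooling_kw_per_rack: float, kw_per_chassis: float) -> int:
    by_cooling = int(cooling_kw_per_rack // kw_per_chassis)
    by_space = RACK_U // CHASSIS_U
    return min(by_cooling, by_space)

# Perimeter cooling: ~3 kW/rack allows one chassis, itself capped at ~3 kW
# (hence only 8 of 14 server bays populated, 7U of 42U used).
print(chassis_per_rack(3.0, 3.0))                              # -> 1
# In-row cooling with containment: ~30 kW/rack at ~5 kW per full chassis.
n = chassis_per_rack(30.0, 5.0)
print(n, "chassis,", n * SERVERS_PER_FULL_CHASSIS, "servers")  # -> 6 chassis, 84
```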
  • 46. Scalable Modular Data Center – kika/Leiner: one of Europe’s top 5 furniture businesses goes green
    Client requirements
    • Business expansion across Europe and Middle East
    • Aging data center threatens growth
    • Need for a rapidly deployable and green data center concept on limited floor area
    Solution
    • Implemented IBM Scalable Modular Data Center solution with advanced InfraStruXure® architecture from IBM Alliance Partner APC
    • Standardized on IBM BladeCenter®
    • Uses “green” design concepts such as free cooling, a separate high density computing area using in-row cooling, and a flexible expansion area for future growth
    Benefits
    • Supports corporate sustainability (“Green Line”)
    • Reduces electric power consumption by up to 40%
    • Uses energy efficient servers which require 24% less energy than the competition
    • Improved security, reliability and TCO
    “In IBM we have an IT partner who meets our ideal expectations for sustainable business” – Dr. Herbert Koch, manager of the kika/Leiner group
  • 47. Leverage IBM’s experience to help. Green IT: www.ibm.com/green; for CIOs: www.ibm.com/cio; for facilities managers: www.ibm.com/services/siteandfacilities
  • 48. Back-up Slides
  • 49. Providing a flexible, cost-effective data center “Down Under” IBM utilizes PMDC to meet immediate business needs as well as support future IT growth
    • Business challenge:
    • Needed additional data center capacity fuelled by unexpected increase in demand
    • Unable to secure more data center space
    • Co-location space did not meet needs
    • Tight deadlines to implement
    • Solution:
    • Provided two 20’ PMDC containers, IT and infrastructure containers
    • Open IT architecture to support client IT equipment
    • Complete infrastructure solution: UPS, batteries, chiller, RDHx cooling units, security, power distribution, fire systems, engine generator, switchgear, fuel tanks, etc.
    • Designed, built, tested and delivered as a complete solution
    • Benefits:
    • Total PMDC solution fully tested prior to shipment
    • Plug and play solution
    • Allows for future disaster recovery operations
    • Addressed IT needs much faster (~8 weeks) than possible through other methods
  • 50. Quick response to immediate needs with PMDC. Fire in data center only a slight setback
    • Business challenge:
    • Fire in data center stopped IT operations
    • New data center construction would take 18 – 24 months
    • Needed immediate data processing capability
    • Solution:
    • PMDC provided nearly immediate response and ability to restart IT operations quickly
    • PMDC provided complete data center solution
    • Temporary solution became permanent
    • Benefits:
    • Delivery, install and running within weeks
    • PMDC fully tested prior to shipment
    • Re-established data center operations without major loss to business
    • Avoided expensive and difficult construction of data center
  • 51. Quick install of PMDC provides data center power solution IBM utilizes PMDC to address immediate power concerns
    • Business challenge:
    • Growing data center quickly running out of available power
    • No physical space within existing facility to install new equipment
    • Equipment delivery time and construction/installation time was too long
    • Solution:
    • PMDC power infrastructure solution
    • Provided 500kVA UPS N+1, 20 min batteries, 1MW engine generator, switchgear, fuel tanks, etc.
    • Designed, built, tested and delivered as a complete solution
    • Benefits:
    • Delivery, install and running within weeks
    • PMDC power solution fully tested prior to shipment
    • Plug and play solution
    • Avoided expensive and difficult construction of new power house
  • 52. PMDC provides a disaster recovery solution for a Polish bank. IBM PMDC bridges the gap while the new data center is under construction
    • Business challenge:
    • Polish bank needed a disaster recovery data center and turned to IBM for a solution
    • New data center construction would take some time and Nordea needed an interim solution to host the DR platform for its core banking
    • Solution:
    • IBM PMDC container solution providing an immediate interim solution
    • PMDC data center solution contains fully insulated container, UPS system, cooling systems, remote monitoring, fire systems, electrical switchgear and power distribution
    • Supporting IBM Power™ 595 servers, IBM Storage Area Network, IBM System Storage DS8300
    • Benefits:
    • PMDC delivered in 5 weeks, tested and running within 14 weeks
    • Complete turn-key solution, including PMDC installation, IT equipment relocation, IT equipment installation and testing
    • Temporary solution to be used only as long as needed
    • IBM to de-install and remove at end of usage time
  • 53. Designing IBM’s new data center infrastructure Boulder’s smart design is responsive to change and energy efficient
    • LEED Silver Certification
    • Energy Management Programs ($700K)
    • Power Company Rebates
    • Government Incentives
    • Renewable Energy Certificates
    • Environmental Programs
    • 1M kilowatt hours per year of wind-generated electricity
    • 2 million pounds reduced CO2 emissions
    • 98% building shell reused
    • 65% materials recycled
    • 33% new products from recycled materials
    • Cooling
    • 50% energy savings from free cooling
    • $370-700K savings per year from variable speed chillers and CRACs
    • 1.6% energy savings for every 1° change in set point temperature
    • Electrical
    • Modular power density expansion options
    • Other building systems
    • Energy Efficient Lighting
    • High “R” Value Insulation
    • Design / Build
    • 300,000 square feet
    • Modular power density of 90 to 140 watts per square foot
    • Best equipment layout
    • Level 3+ design
    • Operate
    • 70-75% DCiE with free cooling
    • 30% servers targeted for virtualization
    • Quadruple physical utilization of servers
    • 15-20% less power over allocation with power management software
    Chart legend: Industry Related, Facilities Related, IT Related
  • 54. RTP Leadership DC design points
    • Variable speed CRAC and chillers (energy)
    • Removal of CRAC filters after start-up (energy)
    • Elevated Temperature & Humidity criteria (energy)
    • Rainwater harvesting and reuse (green)
    • Provisions for direct water-cooled equipment (flexibility)
    • VESDA air sampling fire detection system (no false alarms)
    • Flex-Head sprinkler assemblies (flexibility)
    • Chilled water storage + 1M gal condenser water (resiliency)
    • Water-side economizing / airside over the MEPs
    • Under floor static air pressure controls (resiliency)
    • 360 degree design for chilled water – allows for zone maintenance
    Mechanical
    • Industry-leading integration of IT equipment and building management systems (BMS) to automatically optimize to varying IT equipment configurations and building loads
    • Provides for future integration with IBM Systems Management software products and allows development of additional energy efficiency software programming, data mining and analysis
    • Backup and Recovery offering
    • Virtualization offerings
    • 3D billing with energy rider
    Integrated Infrastructure
    • Utility capacity, reliability & robustness (reliability)
    • Dual feeds to distribution panels for IT loads
    • Monitoring of branch circuits at PDU/RDC level
    • Arc flash assessment
    • 6, 10, 15 MW in 3 phases (modularity, flexibility)
    • High density zones
    • 480V equipment (flexibility, energy)
    • Catcher bus maintenance (reliability) (design TBD)
    • High efficiency static double-conversion UPS (energy)
    • Timestamps – faults are recorded at same time / waveform capture (reliability)
    • PLC building control system (efficiency / reliability)
    • Double-ended substations (reliability)
    Electrical
    • State of the art building technology
    • LEED silver or gold
    • Viewing Area / fireproof window onto raised floor
    • Base build (not a retrofit)
    • Reflective roof (energy)
    Architectural