
Ever Green DC




  1. - Pravin Agarwal, Sr. Consultant (MS Technologies & Virtualization), +91 9324 338551
  2. Disasters Happen
  3. Disasters Happen
  4. - Defined by the Telecommunications Infrastructure Standard for Data Centers (TIA-942)
     - Classifies data centers into Tiers
     - Each Tier offers a higher degree of sophistication and reliability
  5. Tier I - Basic: 99.671% availability
     - Annual downtime of 28.8 hours
     - Susceptible to disruptions from both planned and unplanned activity
     - Single path for power and cooling distribution; no redundant components (N)
     - May or may not have a raised floor, UPS or generator
     - 3 months to implement
  6. Tier II - Redundant Components: 99.741% availability
     - Annual downtime of 22.0 hours
     - Less susceptible to disruption from both planned and unplanned activity
     - Single path for power and cooling distribution; includes redundant components (N+1)
     - Includes raised floor, UPS and generator
     - 3 to 6 months to implement
  7. Tier III - Concurrently Maintainable: 99.982% availability
     - Annual downtime of 1.6 hours
     - Enables planned activity without disrupting computer hardware operation, but unplanned events will still cause disruption
     - Multiple power and cooling distribution paths, but with only one path active; includes redundant components (N+1)
     - Includes raised floor, UPS and generator
     - 15 to 20 months to implement
  8. Tier IV - Fault Tolerant: 99.995% availability
     - Annual downtime of 0.4 hours
     - Planned activity does not disrupt the critical load, and the data center can sustain at least one worst-case unplanned event with no critical load impact
     - Multiple active power and cooling distribution paths with redundant components
     - 15 to 20 months to implement
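The availability and downtime figures on the tier slides are consistent with a simple conversion over an 8,760-hour year; a minimal sketch (note the Tier II slide rounds a computed ~22.7 hours down to 22.0):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Expected annual downtime implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# TIA-942 tier availabilities from the slides above
for tier, avail in [("Tier I", 99.671), ("Tier II", 99.741),
                    ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {avail}% available -> {annual_downtime_hours(avail):.1f} h/yr downtime")
```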
  9. Start-up Expenses (figures use Indian digit grouping; the INR column assumes Rs. 50 per US$)

     Item                        Est. $ per sq ft   Est. $ for 30,000 sq ft   INR
     Land                                     400        $1,20,00,000.00      Rs. 60,00,00,000.00
     Raised Floor                             220          $66,00,000.00      Rs. 33,00,00,000.00
     Design Engineering                        50          $15,00,000.00      Rs. 7,50,00,000.00
     Power Distribution                        41          $12,30,000.00      Rs. 6,15,00,000.00
     UPS                                       28           $8,40,000.00      Rs. 4,20,00,000.00
     Generator & Bus                           55          $16,50,000.00      Rs. 8,25,00,000.00
     Fire Suppression                          20           $6,00,000.00      Rs. 3,00,00,000.00
     Security Systems                           3             $90,000.00      Rs. 45,00,000.00
     Environmental Monitoring                   5           $1,50,000.00      Rs. 75,00,000.00
     Other Construction                        45          $13,50,000.00      Rs. 6,75,00,000.00
     Network Termination Equip               1000        $3,00,00,000.00      Rs. 1,50,00,00,000.00
     Network Install                          600        $1,80,00,000.00      Rs. 90,00,00,000.00
     Insurance                                  5           $1,50,000.00      Rs. 75,00,000.00
     Reserved                                  10           $3,00,000.00      Rs. 1,50,00,000.00
     Estimated TOTAL                         2482        $7,44,60,000.00      Rs. 3,72,30,00,000.00
  10. - 30,000 sq. ft. facility required
      - Capital costs range from $12 million to $36 million (average: $22 million)
      - Operating costs range from $1 million to $4 million per year (average: $3.5 million)
      - Rural or urban location possible, but travel time is important
  11. - Data Centre requirements:
        - Tier 3
        - 30,000 sq. ft.
          - 10,000 sq. ft. for servers and racks
          - 5,000 sq. ft. for future growth
          - 15,000 sq. ft. for support
      (Diagram: data center with individual representations of each physical component - servers, racks, ACUs, PDUs, chilled water pipes, power and data cables, floor grilles...)
  12. - Entrance Room (analogy: "Entrance Facility")
      - Main Distribution Area (MDA) (analogy: "Equipment Room")
      - Horizontal Distribution Area (HDA) (analogy: "Telecom Room")
      - Zone Distribution Area (ZDA) (analogy: "Consolidation Point")
      - Equipment Distribution Area (EDA) (analogy: "Work Area")
  13. - Requirements and guidelines for the design and installation of a data center or computer room
      - Intended for use by designers needing a comprehensive understanding of data center design
      - Comprehensive document covering: cabling, architectural design, fire protection, network design, environmental design, water intrusion, location, electrical design, redundancy, access
  14. Spaces:
      - Computer room
      - Telecommunications room
      - Entrance room
      - Main distribution area
      - Horizontal distribution area
      - Zone distribution area
      - Equipment distribution area
      Cabling subsystems:
      - Backbone cabling
      - Horizontal cabling
  15. - Power: UPS & batteries, PDU, surge protection, switch gear, branch circuits, distribution panels, transformers, generators
      - Cooling: CRAC, chillers, cooling towers, condensers, ductwork, pump packages, piping, ADU
      - Racks & physical structure: server racks, telco racks, raised floor, dropped ceiling, air dams, aisle partitions
      - Structured cabling: power, data, conduit, trays, overhead, sub-floor, in-rack
      - Security & fire suppression: room security, rack security, EPO, Halon, FM-200, INERGEN®, Novec™
      - CPI management (IP based): not widely adopted, but can utilize SNMP traps
      - CPI management (building management systems): traditional facilities management - analog
      - DCI: room, zone, row, rack
  16. - Cooling: CRAC, chillers, cooling towers, condensers, ductwork, pump packages, piping, ADU
      - Power: UPS & batteries, PDU, surge protection, switch gear, branch circuits, distribution panels, transformers, generators
      - Racks & physical structure: server racks, telco racks, raised floor, dropped ceiling, air dams, aisle partitions
  17. Cooling (CRAC, chillers, cooling towers, condensers, ductwork, pump packages, piping, ADU)
      CRAC and CRAH:
      - Operational efficiencies can range from 40-90%
      - Standard-speed and variable-speed drives available
      - Almost always over-provisioned
      - Measured in tonnage
      - Single largest power OpEx in the data center
      - Not traditionally managed via the network
      Air distribution:
      - Aids the CRAC in getting air to targeted areas
      - Typically just extra fans, no cooling capacity
      - Inexpensive, flexible, and no "forklift" required
      - Can be within racks, rear door or end of row
      - Can buy a customer extra time to plan a move
      - Typically not managed via the network, but can be
  18. - Each watt consumed by IT infrastructure carries a "burden factor" of 1.2 to 2.5 for power consumption associated with cooling, conversion/distribution and lighting
      - The fewer the power supplies supporting a service, the fewer the conversion losses
      (Chart: 50% / 35% / 15% power split)
      Sources: EYP Mission Critical Facilities, Cisco IT, Network World, Customer Interviews, APC
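Reading the burden factor as overhead per IT watt (an assumption; the slide could also intend it as a total multiplier), the total facility draw can be sketched as:

```python
def facility_load_kw(it_load_kw: float, burden_factor: float) -> float:
    """Total facility draw if every IT watt carries `burden_factor` watts of
    cooling, conversion/distribution and lighting overhead (slide's 1.2-2.5 range)."""
    return it_load_kw * (1 + burden_factor)

# 100 kW of IT load at the two ends of the slide's range
print(f"{facility_load_kw(100, 1.2):.0f} kW total")  # best case
print(f"{facility_load_kw(100, 2.5):.0f} kW total")  # worst case
```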
  19. In-row cooling (rack/row) - densities up to 30 kW:
      - 1/3 rack footprint
      - Chilled water
      - 18 kW nominal capacity; 30 kW with containment
      - Hot-swappable fans
      - Dual power feeds
      - kW metering
      - Network manageable
      - Inexpensive way to meet high-density requirements
      (Diagram: front view of a cooling unit beside IT racks; an HP BladeSystem with 4 blade chassis in one rack equates to ~15 kW)
  20. Legacy DC designed to accommodate 2-3 kW per rack; introducing 1/3 high-density infrastructure into a legacy facility is cost prohibitive.
      - Facility: 20,000 ft², 800 kW (+33%), 100-200 racks
      - Annual operating expense: $800k (legacy) vs. $4.6M* (high density)  *peripheral DC costs considered

                                       Legacy Server      High-Density Server
      Power per server                 2-3 kW per rack    > 20 kW per rack
      Power per floor space            30-40 W/ft²        700-800 W/ft²
      Cooling needs (chilled airflow)  200-300 cfm        3,000 cfm

      Source: Gartner 2006
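The floor-space power figures above are internally consistent; a quick check on the 20,000 ft² / 800 kW legacy facility (the rack-footprint figure in the comment is an assumption, not from the slide):

```python
# Legacy facility from the Gartner figures above
facility_kw, area_ft2 = 800, 20_000
watts_per_ft2 = facility_kw * 1000 / area_ft2
print(watts_per_ft2)  # 40.0 -> the top of the 30-40 W/ft^2 legacy row

# A >20 kW rack spread over a typical ~25-30 ft^2 rack footprint (assumed)
# lands in the 700-800 W/ft^2 high-density row.
```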
  21. 1. Conduct a cooling checkup/survey.
      2. Route data cabling in the hot aisles and power cabling in the cold aisles.
      3. Control air-path leaks and manage cabling system pathways.
      4. Remove obstructions below the raised floor and seal cutouts.
      5. Separate blade server cabinets.
      6. Implement ASHRAE TC9.9 hot aisle/cold aisle design.
      7. Place CRAC units at the ends of the hot aisles.
      8. Manage floor vents.
      9. Install airflow-assisting devices as needed.
      10. In extreme cases, consider self-contained cooling units.
  22. - See the cooling top-10 steps!
      - Standardize on a rack SOE
      - Implement scalable UPS systems
      - Increase voltage
      - Target higher UPS loading
      - Investigate DC power
      - Load balance
      - Limit branch-circuit proliferation
      - Monitor power
      - Manage and target power based on a monitoring benchmark
  24. (Diagram: technology stack - I/O subsystem, CPU, operating system, networking, storage, management, applications)
  25. - Cost savings/cost avoidance - a natural by-product of virtualization
      - Increase server utilization - optimize existing investments, including legacy systems
      - Reclaim data center floor space
      - Increase agility - faster and easier response to incidents, problems, and new mandates for energy efficiency and compliance
      - Higher availability
      - Do more with less
  26. Process recommendations:
      - Virtualize to cut data center energy consumption by 50 to 70%
      - Find the "hidden data center" to extend the life of your facilities (detailed scenario)
      - Plan for advanced power management
  27. 70% server volume reduction + refresh with energy-efficient products = 50-70% less data center energy use
  28. Common use case: 200 servers virtualized to 10 physical chassis
      - Increased server utilization to nearly 80%
      - Consolidated servers at a 20:1 ratio
      - Reduced data center space at 20:1
      - No staff increase needed
      - New servers deployed in hours, not weeks
      - Power and cooling savings @ $0.10/kWh: $595 per server, $113,050 annual power savings
      (Diagram: production site with SAN and backup server, DR site, DEV/TEST backup server)
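The $113,050 annual figure follows directly from the per-server saving applied to the retired machines; a minimal check of the slide's arithmetic:

```python
rate = 0.10                  # $/kWh, from the slide
servers_before, hosts_after = 200, 10
saving_per_server = 595      # $/yr per retired server at $0.10/kWh

retired = servers_before - hosts_after        # 190 servers decommissioned
annual_saving = retired * saving_per_server
print(annual_saving)  # 113050 -> the $113,050 quoted above

# Implied energy per retired server: $595 / $0.10 = 5,950 kWh/yr,
# i.e. roughly 680 W of continuous draw including cooling overhead.
```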
  29. Virtualization enables higher levels of efficiency: traditional disparate hardware elements become virtualized shared resource pools.
      (Diagram: servers consolidated onto a shared storage network)
  30. Server virtualization and network storage:
      - Simplifies management
        - Centralize server & storage management
        - Improve service levels
      - Increases flexibility
        - Optimize capacity planning
        - Eliminate scheduled downtime
      - Reduces total cost of ownership
        - Improve capacity utilization
        - Require fewer physical systems
      - Enables high-speed, SAN-based, non-disruptive server workload movement
        - Migration (VMotion)
        - Failover (HA - High Availability)
        - Load balancing (DRS - Distributed Resource Scheduler)
  31. - Reduce TCO, including large capital expenditures for data center facilities
      - Resolve immediate environmental issues, e.g., hot spots
      - Combat server and storage growth rates as they exceed data center capacity
      - Stop data center sprawl to simplify capacity planning and operations
      - Contain escalating energy costs as power costs begin to exceed IT equipment costs
      - Support increasing compute density, which can overwhelm existing power/cooling capacity
      - Comply with corporate "green" initiatives such as carbon neutrality
      (Chart: reliability vs. investment - risky underinvestment, optimal investment, wasteful over-provisioning)
  32. Find the Hidden Data Center approach
  33. Four main types of virtualization technologies are emerging:
      - Server virtualization: virtualizes the physical CPU, memory and I/O of servers
      - I/O virtualization: virtualizes the physical network topology and the mappings between servers and storage
      - File virtualization: virtualizes files and namespaces across file servers
      - Storage virtualization: virtualizes physical block storage devices
  34. - "Pools" of commonly grouped physical resources
      - Dynamic allocation based on application-level grouping and usage policies
      - Interconnected and controlled through an intelligent interconnect fabric
      (Diagram: compute, networking and storage virtualization - applications on virtual servers plus a stand-by resource pool, drawing on server processing, I/O and storage through an intelligent fabric)
  35. Server, storage, and network consolidation:

                   Physical                     Virtual
      Servers      1,000                        80
      Storage      Direct attach                Tiered SAN and NAS
      Network      3,000 cables/ports           300 cables/ports
      Facilities   200 racks, 400 power whips   10 racks, 20 power whips
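The numeric rows of the consolidation table amount to roughly an order-of-magnitude reduction across the board; a small sketch of the percentages:

```python
# Before/after counts from the consolidation slide above
physical = {"servers": 1000, "cables/ports": 3000, "racks": 200, "power whips": 400}
virtual  = {"servers": 80,   "cables/ports": 300,  "racks": 10,  "power whips": 20}

for item in physical:
    cut = 100 * (1 - virtual[item] / physical[item])
    print(f"{item}: {cut:.0f}% reduction")
```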
  36. (Diagram: physical server running an ESX layer hosting virtual machines. Benefits: performance, scalability, stability. No managed services, VMotion, etc.)
  37. (Diagram: as above, adding SAN-backed VMotion for the guest OS. Benefits: performance, scalability, stability. Still no managed services.)
  38. (Diagram: as above, adding a Windows OS and managed services on top of SAN and VMotion. Benefits: performance, scalability, stability.)
  39. Case study - Darwin City Council:
      - Infrastructure in the data centre running out of capacity (SAN ports, IP ports)
      - Disaster recovery planning in place, but the ability to execute was not present
      - Desire to increase the capabilities and service offerings of the IT department
      - Server environment well structured and built to an SOE
  40. Case study - Darwin City Council (continued):
      - Virtual infrastructure built on Dell 2950s with VT-enabled Woodcrest processors
      - Network segmented into three security zones
      - Virtualisation architecture designed to enhance the overall security of the DCC network
  41. Virtualisation cost analysis (assumes software costs are static)

      No change (three years)
        As-is cost (hardware, electricity)                  -$194,166.41
        Provisioning of new hardware                         -$26,974.36
        Total                                               -$221,140.77
        Greenhouse emissions (tonnes)                         387.23

      Virtualisation (three years)
        Virtualisation hardware                              -$49,700.00
        Gain in productivity                                  $29,587.50
        Virtualisation software                              -$14,700.00
        Internal implementation costs (incl. provisioning)    -$6,069.23
        Consulting costs                                     -$16,000.00
        Total                                                -$56,881.73
        Greenhouse emissions (tonnes)                          44.28

      Net change: $164,259.04 (74% reduction)
      Greenhouse reduction: 114.32 tonnes per annum
      Electricity savings: $19,053.00 over 3 years
      Server count reduction: 10
      NPV after 3 years: $153,973.94 (77%)
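The summary rows of the cost analysis can be re-derived from the two three-year totals and the emissions figures; a consistency check, not new data:

```python
no_change_total   = 221_140.77   # three-year cost, no change ($)
virtualised_total = 56_881.73    # three-year cost with virtualisation ($)

net_change = no_change_total - virtualised_total
print(f"${net_change:,.2f}")                         # $164,259.04
print(f"{100 * net_change / no_change_total:.0f}%")  # 74%

emissions_cut_pa = (387.23 - 44.28) / 3              # tonnes per annum
print(f"{emissions_cut_pa:.2f} t/yr")                # 114.32
```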
  42. - 46 x86-based servers (retired 30 servers)
      - 122 x86-based hosts
      - 10 ESX servers hosting 86 VMs
      - 86 virtual hosts vs. 36 physical hosts
      - 70% virtualized in the x86 space
  43. - Cost of each blade: $6,250.00 (includes disk, memory, dual processors, etc.)
      - Number of additional servers avoided: 86 virtual - 10 ESX servers = 76
      - Cost to provide physical servers: 76 x $6,250 = $475,000.00
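The slide's capex-avoidance arithmetic, restated as a short sketch:

```python
blade_cost = 6_250          # $ per blade, incl. disk, memory, dual processors
vms, esx_hosts = 86, 10

avoided_blades = vms - esx_hosts          # 76 physical servers not purchased
capex_avoided = avoided_blades * blade_cost
print(capex_avoided)  # 475000 -> the $475,000.00 on the slide
```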