This presentation covers ways to increase data center efficiency, from the basics through more advanced techniques to the services that are available. Many of these topics are covered in individual white papers and presentations, but we wanted to bring them together in one presentation.
The document discusses strategies for making data centers more energy efficient without compromising performance or reliability. It outlines 10 best practices including using more efficient processors and power supplies, server virtualization, improved cooling practices, and monitoring systems. Implementing these holistic strategies from the IT equipment level upwards can significantly reduce energy usage through cascading effects while freeing up capacity.
Commercial Overview DC Session 3: The Greening Of The Data Centre (paul_mathews)
Data centers consume large amounts of electricity and produce significant carbon emissions. They are often inefficient, with only around 50% of energy going to IT loads while the rest is lost to physical infrastructure. This can be improved through better sizing, modular scalable designs, high-efficiency equipment, and optimization techniques like hot/cold aisle containment. Achieving higher data center infrastructure efficiency (DCiE) through monitoring and improvement strategies can reduce electricity bills by up to 50% and lower environmental impact.
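As a quick illustration of the relationship between these quantities, the sketch below (with assumed example numbers) computes DCiE and its reciprocal, PUE, for a facility where only half the energy reaches the IT load:

```python
# Illustrative sketch with assumed numbers: DCiE is the fraction of total
# facility power that reaches the IT load; PUE is its reciprocal.

def dcie(it_kw: float, total_kw: float) -> float:
    """Data Center infrastructure Efficiency = IT power / total facility power."""
    return it_kw / total_kw

def pue(it_kw: float, total_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_kw / it_kw

# A facility where only ~50% of energy reaches the IT load, as described above:
print(dcie(500, 1000))  # 0.5 -> DCiE of 50%
print(pue(500, 1000))   # 2.0 -> PUE of 2.0
```

Raising DCiE toward 1.0 (equivalently, driving PUE toward 1.0) means a larger share of every purchased kilowatt-hour does useful IT work.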
Data Center Cooling Design - Datacenter-serverroom (marlisaclark)
Keep your data center cool and healthy with our smart Data Center Cooling Design, which ensures your data centers never become overheated and always run efficiently. Visit: http://www.datacenter-serverroom.com/rack-row-room-data-center-cooling
Virtualization and Cloud Computing: Optimized Power, Cooling, and Management ... (Schneider Electric)
IT virtualization, the engine behind cloud computing, can have significant consequences on the data center physical infrastructure (DCPI). Higher power densities that often result can challenge the cooling capabilities of an existing system. Reduced overall energy consumption that typically results from physical server consolidation may actually worsen the data center’s power usage effectiveness (PUE). Dynamic loads that vary in time and location may heighten the risk of downtime if rack-level power and cooling health are not understood and considered. Finally, the fault-tolerant nature of a highly virtualized environment could raise questions about the level of redundancy required in the physical infrastructure. These particular effects of virtualization are discussed and possible solutions or methods for dealing with them are offered.
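The PUE effect described above can be sketched with a simple fixed-overhead model (all numbers are illustrative assumptions): consolidation cuts the IT load, but much of the infrastructure overhead does not scale down with it.

```python
# Hedged sketch with assumed numbers: server consolidation halves the IT load,
# but part of the physical-infrastructure overhead (cooling, UPS losses,
# lighting) is fixed, so PUE worsens even though total energy falls.

FIXED_OVERHEAD_KW = 400.0   # assumed fixed infrastructure load
PROPORTIONAL = 0.2          # assumed overhead fraction that scales with IT load

def total_kw(it_kw: float) -> float:
    return it_kw + FIXED_OVERHEAD_KW + PROPORTIONAL * it_kw

def pue(it_kw: float) -> float:
    return total_kw(it_kw) / it_kw

before_it, after_it = 1000.0, 500.0   # consolidation halves the IT load
print(pue(before_it))                 # 1.6 -> PUE before consolidation
print(pue(after_it))                  # 2.0 -> worse PUE after consolidation
print(total_kw(before_it) > total_kw(after_it))  # True: total energy still fell
```

This is why a falling energy bill and a rising PUE can coexist: PUE is a ratio, not a measure of absolute consumption.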
Improving your PUE while consolidating into an existing live data center (Schneider Electric)
While there are multiple consolidation options to consider, upgrading an existing data center has a significantly lower capital investment, requires no new real estate acquisition, can be phased to match IT refresh cycles and IT virtualization, and can be done while the data center is live. This session explores these considerations, which are particularly important in the Federal space, along with a high-density POD overlay discussion and approaches to reducing PUE.
NDY Melbourne mission critical market sector leader and associate director Hayley McLoughlin recently presented "Beyond The Future - Upgrading a Data Centre Beyond its Planned Capacity" at the Data Centre Dynamics Conference in Melbourne.
Her insightful presentation investigates the challenges of data centre upgrade projects through a case study of a particularly demanding upgrade that NDY is currently delivering in the Asia Pacific region.
This document discusses how to design data centers for maximum energy efficiency. It identifies the main culprits of inefficiency as power equipment, cooling equipment, lighting, oversized equipment, and poor configuration. It recommends 8 characteristics of highly efficient data centers, including using scalable power and cooling, row-based cooling, high-efficiency UPS, high voltage distribution, variable speed drives, capacity management tools, and room layout tools. The document outlines 7 key elements of efficient data center design that incorporate these characteristics and can save up to 40% on energy costs.
Implementing Hot and Cold Air Containment in Existing Data Centers (Schneider Electric)
This document discusses implementing hot and cold air containment in existing data centers. It describes various containment methods like cold aisle containment, ducted hot aisle containment, and rack air containment. It emphasizes assessing facility constraints, reviewing all potential solutions, and selecting containment methods based on the facility's air distribution system and constraints. Ongoing maintenance of proper airflow patterns and temperatures is also important for effective containment.
This presentation discusses how to optimize data center efficiency. It begins with an introduction of new regulations for energy efficiency in Europe. Then it discusses basics of data center design including power, cooling, and availability requirements. It notes that energy costs are rising significantly. The presentation explores optimizing hardware, UPS systems, and cooling to improve efficiency. It provides examples of efficiency gains from right-sizing infrastructure and high-efficiency UPS systems. Overall, the presentation suggests that data center efficiency can be improved by up to 40% through optimization techniques.
Commercial Overview SCS Session 1: Server Rack Strategies (paul_mathews)
This document provides an overview of server rack strategies, including:
1) It introduces tower, rack, and blade servers and their uses in small, medium, and large scale environments.
2) It discusses the purpose and design of server racks/cabinets for securely and efficiently holding electronic equipment and facilitating airflow/cooling.
3) It outlines best practices for efficient ventilation and power distribution in server racks to reduce energy costs and emissions as data center heat loads increase over time.
Proactively Managing Your Data Center Infrastructure (kimotte)
Attached is the presentation from our Proactively Manage Data Center Infrastructure Webinar - to view the webinar with audio, go here: http://blog.eecnet.com/proactive-manage-data-center/
High Efficiency Indirect Air Economizer Based Cooling for Data Centers (Schneider Electric)
Of the various economizer (free cooling) modes for data centers, using fresh air is often viewed as the most energy efficient approach. However, this paper shows how indirect air economizer-based cooling produces similar or better energy savings while eliminating risks posed when outside fresh air is allowed directly into the IT space.
The document summarizes Tom Greenbaum's presentation on data center lessons learned from his experience as the Data Center Operations Manager at Intel's Rio Rancho campus. It discusses innovations at the Intel data centers including reuse of an existing factory building as a data center, high-density air cooled cabinets, an air economizer proof of concept, a container data center proof of concept, and support for the Encanto supercomputer housed in the factory data center building.
Row-based data center cooling works by capturing hot air from IT equipment before it mixes with surrounding room air, rather than by supplying cold air. It does this through the back-to-front airflow of row coolers located near IT racks in a pod-based layout. Contrary to common misconceptions, row coolers do not require turning vanes, nor placement in every row, to effectively cool both rows of a pod through hot air capture. They can also cool loads outside their immediate pod when designed and placed properly.
The document discusses green IT and how organizations can reduce their carbon footprint through various IT practices. It outlines that ICT accounts for about 2% of global CO2 emissions and describes strategies like virtualization, data center consolidation, power management of devices, and recycling/reusing equipment to cut energy use and emissions. The future may bring more legislation around IT sustainability as well as more energy-efficient technologies and dynamic power management across the IT infrastructure.
This document discusses green computing or green IT, which aims to maximize energy efficiency during a product's lifetime through approaches like virtualization, power management, and proper recycling. It describes how virtualization allows combining multiple physical systems into virtual machines on a single powerful system to reduce power usage. Using LCD/LED displays and terminal servers connected to thin clients also decreases energy costs. Implementing power management at the administrative level allows automatically turning off hardware like monitors during inactivity to save energy. Data centers and their high energy usage are mentioned as areas green IT approaches could be applied through virtualization and power usage effectiveness methods.
The document discusses challenges facing data centers and introduces a new product called the GP100 that aims to address these challenges. It summarizes the issues of balancing electrical loads, gaining efficiencies, density limitations, and high power needs. The GP100 is presented as a revolutionary 3-phase power supply that delivers high power density in a compact 1RU form factor. It claims to eliminate the need for load balancing and allow for greater efficiencies, cost savings, and reliability compared to traditional power solutions. The document outlines how the GP100 could transform various industries by providing critical power needs in bandwidth-constrained environments.
Data Center Cooling Strategies for Efficiency - Techniques to Reduce Your Energy Bill by 20-80%
Data center cooling is a hot topic. When you consider the challenges of cooling the latest generation of servers, the growing cost of infrastructure equipment, and ever-growing concern about energy efficiency, it's easy to understand the focus.
To view the recorded webinar presentation, please visit http://www.42u.com/cooling-strategies-webinar.htm
Green data centers are the only way forward. To improve efficiency and PUE, we are leveraging technology and resources to cut emissions and utilize power more effectively to reduce losses. This project covers the latest trends in green IT and how banks can use these technologies to upgrade their legacy systems.
The document discusses the advantages of liquid cooling over traditional air cooling for data centers. Liquid cooling allows for higher compute capacity and energy savings by removing heat more efficiently and accurately from servers. It also discusses a new nanofluid being developed by the NanoHex project that could improve heat transfer even further when integrated into a liquid cooling system. Finally, it presents a unique liquid cooling device developed by Thermacore Europe that is ready to install in servers and could double the compute capabilities of a data center while delivering coolant directly to processors.
Gartner has developed new metrics to measure data center energy efficiency beyond PUE. The new metrics are Idle Energy (IE), which measures the energy consumed when equipment is idle, and Computational Energy (CE), which measures the energy used for useful computation. IE + CE = Total Energy. Reducing IE improves efficiency. An example shows how improving IE from 56.43% to 49.37% through power capping increased annual useful energy by over 525,000 kWh, saving $55,000. The document recommends using kWh to evaluate efficiencies, reducing IE, and using CE to quantify energy for IT services.
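The IE/CE arithmetic can be sketched as follows; the annual total energy and the electricity rate are assumptions back-derived from the quoted figures, not values stated in the source:

```python
# Sketch of the Idle Energy / Computational Energy arithmetic above.
# TOTAL_KWH and RATE are illustrative assumptions chosen to reproduce
# the quoted ~525,000 kWh and ~$55,000 figures.

TOTAL_KWH = 7_440_000    # assumed annual facility energy (kWh)
RATE = 0.105             # assumed electricity rate ($/kWh)

ie_before, ie_after = 0.5643, 0.4937   # Idle Energy fraction, before/after power capping

# IE + CE = Total, so Computational Energy is whatever is not idle.
ce_before = TOTAL_KWH * (1 - ie_before)
ce_after = TOTAL_KWH * (1 - ie_after)

gained_kwh = ce_after - ce_before
print(round(gained_kwh))         # ~525,000 kWh of additional useful energy
print(round(gained_kwh * RATE))  # ~$55,000 in annual savings
```

The point of the metric: every percentage point shaved off IE converts directly into useful computational energy (or avoidable spend) at the facility's electricity rate.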
The document discusses several key factors for designing an efficient data center. It recommends (1) virtualizing servers and storage to increase utilization, (2) optimizing space, power, and cooling usage to reduce wasted capacity, and (3) rearranging equipment layouts to address fragmentation and maximize available capacity. It also stresses the importance of selecting an appropriate location and tier level of resilience based on the data center's needs. Overall, the document provides guidance on foundational strategies to design an efficient and cost-effective data center.
Data Center Floor Design - Your Layout Can Save or Kill Your PUE & Cooling Ef... (Maria Demitras)
Implementing data center best practices and using CFD models allowed Great Lakes to suggest a data center layout that would improve PUE and efficiency. Jason Hallenbeck, DCDC, explains the concepts behind how data center floor design can save or kill your PUE and cooling efficiency—as found in this proposal. Find Jason presenting at the BICSI Fall Conference on September 14th at 1:30 pm.
Virtual Power Systems - Intelligent Control of Energy (ICE) and Software Defi... (Steve Houck)
VPS provides a Software Defined Power solution using intelligent batteries and software to optimize power distribution in data centers. This allows data centers to increase power utilization from 20-60% to over 90% by peak shaving and dynamically allocating power budgets. It can generate 20-50% additional revenue and defer $10-15M/MW in CapEx and $1M/MW/yr in OpEx. The solution is deployed non-disruptively using VPS hardware and software to monitor and control power distribution.
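A minimal sketch of the peak-shaving idea behind this approach (the load profile, feed limit, and battery rating are assumptions, and battery recharge is not modeled):

```python
# Simplified peak-shaving sketch: a battery discharges during demand peaks
# so the utility feed never exceeds its rating, letting the feed be sized
# closer to average demand instead of peak demand. Recharge is not modeled.

FEED_LIMIT_KW = 100.0   # assumed feed/rack power budget
BATTERY_KW = 30.0       # assumed maximum battery discharge rate

load_kw = [60, 80, 120, 125, 90, 70]   # assumed hourly IT demand

shaved = []
for demand in load_kw:
    # Battery covers only the part of demand above the feed limit, capped
    # at its discharge rating; the feed supplies the rest.
    from_battery = min(max(demand - FEED_LIMIT_KW, 0.0), BATTERY_KW)
    shaved.append(demand - from_battery)

print(shaved)                        # peaks clipped down to the feed limit
print(max(shaved) <= FEED_LIMIT_KW)  # True: the feed stays within its rating
```

In this toy profile the feed only ever sees 100 kW even though demand peaks at 125 kW, which is the mechanism that lets stranded power capacity be reclaimed.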
Effective data center design doesn't have to be complicated. Learn how simple topology solutions and proven, cost-effective technologies can help simplify operations and achieve the business and performance objectives of your data center.
Smart energy is a trending concept in a field crowded with competing and complementary terms, technologies, and approaches – but has the business case yet been made? Andy Lawrence will share his views on where this technology segment is headed.
Overcoming Rack Power Limits with Virtual Power Systems Dynamic Redundancy an... (Steve Houck)
Summary
This paper describes how SourceMix, a dynamic redundancy technology from VPS, allows Intel® Rack Scale Design (Intel® RSD) customers to take full advantage of system composability and module upgradeability by extending the existing data center power infrastructure.
This document discusses the utility and limitations of PUE (Power Usage Effectiveness) as a data center efficiency metric. While PUE is a useful high-level metric, it does not provide enough detail to optimize efficiency. PUE only measures the ratio of total facility power to IT equipment power, but does not account for factors like server utilization, resilience, or diversity of the IT load. The document argues that more detailed energy monitoring data is needed at the server, rack, and application level over time to properly evaluate efficiency and enable tangible efficiency actions.
The document discusses the utility and limitations of PUE (Power Usage Effectiveness) as a metric for datacenter efficiency. While PUE is a widely used high-level metric, it does not provide enough information on its own to optimize efficiency. To enable effective efficiency actions, more detailed energy monitoring data is needed, including power consumption at the individual IT device level trended over time. Gathering additional operational data beyond just PUE can provide insights to reduce energy waste throughout the entire datacenter system.
How to obtain energy savings in Data Centre A/C?
Enrico Boscaro, CAREL Group Marketing Manager – HVAC Industrial, has spoken about high-efficiency technologies for data centre air conditioning in this webinar organized by Eurovent Middle East.
This document discusses options for data center owners and operators to consider when their aging infrastructure may no longer meet current or future needs. As digital traffic and the internet of things continue to grow rapidly, data center infrastructure is facing unprecedented challenges. The document outlines various strategies to evaluate such as tuning up existing facilities, targeted modernization of critical components, adopting pod-based architectures, and building new infrastructure to right-size capacity. Each option involves analyzing business needs, costs, efficiency gains, and potential downtime to determine the best path forward.
Data Centers: Compute Faster and More Sustainably with Microconvective Cooling® – JetCool Technologies
As the demand for AI, ML, and big data grows, data centers are facing significant energy challenges. It is estimated that 3% of the planet's total energy consumption is used by data centers with 30% of that attributed to cooling equipment.
Inferior in-rack cooling solutions use considerable energy and resources often requiring power- and water-intensive infrastructure like chillers and evaporative coolers. Depending on the local climate, these systems can be used year-round with U.S. data centers alone consuming an estimated 660 billion liters of water in 2020.
Processors rely on optimized cooling solutions to maximize power density and increase compute performance. With suboptimal cooling solutions, heat can cause processor throttling and limited device lifetime. As power density and processor performance increase so do the expectations for cooling technology.
Recognizing the need for a simple, scalable, and sustainable liquid cooling option for data centers, JetCool developed SmartPlate. SmartPlate is a simple water cooling system that outfits processors with a fully sealed, drop-in replacement cooling solution that lowers chip temperatures by 37%. SmartPlate technology helps accelerate performance, delivering 30% faster processing for TDPs over 1,000 watts. Sustainability is one of JetCool's primary principles: our customers use 50% less electricity and 90% less water than competing liquid cooling solutions. Reach out to our team to learn more about partnering with JetCool today.
The document discusses various strategies for optimizing the energy efficiency of data centers, including:
1) Establishing an energy baseline and forecasting IT growth to determine optimization opportunities.
2) Implementing metrics like PUE and DCE to measure efficiency and compare to other data centers.
3) Improving airflow management through practices like hot/cold aisle layouts and blanking panels.
4) Matching cooling capacity to IT load and eliminating hot spots through technologies like modular cooling systems.
5) Considering alternative cooling technologies like carbon dioxide cooling that can reduce energy use by up to 30%.
COMMON PROBLEMS AND CHALLENGES IN DATA CENTRES – Kamran Hassan
In this paper, common problems and challenges of data centers are identified, and methods to improve data center efficiency and reliability are explained.
Slides: The Top 3 North America Data Center Trends for Cooling – Graybar
The document summarizes a presentation on trends in North American data center cooling. The top 3 trends discussed are: 1) Increased use of economizers as the primary cooling mode rather than supplemental to reduce energy costs; 2) Regulations requiring economizer use in most climate zones; and 3) Data center workloads becoming more dynamic, requiring cooling systems to adapt quickly. Indirect air-to-air heat exchangers are presented as the most efficient economizer option. Liquid cooling is discussed but seen as mainly suitable for niche HPC applications currently. Established technologies like perimeter cooling and containment are evolving to higher efficiencies.
Green & Beyond: Data Center Actions to Increase Business Responsiveness and R... – IBMAsean
The document discusses actions that data centers can take to increase responsiveness, reduce costs, and become more environmentally friendly. It outlines five building blocks: diagnose energy usage, build energy efficient infrastructure, optimize cooling, implement virtualization, and continuously measure and manage energy usage. Data centers that follow these principles can achieve 40-50% energy savings, reduce operational costs by $1.3 million per year, and lower their environmental impact by reducing emissions equivalent to 1,300 cars.
This document discusses energy efficiency in data centers. It notes that data center electricity usage doubled from 2000 to 2005 due to more powerful servers, denser server configurations, and larger storage needs. This increased usage leads to higher costs, energy waste, cooling challenges, and larger carbon footprints. The document then discusses various motivations for improving energy efficiency, such as rising electricity costs. It provides examples of PUE values from different data centers and outlines ways to improve PUE and energy efficiency, including maximizing server efficiency, reducing the data center's "house load" through techniques like free cooling, and optimizing power conversion. The document emphasizes the importance of reusing waste heat from servers through techniques like heating buildings or driving chillers.
Rainer Weidmann
T-Systems Representative
Deutsche Telekom
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
An exploration of the benefits and limitations of the popular Power Usage Effectiveness (PUE) metric, for gauging datacenter efficiency.
How to avoid the pitfalls inherent in the definition of PUE; and some suggested means by which the PUE concept can be enhanced in real-world applications.
CORPORATE PLUG: For more information about how Raritan helps solve this problem, I encourage you to see: http://www.raritan.com/resources/screenshots/power-iq/
Implementing the Poughkeepsie Green Data Center – Elisabeth Stahl
The document summarizes IBM's implementation of an energy efficient data center transformation project at their Poughkeepsie facility. Key steps included conducting thermal analysis to identify issues, reorganizing the data center into hot and cold aisles, adding rear door heat exchangers and water cooling, implementing energy management techniques like virtualization, and instrumenting the data center to monitor energy usage and results. The transformation improved power usage effectiveness while increasing computing capacity in the same footprint and providing flexibility for future growth.
The document discusses the next wave of green IT and making data centers more energy efficient. It notes that data center energy costs are significant and that McKinsey predicts data centers will produce more greenhouse gases than airlines by 2020. It provides best practices for building sustainable green data centers, including exploiting virtualization, improving server utilization rates, and designing efficient cooling systems.
Metering Energy Consumption in Data Centres - Colin Love – GoodCampus
ULCC operates a data center that provides hosting services for external customers. To manage power usage and charge customers appropriately, ULCC meters power at multiple points in the data center. This includes meters on the main panel, in-rack power monitors, and in-rack PDU monitors. ULCC devolves energy budgets to customers to give them more visibility and control over their power usage, while still ensuring overall power demand does not exceed capacity. ULCC aims to continue improving data center efficiency through measures like hot aisle containment and more efficient cooling.
Download this at http://parker.com/egt
Currently, cooling, modularity, monitoring and control are common issues in grid tie applications, resulting in decreased efficiency and costly downtime. Future trends are driving towards power converter systems that offer lower cost, higher power density, higher efficiency, as well as high availability and yield. All this while demanding modularity for plug and play and with predictive and preventative maintenance monitoring.
As providers of 85 MGW grid tie conversion systems worldwide, Parker has an extensive product portfolio, application knowledge and experience, therefore well positioned to guide on methods to solve these specific industry challenges, through:
Advanced 2-phase refrigerant invert cooling - 60% smaller in size and provides increased energy output over air cooled solutions
Modular power electronics – ensuring units can be replaced quickly and easily
Implementing a predictive maintenance schedule, such as self-monitoring to reduce potential downtime issues
Techniques to improve the control of your system.
This document discusses strategies for improving data center efficiency through server virtualization. It notes that servers currently account for 40% of data center electricity use, and virtualization can help consolidate servers to reduce power consumption. The key is to first address efficiency at the server level before considering other infrastructure upgrades. The document outlines various approaches to virtualizing servers, such as spreading workload across physical hosts, using supplemental cooling, or designating high-density and low-density server areas. No single strategy is best and many factors must be considered to maximize efficiency gains from virtualization.
This webinar discusses energy efficient measures for server rooms. It begins with introductions of the speakers and an overview of their client's goals of reducing carbon emissions by 40% by 2020. However, they have discovered that server rooms are a major problem area. Data from the DCDI 2013 census shows that server room energy use is ballooning. In-house server rooms have low utilization rates, high cooling overhead and energy is a low priority without separate metering. Outsourced data centers have significant advantages in these areas. The webinar then discusses various energy efficiency strategies that can be implemented in server rooms like consolidation, virtualization, temperature adjustments, containment and free cooling. Case studies show energy reductions of over 50% are possible
Green cloud computing aims to make cloud infrastructure more energy efficient and environmentally friendly. Adopting measures like using more renewable energy sources, virtualizing servers, and improving data center cooling can help reduce carbon emissions and operational costs. Virtualizing servers allows multiple virtual machines to run on a single physical server, increasing efficiency and hardware utilization. Data centers also aim to lower their power usage effectiveness rating by implementing designs with hot-aisle/cold-aisle configurations and adopting newer technologies. Transitioning to renewable energy sources for power can further reduce the carbon footprint of cloud infrastructure and lead to more stable energy prices over time.
2. Data Center World – Certified Vendor Neutral
Each presenter is required to certify that their presentation will be vendor-neutral. As an attendee, you have the right to enforce this policy of having no sales pitch within a session by alerting the speaker if you feel the session is not being presented in a vendor-neutral fashion. If the issue continues to be a problem, please alert Data Center World staff after the session is complete.
3. Agenda - Possible Ways to Increase Efficiency
• Basics - case study using a TradeOff Tool
• Size power and cooling to your current load
• Turn up the IT inlet temperature
• Virtualize your servers
• Deploy hot or cold aisle containment
• Use DCIM: Energy Efficiency
• Free cooling - air-side economizers
• Services - supply-side energy procurement, energy efficiency assessment, optimization
• White papers
4. Data center power and cooling infrastructure worldwide wastes more than 60,000,000 megawatt-hours per year of electricity that does no useful work powering IT equipment.
5. Where Does the Power Go in a 2N Data Center?
Power flow in a typical 2N data center at 50% load
7. Data Center Efficiency Calculator: TradeOff Tools
• High-level planning tools
• Show actual implications of various deployment decisions
• Accurately calculate potential impacts on a data center
10. High-efficiency UPS to Improve Power Efficiency
Efficiency gain is greatest at lighter loads.
LBNL report on UPS efficiency: http://hightech.lbl.gov/documents/UPS/Final_UPS_Report.pdf, Figure 17, page 23.
13. Variable-speed Drives on Pumps and Chillers to Improve Cooling Efficiency
Chillers and pumps with fixed-speed motors are configured for maximum expected load and worst-case (hot) conditions, and therefore spend much of their operating time with their motors working harder than necessary. Pumps and chillers equipped with variable-speed drives (VFDs) and appropriate controls can reduce their speed and energy consumption to match the current IT load and outdoor conditions. The energy improvement varies, but can be as large as 10% or more.
21. Scaled Buildout: Right-sizing benefits over a 10-year data center lifetime
[Chart: PUE (y-axis, 1.5 to 4.0) over years 1–10, comparing an UPFRONT buildout (day one) against a SCALED buildout (as needed)]
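Why the scaled curve sits lower can be sketched with a toy loss model (the coefficients below are illustrative assumptions, not measured data): fixed infrastructure losses scale with installed capacity, so an upfront buildout carries full-capacity losses while the IT load is still small.

```python
def pue_estimate(it_load_kw: float, capacity_kw: float,
                 fixed_loss_frac: float = 0.15,
                 prop_loss_frac: float = 0.45) -> float:
    """Toy PUE model: fixed losses scale with installed capacity,
    proportional losses scale with the actual IT load.
    Coefficients are illustrative assumptions, not measured data."""
    fixed = fixed_loss_frac * capacity_kw
    proportional = prop_loss_frac * it_load_kw
    return (it_load_kw + fixed + proportional) / it_load_kw

# Year-one load of 200 kW in a room planned to grow toward 1 MW:
upfront = pue_estimate(200, capacity_kw=1000)  # full capacity built on day one
scaled = pue_estimate(200, capacity_kw=250)    # built out in steps, just ahead of load
print(f"upfront PUE ~ {upfront:.2f}, scaled PUE ~ {scaled:.2f}")  # ~2.20 vs ~1.64
```

As the IT load grows toward capacity, the two curves converge, which is the shape the chart shows.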
26. Virtualized - Spread out the load
Easy . . .
• No new power and cooling equipment required
But . . .
• Takes up valuable floor space
• Relies on existing unpredictable cooling architecture
• No power/cooling efficiency benefit
What about Phase #2? Rack space you cannot use…
32. Hot and Cold Aisle Containment
[Diagram: containment options ranked from quick and least expensive to best efficiency]
• Blanking panels (quick, least expensive)
• Supplemental cooling
• InRow™ cooling
• Rear air containment
• Dynamic aisle containment (best efficiency)
44. Resources
White papers
WP118: Virtualization and Cloud Computing: Optimized Power, Cooling and Management Maximizes Benefits
WP135: Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency
WP143: Data Center Projects: Growth Model
WP153: Implementing Hot and Cold Air Containment in Existing Data Centers
TradeOff Tools
TT6: Data Center Efficiency Calculator
TT5: UPS Efficiency Comparison Calculator
TT11: Cooling Economizer Mode PUE Calculator
TT9: Virtualization Energy Cost Calculator
45. 3 Key Things You Have Learned During this Session
1.
2.
3.
When do companies want to discuss energy efficiency? For the most part, only during cost-savings planning and when they run out of utility power.
This presentation covers ways to increase efficiency: from what I call the basics, to more advanced techniques, and then through the services that are available. Many of these are covered in individual white papers and presentations; however, I wanted to bring these topics together under one presentation.
Data center power and cooling infrastructure worldwide wastes more than 60,000,000 megawatt-hours per year of electricity that does no useful work powering IT equipment. This represents an enormous financial burden on industry, and is a significant public policy environmental issue. This presentation describes the principles of a new, commercially available data center architecture that can be implemented today to dramatically improve the electrical efficiency of data centers.
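To put that waste figure in dollar terms, a one-line calculation suffices; the electricity rate below is my assumption for illustration, not a figure from the presentation.

```python
wasted_mwh_per_year = 60_000_000   # figure from the slide
assumed_rate_per_kwh = 0.10        # USD per kWh -- illustrative commercial rate

annual_cost = wasted_mwh_per_year * 1_000 * assumed_rate_per_kwh  # MWh -> kWh
print(f"~${annual_cost / 1e9:.0f} billion per year")  # ~$6 billion per year
```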
The energy flow through a typical 2N data center (at 50% load) is shown in this illustration. Power enters the data center as electrical energy, and virtually all power (99.99%+) leaves the data center as heat. The rest is converted to computing by the IT equipment.
Note that in this example 47% of the electrical power entering the facility actually powers the IT load (called USEFUL power in the figure on the previous slide), and the rest is consumed – converted to heat – by power, cooling, and lighting equipment.
An insignificant amount of power goes to the fire protection and physical security systems, and is not shown in this breakdown.
This data center is currently showing an estimated annual Power Usage Effectiveness (PUE) of 2.13. Therefore, 53% of the input power is not doing the “useful work” of the data center (powering the IT loads) and is therefore considered to be data center inefficiency (or “waste,” in the terminology of an efficiency model).
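The arithmetic behind those percentages follows directly from the definition of PUE; a minimal sketch using the 2.13 figure from the slide (the 1,000 kW total is an arbitrary illustration):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

total_kw = 1000.0          # assumed total facility draw, for illustration only
it_kw = total_kw / 2.13    # IT share implied by a PUE of 2.13

useful_fraction = it_kw / total_kw    # fraction that powers the IT load
waste_fraction = 1 - useful_fraction  # consumed by power, cooling, lighting

print(f"PUE            = {pue(total_kw, it_kw):.2f}")   # 2.13
print(f"useful (IT)    = {useful_fraction:.0%}")        # 47%
print(f"infrastructure = {waste_fraction:.0%}")         # 53%
```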
Tradeoff tools are high-level planning tools, or online calculators, that quantify how different planning decisions would affect a data center. These precisely formulated calculators apply unique configurations to specific data center designs or plans to show the actual implications of various deployment decisions.
Current APC TradeOff Tools include:
Data Center Carbon Calculator
Data Center Efficiency Calculator
Data Center Capital Cost Calculator
Virtualization Energy Cost Calculator
Data Center Power Sizing Calculator
Data Center AC vs. DC Calculator
Data Center InRow® Containment Selector
These tools allow customers to accurately calculate the impact that new equipment, server virtualization, design changes, or different heat containment strategies will have on their facility; these tools also codify data center best practices.
TradeOff tools help customers shift from having efficiency as a conceptual ideal to something they can predict, plan, and implement.
SPEAKER NOTE: Select data center efficiency calculator and discuss its usefulness for customers.
We are going to walk through an example of the "basics" for raising efficiency in a data center using a TradeOff tool.
Technologies are now available that substantially increase the efficiency obtainable by UPS systems. The chart shown here compares efficiencies of a recently introduced high-efficiency UPS to UPS efficiency data published by Lawrence Berkeley National Laboratory.
This figure shows that the efficiency of the latest UPS systems is significantly higher for any IT load, and the efficiency gain is greatest at lighter loads. For example, at 30% load the newest UPS systems pick up over 10% in efficiency when compared to the average of currently installed UPS systems. In this case the actual wattage losses of the UPS can be shown to be reduced by 65%. It is important to note that UPS losses (heat) must also be cooled by the air conditioner, creating further power consumption.
Some newer UPS systems offer an “economy” mode of operation that allows the UPS manufacturer to claim higher efficiency. However, this mode does not provide complete isolation from utility-mains power quality problems and is not recommended for data center use. The high-efficiency UPS and the efficiency data used in the architecture described in this paper and shown in Figure 8 are for a double conversion on-line UPS with complete isolation from input power irregularities.
NOTE: Chart is plotting UPS efficiency
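The loss-reduction claim can be sanity-checked with a short sketch. The two efficiency values below are round-number assumptions in the ballpark of the curves discussed, not figures taken from the LBNL report:

```python
def ups_loss_per_watt_it(efficiency: float) -> float:
    """Watts lost in the UPS per watt delivered to the IT load
    (loss = input - output, with output fixed at 1 W)."""
    return 1.0 / efficiency - 1.0

# Illustrative efficiencies at 30% load (assumed round numbers):
legacy_eff = 0.85   # average installed UPS
modern_eff = 0.95   # recent high-efficiency double-conversion UPS

legacy_loss = ups_loss_per_watt_it(legacy_eff)  # ~0.176 W lost per W of IT
modern_loss = ups_loss_per_watt_it(modern_eff)  # ~0.053 W lost per W of IT

reduction = 1.0 - modern_loss / legacy_loss
print(f"UPS losses cut by {reduction:.0%}")  # ~70%, same ballpark as the 65% above
```

Note that every watt of UPS loss must also be removed by the cooling plant, so the real savings compound.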
Use the UPS Efficiency Comparison tool to compare the efficiency of different UPS models.
Swapping out the UPS may produce a material change in your PUE.
However, you should use the Capital Cost Calculator to check your payback time before you make that change.
Pumps and chillers in the data center cooling plant traditionally operate with fixed speed motors. The motors in such arrangements must be configured for maximum expected load and worst case (hot) outdoor environmental conditions. However, data centers typically run at only part of their design capacity, and they spend most of their operating life with outdoor conditions cooler than worst-case. Therefore, chillers and pumps with fixed-speed motors spend much of their operating time with their motors working harder than necessary.
Pumps and chillers equipped with variable-speed drives (VFDs) and appropriate controls can reduce their speed and energy consumption to match the current IT load and the current outdoor conditions. The energy improvement varies depending on conditions, but can be as large as 10% or more, especially for data centers that are not operating at full rated IT load, or for data centers with chiller or pump redundancy. Variable-speed drives on pumps and chillers can be considered a form of “automatic rightsizing.”
Some of the efficiency gains of variable-speed drives can be obtained by staged control of multiple fixed-speed pumps and chillers. However, these systems can require substantial engineering and typically deliver less than half of the gains of VFDs.
Variable-speed drives on pumps and chillers are an additional cost compared with fixed-speed devices. For some seasonal or intermittent applications the energy savings from this extra investment may have a poor return on investment. However, for data centers that run 24 x 7 year-round, the payback time can be as short as a few months, depending on the specific data center.
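The physical reason VFD savings can be large is the pump/fan affinity law: flow scales linearly with shaft speed, but shaft power scales with roughly the cube of speed. A minimal illustration (idealized, ignoring motor and drive losses):

```python
def pump_power_fraction(speed_fraction: float) -> float:
    """Pump/fan affinity law: shaft power scales with the cube of speed.
    Idealized -- real systems deviate due to static head and drive losses."""
    return speed_fraction ** 3

# Slowing a pump to 80% speed at partial IT load needs only about
# half the power of a fixed-speed pump running flat out:
print(f"{pump_power_fraction(0.8):.0%} of full power at 80% speed")  # 51%
```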
So we saw a significant increase in efficiency as the PUE dropped from 2.02 to 1.49.
Simply running at a higher load level typically increases efficiency. However, the closer you are to capacity, the higher the risk of overloading power and cooling.
Cooling towers with chiller bypass are a popular solution. Your climate and the operating temperature will affect the benefit. Talk about using the cooling economizer tool to estimate the number of hours in economizer mode.
You can use the Cooling Economizer Mode PUE Calculator to estimate the number of economizer hours in your region.
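A first-cut estimate of economizer hours is simply counting how many hours per year the outdoor air is below a usable threshold; the threshold and the temperature series below are hypothetical stand-ins for the real local weather data the calculator uses.

```python
def economizer_hours(hourly_temps_f, threshold_f=65.0):
    """Count hours where outdoor air is cool enough for economizer mode.
    threshold_f is an assumed changeover temperature, not a standard value."""
    return sum(1 for t in hourly_temps_f if t <= threshold_f)

# Hypothetical hourly outdoor temperatures for one week (deg F),
# repeating a simple daily pattern:
week = [55, 58, 62, 68, 72, 70, 64, 60] * 21  # 168 hours
print(f"{economizer_hours(week)} of {len(week)} hours in economizer mode")
```

Raising the IT inlet temperature raises the usable threshold, which is why the earlier agenda items compound with free cooling.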
Then there are the little knickknacks that will help, like blanking panels – especially if you are running hot aisle containment.
Allowable humidity of 20-80% RH is for A1 and A2 only; A4 = 8-90% RH.
Allowable maximum dew point for A1 = 62.6°F and for A4 = 75.2°F.
Also, the first column should say RH and DP.
It is intuitive for most IT managers to believe that when they virtualize, the heat load in the data center will go down because the number of servers goes down. Every project is different, but it's fair to say that most virtualization projects can get away with keeping their original power, cooling, and IT racks and spreading around the remaining physical servers. However, the consequences of doing this are –
Lack of scalability – your IT racks will have physical room, but there is inadequate power and cooling to add servers
Less efficient cooling – 1) you are moving lots of air around, and the cooling profile has changed; 2) more heat is concentrated in a smaller area
Here you see graphically what happens when you have a data center with existing power and cooling and do not upgrade it during a virtualization project. You can see the actual IT load goes down from 65% to 45%; however, the effectiveness and efficiency of your data center also go down. This is, of course, an unintended consequence.
So lets discuss a tactic that’s best practice for virtualization of the IT servers to realize your total efficiency entitlement.
Consolidate your physical servers into fewer, higher-density racks. When you do this you have to update your cooling to the new generation of CRACs and containment that is designed to work with high-density loads. You will also most likely need to upgrade your power distribution to accommodate higher-amperage power cords.
So you need to plan this project in advance – but the benefits are Predictable high-density cooling
Enables higher-efficiency cooling
Optimized utilization of floor space
Here is a real-world case study where a customer actually did a virtualization project. They spread out the IT load and kept their existing power and cooling. The result was an inefficient data center from both an electrical perspective and a space-use perspective. The response was to move the servers from five racks to two racks and add new in-row computer room air conditioners. The PUE then decreased dramatically.
There is always a question of funding for IT projects, since many companies view IT functions as a cost; a solid budget approval process and ROI calculation are therefore very beneficial and usually necessary. SE offers a free online tool that IT and data center managers can use to predict the effect of a virtualization project on energy use. You just need basic information: your utility billing price, the number of servers, and the ratio of servers to virtualize. The tool shows the actual dollar savings from consolidating servers, and then the additional dollar savings from rightsizing and upgrading your power and cooling. You can compare these yearly savings with the capital cost of the project and determine for yourself whether it meets your ROI goals.
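The shape of such an estimate can be sketched in a few lines. This is not SE's actual model – the per-server draw, tariff, and PUE below are all illustrative assumptions:

```python
# Sketch of a virtualization savings estimate. All inputs (per-server watts,
# tariff, PUE) are illustrative assumptions, not the online tool's model.

def annual_savings(n_servers: int, ratio: int, watts_per_server: float,
                   price_per_kwh: float, pue: float) -> float:
    """Dollars saved per year by consolidating n_servers at ratio:1."""
    removed = n_servers - n_servers // ratio        # physical servers retired
    kw_saved = removed * watts_per_server / 1000 * pue  # incl. cooling/power overhead
    return kw_saved * 8760 * price_per_kwh          # 8760 hours per year

# 100 servers consolidated 10:1 at 300 W each, $0.10/kWh, facility PUE 1.8
print(f"${annual_savings(100, 10, 300, 0.10, 1.8):,.0f} per year")
```

Multiplying the saved IT kilowatts by the PUE captures the cascading savings in cooling and power-path losses, which is where much of the benefit appears.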
The reduction in IT load as a result of server consolidation offers a new opportunity to take advantage of modular, scalable power and cooling architecture. Until now, the usual argument in favor of scalable architecture has been the ability to start small and grow as needed, to avoid over-investment and wasted operational cost from infrastructure that may never be used. With virtualization, scalable architecture now allows scaling down to remove unneeded capacity at the time of initial virtualization, with the later option to re-grow as the new virtualized environment re-populates. Scaling up or scaling down, the idea is the same – power and cooling devices are less efficient at lower loading, so it is wasteful to be running more power or cooling than you need. “Right-sized” infrastructure keeps capacity at a level that is appropriate for the actual demand (taking into account whatever redundancy and safety margins are desired).
Impact of virtualization: oversizing
Since virtualizing can significantly reduce load, oversizing is an important efficiency issue in a virtualized data center. Even without virtualization, oversizing has long been a primary contributor to data center inefficiency. Server consolidation and server power management, by reducing load even further, will shift operation toward the low end of the efficiency curve if power and cooling systems stay the same. While the electric bill will indeed go down because of the lower IT load and the reduced air conditioning needed to cool it, the proportion of utility power that reaches the IT loads – in other words, efficiency – will drop. That signifies lost power that could be recovered to reduce energy consumption, and the electric bill, even more.
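The efficiency curve can be sketched with a simple fixed-plus-proportional loss model. The loss coefficients below are illustrative assumptions, but they show why dropping from 65% to 45% load (as in the earlier example) hurts efficiency when infrastructure stays the same:

```python
# Sketch: why efficiency falls at low load. Fixed losses (transformers, fans,
# controls) are paid regardless of load; coefficients are illustrative.

def infrastructure_efficiency(load_frac: float,
                              fixed_loss: float = 0.10,
                              prop_loss: float = 0.05) -> float:
    """Fraction of input power reaching the IT load at a given load fraction."""
    losses = fixed_loss + prop_loss * load_frac
    return load_frac / (load_frac + losses)

for load in (0.65, 0.45, 0.25):
    print(f"{load:.0%} load -> {infrastructure_efficiency(load):.1%} efficient")
```

Because the fixed losses dominate at low load, rightsizing (removing or scaling down capacity) moves you back up the curve.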
Prevention of hot and cold air mixing is a key to all efficient data center cooling strategies.
Both HACS and CACS offer improved power density and efficiency. A hot-aisle containment system (HACS) is a more efficient approach than a cold-aisle containment system (CACS) because it allows higher hot-aisle temperatures and increased chilled water temperatures, which results in more economizer-mode hours and significant electrical cost savings. Plus, you can maintain a comfortable temperature in the uncontained area of the data center; in CACS, the room becomes the hot-air area.
SE White Paper #135 shows that HACS can save 43% in annual cooling system energy cost, corresponding to a 15% reduction in annualized PUE compared to CACS. The paper concludes that all new data center designs should use HACS as the default containment strategy, with a few exceptions – for example, where containment was not initially deployed or planned for. For existing raised-floor data centers with a perimeter cooling unit layout, it may be easier and less costly to implement CACS.
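To see how a cooling-energy saving translates into PUE, here is a sketch applying the 43% cooling figure to an assumed baseline power split (the IT/cooling/other kW values are illustrative, not from the white paper):

```python
# Sketch: translating a 43% cooling-energy saving into a PUE change.
# The baseline split between IT load, cooling, and other losses is assumed.

it_kw = 1000.0
cooling_kw = 500.0    # assumed annual-average cooling draw with CACS
other_kw = 300.0      # power-path losses, lighting, etc.

pue_cacs = (it_kw + cooling_kw + other_kw) / it_kw
pue_hacs = (it_kw + cooling_kw * (1 - 0.43) + other_kw) / it_kw

print(f"CACS PUE: {pue_cacs:.3f}")
print(f"HACS PUE: {pue_hacs:.3f}")
print(f"Relative PUE reduction: {1 - pue_hacs / pue_cacs:.1%}")
```

The exact PUE improvement depends on how large the cooling share is in your facility, which is why the white paper reports both the cooling-energy and PUE figures separately.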
In your standard building you have four different domain management systems – power quality, building comfort, security, and the data center.
This is a White Space Energy Efficiency Management software dashboard. It's a standard web interface.
You have your choice – use standard values that are preloaded (very simple; it gives you places to input values), measure these values and load them by hand, or associate meters within your facility.
Put in your measured loads, or monitor your meters automatically over your network and have your values in real time. You get your energy usage by category – power, cooling, lighting, etc.
Where do your energy dollars go?
This is a granular cost breakout screen. There are many checkboxes in these software packages, associated with the sub-systems; or you can create new ones and add them to make it custom.
Once you establish your data center, you will get standard granularity of subsystems. Again you have your choice of preset values, measured values, and metered real-time values. You can add granularity by picking additional devices under the subcategories.
Very useful in identifying where your energy dollars are going and where to target improvements.
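The kind of breakout such a dashboard presents can be sketched as a simple cost-by-subsystem table. The subsystem kWh figures and tariff below are illustrative placeholders, not values from the software:

```python
# Sketch: a granular cost breakout by subsystem, the kind of view the
# dashboard presents. All kWh figures and the tariff are placeholders.

PRICE_PER_KWH = 0.10
monthly_kwh = {
    "IT load": 360_000,
    "Cooling": 150_000,
    "Power (UPS/PDU losses)": 60_000,
    "Lighting": 10_000,
}

total = sum(monthly_kwh.values())
for subsystem, kwh in monthly_kwh.items():
    cost = kwh * PRICE_PER_KWH
    print(f"{subsystem:<25} ${cost:>9,.0f}  ({kwh / total:.0%})")
print(f"{'Total':<25} ${total * PRICE_PER_KWH:>9,.0f}")
```

Seeing cooling or power-path losses as a percentage of the total is what makes the improvement targets obvious.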
What is free-cooling?
Free-cooling is an economical method of using low external air temperatures to assist in chilling water, which can then be used for industrial process, or air conditioning systems in data centers. When the ambient air temperature drops to a set temperature, a modulating valve allows all or part of the chilled water to by-pass an existing chiller and run through the Free Cooling system, which uses less power and uses the lower ambient air temperature to cool the water in the system. (Source: Wikipedia)
Air conditioning systems designed to take advantage of this technology have the potential to gain another 5-10% in overall data center efficiency, depending on geographic location.
When combined with the expected incremental gains in the performance of air conditioning technology, this would allow the PUE to reach the range of 1.1, compared to 1.4 for the system architecture described in this paper.
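The economizer-hours estimate behind these gains can be sketched by counting hours in an outdoor temperature series that fall below a free-cooling threshold. Real tools use climate data for your region; here a toy sinusoidal year and a 50°F threshold are assumptions for illustration:

```python
# Sketch: estimating free-cooling (economizer) hours from hourly outdoor
# temperatures. The synthetic year and 50 F threshold are assumptions.
import math

def economizer_hours(hourly_temps_f, threshold_f=50.0):
    """Hours per year in which outdoor air is cool enough for free cooling."""
    return sum(1 for t in hourly_temps_f if t <= threshold_f)

# Toy sinusoidal year: mean 55 F with a +/- 20 F seasonal swing
year = [55 + 20 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]
print(f"{economizer_hours(year)} of 8760 hours at or below 50 F")
```

The higher your allowable supply temperatures (as with hot-aisle containment), the higher the usable threshold and the more hours the chiller can stay off.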
EcoBreeze™ is a modular indirect evaporative and air-to-air heat exchanger cooling solution.
The EcoBreeze has the unique ability to switch automatically between air-to-air and indirect evaporative heat exchange to consistently provide cooling to data centers in the most efficient way. The design of the EcoBreeze is able to reduce energy consumption by leveraging temperature differences between outside ambient air compared to IT return air to provide economized cooling to the data center.
The EcoBreeze not only provides multiple types of air economization, but its modular design allows the unit to adapt to the future cooling needs of the data center. These features, coupled with the fact that the unit uses outside air and is able to automatically switch to the most efficient of cooling modes really set the EcoBreeze apart from other cooling solutions.
Note: The water connection is only for the operation of the evaporative cooling mode, and is not used as chilled water.