Eliminating Data Center Hot Spots: An Approach for Identifying and Correcting Lost Air

Data center cooling is a hot topic. But when you consider the challenges of cooling the latest generation of servers, the growing cost of infrastructure equipment, and the risk of premature hardware failure from hot spots brought on by high-density clusters, it's easy to understand the focus.

To view the recorded webinar event, please visit http://www.42u.com/data-center-hot-spots-webinar.htm

Speaker notes:
  • Ladies and gentlemen: Thanks for standing by, and welcome to today's session in the DirectNET Web Seminar Series. Today's presentation is entitled "Eliminating Data Center Hot Spots: An Approach for Identifying and Correcting Lost Air." During the presentation, all participants will be in a listen-only mode. However, we encourage your questions or comments at any time through the "chat" feature located at the lower left of your screen. These questions will be addressed as time allows. As a reminder, this Web Seminar is being recorded today, December 5th, 2007, and a recording will be sent to all attendees within 48 hours.
  • <Jen> Before we get started today, I'd like to introduce our speaker. Joining us from Upsite Technologies is Lars Strong, P.E. Lars has been with Upsite Technologies, Inc., since its inception in 2001. Prior to this, Strong was the lead engineer and/or project manager for several private consulting firm endeavors and consulted with numerous international corporate data centers on design and construction management. Strong's recent focus has been the identification and remediation of problems associated with the fluid dynamics and thermodynamics of data center cooling infrastructure. Moderating today's conference is Patrick Cameron, Director of Business Development for DirectNET. Patrick is responsible for managing DirectNET's suite of data center infrastructure solutions. Prior to DirectNET, Patrick spent eight years designing and implementing software and hardware solutions at Accenture and Akamai Technologies. <Rebecca> Patrick, I'll turn the conference over to you.
  • <Patrick>: Before we get started today, I'd like to quickly go over our agenda. The Uptime Institute's work on reducing bypass airflow is the technical backdrop; it illustrates the science of controlling your data center environment. Our goal today is to have a higher-level discussion of this science, as well as the mitigation factors in play and their impacts on your business. Agenda: discussion of our customer trends (changing spaces -> changing metrics -> need for new baselines); white paper highlights with respect to inefficiency; best practices for mitigation; a look at those best practices in action (case studies); Q&A, with questions collected throughout.
  • Server density is increasing with the adoption of blades and virtualization. These servers require more power. Higher density takes less space but results in higher operating costs for power, cooling, and management. Increasing energy costs lead to top-level energy scrutiny. Power and cooling limit growth, not physical space. Patrick <paraphrase>: To accommodate additional computing requirements, it is not as easy as just adding servers. Now there needs to be a broader conversation with facilities and a need to plan for growth together. Lars <paraphrase>: Make mention of the 'higher' level that this applies to (C-levels). Mention the incident about servers the IT team deemed necessary that the facilities team did not have the bandwidth to support in the existing data center. Patrick: Power is in some ways easier to understand and manage, but cooling is a bigger challenge and makes up the brunt of the costs. http://h20219.www2.hp.com/services/library/GetPage.aspx?pageid=540289&statusid=0&audienceid=0&ccid=225&langid=121
  • Patrick: The HP study shows where the power is going. Only about one-third goes to IT equipment, another small percentage is lost in conversion, leaving nearly two-thirds of most data centers' energy going to cooling. When thinking about improvement, this is a big target, but it requires a different understanding and skill set to address. <TRANSITION> Patrick: So for some trends, such as the increases in density and power, our customers have a very high comfort level; for others, like cooling, gaining this expertise is often more of a stretch and can be a bit overwhelming. Lars, can you walk our audience through what they need to be looking at to understand and manage the cooling of their environments? http://h20219.www2.hp.com/services/library/GetPage.aspx?pageid=540289&statusid=0&audienceid=0&ccid=225&langid=121
  • TRANSITION <Patrick>: That's a very important point, Lars. The area with the greatest opportunity for improving power use is the mechanical equipment, and as the HP study showed, it is also the highest potential cost. But how do we really know how to measure where we are? How can we measure our energy efficiency?
  • TRANSITION <Patrick>: In the last slide, you showed a sample data center with a CoE of 2.4, which falls into the typical CoE range. Is that normal?
  • <TRANSITION> Patrick: Moving target: depending on availability needs, CoE ratings will vary, but there are general targets for each tier, and we are continually finding opportunities for improvement across all tiers.
  • TRANSITION <Patrick>: What did you find to be the significant factors?
  • TRANSITION <Patrick>: Lars, because I think thermal incapacity and bypass airflow may be new terms for our audience, can you spend a little time defining these?
  • Patrick: So, what we’re really talking about here is the difference between the rated capacity of the air handler and the cooling being delivered to IT equipment?
  • <Patrick>: So, the less bypass airflow the better, right?
  • Both of these topics are covered in detail in the white paper you will receive for attending this webinar. This thorough analysis of 19 computer rooms provides additional detail on the science behind optimizing your data center to reduce thermal incapacity and bypass airflow. Today we will be referencing the experience and data collected during the creation of this white paper as we talk through how these issues were affecting individual customers. <Patrick> Lars, tell me a bit more about the lessons learned as you distilled the broader findings of this research.
  • Patrick: What you're telling us is that in this study you found examples where people had over twice the needed capacity and STILL had hot spots? Lars: Yes, but it gets worse.
  • <Patrick>: These all seem like symptoms. What are the problems?
  • Patrick transition: How is it that a data center can end up with 14 times more cooling infrastructure than is needed?
  • Hammer -> nails. Patrick: I can see how this makes sense, but if the answer isn't more cooling, let's talk a bit more about what the answer is. http://www.hpac.com/GlobalSearch/Article/24486/#capitalize
  • Patrick <paraphrase>: This leads to the manifestation of hot spots?
  • In either case, exceeding 77°F with a relative humidity of less than 40% is a serious threat to maximum information availability and hardware reliability. TRANSITION <Patrick>: So how would this apply to my physical floor plan?
  • Patrick: So, Lars, does this assume that if we just move to a hot/cold aisle arrangement, we will solve our bypass airflow issues?
  • Patrick: So, this is easy to see in your color-coded pictures, but how do I fit this into my data center?
  • TRANSITION <Patrick>: What are some of the common ways to fix this problem?
  • Patrick: It seems like this is a very exhaustive list of the perfect solutions. Let's talk through a case study so we can understand how these attributes affect the data center.
  • <Patrick transition> This is a nice review of the science, but how did these changes affect the bottom line?
  • Patrick: I’m sure folks are wondering “why can’t I just do this myself?”
  • <Patrick transition> This example provides a great reminder that we need to be deliberate in our approach here. Your team at Upsite does this type of work all the time. How do you recommend moving forward?
  • We have also put together an initial troubleshooting kit to help you start gathering the information you need to make informed decisions about your space. The kit includes the Upsite Temperature Strip, a liquid crystal thermometer with an acrylic self-adhesive backing that quickly and accurately measures the air-intake temperature of IT equipment. The Upsite Temperature Strip indicates if the air temperature is within acceptable limits based on standards established by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and equipment manufacturers. The TroubleShooter card helps you quantify the pressure under your raised floor.
  • <Patrick> This concludes the presentation portion of this webinar. Before we move to questions, I wanted to remind you that copies of this presentation are available upon request. To receive a copy of today's presentation, refer to the link datacentertrends@directnet.us on the screen. If the issues discussed here seem relevant to your data center, we are also offering a complimentary 15-minute cooling evaluation to further explore whether this approach could be beneficial to you. If this is of interest, please email the address above and you will be contacted to schedule the evaluation. Questions from the audience: When talking about the positive attributes of potential solutions for bypass airflow, you mentioned dressing raw edges. What are the ramifications here? How do I know if my computer room will benefit from this type of analysis? How much does an analysis cost? How does your product differ from the foam units I have seen that do the same thing?

Slide transcript:

1. Eliminating Data Center Hot Spots: An Approach for Identifying and Correcting Lost Air. December 5, 2007. Presented by:

2. Speakers and Sponsor
   - Lars Strong, P.E., Upsite Technologies
     - Lead Engineer and Services Product Manager
     - Consulted numerous data centers internationally on infrastructure design and management structure
     - Robust experience with the fluid dynamics and thermodynamics of data center cooling infrastructure
   - Patrick Cameron, Director of Business Development, DirectNET
     - Product manager for DirectNET's suite of data center infrastructure products
     - AFCOM information session presenter and SME for remote management solutions
     - Ten years of consulting experience designing and implementing custom hardware and software solutions

3. Agenda
   - Energy Trends
   - Measuring Data Center Performance
   - Leading Causes of Inefficiency
   - Methods for Improvement
   - Recommendations in Action: Case Study Review
   - Q&A
4. A Snapshot: Energy Trends
   - Server density has increased significantly over the past decade
   - The average server's power consumption has quadrupled
   - Higher density and the resultant higher operating temperatures spawn increased administration costs
   - Executives are starting to look more closely at the energy budgets associated with IT infrastructure
   - Customers are running out of power and cooling capacity well before they reach the spatial limits of their facilities

5. Energy Trends: Specific to Cooling
   - According to HP, in 85 percent of data centers, most of the non-IT power is used by the cooling resources
   Source: "Data center cooling strategies," HP, August 2007

6. Data Center Power Flow

7. Data Center Coefficient of Efficiency (CoE)
   - CoE = total power / critical power
     - Critical power is the power consumed by computer and communication equipment (the sum of PDU loads)
     - Total power is that required to support both UPS and mechanical systems
   - CoE = building service entrance usage / sum of PDU loads (works best for standalone data centers)
   - Ideal CoE: 1.6
   - Target CoE: 2.0
   - Typical CoE: 2.4 to 2.8 and higher; many >3.0
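As a worked illustration of the slide's ratio (this sketch is an editorial addition, not part of the original deck, and the readings are hypothetical), here is a minimal Python example that computes CoE from a building service-entrance reading and a list of PDU loads:

```python
# Hypothetical sketch: Coefficient of Efficiency (CoE) as defined on the slide,
# CoE = total power / critical power, where critical power is the sum of the
# PDU loads and total power is the building service-entrance usage
# (the standalone-data-center form of the formula).

def coefficient_of_efficiency(total_power_kw: float, pdu_loads_kw: list[float]) -> float:
    critical_power_kw = sum(pdu_loads_kw)      # IT (critical) load in kW
    return total_power_kw / critical_power_kw  # dimensionless ratio

# Made-up readings for illustration:
pdu_loads = [112.0, 98.5, 120.3, 105.2]   # kW per PDU
total = 1048.0                            # kW at the service entrance

coe = coefficient_of_efficiency(total, pdu_loads)
print(f"CoE = {coe:.2f}")  # ~2.40 -> in the 'typical' band, above the 2.0 target
```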
8. Tier Performance Standards
   - Tier I: Basic Site Infrastructure
     - Room dedicated to support IT equipment
   - Tier II: Redundant Capacity Components Site Infrastructure
     - Redundant components for increased reliability
   - Tier III: Concurrently Maintainable Site Infrastructure
     - Alternate distribution paths, one active
   - Tier IV: Fault Tolerant Site Infrastructure
     - Dual active distribution paths
   - Tiers I & II: Tactical solutions
   - Tiers III & IV: Strategic investments

9. Coefficient of Efficiency (CoE)
   - Interesting revelations
     - At a CoE of 2.0, it takes twice the "critical power" to operate even an efficient data center
     - When CoE gets above 2.4, most of the additional power is going into inefficient mechanical systems
     - As the CoE increases, the environment in the computer room can deteriorate
     - Adding more cooling units increases CoE and may not reduce hot spots
10. What Leads to Inefficient CoE: Sources of Mechanical Inefficiencies
   - Thermal incapacity and excessive bypass airflow
   - Mismatched expectations
   - Mismatched architectures
   - No master plan
   - Failure to measure and monitor
   - Failure to use best practices
11. Thermal Incapacity Defined
   - Thermal incapacity is the portion of the mechanical system that is running, but not contributing to a dry bulb temperature change because of return air temperatures, system configuration problems, or other factors
   - Most thermal incapacity can be inexpensively recovered by a mechanical system "tune-up"

12. Bypass Airflow: Defined
   - Conditioned air is not getting to the air intakes of computer equipment
     - Escaping through cable cutouts and holes under cabinets
     - Escaping through misplaced perforated tiles
     - Escaping through holes in computer room perimeter walls, ceiling, or floor
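One hedged way to quantify bypass airflow, assuming each escape path has been measured (for example, with a flow hood), is sketched below. This is an editorial illustration; the categories mirror the slide's list, but the figures and helper name are made up, not taken from the white paper:

```python
# Illustrative sketch: estimating the bypass airflow fraction.
# Assumes each escape path (cable cutouts, misplaced perforated tiles,
# perimeter holes) has been measured in CFM, along with total supply airflow.

def bypass_fraction(escape_paths_cfm: dict[str, float], total_supply_cfm: float) -> float:
    bypass_cfm = sum(escape_paths_cfm.values())  # conditioned air not reaching IT intakes
    return bypass_cfm / total_supply_cfm

# Made-up measurements:
escapes = {
    "cable cutouts and holes under cabinets": 18_000,
    "misplaced perforated tiles": 6_500,
    "perimeter wall/ceiling/floor holes": 2_500,
}
total_supply = 63_000  # CFM delivered by all running cooling units

print(f"Bypass airflow: {bypass_fraction(escapes, total_supply):.0%} of conditioned air")
# ~43% -- comparable to the unsealed-opening losses cited in Case Study #1 later
```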
13. White Paper
   - A comprehensive survey of actual cooling conditions in 19 computer rooms comprising 204,400 ft² of raised floor
     - Rooms ranged in size from 2,500 ft² (230 m²) to 26,000 ft² (2,400 m²)
   - More than 15,000 individual pieces of data were collected

14. Consequences of Thermal Incapacity
   - Inefficient cooling system
     - Operating cooling capacity is 2.6 times the critical load (UPS output)
     - At Coefficients of Efficiency of 2.0 to 2.4
     - 10% of the racks had "hot spots" with intake air exceeding 77°F (25°C)

15. Consequences of Thermal Incapacity
   - Inefficient cooling system (cont.)
     - Rooms with the greatest excess of cooling capacity had the worst environment
     - At Coefficients of Efficiency > 3.0
     - Up to 25% of the racks had "hot spots"
     - More cooling capacity
       - Poorer environment
       - Wasted capital and operating expenses

16. Not Limited to High-Density Clusters
   - A study done by the Uptime Institute found that the highest percentage of hot spots occurred in computer rooms with very light loads
   - Between 3.2 and 14.7 times more cooling capacity was running in those rooms than was required
   - 60% of the cold air cools the room but not the critical load, except by recirculation
17. How Can So Much Excess Capacity Be Installed?
   - Historically, data center managers have relied on vendors and contractors
     - Vendors are motivated to sell more equipment
     - Contractors are motivated to perform installations
   - Ignorance of the science behind cooling and capacity management

18. The Culprit: Airflow Management
   - Three categories of air movement challenges
   - Below-floor obstruction
     - Cables, pipes, etc.
   - Raised-floor performance
     - Cable openings, perforated tile placement, etc.
   - Above-floor circulation
     - Cabinet layout, cooling unit orientation, ceiling height, etc.

19. Deciphering Hot Spots: Zone vs. Vertical
   - Two varieties of hot spots
     - Zone hot spots typically exist over large areas of raised floor
     - Vertical hot spots are more discrete and may exist just at the top few U of an isolated cabinet
   - In either case, exceeding 77°F with a relative humidity of less than 40% is a serious threat to maximum information availability and hardware reliability
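The zone-versus-vertical distinction can be made concrete with a small sketch (again an editorial addition). The 77°F limit comes from the slide; the classification rule, cabinet names, and readings are hypothetical illustrations, not an industry standard:

```python
# Hypothetical sketch: flagging hot spots from intake-temperature readings
# taken at the bottom, middle, and top of each cabinet (deg F).
# The 77 F threshold is from the slide; the zone/vertical rule below only
# illustrates the distinction described there.

HOTSPOT_LIMIT_F = 77.0

def classify(cabinet_readings: dict[str, tuple[float, float, float]]) -> dict[str, str]:
    results = {}
    for cabinet, (bottom, middle, top) in cabinet_readings.items():
        if top > HOTSPOT_LIMIT_F and bottom <= HOTSPOT_LIMIT_F:
            results[cabinet] = "vertical hot spot (top of cabinet only)"
        elif min(bottom, middle, top) > HOTSPOT_LIMIT_F:
            results[cabinet] = "zone hot spot candidate (entire intake column hot)"
        else:
            results[cabinet] = "within limits"
    return results

readings = {"A01": (72.0, 74.5, 81.0), "A02": (79.0, 80.5, 83.0), "B07": (70.0, 71.0, 73.5)}
for cabinet, verdict in classify(readings).items():
    print(cabinet, "->", verdict)
```

A real assessment would also look at how many adjacent cabinets run hot (a zone hot spot spans an area of raised floor, not a single rack), but the per-cabinet view above is enough to show the difference.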
20. Raised-Floor Utilization: Legacy Layout
   - All aisles have elevated "mixed" temperature (starved supply airflow compounds the problem)
   - Fails to deliver predictable air intake temperatures
   - Reduces return air temperature, which reduces cooling unit capacity and removes moisture
   - Removed moisture must be reinserted into the computer room

21. Computer Room Layout Options: The Effect of Bypass Airflow
   - Cold air escapes through cable cutouts
   - Escaping cold air reduces static pressure, resulting in insufficient cold aisle airflow
   - The result is vertical and zone hot spots in high heat load areas

22. Cold/Hot Aisle, Ideal Implementation: No Bypass Airflow
   - Average power per rack (assuming one perforated tile per rack and a 15°F temperature drop across the cooling unit coil)
     - 3.3 kW per perforated tile (700 CFM)
     - 6.6 kW per grate (1,400 CFM)
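The per-rack figures on this slide follow from the standard sensible-heat approximation for air at roughly standard density, Q [BTU/hr] ≈ 1.08 × CFM × ΔT [°F], with 1 kW = 3,412 BTU/hr. A short sketch (an editorial addition) reproducing the slide's numbers:

```python
# Sketch: reproducing the per-rack cooling figures using the conventional
# sensible-heat approximation for air at roughly standard density:
#   Q [BTU/hr] ~= 1.08 * CFM * delta_T [deg F];   1 kW = 3,412 BTU/hr

BTU_PER_HR_PER_KW = 3412.0

def cooling_kw(cfm: float, delta_t_f: float = 15.0) -> float:
    return 1.08 * cfm * delta_t_f / BTU_PER_HR_PER_KW

print(f"Perforated tile (700 CFM):  {cooling_kw(700):.1f} kW")   # ~3.3 kW
print(f"Grate tile    (1,400 CFM):  {cooling_kw(1400):.1f} kW")  # ~6.6 kW
```

Doubling the airflow delivered at a rack position (a grate instead of a perforated tile) doubles the heat that can be removed at the same 15°F ΔT, which is where the 6.6 kW figure comes from.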
23. Sealing Options Need to Be Evaluated For:
   - Sealing effectiveness
   - Self-sealing (is labor required?)
   - Ease of recabling (is labor required?)
   - Dresses raw edges (NFPA 75 requirement)
   - Static dissipative
   - Install it and forget it (is policing required?)
   - Does not contribute to contamination

24. Case Study #1: Success
   - Business: Major carmaker with a 10,000 ft² data center
   - Computing needs: Support of all North American operations, sales, and corporate functions
25. Case Study #1: Success
   - Problem statement:
     - IT equipment reliability problems due to high intake temperatures
     - Failure rates were so high that IT equipment manufacturers were threatening to void warranties and charge for all service calls, a potentially very costly situation
     - No redundant cooling capacity
   - Thermal incapacity and bypass airflow issues:
     - Unsealed cable openings wasting 43% of conditioned air volume

26. Case Study #1: Success
   - Solution approach: Comprehensive remediation
     - Comprehensive evaluation of the computer room's cooling health
     - Adjustment of cooling infrastructure
       - Sealing bypass openings
       - Perforated tile location and number
       - Cooling unit set points and calibration
     - No downtime or exposure to downtime from construction activities, adjustment of computer room layout, or the purchase of additional cooling units or perforated tiles
   - Results:
     - All IT equipment air-intake temperatures brought within the recommended range
     - A maximum 16°F drop occurred at critical enterprise servers
     - Bypass airflow reduced from 43% to less than 10%

27. Case Study #1: Success
   - Business benefit:
     - Increase in the cooling capacity of the existing CRAC units
     - Cooling capacity to support growth
     - There was also the side benefit of the noise level dropping significantly
     - "Decreasing the operating temperatures in hotspot areas improves our equipment reliability, decreases outages, and helps us meet our business continuity goals." (quote from customer)
28. Case Study #2: Failure
   - After rearrangement of 30 perforated tiles, 250 servers automatically thermaled off
     - Internal hardware safety controls shut equipment off to prevent overheating
   - Result: Internet service for a critical application service provider halted during prime time

29. How to get started…
   - KoldWorks Cooling Services
     - KoldProfile: Cooling Assessment
     - KoldSeminar: Education & Profile
     - KoldCheck: Cooling Audit
     - KoldTune: Cooling Remediation
   - KoldLok Raised-Floor Grommets
     - KoldLok Integral
     - KoldLok Surface
30. Cooling Tools
   - Temperature Strip
   - TroubleShooter
     - Test to see if there is poor or good airflow of conditioned air over the holes in your perforated raised-floor tiles
   - Receive a complimentary Strip or TroubleShooter: [email_address]

31. Q&A
   - To arrange a complimentary 15-minute cooling evaluation: [email_address]
   - To receive a copy of the presentation: [email_address]
