NREL is a national laboratory operated by the Alliance for Sustainable Energy, LLC for the US Department of Energy. The document discusses trends in data center design, including examples from NREL of large energy savings. It covers topics like environmental conditions, cooling systems, electrical systems, and data center metrics. The presentation provides information on optimizing data center efficiency and reducing costs through best practices in design and operation.
One of our most popular webinar presentations on data center cooling: 2007 Data Center Cooling Study: Comparing Conventional Raised Floors with Close Coupled Cooling Technology.
If you're looking for a solution, it's simple physics: water is roughly 3,500 times more effective at removing heat than air. But liquid cooling carries a stigma, particularly because of its large price tag. And if you're like other data center managers, the words of Jerry Maguire may be ringing in your head: "Show me the money!"
To view the recorded webinar presentation, please visit http://www.42u.com/data-center-liquid-cooling-webinar.htm
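The "3,500 times" figure can be sanity-checked from textbook material properties. A minimal sketch; the constants below are standard approximations, not numbers from the webinar:

```python
# Volumetric heat capacity: how much heat a cubic metre of each fluid
# carries per degree of temperature rise. Standard textbook values.
WATER_CP = 4186.0   # J/(kg*K), specific heat of water
WATER_RHO = 998.0   # kg/m^3, density of water (~20 C)
AIR_CP = 1005.0     # J/(kg*K), specific heat of air
AIR_RHO = 1.204     # kg/m^3, density of air (~20 C, sea level)

water_vol_cap = WATER_CP * WATER_RHO  # J/(m^3*K)
air_vol_cap = AIR_CP * AIR_RHO        # J/(m^3*K)
ratio = water_vol_cap / air_vol_cap

print(f"Water moves ~{ratio:,.0f}x more heat per unit volume per degree")
```

The ratio lands around 3,450, consistent with the ~3,500x figure quoted above.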
Lowering operating costs through cooling system design - AFCOM
Learn more about achieving maximum energy efficiency through cooling system design. This presentation was given during the Spring 2012 Data Center World Conference in Las Vegas, NV. Learn more by visiting www.datacenterworld.com.
The segmentation of data centers into alternating hot and cold aisles is an established best practice. A number of manufacturers are taking this premise of airflow separation a step further by marketing "containment" solutions. By containing the hot or cold aisle, the air paths have little chance to mix, presenting data center operators with both reliability and efficiency gains.
To view the recording of the webinar presentation, please visit http://www.42u.com/webinars/Aisle-Containment-Webinar/playback.htm
Utilization of Computer Room Cooling Infrastructure: Measurement Reveals Oppo... - Upsite Technologies
A study of data centers reveals that the average computer room has cooling capacity nearly four times the IT heat load. When running cooling capacity exceeds the heat load by this much, potentially large operating cost reductions are possible by turning off cooling units and/or reducing fan speeds on units with variable frequency drives (VFDs). Using data from 45 sites reviewed by Upsite Technologies, this presentation will show how you can calculate, benchmark, interpret, and benefit from a simple and practical metric called the Cooling Capacity Factor (CCF). Calculating the CCF is the quickest and easiest way to determine cooling infrastructure utilization and the potential gains to be realized by airflow management (AFM) improvements.
This presentation was originally delivered at AFCOM's Data Center World conference in May 2014 in Las Vegas, Nevada. The presentation discusses the state of cooling and airflow management, and also introduces Upsite's newest solution, AisleLok Modular Containment. For more information, please visit http://upsite.com/aislelok-modular-containment
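The CCF calculation itself is simple arithmetic. A minimal sketch, assuming Upsite's published definition (running rated cooling capacity divided by the IT load times a ~1.1 uplift for lights, people, and envelope gains); the function name and figures are illustrative:

```python
def cooling_capacity_factor(running_cooling_kw, it_load_kw, uplift=1.1):
    """Cooling Capacity Factor (CCF): running rated cooling capacity
    divided by the estimated total room heat load. The 1.1 uplift is an
    assumed ~10% allowance for lights, people, and envelope gains."""
    return running_cooling_kw / (it_load_kw * uplift)

# Hypothetical room: 1,200 kW of running cooling for a 300 kW IT load.
ccf = cooling_capacity_factor(1200, 300)
print(f"CCF = {ccf:.1f}")  # ~3.6, close to the ~4x average cited above
```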
4 Steps to Quickly Improve PUE Through Airflow Management - Upsite Technologies
It’s well known that cooling typically accounts for around half of a data center's total power consumption. Given this, it's imperative that cooling is optimized to achieve a low Power Usage Effectiveness (PUE). While this too may be common knowledge, the question still remains, how can this be done quickly, with all possible benefits realized, and with the fastest return on investment?
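As a quick illustration of the metric, here is a minimal PUE calculation with hypothetical loads (the figures are illustrative assumptions, not data from the presentation):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power.
    1.0 is the theoretical ideal; cooling is usually the largest
    non-IT contributor."""
    return total_facility_kw / it_kw

# Hypothetical site: 500 kW of IT, 350 kW of cooling, 50 kW other overhead.
print(f"PUE = {pue(500 + 350 + 50, 500):.2f}")  # 1.80
```

Reducing cooling power through airflow management lowers the numerator directly, which is why AFM improvements show up so quickly in PUE.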
Clarifying ASHRAE's Recommended Vs. Allowable Temperature Envelopes and How t... - Upsite Technologies
The topic of raising temperatures in data centers used to be met with much criticism in the industry, but in recent years it has become more accepted. A big driver for this acceptance has been ASHRAE's expanded envelope for recommended and allowable server inlet temperatures. However, while this has eased the discussion, some questions have been left unanswered. What's the difference between recommended and allowable? Which one is best to use? What steps must be taken to safely raise set points? How do you ensure servers are still adequately cooled? What if you have different server types (A1, A2, A3, A4)? This presentation will examine these questions to give a clearer understanding of ASHRAE's recommended and allowable guidelines. Also covered will be an explanation of how, in some cases, it is possible to raise cooling control set points without raising server inlet temperatures.
The purpose of this presentation is to discuss the underlying science behind data center airflow management and how applying best practices can make the greatest impact on the computer room, both in terms of energy savings and capacity.
Presented by:
Lars Strong, P.E., Senior Engineer, Upsite Technologies
Mark Seymour, Director, Future Facilities.
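As a sketch of the recommended-vs-allowable distinction discussed above, the check below encodes the ASHRAE TC 9.9 (2011) dry-bulb envelopes; the function name and example readings are illustrative assumptions:

```python
# Server inlet temperature check against ASHRAE TC 9.9 (2011) envelopes.
# Dry-bulb limits in deg C: the recommended range applies to all classes,
# while the allowable ranges widen from A1 to A4.
RECOMMENDED = (18.0, 27.0)
ALLOWABLE = {"A1": (15.0, 32.0), "A2": (10.0, 35.0),
             "A3": (5.0, 40.0), "A4": (5.0, 45.0)}

def classify_inlet(temp_c, server_class="A1"):
    """Return which envelope a server inlet temperature falls in."""
    lo_r, hi_r = RECOMMENDED
    lo_a, hi_a = ALLOWABLE[server_class]
    if lo_r <= temp_c <= hi_r:
        return "recommended"
    if lo_a <= temp_c <= hi_a:
        return "allowable"
    return "out of envelope"

print(classify_inlet(24.0, "A1"))  # recommended
print(classify_inlet(30.0, "A1"))  # allowable
print(classify_inlet(42.0, "A2"))  # out of envelope
```

Mixed fleets are governed by the most restrictive class present, which is why the A1 envelope usually sets the practical limit.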
Data Center Cooling Efficiency: Understanding the Science of the 4 Delta T's - Upsite Technologies
While the term Delta T may be commonly used in the industry, there is much misunderstanding about where and why temperatures are changing in computer rooms. While two ΔT’s are commonly known, there are actually four different ΔT’s which contribute to the health of the data center. Understanding the sources of these differences and measuring them in your site provides insight about how to further improve the efficiency and capacity of computer room cooling.
Presented by:
Lars Strong, P.E., Senior Engineer, Upsite Technologies
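A minimal sketch of the four ΔT's described above, using hypothetical temperature readings (the variable names and values are illustrative assumptions, not figures from the presentation):

```python
# The four computer-room delta-T's, in deg C, for one hypothetical reading.
supply = 18.0    # cooling unit supply air
inlet = 22.0     # server inlet
exhaust = 34.0   # server exhaust
ret = 27.0       # cooling unit return air

dt_it = exhaust - inlet           # 1) across the IT equipment
dt_cooling = ret - supply         # 2) across the cooling unit
dt_supply_path = inlet - supply   # 3) supply-to-inlet (air warms en route)
dt_return_path = exhaust - ret    # 4) exhaust-to-return (bypass air dilutes
                                  #    the return stream)

# With perfect airflow management, (3) and (4) go to zero and (1) == (2).
print(dt_it, dt_cooling, dt_supply_path, dt_return_path)  # 12.0 9.0 4.0 7.0
```

Non-zero values for (3) and (4) are the measurable signature of bypass and recirculation, which is why tracking all four ΔT's, not just the two familiar ones, reveals where capacity is being lost.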
For Most Data Centers, Liquid and Air Cooling Will Not be Mutually Exclusive - Upsite Technologies
A recent report from Technavio indicates that adoption of liquid-based cooling is high, as it is considered more efficient than air-based cooling. Globally, liquid-based cooling is expected to grow at a remarkable rate through 2020, posting a CAGR of almost 16% during the forecast period. So, why is this level of adoption happening? Increasing rack densities, led by high performance computing (HPC), and the quest to improve efficiency are driving an increase in liquid cooling design strategies and deployment. While still relatively sparse, liquid cooling will become more prevalent, but this does not mean the end of air cooling. In this session, we'll discuss how to implement liquid cooling while maintaining appropriate air-cooling conditions and fully realizing efficiency gains. Lastly, we'll discuss how to get started and get ahead of the market when it comes to improving cooling efficiency.
Myths of Data Center Containment: What's True and What's Not - Upsite Technologies
This presentation focuses on common misconceptions about containment in data centers and provides participants with a technical understanding of the science behind containment. This understanding will enable managers to more fully realize the benefits of their own containment systems or be able to make informed decisions about deploying containment.
Presented by:
Lars Strong, P.E., Senior Engineer, Upsite Technologies
How IT Decisions Impact Facilities: The Importance of Mutual Understanding - Upsite Technologies
Decisions and actions typically under the jurisdiction of the IT side of data center management can have a profound impact on the mechanical systems and resultant operating costs and capacity of the data center. By understanding these impacts, IT and facilities management are able to develop a cooperative approach to managing the data center, resulting in a more effective and efficient operation, thereby reducing operating costs.
Presented by:
Lars Strong, P.E., Senior Engineer, Upsite Technologies
Ian Seaton, Industry Guru & Technical Advisor, Upsite Technologies
Green buildings: Challenges in Operation and Maintenance - Tejwant Navalkar
The paper looks at the existing maintenance practices with some telling pictures and goes on to suggest changes in the approach to maintenance in line with the Green Building requirements.
A brief insight of what goes to make a building green is given to put the challenges in Operation and Maintenance in proper perspective.
For download link head to http://solarreference.com/solar-cooling-training-presentation/
Also available from the SOLAIR website.
A presentation from the SOLAIR project on sizing solar air conditioners. Their website has a great deal of detailed information. For similar useful resources, visit us at http://solarreference.com
Data center cooling infrastructure slides - Livin Jose
Covers CRAC vs. CRAH, air-side and water-side economizers, chillers, cooling towers, and the importance of cooling in the data center.
Cooling Optimization 101: A Beginner's Guide to Data Center Cooling - Upsite Technologies
As new personnel enter the industry, they are often bombarded with a slew of buzzwords and marketing messages that would lead them to believe that data centers almost run themselves. And while monitoring and DCIM solutions are improving the management of power and cooling, an understanding of the fundamental science is crucial to both see through the hype and get the most out of management systems. Moreover, as the veterans of our industry start to retire, much of the basic knowledge around power and cooling is often overlooked when training their successors. This session will provide that basic knowledge and give a fundamental understanding of the power and cooling infrastructure in a data center, with an emphasis on cooling optimization. In this session, you'll learn how to recover stranded cooling capacity, reduce operating costs, improve IT equipment reliability, and prolong the life and capacity of the data center.
Bits, Bytes and BTUs: Warm Water Liquid Cooling at NREL - inside-BigData.com
In this video from the 2014 HPC User Forum in Seattle, Steven Hammond from NREL presents: Bits, Bytes and BTUs: Warm Water Liquid Cooling at NREL.
Learn more: http://insidehpc.com/video-gallery-hpc-user-forum-2014-seattle/
This is a document made available online by ASHRAE for reference on TC 9.9 for data center operation worldwide; the guide covers the equipment classes and their minimum and maximum operating limits.
The green data center has moved from the theoretical to the realistic, with IT leaders being challenged to construct new data centers (or retrofit existing ones) with energy-saving features, sustainable materials, and other environmental efficiencies in mind.
This project deals with the effects caused by data centers, how severe those effects are, and how to overcome them. The measures provided are framed not only from a construction standpoint but also along other dimensions.
Utilizing Analytics to Drive Change in Buildings - APEM, Sept 18, 2015 - buildpulse
In Utilizing Analytics to Drive Change in Buildings, Brice Kosnik, the CEO of buildpulse, discusses ways customers are using analytics to improve commercial and public buildings. Examples are drawn from customers with school districts, hospitals, and office buildings. This presentation was given to the Association of Professional Energy Managers.
CPD Presentation: Evaporative cooling in data centres - Colt UK
Data centres that use evaporative cooling can cut their energy bills by up to 80% compared to conventional cooling methods!
The specifications for the environmental operating conditions of IT equipment used in data centres have recently been revised, opening the way to evaporative cooling in such buildings. Evaporative cooling can provide a highly effective solution, with low installation and running costs, minimal maintenance requirements and quiet operation.
This seminar covers:
• Revisions to the specifications for the environmental operating conditions of IT equipment in data centres
• Options for cooling in a data centre
• Implementing evaporative cooling in a data centre.
Plug Load Efficiency for Zero Energy Buildings Webinar, 1/29/2013 - Shanti Pless
Plug loads represent a growing opportunity for efficiency, especially in zero energy buildings, where they account for a growing share of the total load. This webinar addresses strategies for plug load efficiency in NREL's net-zero RSF.
Similar to US Trends in Data Centre Design with NREL Examples of Large Energy Savings
Heidi Fraser-Krauss, Director of IT at the University of York explores some of the issues she encountered in trying to understand the true costs of the central IT provision at the university
• Janet Cloud Services frameworks
• Knowing IT costs to make informed decisions
• The outputs you will receive
• Benchmarking with peers
• How Janet works with you
• Our charges
• Modelling costs of cloud vs in-house
• Questions & discussion
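A toy version of the cloud vs. in-house cost comparison listed above might look like the following; every figure and parameter here is an illustrative assumption, not Janet or University of York data:

```python
def annual_inhouse_cost(servers, server_capex=4000, lifetime_years=4,
                        power_kw_per_server=0.3, kwh_price=0.15, pue=1.8,
                        staff_cost=60000, servers_per_admin=100):
    """Annualised in-house cost: amortised hardware + energy (scaled by
    PUE, so cooling overhead is counted) + staff. All defaults are
    hypothetical round numbers."""
    hardware = servers * server_capex / lifetime_years
    energy = servers * power_kw_per_server * pue * kwh_price * 24 * 365
    staff = staff_cost * servers / servers_per_admin
    return hardware + energy + staff

def annual_cloud_cost(servers, instance_hourly=0.20):
    """Cloud equivalent: a flat hourly instance price, always on."""
    return servers * instance_hourly * 24 * 365

n = 50
print(f"In-house: {annual_inhouse_cost(n):,.0f}  Cloud: {annual_cloud_cost(n):,.0f}")
```

Even a crude model like this makes the point of the session: the comparison flips depending on utilisation, staff ratios, and energy prices, so knowing your own numbers matters more than any headline figure.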
• Electricity Incentivisation Scheme (EIS) at the University of Cambridge
• Design of Engineering’s Data Centre cooling system
• Energy use from 2010 onwards
• Next steps
What does central IT really cost? An attempt to find out! - Heidi Fraser-Krau... - JISC's Green ICT Programme
• To understand where money is spent.
• To be able to compare our costs with those of other providers- cloud.....
• To be able to price services that we offer to others.
• Curiosity!
• To have evidence to argue for more resource.
• To understand central vs. local provision costs.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
• UI automation introduction
• UI automation sample
• Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
US Trends in Data Centre Design with NREL Examples of Large Energy Savings
1. NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.
US Trends in Data Centre Design with NREL Examples of Large Energy Savings
Understanding and Minimising the Costs of Data Centre Based IT Services Conference, University of Liverpool
Otto Van Geet, PE
June 17, 2013
3. BPG Table of Contents
• Summary
• Background
• Information Technology Systems
• Environmental Conditions
• Air Management
• Cooling Systems
• Electrical Systems
• Other Opportunities for Energy Efficient Design
• Data Center Metrics & Benchmarking
5. Environmental Conditions
Data Center equipment's environmental conditions should fall within the ranges established by ASHRAE as published in the Thermal Guidelines book.

Environmental Specifications (°C), at equipment intake:

                  Recommended                       Allowable
Temperature       18° – 27°C                        15° – 32°C (A1); 5° – 45°C (A4)
Humidity (RH)     5.5°C DP – 60% RH and 15°C DP     20% – 80% RH

Reference: ASHRAE (2008), (2011)
8. Estimated Savings
Baseline system: DX cooling with no economizer. Load: 1 ton of cooling, constant year-round. Efficiency (COP): 3. Total energy: 10,270 kWh/yr.

Results                                          Recommended range      Allowable range
                                                 Hours   Energy (kWh)   Hours   Energy (kWh)
Zone 1: DX cooling only                             25        8             2        1
Zone 2: Multistage indirect evap. + DX (H80)        26       16             4        3
Zone 3: Multistage indirect evap. only               3        1             0        0
Zone 4: Evap. cooler only                          867       97           510       57
Zone 5: Evap. cooler + outside air               6,055      417         1,656       99
Zone 6: Outside air only                           994        0         4,079        0
Zone 7: 100% outside air                           790        0         2,509        0
Total                                            8,760      538         8,760      160
Estimated % savings                                  –      95%             –      98%
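As a quick sanity check (not part of the original deck), the table's bottom line follows from the baseline: a minimal sketch of the percent-savings arithmetic.

```python
BASELINE_KWH = 10_270  # DX cooling, no economizer, COP 3, constant 1-ton load

def pct_savings(annual_kwh: float, baseline: float = BASELINE_KWH) -> float:
    """Percent of baseline cooling energy avoided."""
    return 100 * (baseline - annual_kwh) / baseline

print(round(pct_savings(538)))  # 95 (recommended range)
print(round(pct_savings(160)))  # 98 (allowable range)
```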
9. Data Center Efficiency Metric
• Power Usage Effectiveness (P.U.E.) is an industry standard data center efficiency metric.
• The ratio of total facility power, including power used or lost by data center facility infrastructure (pumps, lights, fans, conversions, UPS…), to power used by compute.
• Not perfect; some folks play games with it.
• A 2011 survey estimates the industry average is 1.8: in a typical data center, half of the power goes to things other than compute capability.

P.U.E. = ("IT power" + "Facility power") / "IT power"
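As a quick illustration (not from the slides), the PUE arithmetic can be sketched in Python; the 1.8 example reflects the survey average quoted above.

```python
def pue(it_power_kw: float, facility_power_kw: float) -> float:
    """PUE = (IT power + facility power) / IT power."""
    return (it_power_kw + facility_power_kw) / it_power_kw

# At the 2011 survey average of 1.8, a 1,000 kW IT load drags along
# another 800 kW of facility overhead.
print(pue(1000, 800))  # 1.8
```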
12. "I am re-using waste heat from my data center on another part of my site and my PUE is 0.8!"
ASHRAE & friends (DOE, EPA, TGG, 7x24, etc.) do not allow reused energy in PUE, and PUE is always >1.0. Another metric has been developed by The Green Grid: ERE – Energy Reuse Effectiveness.
http://www.thegreengrid.org/en/Global/Content/white-papers/ERE
13. ERE – Adds Energy Reuse
[Diagram: energy flows from the utility through cooling, UPS and PDU to the IT load, with rejected energy and reused energy paths labeled (a)–(g).]
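A minimal sketch of The Green Grid's ERE definition (total facility energy minus reused energy, over IT energy); the example numbers are illustrative, chosen to be consistent with the RSF figures quoted elsewhere in the deck (PUE 1.18, ERE 0.9).

```python
def ere(total_energy_kwh: float, reused_energy_kwh: float,
        it_energy_kwh: float) -> float:
    """ERE = (total facility energy - reused energy) / IT energy."""
    return (total_energy_kwh - reused_energy_kwh) / it_energy_kwh

# Illustrative: a PUE-1.18 facility (1,180 kWh total per 1,000 kWh of IT)
# that finds a use for 280 kWh of its waste heat lands at ERE 0.9.
print(ere(1180, 280, 1000))  # 0.9
```

Unlike PUE, ERE can legitimately drop below 1.0, which is why reuse claims belong in ERE rather than in PUE.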
14. DOE/NREL Research Support Facility
Credit: Haselden Construction
• More than 1300 people in DOE office space on NREL's campus
• 33,445 m2
• Design/build process with required energy goals
  – 50% energy savings from code
  – LEED Platinum
• Replicable: process, technologies, cost
• Site, source, carbon, cost ZEB:B
  – Includes plug loads and datacenter
• Firm fixed price – US $22.8/m2 construction cost (not including $2.5/m2 for PV from PPA/ARRA)
• Opened June 10, 2010 (first phase)
15. RSF Datacenter
• Fully containing hot aisle
  – Custom aisle floor and door seals
  – Ensure equipment designed for cold aisle containment, and installed to pull cold air (not hot air…)
• 1.18 annual PUE; ERE = 0.9
• Control hot aisle based on return temperature of ~90°F.
• Waste heat used to heat building.
• Outside air and evaporative cooling
• Low fan energy design
• 176 sq m.
Credit: Marjorie Schott/NREL
18. Move to Liquid Cooling
• Server fans are inefficient and noisy.
  – Liquid doors are an improvement, but we can do better!
• Power densities are rising, making component-level liquid cooling solutions more appropriate.
• Liquid benefits
  – Thermal stability, reduced component failures.
  – Better waste heat re-use options.
  – Warm water cooling, reduce/eliminate condensation.
  – Provide cooling with higher temperature coolant.
• Eliminate expensive & inefficient chillers.
• Save wasted fan energy and use it for computing.
• Unlock your cores and overclock to increase throughput!
19. Liquid Cooling – Overview
Water and other liquids (dielectrics, glycols and refrigerants) may be used for heat removal.
• Liquids typically use LESS transport energy (14.36 air-to-water horsepower ratio for the example below).
• Liquid-to-liquid heat exchangers have closer approach temps than liquid-to-air (coils), yielding increased outside air hours.
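A back-of-envelope illustration (not from the deck) of why liquid transport is cheaper: water's volumetric heat capacity is roughly 3,500 times air's, so vastly less volume, and hence far less fan/pump work, must be moved per watt of heat. The rack size and delta-T below are assumed values.

```python
# Volumetric heat capacity, J per litre per K (rough textbook values).
WATER = 4186.0  # 1 kg/L x 4186 J/(kg K)
AIR = 1.2       # ~1.2 g/L x ~1.0 J/(g K)

def flow_l_per_s(heat_kw: float, delta_t_k: float, vol_heat_cap: float) -> float:
    """Volumetric flow needed to carry heat_kw at a given delta-T."""
    return heat_kw * 1000 / (vol_heat_cap * delta_t_k)

# A hypothetical 10 kW rack at a 10 K delta-T:
water_flow = flow_l_per_s(10, 10, WATER)  # ~0.24 L/s of water
air_flow = flow_l_per_s(10, 10, AIR)      # ~833 L/s of air
```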
20. 2011 ASHRAE Liquid Cooling Guidelines
NREL ESIF HPC (HP hardware) uses 24°C supply, 40°C return – W4/W5.
21. NREL HPC Data Center Showcase Facility
High Performance Computing
• 10 MW, 929 m2
• Leverage favorable climate
• Use direct water-to-rack cooling
• DC manager responsible for ALL DC cost, including energy!
• Waste heat captured and used to heat labs & offices.
• World's most energy efficient data center, PUE 1.06!
• Lower CapEx and OpEx.
• Leveraged expertise in energy efficient buildings to focus on showcase data center; chips-to-bricks approach.
• Operational 1-2013; petascale+ HPC capability in 8-2013
• 20-year planning horizon: 5 to 6 HPC generations.
22. Critical Data Center Specs
• Warm water cooling, 24°C
  – Water is a much better working fluid than air: pumps trump fans.
  – Utilize high quality waste heat, 40°C or warmer.
  – 90%+ of IT heat load to liquid.
• High power distribution
  – 480VAC, eliminate conversions.
• Think outside the box
  – Don't be satisfied with an energy efficient data center nestled on campus surrounded by inefficient laboratory and office buildings.
  – Innovate, integrate, optimize.
Dashboards report instantaneous, seasonal and cumulative PUE values.
23. NREL ESIF Data Center Cross Section
• Data center equivalent of the "visible man"
  – Reveal not just boxes with blinky lights, but the inner workings of the building as well.
  – Tour views into pump room and mechanical spaces
  – Color-coded pipes, LCD monitors
24. Data Center
• 2.5 MW – day one capacity (utility $500K/yr/MW)
• 10 MW – ultimate capacity
• Petaflop
• No vapor compression for cooling
25. Data Center – Summer Cooling Mode
PUE – typical data center = 1.5 – 2.0
NREL ESIF = 1.04*
* 30% more energy efficient than your typical "green" data center
26. Data Center – Winter Cooling Mode
ERE – Energy Reuse Effectiveness: how efficiently are we using the waste heat to heat the rest of the building?
NREL ESIF = 0.7 (we use 30% of waste heat; more with future campus loops)
[Diagram: waste heat routed to the high bay, office and conference heating loops, with future campus heating loops planned.]
27. 27
95 deg
Air
75 deg
Air
• Water to rack Cooling for High Performance
Computers handles 90% of total load
• Air Cooling for Legacy Equipment handles 10% of total Load
Data Center – Cooling Strategy
28. PUE 1.0X – Focus on the "1"
True efficiency requires 3-D optimization: facility PUE, IT power consumption, energy re-use.
Facility PUE: we all know how to do this!
29. PUE 1.0X – Focus on the "1"
True efficiency requires 3-D optimization: facility PUE, IT power consumption, energy re-use.
Facility PUE: we all know how to do this!
IT power consumption: increased work per watt; reduce or eliminate fans; component-level heat exchange; newest processors are more efficient.
30. PUE 1.0X – Focus on the "1"
True efficiency requires 3-D optimization: facility PUE, IT power consumption, energy re-use.
Facility PUE: we all know how to do this!
IT power consumption: increased work per watt; reduce or eliminate fans; component-level heat exchange; newest processors are more efficient.
Energy re-use: direct liquid cooling; higher return water temps; holistic view of data center planning.
31. What's Next?
✓ Energy efficient supporting infrastructure.
✓ Pumps, large pipes, high voltage (380 to 480) electrical to rack.
✓ Efficient HPC for planned workload.
✓ Capture and re-use waste heat.
Can we manage and "optimize" workflows, with varied job mix, within a given energy "budget"? Can we do this as part of a larger "ecosystem"?
Steve Hammond
32. Other Factors
DC as part of the Campus Energy System:
• 4 MW solar
• Use waste heat
• Better rates, shed load
DemandSMART: comprehensive demand response. Balancing supply and demand on the electricity grid is difficult and expensive; end users that provide a balancing resource are compensated for the service.
[Chart: annual electricity demand as a percent of available capacity (25–100%) across winter, spring, summer and fall.]
33. Parting Thoughts
• Energy efficient data centers – been there, done that.
  – We know how; let's just apply best practices.
  – Don't fear H2O: liquid cooling will be increasingly prevalent.
• Metrics will lead us into sustainability.
  – If you don't measure/monitor it, you can't manage it.
  – As PUE has done, ERE, Carbon Use Effectiveness (CUE), etc. will help drive sustainability.
• Energy efficient and sustainable computing – it's all about the "1".
  – 1.0 or 0.06? Where do we focus? Compute & energy reuse.
• Holistic approaches to energy management.
  – Lots of open research questions.
  – Projects may get an energy allocation rather than a node-hour allocation.
35. Water Considerations
"We shouldn't use evaporative cooling, water is scarce."
• Thermoelectric power generation (coal, oil, natural gas and nuclear) consumes about 1.1 gallons per kWh, on average.
• This amounts to about 9.6M gallons per MW-year.
• We estimate about 2.5M gallons of water consumed per MW-year for on-site evaporative cooling towers at NREL.
• If chillers need 0.2 MW per MW of HPC power, then chillers have an impact of 2.375M gallons per year per MW.
• Actuals will depend on your site, but evap. cooling doesn't necessarily result in a net increase in water use.
• Low energy use = lower water use. Energy reuse uses NO water!
NREL PIX 00181
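The slide's 9.6M-gallon figure follows directly from the 1.1 gal/kWh average; a minimal sketch, assuming a continuous 1 MW draw for all 8,760 hours of the year.

```python
GAL_PER_KWH = 1.1    # average water use of thermoelectric generation
HOURS_PER_YEAR = 8_760
KW_PER_MW = 1_000

def offsite_gal_per_mw_year() -> float:
    """Water evaporated upstream at the power plant per MW-year of load."""
    return GAL_PER_KWH * KW_PER_MW * HOURS_PER_YEAR

print(offsite_gal_per_mw_year())  # ~9.6 million gallons, matching the slide
```

This is the point of the slide: the 2.5M gallons an on-site evaporative tower uses per MW-year is well under the offsite water a less efficient, chiller-heavy design would consume through extra generation.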
36. Data Center Efficiency
• Choices regarding power, packaging, cooling, and energy recovery in data centers drive TCO.
• Why should we care?
  – Carbon footprint.
  – Water usage.
  – Mega$ per MW-year.
  – Cost: OpEx ~ IT CapEx!
• A less efficient data center takes away power and dollars that could otherwise be used for compute capability.
37. Holistic Thinking
• Approach to cooling: air vs. liquid, and where?
  – Components, liquid doors or CRACs, …
• What is your "ambient" temperature?
  – 55°F, 65°F, 75°F, 85°F, 95°F, 105°F …
  – 13°C, 18°C, 24°C, 30°C, 35°C, 40.5°C …
• Electrical distribution:
  – 208 V or 480 V?
• "Waste" heat:
  – How hot? Liquid or air? Throw it away or use it?
38. Liquid Cooling – New Considerations
• Air cooling
  – Humidity
  – Fan failures
  – Air-side economizers, particulates
• Liquid cooling
  – pH & bacteria
  – Dissolved solids
  – Corrosion inhibitors, etc.
• When considering liquid cooled systems, insist that providers adhere to the latest ASHRAE water quality spec or it could be costly.
40. 2011 ASHRAE Thermal Guidelines
2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance. White paper prepared by ASHRAE Technical Committee TC 9.9.
41. Energy Savings Potential: Economizer Cooling
[Map: energy savings potential for the recommended envelope, Stage 1: Economizer Cooling. Source: Billy Roberts, NREL]
42. Data Center Energy
• Data centers are energy intensive facilities.
  – 10–100x more energy intensive than an office.
  – Server racks well in excess of 30 kW.
  – Power and cooling constraints in existing facilities.
• Data center inefficiency steals power that would otherwise support compute capability.
• Important to have the DC manager responsible for ALL DC cost, including energy!
43. Energy Savings Potential: Economizer + Direct Evaporative Cooling
[Map: energy savings potential for the recommended envelope, Stage 2: Economizer + Direct Evap. Cooling. Source: Billy Roberts, NREL]
44. Energy Savings Potential: Economizer + Direct Evap. + Multistage Indirect Evap. Cooling
[Map: energy savings potential for the recommended envelope, Stage 3: Economizer + Direct Evap. + Multistage Indirect Evap. Cooling. Source: Billy Roberts, NREL]
45. Data Center Energy Efficiency
• ASHRAE 90.1-2011 requires an economizer in most data centers.
• ASHRAE Standard 90.4P, Energy Standard for Data Centers and Telecommunications Buildings
  – PURPOSE: To establish the minimum energy efficiency requirements of data centers and telecommunications buildings for design, construction, and a plan for operation and maintenance.
  – SCOPE: This standard applies to new data centers and telecommunications buildings, new additions, and modifications to such buildings or portions thereof and their systems.
• Will set minimum PUE based on climate.
• More detail at: https://www.ashrae.org/news/2013/ashrae-seeks-input-on-revisions-to-data-centers-in-90-1-energy-standard-scope
46. Energy Conservation Measures
1. Reduce the IT load – virtualization & consolidation (up to 80% reduction).
2. Implement contained hot aisle and cold aisle layout.
   – Curtains, equipment configuration, blank panels, cable entrance/exit ports.
3. Install economizer (air or water) and evaporative cooling (direct or indirect).
4. Raise discharge air temperature. Install VFDs on all computer room air conditioning (CRAC) fans (if used) and network the controls.
5. Reuse data center waste heat if possible.
6. Raise the chilled water (if used) set-point.
   – Increasing chiller water temp by 1°C reduces chiller energy use by about 3%.
7. Install high efficiency equipment, including UPS, power supplies, etc.
8. Move chilled water as close to the server as possible (direct liquid cooling).
9. Consider a centralized high efficiency water-cooled chiller plant.
   – Air-cooled = 2.9 COP; water-cooled = 7.8 COP.
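The 3%-per-degree rule of thumb in measure 6 adds up over several degrees; a hypothetical sketch below. The slide does not say whether the saving compounds, so treating each additional degree as a further 3% cut is an assumption.

```python
def chiller_kwh_after_raise(base_kwh: float, delta_c: float,
                            savings_per_degree: float = 0.03) -> float:
    """Chiller energy after raising the chilled-water set-point by delta_c.

    Assumption: the ~3% saving per 1 degC compounds per degree raised.
    """
    return base_kwh * (1 - savings_per_degree) ** delta_c

print(chiller_kwh_after_raise(100_000, 1))  # ~97,000 kWh: the slide's ~3%
print(chiller_kwh_after_raise(100_000, 5))  # ~85,900 kWh: roughly 14% off
```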
47. Equipment Environmental Specification
Air inlet to IT equipment is the important specification to meet. Outlet temperature is not important to IT equipment.
48. Key Nomenclature
Recommended Range (statement of reliability): preferred facility operation; most values should be within this range.
Allowable Range (statement of functionality): robustness of equipment; no values should be outside this range.
[Diagram of rack intake temperature bands: max allowable, max recommended, recommended range, min recommended, min allowable; over-temp above the recommended range, under-temp below it.]
49. Improve Air Management
• Typically, more air is circulated than required.
• Air mixing and short circuiting leads to:
  – Low supply temperature
  – Low delta T
• Use hot and cold aisles.
• Improve isolation of hot and cold aisles.
  – Reduce fan energy
  – Improve air-conditioning efficiency
  – Increase cooling capacity
A hot aisle/cold aisle configuration decreases mixing of intake & exhaust air, promoting efficiency.
Source: http://www1.eere.energy.gov/femp/pdfs/eedatacenterbestpractices.pdf
50. Isolate Cold and Hot Aisles
With isolation, supply air can run at 70–80°F vs. 45–55°F, and return air at 95–105°F vs. 60–70°F.
Source: http://www1.eere.energy.gov/femp/pdfs/eedatacenterbestpractices.pdf
51. Adding Air Curtains for Hot/Cold Isolation
Photo used with permission from the National Snow and Ice Data Center.
http://www.nrel.gov/docs/fy12osti/53939.pdf
53. Three (3) Cooling Device Categories
1 – Rack cooler (cooling water to the IT equipment rack, rack containment): APC (water), Knürr CoolTherm (water), Knürr CoolLoop (water), Rittal (water)
2 – Row cooler (row containment): APC (2*, water), Liebert (refrigerant)
3 – Passive door cooler: IBM (water), Vette/Coolcentric (water), Liebert (refrigerant), SUN (refrigerant)
Courtesy of Henry Coles, Lawrence Berkeley National Laboratory
54. "Chill-off 2" Evaluation of Close-coupled Cooling Solutions
[Chart comparing close-coupled cooling solutions by energy use; lower bars mean less energy use.]
Courtesy of Geoffrey Bell and Henry Coles, Lawrence Berkeley National Laboratory
55. Cooling Takeaways…
• Use a central plant (e.g. chiller/CRAHs) vs. CRAC units.
• Use centralized controls on CRAC/CRAH units to prevent simultaneous humidifying and dehumidifying.
• Move to liquid cooling (room, row, rack, chip).
• Consider VSDs on fans, pumps, chillers, and towers.
• Use air- or water-side free cooling.
• Expand humidity range and improve humidity control (or disconnect).