In this deck from the 2018 Swiss HPC Conference, Karsten Kutzer from Lenovo presents: Energy Efficiency and Water-Cool-Technology Innovations.
"This session will discuss why water cooling is becoming more and more important for HPC data centers, Lenovo’s series of innovations in the area of direct water-cooled systems, and ways to re-use “waste heat” created by HPC systems."
Watch the video: https://wp.me/p3RLHQ-iDl
Learn more:
http://lenovo.com
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Explore how Capgemini’s Connected autonomous planning fine-tunes Consumer Products Company’s operations for manufacturing, transport, procurement, and virtually every other aspect of the supply-value network in a touchless, autonomous way.
DELL Technologies - The Complete Portfolio in 25 Minutes – Smarter.World
To get an idea of the size, strength, and scope of the Dell Technologies product and solution offering, we will be showcasing the entire Dell Technologies family and their corresponding solutions over the next 25 minutes.
This is a high-level Dell Technologies portfolio overview in 25 minutes.
Version v3 March 2018
DAS Slides: Building a Data Strategy — Practical Steps for Aligning with Business Goals – DATAVERSITY
Developing a Data Strategy for your organization can seem like a daunting task. The opportunity in getting it right can be significant, however, as data drives many of the key initiatives in today’s marketplace, from digital transformation, to marketing, to customer centricity, population health, and more. This webinar will help demystify data strategy and data architecture and will provide concrete, practical ways to get started.
Data Architecture Best Practices for Advanced Analytics – DATAVERSITY
Many organizations are immature when it comes to data and analytics use. The answer lies in delivering a greater level of insight from data, straight to the point of need.
There are so many Data Architecture best practices today, accumulated from years of practice. In this webinar, William will look at some Data Architecture best practices that he believes have emerged in the past two years and are not yet worked into many enterprise data programs. These are keepers that organizations will need to move toward, by one means or another, so it’s best to work them into the environment mindfully.
Enterprise Architecture vs. Data Architecture – DATAVERSITY
Enterprise Architecture (EA) provides a visual blueprint of the organization, and shows key interrelationships between data, process, applications, and more. By abstracting these assets in a graphical view, it’s possible to see key interrelationships, particularly as they relate to data and its business impact across the organization. Join us for a discussion on how Data Architecture is a key component of an overall Enterprise Architecture for enhanced business value and success.
Dell Technologies - The Complete ISG Hardware Portfolio – Smarter.World
To get an idea of the huge hardware product portfolio of Dell Technologies, I will showcase the entire Dell EMC ISG (Infrastructure Solutions Group) server, storage, backup, converged, hyper-converged, and networking portfolio in this presentation.
I will not discuss hero numbers or magic quadrants, nor will I present revenue or employee figures.
In this presentation, the focus is on the Dell EMC ISG hardware products of Dell Technologies that are needed for IT transformation.
We will introduce the hardware products from Dell EMC at a semi-high level (e.g., product highlights, use cases/workloads, and a primary set of key capabilities).
Introducing Trillium DQ for Big Data: Powerful Profiling and Data Quality for... – Precisely
The advanced analytics and AI that run today’s businesses rely on a larger volume, and greater variety, of data. This data needs to be of the highest quality to ensure the best possible outcomes, but traditional data quality tools weren’t designed for today’s modern data environments.
That’s why we’ve developed Trillium DQ for Big Data -- an integrated product that delivers industry-leading data profiling and data quality at scale, in the cloud or on premises.
In this on-demand webcast, you will learn how Trillium DQ:
• Empowers data analysts to easily profile large, diverse data sources to discover new insights, uncover issues, and report on their findings – all without involving IT.
• Delivers best-in-class entity resolution to support mission-critical applications such as Customer 360, fraud detection, AML, and predictive analytics.
• Supports Cloud and hybrid architectures by providing consistent high-performance processing within critical time windows on all platforms.
• Keeps enterprise data lakes validated, clean, and trusted with the highest quality data – without technical expertise in big data or distributed architectures.
• Enables data quality monitoring based on targeted business rules for data governance and business insight.
Intel, founded in 1968, distinguished itself by making semiconductors. Today, Intel is one of the most valuable brands in the world and holds roughly an 80% share of the PC microprocessor market. How did Intel grow to be the market leader in microprocessors, or rather in technology? It emerged successful in the market through strategic planning. Its marketing campaigns include the Intel Inside campaign, the dummy-people series of ads, and the Unwired campaign. Intel aims at making lives simpler in a fun way. The Sponsors of Tomorrow ad campaign highlights the role of Intel in changing the future of technology.
These campaigns have captured the right spirit of Intel and who they are.
New Analytic Uses of Master Data Management in the Enterprise – DATAVERSITY
Companies all over the world are going through digital transformation now, which in many cases is all about maturing the data environment and the use of data. Master data is key to this effort. All transformative projects require master data and usually many subject areas.
What could you accomplish if cultivating master data didn’t have to be part of every project and could be accessed as a service?
We’ll look at creative use cases of Master Data Management in the enterprise. We’ll see what some MDM vendors are doing with AI and how the future of MDM will be shaped by looking at some specific MDM actions influenced by AI.
What is big data in the context of energy & utilities, and how/where can utilities find value in the data? In this C-level presentation we discussed the three prime areas: grid operations, smart metering, and asset & workforce management. A section on cognitive computing for utilities has been omitted from the presentation due to confidentiality, but I can tell you: the perspectives on how IBM Watson will help utilities plan and optimize their operations in the near future are mind-blowing!
See more on http://www.ibmbigdatahub.com/industry/energy-utilities
DAS Slides: Master Data Management — Aligning Data, Process, and Governance – DATAVERSITY
Master Data Management (MDM) provides organizations with an accurate and comprehensive view of their business-critical data such as Customers, Products, Vendors, and more. While mastering these key data areas can be a complex task, the value of doing so can be tremendous – from real-time operational integration to data warehousing & analytic reporting. This webinar provides practical strategies for gaining value from your MDM initiative, while at the same time assuring a solid architectural and governance foundation that will ensure long-term, enterprise-wide success.
Dell Technologies Portfolio On One Single Page - POSTER - May 2019 – Smarter.World
The complete Dell Technologies businesses and product/solution portfolio and our strategy on two single pages.
Dell Technologies is a unique family of businesses that provides the essential infrastructure for organizations to build their digital future, transform IT and protect their most important asset, information.
This two-sided "one-pager" summarizes Dell Technologies’ vision, corporate values, strategy, and the entire product/solution portfolio.
DIN A0 poster edition - v3b May 2019
Most organizations need to awaken to a sobering reality: their data maturity level is much lower than they realize. Organizational maturity is a journey requiring a balanced focus on both data and business process, with checkpoints along the way to ensure you’re on the right path. Ron Huizenga will discuss a continuous improvement approach that balances data and process alignment to achieve breakthrough results for data architecture and governance, using the Data Maturity Model as a benchmark.
How to Design and Implement a DataOps Architecture with SDC and GCP – Joseph Arriola
Do you know how to use StreamSets Data Collector with Google Cloud Platform (GCP)? In this session we'll explain how YaloChat designed and implemented a streaming architecture that is sustainable, operable, and scalable. Discover how we deployed Data Collector to integrate GCP components such as Pub/Sub and BigQuery to achieve DataOps in the cloud.
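The session abstract includes no code, but the integration it describes can be sketched roughly as below using the google-cloud-pubsub and google-cloud-bigquery client libraries. The project, subscription, and table names are hypothetical placeholders, and in practice StreamSets Data Collector provides this wiring without custom code.

```python
# Minimal sketch of the pattern described above: pull events from a Pub/Sub
# subscription and stream them into a BigQuery table. All resource names
# are hypothetical, not ones from the talk.
import json
from concurrent.futures import TimeoutError

from google.cloud import bigquery, pubsub_v1

PROJECT = "my-project"                    # hypothetical
SUBSCRIPTION = "events-sub"               # hypothetical
TABLE = "my-project.analytics.events"     # hypothetical

bq = bigquery.Client(project=PROJECT)
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def callback(message):
    # Each Pub/Sub message is assumed to carry one JSON event.
    row = json.loads(message.data.decode("utf-8"))
    errors = bq.insert_rows_json(TABLE, [row])  # streaming insert
    if not errors:
        message.ack()                           # only ack rows BigQuery accepted

streaming_pull = subscriber.subscribe(sub_path, callback=callback)
try:
    streaming_pull.result(timeout=60)           # pull for one minute, then stop
except TimeoutError:
    streaming_pull.cancel()
```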
Customer-Centric Data Management for Better Customer Experiences – Informatica
With consumer and business buyer expectations growing exponentially, more businesses are competing on the basis of customer experience. But executing preferred customer experiences requires data about who your customers are today and what they will likely need in the future. Every business can benefit from an AI-powered master data management platform to supply this information to line-of-business owners so they can execute great experiences at scale. The same need holds from an internal business process perspective as well. For example, many businesses require better data management practices to deliver preferred employee experiences. Informatica provides an MDM platform to solve for these examples and more.
Free yourself from the overload and limitations of on-premises infrastructure. Tap unlimited resources to gain scalability for HPC (High Performance Computing) workloads, analyze data at scale, run simulations and financial models, and experiment while reducing time to market.
Accenture helps companies unlock the business and environmental value of organizational sustainability by strengthening their sustainability DNA. Read more.
The explosive growth of data and the value it creates calls on data professionals to level up their programs to build, demonstrate, and maintain trust. The days of fine print, pre-ticked boxes, and data hoarding are gone and strong collaboration from data, privacy, marketing and ethics teams is necessary to design trustworthy data-driven practices.
Join for a discussion on the latest trends in trusted data and how you can take critical steps to build trust in data practices by:
- Embedding privacy by design into data operations
- Respecting individual choice and optimizing the ongoing relationship with consumers
- Preparing for future data challenges including responsible AI and sustainability
Building a Data Strategy – Practical Steps for Aligning with Business Goals – DATAVERSITY
Developing a Data Strategy for your organization can seem like a daunting task – but it’s worth the effort. Getting your Data Strategy right can provide significant value, as data drives many of the key initiatives in today’s marketplace – from digital transformation, to marketing, to customer centricity, to population health, and more. This webinar will help demystify Data Strategy and its relationship to Data Architecture and will provide concrete, practical ways to get started.
The Digital Decoupling Journey | John Kriter, Accenture – Hosted by Confluent
As many organizations seek to modify both their core business technology platform and their outlying digital channels, one of the largest hindrances people talk about is core data access. As one of our chief partners in event/stream processing, Confluent has worked with Accenture on the creation of our Digital Decoupling strategy. By leveraging CDC technologies to allow data access without modifying the core, organizations are now able to easily access data they previously struggled to marshal. And not only data access, but real-time responses and interactions with customer data previously locked behind the walls of antique or mission-critical systems.
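The talk itself shares no code, but the CDC pattern it describes can be sketched along these lines: change events captured from a core system arrive on a Kafka topic, and a digital channel consumes them without touching the system of record. A minimal sketch with the confluent-kafka Python client; the broker address, topic, and group id are hypothetical.

```python
# Hedged sketch of the CDC consumption pattern: change events from a core
# system land on a Kafka topic and a downstream digital channel reacts to
# them without modifying the system of record. All names are hypothetical.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # hypothetical broker
    "group.id": "digital-channel",        # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["core.customers.cdc"])  # hypothetical CDC topic

try:
    while True:
        msg = consumer.poll(1.0)            # wait up to 1 s for a change event
        if msg is None or msg.error():
            continue
        change = json.loads(msg.value())
        # A Debezium-style envelope has "op" (c/u/d) and "after" (new state).
        print(change.get("op"), change.get("after"))
finally:
    consumer.close()
```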
Luigi Brochard from Lenovo gave this talk at the Switzerland HPC Conference. "High performance computing is converging more and more with the big data topic and related infrastructure requirements in the field. Lenovo is investing in developing systems designed to resolve today's and future problems in a more efficient way and respond to the demands of the industrial and research application landscape."
Watch the video: http://wp.me/p3RLHQ-gDC
Learn more: http://www3.lenovo.com/us/en/data-center/solutions/hpc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Data Centers: Compute Faster and More Sustainably with Microconvective Cooling® – JetCool Technologies
As the demand for AI, ML, and big data grows, data centers are facing significant energy challenges. It is estimated that 3% of the planet's total energy consumption is used by data centers with 30% of that attributed to cooling equipment.
Inferior in-rack cooling solutions use considerable energy and resources, often requiring power- and water-intensive infrastructure like chillers and evaporative coolers. Depending on the local climate, these systems can run year-round, with U.S. data centers alone consuming an estimated 660 billion liters of water in 2020.
Processors rely on optimized cooling solutions to maximize power density and increase compute performance. With suboptimal cooling solutions, heat can cause processor throttling and limited device lifetime. As power density and processor performance increase so do the expectations for cooling technology.
Recognizing the need for a simple, scalable, and sustainable liquid cooling option for data centers, JetCool developed SmartPlate. SmartPlate is a simple water cooling system that outfits processors with a fully sealed, drop-in replacement cooling solution that lowers chip temperatures by 37%. SmartPlate technology helps accelerate performance, delivering up to 30% faster processing for TDPs over 1,000 watts. Sustainability is one of JetCool's primary principles: our customers use 50% less electricity and 90% less water than competing liquid cooling solutions. Reach out to our team to learn more about partnering with JetCool today.
One of our most popular webinar presentations on data center cooling: 2007 Data Center Cooling Study: Comparing Conventional Raised Floors with Close Coupled Cooling Technology.
If you're looking for a solution, it's simple physics: water is roughly 3,500 times more effective at cooling than air. But liquid cooling carries a stigma, particularly because of its large price tag. And, if you're like other data center managers, the words of Jerry Maguire may be ringing in your head: "Show me the money!"
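That headline ratio follows from volumetric heat capacity: per unit volume, water absorbs far more heat per degree than air. A quick back-of-the-envelope check using standard textbook property values (these figures are not from the webinar itself):

```python
# Back-of-the-envelope check of the ~3,500x claim: compare the volumetric
# heat capacity of water and air (J per cubic metre per kelvin) at roughly
# room conditions, using standard textbook property values.
cp_water, rho_water = 4186.0, 998.0    # specific heat J/(kg*K), density kg/m^3
cp_air,   rho_air   = 1005.0, 1.204    # specific heat J/(kg*K), density kg/m^3

vol_heat_water = cp_water * rho_water  # ~4.18e6 J/(m^3*K)
vol_heat_air   = cp_air * rho_air      # ~1.21e3 J/(m^3*K)

# Prints ~3453, consistent with the "roughly 3,500 times" figure.
print(f"water/air ratio: {vol_heat_water / vol_heat_air:.0f}")
```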
To view the recorded webinar presentation, please visit http://www.42u.com/data-center-liquid-cooling-webinar.htm
In this deck from the Switzerland HPC Conference, Francis Lam from Huawei presents: A Fresh Look at HPC from Huawei Enterprise.
"High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei's world class HPC solutions and explores creative new ways to solve HPC problems."
Watch the video: http://wp.me/p3RLHQ-gCV
Learn more: http://e.huawei.com/en/solutions/business-needs/data-center/high-performance-computing
and
http://www.hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Scaling Green Instrumentation to more than 10 Million Cores – inside-BigData.com
In this deck from SC18, Satoshi Matsuoka from RIKEN presents: Scaling Green Instrumentation to more than 10 Million Cores.
"We now have some sites that are extremely well instrumented for measuring power and energy. Lawrence Berkeley National Laboratory’s supercomputing center is a prime example, with sensors, data collection and analysis capabilities that span the facility and computing equipment. We have made major gains in improving the energy efficiency of the facility as well as computing hardware, but there are still large gains to be had with software- particularly application software. Just tuning code for performance isn’t enough; the same time to solution can have very different power profiles. We are at the point where measurement capabilities are allowing us to see cross-cutting issues- such as the cost of spin waits. These new measurement capabilities should provide a wealth of information to see the tall poles in the tent. This panel will explore how we can identify these emerging tall poles."
Learn more: http://sc18.supercomputing.org/proceedings/panel/panel_pages/pan101.html
and
https://wp.me/p3RLHQ-jHZ
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC – inside-BigData.com
In this deck from the Perth HPC Conference, Werner Scholz from XENON Systems presents: Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC.
"A decade ago, 100 watts per CPU was devastating to thermal design. Today, Intel’s highest performing CPUs (e.g. Intel Cascade Lake-AP 9282 processor) have a thermal design envelope of 400 watts. There really is no end in sight, and accommodating more power is critical to advancing performance. The ability to dissipate the resulting heat is the hard ceiling that systems face in terms of performance – giving greater importance to liquid cooling breakthroughs. With liquid cooling, less energy is expended to cool systems – a significant savings in HPC deployments with arrays of servers drawing energy and generating heat. Electrical current drives the CPU and enables it to function. This electrical power is converted into thermal energy (heat). To maintain a stable temperature, the CPU needs to be cooled by efficiently removing this heat and releasing it. Liquid cooling is the best way to cool a system because liquid transfers heat much more efficiently than air. From an environmental perspective, liquid cooling reduces both those characteristics to create a smarter and more ecological approach on a grand scale. The cascade of value continues, as ambient heat removed from systems can then be used to heat buildings and augment or replace traditional heating systems. It’s an intelligent approach to thermal management, distributing the economic value of reduced energy use and transforming heat into an enterprise asset."
Watch the video: https://wp.me/p3RLHQ-kZa
Learn more: https://www.xenon.com.au/
and
http://hpcadvisorycouncil.com/events/2019/australia-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
YORK® AMICHI™ Series Air-Cooled DC Inverter Scroll Chillers and Heat Pumps – Ilia Iovtchev
Mundus Services, the official partner and service provider for Johnson Controls products and services in Bulgaria, is now ready to deliver technical solutions to our clients based on the highly efficient York® Amichi™ series of air-cooled DC inverter scroll chillers and heat pumps, designed to meet tomorrow’s efficiency standards today.
Delivering performance far beyond the typical chiller and heat pump efficiency levels, the YORK® Amichi™ Series meets or exceeds stringent European regulatory requirements through an optimized combination of YORK® efficiency enhancing technologies.
The Aurora G-Station and Aurora Cube are supercomputers in a box.
Equipped with the latest Intel and NVIDIA processors, they offer extraordinary power to process the heaviest computational loads in:
computer graphics and digital media
scientific computing
industrial applications (EDA, CAE, signal processing...)
business applications (computational finance, cyber security, forensic...)
software development.
The Aurora G-Station is a supercomputer in a box. Small and compact, it is easily deployable with no need for infrastructure or extensive HPC knowledge. Available in different configurations and pre-loaded with the software stack, the G-Station is silent thanks to its liquid cooling, has no messy cabling, and generates no heat inside the room, so it can even be deployed in an office environment. The Aurora G-Station is ideal for whoever needs easy high performance everywhere.
In this deck from the Stanford HPC Conference, Shahin Khan from OrionX describes major market Shifts in IT.
"We will discuss the digital infrastructure of the future enterprise and the state of these trends."
"We work with clients on the impact of Digital Transformation (DX) on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, and when they demand action, and how to formulate and execute an effective plan. If that describes you, we can help."
Watch the video: https://wp.me/p3RLHQ-lPP
Learn more: http://orionx.net
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Preparing to program Aurora at Exascale - Early experiences and future directions – inside-BigData.com
In this deck from IWOCL / SYCLcon 2020, Hal Finkel from Argonne National Laboratory presents: Preparing to program Aurora at Exascale - Early experiences and future directions.
"Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. Aurora promises to take scientific computing to a whole new level, and scientists and engineers from many different fields will take advantage of Aurora’s unprecedented computational capabilities to push the boundaries of human knowledge. In addition, Aurora’s support for advanced machine-learning and big-data computations will enable scientific workflows incorporating these techniques along with traditional HPC algorithms. Programming the state-of-the-art hardware in Aurora will be accomplished using state-of-the-art programming models. Some of these models, such as OpenMP, are long-established in the HPC ecosystem. Other models, such as Intel’s oneAPI, based on SYCL, are relatively-new models constructed with the benefit of significant experience. Many applications will not use these models directly, but rather, will use C++ abstraction libraries such as Kokkos or RAJA. Python will also be a common entry point to high-performance capabilities. As we look toward the future, features in the C++ standard itself will become increasingly relevant for accessing the extreme parallelism of exascale platforms.
This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date. oneAPI/SYCL and OpenMP are both critical models in these efforts, and while the ecosystem for Aurora has yet to mature, we’ve already had a great deal of success. Importantly, we are not passive recipients of programming models developed by others. Our team works not only with vendor-provided compilers and tools, but also develops improved open-source LLVM-based technologies that feed both open-source and vendor-provided capabilities. In addition, we actively participate in the standardization of OpenMP, SYCL, and C++. To conclude, I’ll share our thoughts on how these models can best develop in the future to support exascale-class systems."
Watch the video: https://wp.me/p3RLHQ-lPT
Learn more: https://www.iwocl.org/iwocl-2020/conference-program/
and
https://www.anl.gov/topic/aurora
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Greg Wahl from Advantech presents: Transforming Private 5G Networks.
Advantech Networks & Communications Group is driving innovation in next-generation network solutions with their High Performance Servers. We provide business critical hardware to the world's leading telecom and networking equipment manufacturers with both standard and customized products. Our High Performance Servers are highly configurable platforms designed to balance the best in x86 server-class processing performance with maximum I/O and offload density. The systems are cost effective, highly available and optimized to meet next generation networking and media processing needs.
“Advantech’s Networks and Communication Group has been both an innovator and trusted enabling partner in the telecommunications and network security markets for over a decade, designing and manufacturing products for OEMs that accelerate their network platform evolution and time to market,” said Ween Niu, Advantech Vice President of Networks & Communications Group. “In the new IP Infrastructure era, we will be expanding our expertise in Software Defined Networking (SDN) and Network Function Virtualization (NFV), two of the essential conduits to 5G infrastructure agility, making networks easier to install, secure, automate and manage in a cloud-based infrastructure.”
In addition to innovation in air interface technologies and architecture extensions, 5G will also need a new generation of network computing platforms to run the emerging software defined infrastructure, one that provides greater topology flexibility, essential to deliver on the promises of high availability, high coverage, low latency and high bandwidth connections. This will open up new parallel industry opportunities through dedicated 5G network slices reserved for specific industries dedicated to video traffic, augmented reality, IoT, connected cars etc. 5G unlocks many new doors and one of the keys to its enablement lies in the elasticity and flexibility of the underlying infrastructure.
Advantech’s corporate vision is to enable an intelligent planet. The company is a global leader in the fields of IoT intelligent systems and embedded platforms. To embrace the trends of IoT, big data, and artificial intelligence, Advantech promotes IoT hardware and software solutions with the Edge Intelligence WISE-PaaS core to assist business partners and clients in connecting their industrial chains. Advantech is also working with business partners to co-create business ecosystems that accelerate the goal of industrial intelligence.
Watch the video: https://wp.me/p3RLHQ-lPQ
* Company website: https://www.advantech.com/
* Solution page: https://www2.advantech.com/nc/newsletter/NCG/SKY/benefits.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory – inside-BigData.com
In this deck from the Stanford HPC Conference, Katie Lewis from Lawrence Livermore National Laboratory presents: The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory.
"Scientific simulations have driven computing at Lawrence Livermore National Laboratory (LLNL) for decades. During that time, we have seen significant changes in hardware, tools, and algorithms. Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness."
Watch the video: https://youtu.be/NVwmvCWpZ6Y
Learn more: https://computing.llnl.gov/research-area/machine-learning
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems? – inside-BigData.com
In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems?
"This talk will start with an overview of challenges being faced by the AI community to achieve high-performance, scalable and distributed DNN training on Modern HPC systems with both scale-up and scale-out strategies. After that, the talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of- core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented."
Watch the video: https://youtu.be/LeUNoKZVuwQ
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... – inside-BigData.com
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Ryan Quick from Providentia Worldwide describes how DNNs can be used to improve EDA simulation runs.
"Systems Intelligence relies on a variety of methods for providing insight into the core mechanisms for driving automated behavioral changes in self-healing command and control platforms. This talk reports on initial efforts with leveraging Semiconductor Electronic Design Automation (EDA) telemetry data from cross-domain sources including power, network, storage, nodes, and applications in neural networks as a driving method for insight into SI automation systems."
Watch the video: https://youtu.be/2WbR8tq-XbM
Learn more: http://www.providentiaworldwide.com/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoringinside-BigData.com
In this deck from the Stanford HPC Conference, Nicole Xu from Stanford University describes how she transformed a common jellyfish into a bionic creature that is part animal and part machine.
"Animal locomotion and bioinspiration have the potential to expand the performance capabilities of robots, but current implementations are limited. Mechanical soft robots leverage engineered materials and are highly controllable, but these biomimetic robots consume more power than corresponding animal counterparts. Biological soft robots from a bottom-up approach offer advantages such as speed and controllability but are limited to survival in cell media. Instead, biohybrid robots that comprise live animals and self- contained microelectronic systems leverage the animals’ own metabolism to reduce power constraints and body as an natural scaffold with damage tolerance. We demonstrate that by integrating onboard microelectronics into live jellyfish, we can enhance propulsion up to threefold, using only 10 mW of external power input to the microelectronics and at only a twofold increase in cost of transport to the animal. This robotic system uses 10 to 1000 times less external power per mass than existing swimming robots in literature and can be used in future applications for ocean monitoring to track environmental changes."
Watch the video: https://youtu.be/HrmJFyvInj8
Learn more: https://sanfrancisco.cbslocal.com/2020/02/05/stanford-research-project-common-jellyfish-bionic-sea-creatures/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Peter Dueben from the European Centre for Medium-Range Weather Forecasts (ECMWF) presents: Machine Learning for Weather Forecasts.
"I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will than talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future."
Peter is contributing to the development and optimization of weather and climate models for modern supercomputers. He is focusing on a better understanding of model error and model uncertainty, on the use of reduced numerical precision that is optimised for a given level of model error, on global cloud-resolving simulations with ECMWF's forecast model, and on the use of machine learning, and in particular deep learning, to improve the workflow and predictions. Peter graduated in Physics and wrote his PhD thesis at the Max Planck Institute for Meteorology in Germany. He worked as a postdoc with Tim Palmer at the University of Oxford and took up a position as University Research Fellow of the Royal Society at the European Centre for Medium-Range Weather Forecasts (ECMWF) in 2017.
Watch the video: https://youtu.be/ks3fkRj8Iqc
Learn more: https://www.ecmwf.int/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Gilad Shainer from the HPC AI Advisory Council describes how this organization fosters innovation in the high performance computing community.
"The HPC-AI Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) and Artificial Intelligence (AI) use and its potential, bring the beneficial capabilities of HPC and AI to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC and AI systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC and AI system products."
Watch the video: https://wp.me/p3RLHQ-lNz
Learn more: http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Today RIKEN in Japan announced that the Fugaku supercomputer will be made available for research projects aimed to combat COVID-19.
"Fugaku is currently being installed and is scheduled to be available to the public in 2021. However, faced with the devastating disaster unfolding before our eyes, RIKEN and MEXT decided to make a portion of the computational resources of Fugaku available for COVID-19-related projects ahead of schedule while continuing the installation process.
Fugaku is being developed not only for progress in science, but also to help build the society dubbed “Society 5.0” by the Japanese government, in which all people will live safe and comfortable lives. The current initiative to fight the novel coronavirus is driven by the philosophy behind the development of Fugaku."
Initial Projects:
- Exploring new drug candidates for COVID-19 by "Fugaku" – Yasushi Okuno, RIKEN / Kyoto University
- Prediction of conformational dynamics of proteins on the surface of SARS-CoV-2 using Fugaku – Yuji Sugita, RIKEN
- Simulation analysis of pandemic phenomena – Nobuyasu Ito, RIKEN
- Fragment molecular orbital calculations for COVID-19 proteins – Yuji Mochizuki, Rikkyo University
In this deck from the Performance Optimisation and Productivity group, Lubomir Riha from IT4Innovations presents: Energy Efficient Computing using Dynamic Tuning.
"We now live in a world of power-constrained architectures and systems and power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centers and major vendors have developed new tools and methodologies to measure and optimize the energy consumption of large-scale high performance system installations. Due to the link between energy consumption, power consumption and execution time of an application executed by the final user, it is important for these tools and the methodology used to consider all these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high level objectives.
This webinar focused on tools designed to improve the energy-efficiency of HPC applications using a methodology of dynamic tuning of HPC applications, developed under the H2020 READEX project. The READEX methodology has been designed for exploiting the dynamic behaviour of software. At design time, different runtime situations (RTS) are detected and optimized system configurations are determined. RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically.
The MERIC tool, which implements the READEX methodology, is presented. It supports manual or binary instrumentation of the analysed applications to simplify the analysis. This instrumentation is used to identify and annotate the significant regions in the HPC application. Automatic binary instrumentation annotates regions with significant runtime. Manual instrumentation, which can be combined with automatic instrumentation, allows the code developer to annotate regions of particular interest."
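MERIC's actual annotation API is not reproduced here; the following is a purely hypothetical Python sketch of the underlying idea only: significant regions are annotated, and at runtime each region switches to the system configuration the tuning model recorded for it.

```python
# Hypothetical illustration of the READEX idea, NOT the MERIC API: a tuning
# model maps each significant region to its best-known system configuration
# (here, a CPU core frequency), applied on entry and reverted on exit.
from contextlib import contextmanager

TUNING_MODEL = {"io_phase": 1.2, "solver_phase": 2.8}  # made-up GHz settings
DEFAULT_GHZ = 2.0

def set_core_frequency(ghz):
    # Stand-in for a real platform call (e.g. via cpufreq); just logs here.
    print(f"cores -> {ghz} GHz")

@contextmanager
def region(name):
    set_core_frequency(TUNING_MODEL.get(name, DEFAULT_GHZ))  # apply region config
    try:
        yield
    finally:
        set_core_frequency(DEFAULT_GHZ)                      # restore default

with region("io_phase"):
    pass  # I/O-bound phase: lower frequency saves energy at little cost
with region("solver_phase"):
    pass  # compute-bound phase: higher frequency shortens time to solution
```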
Watch the video: https://wp.me/p3RLHQ-lJP
Learn more: https://pop-coe.eu/blog/14th-pop-webinar-energy-efficient-computing-using-dynamic-tuning
and
https://code.it4i.cz/vys0053/meric
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from GTC Digital, William Beaudin from DDN presents: HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD.
Enabling high performance computing through the use of GPUs requires an incredible amount of IO to sustain application performance. We'll cover architectures that enable extremely scalable applications through the use of NVIDIA’s SuperPOD and DDN’s A3I systems.
The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure. DDN A³I with the EXA5 parallel file system is a turnkey, AI data storage infrastructure for rapid deployment, featuring faster performance, effortless scale, and simplified operations through deeper integration. The combined solution delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world's most challenging AI problems.
Watch the video: https://wp.me/p3RLHQ-lIV
Learn more: https://www.ddn.com/download/nvidia-superpod-ddn-a3i-ai400-appliance-with-the-exa5-filesystem/
and
https://www.nvidia.com/en-us/gtc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Paul Isaacs from Linaro presents: State of ARM-based HPC. This talk provides an overview of applications and infrastructure services successfully ported to AArch64 and benefiting from scale.
"With its debut on the TOP500, the 125,000-core Astra supercomputer at New Mexico’s Sandia Labs uses Cavium ThunderX2 chips to mark Arm’s entry into the petascale world. In Japan, the Fujitsu A64FX Arm-based CPU in the pending Fugaku supercomputer has been optimized to achieve high-level, real-world application performance, anticipating up to one hundred times the application execution performance of the K computer. K was the first computer to top 10 petaflops in 2011."
Watch the video: https://wp.me/p3RLHQ-lIT
Learn more: https://www.linaro.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Versal Premium ACAP for Network and Cloud Acceleration – inside-BigData.com
Today Xilinx announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry’s highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.
Versal is the industry’s first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC’s 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
Learn more: https://insidehpc.com/2020/03/xilinx-announces-versal-premium-acap-for-network-and-cloud-acceleration/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently – inside-BigData.com
In this video from the Rice Oil & Gas Conference, Chin Fang from Zettar presents: Moving Massive Amounts of Data across Any Distance Efficiently.
The objective of this talk is to present two ongoing projects aimed at improving and ensuring highly efficient bulk transfer or streaming of massive amounts of data over digital connections across any distance. It examines the current state of the art, a few very common misconceptions, the differences among the three major types of data movement solutions, a current initiative attempting to improve data movement efficiency from the ground up, and another multi-stage project that shows how to conduct long-distance, large-scale data movement at speed and scale internationally. Both projects have real-world motivations, e.g. the ambitious data transfer requirements of Linac Coherent Light Source II (LCLS-II) [1], a premier preparation project of the U.S. DOE Exascale Computing Initiative (ECI) [2]. Their immediate goals are described and explained, together with the solution used for each. Findings and early results are reported. Possible future works are outlined.
Watch the video: https://wp.me/p3RLHQ-lBX
Learn more: https://www.zettar.com/
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Rice Oil & Gas Conference, Bradley McCredie from AMD presents: Scaling TCO in a Post Moore's Law Era.
"While foundries bravely drive forward to overcome the technical and economic challenges posed by scaling to 5nm and beyond, Moore’s law alone can provide only a fraction of the performance / watt and performance / dollar gains needed to satisfy the demands of today’s high performance computing and artificial intelligence applications. To close the gap, multiple strategies are required. First, new levels of innovation and design efficiency will supplement technology gains to continue to deliver meaningful improvements in SoC performance. Second, heterogenous compute architectures will create x-factor increases of performance efficiency for the most critical applications. Finally, open software frameworks, APIs, and toolsets will enable broad ecosystems of application level innovation."
Watch the video:
Learn more: http://amd.com
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CUDA-Python and RAPIDS for blazing fast scientific computing (inside-BigData.com)
In this deck from the ECSS Symposium, Abe Stern from NVIDIA presents: CUDA-Python and RAPIDS for blazing fast scientific computing.
"We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started. Finally, we will briefly highlight several other relevant libraries for GPU programming."
Watch the video: https://wp.me/p3RLHQ-lvu
Learn more: https://developer.nvidia.com/rapids
and
https://www.xsede.org/for-users/ecss/ecss-symposium
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from FOSDEM 2020, Colin Sauze from Aberystwyth University describes the development of a RaspberryPi cluster for teaching an introduction to HPC.
"The motivation for this was to overcome four key problems faced by new HPC users:
* The availability of a real HPC system, and the effect that running training courses can have on it; conversely, the availability of spare resources on the real system can cause problems for the training course.
* A fear of using a large and expensive HPC system for the first time, and worry that doing something wrong might damage it.
* That HPC systems are abstract machines sitting in data centres that users never see, which makes it difficult for them to understand exactly what they are using.
* That new users fail to understand resource limitations; because modern HPC systems have such vast resources, many mistakes can be made before running out of them. A more resource-constrained system makes this easier to understand.
The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi, and attempts to keep that environment as close to a "real" HPC system as possible. The issues in trying to automate the installation process will also be covered."
Learn more: https://github.com/colinsauze/pi_cluster
and
https://fosdem.org/2020/schedule/events/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from ATPESC 2019, Ken Raffenetti from Argonne presents an overview of HPC interconnects.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-luc
Learn more: https://extremecomputingtraining.anl.gov/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
• UI automation introduction
• UI automation sample
• Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
2. Why care about Power and Cooling?
• Increasing electricity cost
• Performance-power relation
• Application diversity
• Waste heat reuse
• Data center limitations
Leading the Industry in Energy Aware HPC
2018 Lenovo - All rights reserved.
4. 2018 Lenovo - All rights reserved.
Application Diversity
• CPU-bound BQCD case
  – Node runs at full power
  – CPU provides full performance while running at full power
• Memory-bound BQCD case
  – Node still runs at full power
  – CPU provides less performance while still running at full power
[Figure: two power-over-time traces (samples 1-225, 0-600 W) for DC node, CPU pkg 0/1 and RAM pkg 0/1; left panel (CPU-bound), Turbo ON: 157 GFlops; right panel (memory-bound), Turbo ON: 65 GFlops]
SD650 with 2 sockets 8168 and 6 x 16GB DIMMs; room temp = 21°C, inlet water = 45°C, 1.5 lpm/tray
How much energy do we waste on non-CPU-bound applications?
5. 2018 Lenovo - All rights reserved.
Waste Energy Reuse – ERE
[Diagram panels: Energy Waste, Direct Reuse, Indirect Reuse. Pictures: Leibniz Supercomputing Centre]
How much energy do we waste by not using the system heat?
6. Energy Aware HPC
2018 Lenovo - All rights reserved.
• Best CPU choice with max TDP supported
• Best performance, fully utilizing the system
• Best TCO / performance for maximized ROI
• Best use of limited data center capacities
• Best carbon footprint for eco-responsible HPC
7. The three Pillars: Hardware, Software, Infrastructure
Leading the Industry in Energy Aware HPC
2018 Lenovo - All rights reserved.
10. Lenovo Cooling Technologies
2018 Lenovo - All rights reserved.
Air Cooled – choose for the broadest choice of customizable options (PUE ~2.0 – 1.5, ERE ~2.0 – 1.5)
• Standard air flow with internal fans, cooled with the room climatization
• Broadest choice of configurable options supported
• Relatively inefficient cooling
Air Cooled w/ Rear Door Heat Exchanger – choose for increased energy efficiency with broad choice (PUE ~1.4 – 1.2, ERE ~1.4 – 1.2)
• Air cooled, but heat removed with RDHX through chilled water
• Retains high flexibility
• Enables extremely tight rack placement
• Potentially room neutral
Direct Water Cooled – choose for max performance and high energy efficiency (PUE <=1.1, ERE <=1.1)
• Most heat removed by onboard waterloop with up to 50°C water temperature
• Supports highest TDP CPU at densest footprint
• Higher performance
• Free cooling
11. 2018 Lenovo - All rights reserved.
Return on Investment for DWC vs RDHx
• New data centers: water cooling has immediate payback.
• Existing air-cooled data centers: the payback period depends strongly on the electricity rate.
[Chart: DWC vs RDHx payback at electricity rates of $0.06/kWh, $0.12/kWh and $0.20/kWh]
12. Rear Door Heat Exchanger
2018 Lenovo - All rights reserved.
• Up to 27°C cold water cooling
• Up to 100% heat removal efficiency at 30kW
• No moving parts or power required
• Tens of thousands of nodes installed
[Timeline: long ago, 2009, 2010]
14. 2018 Lenovo - All rights reserved.
Lenovo RDHx2 – Typical Environment
15. Direct “Hot” Water Cooling
2018 Lenovo - All rights reserved.
[Timeline: 2012, 2014, 2018]
• >24,000 nodes globally
• Up to 50°C hot water cooling
• Up to 90% heat removal efficiency
• World record energy reuse efficiency
• 30+ patents on market-leading design
17. ThinkSystem SD650 – Top-Down View
2018 Lenovo - All rights reserved.
[Diagram labels: water inlet *), water outlet, power board, CPUs, 6 DIMMs per CPU, 2 AEP per CPU, x16 PCIe slot, disk drive, M.2 slot; temperature labels 50°C and 60°C; two nodes sharing a tray and a waterloop]
*) inlet water temperature 50°C with configuration limitations (45°C without configuration limitations)
18. 2018 Lenovo - All rights reserved.
SD650 Improved Node Water Cooling Architecture
• Focus on maximizing efficiency for high (up to 50°C) inlet water temperatures
• Device cooling optimized by minimizing water-to-device temperature differences
  – dT CPU < ~0.1 K/W
  – dT Memory < ~1 K/W
  – dT Network < ~1 K/W
• Direct water cooling of processors, memory, voltage regulation devices and IO devices (network and disk)
• The water circuit traverses all critical components to optimize cooling.
[Diagram labels: disk, conductive plate, memory, water channels]
19. 2018 Lenovo - All rights reserved.
HPL Temperature & Frequency on SD650 with 8168
PL2 (short-term RAPL limit) is 1.2 x TDP; PL1 (long-term RAPL limit) is TDP.
[Plot regions: non-AVX instructions, AVX instructions, non-AVX instructions]
SD650 with 2 sockets 8168 and 12 x 16GB DIMMs; room temp = 21°C, inlet water = 40°C, 1.5 lpm/tray
20. 2018 Lenovo - All rights reserved.
Performance Optimization
• ThinkSystem SD530 – Standard Performance
  – ~ 2.15 TeraFlop/s sustained HPL w/ SKL 6148 20C 2.4GHz 150W
  – … /s sustained HPL w/ SKL 6148 20C 2.4GHz 150W
• ThinkSystem SD650 – High Performance Mode
  – ~ 2.34 TeraFlop/s sustained HPL w/ SKL 6148 20C 2.4GHz 150W

                  HPL [GF]   AC node [W]   DC node [W]   CPU temp [°C]
SD530 Turbo OFF    2152.7       400.1         368.0          81.8
SD530 Turbo ON     2147.2       400.4         368.3          82.1
SD650 Turbo OFF    2342.0       472.5         434.7          36.8
SD650 Turbo ON     2333.4       473.2         435.4          36.9

SD650 High Performance Mode delivers +9% HPL at +18% DC node power.
SD530 and SD650 with 2 sockets 6148 and 12 x 16GB DIMMs; room temp = 21°C, inlet water = 18°C, 1.5 lpm/tray
23. 2018 Lenovo - All rights reserved.
SD650 – DC Power Sampling/Reporting Frequency
• AC power at chassis level (through FPC)
  – With xCAT
  – With ipmi
• DC power and energy at node level through XCC
  – With hw_usage library
  – With ipmi
  – With RAPL (illustrated in the sketch after the diagram below)
  – With Allinea
  – With LSF or LEAR
[Diagram: power-telemetry sources and sampling rates feeding high-level software – the HSC node-power path (meter at 500Hz, sensor at 200Hz, exposed via NM/ME at 1Hz/10Hz), RAPL energy MSRs for CPU/memory (1KHz), and the XCC/BMC (1Hz); new for Lenovo ThinkSystem SD650, an FPGA samples node power with a 10KHz sensor and reports at 100Hz]
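As one hedged illustration of the RAPL path listed above (not the Lenovo XCC/xCAT tooling): on Linux, the CPU energy MSRs are commonly exposed through the powercap sysfs tree, so average package power can be derived by differencing the energy counter. The path below is the usual Intel layout; it varies by platform, reading it may require elevated privileges, and the counter wraps periodically.

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package 0

    def read_energy_uj():
        # cumulative package energy in microjoules
        with open(RAPL) as f:
            return int(f.read())

    e0, t0 = read_energy_uj(), time.time()
    time.sleep(1.0)
    e1, t1 = read_energy_uj(), time.time()
    print("pkg0 average power: %.1f W" % ((e1 - e0) / 1e6 / (t1 - t0)))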
24. 2018 Lenovo - All rights reserved.
SD650 – Advanced Accuracy for Power and Energy
• Node DC power readings
  – Better than or equal to +/-3% power reading accuracy, down to the node's minimum active power (~40-50W DC)
  – Power granularity <= 100mW
  – At least 100Hz update rate for node power readings
• Node DC energy meter
  – Accumulator for energy in Joules (~10 weeks until meter overflow; a quick plausibility check follows below)
[Diagram: the bulk 12V and node 12V rails are sensed via Rsense and an INA226 (used for metering) feeding the FPGA (FIFO), which the XCC reads via an ipmi raw OEM command – high accuracy, fast sampling; the ME (Node Manager, SN1405006) is used for capping, maintaining compatibility with Node Manager]
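A quick plausibility check of the "~10 weeks until meter overflow" figure, under my own assumptions (the slide states neither): a 32-bit accumulator counting whole Joules, and a node drawing roughly 700 W DC sustained.

    MAX_JOULES = 2**32 - 1             # assumed 32-bit Joule accumulator
    NODE_WATTS = 700                   # assumed sustained DC draw
    seconds_to_overflow = MAX_JOULES / NODE_WATTS
    print(seconds_to_overflow / (7 * 24 * 3600), "weeks")  # ~10.1 weeks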
26. 2018 Lenovo - All rights reserved.
Energy Aware Runtime: Motivation
• Power and energy have become critical constraints for HPC systems
• Performance and power consumption of parallel applications depend on:
  – Architectural parameters
  – Runtime node configuration
  – Application characteristics
  – Input data
• Manually selecting the "best" frequency
  – Is difficult and time consuming (and the result is not reusable)
  – The best frequency may change over time
  – It may change between nodes
[Flow: configure the application for architecture X → execute with N frequencies, calculating time and energy → select the optimal frequency]
27. EAR – Automatic and Dynamic CPU Frequency
• Architecture characterization
• Application characterization
  – Outer loop detection (DPD)
  – Application signature computation (CPI, GBS, POWER, TIME)
• Performance and power projection
• User/system policy definition for frequency selection (with thresholds; a sketch follows below)
  – MINIMIZE_ENERGY_TO_SOLUTION
    - Goal: save energy by reducing frequency (with potential performance degradation)
    - The performance degradation is limited by a MAX_PERFORMANCE_DEGRADATION threshold
  – MINIMIZE_TIME_TO_SOLUTION
    - Goal: reduce time by increasing frequency (with potential energy increase)
    - A MIN_PERFORMANCE_EFFICIENCY_GAIN threshold keeps applications that do not scale with frequency from consuming more energy for nothing
2018 Lenovo - All rights reserved.
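The sketch below is an illustration of the MINIMIZE_ENERGY_TO_SOLUTION idea only, not EAR's actual implementation: given projected (time, power) pairs per candidate frequency, pick the lowest-energy frequency whose slowdown stays within the degradation threshold. All numbers and names here are hypothetical.

    MAX_PERFORMANCE_DEGRADATION = 0.10   # allow up to 10% slowdown

    def minimize_energy(projections, nominal_hz):
        # projections: {freq_hz: (projected_time_s, projected_power_w)}
        t_nom = projections[nominal_hz][0]
        best_hz = nominal_hz
        best_energy = t_nom * projections[nominal_hz][1]
        for hz, (t, p) in projections.items():
            slowdown = (t - t_nom) / t_nom
            if slowdown <= MAX_PERFORMANCE_DEGRADATION and t * p < best_energy:
                best_hz, best_energy = hz, t * p
        return best_hz

    # a memory-bound signature: runtime barely grows as frequency drops
    proj = {2.4e9: (100.0, 360.0), 2.0e9: (104.0, 300.0), 1.6e9: (125.0, 250.0)}
    print(minimize_energy(proj, 2.4e9) / 1e9, "GHz")  # -> 2.0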
28. 2018 Lenovo - All rights reserved.
EAR – Functional Overview
• Learning phase (at EAR installation*): kernel execution → coefficients computation → coefficients database
• Execution phase (loaded with the application): Dynamic Pattern Detection finds the outer loop → compute power and performance metrics for the outer loop → read the energy policy → calculate the optimal frequency → set the CPU frequency
* or every time the cluster configuration is modified (more memory per node, new processors, ...)
31. 2018 Lenovo - All rights reserved.
PUE, ITUE, TUE and ERE
• Power Usage Effectiveness (PUE) measures how much of the power a data center draws is not used for computing.
  – It is the ratio of total facility power to the power delivered to the IT equipment.
  – It does not take into account how effectively a server uses the power it gets.
  – The ideal value is 1.0.
• IT Usage Effectiveness (ITUE) measures how much of the power a system draws is not used for computing.
  – It is the ratio of the power of the IT equipment to the power of the computing components.
  – Multiplied with the PUE, it gives the Total-Power Usage Effectiveness (TUE).
  – The ideal value is 1.0.
• Energy Reuse Effectiveness (ERE) accounts for reuse of the power dissipated by the computer.
  – It is the ratio of total facility power, minus the reused power, to the power delivered to the IT equipment.
  – The ideal ERE is 0.0. With no reuse, ERE = PUE.

PUE = Total Facility Power / Total IT Power
ITUE = Total IT Power / Total Compute Power
TUE = PUE x ITUE
ERE = (Total Facility Power - Total Reuse Power) / Total IT Power
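A short worked example with invented numbers shows how the four metrics relate:

    # hypothetical data center; all figures are made up for illustration
    facility_kw = 1500.0   # total facility power
    it_kw       = 1000.0   # power delivered to the IT equipment
    compute_kw  =  900.0   # power reaching the computing components
    reuse_kw    =  300.0   # dissipated heat put to use elsewhere

    pue  = facility_kw / it_kw                # 1.50
    itue = it_kw / compute_kw                 # ~1.11
    tue  = pue * itue                         # ~1.67
    ere  = (facility_kw - reuse_kw) / it_kw   # 1.20 (= PUE when reuse_kw is 0)
    print(pue, itue, tue, ere)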
32. Lenovo Cooling Technologies
2018 Lenovo - All rights reserved.
Air Cooled – choose for the broadest choice of customizable options (PUE ~2.0 – 1.5, ERE ~2.0 – 1.5)
• Standard air flow with internal fans, cooled with the room climatization
• Broadest choice of configurable options supported
• Relatively inefficient cooling
Air Cooled w/ Rear Door Heat Exchanger – choose for increased energy efficiency with broad choice (PUE ~1.4 – 1.2, ERE ~1.4 – 1.2)
• Air cooled, but heat removed with RDHX through chilled water
• Retains high flexibility
• Enables extremely tight rack placement
• Potentially room neutral
Direct Water Cooled – choose for max performance and high energy efficiency (PUE <=1.1, ERE <=1.1)
• Most heat removed by onboard waterloop with up to 50°C water temperature
• Supports highest TDP CPU at densest footprint
• Higher performance
• Free cooling
Direct Water Cooled w/ Adsorption Chilling – choose for max performance and max energy efficiency (PUE <=1.1, ERE <1)
• Waste heat reused to generate coldness to cool non-DWC components
• Retains highest TDP, footprint and performance
• Potentially all system heat covered through DWC
33. 2018 Lenovo - All rights reserved.
Value of Direct Water Cooling with Free Cooling
• Reduced noise level in the data center
• Reduced server power consumption
  – Lower processor power consumption (~ 5%)
  – No fan per node (~ 4%)
• Reduced cooling power consumption
  – At 45°C, free cooling all year long (~ 25%)
• Energy-aware scheduling
  – Only CPU-bound jobs get max frequency (~ 5%)
• CAPEX savings
  – Fewer conventional chillers for the computing system
Total energy savings: 35-40% (a rough additive tally follows below)
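For what it is worth, a simple additive tally of the quoted items (my reading; the slide does not state how the contributions combine) lands inside the 35-40% band:

    savings_pct = {"lower processor power": 5, "no node fans": 4,
                   "free cooling at 45C": 25, "energy-aware scheduling": 5}
    print(sum(savings_pct.values()), "% total")  # 39%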
34. Adsorption Chilling
A method of using solid materials for cooling via evaporation.
• An adsorption chiller consists of two identical vacuum containers, each containing two heat exchangers – and water:
  – Adsorber (desorber): coated with the adsorbent (e.g. zeolite)
  – Evaporator (condenser): evaporation and condensation of water
• The adsorption process has 2 phases:
  – In the adsorption phase, the water on the evaporator is taken in by the coated material in the adsorber. Through that evaporation, the evaporator and the water flowing through it cool down, while the adsorber fills with water vapor and heats up the water flowing through it. When the adsorber is saturated, the process is reversed.
  – In the desorption phase, hot water is passed through the adsorber, which now acts as a desorber: it desorbs the water vapor and dispenses it to the evaporator, which at that point acts as a condenser, condensing the vapor back to water. Again, the process is reversed when the adsorber is emptied.
[Diagram – Module 1: desorption (hot water from compute racks, 52°/46°C) and condensation (cooling water to hybrid cooling tower, 26°/32°C). Module 2: adsorption (cooling water to hybrid cooling tower, 26°/32°C) and evaporation (chilled water to storage etc. racks, 23°/20°C)]
35. 2018 Lenovo - All rights reserved.
Value of Direct Water Cooling with Adsorption Chiller
• Reduced noise level in the data center
• Maximum TDP CPU choice
• Reduced server power consumption
  – Lower processor power consumption (~ 5%)
  – No fan per node (~ 4%)
• Reduced cooling power consumption
  – At 50°C, free cooling all year long (~ 25%)
  – Heat reuse generates 600kW of cooling capacity (> 5%)
• Energy Aware Runtime
  – Frequency optimization during runtime (~ 5%)
• CAPEX savings
  – Fewer conventional chillers for the computing system
Total energy savings: 40-50%