This document discusses how AI is transforming HPC infrastructure. It explains that cognitive systems use distributed deep learning across HPC clusters to reduce training times, and it outlines IBM's hardware portfolio expansion for AI training, inference, and storage. The document also covers AI software stacks such as Watson Machine Learning Community Edition, which use containers and universal base images to simplify deployment.
In this deck from the Argonne Training Program on Extreme-Scale Computing 2019, Howard Pritchard from LANL and Simon Hammond from Sandia present: NNSA Explorations: ARM for Supercomputing.
"The Arm-based Astra system at Sandia will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science.
"By introducing Arm processors with the HPE Apollo 70, a purpose-built HPC architecture, we are bringing powerful elements, like optimal memory performance and greater density, to supercomputers that existing technologies in the market cannot match,” said Mike Vildibill, vice president, Advanced Technologies Group, HPE. “Sandia National Laboratories has been an active partner in leveraging our Arm-based platform since its early design, and featuring it in the deployment of the world’s largest Arm-based supercomputer, is a strategic investment for the DOE and the industry as a whole as we race toward achieving exascale computing.”
Watch the video: https://wp.me/p3RLHQ-l29
Learn more: https://insidehpc.com/2018/06/arm-goes-big-hpe-builds-petaflop-supercomputer-sandia/
and
https://extremecomputingtraining.anl.gov/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Evolution of Supermicro GPU Server Solution (NVIDIA Taiwan)
Supermicro provides energy-efficient server solutions optimized for GPU computing. Their portfolio includes 1U and 4U servers that support up to 10 GPUs, delivering the highest rack-level and node-level GPU density. Their new generation of solutions is optimized for machine learning applications using NVIDIA Pascal GPUs, with features like NVLink for high-bandwidth GPU interconnect and direct low-latency data access between GPUs. These solutions deliver the highest performance per watt for parallel workloads like machine learning training.
In this deck from the Perth HPC Conference, Rob Farber from TechEnablement presents: AI is Impacting HPC Everywhere.
"The convergence of AI and HPC has created a fertile venue that is ripe for imaginative researchers — versed in AI technology — to make a big impact in a variety of scientific fields. From new hardware to new computational approaches, the true impact of deep- and machine learning on HPC is, in a word, “everywhere”. Just as technology changes in the personal computer market brought about a revolution in the design and implementation of the systems and algorithms used in high performance computing (HPC), so are recent technology changes in machine learning bringing about an AI revolution in the HPC community. Expect new HPC analytic techniques including the use of GANs (Generative Adversarial Networks) in physics-based modeling and simulation, as well as reduced precision math libraries such as NLAFET and HiCMA to revolutionize many fields of research. Other benefits of the convergence of AI and HPC include the physical instantiation of data flow architectures in FPGAs and ASICs, plus the development of powerful data analytic services."
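To see why reduced-precision numerics are a research topic in their own right rather than a free speedup, consider a toy illustration (plain NumPy, not NLAFET or HiCMA themselves): half precision halves memory traffic, but a naive half-precision accumulator silently stalls.

```python
import numpy as np

# Naive fp16 accumulation: once the running sum reaches ~256, adding 0.1
# rounds to zero and the sum stops growing -- exactly the kind of error a
# reduced-precision library must manage, e.g. by accumulating in a wider type.
acc = np.float16(0.0)
for _ in range(10000):
    acc += np.float16(0.1)
print(acc)            # stalls near 256.0
print(10000 * 0.1)    # true value: 1000.0
```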
Learn more: http://www.techenablement.com/
and
http://hpcadvisorycouncil.com/events/2019/australia-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from Switzerland HPC Conference, Gunter Roeth from NVIDIA presents: Deep Learning on the SaturnV Cluster.
"Machine Learning is among the most important developments in the history of computing. Deep learning is one of the fastest growing areas of machine learning and a hot topic in both academia and industry. It has dramatically improved the state-of-the-art in areas such as speech recognition, computer vision, predicting the activity of drug molecules, and many other machine learning tasks. The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster, (#28 on the November 2016 Top500 list) to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters."
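The "multiple layers of increasing abstraction" idea reduces, at its smallest, to stacked matrix multiplies with nonlinearities in between. A minimal sketch (toy NumPy network; layer sizes are chosen arbitrarily for illustration):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 784))   # layer 1: raw pixels -> low-level features
W2 = rng.normal(size=(10, 64))    # layer 2: features -> task-level scores

def forward(x):
    """x: a flattened 28x28 image, shape (784,)."""
    h = relu(W1 @ x)   # first level of abstraction
    return W2 @ h      # second, task-level representation

print(forward(rng.normal(size=784)).shape)  # (10,)
```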
Watch the video: http://wp.me/p3RLHQ-gDv
Learn more: http://www.nvidia.com/object/dgx-saturnv.html
and
http://hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Trends in Systems and How to Get Efficient Performance (inside-BigData.com)
In this video from Switzerland HPC Conference, Martin Hilgeman from Dell presents: HPC Workload Efficiency and the Challenges for System Builders.
"With all the advances in massively parallel and multi-core computing with CPUs and accelerators it is often overlooked whether the computational work is being done in an efficient manner. This efficiency is largely determined at the application level and therefore puts the responsibility of sustaining a certain performance trajectory into the hands of the user. It is observed that the adoption rate of new hardware capabilities is decreasing and leads to a feeling of diminishing returns. This presentation shows the well-known laws of parallel performance from the perspective of a system builder. It also covers, through the use of real case studies, examples of how to program for energy-efficient parallel application performance."
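The "well-known laws of parallel performance" referenced here start with Amdahl's law, which makes the diminishing-returns feeling quantitative. A one-function sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: p = parallel fraction of the runtime, n = workers.
    The serial fraction (1 - p) bounds the achievable speedup at 1/(1 - p)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel code tops out below 20x, no matter how many cores:
for n in (16, 256, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))  # 9.1, 18.6, 19.6
```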
Watch the video: http://wp.me/p3RLHQ-gIS
Learn more: http://dell.com
and
http://www.hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses using temporal shift modules (TSM) for efficient video recognition. TSM enables temporal modeling in 2D CNNs at no additional computation cost. TSM models achieve better performance than 3D CNNs and previous methods while using less computation, and can be used for applications like online video understanding, low-latency deployment on edge devices, and large-scale distributed training on supercomputers.
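The shift operation itself is trivial to express, which is the point: moving a slice of channels one step backward and one step forward along the time axis gives a 2D CNN a temporal receptive field at zero extra FLOPs. A minimal NumPy sketch (the 1/8 shift proportion is the commonly cited setting, assumed here):

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """x: activations of shape (T, C, H, W). Shift C/shift_div channels
    back in time, another C/shift_div forward, leave the rest untouched."""
    T, C, H, W = x.shape
    fold = C // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # pull from the next frame
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # pull from the previous frame
    out[:, 2 * fold:] = x[:, 2 * fold:]              # untouched channels
    return out

x = np.random.rand(8, 64, 14, 14)
print(temporal_shift(x).shape)  # (8, 64, 14, 14) -- no extra compute, data just moves
```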
In this deck, Gilad Shainer from Mellanox announces the world’s first HDR 200Gb/s data center interconnect solutions. "These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017."
Watch the video presentation: http://insidehpc.com/2016/11/hdr-infiniband/
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/altera/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Bill Jenkins, Senior Product Specialist for High Level Design Tools at Intel, presents the "Accelerating Deep Learning Using Altera FPGAs" tutorial at the May 2016 Embedded Vision Summit.
While large strides have recently been made in the development of high-performance systems for neural networks based on multi-core technology, significant challenges in power, cost, and performance scaling remain. Field-programmable gate arrays (FPGAs) are a natural choice for implementing neural networks because they can combine computing, logic, and memory resources in a single device. Intel's Programmable Solutions Group has developed a scalable convolutional neural network reference design for deep learning systems using the OpenCL programming language, built with our SDK for OpenCL. The design performance is being benchmarked using several popular CNN benchmarks: CIFAR-10, ImageNet and KITTI.
Building the CNN with OpenCL kernels allows true scaling of the design from smaller to larger devices and from one device generation to the next. New designs can be sized using different numbers of kernels at each layer. Performance scaling from one generation to the next also benefits from architectural advancements, such as floating-point engines and frequency scaling. Thus, you achieve greater than linear performance and performance per watt scaling with each new series of devices.
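The portability claim rests on OpenCL's model: the same kernel source is compiled at runtime for whatever device is present, so a design can be resized across parts and generations. A toy host-plus-kernel sketch using pyopencl (a trivial kernel, not the CNN reference design itself):

```python
import numpy as np
import pyopencl as cl

# One kernel source, compiled for whatever device the context holds --
# FPGA, GPU, or CPU -- which is what lets a design scale across devices.
KERNEL = """
__kernel void scale(__global const float *x, __global float *y, const float a) {
    int i = get_global_id(0);
    y[i] = a * x[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

x = np.arange(16, dtype=np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)
prog.scale(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))

y = np.empty_like(x)
cl.enqueue_copy(queue, y, y_buf)
print(y)  # x doubled on the device
```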
The GIST AI-X Computing Cluster provides powerful accelerated computation resources for machine learning using GPUs and other hardware. It includes DGX A100 and DGX-1V nodes with 8 NVIDIA A100 or V100 GPUs each, connected by high-speed networking. The cluster uses Singularity containers, Slurm scheduling, and Ceph storage. It allows researchers to request resources, build container images, and run distributed deep learning jobs across multiple GPUs.
Preparing to program Aurora at Exascale - Early experiences and future directions (inside-BigData.com)
In this deck from IWOCL / SYCLcon 2020, Hal Finkel from Argonne National Laboratory presents: Preparing to program Aurora at Exascale - Early experiences and future directions.
"Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. Aurora promises to take scientific computing to a whole new level, and scientists and engineers from many different fields will take advantage of Aurora’s unprecedented computational capabilities to push the boundaries of human knowledge. In addition, Aurora’s support for advanced machine-learning and big-data computations will enable scientific workflows incorporating these techniques along with traditional HPC algorithms. Programming the state-of-the-art hardware in Aurora will be accomplished using state-of-the-art programming models. Some of these models, such as OpenMP, are long-established in the HPC ecosystem. Other models, such as Intel’s oneAPI, based on SYCL, are relatively-new models constructed with the benefit of significant experience. Many applications will not use these models directly, but rather, will use C++ abstraction libraries such as Kokkos or RAJA. Python will also be a common entry point to high-performance capabilities. As we look toward the future, features in the C++ standard itself will become increasingly relevant for accessing the extreme parallelism of exascale platforms.
This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date. oneAPI/SYCL and OpenMP are both critical models in these efforts, and while the ecosystem for Aurora has yet to mature, we’ve already had a great deal of success. Importantly, we are not passive recipients of programming models developed by others. Our team works not only with vendor-provided compilers and tools, but also develops improved open-source LLVM-based technologies that feed both open-source and vendor-provided capabilities. In addition, we actively participate in the standardization of OpenMP, SYCL, and C++. To conclude, I’ll share our thoughts on how these models can best develop in the future to support exascale-class systems."
Watch the video: https://wp.me/p3RLHQ-lPT
Learn more: https://www.iwocl.org/iwocl-2020/conference-program/
and
https://www.anl.gov/topic/aurora
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/arm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-iodice
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Gian Marco Iodice, Software Engineer at ARM, presents the "Using SGEMM and FFTs to Accelerate Deep Learning" tutorial at the May 2016 Embedded Vision Summit.
Matrix Multiplication and the Fast Fourier Transform are numerical foundation stones for a wide range of scientific algorithms. With the emergence of deep learning, they are becoming even more important, particularly as use cases extend into mobile and embedded devices. In this presentation, Iodice discusses and analyzes how these two key, computationally intensive algorithms can be used to gain significant performance improvements for convolutional neural network (CNN) implementations.
After a brief introduction to the nature of CNN computations, Iodice explores the use of GEMM (General Matrix Multiplication) and mixed-radix FFTs to accelerate 3D convolution. He shows examples of OpenCL implementations of these functions and highlights their advantages, limitations and trade-offs. Central to the techniques explored is an emphasis on cache-efficient memory accesses and the crucial role of reduced-precision data types.
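The GEMM route works by "lowering" the convolution: unfold every receptive field into a column (im2col), then one large matrix multiply computes all filter responses at once, with the cache-friendly access patterns the talk emphasizes. A minimal NumPy sketch (stride 1, no padding; shapes are illustrative assumptions):

```python
import numpy as np

def conv2d_as_gemm(x, w):
    """x: input of shape (C, H, W); w: filters of shape (F, C, KH, KW)."""
    C, H, W = x.shape
    F, _, KH, KW = w.shape
    OH, OW = H - KH + 1, W - KW + 1
    # im2col: each output position becomes one column of patch values.
    cols = np.empty((C * KH * KW, OH * OW))
    for i in range(OH):
        for j in range(OW):
            cols[:, i * OW + j] = x[:, i:i + KH, j:j + KW].ravel()
    # A single GEMM replaces the whole convolution.
    return (w.reshape(F, -1) @ cols).reshape(F, OH, OW)

x = np.random.rand(3, 8, 8)
w = np.random.rand(4, 3, 3, 3)
print(conv2d_as_gemm(x, w).shape)  # (4, 6, 6)
```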
This document discusses current trends in high performance computing. It begins with an introduction to high performance computing and its applications in science, engineering, business analysis, and more. It then discusses why high performance computing is needed due to changes in scientific discovery, the need to solve larger problems, and modern business needs. The document also discusses the top 500 supercomputers in the world and provides examples of some of the most powerful systems. It then covers performance development trends and challenges in increasing processor speeds. The rest of the document discusses parallel computing approaches using multi-core and many-core architectures, as well as cluster, grid, and cloud computing models for high performance.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/ceva/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-siegel
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Yair Siegel, Director of Segment Marketing at CEVA, presents the "Fast Deployment of Low-power Deep Learning on CEVA Vision Processors" tutorial at the May 2016 Embedded Vision Summit.
Image recognition capabilities enabled by deep learning are benefitting more and more applications, including automotive safety, surveillance and drones. This is driving a shift towards running neural networks inside embedded devices. But, there are numerous challenges in squeezing deep learning into resource-limited devices. This presentation details a fast path for taking a neural network from research into an embedded implementation on a CEVA vision processor core, making use of CEVA’s neural network software framework. Siegel explains how the CEVA framework integrates with existing deep learning development environments like Caffe, and how it can be used to create low-power embedded systems with neural network capabilities.
dCUDA: Distributed GPU Computing with Hardware Overlap (inside-BigData.com)
Torsten Hoefler from ETH Zurich presented this deck at the Switzerland HPC Conference.
"Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access with target notification. To hide instruction pipeline latencies, CUDA programs over-decompose the problem and over-subscribe the device by running many more threads than there are hardware execution units. Whenever a thread stalls, the hardware scheduler immediately proceeds with the execution of another thread ready for execution. This latency-hiding technique is key to make best use of the available hardware resources. With dCUDA, we apply latency hiding at cluster scale to automatically overlap computation and communication. Our benchmarks demonstrate perfect overlap for memory bandwidth-bound tasks and good overlap for compute-bound tasks."
Watch the video presentation: http://wp.me/p3RLHQ-gCB
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
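A quick sanity check on how those two averages relate (illustrative arithmetic only, not IBM's published breakdown): if per-core throughput grows about 30% but per-socket throughput triples, the remaining factor must come from scaling up cores and threads per socket.

```python
core_gain = 1.30    # ~30% average per-core gain over POWER9 (from the text)
socket_gain = 3.0   # ~3x average per-socket gain (from the text)

implied_parallel_scaling = socket_gain / core_gain
print(f"implied core/thread scaling per socket: ~{implied_parallel_scaling:.2f}x")
# ~2.31x must come from more cores and threads, not from faster cores.
```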
How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems? (inside-BigData.com)
In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems?
"This talk will start with an overview of challenges being faced by the AI community to achieve high-performance, scalable and distributed DNN training on modern HPC systems with both scale-up and scale-out strategies. After that, the talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented."
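As a concrete picture of what MPI-driven, data-parallel DNN training means at its core, here is a minimal mpi4py sketch (toy gradients and hypothetical sizes; real frameworks fuse and pipeline these steps):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

weights = np.zeros(1000)            # toy model, replicated on every rank

for step in range(10):
    # Each rank computes gradients on its own data shard (stubbed here).
    local_grad = np.random.rand(1000)
    # Allreduce sums gradients across ranks; dividing gives the global average.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= size
    weights -= 0.01 * global_grad   # identical SGD update on every rank

# Run with e.g.: mpirun -np 4 python train_sketch.py
```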
Watch the video: https://youtu.be/LeUNoKZVuwQ
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Fugaku is a Japanese supercomputer utilizing Fujitsu's A64FX CPU. It was designed through an iterative co-design process between application developers and Fujitsu to achieve over 100x performance gain compared to the previous K computer within a 30-40MW power budget. The A64FX CPU utilizes 7nm technology and features 48 Arm-based cores with high bandwidth memory to achieve superior floating point and memory bandwidth performance efficiently. Early evaluations show Fugaku meeting performance and power targets and outperforming x86 processors for real applications.
This document provides information about SGI's HPC customers and systems. It summarizes installations including a 1.009 PFlop system at NASA, a 339 TFlop system in Germany, and a 155.7 TFlop system in Norway. It also outlines SGI's key markets like HPC, cloud storage, big data, and commercial scientific applications. In summary, SGI is a leader in technical computing supporting mission critical applications through large-scale systems for research, industry, and government.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/altera/embedded-vision-training/videos/pages/may-2015-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Deshanand Singh, Director of Software Engineering at Altera, presents the "Efficient Implementation of Convolutional Neural Networks using OpenCL on FPGAs" tutorial at the May 2015 Embedded Vision Summit.
Convolutional neural networks (CNN) are becoming increasingly popular in embedded applications such as vision processing and automotive driver assistance systems. The structure of CNN systems is characterized by cascades of FIR filters and transcendental functions. FPGA technology offers a very efficient way of implementing these structures by allowing designers to build custom hardware datapaths that implement the CNN structure. One challenge of using FPGAs revolves around the design flow that has been traditionally centered around tedious hardware description languages.
In this talk, Deshanand gives a detailed explanation of how CNN algorithms can be expressed in OpenCL and compiled directly to FPGA hardware. He gives detail on code optimizations and provides comparisons with the efficiency of hand-coded implementations.
Macromolecular crystallography is an experimental technique for exploring the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While development of the technique has until now been limited by the performance of scientific instruments, computing performance is becoming a key limitation. In my presentation I will present the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experiences in applying conventional hardware to the task and why this attempt failed. I will then present how an IC 922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science for users of the Swiss Light Source.
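To put the 18 GB/s figure in perspective, back-of-the-envelope arithmetic (assuming continuous operation, which real beamlines do not sustain) already lands in petabyte-per-week territory:

```python
rate_gb_s = 18
seconds_per_day = 24 * 3600

tb_per_day = rate_gb_s * seconds_per_day / 1000
print(f"~{tb_per_day:,.0f} TB/day")              # ~1,555 TB/day
print(f"~{tb_per_day * 7 / 1000:.1f} PB/week")   # ~10.9 PB/week
```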
This document provides a summary of the IBM POWER9 AC922 system with 6 GPUs. It includes details on the POWER9 processor which features 24 cores per die, an enhanced cache hierarchy up to 120MB, and on-chip accelerators. The AC922 system utilizes two POWER9 processors, supports up to 512GB memory via 16 DDR4 DIMMs, and has three Nvidia Volta GPUs per socket connected via NVLink 2.0. It also discusses the POWER ISA v3.0 instruction set and how POWER9 serves as a premier acceleration platform with technologies like CAPI, OpenCAPI, and NVLink.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/xilinx/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Nick Ni, Director of Product Marketing at Xilinx, presents the "Xilinx AI Engine: High Performance with Future-proof Architecture Adaptability" tutorial at the May 2019 Embedded Vision Summit.
AI inference demands orders-of-magnitude more compute capacity than what today’s SoCs offer. At the same time, neural network topologies are changing too quickly to be addressed by ASICs that take years to go from architecture to production. In this talk, Ni introduces the Xilinx AI Engine, which complements the dynamically-programmable FPGA fabric to enable ASIC-like performance via custom data flows and a flexible memory hierarchy. This combination provides an orders-of-magnitude boost in AI performance along with the hardware architecture flexibility needed to quickly adapt to rapidly evolving neural network topologies.
A Platform for Accelerating Machine Learning Applications (NVIDIA Taiwan)
Robert Sheen from HPE gave a presentation on machine learning applications and accelerating deep learning. He provided a quick introduction to neural networks, discussing their structure and how they are inspired by biological neurons. Deep learning requires high performance computing due to its computational intensity during training. Popular deep learning frameworks like CogX were also discussed, which provide tools and libraries to help build and optimize neural networks. Finally, several enterprise use cases for machine learning and deep learning were highlighted, such as in finance, healthcare, security, and geospatial applications.
In this slidecast, Bill Mannel from SGI presents an update on the company's innovative HPC solutions.
Learn more at: http://sgi.com
Watch the presentation video: http://insidehpc.com/2013/07/01/slidecast-sgi-product-update-for-june-2013/
New opportunities in photon science with high-speed X-ray imaging detectors (RCCSRENKEI)
Takaki Hatsui from RIKEN presented on new opportunities in photon science enabled by the development of the CITIUS high-speed X-ray imaging detector. The CITIUS detector can achieve frame rates of 17.4 kHz with 1800 photons per frame and high dynamic range, a significant advance over conventional detectors. However, the large data output from such detectors poses challenges that will require edge computing and high performance computing to process and store the data. Major synchrotron facilities expect data outputs to reach tens to hundreds of petabytes per year, requiring proper infrastructure to manage exascale data from next-generation photon science experiments.
SAMOS 2018: LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing (LEGATO project)
The LEGaTO project received funding from the European Union's Horizon 2020 programme to create a software stack that optimizes for energy efficiency on heterogeneous computing platforms. The project aims to start with mature European software and optimize it to support energy-efficient computation on hardware with CPUs, GPUs, FPGAs, and FPGA-based dataflow engines. Key partners include universities and companies developing hardware and software. The project will develop programming models, runtime systems, and use cases in areas like healthcare, smart homes, and machine learning to demonstrate the stack.
1. Building exascale computers requires moving to sub-nanometer scales and steering individual electrons to solve problems more efficiently.
2. Moving data is a major challenge, as moving data off-chip uses 200x more energy than computing with it on-chip (see the arithmetic sketch after this list).
3. Future computers should optimize for data movement at all levels, from system design to microarchitecture, to minimize energy usage.
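A small arithmetic sketch of what the 200x figure implies (the specific energy numbers are hypothetical; only the ratio comes from the text): an operand fetched from off-chip must be reused many times before compute energy dominates.

```python
pj_per_op_on_chip = 1.0       # hypothetical on-chip compute energy (pJ)
pj_per_word_off_chip = 200.0  # moving a word off-chip: ~200x more (from the text)

break_even_reuse = pj_per_word_off_chip / pj_per_op_on_chip
print(f"each off-chip word needs ~{break_even_reuse:.0f}x reuse to amortize its cost")
```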
The LEGaTO project received funding from the EU's Horizon 2020 program to develop a heterogeneous hardware platform called RECS for cloud-to-edge computing. RECS uses a modular microserver approach integrating CPUs, GPUs, FPGAs, and SoCs. It allows for flexible node composition through virtual functions to enable different compute and communication topologies.
We live in an era where the atomic building elements of silicon computers, e.g. transistors and wires, are no longer visible using traditional optical microscopes, and their sizes are measured in just tens of Angstroms. In addition, power dissipation per unit volume is bounded by the laws of physics, which has resulted, among other things, in stagnating processor clock frequencies. Adding more and more processor cores that perform simpler and simpler tasks, in an attempt to efficiently fill the available on-chip area, seems to be the current trend taken by the industry.
Harnessing the virtual realm for successful real world artificial intelligence (Alison B. Lowndes)
Artificial intelligence is impacting all areas of society, from healthcare and transportation to smart cities and energy. This talk describes how NVIDIA invests in both internal pure research and accelerated computation to enable its diverse customer base across gaming and extended reality, graphics, AI, robotics, simulation, high performance scientific computing, healthcare, and more. You will be introduced to the GPU computing platform and shown successfully deployed real-world applications, as well as a glimpse into the current state of the art across academia, enterprise, and startups.
This slide deck explains the detailed hardware architecture of the Summit supercomputer, including its CPUs, GPUs, interconnect networks, and applications.
The document discusses plans to establish an institutional high performance computing (HPC) facility at North-West University. It outlines the technical goals of building a Beowulf cluster to link existing departmental clusters and integrate with national and international computational grids. It also discusses management principles for the new HPC facility to ensure sustainability, efficiency, reliability, availability and high performance.
Backend.AI Technical Introduction (19.09 / 2019 Autumn) (Lablup Inc.)
This slide introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology that lets many containers share a single physical GPU at the same time, as if each had its own
* NVIDIA GPU Cloud integrations
* Enterprise features
The document summarizes several AI accelerators for cloud datacenters including Google TPU, HabanaLabs Gaudi, Graphcore IPU, and Baidu Kunlun. It discusses their architectures, performance, and how they address challenges in datacenters like workload diversity and energy efficiency. The accelerators use specialized hardware like systolic arrays and FPGA/ASIC designs to achieve much higher performance and efficiency than CPUs and GPUs for AI tasks like training deep learning models.
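To make "systolic array" concrete: operands flow through a grid of multiply-accumulate cells, skewed in time so that a[i, k] and b[k, j] meet at cell (i, j) with no global memory traffic in between. A cycle-by-cycle simulation sketch (output-stationary variant, assumed here for illustration):

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing A @ B.
    Operand pair (a[i,k], b[k,j]) reaches cell (i, j) at cycle i + j + k;
    each cell performs at most one multiply-accumulate per cycle."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N))
    for t in range(M + N + K - 2):          # total pipeline cycles
        for i in range(M):
            for j in range(N):
                k = t - i - j               # which operand pair arrives now
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]
    return C

A, B = np.random.rand(4, 5), np.random.rand(5, 3)
print(np.allclose(systolic_matmul(A, B), A @ B))  # True
```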
VEDLIoT at FPL'23: Accelerators for Heterogenous Computing in AIoT (VEDLIoT Project)
VEDLIoT took part in the 33rd International Conference on Field-Programmable Logic and Applications (FPL 2023), in Gothenburg, Sweden. René Griessl (UNIBI) presented VEDLIoT and our latest achievements in the Research Projects Event session, giving a presentation entitled "Accelerators for Heterogenous Computing in AIoT".
Dell NVIDIA AI Powered Transformation Webinar (Bill Wong)
This document provides an overview and summary of Dell Technologies' and NVIDIA's partnership to provide artificial intelligence solutions and infrastructure. Some key points:
- Dell and NVIDIA are working together to help customers address challenges of digital transformation and data analytics through AI-powered solutions.
- NVIDIA's new DGX A100 system provides unprecedented performance for AI workloads through its 8 A100 GPUs and high-speed networking.
- The DGX A100 allows for different workloads like training, inference and analytics to run simultaneously on a single system through its multi-instance GPU capability.
- Dell and NVIDIA's solutions aim to help customers address business trends like remote work
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/12/making-edge-ai-inference-programming-easier-and-flexible-a-presentation-from-texas-instruments/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Manisha Agrawal, Product Marketing Engineer at Texas Instruments, presents the “Making Edge AI Inference Programming Easier and Flexible” tutorial at the September 2020 Embedded Vision Summit.
Deploying an AI model at the edge doesn’t have to be challenging—but it often is. Embedded processing vendors have unique sets of software tools for deploying models. It takes time and investment to learn to use proprietary tools and to optimize the edge implementation to achieve your desired performance. While embedded vendors are providing proprietary tools for model deployment, the open source community is also advancing to standardize the model deployment process and make it hardware agnostic.
Texas Instruments has adopted open source software frameworks to make model deployment easier and more flexible. In this talk, you will learn about the struggles developers face when deploying models for inference on embedded processors and how TI addresses these critical software development challenges. You will also discover how TI enables faster time-to-market using a flexible open source development approach without the need to compromise performance, accuracy or power requirements.
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This document discusses NVIDIA's chips for automotive, HPC, and networking. For automotive, it describes the Tegra line of SOC chips used in cars like Tesla, and upcoming chips like Orin and Atlan. For HPC, it introduces the upcoming Grace CPU designed for giant AI models. For networking, it presents the BlueField line of data processing units (DPUs) including the new 400Gbps BlueField-3 chip and the DOCA software framework. The document emphasizes that NVIDIA's GPU, CPU, and DPU chips make yearly leaps while sharing a common architecture.
uCluster (micro-Cluster) is a toy computer cluster composed of 3 Raspberry Pi boards, 2 NVIDIA Jetson Nano boards and 1 NVIDIA Jetson TX2 board.
The presentation shows how to build the uCluster and focuses on a few interesting technologies worth considering when building a cluster at any scale.
The project is for educational purposes and tinkering with various technologies.
Real time machine learning proposers day v3 - mustafa sarac
This document discusses DARPA's Real Time Machine Learning (RTML) program. The objective is to develop hardware generators and compilers that can automatically create application-specific integrated circuits for machine learning from high-level code. This would allow no-human-in-the-loop creation of efficient neural network hardware. The program has two phases: phase 1 develops an ML hardware compiler, and phase 2 demonstrates RTML systems for applications like wireless communication and image processing. Key goals are high performance, low power consumption, and support for a variety of neural network architectures and machine learning techniques.
OpenPOWER Acceleration of HPCC Systems - HPCC Systems
JT Kellington, IBM and Allan Cantle, Nallatech present at the 2015 HPCC Systems Engineering Summit Community Day about porting HPCC Systems to the POWER8-based ppc64el architecture.
The document discusses IBM PowerAI LMS and DDL. PowerAI LMS allows deep learning models and data to utilize both GPU and system memory for processing. This enables support for larger models and higher resolution data. PowerAI DDL is a communication library that enables distributed deep learning across multiple servers with GPUs. It works with frameworks like TensorFlow to improve scaling efficiency compared to other solutions. A demo is then presented on the performance benefits of PowerAI LMS and DDL.
Similar to DATE 2020: Design, Automation and Test in Europe Conference
Scrooge Attack: Undervolting ARM Processors for Profit - LEGATO project
A malicious cloud provider can intentionally undervolt its cloud infrastructure for additional savings on the electricity bill. ARM processors are low-power processors whose undervolting can yield substantial energy savings for cloud providers. In our scenario we consider a scrooge cloud provider that undervolts its ARM infrastructure for profit. The instances can be undervolted in a stealthy manner by avoiding critical voltage regions. Applications running under critical undervolting conditions can malfunction, and these conditions can be exploited by a cloud user to uncover the undervolted instances. For this novel attack scenario we present a detection method for cloud users. The detection method injects faults non-selectively into processes with the intent to crash the cloud instance. Even if the cloud provider can spoof the temperature and voltage readings of the processor, the cloud user is capable of uncovering undervolted instances. By crashing instances simultaneously using the detection method, the cloud user remains covered by the service level agreement and exposes the scrooge cloud provider.
A practical approach for updating an integrity-enforced operating system - LEGATO project
The document describes an approach for securely updating software on an integrity-enforced operating system. It involves a trusted software repository (TSR) that sanitizes software packages before they are installed. The sanitization process modifies installation scripts and predicts OS configuration changes. The TSR also digitally signs packages. When a package is installed, its signature and files are measured and verified by the integrity monitoring system to ensure the update does not violate integrity.
TEEMon: A continuous performance monitoring framework for TEEs - LEGATO project
LEGaTO paper presented at ACM Middleware 2020 by Robert Krahn, Donald Dragoti, Franz Gregor, Do Le Quoc, Valerio Schiavoni, Pascal Felber, Clenimar Souza, Andrey Brito and Christof Fetzer
LEGaTO: Low-Energy Heterogeneous Computing - Use of AI in the project - LEGATO project
Presentation by Osman Unsal and Pirah Noor Soomro at the webinar AI4EU WebCafé: 'Energy-efficient AI, a perspective from the LEGaTO project' on 28 October 2020
TZ4Fabric: Executing Smart Contracts with ARM TrustZone - LEGATO project
Paper presented by Christian Göttel at SRDS'20.
Abstract: Transparency in blockchains can be an advantage and a disadvantage, in particular if confidential information such as assets or business interactions are exposed. There are no confidentiality guarantees in blockchain systems to protect the logic of a smart contract or the data it processes. One solution to this problem can be trusted execution environments (TEE) which are an emerging technology for example available in edge or mobile-grade processors (e.g., ARM TrustZone) or in server-grade processors (e.g., Intel SGX). In this presentation we introduce TZ4Fabric, an extension of Hyperledger Fabric which leverages ARM TrustZone to shield the execution of smart contracts from compromised systems and powerful attackers. TZ4Fabric exploits the open source OP-TEE framework to enable ARM TrustZone features. We evaluate our prototype on the Raspberry Pi platform and highlight energy and performance trade-offs.
Infection Research with Maxeler Dataflow Computing - LEGATO project
Presentation given by Tobias Becker (Maxeler) at the LEGaTO Final Event: Low-Energy Heterogeneous Computing Workshop on 4 September 2020
This event was collocated with FPL 2020
Presentation given by Nils Kucza (Bielefeld University) at the LEGaTO Final Event: Low-Energy Heterogeneous Computing Workshop on 4 September 2020
This event was collocated with FPL 2020
FPGA Undervolting and Checkpointing for Energy-Efficiency and Error-Resiliency - LEGATO project
Tutorial by Behzad Salami, Osman Unsal and Leonardo Bautista at 30th International Conference on Field-Programmable Logic and Applications (FPL2020), 3 September 2020
Scheduling Task-parallel Applications in Dynamically Asymmetric Environments - LEGATO project
Presentation by Jing Chen and Pirah Noor Soomro (Chalmers University of Technology) at the 16th International Workshop on Scheduling and Resource Management for Parallel and Distributed Systems (SRMPDS 2020) on 17 August 2020.
SRMPDS was a virtual event and collocated with ICPP’20 - 2020 International Conference on Parallel Processing.
RECS – Cloud to Edge Microserver Platform for Energy-Efficient Computing - LEGATO project
Abstract: Today, application developers and data center operators face the challenging task of achieving high performance while reducing the total cost of ownership, which is driven especially by the energy consumption of the servers themselves.
This poster shows the RECS Microserver platform, developed by Christmann and Bielefeld University. RECS simplifies the combined use of heterogeneous target architectures to achieve high performance and superior energy-efficiency.
Poster presented by Martin Kaiser at the LEGaTO Final Event: 'Low-Energy Heterogeneous Computing Workshop'
HiPerMAb: A statistical tool for judging the potential of short fat data - LEGATO project
1) The HiPerMAb tool estimates the potential of pilot biomarker studies with small sample sizes and large numbers of biomarkers. It compares the entropy of biomarkers in real and random data to estimate p-values via Monte Carlo simulations.
2) An application to a hip infection study with 24 patients and 50,416 biomarkers showed more informative biomarkers than expected by chance, indicating the study is worth pursuing with a larger sample size.
3) The tool estimated a sample size of 84 patients would be needed to validate an AUC of 0.8 for classifying hip infection status based on biomarker combinations.
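As a minimal illustration of the Monte Carlo scheme described in this summary, here is a hedged C sketch. It is not HiPerMAb itself: the "informative marker" statistic, the threshold and the uniform null model are illustrative stand-ins for the tool's entropy-based measure.

    #include <stdlib.h>

    /* Illustrative statistic: the number of "informative" markers, here simply
       those whose sample mean exceeds a threshold. The real tool uses an
       entropy-based measure, as the summary above states. */
    static int count_informative(const double *data, int n_markers, int n_samples)
    {
        int count = 0;
        for (int m = 0; m < n_markers; m++) {
            double sum = 0.0;
            for (int s = 0; s < n_samples; s++)
                sum += data[m * n_samples + s];
            if (sum / n_samples > 0.8)        /* assumed threshold */
                count++;
        }
        return count;
    }

    /* Monte Carlo p-value: how often does random data look at least as
       "informative" as the real data? (+1 correction keeps p > 0.) */
    double mc_pvalue(const double *real, int n_markers, int n_samples, int n_sims)
    {
        int observed = count_informative(real, n_markers, n_samples);
        int hits = 0;
        double *rnd = malloc(sizeof(double) * n_markers * n_samples);
        for (int i = 0; i < n_sims; i++) {
            for (int j = 0; j < n_markers * n_samples; j++)
                rnd[j] = (double)rand() / RAND_MAX;   /* uniform null model */
            if (count_informative(rnd, n_markers, n_samples) >= observed)
                hits++;
        }
        free(rnd);
        return (hits + 1.0) / (n_sims + 1.0);
    }

With many markers and few samples ("short fat data"), a small p-value indicates more informative markers in the real data than chance alone would produce.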
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu... - Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
Authoring a personal GPT for your research and practice: How we created the Q... - Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
ESR spectroscopy in liquid food and beverages.pptx - PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs, and packaging of food materials requires preservation of the food. There are various methods of treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the essential micronutrients of the food. Although irradiated food does not harm human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food; the antioxidant capability of liquid foods and beverages is mainly determined by spin trapping.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... - Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The cost of acquiring information by natural selection - Carl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita... - Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste... - Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Sharlene Leurig - Enabling Onsite Water Use with Net Zero Water
DATE 2020: Design, Automation and Test in Europe Conference
1. The LEGaTO project has received funding from the European Union's Horizon 2020 research
and innovation programme under the grant agreement No 780681
LEGaTO: Low-Energy, Secure,
and Resilient Toolset for
Heterogeneous Computing
DATE 2020
Valerio Schiavoni (Uni. Neuchatel)
11/March/2020
2. DATE 2020
The industry challenge
• Dennard scaling is dead
• Moore's law is slowing down
• Trend towards heterogeneous architectures
• Increasing focus on energy: the ICT sector is responsible for 5% of global energy consumption
• Creates a programmability challenge
3. DATE 2020
LEGaTO Ambition
• Create software-stack support for energy-efficient heterogeneous computing
o Starting with a mature Made-in-Europe software stack, and optimizing this stack to support energy efficiency
o Computing on a commercial, cutting-edge, European-developed heterogeneous hardware substrate with CPU + GPU + FPGA + FPGA-based Dataflow Engines (DFEs)
• Main goal: energy efficiency
4. DATE 2020
The future challenge of computing: MW, not FLOPS
“… without dramatic increases in efficiency, ICT industry could use 20% of all electricity and emit up to 5.5% of the world’s carbon emissions by 2025.”
“We have a tsunami of data approaching. Everything which can be is being digitalised. It is a perfect storm.”
“… a single $1bn Apple data centre planned for Athenry in Co Galway, expects to eventually use 300MW of electricity, or over 8% of the national capacity and more than the daily entire usage of Dublin. It will require 144 large diesel generators as back up for when the wind does not blow.”
5. DATE 2020
How did we get here?
Decades of exponential growth in performance
End of Dennard scaling
Moore’s Law is slowing down
Explore new architectures & models of computation
Exponential growth in demand & data
Move towards accelerators
6. DATE 2020
The End of Performance
• Frequency levels off, cores fill in the gap
http://doi.ieeecomputersociety.org/10.1109/MC.2015.368
7. DATE 2020
The real challenge: data movements
Pedram, Richardson, Galal, Kvatinsky, and Horowitz, “Dark Memory and Accelerator-Rich System Optimization in the Dark Silicon Era”, April 2016, IEEE Design & Test:
“As a result the most important optimization must be done at the algorithm level, to reduce off-chip memory accesses, to create Dark Memory. The algorithms must first be (re)written for both locality and parallelism before you tailor the hardware to accelerate them.”
In agreement with Intel:
“My hypothesis is that we can solve [the software crisis in parallel computing], but only if we work from the algorithm down to the hardware — not the traditional hardware first mentality.” Tim Mattson, principal engineer at Intel.
8. DATE 2020
FPGAs to the rescue?
• The model of computation is key
• Build ultra-deep, highly efficient pipelines
13. DATE 2020
Hardware Platforms
• Christmann (RECS Box)
o Heterogeneous platform: supports CPU (x86 or Arm), GPU (NVIDIA) and FPGA (Xilinx and Altera)
• MAXELER
o Maxeler DFE platforms and toolset
14. RECS|Box overview: Concept
• Heterogeneous microserver approach, integrating CPU, GPU and FPGA technology
• High-speed, low-latency communication infrastructure, enabling accelerators, scalable across multiple servers
• Modular, blade-style design, hot pluggable/swappable
15. RECS|Box overview: Architecture
• Dedicated network for monitoring, control and iKVM
• Multiple 1Gb/10Gb Ethernet links per microserver – integrated 40Gb Ethernet backbone
• Dedicated high-speed, low-latency communication infrastructure
o Host-2-Host PCIe between CPUs
o Switched high-speed serial lanes between FPGAs
o Enables pooling of accelerators and assignment to different hosts
[Architecture diagram: up to 15 baseboards per server, with 3/16 microservers per baseboard; each microserver module (CPU, memory, I/O) attaches to the backplane through distributed monitoring/KVM, the Ethernet communication infrastructure, and the high-speed, low-latency fabric linking FPGAs and storage/I/O extensions.]
17. DATE 2020
Maxeler Dataflow Computing Systems: MPC-X3000
• 8 Dataflow Engines in 1U
• Up to 1TB of DFE RAM
• Zero-copy RDMA
• Dynamic allocation of DFEs to conventional CPU servers through InfiniBand
• Equivalent performance to 20-50 x86 servers
18. DATE 2020
Dataflow System at Jülich Supercomputing Center
• Pilot system deployed in October 2017 as part of the PRACE PCP
• System configuration:
o 1U Maxeler MPC-X with 8 MAX5 DFEs
o 1U AMD EPYC server: 2x AMD EPYC 7601 CPUs, 1 TB RAM, 9.6 TB SSD (data storage)
o One 1U login head node
• Runs 4 scientific workloads
[System diagram: remote users connect over 10 Gbps links to the head/build node; the Supermicro EPYC node (EPYC CPU, 1TB DDR4) attaches to the MPC-X node (MAX5 DFEs) via 2x InfiniBand @ 56 Gbps, with IPMI for management.]
http://www.prace-ri.eu/pcp/
19. DATE 2020
Scaling Dataflow Computing into the Cloud
• MAX5 DFEs are compatible with Amazon EC2 F1 instances
• MaxCompiler supports programming F1
• Hybrid cloud model:
o On-premise Dataflow system from Maxeler
o Elastically expand to the Amazon Cloud when needed
21. DATE 2020
Towards a single source for any target
• A need if we expect programmers to survive
o Architectures appear like mushrooms
• Productivity
o Develop once → run everywhere
• Performance
• Key concept behind OmpSs (see the sketch below)
o Sequential task-based program on a single address/name space
o + directionality annotations
o Executed in parallel: automatic runtime computation of dependences among tasks
o LEGaTO: extend tasks with resource requirements, propagated through the stack to find the most energy-efficient solution at run time
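A minimal sketch of such a task-based program with directionality annotations, assuming the standard OmpSs #pragma omp task in/out/inout syntax (task bodies and sizes are illustrative; a plain C compiler ignores the pragmas and runs the code sequentially):

    #include <stdio.h>

    #define N 4

    /* Each task declares how it accesses its arguments; the OmpSs runtime
       builds the dependence graph from these annotations and runs
       independent tasks in parallel. */
    #pragma omp task in([n]a, [n]b) out([n]c)
    void vec_add(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; i++) c[i] = a[i] + b[i];
    }

    #pragma omp task inout([n]c)
    void scale(float *c, float factor, int n)
    {
        for (int i = 0; i < n; i++) c[i] *= factor;
    }

    int main(void)
    {
        float a[N] = {1, 2, 3, 4}, b[N] = {5, 6, 7, 8}, c[N];
        vec_add(a, b, c, N);   /* producer task */
        scale(c, 2.0f, N);     /* consumer: runtime orders it after vec_add */
        #pragma omp taskwait   /* wait for all tasks before using c */
        printf("%f\n", c[0]);
        return 0;
    }

The out/inout annotations on c are what let the runtime compute the dependence between the two tasks automatically, with no explicit synchronization in the source.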
23. DATE 2020
AXIOM board: Xilinx Zynq UltraScale+ chip, with 4 ARM Cortex-A53 cores and the ZU9EG FPGA.
OmpSs@FPGA experiments (matrix multiply); a hedged task sketch follows the table below.

Parameter                       Values                                        Description
IPs configuration               1*256, 3*128                                  Number of instances * size
Frequency (MHz)                 200, 250, 300                                 Working frequency of the FPGA
Number of SMP cores             SMP: 1 to 4; FPGA: 3+1 helper, 2+2 helpers    Combination of SMP and helper threads
Number of FPGA helper threads   SMP: 0; FPGA: 1, 2                            Helper threads are used to manage tasks on the FPGA
Number of pending tasks         4, 8, 16 and 32                               Number of tasks sent to the IP cores before waiting for their finalization
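To connect the table to code, here is a hedged sketch of how the matrix-multiply block task might be annotated for OmpSs@FPGA. The clause names follow published OmpSs@FPGA examples, but the exact syntax may vary by version; the block size and instance count simply mirror the "3*128"-style configurations above and are illustrative.

    #define BS 128   /* block size; mirrors the 3*128 configuration, illustrative */

    /* Offload this task to the FPGA; the runtime may instantiate several
       copies of the generated IP core and uses the in/inout annotations
       to move data and order tasks. */
    #pragma omp target device(fpga) num_instances(3) copy_deps
    #pragma omp task in([BS*BS]a, [BS*BS]b) inout([BS*BS]c)
    void matmul_block(const float *a, const float *b, float *c)
    {
        for (int i = 0; i < BS; i++)
            for (int j = 0; j < BS; j++) {
                float sum = c[i * BS + j];
                for (int k = 0; k < BS; k++)
                    sum += a[i * BS + k] * b[k * BS + j];
                c[i * BS + j] = sum;
            }
    }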
24. DATE 2020
DFiant HLS
• Aims to bridge the programmability gap by combining constructs and semantics from software, hardware and dataflow languages
• Programming model accommodates a middle ground between low-level HDL and high-level sequential programming
[Comparison figure: high-level synthesis languages and tools (e.g., C and Vivado HLS) offer automatic pipelining but are not HDLs, have problems with state, and separate timing from functionality; register-transfer-level HDLs (e.g., VHDL) offer concurrency and fine-grain control but are bound to the clock and require explicit pipelining; DFiant, a dataflow HDL, combines automatic pipelining with concurrency and fine-grain control.]
25. DATE 2020
MaxCompiler: Application Development

Host Code (.c):
    int *x, *y;
    // Original CPU loop:
    for (int i = 0; i < DATA_SIZE; i++)
        y[i] = x[i] * x[i] + 30;
    // DFE invocation through SLiC:
    MyKernel(DATA_SIZE, x, DATA_SIZE*4, y, DATA_SIZE*4);

MyKernel (.maxj):
    DFEVar x = io.input("x", dfeInt(32));
    DFEVar result = x * x + 30;
    io.output("y", result, dfeInt(32));

MyManager (.maxj):
    Manager m = new Manager();
    Kernel k = new MyKernel();
    m.setKernel(k);
    m.setIO(link("x", CPU), link("y", CPU));
    m.build();

[Diagram: the host code runs on the CPU against main memory and calls the DFE through SLiC and MaxelerOS over PCI Express; the manager moves data between CPU and DFE memory, and the kernel computes y = x*x + 30 as a dataflow pipeline.]
26. Task-based kernel identification/DFE mapping
• OmpSs identifies “static” task graphs while running, and a DFE kilo-task instance is selected
• Instantiate static, customized, ultra-deep (>1,000 stages) computing pipelines
• DataFlow Engines (DFEs) produce one pair of results at each clock cycle (see the worked numbers below)
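To make the latency/throughput trade-off of such ultra-deep pipelines concrete, a back-of-the-envelope calculation, assuming the 200 MHz FPGA frequency from the experiments above and a 1,000-stage pipeline:

\[
\text{latency} = \frac{1000\ \text{stages}}{200\ \text{MHz}} = 5\ \mu\text{s},
\qquad
\text{throughput} = 1\ \frac{\text{result pair}}{\text{cycle}} \times 200\ \text{MHz} = 2\times10^{8}\ \text{result pairs/s}.
\]

Pipeline depth costs only startup latency; steady-state throughput depends solely on the clock rate, which is why very deep pipelines remain efficient once filled.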
29. Why extremely low voltage for FPGAs? Why energy efficiency through undervolting?
• There is still a long way to go to reach <20MW @ exascale
• FPGAs are at least 20X less power- and energy-efficient than ASICs
• Undervolting directly delivers dynamic and static power/energy reduction
• Note: undervolting is not DVFS – the frequency is not lowered!
30. FPGA Undervolting Preliminary Experiments
Voltage scaling below the standard nominal level, in on-chip memories of Xilinx technology:
o Power decreases quadratically, delivering savings of more than 10X (a worked example follows below)
o A conservative voltage guardband exists below the nominal level
o Faults increase exponentially below the guardband
o Faults can be detected / managed
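The quadratic claim follows from the standard dynamic-power relation; a worked example assuming a nominal rail of 1.0 V undervolted to 0.55 V (illustrative values, not the measured ones):

\[
P_{\text{dyn}} \propto C V^{2} f
\quad\Rightarrow\quad
\frac{P_{\text{dyn}}(0.55\ \text{V})}{P_{\text{dyn}}(1.0\ \text{V})} = \left(\frac{0.55}{1.0}\right)^{2} \approx 0.30,
\]

i.e. roughly a 3.3X dynamic-power reduction at unchanged frequency. The >10X savings reported above additionally reflect static-power reduction, which falls off even faster with voltage.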
31. DATE 2020
Use Cases
• Healthcare: infection biomarkers
o Statistical search for biomarkers, which often requires intensive computation. A biomarker is a measurable value that can indicate the state of an organism, often the presence, absence or severity of a specific disease.
• Smart Home: assisted living
o The ability of the home to learn from the user's behavior and anticipate future behavior is still an open task, and is necessary for broad acceptance of assisted living among the general public.
32. DATE 2020
Use Cases
• Smart City: operational urban pollutant dispersion modelling
o Modeling the city landscape + sensor data + wind prediction to issue a “pollutant weather prediction”
• Machine Learning: automated driving and graphics rendering
o Object detection using CNNs for automated driving systems, and CNN- and LSTM-based methods for realistic rendering of graphics for gaming and multi-camera systems
• Secure IoT Gateway
o A variety of sensors and actuators in industrial and private surroundings
33. DATE 2020
Boosting Use Cases by Using the LEGaTO Toolchain
• Smart City: 7x gain in energy efficiency using FPGAs
• Health Care: 822x speedup using FPGAs, enabling a new world of biomarker analysis
• Machine Learning: up to 16x gain in energy efficiency and performance on the same hardware, using the EmbeDL optimizer
34. DATE 2020
Smart Mirror Use Case
• Local machine learning frameworks
o Image: object, face and gesture detection
o Speech with DeepSpeech
• Displays personalized information
• All detections run simultaneously
• Power/performance: start: 16 FPS at 500 W; now: 25 FPS at 400 W; goal: 10 FPS at 50 W