Distributed and Collaborative Deep Learning and Machine Learning discusses machine learning and deep learning techniques, including distributed deep learning. It describes how distributed deep learning enables training models on large datasets across multiple GPUs and servers for faster training times, and how the IBM PowerAI Distributed Deep Learning (DDL) library provides methods for popular AI frameworks to scale efficiently to multiple servers, leveraging all attached GPUs.
Some resources on how to navigate the hardware space in order to build your own workstation for training deep learning models.
Alternative download link: https://www.dropbox.com/s/o7cwla30xtf9r74/deepLearning_buildComputer.pdf?dl=0
Faster deep learning solutions from training to inference - Michele Tameni - ...Codemotion
Intel Deep Learning SDK enables the use of optimized open source deep-learning frameworks, including Caffe and TensorFlow, through a step-by-step wizard or IPython interactive notebooks. It includes easy and fast installation of all dependent libraries and advanced tools for easy data pre-processing and model training, optimization, and deployment, providing an end-to-end solution. In addition, it supports scale-out on multiple computers for training, as well as compression methods for deployment of the models on various platforms, addressing memory and speed constraints.
The field of artificial intelligence (AI) has witnessed tremendous growth in recent years with the advent of Deep Neural Networks (DNNs) that surpass humans in a variety of cognitive tasks.
Optimizing High Performance Computing Applications for Energy — David Lecomber
Energy and power usage in high performance computing and supercomputing is a major issue for system owners and users - we take a look at what developers and administrators can do to reduce application energy costs
Build FAST Deep Learning Apps with Docker on OpenPOWER and GPUs — Indrajit Poddar
GPU- and NVLink-accelerated training and inference with TensorFlow and Caffe on OpenPOWER systems. Presented at a meetup prior to DataWorks Summit Munich 2017.
Understand and Harness the Capabilities of Intel® Xeon Phi™ Processors — Intel® Software
The second-generation Intel® Xeon Phi™ processor offers new and enhanced features that provide significant performance gains in modernized code. For this lab, we pair these features with Intel® Software Development Products and methodologies to enable developers to gain insights on application behavior and to find opportunities to optimize parallelism, memory, and vectorization features.
Distributed deep learning optimizations - AI WithTheBest — geetachauhan
Learn how to optimize TensorFlow for your Intel CPU, along with techniques for distributed deep learning for both model training and inference. Talk @ AI WithTheBest.
Fast Scalable Easy Machine Learning with OpenPOWER, GPUs and Docker — Indrajit Poddar
Transparently accelerated deep learning workloads on OpenPOWER systems and GPUs using easy-to-use open source frameworks such as Caffe, Torch, TensorFlow, and Theano.
Machine Learning with New Hardware Challenges — Oscar Law
Describes basic neural network design with a focus on Convolutional Neural Network architecture, explains why CPUs and GPUs can't fully meet CNN hardware requirements, lists three hardware examples (NVIDIA, Microsoft, and Google), and finally highlights optimization approaches for CNN design.
The DOE Exascale Computing Project (ECP) Software Technology focus area is developing an HPC software ecosystem that will enable the efficient and performant execution of exascale applications. Through the Extreme-scale Scientific Software Stack (E4S), it is developing a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures, including IBM OpenPOWER systems with NVIDIA GPUs. E4S features a broad collection of HPC software packages, including the TAU Performance System(R) for performance evaluation of HPC and AI/ML codes. TAU is a versatile profiling and tracing toolkit that supports performance engineering of codes written for CPUs and GPUs and has support for most IBM platforms. This talk will give an overview of TAU and E4S and how developers can use these tools to analyze the performance of their codes. TAU supports transparent instrumentation of codes without modifying the application binary. The talk will describe TAU's support for CUDA, OpenACC, pthread, OpenMP, Kokkos, and MPI applications, its use with Python-based frameworks such as TensorFlow and PyTorch, and the use of TAU in E4S containers using Docker and Singularity runtimes under ppc64le. E4S provides both source builds through the Spack platform and a set of containers that feature a broad collection of HPC software packages. E4S exists to accelerate the development, deployment, and use of HPC software, lowering the barriers for HPC users.
Hardware Acceleration of SVM Training for Real-time Embedded Systems: An Overview — Ilham Amezzane
Support Vector Machines (SVMs) have proven to yield high accuracy and have seen widespread use in recent years. However, the standard versions of the SVM algorithm are very time-consuming and computationally intensive, which challenges engineers to explore hardware architectures beyond the CPU that are capable of performing real-time training and classification while maintaining low power consumption in embedded systems. This paper proposes an overview of works based on the two most popular parallel processing devices, GPU and FPGA, with a focus on the multiclass training process. Since different techniques have been evaluated using different experimentation platforms and methodologies, we focus only on the improvements realized in each study.
NERSC is the production high-performance computing (HPC) center for the United States Department of Energy (DOE) Office of Science. The center supports over 6,000 users in 600 projects, using a variety of applications in materials science, chemistry, biology, astrophysics, high energy physics, climate science, fusion science, and more.
NERSC deployed the Cori system on over 9,000 Intel® Xeon Phi™ processors. This session describes the optimization strategy for porting codes that target traditional manycore architectures to the processors. We also discuss highlights and lessons learned from the optimization process on 20 applications associated with the NERSC Exascale Science Application Program (NESAP).
Dustin Franklin (GPGPU Applications Engineer, GE Intelligent Platforms) presents:
"GPUDirect support for RDMA provides low-latency interconnectivity between NVIDIA GPUs and various networking, storage, and FPGA devices. Discussion will include how the CUDA 5 technology increases GPU autonomy and promotes multi-GPU topologies with high GPU-to-CPU ratios. In addition to improved bandwidth and latency, the resulting increase in GFLOPS/watt has a significant impact on both HPC and embedded applications. We will dig into scalable PCIe switch hierarchies, as well as software infrastructure to manage device interoperability and GPUDirect streaming, highlighting emerging architectures composed of Tegra-style SoCs that further decouple GPUs from discrete CPUs to achieve greater computational density."
Learn more at: http://www.gputechconf.com/page/home.html
Backend.AI Technical Introduction (19.09 / 2019 Autumn) — Lablup Inc.
This slide deck introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology that lets many containers share a single GPU at the same time
* NVIDIA GPU Cloud integrations
* Enterprise features
Accelerated Machine Learning with RAPIDS and MLflow, NVIDIA/RAPIDS — Databricks
Abstract: We will introduce RAPIDS, a suite of open source libraries for GPU-accelerated data science, and illustrate how it operates seamlessly with MLflow to enable reproducible training, model storage, and deployment. We will walk through a baseline example that incorporates MLflow locally, with a simple SQLite backend, and briefly introduce how the same workflow can be deployed in the context of GPU enabled Kubernetes clusters.
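As a hedged illustration of the kind of baseline the abstract describes, here is a minimal sketch of a cuML model tracked with MLflow against a local SQLite backend; the estimator choice, file names, and parameter values are assumptions for illustration, not the talk's actual walkthrough:

    # cuML + MLflow with a simple SQLite tracking backend (illustrative).
    import mlflow
    import cudf
    from cuml.linear_model import LogisticRegression

    mlflow.set_tracking_uri("sqlite:///mlflow.db")   # local SQLite backend
    mlflow.set_experiment("rapids-baseline")

    df = cudf.read_csv("train.csv")                  # GPU dataframe (assumed file)
    X, y = df.drop(columns=["label"]), df["label"]

    with mlflow.start_run():
        params = {"C": 1.0, "max_iter": 1000}
        mlflow.log_params(params)
        model = LogisticRegression(**params).fit(X, y)
        mlflow.log_metric("train_accuracy", float(model.score(X, y)))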
Accelerating Real Time Applications on Heterogeneous Platforms — IJMER
In this paper we describe novel implementations of depth estimation from stereo images using feature extraction algorithms that run on the graphics processing unit (GPU), suitable for real-time applications like analyzing video in real-time vision systems. Modern graphics cards contain large numbers of parallel processors and high-bandwidth memory for accelerating data computation operations. We give a general idea of how to accelerate real-time applications using heterogeneous platforms, and we propose using the added resources to apply more computationally involved optimization methods. This proposed approach will indirectly accelerate a database by producing better plan quality.
Distributed DNN training: Infrastructure, challenges, and lessons learned — Wee Hyong Tok
Deep learning is revolutionizing a wide range of applications across various industries and in organizations of all sizes. Scalable DNN training is critical to the success of large-scale deep learning. The methodologies, tools, and infrastructure in this space are rapidly evolving. Drawing on their experiences building a multitenant, distributed DNN training infrastructure that uses familiar OSS components to execute Docker container-based deep learning workloads from hundreds of AI applications on clusters with thousands of GPUs, Kaarthik Sivashanmugam and Wee Hyong Tok share recommendations to address the common challenges in enabling scalable and efficient distributed DNN training and the lessons learned in building and operating a large-scale training infrastructure. Kaarthik and Wee Hyong introduce the challenges in distributed DNN training and provide an overview of the components that can enable distributed training on bare metal infrastructure, virtual machines, and containers. In addition, they outline practical tips for running deep learning workloads on Kubernetes clusters on Azure and explain how you can leverage deep learning toolkits (e.g., CNTK, TensorFlow) on these clusters to do distributed training.
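The talk's own infrastructure is not reproduced here, but as a hedged sketch of the basic building block it describes — scheduling a GPU-backed training container on Kubernetes — the following uses the official Kubernetes Python client; the image, pod name, and namespace are illustrative assumptions:

    # Submit a single-GPU training pod (illustrative names throughout).
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when run in-cluster

    container = client.V1Container(
        name="dnn-train",
        image="tensorflow/tensorflow:latest-gpu",  # assumed training image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},        # request one GPU
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="dnn-train"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)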
How to optimize Hortonworks Apache Spark ML workloads on Power - The POWER8/9 architecture is the latest offering from IBM and the OpenPOWER Foundation, and it is a perfect platform for optimizing Hortonworks Spark performance. During this presentation we will walk the audience through the steps required to optimize YARN, HDFS, and Spark on a Power cluster.
Steps required (a configuration sketch follows this list):
1) Classify the workload as CPU-, memory-, IO-intensive, or mixed (CPU, memory, IO)
2) Characterize the "out-of-box" Hortonworks Spark workload to understand its CPU, memory, IO, and network performance characteristics
3) Floor-plan cluster resources
4) Tune the "out-of-box" workload to navigate the "roofline" performance space in the above-named dimensions
5) If the workload is memory-, IO-, or network-bound, tune Spark to increase operational intensity (operations/byte) as much as possible to make it CPU-bound
6) Divide the search space into regions and perform an exhaustive search
7) Identify performance bottlenecks through resource monitoring and tune the system, JVM, or application layer, profiling the application and hardware counters if required
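As a hedged illustration of step 4, this is the sort of Spark configuration such tuning typically iterates over; the values are illustrative assumptions for a hypothetical Power node, not recommendations from the talk:

    # Programmatically setting common Spark tuning knobs (illustrative values).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("power-tuning-sketch")
        .config("spark.executor.cores", "4")            # cores per executor
        .config("spark.executor.memory", "16g")         # heap per executor
        .config("spark.sql.shuffle.partitions", "200")  # shuffle parallelism
        .config("spark.serializer",
                "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
    )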
Large-Scale Optimization Strategies for Typical HPC Workloads — inside-BigData.com
In this deck from PASC 2019, Liu Yu from Inspur presents: Large-Scale Optimization Strategies for Typical HPC Workloads.
"Ensuring performance of applications running on large-scale clusters is one of the primary focuses in HPC research. In this talk, we will show our strategies on performance analysis and optimization for applications in different fields of research using large-scale HPC clusters. Our strategies are designed to comprehensively analyze runtime features of applications, parallel mode of the physical model, algorithm implementation and other technical details. This three levels of strategy covers platform optimization, technological innovation, and model innovation, and targeted optimization based on these features. State-of-the-art CPU instructions, network communication and other modules, and innovative parallel mode of some applications have been optimized. After optimization, it is expected that these applications will outperform their non-optimized counterparts with obvious increase in performance."
Watch the video: https://wp.me/p3RLHQ-kwB
Learn more: http://en.inspur.com/en/2403285/2403287/2403295/index.html
and
https://pasc19.pasc-conference.org/program/keynote-presentations/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Choosing the Right EC2 Instance and Applicable Use Cases - AWS June 2016 Webi... — Amazon Web Services
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, and Memory Optimized families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Learning Objectives: • Understand the differences between instances • Learn best practices and tips for getting the most out of EC2 instances
The session will present HPC challenges in bringing machine learning and deep learning into simulations. In addition, we will present a user-centric view of IBM Watson ML Community Edition and the adoption of the new IBM inference system IC922 into AIOps for large HPC clusters (from deployment to inference).
Amazon EC2 provides a broad selection of instance types to deliver high performance for a diverse mix of applications. In this session, we overview the drivers of system performance and discuss in depth how Amazon EC2 instances deliver system performance while also providing elasticity and complete control over your infrastructure. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
I understand that physics and hardware advances have enabled the use of finite element methods (.pdf) — anil0878
I understand that physics and hardware advances have enabled the use of finite element methods to predict fluid flow over airplane wings, and that progress is likely to continue. However, in recent years this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. Currently, performance is measured on a dense matrix-matrix multiplication test which has questionable relevance to real applications, despite the incredible advances in processor technology and all of the accompanying aspects of computer system design, such as the memory subsystem and networking.
In embedded systems, hardware and software are combined into a single functional unit, and application developers must work with both to achieve the full potential of these systems on advanced processor technology.
Hardware
(1) Memory
Advances in memory technology have struggled to keep pace with the phenomenal advances in processors. This difficulty in improving main memory bandwidth led to the development of a cache hierarchy, with data being held in different cache levels within the processor. The idea is that instead of fetching the required data multiple times from main memory, it is instead brought into the cache once and re-used multiple times. Intel allocates about half of the chip to cache, with the largest LLC (last-level cache) being 30MB in size. IBM's new Power8 CPU has an even larger L3 cache of up to 96MB [4]. By contrast, the largest L2 cache in NVIDIA's GPUs is only 1.5MB. These different hardware design choices are motivated by careful consideration of the range of applications being run by typical users.
One complication which has become more common and more important in the past few years is non-uniform memory access. Ten years ago, most shared-memory multiprocessors would have several CPUs sharing a memory bus to access a single main memory. A final comment on the memory subsystem concerns the energy cost of moving data compared to performing a single floating point computation.
(2) Processors
Originally, CPUs had a single processing core, and the increase in performance came partly from an increase in the number of computational pipelines, but mainly through an increase in clock frequency. Unfortunately, power consumption is approximately proportional to the cube of the frequency, and this led to CPUs with a power consumption of up to 250W. CPUs address memory bandwidth limitations by devoting half or more of the chip to LLC, so that small applications can be held entirely within the cache. They address the 200-cycle latency issue by using very complex cores which are capable of out-of-order execution. By contrast, GPUs adopt a very different design philosophy because of the different needs of the graphical applications they target. A GPU usually has a number of functional units.
The Libre-SOC Project aims to create an entirely Libre-Licensed, transparently-developed fully auditable Hybrid 3D CPU-GPU-VPU, using the Supercomputer-class OpenPOWER ISA as the foundation.
Our first test ASIC is a 180nm "Fixed-Point" Power ISA v3.0B processor, 5.1mm x 5.9mm, as a proof-of-concept for the team, whose primary expertise is in Software Engineering. Software Engineering training brings a radically different approach to Hardware development: extensive unit tests, source code revision control, automated development tools are normal. Libre Project Management brings even more: bug trackers, mailing lists, auditable IRC logs and a wiki are standard fare for Libre Projects that are simply not normal Industry-Standard practice.
This talk therefore goes through the workflow, from the original HDL through to the GDS-II layout, showing how we were able to keep track of the development that led to the IMEC 180nm tape-out in July 2021. In particular, we will show how, by following a parallel development process involving "Real" and "Symbolic" Cell Libraries developed by Chips4Makers, our developers did not need to sign a Foundry NDA but were still able to work side-by-side with a University that did. With this parallel development process, the University upheld their NDA obligations, and Libre-SOC was simultaneously able to honour its Transparency Objectives.
Workload Transformation and Innovations in POWER Architecture — Ganesan Narayanasamy
The IT industry is going through two major transformations. One is the adoption of AI and its tight integration into commercial applications and enterprise workflows. The other is the transformation of software architecture through concepts like microservices and cloud-native architecture. These transformations, alongside the aggressive adoption of IoT/mobile and 5G in all our day-to-day activities, are making the world operate in a more real-time manner, which opens up a new challenge: improving hardware architecture to adapt to these requirements. These two major transformations push the boundary of the entire systems stack, making designers rethink hardware. This talk presents a picture of how the industry-leading enterprise POWER architecture is transforming to fulfill the performance demands of these newer-generation workloads, with a primary focus on on-chip AI acceleration.
Join us on Friday, July 16th, 2021 for our newest workshop with DoMS, IIT Roorkee: Concept to Solutions using the OpenPOWER Stack. It's time to discover advances in #DeepLearning tools and techniques from the world's leading innovators across industries, research, and public speakers.
Register here:
https://lnkd.in/ggxMq2N
This presentation covers two use cases using OpenPOWER systems:
1. Diabetic retinopathy using AI on the NVIDIA Jetson Nano: the objective is to classify the diabetic level solely from a retina image in a remote area with minimal doctor inference. The model uses the VGG16 network architecture and is trained from scratch on POWER9; it was then deployed on the Jetson Nano board. (A hedged model sketch follows below.)
2. Classifying Covid positivity using lung X-ray images: the idea is to build ML models to detect positive cases from X-ray images. The model was trained on POWER9, and the application was developed using Python.
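As a hedged sketch of the first use case's model setup, the following uses the tf.keras VGG16 application with random initialization (matching "trained from scratch"); the input size and the five-class grading are illustrative assumptions, not details from the slides:

    # VGG16 built from scratch for retina-image classification (illustrative).
    import tensorflow as tf
    from tensorflow.keras.applications import VGG16

    # weights=None means random initialization, i.e. training from scratch.
    model = VGG16(weights=None, input_shape=(224, 224, 3), classes=5)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=..., batch_size=...)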
IBM Bayesian Optimization Accelerator (BOA) is a do-it-yourself toolkit to apply state-of-the-art Bayesian inferencing techniques and obtain optimal solutions for complex, real-world design simulations without requiring deep machine learning skills. This talk will describe IBM BOA, its differentiation and ease of use, and how researchers can take advantage of it for optimizing any arbitrary HPC simulation.
This presentation covers the various partners and collaborators currently working with the OpenPOWER Foundation, use cases of OpenPOWER systems in multiple industries, OpenPOWER workgroups, and OpenCAPI features.
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
Everything is changing, from healthcare to the automotive markets, without forgetting financial markets or any type of engineering: everything has stopped being created by an individual or, best case, a team, and is now developed and perfected using AI and hundreds of computers. And even AI is something we can no longer run on a single computer, no matter how powerful it is. What drives everything today is HPC, or High-Performance Computing, heavily linked to AI. In this session we will discuss AI, HPC computing, the IBM Power architecture, and how it can help develop better healthcare, better automobiles, better financials, and better everything that we run on them.
Macromolecular crystallography is an experimental technique that allows exploration of the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While up to now development of the technique was limited by scientific instrument performance, computing performance has recently become a key limitation. In my presentation I will present the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experiences in applying conventional hardware to the task and why this attempt failed. I will then present how the IC922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science by users of the Swiss Light Source.
AI in healthcare and Automobile Industry using OpenPOWER/IBM POWER9 systems — Ganesan Narayanasamy
As the adoption of AI technologies increases and matures, the focus will shift from exploration to time to market, productivity, and integration with existing workflows. Governing enterprise data, scaling AI model development, and selecting a complete, collaborative hybrid platform and tools for rapid solution deployment are key focus areas for growing data science teams tasked with responding to business challenges. This talk will cover the challenges and innovations for AI at scale in industries such as healthcare and automotive, the AI ladder and AI life cycle, and infrastructure architecture considerations.
This talk gives an introduction to healthcare use cases, the AI ladder and life cycle, and AI at Scale themes. The iterative nature of the workflow and some of the important components to be aware of when developing AI healthcare solutions are discussed, along with the different types of algorithms and when machine learning might be more appropriate than deep learning, or the other way around. Example use cases are also shared as part of this presentation.
Healthcare has become one of the most important aspects of everyone's life. Its importance has surged due to the latest outbreaks, and due to this latest pandemic it has become mandatory to collaborate to improve everyone's healthcare as soon as possible.
IBM has reacted quickly, sharing not only its knowledge but also its artificial intelligence supercomputers all around the world.
Those supercomputers are helping to prevail over this outbreak and also future ones.
They have completely different features compared to proposals from other players in the supercomputer market.
We will take a quick look at the differences between those AI-focused supercomputers and how they can help in the R&D of healthcare solutions for everyone, from those with access to a big IBM AI supercomputer to those with access to only one small IBM AI-focused server.
Moving object recognition (MOR) corresponds to the localization and classification of moving objects in videos. Discriminating moving objects from static objects and background in videos is an essential task for many computer vision applications. MOR has widespread applications in intelligent visual surveillance, intrusion detection, anomaly detection and monitoring, industrial site monitoring, detection-based tracking, autonomous vehicles, etc. In this session, Murari presented a poster on deep learning algorithms that identify both the locations and the corresponding categories of moving objects with a convolutional network, and discussed the challenges in developing such algorithms.
Clarisse Hedglin from IBM presented this as part of a 3-day international summit. She shared the scenarios AI can solve for today using the IBM AI infrastructure.
3. Machine Learning / Deep Learning Process — Training/Development: historic training data -> ML/DL training -> trained model. Inference/Deployment/Application: live data -> trained model -> action.
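A minimal sketch of those two phases, using scikit-learn purely for illustration (the estimator choice and file name are assumptions, not from the slide):

    import joblib
    from sklearn.linear_model import LogisticRegression

    # Training/Development: historic data -> ML/DL training -> trained model
    def train(historic_X, historic_y, path="model.joblib"):
        model = LogisticRegression(max_iter=1000).fit(historic_X, historic_y)
        joblib.dump(model, path)           # persist the trained model

    # Inference/Deployment: live data -> trained model -> action
    def infer(live_X, path="model.joblib"):
        model = joblib.load(path)
        return model.predict(live_X)       # predictions drive the action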
11. Deep Belief Network: the first "DEEP" network — initial deep learning breakthroughs in speech recognition.
12. Deep Convolutional Networks: image processing/classification — currently, ReLU activations (together with operations such as max-pooling) replace sigmoid units for easier training and higher accuracy.
20. How can I train deep learning models many times faster?
21. PowerAI Release 4 with Distributed Deep Learning technology preview — performance: faster training and inferencing, with near-ideal scaling to 256 GPUs and beyond. 16 days down to 7 hours: 58x faster (1 system vs. 64 systems). ResNet-101, ImageNet-22K, Caffe with PowerAI DDL, running on Minsky (S822LC) Power Systems.
25. POWER9 — an acceleration superhighway; the only processor specifically designed for the AI era: 4x threads per core vs. x86, up to 9.5x more I/O bandwidth than x86, 2.6x more RAM possible vs. x86, and the first CPU to deliver PCIe Gen 4.
27. Faster training time with Distributed Deep Learning — TensorFlow 1.4 performance on IBM POWER9 with NVIDIA V100, single node: 35% more images processed per second vs. tested x86 systems. ResNet50 testing on the ILSVRC 2012 dataset (aka ImageNet 2012): training on 1.2M images, validation on 50K images.
Results are based on IBM internal measurements running 1000 iterations of HPM ResNet50 on 1.2M images and validation on 50K images with the dataset from ILSVRC 2012, also known as ImageNet 2012. Software: TensorFlow 1.4.0 framework and HPM ResNet50, https://github.com/tensorflow/benchmarks.git (commit: f5d85aef), with the following parameters: batch size: 64 per GPU; iterations: 1100; data: ImageNet; local-parameter-device: gpu; variable-update: replicated. Date of testing: November 26, 2017.
28. Faster training time with Distributed Deep Learning — TensorFlow 1.4 performance on IBM POWER9 with NVIDIA V100, multiple nodes: Distributed Deep Learning on IBM POWER9 with NVIDIA Tesla V100 results in 2.3x more data processed on TensorFlow versus tested x86 systems (2.3x more images processed per second). The PowerAI Distributed Deep Learning (DDL) library provides innovative distribution methods enabling AI frameworks to scale to multiple servers, leveraging all attached GPUs. ResNet50 testing on the ILSVRC 2012 dataset (also known as ImageNet 2012): training on 1.2M images, validation on 50K images. Date of testing: December 2, 2017.
30. Run using ddlrun
• ddlrun is a tool for running DDL-enabled scripts
• See the DDL README at /opt/DL/ddl/doc/README.md
• https://developer.ibm.com/linuxonpower/2018/05/01/improved-ease-use-ddl-powerai/
Example invocation:
    ddlrun -H system1,system2 python mnist.py
Yes. It's really that easy!
32. Adjust the Keras callbacks
Adjust the learning rate: since the data was split between the GPUs, the learning rate had to be scaled by the total number of GPUs.
The two primary operations that need to be restricted to running only on rank 0 are model checkpointing and logging; this is accomplished by adding these callbacks only on rank 0.
An extra callback is needed to keep all metrics in sync across all nodes, which ensures that early stopping and learning rate scheduling remain in sync. (A code sketch follows below.)
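In code, the same pattern looks roughly like the following minimal sketch, written against Horovod's Keras API as an analogous, widely documented technique (PowerAI DDL's own calls may differ; the names below are Horovod's, not DDL's):

    import horovod.tensorflow.keras as hvd
    import tensorflow as tf

    hvd.init()

    # Scale the learning rate by the total number of workers (GPUs).
    opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
    opt = hvd.DistributedOptimizer(opt)

    callbacks = [
        # Keep metrics in sync across workers, so early stopping and
        # learning rate scheduling see the same values everywhere.
        hvd.callbacks.MetricAverageCallback(),
    ]
    # Restrict checkpointing and logging to rank 0 only.
    if hvd.rank() == 0:
        callbacks.append(tf.keras.callbacks.ModelCheckpoint("ckpt.h5"))
        callbacks.append(tf.keras.callbacks.TensorBoard(log_dir="./logs"))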
33. How can I train models that don't fit in GPU memory?
34. Why POWER9 — 3.8x faster than x86 architectures: supports large model training sets that are too large for GPU memory in DL and HPC/simulation. Memory coherency also makes programming GPUs easier for developers by automatically moving data between POWER9 system memory and V100 GPU memory.
35. Power Systems — 7-10x bandwidth over x86 architectures. Bandwidth between the GPUs and memory is critical, and there is no CPU-GPU NVLink for x86 servers, making PCIe the bottleneck.
[Diagram: three configurations compared —
• Minsky (POWER8) + NVLink: 80 GB/s NVLink for both CPU<->GPU and GPU<->GPU; POWER8 with NVLink delivers 2.5x the bandwidth of PCIe.
• x86 + NVLink: 80 GB/s NVLink for GPU<->GPU only; CPU<->GPU remains 32 GB/s PCIe.
• x86 using PCIe only: GPUs access system memory through the x86 CPU and slow 32 GB/s PCIe.]
[Diagram: POWER9 with NVLink 2.0 — the P9 CPU connects to DDR4 system memory at 170 GB/s and to Tesla V100 GPUs over 150 GB/s NVLink; POWER9 and NVLink Gen 2.0 deliver a 7-10x bandwidth increase over x86 architectures.]
36. TensorFlow Large Model Support (TFLMS)
• Swap out unused tensors (feature maps, parameters) to CPU memory after GPU computation
• Swap them back in before use in the backward propagation phase
• Implemented as a Python module that statically edits the model graph before training
• Supports training with the Session, Estimator, and tf.keras APIs
• Code contributed to the TensorFlow community: https://github.com/tensorflow/tensorflow/pull/19845
[Diagram: forward pass through layers 1 ... l-1, l, l+1 ... L to the loss, then the backward pass; tensors are swapped out from GPU memory to CPU memory after the forward computation and swapped back in for the backward phase.]
37. Why 3D image segmentation? A real-world use case of large model support.
Training 3DUnet models for image segmentation generally has high memory usage requirements, which can limit the size of the 3D images that can be used for training and can also lead to lower batch sizes for model training.
The annual International Multimodal Brain Tumor Segmentation Challenge (BraTS) [1] drives advancements in 3D image segmentation models.
We enabled TFLMS in a Keras model written by David G. Ellis, University of Nebraska. This model was written to process multimodal MRI scans following the model architecture described by Isensee et al. in the 2017 BraTS proceedings on page 100, which received 3rd place in the challenge [2].
The maximum image resolution and batch size: 144^3 with batch size 1 in a 16GB GPU without TFLMS; 192^3 with batch size 1 in a 16GB GPU with TFLMS — ~2.4x the resolution. Higher-resolution image processing allows learning and labeling finer details and structures.
[1] http://www.med.upenn.edu/sbia/brats2018/data.html
[2] https://www.cbica.upenn.edu/sbia/Spyridon.Bakas/MICCAI_BraTS/MICCAI_BraTS_2017_proceedings_shortPapers.pdf
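The ~2.4x figure is simply the ratio of voxel counts at the two resolutions:

    \[ \frac{192^3}{144^3} = \left(\frac{192}{144}\right)^{3} = \left(\frac{4}{3}\right)^{3} \approx 2.37 \approx 2.4\times \]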
39. TFLMS runtime performance, POWER9 vs. x86
The 3DUnet model was run with TFLMS on an IBM AC922 and an x86-based GPU server; both systems have the NVIDIA Volta V100 GPU. The x86 server shows significant slowdown, which gets worse when GPUs share the same PCI bus.
The nvprof view of processing one image with the model shows that, on the x86 server, the GPU goes idle (white space) while waiting on the memory copies over the PCI bus; corresponding kernel runtimes between the runs are linked in red.
Note that the 4-GPU times were from running 4 individual models concurrently, not one model distributed. Distributed results follow.
[Chart: epoch times at 192^3 with TFLMS, in seconds (0-2250), for the IBM AC922 (4-GPU version), IBM AC922 (6-GPU version), x86 server, and x86 server with PCI contention.]
40. GPU and interconnect usage — higher memory copy throughput drives higher GPU utilization.
[Chart: average memory copy throughput over 30 batches (host-to-GPU and GPU-to-host, 0-70 GB/s) for the IBM AC922 (4-GPU version), x86 server, and x86 server while sharing the PCI bus.]
[Chart: average GPU utilization over 30 batches (0-100%) for the same three configurations.]
44. How to enable TFLMS
Session-based training:
Step 1: define optimizer/solver scopes
    with tf.name_scope('adam_optimizer'):
        optimizer = tf.train.AdamOptimizer(1e-4)
        train_step = optimizer.minimize(cross_entropy)
Step 2: define an LMS object and run it
    from tensorflow.contrib.lms import LMS
    lms_obj = LMS({'adam_optimizer'})
    lms_obj.run(graph=tf.get_default_graph())
Estimator-based training:
Step 1: define optimizer/solver scopes
    with tf.name_scope('graddescopt'):
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(loss=loss,
                                      global_step=tf.train.get_global_step())
Step 2: define an LMSHook
    from tensorflow.contrib.lms import LMSHook
    lms_hook = LMSHook({'graddescopt'})
Step 3: add the LMSHook to the Estimator's hook list
    mnist_classifier.train(input_fn=train_input_fn, steps=20000,
                           hooks=[logging_hook, lms_hook])
TF-Keras-based training:
Step 1: define an LMSKerasCallback
    from tensorflow.contrib.lms import LMSKerasCallback
    lms_callback = LMSKerasCallback()
Step 2: pass the callback to the Keras fit or fit_generator function
    model.fit_generator(generator=training_gen,
                        callbacks=[lms_callback])
45. How can I train machine learning models at terabyte scale with GPUs?
46. Snap ML: rapid training of logistic regression/SVMs on GPUs — tera-scale ML benchmark.
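As a hedged sketch of what this looks like in practice, the following assumes the snapml Python package's scikit-learn-style API; the dataset and parameter values are illustrative assumptions, not from the slide:

    # GPU-accelerated logistic regression with Snap ML (illustrative values).
    import numpy as np
    from snapml import LogisticRegression

    X = np.random.rand(100_000, 50).astype(np.float32)   # synthetic features
    y = np.random.randint(0, 2, size=100_000)            # synthetic labels

    clf = LogisticRegression(use_gpu=True, device_ids=[0], max_iter=100)
    clf.fit(X, y)
    preds = clf.predict(X)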
50. How can I do this collaboratively with optimal resource utilization?
51. Why POWER9 — GPU-accelerated Spark + multi-tenancy with Spectrum Conductor
• Physical view: Spectrum Conductor is installed on each Linux server
• Logical view: users and groups have their own Spark cluster — isolated, protected, and secured by Spark Instance Groups
• Manage all DL resources with Conductor SLAs
• The scheduler interfaces with Spark, ensuring accelerated GPU resources for priority applications and users
[Diagram: an administrator creates Spark instance groups from a web console; management-node and compute-node pools of Linux servers on Spectrum Scale host per-tenant virtual Spark clusters (PaaS) — instance groups for LOB (customer behavior), data scientists (trend analysis), researchers (HPC), marketing, fraud detection, IT/data warehouse (ETL/batch), and LOB IoT.]
[Stack diagram: Conductor with Spark — session scheduler, service management (ASC/K8s), security, report/log management, containers, multi-tenancy, notebooks, Spark, ELK, GPU and acceleration, data connectors.]