X13 Products + Intel® Xeon® CPU Max Series – An Applications & Performance View (Rebekah Rodriguez)
With Intel's Jan 10th launch of the Intel® Xeon® Max CPU series – the industry's first CPU with high-bandwidth memory (HBM) – Supermicro is proud to discuss its complete range of first-to-market X13 servers with high-bandwidth memory. This Supermicro Systems, Applications, and Performance webinar shows how Supermicro's Green Compute approach is the best solution for customers who want more performance per watt, lowering both CAPEX and OPEX.
Join us as we highlight our server solutions optimized for customer applications and for scale-out configurations that drive higher compute density in today’s modern data centers, along with some real performance improvements.
In this ACM Tech Talk, Doug Kothe from ORNL presents: The Exascale Computing Project and the future of HPC.
"The mission of the US Department of Energy (DOE) Exascale Computing Project (ECP) was initiated in 2016 as a formal DOE project and extends through 2022. The ECP is designing the software infrastructure to enable the next generation of supercomputers—systems capable of more than 1018 operations per second—to effectively and efficiently run applications that address currently intractable problems of strategic importance. The ECP is creating and deploying an expanded and vertically integrated software stack on US Department of Energy (DOE) HPC exascale and pre-exascale systems, thereby defining the enduring US exascale ecosystem."
Watch the video: https://wp.me/p3RLHQ-kep
Learn more: https://www.exascaleproject.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
IIoT + Predictive Analytics: Solving for Disruption in Oil & Gas and Energy &... (DataWorks Summit)
The electric grid has evolved from linear generation and delivery to a complex mix of renewables, prosumer-generated electricity, and electric vehicles (EVs). Smart meters are generating loads of data. As a result, traditional forecasting models and technologies can no longer adequately predict supply and demand. Extreme weather, an aging infrastructure, and the burgeoning worldwide population are also contributing to increased outage frequency.
In oil and gas, commodity pricing pressures, resulting workforce reductions, and the need to reduce failures, automate workflows, and increase operational efficiencies are driving operators to shift analytics initiatives to advanced data-driven applications to complement physics-based tools.
While sensored equipment and legacy surveillance applications are generating massive amounts of data, just 2% is understood and being leveraged. Operationalizing it along with external datasets enables a shift from time-based to condition-based maintenance, better forecasting and dramatic reductions in unplanned downtime.
The session includes plenty of real-world anecdotes: for example, how an electric power holding company reduced the time it takes to investigate energy theft from six months to less than one hour, producing theft leads in minutes and an expected multi-million-dollar ROI; and how a global offshore contract drilling services provider implemented an open-source IIoT solution across its fleet of assets in less than a year, enabling remote monitoring, predictive analytics and maintenance.
Key takeaways:
• How are new processes for data collection, storage and democratization making data accessible and usable at scale?
• Beyond time series data, what other data types are important to assess?
• What advantage are open source technologies providing to enterprises deploying IIoT?
• Why is collaboration important across industrial verticals to increase IIoT open source adoption?
Speaker
Kenneth Smith, General Manager, Energy, Hortonworks
Linux on RISC-V with Open Hardware, ELC-E 2020 (Drew Fustini)
Want to run Linux on open hardware? This talk explores how RISC-V, an open instruction set architecture (ISA), and open-source FPGA tools can be leveraged to achieve that goal. I will explain how I and others at Hackaday Supercon teamed up to get Linux running on a RISC-V soft core in the ECP5 FPGA on the conference badge. I will introduce Migen, LiteX and VexRiscv, and explain how they enabled us to quickly implement an SoC in the FPGA capable of running Linux. I will also explore other Linux-capable open-source RISC-V implementations, and how some are being used in industry. I will highlight that the OpenHW Group has adopted the PULP Ariane core from ETH Zurich for its CORE-V CVA6 implementation. Finally, I will look at what Linux-capable "hard" RISC-V SoCs currently exist, and what is on the horizon for 2020 and 2021. This talk should be relevant to people who are interested in building open hardware systems capable of running Linux. It should also be useful to people who are curious about RISC-V. Software engineers may find it exciting to learn how Python can be used for chip-level design with Migen and LiteX to simplify building a System-on-Chip (SoC) for an FPGA.
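To make the Python-for-hardware point concrete, here is a minimal Migen sketch (an illustrative LED blinker, not code from the talk; the clock frequency and period are my own assumptions):

```python
from migen import Module, Signal, If

class Blinker(Module):
    """Toggle an LED every `period` clock cycles (illustrative Migen sketch)."""
    def __init__(self, period=12_000_000):  # ~0.5 s toggle at a 24 MHz clock
        self.led = Signal()
        counter = Signal(max=period)
        # Synchronous logic: count clock cycles, toggle the LED on wrap-around.
        self.sync += If(
            counter == period - 1,
            counter.eq(0),
            self.led.eq(~self.led),
        ).Else(
            counter.eq(counter + 1)
        )
```

LiteX builds on the same idiom, composing modules like this with a CPU core such as VexRiscv, RAM and peripherals into a complete SoC.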
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Join us for an exciting and informative preview of the broadest range of next-generation systems optimized for tomorrow's data center workloads, powered by 4th Gen Intel® Xeon® Scalable Processors (formerly codenamed Sapphire Rapids).
Experts from Supermicro and Intel will discuss how the upcoming Supermicro X13 systems will enable new performance levels utilizing state-of-the-art technology, including DDR5, PCIe 5.0, Compute Express Link™ 1.1, and Intel® Advanced Matrix Extensions (Intel AMX).
Method of NUMA-Aware Resource Management for a Kubernetes 5G NFV Cluster (byonggon chun)
This talk introduces a container runtime environment set up with Kubernetes and various CRI runtimes (Docker, containerd, CRI-O), along with methods of NUMA-aware resource management (CPU Manager, Topology Manager, etc.) for CNFs (Containerized Network Functions) within Kubernetes, and related issues.
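As an OS-level illustration of what NUMA-aware placement achieves (a hedged sketch of the effect of the kubelet CPU Manager's static policy, not Kubernetes code), a Linux process can be restricted to the CPUs of a single NUMA node:

```python
import os

def cpus_of_numa_node(node: int) -> set[int]:
    """Parse /sys to get the CPU ids belonging to one NUMA node (Linux only)."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        ranges = f.read().strip().split(",")
    cpus = set()
    for r in ranges:
        if "-" in r:
            lo, hi = map(int, r.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(r))
    return cpus

# Pin the current process to NUMA node 0 so CPU and memory stay local.
os.sched_setaffinity(0, cpus_of_numa_node(0))
print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```

Keeping a CNF's threads and memory on one node avoids cross-socket memory traffic, which is exactly what the Topology Manager coordinates across CPU, device and memory allocations.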
Kirill Tsym discusses Vector Packet Processing:
* Linux Kernel data path (in short), initial design, today's situation, optimization initiatives
* Brief overview of DPDK, Netmap, etc.
* Userspace Networking projects comparison: OpenFastPath, OpenSwitch, VPP.
* Introduction to VPP: architecture, capabilities and optimization techniques.
* Basic data flow and an introduction to vectors (see the sketch after this list).
* VPP Single and Multi-thread modes.
* Router and switch for namespaces example.
* VPP L4 protocol processing - Transport Layer Development Kit.
* VPP Plugins.
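As a conceptual illustration of the vector idea (a toy Python sketch; VPP itself is written in C, and the node names and fields here are hypothetical): each graph node processes a whole vector of packets per call, which amortizes per-packet overhead and keeps each node's code hot in the instruction cache.

```python
# Toy packet-graph sketch: every node consumes and produces a packet vector.
def ethernet_input(packets):
    # Keep only IPv4 (ethertype 0x0800) packets.
    return [p for p in packets if p["ethertype"] == 0x0800]

def ip4_lookup(packets):
    for p in packets:
        p["next_hop"] = p["dst"] & 0xFF  # toy FIB lookup
    return packets

graph = [ethernet_input, ip4_lookup]
vector = [{"ethertype": 0x0800, "dst": 0x0A000001 + i} for i in range(256)]
for node in graph:
    vector = node(vector)  # one call per node for the whole vector of packets
print(len(vector), "packets forwarded")
```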
Kirill is a software developer at Check Point Software Technologies, part of the Next Generation Gateway and Architecture team, developing proofs of concept around DPDK and FD.io VPP. He has years of experience in software, Linux kernel, and networking development, and worked for Polycom, Broadcom and Qualcomm before joining Check Point.
Supermicro's Universal GPU: Modular, Standards Based and Built for the Future (Rebekah Rodriguez)
The Universal GPU system architecture combines the latest technologies supporting multiple GPU form factors, CPU choices, storage, and networking options. Together, these components are optimized to deliver high performance in a balanced architecture within a highly scalable system. Systems can be optimized for each customer's specific Artificial Intelligence (AI), Machine Learning (ML), or High Performance Computing (HPC) applications. Organizations worldwide are demanding new options for their future computing environments, with the thermal headroom for the next generation of CPUs and GPUs.
Join this webinar to learn how to leverage Supermicro's Universal GPU system to simplify customer deployments, deliver ultimate modularity and customization options for AI to Omniverse environments.
This is a talk at AI Nextcon Seattle on Feb 12, 2020.
An overview of TensorFlow Lite and various resources for helping you deploy TFLite models to mobile and edge devices. We walk through an example of end-to-end on-device ML: train a model from scratch, convert it to TFLite, and deploy it.
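For reference, the train-convert-deploy flow described above looks roughly like this with the TensorFlow Lite APIs (a minimal sketch with a toy model; the layer sizes, data, and file name are arbitrary):

```python
import numpy as np
import tensorflow as tf

# 1. Train a tiny model from scratch (stand-in for any Keras model).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(256, 4), np.random.rand(256, 1), epochs=1, verbose=0)

# 2. Convert to the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# 3. Run inference the way a device would, via the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```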
Artificial Intelligence (AI), specifically deep learning, is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, the deep neural networks of today use too much memory, compute, and energy. To make AI truly ubiquitous, it needs to run on the end device within tight power and thermal budgets. Advancements in multiple areas are necessary to improve AI model efficiency, including quantization (sketched in code after the list below), compression, compilation, and neural architecture search (NAS). In this presentation, we'll discuss:
- Qualcomm AI Research’s latest model efficiency research
- Our new NAS research to optimize neural networks more easily for on-device efficiency
- How the AI community can take advantage of this research through our open-source projects, such as the AI Model Efficiency Toolkit (AIMET) and AIMET Model Zoo
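To ground the quantization idea (a generic numeric illustration, not the AIMET API): affine int8 quantization maps float weights to 8-bit integers through a scale and zero point, cutting weight memory roughly 4x at some accuracy cost.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) int8 quantization of a weight tensor."""
    scale = (w.max() - w.min()) / 255.0            # float range -> 256 levels
    zero_point = np.round(-w.min() / scale) - 128  # maps w.min() to -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s, z = quantize_int8(w)
print("max abs rounding error:", np.abs(w - dequantize(q, s, z)).max())
```

Tools like AIMET go further than this post-hoc rounding, e.g. by simulating quantization during fine-tuning so the network can adapt to the reduced precision.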
This webinar by Dov Nimratz (Senior Solution Architect, Consultant, GlobalLogic) was delivered at Embedded Community Webinar #1 on July 7, 2020.
Webinar agenda:
- CPU / GPU / TPU architectures
- Historical context
- CPUs and their variations
- GPUs, or a genie in a bottle for artificial intelligence tasks
- TPU architecture: a specialized artificial intelligence accelerator
- What's next in technology
More details and presentation: https://www.globallogic.com/ua/about/events/embedded-community-webinar-1/
The Internet of Things provides us with lots of sensor data. However, the data by themselves do not provide value unless we can turn them into actionable, contextualized information. Big data and data visualization techniques allow us to gain new insights through batch processing and offline analysis. Real-time sensor data analysis and decision-making are often done manually, but to make them scalable they should preferably be automated. Artificial Intelligence provides us with the framework and tools to go beyond trivial real-time decision and automation use cases for IoT.
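As a toy illustration of automating a real-time decision on streaming sensor data (a minimal sketch; the smoothing constant, warm-up length, and threshold are arbitrary), an exponentially weighted z-score can flag anomalous readings as they arrive:

```python
import math
import random

class StreamingAnomalyDetector:
    """Flag readings that deviate strongly from an EWMA baseline."""
    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha = alpha          # smoothing factor for mean and variance
        self.threshold = threshold  # z-score above which we raise an alert
        self.mean = None
        self.var = 0.0
        self.n = 0

    def update(self, x) -> bool:
        self.n += 1
        if self.mean is None:       # first sample initializes the baseline
            self.mean = x
            return False
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        std = math.sqrt(self.var) or 1e-9
        # Suppress alerts during warm-up while the baseline settles.
        return self.n > 30 and abs(x - self.mean) / std > self.threshold

detector = StreamingAnomalyDetector()
readings = [random.gauss(20.0, 0.5) for _ in range(500)] + [35.0]  # spike
alerts = [i for i, x in enumerate(readings) if detector.update(x)]
print("anomalies at sample indices:", alerts)
```

Real deployments would replace this single-signal heuristic with learned models, but the loop is the same: ingest, update state, decide, act.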
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... (HPC DAY)
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
"When systems are not just dozens of subsystems, but dozens of engineering teams, even our best and most experienced engineers routinely guess wrong about the root cause of poor end-to-end performance" – that is what they think at Google.
Latency tracing approach helps Google and many other companies to control stability and performance as well as helps to find root causes of performance degradation even in huge and complex distributed systems.
I'll explain what latency tracing is, how it helps you, and how you can implement it in your project. Finally, I will show a live demo using tools such as Dynatrace and Zipkin (a minimal tracing sketch follows the links below).
examples: https://github.com/kslisenko/java-performance
http://javaday.org.ua/kanstantsin-slisenka-profiling-distributed-java-applications/
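To make the idea concrete, here is a minimal latency-tracing sketch (illustrative only; it mimics the span model used by tools like Zipkin rather than calling their APIs, and all names are made up):

```python
import functools
import time
import uuid

spans = []  # in a real system these would be reported to a tracing backend

def traced(name):
    """Record a timing span, tagged with the request's trace id."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(trace_id, *args, **kwargs):
            start = time.time()
            try:
                return fn(trace_id, *args, **kwargs)
            finally:
                spans.append({"trace": trace_id, "span": name,
                              "start": start,
                              "ms": (time.time() - start) * 1000})
        return inner
    return wrap

@traced("db_query")
def db_query(trace_id):
    time.sleep(0.05)  # simulated slow dependency

@traced("handle_request")
def handle_request(trace_id):
    db_query(trace_id)  # the trace id propagates through the call chain

handle_request(uuid.uuid4().hex)
for s in sorted(spans, key=lambda s: s["start"]):
    print(f'{s["span"]:>16}  {s["ms"]:7.1f} ms  trace={s["trace"][:8]}')
```

Because every span shares the trace id, the slow hop in a distributed call chain can be pinpointed even when dozens of services are involved.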
HPC DAY 2017 | Altair's PBS Pro: Your Gateway to HPC Computing (HPC DAY)
HPC DAY 2017 - http://www.hpcday.eu/
Altair's PBS Pro: Your Gateway to HPC Computing
Dr. Jochen Krebs | Director Enterprise Sales Central & Eastern Europe at Altair
Model Simulation, Graphical Animation, and Omniscient Debugging with EcoreToo... (Benoit Combemale)
You have your shiny new modeling language up and running thanks to the Eclipse Modeling Technologies and you built a powerful graphical editor with Sirius to support it. But how can you see what is going on when a model is executed? Don't you need to debug your design in some way? Wouldn't you want to see your editors being animated directly within your modeling environment based on execution traces or simulator results?
In this talk, we will present Sirius Animator, an add-on to Sirius that provides you with a tool-supported approach to complement a modeling language with an execution semantics and a graphical description of an animation layer. The execution semantics is defined thanks to ALE, an Action Language for EMF integrated into Ecore Tools to modularly implement the bodies of your EOperations, and the graphical description of the animation layer is defined thanks to Sirius. From both inputs, Sirius Animator automatically provides an advanced and extensible environment for model simulation, animation and debugging, on top of the graphical editor of Sirius and the debug UI of Eclipse. To illustrate the overall approach, we will demonstrate the ability to seamlessly extend Arduino Designer in order to provide an advanced debugging environment that includes graphical animation, forward/backward step-by-step execution, breakpoint definition, etc.
LinuxKit, a toolkit for building custom minimal, immutable Linux distributions.
Secure defaults without compromising usability
Everything is replaceable and customisable
Immutable infrastructure applied to building Linux distributions
Completely stateless, but persistent storage can be attached
Easy tooling, with easy iteration
Built with containers, for running containers
Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
Designed to be managed by external tooling, such as Infrakit or similar tools
Includes a set of longer-term collaborative projects in various stages of development to innovate on kernel and userspace changes, particularly around security
HPC DAY 2017 | Prometheus – energy efficient supercomputing (HPC DAY)
HPC DAY 2017 - http://www.hpcday.eu/
Prometheus - energy efficient supercomputing
Marek Magrys | Manager of the Mass Storage Department, ACC Cyfronet AGH-UST
GPU databases – How to use them and what the future holds (Arnon Shimoni)
GPU databases are the hottest new thing, with about 7 different companies producing their own variant. In this session, we will discuss why they were created, how they are already disrupting the database world, and what the future of computing holds for them.
This presentation demonstrates how the power of NVIDIA GPUs can be leveraged to both accelerate speed to insight and to scale the amount of hot and warm data analyzed to meet the increasing demands of data scientists and business intelligence professionals alike, as well as to find tactical and strategic insights with greater speed on exponentially growing datasets.
Organizations commonly believe that they are advancing in analytical capabilities due to the rise of the data science profession and the myriad of technologies available for analytics, business intelligence, artificial intelligence and machine learning. However, if you do the math, they are actually falling behind: the rate at which data collection volumes grow far outpaces the growth of the hot and warm data actually used for analytics. This is causing organizations to rely on an ever-decreasing percentage of their information assets for decision making.
We talk about why GPU databases were created and share what sets SQream apart from other GPU databases, MPP solutions, in memory and Hadoop based analytic alternatives.
We will also outline how an organization can use GPU databases to thrive in the information revolution by using a significantly greater percentage of its data for analytical purposes, obtaining insights that are desired today, and will remain cost-effective into the next few years when data lakes are expected to balloon from petabytes to exabytes.
2018-11-06: Unfortunately, LinkedIn/SlideShare disabled the update functionality and, thus, I had to upload an updated version of this introduction to OMNeT++ as a new presentation. It is available here: https://www.slideshare.net/christian.timmerer/an-introduction-to-omnet-54
Vert.x is a toolkit or platform for implementing reactive applications on the JVM.
Vert.x is an open-source project at the Eclipse Foundation, initiated in 2012 by Tim Fox.
General Purpose Application Framework, Polyglot (Java, Groovy, Scala, Kotlin, JavaScript, Ruby and Ceylon), Event Driven, non-blocking, Lightweight & fast, Reusable modules.
Scylla Summit 2017: Repair, Backup, Restore: Last Thing Before You Go to Prod... (ScyllaDB)
Benchmarks are fun to do but when going to production, all sorts of things can happen: anything from hardware outages to human error bringing your database down. Even in a healthy database, a lot of maintenance operations have to periodically run. Do you have the tools necessary to make sure you are good to go?
High-Performance Computing – a key factor for the competitiveness of the country, ... (Igor José F. Freitas)
Video: https://www.youtube.com/watch?v=8cFqNwhQ7uE
A key factor for the competitiveness of the country, of science, and of industry.
Talk delivered during Intel Innovation Week 2015.
"Huawei focuses on R&D of IT infrastructure, cooling solutions, software integration, and provides end-to-end HPC solution by building ecosystems with partners. Huawei help customers from different sectors and fields, solving challenges and problems with computing resources, energy expenditure and business needs. This presentation will introduce how Huawei brings fresh technologies to next-generation HPC solutions for more innovation, higher efficiency and scale, as well as presenting our best practices for HPC."
Watch the video presentation: http://wp.me/p3RLHQ-f8J
Learn more: http://e.huawei.com/us/solutions/business-needs/data-center/high-performance-computing
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Accelerate Big Data Processing with High-Performance Computing Technologies (Intel® Software)
Learn about opportunities and challenges for accelerating big data middleware on modern high-performance computing (HPC) clusters by exploiting HPC technologies.
Data is the fuel for the idea economy, and being data-driven is essential for businesses to be competitive. HPE works with all the Hadoop partners to deliver packaged solutions for becoming data-driven. Join us in this session and you'll hear about HPE's enterprise-grade Hadoop solution, which encompasses the following:
-Infrastructure – Two industrialized solutions optimized for Hadoop; a standard solution with co-located storage and compute and an elastic solution which lets you scale storage and compute independently to enable data sharing and prevent Hadoop cluster sprawl.
-Software – A choice of all popular Hadoop distributions, and Hadoop ecosystem components like Spark and more. And a comprehensive utility to manage your Hadoop cluster infrastructure.
-Services – HPE’s data center experts have designed some of the largest Hadoop clusters in the world and can help you design the right Hadoop infrastructure to avoid performance issues and future proof you against Hadoop cluster sprawl.
-Add-on solutions – Hadoop needs more to fill in the gaps. HPE partners with the right ecosystem partners to bring you solutions such as industrial-grade SQL on Hadoop with Vertica, data encryption with SecureData, SAP ecosystem integration with SAP HANA VORA, multitenancy with BlueData, object storage with Scality, and more.
ICEOTOPE & OCF: Performance for Manufacturing (IceotopePR)
ICEOTOPE, OCF & The Advanced Manufacturing Research Centre (AMRC) define the performance required for a manufacturing environment and potential challenges to overcome in order to enable a faster time-to-market.
HPE Hybrid HPC strategy including UberCloud Containers (Thomas Francis)
Selected slides from HPE's Hybrid HPC strategy, via Jean-Luc Assor, Worldwide Director, Hybrid HPC / HPC Cloud. Includes breakthrough simulations in healthcare made possible by UberCloud Containers.
Delivering a Flexible IT Infrastructure for Analytics on IBM Power Systems (Hortonworks)
Customers are preparing themselves to analyze and manage an increasing quantity of structured and unstructured data. Business leaders introduce new analytical workloads faster than IT departments can handle. Legacy IT infrastructure needs to evolve to deliver operational improvements and cost containment, while increasing flexibility to meet future requirements. By providing HDP on IBM Power Systems, Hortonworks and IBM are giving customers more choice in selecting the architectural platform that is right for them. In this webinar, we'll discuss some of the challenges of deploying big data platforms, and how solutions built with HDP on IBM Power Systems can offer tangible benefits and the flexibility to accommodate changing needs.
IBM Consultants & System Integrators Interchange - 2015
http://www-07.ibm.com/events/in/csiinterchange/index.html
Demystify OpenPOWER
Speaker: Anand Haridass, Chief Engineer – Power System, IBM India
OpenPOWER is an open development community, using the POWER architecture to serve the evolving needs of customers. Hear about the success of the OpenPOWER strategy and Foundation, which is building momentum and fueling an explosion of new development, innovation, collaboration, and improved performance on the POWER architecture. What does this mean for your clients? Find out how OpenPOWER is expanding the Power ecosystem and its capabilities with new solutions coming from IBM and our partners.
Innovating to Create a Brighter Future for AI, HPC, and Big Data (inside-BigData.com)
In this deck from the DDN User Group at ISC 2019, Alex Bouzari from DDN presents: Innovating to Create a Brighter Future for AI, HPC, and Big Data.
"In this rapidly changing landscape of HPC, DDN brings fresh innovation with the stability and support experience you need. Stay in front of your challenges with the most reliable long term partner in data at scale."
Watch the video: https://wp.me/p3RLHQ-kxm
Learn more: http://ddn.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
More than 30 years of experience in Scientific Computing
In the early days, transtec focused on reselling DEC computers and peripherals, delivering high-performance workstations to university institutes and research facilities. In 1987, SUN/SPARC and storage solutions broadened the portfolio, enhanced by IBM RS/6000 products in 1991. These were the typical workstations and server systems for high performance computing at the time, used by the majority of researchers worldwide. In the late 90s, transtec was one of the first companies to offer highly customized HPC cluster solutions based on standard Intel architecture servers, some of which entered the TOP500 list of the world's fastest computing systems.
This brochure focuses on where transtec HPC solutions excel. transtec HPC solutions use the latest and most innovative technology: Bright Cluster Manager as the technology leader for unified HPC cluster management, the leading-edge Moab HPC Suite for job and workload management, Intel Cluster Ready certification as an independent quality standard for our systems, and Panasas HPC storage systems for the highest performance and the real ease of management required of a reliable HPC storage system. With these components, usability, reliability and ease of management are central issues that are addressed, even in a highly heterogeneous environment. transtec is able to provide customers with well-designed, extremely powerful solutions for Tesla GPU computing, as well as thoroughly engineered Intel Xeon Phi systems. Intel's InfiniBand Fabric Suite makes managing a large InfiniBand fabric easier than ever before – transtec masterfully combines excellent and well-chosen components into a fine-tuned, customer-specific, and thoroughly designed HPC solution.
Your decision for a transtec HPC solution means you opt for the most intensive customer care and the best service in HPC. Our experts will be glad to bring in their expertise and support to assist you at any stage, from HPC design to daily cluster operations, to HPC cloud services.
Last but not least, transtec HPC Cloud Services provide customers with the possibility to have their jobs run on dynamically provided nodes in a dedicated datacenter, professionally managed and individually customizable. Numerous standard applications like ANSYS, LS-DYNA, OpenFOAM, as well as many codes like GROMACS, NAMD, VMD, and others are pre-installed, integrated into an enterprise-ready cloud management environment, and ready to run.
Have fun reading the transtec HPC Compass 2013/14
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Kubernetes & AI – Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I have been wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security as an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. I have also seen, many times, developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to successfully launch the project – and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
5. Technology features – key points to remember
HPC – High Performance Computing:
– High performance, high density
– Fast interconnects
– Scalable storage
– Highly efficient infrastructure
6. HPC Solutions Business Unit – HPE HPC BU solution areas
High Performance Computing:
– Monte Carlo simulations
– Oil & gas computations
– Manufacturing, intelligence
– Life sciences (bio, chem, …)
AI & Big Data applications:
– Deep learning, AI
– HPDA (Hadoop, Spark)
– In-memory compute & DB
– Rendering, content
Scale-out storage:
– Scale-out storage
– Media asset archives
– High-performance storage
– Video surveillance
Performance-optimized datacenters:
– Modular datacenters
– Mobile datacenters
– Green DC (low PUE)
– EMI/EMR-protected DC
7. What do we deliver?
We design and deliver a complete customer-specified solution, including application software if needed (often we stop with middleware), delivered pre-built and tested to the highest level of quality, ready to plug in and switch on with the shortest time to acceptance.
9. HPE is a proven leader in the high-end supercomputing segment
TOP500, 47th edition – analysis summary:
– HPE again in the #1 position – 128 systems (26%)
– Lenovo #2, with 84 systems (17%)
– Cray #3, with 60 systems (12%)
– SGI #7, with 25 systems (5%)
Comparison – number of systems:
Vendor  | Top 25 | Top 50 | Top 100 | Top 500
HPE     |      0 |      2 |       3 |     127
SGI     |      4 |      5 |      10 |      26
Cray    |     11 |     18 |      30 |      60
Lenovo  |      0 |      1 |       6 |      84
Top 500 vendor comparison: the enhanced Top 100 shows HPE + SGI leadership.
11. HPE strategy – accelerate HPC leadership today and into the future
We are in the high performance computing solutions business.
Horizon 1 (6 to 12 months) – HPC and AI market leadership:
– Solutions: Risk Compliant Archive; Trade and Match; Quantitative Finance Library; Next Gen Sequencing; CAE Solution
– Optimized platforms: HPC and AI compute solutions; HPC and AI storage solutions; deep learning solutions
– Software stacks: Lustre; Remote Graphics; Cognitive Toolkit
– HPC and AI markets / industries: financial services industry; life sciences, health; oil & gas, energy; manufacturing, EDA / CAE; academia, research; government; weather
Horizon 2 (next 12 to 24 months) – HPC advanced technology and development:
– NRE efforts
– Forward selling
– Early ships
– Time to market
Metrics: domain expertise; customer loyalty; share growth; innovation
12. PathForward Exascale program – ensuring US competitiveness in the global market
– PathForward is a Department of Energy (DOE) Non-Recurring Engineering (NRE) initiative
– Central element of the DOE Exascale Computing Project (ECP) Hardware Technology effort
– Funding for R&D of technologies to develop the next-generation compute infrastructure; includes open architectures and alternative processors
– Cornerstone of U.S. scientific progress, technological innovation, economic vitality, and a strong national defense
13. Solving complex HPC and AI challenges with a hybrid cluster – the new Tokyo Institute of Technology supercomputer
TSUBAME 3.0 supercomputer:
– Available to outside researchers in the private sector through JHPCN and HPCI
– Ranked #1 on the Green500 list – the most energy-efficient supercomputer in the world, running on HPE infrastructure
– Supports significant AI and scientific HPC workloads, providing unprecedented ability to analyze large data sets
– Largest Tesla P100 SXM2 deployment to date, with 2,160 NVLink-enabled GPUs
Key features:
– 540 compute nodes
– Two (2) Intel® Xeon® E5-2680 v4 processors per node
– Four (4) NVIDIA Tesla P100 NVLink GPUs per node
– NVMe-compatible, high-speed 1.08 PB SSDs
– Four (4) Intel Omni-Path connectors per node
– Rich fat-tree configuration
– 400 Gb/s bandwidth per node
"Through our partnership with SGI, and now HPE, the Tokyo Institute of Technology has worked successfully to deliver a converged world-leading HPC and deep learning platform…" – Satoshi Matsuoka, Professor and TSUBAME Leader, Tokyo Institute of Technology
14. World's largest chemical company creates chemistry with HPC – an HPE supercomputer enables global digital transformation at BASF
BASF supercomputer:
– Designed to be one of the world's largest supercomputers
– Drives the digitalization of BASF's worldwide research
– Shortens modeling / simulation times (from months to days)
– Solves complex problems while decreasing discovery time
– Runs virtual experiments to reduce time-to-market and lower costs
Key features:
– HPE Apollo 6000 Gen10
– > 1 petaflop using the next-gen platform
– A multitude of nodes working simultaneously on highly complex tasks
– Dramatically reduced processing time
"The new supercomputer will promote the application and development of complex modeling and simulation approaches, opening up completely new avenues for our research at BASF." – Dr. Martin Brudermueller, Vice Chairman of the Board of Executive Directors and CTO, BASF
(BASF cluster – HPE factory build in Houston, TX, May 2017)
15. Exascale required to solve the world's most complex problems
Deep learning, IoT and artificial intelligence systems will need exascale computing. Today's TOP500 systems have an aggregated compute power of ~1 exaFLOPS and consume 650 MW of power (more than half a gigawatt) – a huge CO2 footprint.
– Weather: accurate regional impact assessment of climate change
– Life sciences: accelerate and translate cancer research in RAS pathways, drug responses, and treatment strategies
– Manufacturing: additive manufacturing process design for qualifiable metal components
– Material sciences: efficiency and performance characteristics of materials for batteries, solar cells, and optoelectronics
17. Accelerating HPC innovation for today and tomorrow
Workload optimized for extreme performance:
– New HPE SGI 8600: next-gen petaflop-scale, liquid-cooled supercomputer – greater performance, scale and efficiency
– New HPE Apollo 6000 Gen10: next-gen air-cooled, purpose-built enterprise HPC solution – best-in-class performance, rack-scale efficiency
– New HPE Apollo 10 series: cost-effective platforms for AI and emerging applications
Secure, agile, flexible compute experience:
– A new experience in IT security and protection: the World's Most Secure Servers for HPC and AI – HPE Apollo 6000 Gen10 (substantiation for quantifiable benefits in speaker notes)
– New HPE Performance Software Suite: out-of-the-box HPC stack, enhanced cluster system management and acceleration tools
– New services and consumption model: new advisory, professional and operational services; HPE Flexible Capacity for HPC
Exascale and advanced technology programs:
– DoE PathForward Exascale program: new exascale program to create reference designs, inspired by Memory-Driven Computing and Hewlett Packard Labs technologies
– New disruptive-technology-based system architecture: ARM processor-based system delivering more choice and flexibility for HPC; proofs of concept with select customers
– New collaboration for AI applications in precision medicine
18. HPE purpose-built portfolio for HPC
Supercomputing / enterprise / commercial HPC platforms:
– HPE SGI 8600: liquid cooled, delivering industry-leading performance, density and efficiency
– HPE Apollo 6000 Gen10: extreme compute performance in high density
– HPE Apollo 6000 Gen9: rack-scale HPC
– HPE Apollo 6500 Gen9: rack-scale GPU computing
– HPE Apollo 2000 Gen9: the bridge to enterprise scale-out architecture (emerging HPC)
– HPE Integrity Superdome X and HPE Integrity MC990 X: scale-up, shared-memory HPC, UV technologies (in-memory HPC)
HPC storage:
– HPE Apollo 4520 (additional storage options available)
– HPC Data Management Framework software: large-scale storage virtualization & tiered data management platform
Choice of fabrics: Intel® Omni-Path Architecture; Mellanox InfiniBand; HPE FlexFabric Network; Arista networking
Software – HPE, open source and commercial HPC software (HPE Performance Software Suite):
– HPE Performance Software – Core Stack
– HPE Insight Cluster Management Utility
– HPE SGI Management Suite
– HPE Performance Software – Message Passing Interface (available in August 2017)
HPC industry solutions: weather and climate research; financial services; life sciences, health; academia, research, gov't; oil and gas, energy; EDA / CAE; manufacturing
Services: advisory, professional and operational services – HPE Flexible Capacity for HPC, HPE Datacenter Care for Hyperscale
21. The New Normal: compute is not keeping up
[Chart: worldwide data growth in zettabytes, 2006–2020 – data nearly doubles every two years (2013–2020), growing from 0.3 ZB in 2006 to a projected 44 ZB in 2020.]
[Chart: microprocessor trend data, 1975–2015, on a log scale – transistors (thousands), single-thread performance (SpecINT), frequency (MHz), typical power (watts), and number of cores.]
22. We need a new type of compute – Memory-Driven Computing
– Structured data: 40 petabytes – Walmart's transaction database (2017)
– Human interaction data: 4 petabytes – per-day posting to Facebook across 1.1 billion active users (May 2016), about 4 kB per active user
– Digitization of analog reality: 40,000 petabytes a day – 10 million self-driving cars by 2020 (driver-assistance systems only)
[Diagram: per-sensor data rates for a self-driving car – front camera 20 MB/s; infrared camera 20 MB/s; front, rear and top-view cameras 40 MB/s; front ultrasonic sensors 10 kB/s; side ultrasonic sensors 100 kB/s; rear ultrasonic cameras 100 kB/s; front and rear radar sensors 100 kB/s; crash sensors 100 kB/s.]
23. Key attributes of Memory-Driven Computing
– Powerful: a quantum leap in performance, beyond what you can imagine
– Open: an open architecture designed to foster a vibrant innovation ecosystem
– Trusted: always safe, always recoverable – all the benefits without asking for sacrifice
– Simple: structurally simple, manageable and automatic, so that "it just works"
25. What are the core Memory-Driven Computing components?
– Fast, persistent memory: combining memory and storage in a stable environment to increase processing speed and improve energy efficiency
– Fast memory fabric: using photonics where necessary to eliminate distance and create otherwise impossible topologies
– Task-specific processing: optimizing processing from general to specific tasks
– New and adapted software: radically simplifying programming and enabling new applications that we can't even begin to build today
27. Memory-Driven Computing Developer Toolkit – software already available to you
– Example applications (image search, large-scale graph inference)
– Programming and analytics tools
– Operating system support
– Emulation/simulation tools
Get access to the toolkit: https://www.labs.hpe.com/the-machine/developer-toolkit
Open-source components, on Machine (prototype) hardware:
– Node operating system: Linux for Memory-Driven Computing
– Persistent Memory Library (pmem.io); persistent memory toolkit
– Librarian File System (LFS); Librarian; management services
– Fabric-attached memory atomics library
– Data management & programming frameworks: managed data structures; Sparkle; fault-tolerant programming; fast optimistic engine
– Emulation/simulation tools: performance emulation for NVM; fabric-attached memory emulation; x86 emulation (Superdome X, MC990 X, ProLiant)
28. HPE introduces the world's largest single-memory computer – the prototype contains 160 terabytes of memory
– 160 TB of shared memory spread across 40 physical nodes, interconnected using a high-performance fabric protocol
– An optimized Linux-based operating system running on ThunderX2, Cavium's flagship second-generation, dual-socket-capable ARMv8-A workload-optimized System on a Chip
– Photonics/optical communication links, including the new X1 photonics module, online and operational
– Software programming tools designed to take advantage of abundant persistent memory
34. Are we on the brink of a ….
– Change 1: moving from hunting and gathering to settling down in farms and ports
– Change 2: developing the printing press, and the industrial revolution
– Latest change: the greatest change of our lives – artificial intelligence
35. What is the size of the AI market?
[Chart: AI market in billions of US dollars – 1.38 (2016), 2.24 (2017), 4.07 (2018), 6.63 (2019), 10.53 (2020), 16.24 (2021), 24.16 (2022), 34.38 (2023), 46.52 (2024), 59.75 (2025).]
Total AI TAM:
Segment     | 2017 TAM | 2021 TAM | 4-yr CAGR
Services    | $0.7B    | $2.3B    | 32%
App         | $0.4B    | $1.6B    | 40%
Advisory    | $2.3B    | $18.5B   | 67%
Server – ML | $3.5B    | $4.6B    | 7%
Server – DL | $0.9B    | $4.4B    | 48%
Total       | $7.9B    | $31.3B   | 41%
By 2019, 40% of all digital transformation initiatives and 100% of all effective IoT efforts will be supported by AI capabilities¹. By 2018, 75% of developer teams will include AI functionality in one or more applications¹.
¹ Source: IDC IT Predictions 2017
37. AI vs. brain?
– Checkers (1995): AI – UAlberta's Chinook (white) vs. brain – Don Lafferty (red); game complexity ~10^20
– Chess (1997): AI – IBM Deep Blue (white) vs. brain – Garry Kasparov (black); game complexity ~10^47
– Go (2016): AI – Google AlphaGo (black) vs. brain – Lee Sedol (white); game complexity ~10^171
– Poker (2017): AI – HPE and CMU's Libratus vs. brain – Kim, Les, Chou, McAulay; game complexity ~10^160
38. Where can we use deep learning today? Applications:
– Vision: search & information extraction; security / video surveillance; self-driving cars; medical imaging; robotics
– Speech: interactive voice response (IVR) systems; voice interfaces (mobile, cars, gaming, home); security (speaker identification); health care; people with disabilities
– Text: search and ranking; sentiment analysis; machine translation; question answering
– Other: recommendation engines; advertising; fraud detection; AI challenges; drug discovery; sensor data analysis; diagnostic support
39. Applications break down
Task types:
– Detection: look for a known object/pattern
– Classification: assign a label from a predefined set of labels
– Generation: generate content
– Anomaly detection: look for abnormal, unknown patterns
Data types: images; video; text; speech; sensor; other
Example applications: video surveillance; speech recognition; sentiment analysis; predictive maintenance; fraud detection; image analysis
40. Where to start? Recommended DL stack by vertical application
– Verticals: manufacturing; oil & gas; connected cars; voice interfaces; social media
– Data type: speech; images; video; sensor data
– Data volume: small; moderate; large
– Typical layers (this is where the neural network sits; see the sketch after this list): convolutional; fully-connected; recurrent
– Frameworks: TensorFlow; Caffe 2; CNTK; Torch; …
– Infrastructure: x86; GPUs; FPGAs; TPU?; …
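To make the "typical layers" row concrete, here is a minimal sketch (assuming TensorFlow/Keras; the shapes and sizes are arbitrary) of the three layer families named on the slide:

```python
import tensorflow as tf

# Convolutional layers for image-like data, topped by a fully-connected head.
image_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # fully-connected layer
])

# A recurrent layer for sequential data such as speech or sensor streams.
sequence_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(100, 8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

image_model.summary()
sequence_model.summary()
```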
41. AI expertise and solutions to "get started" with deep learning models
A new foundation to "get started" with deep learning models – IT expertise and solutions:
– Get started: accelerate app development with a new integrated deep learning solution – a pre-configured, proven hardware & software solution (purpose-built platform; easy to use and install; simple management; automated framework updates) to enhance employee productivity
– Train your teams: gain organizational competencies with the enhanced Deep Learning Institute – state-of-the-art deep learning training (latest techniques; software frameworks; infrastructure requirements; hands-on, instructor-led)
– Leverage "out of the box" solutions: increase the security of e-commerce with the enhanced HPE Fraud Detection solution with Kinetica – uses deep learning techniques; qualified with the Kinetica in-memory GPU database; NVIDIA GPU accelerators
– Select ideal technologies & systems: make informed technology decisions with the new HPE Deep Learning Cookbook – a comprehensive technology selection tool that estimates & refines performance, characterizes frameworks, and recommends ideal hardware and software stacks
43. Where would the AI road take us?
Advances in artificial intelligence will transform modern life by reshaping transportation, health, science, finance, and the military. "High-level machine intelligence" (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. (Grace et al., "When Will AI Exceed Human Performance? Evidence from AI Experts")
Expert-forecast milestones:
– Driving a truck: 2027
– Retail: 2031
– Surgeon: 2043
– Writing a bestseller: 2049
– Math research: 2060
– Full automation of labor: 2140
labor – 2140