AWS Compute Evolved Week at the San Francisco Loft: Deep Dive on Amazon EC2 Accelerated Computing - Amazon Web Services
Researchers, scientists, and IT organizations are looking to develop, deploy, and deliver machine learning and HPC workloads by leveraging the agility, scalability, and availability of the public cloud. The Amazon EC2 Accelerated Computing platform includes Amazon EC2 P3, G3, and F1 instances. This session provides a detailed technical deep dive into these platforms and their key market use cases, including machine learning, high-performance computing, scientific research, and reconfigurable computing.
Speaker: Clinton Ford - Sr. Product Manager, EC2, AWS
Amazon EC2 provides resizable compute capacity in the cloud and makes web-scale computing easier for customers. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all available via highly flexible pricing options. This session covers the latest EC2 features and capabilities, including new instance families, the differences among their hardware types and capabilities, and their optimal use cases. We will also cover best practices for optimizing your EC2 spend so you get the most from your instances, saving time and money.
Snap ML is a machine learning framework for fast training of generalized linear models (GLMs) that can scale to large datasets. It uses multi-level parallelism across nodes and GPUs. Snap ML implementations include snap-ml-local for single nodes, snap-ml-mpi for multi-node HPC environments, and snap-ml-spark for Apache Spark clusters. Experimental results show Snap ML can train a logistic regression model on a 3TB Criteo dataset within 1.5 minutes using 16 GPUs.
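Snap ML exposes a scikit-learn-style fit/predict interface for GLMs. As a purely illustrative sketch of the kind of computation being accelerated (not Snap ML's actual implementation or API, and with invented toy data), here is logistic regression trained by batch gradient descent in plain Python:

```python
import math

def train_logistic_regression(X, y, lr=0.5, epochs=2000):
    """Minimal batch gradient descent for logistic regression (a GLM)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid link function
            err = p - yi                    # gradient of the log loss w.r.t. z
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0.0 else 0

# Invented, linearly separable toy data: label is 1 when x0 + x1 is large.
X = [[0.0, 0.0], [0.2, 0.3], [1.0, 0.9], [0.9, 1.1], [0.1, 0.4], [1.2, 0.8]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic_regression(X, y)
print([predict(w, b, xi) for xi in X])
```

The distributed variants (snap-ml-mpi, snap-ml-spark) parallelize this style of training across nodes and GPUs, which is how runs on terabyte-scale datasets like Criteo finish in minutes.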
This document discusses IBM's involvement in artificial intelligence and deep learning. It includes:
- An introduction to IBM's Cognitive Systems team working in AI.
- A brief history of IBM's AI projects including Deep Blue, Blue Gene, and Watson.
- Explanations of concepts like machine learning, deep learning, and how they relate to high performance computing.
- Details of IBM's current hardware, software, and services for AI workloads including the Power9 processor, PowerAI tools, and storage solutions.
The document provides an overview of IBM's expertise and offerings in the field of artificial intelligence.
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach... - Amazon Web Services
Amazon EC2 P3 instances offer up to eight of the latest NVIDIA Tesla V100 GPUs, with up to 13X the speed of previous-generation GPU instances. In this session, learn how Airbnb uses machine learning to make its services smarter and more engaging for customers, and how it uses P3 instances to dramatically lower the training time of its machine learning models while optimizing costs.
This document discusses using OpenMP 4.5 directives and CUDA to accelerate computational fluid dynamics (CFD) simulations on GPUs using OpenPOWER platforms. It describes porting an open-source CFD code called Code Saturne to leverage GPUs for tasks like linear algebra kernels and algebraic multigrid. It shows how OpenMP 4.5 data environments can be used to manage data movement between the host and device without modifying the code. Profiling results indicate that directive-based programming models can achieve speedups and improve programmer productivity when porting existing CPU codes to accelerate tasks on GPUs.
This document discusses three key artificial intelligence capabilities of IBM's Power9 architecture:
1) Large Memory Support enables processing of high-definition images and large models that exceed GPU memory limits.
2) Distributed Deep Learning allows scaling to multiple servers for faster and more accurate training on large datasets.
3) PowerAI Vision provides tools for labeling data, training models for computer vision tasks, and deploying models for production use.
Backend.AI Technical Introduction (19.09 / 2019 Autumn) - Lablup Inc.
This slide deck introduces the technical specs and details of Backend.AI 19.09:
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology that lets many containers share a single GPU at the same time
* NVIDIA GPU Cloud integrations
* Enterprise features
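To make the fractional-GPU idea concrete, here is a hypothetical pure-Python sketch of the bookkeeping involved: containers request fractions of a device, and the scheduler admits them only while capacity remains. This is illustrative only; Backend.AI's actual implementation enforces the limits at the container and driver level.

```python
class FractionalGPU:
    """Toy admission-control scheduler for fractional GPU sharing.
    Hypothetical sketch, not Backend.AI's actual implementation."""

    def __init__(self, capacity=1.0):
        self.capacity = capacity   # one whole GPU
        self.allocations = {}      # container id -> fraction held

    def allocate(self, container_id, fraction):
        used = sum(self.allocations.values())
        # Admit only if the request is positive and fits in what is left.
        if fraction <= 0 or used + fraction > self.capacity + 1e-9:
            return False
        self.allocations[container_id] = fraction
        return True

    def release(self, container_id):
        self.allocations.pop(container_id, None)

gpu = FractionalGPU()
print(gpu.allocate("train-job", 0.5))   # admitted
print(gpu.allocate("notebook", 0.25))   # admitted
print(gpu.allocate("batch-job", 0.5))   # rejected: only 0.25 left
gpu.release("train-job")
print(gpu.allocate("batch-job", 0.5))   # admitted after the release
```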
Amazon EC2 provides resizable compute capacity in the cloud, making web scale computing easier. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all with flexible pricing options. In this session, learn about the latest Amazon EC2 features and capabilities, including new instance families, the differences among their hardware types and capabilities, and their optimal use cases. Also discover best practices for optimizing your expenditure and getting the most benefit from your EC2 instances while saving time and money.
RAPIDS: GPU-Accelerated ETL and Feature Engineering - Keith Kraus
The RAPIDS suite of open source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
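cuDF, the RAPIDS dataframe library, intentionally mirrors the pandas API, so a CPU pipeline can often be ported with little more than an import change. The following sketch uses pandas with invented data to show the shape of an ETL-plus-feature-engineering step that RAPIDS accelerates; on a GPU one would use `cudf` in place of `pandas`.

```python
import pandas as pd

# Invented transaction data standing in for a large dataset.
df = pd.DataFrame({
    "user": ["a", "a", "b", "b", "c"],
    "amount": [10.0, 15.0, 7.0, 3.0, 40.0],
    "category": ["food", "food", "travel", "food", "travel"],
})

# ETL: filter out small transactions, then feature engineering:
# per-user aggregates that could feed a downstream ML model.
features = (
    df[df["amount"] > 5.0]
      .groupby("user", as_index=False)
      .agg(total=("amount", "sum"), n_tx=("amount", "count"))
)
print(features)
```

On GPUs, the same dataframe operations run over high-bandwidth device memory, which is where the end-to-end pipeline speedups come from.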
JMI Techtalk: 한재근 - How to use GPU for developing AI - Lablup Inc.
This Techtalk introduces, along with technical resources, the various methods NVIDIA provides for improving performance when using GPUs for AI development. In particular, it covers in detail how to improve performance by adopting mixed precision on the Volta architecture.
A Primer on FPGAs - Field Programmable Gate Arrays - Taylor Riggan
A focus on the use of FPGAs by cloud service providers. Includes Microsoft Azure Catapult, Google Tensor Processors, and Amazon EC2 F1 instances. Also includes background info on how to get started with FPGAs
Open Source RAPIDS GPU Platform to Accelerate Predictive Data Analytics - inside-BigData.com
Today NVIDIA announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed.
“Data analytics and machine learning are the largest segments of the high performance computing market that have not been accelerated — until now,” said Jensen Huang, founder and CEO of NVIDIA, who revealed RAPIDS in his keynote address at the GPU Technology Conference. “The world’s largest industries run algorithms written by machine learning on a sea of servers to sense complex patterns in their market and environment, and make fast, accurate predictions that directly impact their bottom line.”
"RAPIDS open-source software gives data scientists a giant performance boost as they address highly complex business challenges, such as predicting credit card fraud, forecasting retail inventory and understanding customer buying behavior. Reflecting the growing consensus about the GPU’s importance in data analytics, an array of companies is supporting RAPIDS — from pioneers in the open-source community, such as Databricks and Anaconda, to tech leaders like Hewlett Packard Enterprise, IBM and Oracle."
Learn more: https://insidehpc.com/2018/10/open-source-rapids-gpu-platform-accelerate-predictive-data-analytics/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/xilinx/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Nick Ni, Director of Product Marketing at Xilinx, presents the "Xilinx AI Engine: High Performance with Future-proof Architecture Adaptability" tutorial at the May 2019 Embedded Vision Summit.
AI inference demands orders-of-magnitude more compute capacity than what today’s SoCs offer. At the same time, neural network topologies are changing too quickly to be addressed by ASICs that take years to go from architecture to production. In this talk, Ni introduces the Xilinx AI Engine, which complements the dynamically programmable FPGA fabric to enable ASIC-like performance via custom data flows and a flexible memory hierarchy. This combination provides an orders-of-magnitude boost in AI performance along with the hardware architecture flexibility needed to quickly adapt to rapidly evolving neural network topologies.
This is a presentation about how to use Kubeflow for "AI pipeline optimization": we show the "traditional" pipeline and why it should be optimized, to make it available to a wider audience. Services are getting more and more important nowadays; that's why we call it "Data Science as a service".
The document provides details about an OpenPOWER and AI workshop being held on June 18-19, 2018 at the Barcelona Supercomputing Center.
Day 1 will provide an introduction to AI and cover topics like Power9 and PowerAI features, large model support, and use case demonstrations. Day 2 will focus on deeper learning exercises and industry use cases using Power9 features like distributed deep learning.
The agenda lists out the schedule and topics to be covered each day, including welcome sessions, technical presentations, breaks and wrap-up discussions.
Mellanox is a supplier of interconnect solutions headquartered in Israel with worldwide offices and over 2,700 employees. It provides adapters, switches, cables, and transceivers for high-speed InfiniBand and Ethernet connectivity. Mellanox's solutions accelerate high performance computing and artificial intelligence workloads through technologies like GPUDirect, RDMA, and in-network computing capabilities. Mellanox's products are used to build several of the world's fastest supercomputers and its technologies help unlock the power of artificial intelligence for leading companies.
This document discusses the evolution of data storage needs from traditional structured data to modern unstructured data like objects and machine data. It outlines the four industrial revolutions defined by major technological advances. Pure Storage's FlashBlade is introduced as the industry's first data hub purpose-built for AI and deep learning, with massively parallel architecture powered by Purity software to scale without limits. Real-world customer examples demonstrate how FlashBlade accelerates AI initiatives for autonomous vehicles and powers some of the world's most powerful AI supercomputers.
This document discusses the evolution of computing from PCs to mobile-cloud to AI and IoT. It highlights how deep learning using GPUs has become a new computing model, with neural network complexity exploding to tackle increasingly complex challenges. It introduces Nvidia's Volta GPU and how it delivers revolutionary performance for deep learning training and inference through new tensor cores and optimizations for deep learning frameworks and models.
Accelerate Machine Learning Workloads using Amazon EC2 P3 Instances - SRV201 ... - Amazon Web Services
Organizations are tackling exponentially complex questions across advanced scientific, energy, high tech, and medical fields. Machine learning (ML) makes it possible to quickly explore a multitude of scenarios and generate the best answers, ranging from image, video, and speech recognition to autonomous vehicle systems and weather prediction. Learn how Amazon EC2 P3 instances can help data scientists, researchers, and developers significantly lower their time and cost to train ML models, speed up their development process, and bring innovations to market sooner.
RAPIDS – Open GPU-accelerated Data Science - Data Works MD
RAPIDS is an initiative driven by NVIDIA to accelerate the complete end-to-end data science ecosystem with GPUs. It consists of several open source projects that expose familiar interfaces making it easy to accelerate the entire data science pipeline- from the ETL and data wrangling to feature engineering, statistical modeling, machine learning, and graph analysis.
Corey J. Nolet
Corey has a passion for understanding the world through the analysis of data. He is a developer on the RAPIDS open source project focused on accelerating machine learning algorithms with GPUs.
Adam Thompson
Adam Thompson is a Senior Solutions Architect at NVIDIA. With a background in signal processing, he has spent his career participating in and leading programs focused on deep learning for RF classification, data compression, high-performance computing, and managing and designing applications targeting large collection frameworks. His research interests include deep learning, high-performance computing, systems engineering, cloud architecture/integration, and statistical signal processing. He holds a Master's degree in Electrical & Computer Engineering from Georgia Tech and a Bachelor's from Clemson University.
GTC Taiwan 2017: Performance Optimization Using GPUs on Google Cloud - NVIDIA Taiwan
The document discusses using GPUs on Google Cloud Platform for accelerating compute-intensive workloads. It describes how GPUs can provide significant performance gains for machine learning, high performance computing, and visualization workloads. It provides examples of customers like Schlumberger leveraging GPUs on GCP for oil exploration and Shazam for music fingerprinting. The document also highlights the flexibility, scalability, and cost benefits of using GPUs on Google Cloud Platform.
Everything is changing, from health care to the automotive markets, not to mention the financial markets and every type of engineering. Everything has stopped being created by an individual or, at best, a team, and is instead being developed and perfected using AI and hundreds of computers. Even AI is no longer something we can run on a single computer, no matter how powerful it is. What drives everything today is HPC, or high-performance computing, heavily linked to AI. In this session we discuss AI, HPC, the IBM Power architecture, and how it can help develop better health care, better automobiles, better financials, and better everything that we run on them.
by Jeanine Banks, Director of Product Management, EC2 Windows & Enterprise Workloads, AWS
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Technical understanding of AWS' offerings for GPU-based and FPGA-based accelerated computing
- Technical understanding of which Amazon EC2 Accelerated Computing services are ideal for running deep learning training and inference, advanced graphics applications, high performance computing, and reconfigurable computing
- Technical advantages of using Amazon EC2 Accelerated Computing services to run ML/DL and HPC workloads in the cloud
This document discusses using OpenMP 4.5 directives and CUDA to accelerate computational fluid dynamics (CFD) simulations on GPUs using OpenPOWER platforms. It describes porting an open-source CFD code called Code Saturne to leverage GPUs for tasks like linear algebra kernels and algebraic multigrid. It shows how OpenMP 4.5 data environments can be used to manage data movement between the host and device without modifying the code. Profiling results indicate that directive-based programming models can achieve speedups and improve programmer productivity when porting existing CPU codes to accelerate tasks on GPUs.
This document discusses three key artificial intelligence capabilities of IBM's Power9 architecture:
1) Large Memory Support enables processing of high-definition images and large models that exceed GPU memory limits.
2) Distributed Deep Learning allows scaling to multiple servers for faster and more accurate training on large datasets.
3) PowerAI Vision provides tools for labeling data, training models for computer vision tasks, and deploying models for production use.
Backend.AI Technical Introduction (19.09 / 2019 Autumn)Lablup Inc.
This slide introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology to use one GPU as many GPUs on many containers at the same time.
* NVidia GPU Cloud integrations
* Enterprise features
Amazon EC2 provides resizable compute capacity in the cloud, making web scale computing easier. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all with flexible pricing options. In this session, learn about the latest Amazon EC2 features and capabilities, including new instance families, the differences among their hardware types and capabilities, and their optimal use cases. Also discover best practices for optimizing your expenditure and getting the most benefit from your EC2 instances while saving time and money.
RAPIDS: GPU-Accelerated ETL and Feature EngineeringKeith Kraus
The RAPIDS suite of open source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
JMI Techtalk: 한재근 - How to use GPU for developing AILablup Inc.
이 Techtalk에서는 AI 개발을 위해 GPU를 사용할 때 Nvidia가 제공하는 성능 향상을 위한 다양한 방법들을 기술자료들과 함께 소개합니다. 특히 Volta 아키텍처를 기반으로 Mixed precision을 도입하여 성능을 향상하는 과정에 관한 내용을 자세히 다룹니다.
This Techtalk introduces a variety of ways to improve the performance that Nvidia provides when using the GPU for AI development, along with technical resources. In particular, this talk discusses the process of improving performance by introducing mixed precision based on the Volta architecture.
A Primer on FPGAs - Field Programmable Gate ArraysTaylor Riggan
A focus on the use of FPGAs by cloud service providers. Includes Microsoft Azure Catapult, Google Tensor Processors, and Amazon EC2 F1 instances. Also includes background info on how to get started with FPGAs
Open Source RAPIDS GPU Platform to Accelerate Predictive Data Analyticsinside-BigData.com
Today NVIDIA announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed.
Data analytics and machine learning are the largest segments of the high performance computing market that have not been accelerated — until now,” said Jensen Huang, founder and CEO of NVIDIA, who revealed RAPIDS in his keynote address at the GPU Technology Conference. “The world’s largest industries run algorithms written by machine learning on a sea of servers to sense complex patterns in their market and environment, and make fast, accurate predictions that directly impact their bottom line.
"RAPIDS open-source software gives data scientists a giant performance boost as they address highly complex business challenges, such as predicting credit card fraud, forecasting retail inventory and understanding customer buying behavior. Reflecting the growing consensus about the GPU’s importance in data analytics, an array of companies is supporting RAPIDS — from pioneers in the open-source community, such as Databricks and Anaconda, to tech leaders like Hewlett Packard Enterprise, IBM and Oracle."
Learn more: https://insidehpc.com/2018/10/open-source-rapids-gpu-platform-accelerate-predictive-data-analytics/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/xilinx/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Nick Ni, Director of Product Marketing at Xilinx, presents the "Xilinx AI Engine: High Performance with Future-proof Architecture Adaptability" tutorial at the May 2019 Embedded Vision Summit.
AI inference demands orders- of-magnitude more compute capacity than what today’s SoCs offer. At the same time, neural network topologies are changing too quickly to be addressed by ASICs that take years to go from architecture to production. In this talk, Ni introduces the Xilinx AI Engine, which complements the dynamically- programmable FPGA fabric to enable ASIC-like performance via custom data flows and a flexible memory hierarchy. This combination provides an orders-of-magnitude boost in AI performance along with the hardware architecture flexibility needed to quickly adapt to rapidly evolving neural network topologies.
This is a presentation about how to use Kubeflow for "AI pipeline optimization" - we show the "traditional" pipeline and why it should be optimized to have it available to a wider audience. Services are getting more and more important nowadays - thats why we call it "Data Science as a service".
The document provides details about an OpenPOWER and AI workshop being held on June 18-19, 2018 at the Barcelona Supercomputing Center.
Day 1 will provide an introduction to AI and cover topics like Power9 and PowerAI features, large model support, and use case demonstrations. Day 2 will focus on deeper learning exercises and industry use cases using Power9 features like distributed deep learning.
The agenda lists out the schedule and topics to be covered each day, including welcome sessions, technical presentations, breaks and wrap-up discussions.
Mellanox is a supplier of interconnect solutions headquartered in Israel with worldwide offices and over 2,700 employees. It provides adapters, switches, cables, and transceivers for high-speed InfiniBand and Ethernet connectivity. Mellanox's solutions accelerate high performance computing and artificial intelligence workloads through technologies like GPUDirect, RDMA, and in-network computing capabilities. Mellanox's products are used to build several of the world's fastest supercomputers and its technologies help unlock the power of artificial intelligence for leading companies.
This document discusses the evolution of data storage needs from traditional structured data to modern unstructured data like objects and machine data. It outlines the four industrial revolutions defined by major technological advances. Pure Storage's FlashBlade is introduced as the industry's first data hub purpose-built for AI and deep learning, with massively parallel architecture powered by Purity software to scale without limits. Real-world customer examples demonstrate how FlashBlade accelerates AI initiatives for autonomous vehicles and powers some of the world's most powerful AI supercomputers.
This document discusses the evolution of computing from PCs to mobile-cloud to AI and IoT. It highlights how deep learning using GPUs has become a new computing model, with neural network complexity exploding to tackle increasingly complex challenges. It introduces Nvidia's Volta GPU and how it delivers revolutionary performance for deep learning training and inference through new tensor cores and optimizations for deep learning frameworks and models.
Accelerate Machine Learning Workloads using Amazon EC2 P3 Instances - SRV201 ...Amazon Web Services
Organizations are tackling exponentially complex questions across advanced scientific, energy, high tech, and medical fields. Machine learning (ML) makes it possible to quickly explore a multitude of scenarios and generate the best answers, ranging from image, video, and speech recognition to autonomous vehicle systems and weather prediction. Learn how Amazon EC2 P3 instances can help data scientists, researchers, and developers significantly lower their time and cost to train ML models, speed up their development process, and bring innovations to market sooner.
RAPIDS – Open GPU-accelerated Data Science (Data Works MD)
RAPIDS is an initiative driven by NVIDIA to accelerate the complete end-to-end data science ecosystem with GPUs. It consists of several open source projects that expose familiar interfaces making it easy to accelerate the entire data science pipeline- from the ETL and data wrangling to feature engineering, statistical modeling, machine learning, and graph analysis.
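The stages that RAPIDS accelerates can be illustrated with a pure-Python toy (the data and the trivial "model" here are invented for illustration; RAPIDS itself exposes pandas- and scikit-learn-style APIs such as cuDF and cuML that run the same kind of workflow on GPU memory):

```python
# Toy end-to-end pipeline mirroring the stages RAPIDS accelerates:
# ETL/data wrangling (cuDF), feature engineering, statistical modeling (cuML).
from statistics import mean

raw = [
    {"price": "10.0", "qty": "2"},
    {"price": "N/A",  "qty": "1"},   # dirty row, dropped during ETL
    {"price": "30.0", "qty": "4"},
]

# 1. ETL / data wrangling: parse strings, drop unusable rows
clean = [
    {"price": float(r["price"]), "qty": int(r["qty"])}
    for r in raw if r["price"] != "N/A"
]

# 2. Feature engineering: derive revenue per row
features = [r["price"] * r["qty"] for r in clean]

# 3. Statistical modeling: a trivial "model" flagging above-average rows
threshold = mean(features)
flags = [f > threshold for f in features]

print(features)  # [20.0, 120.0]
print(flags)     # [False, True]
```

On a GPU, cuDF would hold the table and cuML the model, so each stage runs without copying data back to the CPU.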
Corey J. Nolet
Corey has a passion for understanding the world through the analysis of data. He is a developer on the RAPIDS open source project focused on accelerating machine learning algorithms with GPUs.
Adam Thompson
Adam Thompson is a Senior Solutions Architect at NVIDIA. With a background in signal processing, he has spent his career participating in and leading programs focused on deep learning for RF classification, data compression, high-performance computing, and managing and designing applications targeting large collection frameworks. His research interests include deep learning, high-performance computing, systems engineering, cloud architecture/integration, and statistical signal processing. He holds a Master's degree in Electrical & Computer Engineering from Georgia Tech and a Bachelor's from Clemson University.
GTC Taiwan 2017: Performance Optimization with GPUs on Google Cloud (NVIDIA Taiwan)
The document discusses using GPUs on Google Cloud Platform for accelerating compute-intensive workloads. It describes how GPUs can provide significant performance gains for machine learning, high performance computing, and visualization workloads. It provides examples of customers like Schlumberger leveraging GPUs on GCP for oil exploration and Shazam for music fingerprinting. The document also highlights the flexibility, scalability, and cost benefits of using GPUs on Google Cloud Platform.
Everything is changing, from healthcare to the automotive and financial markets to every type of engineering: products are no longer created by an individual or, at best, a team, but are developed and perfected using AI and hundreds of computers. Even AI is no longer something we can run on a single computer, no matter how powerful it is. What drives everything today is HPC, or High-Performance Computing, closely linked to AI. In this session we will discuss AI, HPC, the IBM Power architecture, and how it can help deliver better healthcare, better automobiles, better financial services, and better everything that runs on them.
by Jeanine Banks, Director of Product Management, EC2 Windows & Enterprise Workloads, AWS
Researchers, scientists, and IT organizations are looking to develop, deploy, and deliver machine learning and HPC workloads by leveraging the agility, scalability, and availability of the public cloud. Amazon EC2 Accelerated Computing platform products include Amazon EC2 P3, G3, and F1 instances. This session provides a detailed technical deep dive into these platforms and their key market use cases, which include machine learning, high performance computing, scientific research, and reconfigurable computing.
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks (Amazon Web Services)
Learning Objectives:
- Technical understanding of AWS' offerings for GPU-based and FPGA-based accelerated computing
- Technical understanding of which Amazon EC2 Accelerated Computing services are ideal for running deep learning training and inference, advanced graphics applications, high performance computing, and reconfigurable computing
- The technical advantages of using Amazon EC2 Accelerated Computing services to run ML/DL and HPC workloads in the cloud
Accelerate ML workloads using EC2 accelerated computing - CMP202 - Santa Clar... (Amazon Web Services)
Machine learning (ML) facilitates quick exploration into a multitude of scenarios to generate the best solution to complex issues in image, video, speech recognition, autonomous vehicle systems, and weather prediction. For data scientists, researchers, and developers who want to speed up development of their ML applications, Amazon EC2 P3 instances are the most powerful, cost-effective, and versatile GPU compute instances available in the cloud, while Amazon EC2 G4 instances are cost-effective for deploying ML models to production. In this session, we discuss P3 and G4 instances and how to use them for various use cases to meet your ML needs.
This document provides an overview of Amazon Elastic Compute Cloud (EC2) foundations, including resources, instances, storage, networking, availability, management, deployment, monitoring, administration, and purchase options. It describes EC2 instances and Amazon Machine Images (AMIs) that define the virtual server environment. It also covers Amazon Elastic Block Store (EBS) for persistent block level storage and networking components like Amazon Virtual Private Cloud (VPC), security groups, and elastic network interfaces. The document discusses high-level concepts like regions, availability zones, and placement groups that influence availability and performance.
Amazon EC2 provides resizable compute capacity in the cloud, making web scale computing easier. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all with flexible pricing options. In this session, learn about the latest Amazon EC2 features and capabilities, including new instance families, the differences among their hardware types and capabilities, and their optimal use cases. Also discover best practices for optimizing your expenditure and getting the most benefit from your EC2 instances while saving time and money.
by Jeanine Banks, Director of Product Management, EC2 Windows & Enterprise Workloads, AWS
Amazon EC2 provides resizable compute capacity in the cloud and makes web scale computing easier for customers. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to high performance supercomputing on demand, all available via highly flexible pricing options. This session covers the latest EC2 features and capabilities, including new instance families available in Amazon EC2, the differences among their hardware types and capabilities, and their optimal use cases. We will also cover some best practices on how you can optimize your spend on EC2 to make the most of your EC2 instances, saving time and money.
Amazon EC2 provides resizable compute capacity in the cloud, making web scale computing easier. It offers a wide variety of compute instances and is well suited to every imaginable use case, from static websites to on-demand, high-performance supercomputing, all with flexible pricing options. In this session, learn about the latest Amazon EC2 features and capabilities, including new instance families, understand the differences among their hardware types and capabilities, and explore their optimal use cases.
Accelerating Development Using Custom Hardware Accelerations with Amazon EC2 ... (Amazon Web Services)
This document discusses accelerating development using custom hardware accelerations with Amazon F1 instances. It provides an overview of F1 instances and their capabilities for hardware acceleration using FPGAs. It also discusses tools, examples, and use cases for building and loading custom hardware accelerations onto F1 instances through Amazon FPGA Images (AFIs). Performance results are shown for a financial simulation model accelerated by over 600x using an F1 instance compared to CPU-only.
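The 600x figure applies to the accelerated kernel; the end-to-end gain depends on how much of the workload the FPGA actually offloads. A back-of-the-envelope Amdahl's-law sketch (the fractions below are illustrative assumptions, not numbers from the deck):

```python
def overall_speedup(accelerated_fraction, kernel_speedup):
    """Amdahl's law: end-to-end speedup when only a fraction of the
    workload runs on the accelerator, the rest staying on the CPU."""
    serial = 1.0 - accelerated_fraction
    return 1.0 / (serial + accelerated_fraction / kernel_speedup)

# Even with a 600x kernel, end-to-end gain is capped by the CPU-side share:
print(round(overall_speedup(0.95, 600.0), 1))   # ~19.4
print(round(overall_speedup(0.999, 600.0), 1))  # ~375.2
```

This is why F1 results quote the accelerated simulation kernel: the closer the whole pipeline moves onto the FPGA, the closer you get to the raw kernel speedup.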
Accelerate Your C/C++ Applications with Amazon EC2 F1 Instances (CMP405) - AW... (Amazon Web Services)
The document discusses Amazon EC2 F1 instances for accelerating C/C++ applications using FPGAs. It provides an overview of F1 instances and capabilities like packet processing and multiple AFIs. Updates include more regions, sizes, and tools. Examples demonstrate financial computing, genomics, and video acceleration. The workshop will use precompiled AFIs and F1.2xlarge instances to introduce FPGA development.
Amazon EC2 deep dive and a sprinkle of AWS Compute | AWS Floor28 (Amazon Web Services)
The document discusses an upcoming AWS event called "AMAZON EC2 DEEPDIVE AND A SPRINKLE OF AWS COMPUTE" presented by Doron Rogov. The event agenda lists technical sessions on various AWS topics occurring from October 14-23. The presentation will cover choosing Amazon EC2 instances, how performance is characterized for different workloads, how EC2 instances provide flexibility and agility while delivering performance, and how to optimize the EC2 instance experience through various instance types.
Amazon EC2 provides resizable compute capacity in the cloud and makes web-scale computing easier for customers. It offers a wide variety of compute instances well suited to every imaginable use case, from static websites to high-performance supercomputing on-demand, all available through highly flexible pricing options. This session covers the latest Amazon EC2 features and capabilities, including new instance families available in Amazon EC2, the differences among their hardware types and capabilities, and their optimal use cases. We also cover some best practices on how you can optimize your expenditure on Amazon EC2 to make the most of your EC2 instances, saving time and money.
This document provides an overview of the different types of Amazon Elastic Compute Cloud (EC2) instances available on AWS. It discusses the various instance families including general purpose, compute optimized, memory optimized, storage optimized, and accelerated computing instances. For each instance type, the document outlines the appropriate use cases, available configurations over time, and performance improvements compared to earlier generations. It also describes how the underlying AWS Nitro System enables rapid innovation and delivery of new EC2 instances with varying processor architectures, storage options, and network capabilities.
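The family taxonomy above can be sketched as a simple lookup (the family letters are real EC2 families, but the selection rule here is an illustrative simplification, not AWS sizing guidance):

```python
# Illustrative mapping from workload trait to an example EC2 instance
# family from each category discussed above.
FAMILIES = {
    "general": "m5",   # general purpose
    "compute": "c5",   # compute optimized
    "memory":  "r5",   # memory optimized
    "storage": "i3",   # storage optimized
    "gpu":     "p3",   # accelerated computing (GPU)
    "fpga":    "f1",   # accelerated computing (FPGA)
}

def pick_family(workload):
    """Return an example family for a workload trait, defaulting to
    general purpose when nothing more specific applies."""
    return FAMILIES.get(workload, "m5")

print(pick_family("gpu"))      # p3
print(pick_family("unknown"))  # m5
```

Real selection also weighs instance size, network and storage options, and the Nitro-based capabilities of newer generations.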
The document summarizes announcements from AWS re:Invent 2019, including new AWS services and capabilities:
- AWS Outposts brings AWS infrastructure on-premises for applications requiring low latency; it offers EC2, EBS, and other services.
- Local Zones place AWS compute and storage closer to end-users for applications requiring single-digit millisecond latencies.
- Wavelength extends AWS to 5G networks by hosting infrastructure in communication service providers' networks, enabling very low latency applications over 5G.
The document discusses a leadership session on using cloud technologies to accelerate innovation for intelligent, connected products in the high-tech and semiconductor industries. It highlights key workloads like electronic design automation (EDA) and examples of companies innovating faster on AWS through more efficient EDA workflows, faster software testing, and reduced product development times.
FPGA Accelerated Computing Using Amazon EC2 F1 Instances - CMP308 - re:Invent... (Amazon Web Services)
Amazon EC2 F1 instances with field programmable gate arrays (FPGAs), combined with improved cloud-based FPGA programming tools, provide researchers, application developers, and startups with a well-tested, standardized, and accessible platform for hardware-accelerated computing. This session introduces you to Amazon EC2 F1 instances with FPGAs, walks you through a typical development and deployment process, and highlights a number of use cases in different domains, including genomics, video processing, text search, and financial computing.
This session introduces you to Amazon EC2 F1 instances and walks you through a typical development and deployment process, including the new Amazon EC2 F1 C/C++ development workflow. We also discuss a number of use cases in different domains, including financial risk simulation, genomics, video processing, and big data and analytics, along with a discussion of acceleration work on top of EC2 F1.
High Performance Computing (HPC) on AWS - CMP201 - Sao Paulo Summit (Amazon Web Services)
High performance computing (HPC) in the cloud enables large-scale compute- and graphics-intensive workloads across many industries, including aerospace, manufacturing, life sciences, financial services, and energy. AWS gives application developers and users unprecedented compute power for massively parallel applications in areas such as large-scale fluid and materials simulation, 3D content rendering, financial computing, and deep learning. In this session, we provide an overview of HPC capabilities on AWS. We describe the newest generation of general-purpose and accelerated compute instances, highlight customer and partner use cases across industries, and discuss new and emerging HPC use cases. Attendees will learn best practices for running HPC workflows in the cloud, including workflow automation and optimization.
Similar to Deep Dive on Amazon EC2 Accelerated Computing
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn... (Amazon Web Services)
Forecasting is an important process for many companies and is used in a variety of areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data per le Startup: come creare applicazioni Big Data in modalità Server... (Amazon Web Services)
The variety and volume of data created every day is growing ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break through these limits.
We will therefore see how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over that period we learned how changing our approach to application development allowed us to dramatically increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances (Amazon Web Services)
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
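The savings claim can be sanity-checked with simple arithmetic (the hourly prices and fleet mix below are illustrative assumptions, not real AWS quotes):

```python
def blended_hourly_cost(n_instances, on_demand_price, spot_fraction, spot_discount):
    """Hourly fleet cost when a fraction of instances run on Spot
    at a given discount versus the On-Demand price."""
    spot_price = on_demand_price * (1.0 - spot_discount)
    n_spot = n_instances * spot_fraction
    n_od = n_instances - n_spot
    return n_spot * spot_price + n_od * on_demand_price

# 100 instances at a hypothetical $0.10/h On-Demand, 70% Spot discount:
od_only = blended_hourly_cost(100, 0.10, 0.0, 0.70)
mostly_spot = blended_hourly_cost(100, 0.10, 0.9, 0.70)
print(round(od_only, 2))                     # 10.0
print(round(mostly_spot, 2))                 # 3.7
print(round(1 - mostly_spot / od_only, 2))   # 0.63 -> 63% saved
```

Stateless container workloads tolerate Spot interruptions well, which is why ECS and EKS fleets can push the Spot fraction, and the savings, this high.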
In recent months, many customers have been asking us the question – how to monetise Open APIs, simplify Fintech integrations and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea... (Amazon Web Services)
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides services that are ready to use and, at the same time, lets you customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automatizza la gestione e i deployment del... (Amazon Web Services)
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; until now they have often involved manual activities, occasionally causing application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, ensuring greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads (Amazon Web Services)
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event on Wednesday, October 14 from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's features.
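The "cryptographically verifiable log" at the heart of a ledger database can be sketched with a hash chain. This is a toy illustration of the idea, not QLDB's actual implementation:

```python
import hashlib
import json

def append_entry(ledger, data):
    """Append an entry whose hash covers the previous entry's hash,
    chaining the whole history together."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    ledger.append({"data": data, "prev": prev_hash,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(ledger):
    """Recompute every hash in order; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps({"data": entry["data"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"account": "A-1", "debit": 50})
append_entry(ledger, {"account": "A-1", "credit": 20})
print(verify(ledger))              # True
ledger[0]["data"]["debit"] = 5000  # tamper with history...
print(verify(ledger))              # False
```

QLDB maintains this kind of chained digest for you (via Merkle trees) and exposes a digest API for verification, so you never build or operate the chain yourself.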
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great end-user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths debunked (Amazon Web Services)
Many organizations take advantage of the cloud by migrating their Oracle workloads, gaining significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when applications are moved out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads while accelerating the transformation to the cloud; they dive deeper into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform that was built from 2017 to connect ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
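A minimal sketch of the Lambda piece of that stack, written as an API Gateway proxy-integration handler. The function name and response content are illustrative; a real app would add validation and DynamoDB access via boto3:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy
    integration: reads a query-string parameter and returns JSON."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler is just a function and can be exercised directly
# with a hand-built event, no servers or infrastructure required:
resp = handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"])  # 200
print(resp["body"])        # {"message": "hello, dev"}
```

This is what makes the serverless stack cost-effective to develop against: the backend logic is plain functions, testable offline, and API Gateway, DynamoDB, S3, and Amplify are wired in only at deployment time.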
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien... (Amazon Web Services)
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and the related lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.