© 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.
WEBINAR
17.03.20
Getting started with
AWS Machine Learning
Cobus Bernard
Senior Developer Advocate
Amazon Web Services
@cobusbernard
cobusbernard
cobusbernard
Agenda
• Overview of AI/ML on AWS
• Amazon Polly
• Amazon Lex
• Amazon Rekognition
• Amazon SageMaker
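Each service on this agenda is reachable with a few lines of SDK code. As a flavour of what's ahead, here is a minimal sketch calling Amazon Rekognition's `detect_labels` from Python with boto3 — it assumes AWS credentials are already configured, and the image path is a placeholder:

```python
# Hedged sketch: label detection with Amazon Rekognition via boto3.

def build_detect_labels_request(image_bytes, max_labels=10, min_confidence=80.0):
    """Assemble the keyword arguments for rekognition.detect_labels(...)."""
    return {
        "Image": {"Bytes": image_bytes},
        "MaxLabels": max_labels,
        "MinConfidence": min_confidence,
    }

def detect_labels(image_path):
    import boto3  # imported lazily so the sketch loads without boto3 installed
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        resp = client.detect_labels(**build_detect_labels_request(f.read()))
    # Each returned label carries a Name and a Confidence score
    return [(label["Name"], label["Confidence"]) for label in resp["Labels"]]
```

The same request shape works whether the image is passed inline as bytes (as here) or referenced from S3 via an `S3Object` entry in the `Image` parameter.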
Our mission
Put machine learning in the hands of every developer and data scientist
The AWS ML stack
Broadest and deepest set of capabilities

AI Services (VISION · SPEECH · LANGUAGE · CHATBOTS · FORECASTING · RECOMMENDATIONS):
Amazon Rekognition Image · Amazon Rekognition Video · Amazon Textract · Amazon Polly · Amazon Transcribe · Amazon Translate · Amazon Comprehend & Amazon Comprehend Medical · Amazon Lex · Amazon Forecast · Amazon Personalize

ML Services:
Amazon SageMaker: Ground Truth · Notebooks · Algorithms + Marketplace · Reinforcement Learning · Training · Optimization · Deployment · Hosting

ML Frameworks + Infrastructure (FRAMEWORKS · INTERFACES · INFRASTRUCTURE):
EC2 P3 & P3dn · EC2 G4 · EC2 C5 · FPGAs · AWS Inferentia · AWS IoT Greengrass · Amazon Elastic Inference · AWS DL Containers & AMIs · Amazon Elastic Kubernetes Service · Amazon Elastic Container Service
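The AI Services layer at the top of the stack is consumed through simple API calls, with no model training required. A minimal sketch with Amazon Polly, assuming configured AWS credentials — the voice and output filename are illustrative choices, not requirements:

```python
# Hedged sketch: text-to-speech with Amazon Polly via boto3.

def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble the keyword arguments for polly.synthesize_speech(...)."""
    return {
        "Text": text,
        "VoiceId": voice_id,        # one of Polly's built-in voices
        "OutputFormat": output_format,
    }

def synthesize_to_file(text, path="speech.mp3"):
    import boto3  # imported lazily so the sketch loads without boto3 installed
    polly = boto3.client("polly")
    resp = polly.synthesize_speech(**build_polly_request(text))
    # The response streams the audio; write it out as a playable file
    with open(path, "wb") as f:
        f.write(resp["AudioStream"].read())

if __name__ == "__main__":
    synthesize_to_file("Hello from the AWS machine learning stack!")
```

Swapping `VoiceId` or `OutputFormat` (e.g. `"ogg_vorbis"` or `"pcm"`) is the extent of the tuning most applications need, which is the point of this layer of the stack.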
Thank you!
Cobus Bernard
Senior Developer Advocate
Amazon Web Services
@cobusbernard
cobusbernard
cobusbernard


Editor's Notes

  • #9 Within AWS we see the stack as having three layers. The bottom layer is for expert machine learning practitioners who work at the framework level and are comfortable building, training, tuning, and deploying machine learning models. This is the foundation for all of the innovation we drive at every other layer of the stack. We focus on performance, flexibility, and reducing costs, so that anyone can experiment across frameworks and capabilities with the latest infrastructure. We also make it easy to connect to the broader AWS ecosystem, whether that's pulling in IoT data from Greengrass, accessing our state-of-the-art GPU instances (P3), or leveraging Amazon Elastic Inference.

    The vast majority of deep learning and machine learning in the cloud is done on P3 instances in AWS. We recently announced P3dn instances, the most powerful GPU instances for machine learning you'll find anywhere: a hundred gigabits per second of networking, which changes how you can scale out, parallelize, and lower costs on these models. They offer networking throughput three times as fast as anything else out there, twice as much GPU memory, and a hundred-plus gigabytes more system memory. This is where you see customers starting to do machine learning at large scale.

    Customers use lots of different frameworks, and we support all the major frameworks they want to use. The one with the most resonance in the community right now is TensorFlow: 85 percent of TensorFlow run in the cloud runs on AWS, across customers like Expedia, Siemens, Xendex, News Corp, and Snap. But customers who run TensorFlow face challenges, particularly with scaling. They tell us it's difficult to drive high GPU utilization with TensorFlow, because the framework has significant processing overhead in distributing the weights of a neural network across a large number of GPUs.

    At AWS, we don't believe in one tool to rule the world; we want you to use the right tool for the right job. For video analytics or natural language processing, MXNet is a great solution and scales the best; for computer vision, Caffe2 is great; and there is incredibly innovative research being done on PyTorch too. More than half of our customers who do machine learning on AWS use more than two frameworks in their everyday work, and we will always make sure that all the frameworks you care about are supported equally well. The one constant in the very fluid world of machine learning is change: in the next couple of years there will be other frameworks you care about, and we'll support them as well.

    Additional info on infrastructure: AWS offers a broad array of compute options for training and inference, with powerful GPU-based instances, compute- and memory-optimized instances, and even FPGAs. The Amazon EC2 P3dn instance has four times the networking bandwidth and twice the GPU memory of the largest P3 instance, making it ideal for large-scale distributed training. P3dn.24xlarge instances offer 96 vCPUs of Intel Skylake processors to reduce the time spent preprocessing data for training, and their enhanced networking allows GPUs to be used more efficiently in multi-node configurations, so training jobs complete faster. The extra GPU memory lets developers handle more advanced models, such as holding and processing multiple batches of 4K images for image classification and object detection systems. C5 instances offer a higher memory-to-vCPU ratio, deliver a 25% price/performance improvement over C4 instances, and are ideal for demanding inference applications. We also have Amazon EC2 F1, a compute instance with field-programmable gate arrays (FPGAs) that you can program to create custom hardware accelerations for your machine learning applications. F1 instances are easy to program and come with everything you need to develop, simulate, debug, and compile your hardware acceleration code, and you can reuse your designs as many times, and across as many F1 instances, as you like.
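The framework layer described in these notes is typically driven through SageMaker training jobs running on instance families like P3. A sketch of the request shape that `create_training_job` expects via boto3 — the job name, container image URI, IAM role ARN, and S3 paths below are all placeholders:

```python
# Hedged sketch: the shape of a boto3 SageMaker create_training_job request.
# All ARNs, S3 URIs, and the container image below are placeholder values.

def build_training_job_request(job_name, image_uri, role_arn, s3_input, s3_output,
                               instance_type="ml.p3.2xlarge", instance_count=1):
    """Assemble the keyword arguments for sagemaker.create_training_job(...)."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,  # e.g. a TensorFlow/MXNet/PyTorch container
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_input,
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "ResourceConfig": {
            "InstanceType": instance_type,   # P3/P3dn family for GPU training
            "InstanceCount": instance_count,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

def start_training_job(**kwargs):
    import boto3  # imported lazily so the sketch loads without boto3 installed
    sm = boto3.client("sagemaker")
    return sm.create_training_job(**build_training_job_request(**kwargs))
```

Distributed training of the kind the notes describe would raise `instance_count` on a P3dn instance type, relying on the 100 Gbps networking to keep gradient exchange from becoming the bottleneck.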