Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
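The train step described above is driven by the SageMaker API. As a rough sketch (the field names follow the CreateTrainingJob API, but every ARN, S3 path, and image URI below is a made-up placeholder), a training job request might be assembled like this before being passed to a boto3 client:

```python
# Sketch of a SageMaker CreateTrainingJob request payload.
# All ARNs, S3 paths, and the image URI are hypothetical placeholders;
# in practice the dict would be passed to
# sagemaker_client.create_training_job(**request).

def build_training_job_request(job_name, image_uri, role_arn,
                               train_s3, output_s3,
                               instance_type="ml.m5.xlarge",
                               instance_count=1):
    """Assemble the request for a single training job."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,      # built-in or custom algorithm container
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                 # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": train_s3,
                }
            },
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {
            "InstanceType": instance_type,
            "InstanceCount": instance_count,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = build_training_job_request(
    job_name="demo-training-job",
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest",
    role_arn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    train_s3="s3://example-bucket/train/",
    output_s3="s3://example-bucket/output/",
)
print(request["ResourceConfig"]["InstanceType"])   # ml.m5.xlarge
```

Scaling training is then a matter of changing `instance_type` and `instance_count` rather than managing infrastructure yourself.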
Keith Steward - SageMaker Algorithms: Infinitely Scalable Machine Learning (Amazon Web Services)
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. Amazon SageMaker provides high-performance machine learning algorithms, optimized for speed, scale, and accuracy, that can train on petabyte-scale data sets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker notebook, which will be made available to attendees.
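The streaming-versus-batch distinction mentioned above can be illustrated with a statistic as simple as the mean and variance: a batch algorithm needs the whole dataset at once, while a streaming algorithm folds in one observation at a time in bounded memory and can therefore scale to data that never fits on a single machine. The sketch below uses Welford's online update, a standard textbook technique standing in for the single-pass style of SageMaker's streaming algorithms, not SageMaker's actual implementation:

```python
# Batch vs. streaming computation of mean and variance.
# Welford's online algorithm illustrates the single-pass, bounded-memory
# style of streaming algorithms; it is not SageMaker's implementation.

def batch_mean_var(xs):
    """Two-pass batch computation: requires the full dataset in memory."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

class StreamingMeanVar:
    """Single-pass (streaming) computation: O(1) memory per update."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def result(self):
        return self.mean, self.m2 / self.n

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
stream = StreamingMeanVar()
for x in data:          # observations arrive one at a time
    stream.update(x)

print(batch_mean_var(data))   # (5.0, 4.0)
print(stream.result())        # (5.0, 4.0)
```

Both paths produce the same answer; only the streaming one could keep running if `data` were an unbounded feed from S3.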
SageMaker Algorithms: Infinitely Scalable Machine Learning
by Nick Brandaleone, Solutions Architect, AWS
by Roy Ben-Alta, Business Development Manager, AWS
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this session, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees should have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
Machine Learning Models with Apache MXNet and AWS Fargate
by Ahmad Khan, Sr. Solutions Architect, AWS
Deep learning has been delivering state-of-the-art results across a growing number of domains and use cases. Correspondingly, deep learning models are being deployed across a growing number of applications and segments. In this session, we will dive deep into serving machine learning models in production, and demonstrate how to efficiently deploy and serve models over serverless infrastructure using the open-source Model Server for Apache MXNet, containers, and AWS Fargate.
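Whatever the serving stack, a SageMaker-compatible inference container exposes two HTTP routes: GET /ping for health checks and POST /invocations for predictions, which is the contract Model Server for Apache MXNet implements. A minimal sketch of that contract, with a stub "model" standing in for a real MXNet forward pass, looks like:

```python
import json

# Minimal sketch of the SageMaker hosting contract: GET /ping (health)
# and POST /invocations (inference). The "model" here is a stub that
# counts the inputs; a real handler would run an MXNet forward pass.

def ping():
    """Health check: a 200 response tells the platform the container is up."""
    return 200, ""

def invocations(body: bytes, content_type: str = "application/json"):
    """Run inference on one request payload and return a JSON response."""
    if content_type != "application/json":
        return 415, json.dumps({"error": "unsupported content type"})
    payload = json.loads(body)
    prediction = {"num_inputs": len(payload["instances"])}   # stub model
    return 200, json.dumps(prediction)

status, _ = ping()
print(status)                                     # 200
status, resp = invocations(b'{"instances": [1, 2, 3]}')
print(status, resp)                               # 200 {"num_inputs": 3}
```

Because the contract is just HTTP, the same container image runs unchanged on a laptop, on ECS, or behind Fargate.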
Enabling Deep Learning in IoT Applications with Apache MXNet
by Pratap Ramamurthy, SDM, and Hagay Lupesko, SDM
Many state-of-the-art deep learning models have hefty compute, storage, and power-consumption requirements, which make them impractical or difficult to use on resource-constrained devices. In this TechTalk, you'll learn why Apache MXNet, an open-source library for deep learning, is IoT-friendly in many ways. In addition, you'll learn how services like Amazon SageMaker, AWS Lambda, AWS Greengrass, and AWS DeepLens make it easy to deploy MXNet models on edge devices.
by Mike Miller, Sr. Manager, PMT
In this session you will see AWS DeepLens in action! You will learn how AWS DeepLens empowers developers of all skill levels to get started with deep learning in less than 10 minutes by providing sample projects with practical, hands-on examples that can start running with a single click. You will get an overview of how to build and deploy computer vision models, such as face detection, using Amazon SageMaker and AWS DeepLens, and learn about some of the great use cases that bring together multiple AWS services to create new-to-the-world, deep-learning-enabled innovations.
Using Amazon SageMaker to Build, Train, and Deploy Your ML Models
by Hagay Lupesko, SDM, AWS
Building Deep Learning Applications with TensorFlow and Amazon SageMaker
by Steve Shirkey, Solutions Architect, ASEAN
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you'll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You'll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You'll also learn how to quickly train and deploy a model through Amazon SageMaker.
Using Amazon SageMaker to Build, Train, and Deploy Your ML Models
by Neel Mitra, Solutions Architect, AWS
by Mahendra Bairagi, AI Specialist Solutions Architect, AWS
As the CTO of a new startup, you have taken on the challenge of improving the EDM music festival experience. At venues with multiple stages, festival-goers are always looking to identify the DJ stage areas with the liveliest atmosphere. This causes them to constantly move around between stages and miss out on having fun. You are looking to use machine learning and IoT technologies to solve this unique problem.
Do you accept the Challenge?
The objective of this task is to help festival-goers quickly identify the DJ stage where the crowd is happiest. You've seen a lot of buzz around computer vision, machine learning, and IoT, and you want to use this technology to detect crowd emotions. From your initial research, there are existing ML models that you can leverage to do face and emotion detection, but there are two ways that the predictions (inference) can be done: in the cloud or on the camera itself. Which one will work best for your needs at the festival? You are going to test both approaches and find out!
In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference in the cloud as well as on the IoT edge. You will learn to use Amazon SageMaker with Intel-powered C5 instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
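Either inference path ends with the same decision step: score each stage's camera frame for happiness and pick the winner. Assuming Rekognition-style DetectFaces output (the FaceDetails/Emotions shape below mirrors that API's responses; the stage names and confidence numbers are invented for illustration), that step might look like:

```python
# Pick the "happiest" stage from Rekognition-style face analysis results.
# The response shape (FaceDetails -> Emotions -> Type/Confidence) follows
# Amazon Rekognition's DetectFaces API; the data itself is fabricated.

def happy_score(face_details):
    """Average HAPPY confidence across all detected faces (0 if none)."""
    scores = [
        emotion["Confidence"]
        for face in face_details
        for emotion in face.get("Emotions", [])
        if emotion["Type"] == "HAPPY"
    ]
    return sum(scores) / len(scores) if scores else 0.0

def happiest_stage(frames_by_stage):
    """frames_by_stage maps stage name -> DetectFaces-style FaceDetails list."""
    return max(frames_by_stage, key=lambda s: happy_score(frames_by_stage[s]))

frames = {
    "main-stage": [
        {"Emotions": [{"Type": "HAPPY", "Confidence": 91.0}]},
        {"Emotions": [{"Type": "HAPPY", "Confidence": 85.0}]},
    ],
    "tent-b": [
        {"Emotions": [{"Type": "CALM", "Confidence": 80.0},
                      {"Type": "HAPPY", "Confidence": 40.0}]},
    ],
}
print(happiest_stage(frames))   # main-stage
```

The same scoring function works whether the FaceDetails came back from a cloud API call or from a model running on the camera.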
Automate for Efficiency with Amazon Transcribe and Amazon Translate
by Niranjan Hira, Solutions Architect, AWS
Teaching a computer how to understand human language is one of the most challenging problems in computer science. However, significant progress has been made in automatic speech recognition (ASR) and machine translation (MT) to create highly accurate and fluent transcriptions and translations. Amazon Transcribe is an ASR service that makes it easy for developers to add speech-to-text capability to their applications, and Amazon Translate is an MT service that delivers fast, high-quality, and affordable language translation. In this session, you'll learn how to weave machine translation and transcription into your workflows to increase the efficiency and reach of your operations.
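A transcribe-then-translate workflow reduces to two API calls. As a sketch, the parameter dicts below follow the Transcribe StartTranscriptionJob and Translate TranslateText request shapes (the bucket, job name, and text are placeholders), and in practice each would be passed to the matching boto3 client call:

```python
# Parameter dicts for a transcribe-then-translate workflow.
# Field names follow the Amazon Transcribe StartTranscriptionJob and
# Amazon Translate TranslateText APIs; bucket and job names are made up.

def transcription_job_params(job_name, media_uri, language="en-US",
                             media_format="mp3"):
    """Request parameters for starting an asynchronous transcription job."""
    return {
        "TranscriptionJobName": job_name,
        "LanguageCode": language,
        "MediaFormat": media_format,
        "Media": {"MediaFileUri": media_uri},
    }

def translate_params(text, source="en", target="es"):
    """Request parameters for a synchronous text translation."""
    return {
        "Text": text,
        "SourceLanguageCode": source,
        "TargetLanguageCode": target,
    }

job = transcription_job_params(
    "support-call-0001",
    "s3://example-bucket/calls/0001.mp3",
)
print(job["Media"]["MediaFileUri"])
print(translate_params("Hello, how can I help?")["TargetLanguageCode"])  # es
```

Transcription is asynchronous (you poll or get notified when the job finishes), so the translation call consumes the finished transcript rather than running inline.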
by Mike Gillespie, Solutions Architect, AWS
Natural language holds a wealth of information, like user sentiment and conversational intent. In this session, we'll demonstrate the capabilities of Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. We'll show you how to build a VOC (Voice of the Customer) application and integrate it with other AWS services, including AWS Lambda, Amazon S3, Amazon Athena, Amazon QuickSight, and Amazon Translate. We'll also show you additional methods for NLP available through Amazon SageMaker.
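The heart of a VOC dashboard is aggregating per-document sentiment into a summary. Assuming Comprehend-style DetectSentiment responses (the Sentiment label plus SentimentScore breakdown below mirrors that API; the sample responses are fabricated), the aggregation step might be:

```python
from collections import Counter

# Aggregate Comprehend-style sentiment results for a Voice-of-the-Customer
# view. The response shape (Sentiment plus a SentimentScore breakdown)
# follows Amazon Comprehend's DetectSentiment API; the samples are fake.

def summarize_sentiment(responses):
    """Tally sentiment labels and average the Positive score."""
    counts = Counter(r["Sentiment"] for r in responses)
    avg_positive = sum(r["SentimentScore"]["Positive"]
                       for r in responses) / len(responses)
    return counts, avg_positive

responses = [
    {"Sentiment": "POSITIVE",
     "SentimentScore": {"Positive": 0.95, "Negative": 0.01,
                        "Neutral": 0.03, "Mixed": 0.01}},
    {"Sentiment": "NEGATIVE",
     "SentimentScore": {"Positive": 0.05, "Negative": 0.90,
                        "Neutral": 0.04, "Mixed": 0.01}},
    {"Sentiment": "POSITIVE",
     "SentimentScore": {"Positive": 0.80, "Negative": 0.05,
                        "Neutral": 0.10, "Mixed": 0.05}},
]
counts, avg_pos = summarize_sentiment(responses)
print(counts["POSITIVE"], round(avg_pos, 2))   # 2 0.6
```

In the full pipeline the summary rows would land in S3 for Athena to query and QuickSight to chart.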
Build Deep Learning Applications with TensorFlow and Amazon SageMaker
Working with Amazon SageMaker Algorithms for Faster Model Training
Using Amazon SageMaker to Build, Train, and Deploy Your ML Models
Using Amazon SageMaker to Build, Train, & Deploy Your ML Models
Machine Learning Workshops at the San Francisco Loft
Build, Train, and Deploy ML Models Using SageMaker
Level: 200-300
Speaker: Martin Schade - R&D Engineer, AWS Solutions Architecture
This session will build on the previous one, focusing on performance and cost optimization. First, we'll show you how to automatically tune hyperparameters and quickly converge to optimal models. Second, you'll learn how to use SageMaker Neo, a new service that optimizes models for the underlying hardware architecture. Third, we'll show you how Elastic Inference lets you attach GPU acceleration to EC2 and SageMaker instances at a fraction of the cost of a full-fledged GPU instance. Finally, we'll share additional cost optimization tips for SageMaker.
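Automatic model tuning searches the hyperparameter space for you; SageMaker's tuner uses Bayesian optimization over real training jobs, but the core idea can be shown with a deliberately simple random search over a toy objective (the objective function and parameter ranges below are invented for illustration):

```python
import random

# Toy illustration of automatic hyperparameter tuning: random search over
# a made-up objective. SageMaker's tuner is smarter (Bayesian optimization
# over real training jobs), but the workflow is the same: define a search
# space, launch trials, keep the best configuration.

def objective(lr, depth):
    """Stand-in for validation accuracy; peaks near lr=0.1, depth=6."""
    return 1.0 - (lr - 0.1) ** 2 * 50 - (depth - 6) ** 2 * 0.01

def random_search(num_trials=50, seed=0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(num_trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.3),   # continuous range
            "max_depth": rng.randint(2, 10),            # integer range
        }
        score = objective(params["learning_rate"], params["max_depth"])
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

score, params = random_search()
print(round(score, 3), params)
```

With the real tuner, each "trial" is a full training job and the "objective" is a metric the job emits, so smarter search directly translates into fewer paid instance-hours.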
Level: 200-300
Speaker: Randall Hunt - Sr. Technical Evangelist, AWS
Workshop: Build a Virtual Assistant with Amazon Polly and Amazon Lex - "Pollexy"
by Niranjan Hira, Solutions Architect, AWS
Technology advances have enabled people with disabilities to communicate more meaningfully and participate more fully in their daily lives. In this workshop, we will show how voice technologies can empower this population by building a verbal assistant using Pollexy (Amazon Polly + Amazon Lex) with a Raspberry Pi. This verbal assistant lets caretakers schedule audio prompts and messages, both on a recurring schedule and on-demand.
AWS Machine Learning Week SF: Build, Train & Deploy ML Models Using SageMaker
AWS Machine Learning Week at the San Francisco Loft: Build, Train, and Deploy ML Models Using SageMaker
Use Amazon Rekognition to Build a Facial Recognition System
by Kashif Imran
Amazon Rekognition makes it easy to extract meaningful metadata from visual content. In this workshop, you will work in teams to build a simple system to help track missing persons. You'll develop a solution that leverages Amazon Rekognition and other AWS services to analyze images from various sources (e.g., social media) and provide authorities with timely reports and alerts on new leads for missing individuals. The solution will entail a repeatable and automated process that follows best practices for architecting in the cloud, such as designing for high availability and scalability.
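The alerting step in such a system boils down to filtering face-search results to the matches confident enough to act on. Assuming Rekognition-style SearchFacesByImage output (the FaceMatches/Similarity shape below mirrors that API; the IDs and scores are invented), it might look like:

```python
# Filter Rekognition-style face-search matches down to actionable leads.
# The response shape (FaceMatches -> Similarity/Face) follows Amazon
# Rekognition's SearchFacesByImage API; the IDs and scores are invented.

def strong_matches(response, min_similarity=90.0):
    """Keep only matches confident enough to report to authorities."""
    return [
        (m["Face"]["ExternalImageId"], m["Similarity"])
        for m in response.get("FaceMatches", [])
        if m["Similarity"] >= min_similarity
    ]

response = {
    "FaceMatches": [
        {"Similarity": 97.4,
         "Face": {"FaceId": "a1", "ExternalImageId": "case-0042"}},
        {"Similarity": 71.2,
         "Face": {"FaceId": "b2", "ExternalImageId": "case-0017"}},
    ]
}
print(strong_matches(response))   # [('case-0042', 97.4)]
```

Tuning `min_similarity` trades missed leads against false alerts, which is a policy decision for the authorities rather than a modeling one.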
Build Deep Learning Applications with TensorFlow & SageMaker
Build Deep Learning Applications with TensorFlow and SageMaker
Level: 200-300
Speakers:
Martin Schade - R&D Engineer, AWS Solutions Architecture
Steve Sedlmeyer - Sr. Solutions Architect, World Wide Public Sector, AWS
Workshop: Build Deep Learning Applications with TensorFlow and SageMaker
by Ahmad Khan, Sr. Solutions Architect, AWS
Using Amazon SageMaker to Build, Train, and Deploy Your ML Models
Level: 200-300
Speaker: Roy Ben-Alta - Principal BDM, Kinesis, AWS
Using Amazon SageMaker to build, train, and deploy your ML ModelsAmazon Web Services
by Neel Mitra, Solutions Architect, AWS
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
by Mahendra Bairagi, AI Specialist Solutions Architect, AWS
As the CTO of a new startup, you have taken up a challenge of improving the EDM music festival experience. At venues with multiple stages, festival-goers are always looking to identify DJ stage areas with the liveliest atmosphere. This causes them to constantly move around between different stages and miss out on having fun. You are looking to use Machine Learning and IoT technologies to solve this unique problem.
Do you accept the Challenge?
The objective of this task is to help the festival-goers quickly identify the DJ stage where crowd is the happiest. You've seen a lot of buzz around computer vision, machine learning, and IoT and want to use this technology to detect crowd emotions. From your initial research there are existing ML models that you can leverage to do face and emotion detection, but there are two ways that the predictions (inference) can be done; on the cloud and on the camera itself, but which one will work the best for your needs at the festival? You are going to test both approaches and find out!
In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference on the cloud as well as on the IoT Edge. You will learn to use Amazon SageMaker with Intel C5 Instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
Automate for Efficiency with Amazon Transcribe and Amazon TranslateAmazon Web Services
by Niranjan Hira, Solutions Architect, AWS
Teaching a computer how to understand human language is one of the most challenging problems in computer science. However, significant progress has been made in automatic speech recognition (ASR) and machine translation (MT) to create highly accurate and fluent transcriptions and translations. Amazon Transcribe is an ASR service that makes it easy for developers to add speech to text capability to their applications, and Amazon Translate is a MT service that delivers fast, high-quality, and affordable language translation. In this session, you’ll learn how to weave machine translation and transcription into your workflows, to increase the efficiency and reach of your operations.
by Mike Gillespie, Solutions Architect, AWS
Natural language holds a wealth of information like user sentiment and conversational intent. In this session, we'll demonstrate the capabilities of Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. We'll show you how to build a VOC (Voice of the Customer) application and integrate it with other AWS services including AWS Lambda, Amazon S3, Amazon Athena, Amazon QuickSight, and Amazon Translate. We’ll also show you additional methods for NLP available through Amazon Sagemaker.
Build Deep Learning Applications with TensorFlow and Amazon SageMakerAmazon Web Services
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon
In this session you will get to see AWS DeepLens in action! You will learn how AWS DeepLens empowers developers of all skill levels to get started with deep learning in less than 10 minutes by providing sample projects with practical, hands-on examples which can start running with a single click. In this session you will get an overview of how to build and deploy computer vision models, such as face detection using Amazon SageMaker and AWS DeepLens and learn about some of the great use cases that bring together multiple AWS services to create new to the world deep-learning enabled innovation.
Working with Amazon SageMaker Algorithms for Faster Model TrainingAmazon Web Services
Amazon SageMaker is a fully-managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models, at any scale. Amazon SageMaker provides high-performance, machine learning algorithms optimized for speed, scale, and accuracy, to perform training on petabyte-scale data sets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker Notebook, which will be made available to attendees.
Using Amazon SageMaker to Build, Train, and Deploy Your ML ModelsAmazon Web Services
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
Using Amazon SageMaker to build, train, & deploy your ML ModelsAmazon Web Services
Machine Learning Workshops at the San Francisco Loft
Build, Train, and Deploy ML Models Using SageMaker
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
Level: 200-300
Speaker: Martin Schade - R&D Engineer, AWS Solutions Architecture
This session will build on the previous one, focusing on performance and cost optimization. First, we'll show you how to automatically tune hyper-parameters, and quickly converge to optimal models. Second, you'll learn how to use SageMaker Neo, a new service that optimizes models for the underlying hardware architecture. Third, we'll show you how Elastic Inference lets you attach GPU acceleration to EC2 and SageMaker instances at the fraction of the cost of a full-fledged GPU instance. Finally, we'll share additional cost optimization tips for SageMaker.
Level: 200-300
Speaker: Randall Hunt - Sr. Technical Evangelist, AWS
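The automatic model tuning idea from this session can be sketched in miniature: sample hyperparameter combinations, score each candidate, and keep the best. This is a toy, pure-Python illustration only — `validation_error` stands in for a real train-and-validate run, and the hyperparameter names and ranges are invented for the example.

```python
import random

# Toy illustration of automated model tuning: sample hyperparameter
# combinations at random, score each candidate, and keep the best.
def validation_error(learning_rate, depth):
    # Hypothetical error surface with its optimum at lr=0.1, depth=6.
    return (learning_rate - 0.1) ** 2 + 0.01 * (depth - 6) ** 2

def random_search(n_trials=200, seed=42):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = rng.uniform(0.001, 1.0)
        depth = rng.randint(1, 12)
        err = validation_error(lr, depth)
        if best is None or err < best[0]:
            best = (err, lr, depth)
    return best

best_err, best_lr, best_depth = random_search()
print(f"best trial: lr={best_lr:.3f}, depth={best_depth}, err={best_err:.4f}")
```

SageMaker's managed tuning searches the space more cleverly than pure random sampling, but the contract is the same: many training jobs with varying hyperparameters, ranked by a validation metric.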
Workshop: Build a Virtual Assistant with Amazon Polly and Amazon Lex - "Pollexy"Amazon Web Services
by Niranjan Hira, Solutions Architect, AWS
Technology advances have enabled people with disabilities to communicate more meaningfully and participate more fully in their daily lives. In this workshop, we will show how voice technologies can empower this population by building a verbal assistant using Pollexy (Amazon Polly + Amazon Lex) with a Raspberry Pi. This verbal assistant lets caretakers schedule audio prompts and messages, both on a recurring schedule and on-demand.
AWS Machine Learning Week SF: Build, Train & Deploy ML Models Using SageMakerAmazon Web Services
AWS Machine Learning Week at the San Francisco Loft: Build, Train, and Deploy ML Models Using SageMaker
Use Amazon Rekognition to Build a Facial Recognition SystemAmazon Web Services
by Kashif Imran
Amazon Rekognition makes it easy to extract meaningful metadata from visual content. In this workshop, you will work in teams to build a simple system to help track missing persons. You'll develop a solution that leverages Amazon Rekognition and other AWS services to analyze images from various sources (e.g., social media) and provide authorities with timely reports and alerts on new leads for missing individuals. The solution will entail a repeatable and automated process that follows best practices for architecting in the cloud, such as designing for high availability and scalability.
Build Deep Learning Applications with TensorFlow & SageMakerAmazon Web Services
Build Deep Learning Applications with TensorFlow and SageMaker
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon SageMaker.
Level: 200-300
Speakers:
Martin Schade - R&D Engineer, AWS Solutions Architecture
Steve Sedlmeyer - Sr. Solutions Architect, World Wide Public Sector, AWS
Workshop: Build Deep Learning Applications with TensorFlow and SageMakerAmazon Web Services
by Ahmad Khan, Sr. Solutions Architect, AWS
Using Amazon SageMaker to build, train, and deploy your ML ModelsAmazon Web Services
Level: 200-300
Speaker: Roy Ben-Alta - Principal BDM, Kinesis, AWS
Build, Train, & Deploy ML Models Using SageMaker: Machine Learning Week San F...Amazon Web Services
Machine Learning Week at the San Francisco Loft: Build, Train, and Deploy ML Models Using SageMaker
Level: 200-300
Speaker: Amit Sharma - Principal Solutions Architect, AWS
Build, Train and Deploy ML Models using Amazon SageMakerHagay Lupesko
(presented at AWS ML Day in San Francisco, June 2018)
Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. This presentation goes over key use cases and features of SageMaker, including a hands-on demo of using SageMaker and MXNet to build, train and deploy a neural network for sentiment analysis.
Learning Objectives:
- Learn how Amazon SageMaker can be used for exploratory data analysis before training
- Learn how Amazon SageMaker provides managed distributed training with flexibility
- Learn how easy it is to deploy your models for hosting within Amazon SageMaker
Building Machine Learning models with Apache Spark and Amazon SageMaker | AWS...Amazon Web Services
Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. In this session, we'll show you how to combine it with Apache Spark to build an efficient machine learning pipeline.
Building a Recommender System Using Amazon SageMaker's Factorization Machine ...Amazon Web Services
Machine Learning Week at the San Francisco Loft: Building a Recommender System Using Amazon SageMaker's Factorization Machine Algorithm
Factorization Machines are a powerful algorithm in the click prediction and recommendation space. Amazon SageMaker has a nearly infinitely scalable implementation that we'll show you how to use to build a recommender of your own.
Speaker: David Arpin - AI Platform Selections Leader, AI Platforms
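The factorization machine model family behind this session scores an example as a bias plus a linear term plus pairwise feature interactions factorized through low-rank vectors. Below is a minimal sketch of that scoring function using the standard O(nk) identity for the pairwise term; the weights are illustrative, not trained, and this is not SageMaker's implementation.

```python
import numpy as np

# Minimal factorization machine scoring sketch: bias + linear term +
# factorized pairwise interactions, computed with the O(nk) identity
# 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ].
def fm_predict(x, w0, w, V):
    """x: features (n,), w0: bias, w: linear weights (n,), V: factors (n, k)."""
    linear = w0 + w @ x
    s = V.T @ x                    # (k,) sums of v_if * x_i
    s_sq = (V ** 2).T @ (x ** 2)   # (k,) sums of v_if^2 * x_i^2
    pairwise = 0.5 * np.sum(s ** 2 - s_sq)
    return linear + pairwise

# Made-up weights for a 3-feature, rank-2 model:
x = np.array([1.0, 0.0, 2.0])
w = np.array([0.1, 0.2, 0.3])
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(fm_predict(x, 0.5, w, V))
```

In a recommender, `x` is typically a sparse one-hot encoding of (user, item), and the learned factor vectors in `V` play the role of user and item embeddings.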
Build, Train, and Deploy ML Models with Amazon SageMaker (AIM410-R2) - AWS re...Amazon Web Services
Come and help build the most accurate text classification model possible. A fully managed machine learning (ML) platform, Amazon SageMaker enables developers and data scientists to build, train, and deploy ML models using built-in or custom algorithms. In this workshop, you learn how to leverage Keras/TensorFlow deep learning frameworks to build a text classification solution using custom algorithms on Amazon SageMaker. You package custom training code in a Docker container, test it locally, and then use Amazon SageMaker to train a deep learning model. You then try to iteratively improve the model to achieve a higher level of accuracy. Finally, you deploy the model in production so different applications within the company can start leveraging this ML classification service. Please note that to actively participate in this workshop, you need an active AWS account with admin-level IAM permissions to Amazon SageMaker, Amazon Elastic Container Registry (Amazon ECR), and Amazon S3.
Data Summer Conf 2018, “Build, train, and deploy machine learning models at s...Provectus
Machine learning often feels a lot harder than it should be to most developers because the process to build and train models, and then deploy them into production is too complicated and too slow. Amazon SageMaker is a fully-managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Apache MXNet and TensorFlow are pre-installed, and Amazon SageMaker offers a range of built-in, high-performance machine learning algorithms. If you want to train with an alternative framework or algorithm, you can bring your own in a Docker container.
AWS Summit Singapore - Artificial Intelligence to Delight Your CustomersAmazon Web Services
Andrew Watts-Curnow, Senior Cloud Architect – Professional Services, APAC, AWS
Learn how advances in AI are enabling improvements in customer experience. This is a deep dive using machine learning frameworks for people who are familiar with building their own models. In this session, we will detail a facial recognition solution that can detect known customers and alert customer service staff.
End to End Model Development to Deployment using SageMakerAmazon Web Services
End to End Model Development to Deployment Using SageMaker
In this session we will develop an image classification model (a convolutional neural network, or CNN). We will start with some theory about CNNs and how they learn from images, then proceed to a hands-on lab. We will use Amazon SageMaker to develop the model in Python, train it, and finally create an endpoint and run inference against it. We will use a custom Conda kernel for this exercise and leverage SageMaker features such as Lifecycle Configurations to prepare the notebook before launch. Finally, we will deploy the model to production, run inference against it, and monitor endpoint performance metrics such as the endpoint's CPU/memory utilization and model inference metrics.
Level: 200-300
by Yash Pant, Enterprise Solutions Architect AWS
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this workshop, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees should have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
Machine learning can feel harder than it is because the process of developing, training, and deploying models to production is too complicated and slow. Amazon SageMaker is a fully managed service that enables developers and data scientists to design, implement, and deploy machine learning models at any scale. Amazon SageMaker offers a choice of high-performance machine learning algorithms and pre-configured frameworks such as Apache MXNet, TensorFlow, PyTorch, and Chainer; you can also use alternative frameworks or algorithms through Docker containers. In this session we will take a deep dive into using Amazon SageMaker, including some practical examples.
AWS re:Invent is an annual global conference of the Amazon Web Services community held in Las Vegas. In 2017, we held 1000+ breakout sessions and attracted over 40,000 attendees. The event offers expanded opportunities to learn about the latest AWS releases, use cases and business benefits, not to mention diving deep into hot topics and meeting with our subject matter experts.
Missed it? Don’t worry, we are bringing AWS re:Invent to Hong Kong on Jan 18, 2018. Packed in a day, AWS re:Invent 2017 Recap Hong Kong will showcase new releases announced at re:Invent 2017 on Serverless & Container, DevOps & Mobile, Artificial Intelligence & Machine Learning and more. Local customers will also be invited to share their re:Invent experience and success stories with AWS.
Discover the latest services and features from Amazon Web Services and learn how to integrate them into your applications
Integrating Deep Learning into your Enterprise
In this workshop we return to one of the most popular machine learning frameworks: scikit-learn. We use scikit-learn's decision tree classifier to train the model. Decision trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. We follow the whole machine learning pipeline, from algorithm selection through training to the deployment of an endpoint. We work with the widely available Iris dataset, and the endpoint predicts which species a sample belongs to from its sepal width and length and petal width and length. Through this workshop you will learn the internal details of how containers are used to train and deploy machine learning workloads.
Level: 300-400
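The core modeling step of this workshop can be reproduced locally in a few lines of scikit-learn. This sketch assumes only that scikit-learn is installed and leaves out the container and endpoint pieces:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Fit a decision tree on Iris and predict the species from sepal/petal
# width and length, holding out 30% of the data for evaluation.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

In the workshop, this same training code runs inside a Docker container driven by SageMaker, and the fitted tree is served from an endpoint instead of being scored in-process.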
Similar to Kate Werling - Using Amazon SageMaker to build, train, and deploy your ML Models (200)
How to build Forecasting services using ML and deep learn...Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that produces an accurate forecast from the type of data analyzed.
Big Data for Startups: how to create Big Data applications in Server...Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break these limits.
Let's see, then, how it is possible to develop Big Data applications rapidly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. In this period we learned how changing our approach to application development allowed us to greatly increase agility and release speed and, ultimately, enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances Amazon Web Services
The use of containers keeps growing.
If properly designed, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared with On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to an Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on machine learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom machine learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; until now they have often involved manual activities, occasionally leading to application downtime that interrupted users' operations. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a configuration management tool that aims to automate and simplify the management and deployment of EC2 instances through Chef and Puppet workloads.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows WorkloadsAmazon Web Services
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We will cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Create your first serverless ledger-based app with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will discover how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help address these use cases by creating modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths to debunkAmazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing considerable gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they also dive into the architecture and demonstrate how to fully exploit the potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and the related lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
4. Amazon SageMaker
A fully-managed platform that provides the quickest and easiest way for data scientists and developers to get ML models from idea to production.
5. Amazon SageMaker components: Building, Training, Hosting
- Amazon's fast, scalable algorithms
- Distributed TensorFlow, Apache MXNet, Chainer, PyTorch
- Bring your own algorithm
- Hyperparameter Tuning
7. Building
Use SageMaker's hosted Notebook Instances... or Apache Spark through EMR and the SageMaker Spark SDK... or the Console for a point-and-click experience... or your own device (EC2, laptop, etc.).
9. Training
- Zero setup
- Streaming datasets + distributed compute
- Docker / ECS
- Deploy trained models locally or to Amazon SageMaker, AWS Greengrass, AWS DeepLens
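The "streaming datasets" point is the heart of the infinitely scalable algorithms this deck describes: see each record once and keep only the model state in memory. A toy pure-Python illustration, fitting y ≈ w·x with single-pass SGD over a synthetic stream (the data generator and learning rate are invented for the example):

```python
import random

# Streaming-style training: consume each record exactly once, keep only
# the model state (here, a single weight) in memory. The true weight of
# the synthetic stream is 3.0.
def stream(n, seed=0):
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        yield x, 3.0 * x + rng.gauss(0.0, 0.01)

w = 0.0
learning_rate = 0.1
for x, y in stream(5000):
    grad = 2.0 * (w * x - y) * x  # gradient of (w*x - y)^2 for this record
    w -= learning_rate * grad

print(f"learned weight: {w:.3f}")
```

Because memory use is constant regardless of how many records flow past, the same pattern scales from this toy stream to the petabyte-scale datasets the abstract mentions.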
13. Built-in algorithms
- XGBoost, FM, Linear, and Forecasting for supervised learning
- K-means, PCA, and Word2Vec for clustering and pre-processing
- Image classification with convolutional neural networks
- LDA and NTM for topic modeling, seq2seq for translation
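One of the built-in algorithms, k-means, alternates two simple steps: assign each point to its nearest center, then move each center to the mean of its points. A deliberately tiny 1-D sketch of that loop (not SageMaker's web-scale implementation):

```python
# Minimal 1-D k-means: alternate assigning points to the nearest center
# and moving each center to the mean of its assigned points.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:  # assignment step
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # update step: empty clusters keep their old center
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(kmeans_1d(data, centers=[0.0, 5.0]))
```

SageMaker's version applies the same assign/update idea with streaming, distributed updates so it can cluster datasets far too large for a single pass over an in-memory list.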
15. TensorFlow and Apache MXNet Docker Containers
Sample your data... explore and refine models in a single Notebook instance... use the same code to train on the full dataset in a cluster of instances... deploy to production.
17. Bring your own algorithm
Pick your preferred framework... add algorithm code to a Docker container... and publish to Amazon ECS.
19. Hyperparameter Tuning (Automated Model Tuning)
Run a large set of training jobs with varying hyperparameters... and search the hyperparameter space for improved accuracy.
20. Zero setup for data exploration
- No servers to manage
- Resizable as you need
- Common tools pre-installed
- Easy access to your data sources
21. Modular architecture
[Architecture diagram. Labels: past data, training algorithm, model artifacts, inference code, client application, model, data, inference, ground truth; all within Amazon SageMaker.]
22. Pay as you go and inexpensive
- ML compute billed by the second, starting at $0.0464/hr
- ML storage billed by the second, at $0.14 per GB-month
- Data processed in notebooks and hosting at $0.016 per GB
- Free trial to get started quickly
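The per-second billing above is easy to make concrete. Using the slide's listed starting rate of $0.0464 per instance-hour:

```python
# Worked example of per-second compute billing at the slide's starting
# rate of $0.0464 per instance-hour: pay only for the seconds a job ran.
PRICE_PER_HOUR = 0.0464

def training_cost(seconds, price_per_hour=PRICE_PER_HOUR):
    return seconds / 3600 * price_per_hour

# A 10-minute (600-second) training job at the starting rate:
cost = training_cost(600)
print(f"${cost:.4f}")  # prints $0.0077
```

Per-second granularity is what makes short, iterative experiments cheap: a ten-minute job costs cents rather than a full billed hour.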