Using Amazon SageMaker to Build, Train, and Deploy Your ML Models (Amazon Web Services)
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Build Deep Learning Applications with TensorFlow and Amazon SageMaker (Amazon Web Services)
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon SageMaker.
In this session you will see AWS DeepLens in action! You will learn how AWS DeepLens empowers developers of all skill levels to get started with deep learning in less than 10 minutes by providing sample projects with practical, hands-on examples that can start running with a single click. You will also get an overview of how to build and deploy computer vision models, such as face detection, using Amazon SageMaker and AWS DeepLens, and learn about some of the great use cases that bring together multiple AWS services to create new-to-the-world, deep-learning-enabled innovations.
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this workshop, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees must have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
Using Amazon SageMaker to build, train, and deploy your ML Models (Amazon Web Services)
by Gitansh Chadha, Solutions Architect AWS
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Enabling Deep Learning in IoT Applications with Apache MXNet (Amazon Web Services)
by Pratap Ramamurthy, SDM, and Hagay Lupesko, SDM
Many state-of-the-art deep learning models have hefty compute, storage, and power-consumption requirements, which make them impractical or difficult to use on resource-constrained devices. In this TechTalk, you'll learn why Apache MXNet, an open-source library for deep learning, is IoT-friendly in many ways. In addition, you'll learn how services like Amazon SageMaker, AWS Lambda, AWS Greengrass, and AWS DeepLens make it easy to deploy MXNet models on edge devices.
by Yash Pant, Enterprise Solutions Architect AWS
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this workshop, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees must have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
by Mike Miller, Sr. Manager, PMT
Building Deep Learning Applications with TensorFlow and Amazon SageMaker (Amazon Web Services)
by Steve Shirkey, Solutions Architect ASEAN
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon SageMaker.
Working with Amazon SageMaker Algorithms for Faster Model Training (Amazon Web Services)
by Amit Sharma, Principal Solutions Architect AWS
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. Amazon SageMaker provides high-performance machine learning algorithms optimized for speed, scale, and accuracy to perform training on petabyte-scale datasets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker notebook, which will be made available to attendees.
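The batch-versus-streaming distinction in the abstract above can be sketched in a few lines of plain Python (illustrative only, not SageMaker code): a batch algorithm needs the whole dataset in memory at once, while a streaming algorithm keeps constant-size state and folds in one record at a time.

```python
# Batch vs. streaming computation of the same statistic (the mean).

def batch_mean(data):
    # Batch: requires all observations to be available at once.
    return sum(data) / len(data)

class StreamingMean:
    # Streaming: constant memory, a single pass, each record seen once.
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental update
        return self.mean

stream = StreamingMean()
for x in [2.0, 4.0, 6.0, 8.0]:
    stream.update(x)

print(batch_mean([2.0, 4.0, 6.0, 8.0]))  # 5.0
print(stream.mean)                       # 5.0, without ever holding the list
```

The same idea scales to real estimators: a streaming algorithm can train on petabyte-scale data precisely because its state does not grow with the number of records.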
Working with Amazon SageMaker Algorithms for Faster Model Training (Amazon Web Services)
Level: 300-400
Speaker: Zlatan Dzinic - Partner Solutions Architect, AWS
Using Amazon SageMaker to build, train, & deploy your ML Models (Amazon Web Services)
Machine Learning Workshops at the San Francisco Loft
Build, Train, and Deploy ML Models Using SageMaker
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Level: 200-300
Speaker: Martin Schade - R&D Engineer, AWS Solutions Architecture
Workshop: Build a Virtual Assistant with Amazon Polly and Amazon Lex - "Pollexy" (Amazon Web Services)
by Niranjan Hira, Solutions Architect, AWS
Technology advances have enabled people with disabilities to communicate more meaningfully and participate more fully in their daily lives. In this workshop, we will show how voice technologies can empower this population by building a verbal assistant using Pollexy (Amazon Polly + Amazon Lex) with a Raspberry Pi. This verbal assistant lets caretakers schedule audio prompts and messages, both on a recurring schedule and on demand.
Recommendation is one of the most popular applications in machine learning (ML). In this workshop, we’ll show you how to build a movie recommendation model based on factorization machines — one of the built-in algorithms of Amazon SageMaker — and the popular MovieLens dataset.
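As a rough illustration of what the factorization machine in that workshop computes, a second-order FM scores a feature vector as a global bias plus a linear term plus pairwise feature interactions factorized through low-dimensional embeddings. The sketch below is pure Python with made-up parameter values; a real model learns w0, w, and V from the MovieLens ratings.

```python
# Second-order factorization machine scoring (illustrative parameters).

def fm_predict(x, w0, w, V):
    """Score a feature vector x (e.g., one-hot user id + one-hot movie id).

    w0 : global bias
    w  : linear weights, one per feature
    V  : factor matrix, one k-dimensional embedding row per feature
    """
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    k = len(V[0])
    pairwise = 0.0
    # O(k*n) identity: sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i (v_if x_i)^2 ]
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        pairwise += 0.5 * (s * s - s_sq)
    return linear + pairwise

# Toy setup: 2 users + 2 movies, one-hot encoded (4 features, k=2 factors).
x = [1, 0, 0, 1]                  # user 0 rates movie 1
w0, w = 3.0, [0.2, 0.0, 0.0, -0.1]
V = [[0.5, 0.1], [0.0, 0.0], [0.0, 0.0], [0.4, 0.3]]
print(fm_predict(x, w0, w, V))    # bias + linear + user/movie interaction
```

The factorized pairwise term is what lets the model generalize to user/movie pairs it has never seen rated together.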
Use Amazon Rekognition to Build a Facial Recognition System (Amazon Web Services)
by Kashif Imran
Amazon Rekognition makes it easy to extract meaningful metadata from visual content. In this workshop, you will work in teams to build a simple system to help track missing persons. You'll develop a solution that leverages Amazon Rekognition and other AWS services to analyze images from various sources (e.g., social media) and provide authorities with timely reports and alerts on new leads for missing individuals. The solution will entail a repeatable and automated process that follows best practices for architecting in the cloud, such as designing for high availability and scalability.
End to End Model Development to Deployment using SageMaker (Amazon Web Services)
In this session we will develop an image classification model (a convolutional neural network, or CNN). We will start with some theory about CNNs, explore how they learn from an image, and then proceed to a hands-on lab. We will use Amazon SageMaker to develop the model in Python, train it, and finally create an endpoint and run inference against it. We will use a custom Conda kernel for this exercise and look at leveraging SageMaker features like lifecycle configurations to help us prepare the notebook before launch. Finally, we will deploy the model to production, run inference against it, and monitor various endpoint performance parameters, such as the endpoint's CPU/memory utilization and model inference performance metrics.
Level: 200-300
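For readers new to CNNs, the core operation the session above builds on is convolution: sliding a small filter over the image. A minimal pure-Python sketch (not the workshop notebook; the edge-detecting kernel here is hand-picked, whereas a CNN learns its kernels from data):

```python
# Minimal 'valid' 2D convolution (cross-correlation) of an image with a kernel.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            # Multiply the kernel against the window anchored at (r, c).
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 image with a vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1]] * 4
# Sobel-like vertical edge detector.
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
print(conv2d(image, kernel))  # [[-3.0, -3.0], [-3.0, -3.0]]
```

Every non-zero response marks where the filter's pattern (here, a vertical edge) appears in the image; stacking many learned filters is what gives a CNN its feature hierarchy.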
Automate for Efficiency with Amazon Transcribe and Amazon Translate (Amazon Web Services)
by Niranjan Hira, Solutions Architect, AWS
Teaching a computer how to understand human language is one of the most challenging problems in computer science. However, significant progress has been made in automatic speech recognition (ASR) and machine translation (MT) to create highly accurate and fluent transcriptions and translations. Amazon Transcribe is an ASR service that makes it easy for developers to add speech-to-text capability to their applications, and Amazon Translate is an MT service that delivers fast, high-quality, and affordable language translation. In this session, you'll learn how to weave machine translation and transcription into your workflows to increase the efficiency and reach of your operations.
Integrating Deep Learning into your Enterprise
In this workshop we return to one of the most popular machine learning frameworks: scikit-learn. We use scikit-learn's decision tree classifier to train the model. Decision trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. We follow the whole machine learning pipeline, from algorithm selection through training to the final deployment of an endpoint. We will work with the widely available Iris dataset, and the endpoint will predict which species a sample belongs to from its sepal width and length and petal width and length. Through this workshop you will learn the internal details of how we use containers to train and deploy our machine learning workloads.
Level: 300-400
SageMaker Algorithms: Infinitely Scalable Machine Learning (Amazon Web Services)
by Nick Brandaleone, Solutions Architect, AWS
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. Amazon SageMaker provides high-performance machine learning algorithms optimized for speed, scale, and accuracy to perform training on petabyte-scale datasets. This session will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker notebook, which will be made available to attendees.
Kate Werling - Using Amazon SageMaker to build, train, and deploy your ML Mod... (Amazon Web Services)
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
AI & Machine Learning at AWS - An Introduction (Daniel Zivkovic)
Slides from my "Introduction to AI & ML for AWS Pros" Lunch & Learn presentation. The idea was to (1) bridge the gap between Data Scientists and today's Cloud professionals; (2) spur the imagination of AWS Pros about ML possibilities; and (3) explain the importance of SageMaker - because it's not just another tool in a Data Scientist's toolbox, but an amazing end-to-end Machine Learning platform.
Supercharge your Machine Learning Solutions with Amazon SageMaker (Amazon Web Services)
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems. You'll also hear how and why Intuit is using Amazon SageMaker on AWS for real-time fraud detection.
AI & Machine Learning Web Day | Introduction to Amazon SageMaker, a workbench... (AWS Germany)
Amazon SageMaker is a fully automated tool that helps developers and data scientists quickly and easily build, train, and scalably bring machine learning (ML) models into production. Amazon SageMaker is designed around typical ML use cases and workflows, and helps overcome common barriers in developing and operating ML applications. In this talk we give an overview of a typical ML development cycle, of using Amazon SageMaker for ML in practice, of the ML algorithms available in it, and of using SageMaker for deep learning and custom ML algorithms.
Moderator: Constantin Gonzalez, Principal Solutions Architect, AWS
by Roy Ben-Alta, Business Development Manager, AWS
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this session, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees must have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
Artificial Intelligence (Machine Learning) on AWS: How to StartVladimir Simek
Amazon has been investing deeply in artificial intelligence (AI) for over 20 years. Machine learning (ML) algorithms drive many of its internal systems. It is also core to the capabilities Amazon's customers experience – from path optimization in the fulfillment centers and Amazon.com's recommendation engine, to Echo powered by Alexa, the drone initiative Prime Air, and the new retail experience Amazon Go. This is just the beginning. Amazon's mission is to share its learnings and ML capabilities as fully managed services and put them into the hands of every developer and data scientist.
If you are interested in how you can develop ML-based smart applications on the AWS platform, and want to see a couple of cool demos, join us for the next AWS meetup. AWS Solutions Architect Vladimir Simek will present the full AWS portfolio for AI and ML, from virtual servers suited to training Deep Learning models up to fully managed API-based services.
Machine Learning - From Notebook to Production with Amazon SagemakerAmazon Web Services
Learn more about how to deploy machine learning models with high-performance machine learning algorithms, broad framework support, and one-click training, tuning, and inference.
NEW LAUNCH! Integrating Amazon SageMaker into your Enterprise - MCL345 - re:I...Amazon Web Services
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this workshop, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees must have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
Workshop slides for the introduction to Amazon SageMaker, and integration of Amazon SageMaker with other tools within your AWS environment. Visit https://aws.amazon.com/sagemaker for more information.
Integrating Amazon SageMaker into your Enterprise - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Get an introduction to Amazon SageMaker
- Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment
- View a walkthrough of the machine learning lifecycle to cover best practices in the ML process
NEW LAUNCH! Introducing Amazon SageMaker - MCL365 - re:Invent 2017Amazon Web Services
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems. You'll also hear how and why Intuit is using Amazon SageMaker on AWS for real-time fraud detection.
Building WhereML, an AI Powered Twitter Bot for Guessing Locations of Picture...Amazon Web Services
The WhereML Twitter bot is built on the LocationNet model, which is trained on the Berkeley Multimedia Commons public dataset of 33.9 million geotagged images from Flickr (and other sources). The model is based on a ResNet-101 architecture and adds a classification layer that splits the earth into ~15,000 cells created with Google's S2 spherical geometry library. This model builds on prior work done at Berkeley and Google.
In this session we'll start by describing AI in general terms, then dive into deep learning and the MXNet framework. We'll describe the LocationNet model in detail and show how it is trained and created in Amazon SageMaker. Finally, we'll talk about the Twitter Account Activity webhooks API and how to interact with it using Amazon API Gateway and an AWS Lambda function.
Attendees are encouraged to interact with the bot in real-time at whereml.bot or on twitter at @WhereML
All code used in this project is open source and was written live on twitch.tv/aws; attendees are encouraged to experiment with it.
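The location-as-classification idea in the LocationNet description can be sketched without the real S2 library: bucket each coordinate into one of roughly 15,000 grid cells and treat the cell id as the class label. The lat/lon grid below is a hypothetical stand-in for illustration only; actual S2 cells are hierarchical and roughly equal-area, which a flat lat/lon grid is not.

```python
# Toy stand-in for S2 cell assignment: bucket latitude/longitude into a
# fixed grid so a classifier can treat "location" as a class label.
# This is NOT the S2 library; it only illustrates the labeling scheme.

def grid_cell_id(lat, lon, lat_bins=100, lon_bins=150):
    """Map a coordinate to one of lat_bins * lon_bins cells (~15,000)."""
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("coordinate out of range")
    row = min(int((lat + 90) / 180 * lat_bins), lat_bins - 1)
    col = min(int((lon + 180) / 360 * lon_bins), lon_bins - 1)
    return row * lon_bins + col

# Each geotagged training image becomes (image features, cell_id); the
# model then predicts a probability distribution over the cell ids.
seattle = grid_cell_id(47.6, -122.3)
paris = grid_cell_id(48.9, 2.35)
```

In the real system the classification layer outputs one score per S2 cell, and the predicted cell's center is reported as the guessed location.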
Machine Learning State of the Union - MCL210 - re:Invent 2017Amazon Web Services
Join us to hear about our strategy for driving machine learning innovation for our customers and learn what's new from AWS in the machine learning space. Swami Sivasubramanian, VP of Amazon Machine Learning, will discuss and demonstrate the latest new services for ML on AWS: Amazon SageMaker, AWS DeepLens, Amazon Rekognition Video, Amazon Translate, Amazon Transcribe, and Amazon Comprehend. Attend this session to understand how to make the most of machine learning in the cloud.
How to build Forecasting services using ML and deep learn... algorithmsAmazon Web Services
Forecasting is an important process for a great many companies and is used in various contexts to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
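As a minimal illustration of the workflow this session describes (not the session's actual algorithm), the sketch below pre-processes records carrying a time component into an ordered series and produces a naive moving-average forecast. The data, helper names, and window size are hypothetical.

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily sales records: (timestamp, value).
records = [
    (date(2020, 1, 1), 10.0), (date(2020, 1, 2), 12.0),
    (date(2020, 1, 3), 11.0), (date(2020, 1, 4), 13.0),
    (date(2020, 1, 5), 14.0),
]

def to_series(records):
    """Pre-process: aggregate values per day and order them in time."""
    daily = defaultdict(float)
    for ts, value in records:
        daily[ts] += value
    return [daily[ts] for ts in sorted(daily)]

def moving_average_forecast(series, window=3):
    """Forecast the next point as the mean of the last `window` points."""
    tail = series[-window:]
    return sum(tail) / len(tail)

series = to_series(records)
forecast = moving_average_forecast(series)  # (11 + 13 + 14) / 3
```

A moving average is only a baseline; production forecasting systems typically learn seasonality and trend from many related series at once.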
Big Data for Startups: how to create Big Data applications in Server... modeAmazon Web Services
The variety and quantity of data created every day is growing ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services allow us to break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release speed and, ultimately, enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to an average saving of 70% compared to On-Demand instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
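The savings quoted in this abstract can be sanity-checked with a simple blended-cost calculation: a fleet that runs part of its capacity on Spot pays the discounted price only for that fraction. The prices, discount, and fleet size below are hypothetical; real Spot discounts vary by instance type, Region, and time.

```python
def blended_hourly_cost(on_demand_price, spot_discount, spot_fraction, instances):
    """Fleet cost per hour when `spot_fraction` of the instances run on
    Spot at (1 - spot_discount) times the On-Demand price."""
    spot_price = on_demand_price * (1 - spot_discount)
    n_spot = instances * spot_fraction
    n_on_demand = instances - n_spot
    return n_on_demand * on_demand_price + n_spot * spot_price

# Hypothetical: 10 instances at $0.10/h On-Demand, 70% Spot discount.
all_on_demand = blended_hourly_cost(0.10, 0.70, 0.0, 10)  # $1.00/h
mostly_spot = blended_hourly_cost(0.10, 0.70, 0.9, 10)    # $0.37/h
saving = 1 - mostly_spot / all_on_demand                  # 63% cheaper
```

Running 90% of the fleet on Spot at a 70% discount already cuts the bill by roughly two thirds, which is consistent with the average saving the session cites.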
In recent months, many customers have been asking us how to monetize Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea... servicesAmazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of...Amazon Web Services
With the traditional approach to IT, adopting DevOps techniques was difficult for many years: they often involved manual activities, occasionally causing application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and bringing significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows WorkloadsAmazon Web Services
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Create your first serverless ledger-based app with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses the features of QLDB.
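The core property of a ledger database described above, an append-only and cryptographically verifiable transaction log, can be sketched with a simple hash chain. This toy class is for illustration only; it is not QLDB's API, and QLDB's actual verification is based on Merkle-tree digests rather than a linear chain.

```python
import hashlib
import json

class MiniLedger:
    """Toy append-only ledger: each entry's hash covers the previous
    entry's hash, so tampering with any past record breaks the chain."""

    def __init__(self):
        self.entries = []  # list of (record_json, hex_hash)

    def append(self, record):
        """Append a record, chaining its hash to the previous entry."""
        prev_hash = self.entries[-1][1] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append((payload, digest))
        return digest

    def verify(self):
        """Recompute the chain from the start; False if anything changed."""
        prev_hash = "0" * 64
        for payload, digest in self.entries:
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if digest != expected:
                return False
            prev_hash = digest
        return True

ledger = MiniLedger()
ledger.append({"account": "A", "debit": 100})
ledger.append({"account": "B", "credit": 100})
assert ledger.verify()  # intact history verifies
```

Rewriting any earlier record changes its hash, which no longer matches the chained digest, so verification fails; that is the property a ledger database gives you without having to build it yourself.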
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for giving end users a great user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: myths to debunkAmazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity when modernizing and refactoring applications, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads while accelerating the transformation to the cloud; they also dive deeper into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and the container lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the few simple steps needed to quickly migrate one or more of your containers.