by Mahendra Bairagi, AI Specialist Solutions Architect, AWS
As the CTO of a new startup, you have taken up the challenge of improving the EDM music festival experience. At venues with multiple stages, festival-goers are always looking for the DJ stage with the liveliest atmosphere, which keeps them constantly moving between stages and missing out on the fun. You want to use machine learning and IoT technologies to solve this unique problem.
Do you accept the Challenge?
The objective of this task is to help festival-goers quickly identify the DJ stage where the crowd is happiest. You've seen a lot of buzz around computer vision, machine learning, and IoT, and want to use these technologies to detect crowd emotions. From your initial research, there are existing ML models you can leverage for face and emotion detection, but predictions (inference) can be run in two places: in the cloud or on the camera itself. Which one will work best for your needs at the festival? You are going to test both approaches and find out!
In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference in the cloud as well as at the IoT edge. You will learn to use Amazon SageMaker with Intel-powered C5 instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
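The crowd-emotion idea above can be sketched at the API level. A hedged sketch: the Amazon Rekognition operation referenced in the closing comment (detect_faces with Attributes=["ALL"]) is the real boto3 API, but the aggregation helper, the stage names, and the sample data are our own hypothetical illustration, so the snippet runs offline.

```python
# Hypothetical helper for the festival scenario: given per-stage responses
# shaped like Rekognition DetectFaces output, score each stage by the average
# confidence of HAPPY emotions across detected faces, then pick the liveliest.

def happiest_stage(responses_by_stage):
    """responses_by_stage: {stage_name: DetectFaces-style response dict}."""
    scores = {}
    for stage, resp in responses_by_stage.items():
        happy = [
            emotion["Confidence"]
            for face in resp.get("FaceDetails", [])
            for emotion in face.get("Emotions", [])
            if emotion["Type"] == "HAPPY"
        ]
        scores[stage] = sum(happy) / len(happy) if happy else 0.0
    return max(scores, key=scores.get), scores

# Offline sample shaped like Rekognition's DetectFaces response:
sample = {
    "main": {"FaceDetails": [
        {"Emotions": [{"Type": "HAPPY", "Confidence": 95.0}]},
        {"Emotions": [{"Type": "CALM", "Confidence": 80.0}]},
    ]},
    "tent": {"FaceDetails": [
        {"Emotions": [{"Type": "HAPPY", "Confidence": 40.0}]},
    ]},
}
print(happiest_stage(sample))  # ('main', {'main': 95.0, 'tent': 40.0})

# With credentials and a camera frame per stage, the responses would come from:
#   boto3.client("rekognition").detect_faces(Image={"Bytes": jpeg_bytes},
#                                            Attributes=["ALL"])
```

In the workshop, the same scoring would run either against cloud inference (Rekognition) or against the on-device model on AWS DeepLens, which is exactly the trade-off the scenario asks you to test.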
Machine Learning Models with Apache MXNet and AWS Fargate
by Ahmad Khan, Sr. Solutions Architect, AWS
Deep learning has been delivering state-of-the-art results across a growing number of domains and use cases, and deep learning models are correspondingly being deployed across a growing range of applications and segments. In this session, we will dive deep into serving machine learning models in production, and demonstrate how to efficiently deploy and serve models over serverless infrastructure using the open-source Model Server for Apache MXNet, containers, and AWS Fargate.
Using Amazon SageMaker to build, train, and deploy your ML Models
by Hagay Lupesko, SDM, AWS
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Using Amazon SageMaker to build, train, & deploy your ML Models
Machine Learning Workshops at the San Francisco Loft
Build, Train, and Deploy ML Models Using SageMaker
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Level: 200-300
Speaker: Martin Schade - R&D Engineer, AWS Solutions Architecture
Workshop: Build Deep Learning Applications with TensorFlow and SageMaker
by Ahmad Khan, Sr. Solutions Architect, AWS
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you'll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You'll build a model with TensorFlow by setting up a Jupyter notebook for image and object recognition, then quickly train and deploy that model through Amazon SageMaker.
Build Deep Learning Applications with TensorFlow & SageMaker
Build Deep Learning Applications with TensorFlow and SageMaker
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you'll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You'll build a model with TensorFlow by setting up a Jupyter notebook for image and object recognition, then quickly train and deploy that model through Amazon SageMaker.
Level: 200-300
Speakers:
Martin Schade - R&D Engineer, AWS Solutions Architecture
Steve Sedlmeyer - Sr. Solutions Architect, World Wide Public Sector, AWS
Building Deep Learning Applications with TensorFlow and Amazon SageMaker
by Steve Shirkey, Solutions Architect, ASEAN
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you'll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You'll build a model with TensorFlow by setting up a Jupyter notebook for image and object recognition, then quickly train and deploy that model through Amazon SageMaker.
AWS Machine Learning Week SF: Build, Train & Deploy ML Models Using SageMaker
AWS Machine Learning Week at the San Francisco Loft: Build, Train, and Deploy ML Models Using SageMaker
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
SageMaker Algorithms: Infinitely Scalable Machine Learning
by Nick Brandaleone, Solutions Architect, AWS
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. Amazon SageMaker provides high-performance machine learning algorithms, optimized for speed, scale, and accuracy, that can perform training on petabyte-scale data sets. This session will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker notebook, which will be made available to attendees.
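The streaming-versus-batch distinction mentioned above can be illustrated with a classic example of our own (not one of SageMaker's algorithms): Welford's online update computes mean and variance one sample at a time in constant memory, where a batch algorithm would need to hold the full data set.

```python
# Welford's online algorithm: running mean/variance over a data stream.
# A batch algorithm would buffer all samples; this keeps O(1) state.

class StreamingStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance; returns 0.0 before any samples arrive.
        return self._m2 / self.n if self.n else 0.0

stats = StreamingStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.mean, stats.variance)  # mean ≈ 5.0, variance ≈ 4.0
```

The same idea, applied to far richer statistics, is what lets a streaming algorithm make a single pass over petabyte-scale data.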
by Mike Miller, Sr. Manager, PMT
In this session you will see AWS DeepLens in action! You will learn how AWS DeepLens empowers developers of all skill levels to get started with deep learning in less than 10 minutes, with sample projects offering practical, hands-on examples that can start running with a single click. You will also get an overview of how to build and deploy computer vision models, such as face detection, using Amazon SageMaker and AWS DeepLens, and learn about some of the great use cases that bring together multiple AWS services to create new-to-the-world, deep-learning-enabled innovations.
Kate Werling - Using Amazon SageMaker to build, train, and deploy your ML Models
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
by Jeanine Banks, Director of Product Management, EC2 Windows & Enterprise Workloads, AWS
Researchers, scientists, and IT organizations are looking to develop, deploy, and deliver machine learning and HPC workloads by leveraging the agility, scalability, and availability of the public cloud. This session will provide a detailed technical deep dive into the Amazon EC2 Accelerated Computing platforms, which are the Amazon EC2 P3, G3, and F1 instances, and their key market use cases, including machine learning, high performance computing, scientific research, and reconfigurable computing.
Using Amazon SageMaker to build, train, and deploy your ML Models
by Neel Mitra, Solutions Architect, AWS
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
by Roy Ben-Alta, Business Development Manager, AWS
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train, and deploy machine learning models in production applications. In this session, you will learn how to integrate Amazon SageMaker with other AWS services to meet enterprise requirements. Using Amazon S3, AWS Glue, AWS KMS, Amazon SageMaker, AWS CodeStar, Amazon ECR, and AWS IAM, we will walk through the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees should have some familiarity with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
AWS Machine Learning Week at the San Francisco Loft: Build Deep Learning Applications with TensorFlow and SageMaker
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this workshop, you'll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train, and deploy models at scale. You'll build a model with TensorFlow by setting up a Jupyter notebook for image and object recognition, then quickly train and deploy that model through Amazon SageMaker.
Using Amazon SageMaker to build, train, and deploy your ML Models
by Gitansh Chadha, Solutions Architect, AWS
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Keith Steward - SageMaker Algorithms: Infinitely Scalable Machine Learning
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. Amazon SageMaker provides high-performance machine learning algorithms, optimized for speed, scale, and accuracy, that can perform training on petabyte-scale data sets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker notebook, which will be made available to attendees.
Conference Keynote 2: Ubiquity of AI and Machine Learning in our Everyday Life
Olivier Klein, Head of Emerging Technologies, APAC, AWS Solutions Architecture
by Mike Gillespie, Solutions Architect, AWS
Natural language holds a wealth of information, like user sentiment and conversational intent. In this session, we'll demonstrate the capabilities of Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. We'll show you how to build a VOC (Voice of the Customer) application and integrate it with other AWS services, including AWS Lambda, Amazon S3, Amazon Athena, Amazon QuickSight, and Amazon Translate. We'll also show you additional methods for NLP available through Amazon SageMaker.
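As a concrete sketch of the Comprehend APIs the session demos: the operation shown in the closing comment (detect_sentiment with Text and LanguageCode) is the real boto3 call, while the chunking helper below is our own hypothetical addition for long inputs (the service caps request size; the 4500-byte limit used here is an assumption chosen to stay safely under it).

```python
# Hypothetical pre-processing step for Comprehend: split long text into
# pieces whose UTF-8 encoding stays under a byte budget, breaking on
# whitespace. (A single word longer than the budget is kept whole.)

def chunk_utf8(text, max_bytes=4500):
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate.encode("utf-8")) > max_bytes and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

print(chunk_utf8("I loved the keynote but the queues were long", max_bytes=20))
# ['I loved the keynote', 'but the queues were', 'long']

# With credentials configured, each piece could then be scored:
#   comprehend = boto3.client("comprehend")
#   resp = comprehend.detect_sentiment(Text=piece, LanguageCode="en")
#   resp["Sentiment"], resp["SentimentScore"]
```

In a VOC pipeline, this scoring step would typically sit in a Lambda function triggered by new transcripts landing in S3, with results queried via Athena and visualized in QuickSight.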
Supercharge your Machine Learning Solutions with Amazon SageMaker
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems. You'll also hear how and why Intuit is using Amazon SageMaker on AWS for real-time fraud detection.
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Level: 200-300
Speaker: Randall Hunt - Sr. Technical Evangelist, AWS
Optimize your Machine Learning Workloads on AWS (July 2019), by Julien Simon
Talk at Floor 28, Tel Aviv.
Topics covered: infrastructure, tips to speed up training, hyperparameter optimization, model compilation with Amazon SageMaker Neo, cost optimization, and Amazon Elastic Inference.
Automate for Efficiency with Amazon Transcribe and Amazon Translate
by Niranjan Hira, Solutions Architect, AWS
Teaching a computer to understand human language is one of the most challenging problems in computer science. However, significant progress has been made in automatic speech recognition (ASR) and machine translation (MT), enabling highly accurate and fluent transcriptions and translations. Amazon Transcribe is an ASR service that makes it easy for developers to add speech-to-text capability to their applications, and Amazon Translate is an MT service that delivers fast, high-quality, and affordable language translation. In this session, you'll learn how to weave machine translation and transcription into your workflows to increase the efficiency and reach of your operations.
Learning Objectives:
- Learn how Amazon SageMaker can be used for exploratory data analysis before training
- Learn how Amazon SageMaker provides managed distributed training with flexibility
- Learn how easy it is to deploy your models for hosting within Amazon SageMaker
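The train-and-deploy flow these objectives describe can be sketched at the raw API level. A hedged sketch: the boto3 operation and field names (create_training_job, AlgorithmSpecification, ResourceConfig, and so on) are the real SageMaker CreateTrainingJob API, but the image URI, role ARN, and S3 paths below are placeholders you would replace with your own.

```python
# Assemble the request body for sagemaker.create_training_job. Keeping this
# as a pure function makes the request inspectable before anything runs.

def training_job_request(job_name, image_uri, role_arn, train_s3, output_s3,
                         instance_type="ml.m5.xlarge"):
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,       # ECR image of the algorithm
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                  # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": train_s3,
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {
            "InstanceType": instance_type,
            "InstanceCount": 1,               # raise for distributed training
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# Placeholder identifiers, for illustration only:
request = training_job_request(
    "demo-job",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest",
    "arn:aws:iam::123456789012:role/SageMakerRole",
    "s3://my-bucket/train/",
    "s3://my-bucket/output/",
)
print(sorted(request))

# With credentials configured:
#   boto3.client("sagemaker").create_training_job(**request)
# Hosting then uses create_model / create_endpoint_config / create_endpoint.
```

In practice the SageMaker Python SDK wraps this request behind Estimator.fit() and .deploy(), which is the "one-click" experience the sessions describe.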
DataPalooza at the San Francisco Loft: In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference in the cloud as well as at the IoT edge. You will learn to use Amazon SageMaker with Intel-powered C5 instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
by Mike Miller, SR. Manager PMT
In this session you will get to see AWS DeepLens in action! You will learn how AWS DeepLens empowers developers of all skill levels to get started with deep learning in less than 10 minutes by providing sample projects with practical, hands-on examples which can start running with a single click. In this session you will get an overview of how to build and deploy computer vision models, such as face detection using Amazon SageMaker and AWS DeepLens and learn about some of the great use cases that bring together multiple AWS services to create new to the world deep-learning enabled innovation.
Kate Werling - Using Amazon SageMaker to build, train, and deploy your ML Mod...Amazon Web Services
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
by Jeanine Banks, Director of Product Management, EC2 Windows & Enterprise Workloads, AWS
Researchers, scientists and IT organizations are looking to develop, deploy and deliver machine learning and HPC workloads by leveraging the agility, scalability and availability of the public cloud. Amazon EC2 Accelerated Computing platform products include Amazon EC2 P3 instances, Amazon EC2 G3 instances and Amazon EC2 F1 instances. This session will provide a detailed technical deep dive of the Amazon EC2 Accelerated Computing platforms, which are Amazon EC2 P3, Amazon EC2 G3 and Amazon EC2 F1 instances, their key market use cases which include machine learning, high performance computing, scientific research and reconfigurable computing.
Using Amazon SageMaker to build, train, and deploy your ML ModelsAmazon Web Services
by Neel Mitra, Solutions Architect, AWS
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
by Roy Ben-Alta, Business Development Manager, AWS
Amazon SageMaker is a fully managed platform for data scientists and developers to build, train and deploy machine learning models in production applications. In this session, you will learn how to integrate Amazon SageMaker with other AWS services in order to meet enterprise requirements. Using Amazon S3, Amazon Glue, Amazon KMS, Amazon SageMaker, Amazon CodeStar, Amazon ECR, IAM; we will walkthrough the machine learning lifecycle in an integrated AWS environment and discuss best practices. Attendees must have some familiarities with AWS products as well as a good understanding of machine learning theory. The dataset for the workshop will be provided.
AWS Machine Learning Week at the San Francisco Loft: Build Deep Learning Applications with TensorFlow and SageMaker
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon SageMaker.
Using Amazon SageMaker to build, train, and deploy your ML ModelsAmazon Web Services
by Gitansh Chadha, Solutions Architect AWS
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
Keith Steward - SageMaker Algorithms Infinitely Scalable Machine Learning_VK.pdfAmazon Web Services
Amazon SageMaker is a fully-managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models, at any scale. Amazon SageMaker provides high-performance, machine learning algorithms optimized for speed, scale, and accuracy, to perform training on petabyte-scale data sets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker Notebook, which will be made available to attendees.
大會主題演說 2: AI x 機器學習,無所不在!Ubiquity of AI and Machine Learning in our Everyday ...Amazon Web Services
大會主題演說 2: AI x 機器學習,無所不在!Ubiquity of AI and Machine Learning in our Everyday Life
Olivier Klein, Head of Emerging Technologies, APAC, AWS Solutions Architecture
by Mike Gillespie, Solutions Architect, AWS
Natural language holds a wealth of information like user sentiment and conversational intent. In this session, we'll demonstrate the capabilities of Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. We'll show you how to build a VOC (Voice of the Customer) application and integrate it with other AWS services including AWS Lambda, Amazon S3, Amazon Athena, Amazon QuickSight, and Amazon Translate. We’ll also show you additional methods for NLP available through Amazon Sagemaker.
Supercharge your Machine Learning Solutions with Amazon SageMakerAmazon Web Services
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems. You'll also hear how and why Intuit is using Amazon SageMaker on AWS for real-time fraud detection.
Amazon SageMaker is a fully-managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models, at scale. This session will introduce you the features of Amazon SageMaker, including a one-click training environment, highly-optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero-setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
Level: 200-300
Speaker: Randall Hunt - Sr. Technical Evangelist, AWS
Optimize your Machine Learning Workloads on AWS (July 2019)Julien SIMON
Talk at Floor 28, Tel Aviv.
Infrastructure, tips to speed up training, hyperparameter optimization, model compilation, Amazon SageMaker Neo, cost optimization, Amazon Elastic Inference
Automate for Efficiency with Amazon Transcribe and Amazon TranslateAmazon Web Services
by Niranjan Hira, Solutions Architect, AWS
Teaching a computer how to understand human language is one of the most challenging problems in computer science. However, significant progress has been made in automatic speech recognition (ASR) and machine translation (MT) to create highly accurate and fluent transcriptions and translations. Amazon Transcribe is an ASR service that makes it easy for developers to add speech to text capability to their applications, and Amazon Translate is a MT service that delivers fast, high-quality, and affordable language translation. In this session, you’ll learn how to weave machine translation and transcription into your workflows, to increase the efficiency and reach of your operations.
Learning Objectives:
- Learn how Amazon SageMaker can be used for exploratory data analysis before training
- Learn how Amazon SageMaker provides managed distributed training with flexibility
- Learn how easy it is to deploy your models for hosting within Amazon SageMaker
As the CTO of a new startup, you have taken up a challenge of improving the EDM music festival experience. At venues with multiple stages, festival-goers are always looking to identify DJ stage areas with the liveliest atmosphere. This causes them to constantly move around between different stages and miss out on having fun.
DataPalooza at the San Francisco Loft: In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference on the cloud as well as on the IoT Edge. You will learn to use Amazon SageMaker with Intel C5 Instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
AI for an intelligent cloud and intelligent edge: Discover, deploy, and manag... (James Serra)
Discover, manage, deploy, monitor – rinse and repeat. In this session we show how Azure Machine Learning can be used to create the right AI model for your challenge and then easily customize it using your development tools while relying on Azure ML to optimize them to run in hardware accelerated environments for the cloud and the edge using FPGAs and Neural Network accelerators. We then show you how to deploy the model to highly scalable web services and nimble edge applications that Azure can manage and monitor for you. Finally, we illustrate how you can leverage the model telemetry to retrain and improve your content.
The breadth and depth of Azure products that fall under the AI and ML umbrella can be difficult to follow. In this presentation I'll first define exactly what AI, ML, and deep learning are, and then go over the various Microsoft AI and ML products and their use cases.
Automated machine learning (automated ML) automates feature engineering, algorithm and hyperparameter selection to find the best model for your data. The mission: Enable automated building of machine learning with the goal of accelerating, democratizing and scaling AI. This presentation covers some recent announcements of technologies related to Automated ML, and especially for Azure. The demonstrations focus on Python with Azure ML Service and Azure Databricks.
Big Data Advanced Analytics on Microsoft Azure 201904, by Mark Tabladillo
This talk summarizes key points for big data advanced analytics on Microsoft Azure. First, there is a review of the major technologies. Second, there is a series of technology demos (focusing on VMs, Databricks and Azure ML Service). Third, there is some advice on using the Team Data Science Process to help plan projects. The deck has web resources recommended. This presentation was delivered at the Global Azure Bootcamp 2019, Atlanta GA location (Alpharetta Avalon).
Join us to see how public-sector organizations and AWS Partners are combining smart devices and artificial intelligence to create flexible, secure, and cost-effective solutions. Applying machine learning models to live video/audio, cameras can be transformed into flexible IoT devices that perform critical functions around public safety, security, property management, smart parking, and environmental management. Learn how these solutions are architected using AWS services such as AWS IoT Core, AWS Greengrass, AWS DeepLens, Amazon SageMaker, and Amazon Alexa.
AWS re:Invent is an annual global conference of the Amazon Web Services community held in Las Vegas. In 2017, we held 1000+ breakout sessions and attracted over 40,000 attendees. The event offers expanded opportunities to learn about the latest AWS releases, use cases and business benefits, not to mention diving deep into hot topics and meeting with our subject matter experts.
Missed it? Don’t worry, we are bringing AWS re:Invent to Hong Kong on Jan 18, 2018. Packed in a day, AWS re:Invent 2017 Recap Hong Kong will showcase new releases announced at re:Invent 2017 on Serverless & Container, DevOps & Mobile, Artificial Intelligence & Machine Learning and more. Local customers will also be invited to share their re:Invent experience and success stories with AWS.
Discover the latest services and features from Amazon Web Services and learn how to integrate them into your applications
Artificial Intelligence and Machine Learning in Azure (2018-11-14), by Bruno Capuano
Slides used during my session "Artificial Intelligence and Machine Learning in Azure" for The Azure Group (Canada's Azure User Community) on November 14 2018.
Machine Learning Inference at the Edge, by Julien Simon
Machine Learning works by using powerful algorithms to discover patterns in data and construct complex mathematical models using these patterns. Once the model is built, you perform inference by applying new data to the trained model to make predictions for your application. Building and training ML models requires massive computing resources, so it is a natural fit for the cloud. But inference takes a lot less computing power and is typically done in real time when new data is available, so getting inference results with very low latency is important to making sure your applications can respond quickly to local events. AWS Greengrass ML inference gives you the best of both worlds: you use ML models that are built and trained in the cloud, and you deploy and run ML inference locally on connected devices. For example, autonomous cars need to identify road signs in real time, and drones need to recognize objects with or without network connectivity.
Scalable Open-Source IoT Solutions on Microsoft Azure, by Maxim Ivannikov
Scalable Open-Source IoT Solutions from gateways to the Cloud using DeviceHive, Ubuntu Snappy Core and Microsoft Azure.
The presentation was used during the NY Open-Source IoT Solutions Summit on November 12, 2015.
In this session we will delve into the world of Azure Databricks and analyze why it is becoming a fundamental tool for data scientists and/or data engineers in conjunction with Azure services.
How to build forecasting services using ML and deep learn... algorithms (Amazon Web Services)
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component, and then use an algorithm that produces an accurate forecast based on the type of data analyzed.
Big Data for Startups: how to create Big Data applications in Server... mode (Amazon Web Services)
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break through these limits.
Let's see how Big Data applications can be developed quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development greatly increased our agility and release speed, and ultimately enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances (Amazon Web Services)
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how easily they can be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta therefore invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea... services (Amazon Web Services)
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... (Amazon Web Services)
With the traditional approach to IT, implementing DevOps techniques has been difficult for many years: until now they have often involved manual activities, occasionally causing application downtime and interrupting user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and delivering significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows workloads (Amazon Web Services)
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis based on artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event on Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger-type functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but which are complex and costly tools to manage.
Amazon QLDB removes the need to build custom, complex systems by providing a fully managed serverless ledger database.
In this session we will discover how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for giving end users an exceptional user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Database and VMware Cloud™ on AWS: myths to debunk (Amazon Web Services)
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive into the architecture and show how to fully exploit the potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
3. The Challenge
DataPalooza: a music-festival-themed ML & IoT workshop
Scenario: Your bold startup has taken on the challenge of providing a new type of EDM music festival experience. At venues with multiple stages, festival-goers are always looking to identify which DJ stage areas are the liveliest. This causes them to constantly move around between different stages and miss out. You are looking to use Machine Learning and IoT to come up with a connected fan experience that takes the music festival scene to the next level. From your initial research, there are existing ML models that you can leverage to do face and emotion detection, but there are two ways that the predictions (inference) can be done: on the cloud and on the camera itself. Which one will work best for your needs at the festival? You are going to test both approaches and find out!
In this workshop you will use AWS and Intel technologies including Amazon SageMaker with Intel C5 instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda, along with Intel IoT hardware kits. The objective of the workshop is to learn how to build and deploy a machine learning model and then run inference on it from the cloud and from the edge device.
By the time you're done with these challenges, EDM DJs will be able to tell whether the crowd is enjoying their set by the looks on their faces.
6. Machine Learning Process is Hard…
Fetch data → Clean & format data → Prepare & transform data → Train model → Evaluate model → Integrate with prod → Monitor/debug/refresh
Data wrangling
• Set up and manage notebook environments
• Get data to notebooks securely
Experimentation
• Set up and manage clusters
• Scale/distribute ML algorithms
Deployment
• Set up and manage inference clusters
• Manage and auto-scale inference APIs
• Testing, versioning, and monitoring
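The stages above can be sketched as a chain of small functions. This is a toy illustration in Python; the function names and the tiny in-memory dataset are assumptions for the sketch, not SageMaker APIs.

```python
# Toy sketch of the ML process stages: fetch -> clean -> transform ->
# train -> evaluate. All names and data are illustrative.

def fetch_data():
    # Stand-in for pulling raw records from a data store.
    return [{"bpm": 128, "crowd_score": 0.9}, {"bpm": None, "crowd_score": 0.4}]

def clean_and_format(records):
    # Drop incomplete rows.
    return [r for r in records if all(v is not None for v in r.values())]

def prepare_and_transform(records):
    # Scale the bpm feature into [0, 1].
    return [{"bpm": r["bpm"] / 200.0, "crowd_score": r["crowd_score"]}
            for r in records]

def train_model(dataset):
    # Toy "model": the mean crowd score acts as a constant predictor.
    return sum(r["crowd_score"] for r in dataset) / len(dataset)

def evaluate_model(model, dataset):
    # Mean absolute error of the constant predictor.
    return sum(abs(model - r["crowd_score"]) for r in dataset) / len(dataset)

def run_pipeline():
    data = prepare_and_transform(clean_and_format(fetch_data()))
    model = train_model(data)
    return model, evaluate_model(model, data)
```

The remaining stages (integrate with prod, monitor/debug/refresh) are operational rather than computational, which is exactly where a managed service earns its keep.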
7. Amazon SageMaker Components
Building | Training | Hosting
• Amazon's fast, scalable algorithms
• Distributed Apache MXNet and TensorFlow
• Bring your own algorithm
• Hyperparameter optimization
8. Amazon SageMaker Components
Building | Training | Hosting
• Amazon's fast, scalable algorithms
• Distributed Apache MXNet and TensorFlow
• Bring your own algorithm
• Hyperparameter optimization
9. Zero Setup for Data Exploration
• Resizable as you need
• Common tools pre-installed
• Easy access to your data sources
• No servers to manage
10. Amazon SageMaker Components
Building | Training | Hosting
• Amazon's fast, scalable algorithms
• Distributed Apache MXNet and TensorFlow
• Bring your own algorithm
• Hyperparameter optimization
Training runs on clusters of GPU or powerful CPU instances
11. Distributed Training that Works with You
• Amazon-optimized algorithms using the AWS SDK… or Apache Spark Estimators
• Bring your own deep learning script… or your custom algorithm Docker image
12. More than Just General Purpose Algorithms
• XGBoost, FM, and Linear for classification and regression
• K-means and PCA for clustering and dimensionality reduction
• Image classification with convolutional neural networks
• LDA and NTM for topic modeling, seq2seq for translation
13. Bring Your Own Algorithm
Choose your own framework… add algorithm code to a Docker container… publish to Amazon ECR (Elastic Container Registry)
14. Amazon SageMaker Components
Building | Training | Hosting
• Amazon's fast, scalable algorithms
• Distributed Apache MXNet and TensorFlow
• Bring your own algorithm
• Hyperparameter optimization
Hosting uses elastic clusters of CPU or GPU instances
16. Modular Architecture So You Can Use What You Need
• Training: a training algorithm consumes past data and ground truth to produce model artifacts.
• Hosting: inference code loads the model artifacts to serve predictions.
• Client applications send data to Amazon SageMaker and receive inferences back.
17. Pay As You Go and Inexpensive
• ML compute billed by the second, starting at $0.0464/hr
• ML storage billed by the second, at $0.14 per GB-month
• Data processed in notebooks and hosting at $0.016 per GB
• Free trial to get started quickly
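The quoted rates make cost estimates simple arithmetic. A hedged sketch follows; the usage figures are invented for illustration, and current pricing should be checked on the AWS pricing page.

```python
# Back-of-the-envelope monthly cost from the per-unit rates on this slide.
# The usage figures below are made up for illustration.

COMPUTE_PER_HOUR = 0.0464      # ML compute, billed by the second
STORAGE_PER_GB_MONTH = 0.14    # ML storage
DATA_PER_GB = 0.016            # data processed in notebooks and hosting

def monthly_cost(compute_hours, storage_gb, data_gb):
    return (compute_hours * COMPUTE_PER_HOUR
            + storage_gb * STORAGE_PER_GB_MONTH
            + data_gb * DATA_PER_GB)

# e.g. 100 compute hours + 50 GB storage + 200 GB processed:
# 100 * 0.0464 + 50 * 0.14 + 200 * 0.016 = 4.64 + 7.00 + 3.20 = 14.84
```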
18. Amazon EC2 C5 Instances
Cost-effective CPUs, e.g., for models using INT8
• Powered by 3.0 GHz Intel Xeon Platinum (Skylake) processors
• Up to 72 vCPUs and 144 GiB RAM (25% price/performance improvement versus C4)
• Nitro hypervisor for larger instance sizes
Ideal for running ML inference where GPU-based instances would be overkill (cost saving)
Suitable for training simple ML algorithms (text or CSV data) or during dev/test and proof-of-concepts
19. Can we do more to put ML in the hands of all developers (literally)?
20. AWS DeepLens is not a video camera… it's the world's first deep-learning-enabled developer kit
27. AWS and Intel
Amazon Web Services (AWS) and Intel technologies are designed to provide a more secure, scalable edge-to-cloud solution for IoT applications:
• Operate locally and on the cloud
• Easily manage and update devices
• Connect fleets of devices, gateways, and cloud environments
28. Customer Pain Points with IoT Implementation
Security
• Securing data transport to the cloud with encryption
• Enabling devices to communicate with one another without introducing vulnerabilities
• Ensuring devices have not been tampered with before sending data to the cloud
• Authenticating device identity without sending credentials over the wire
Deployment, Management, and Scale
• Managing large numbers of simultaneous connections to devices connecting via different networks
• Updating device software, patching, and sending configurations to device fleets
• Incorporating legacy and proprietary protocols with IoT deployments
• Bandwidth and storage costs of sending device data to the cloud when local hardware has sufficient resources for local analytics
• Ongoing security management over the life of the implementation
29. Benefits of Using AWS with Intel IoT Hardware
• Easy to deploy and manage: Whether making existing things smart or deploying new connected devices, AWS and Intel make it easy to get started.
• Security enabled: Intel hardware and software solutions are tightly integrated with the robust AWS cloud infrastructure to deliver enhanced security, from device, to network, to cloud.
• Scalable: Start with minimal or no upfront investment and easily scale to millions of devices and billions of messages.
• Cost-effective: Leverage pay-as-you-go pricing, the flexibility to use local and cloud resources, and flexible, low-cost IT resources powered by Intel technology to reduce the costs of IoT deployments.
30. AWS and Intel Strategies to Maximize Value of IoT Deployments
• Act locally on device data at the edge; use the cloud for management, analytics, and durable storage.
• Operate offline when latency requirements or intermittent connectivity make a round trip to the cloud unfeasible.
• Execute AWS Lambda functions locally using AWS Greengrass, reducing the complexity of developing embedded software.
• Increase the quality of the data you send to the cloud by filtering device data locally and only transmitting the data you need, so you can achieve rich insight at a lower cost.
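The local-filtering strategy in the last point boils down to a predicate that decides which readings are worth cloud bandwidth. Below is a minimal sketch of logic a Greengrass-hosted function might apply; the field names and the threshold are assumptions for the sketch.

```python
# Edge-side filter: forward only readings worth sending to the cloud.
# "anomaly_score", "is_summary", and the threshold are illustrative.

ANOMALY_THRESHOLD = 0.8

def should_send_to_cloud(reading):
    """Forward anomalous readings and periodic summaries; drop the rest."""
    return (reading.get("anomaly_score", 0.0) >= ANOMALY_THRESHOLD
            or reading.get("is_summary", False))

def filter_batch(readings):
    return [r for r in readings if should_send_to_cloud(r)]
```

Keeping the predicate a pure function makes it trivial to unit-test on the workstation before deploying it to the device fleet.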
31. Where Do I Want To Process Data?
A common programming model across the infrastructure tiers, from IoT endpoint to gateway, appliance, PoP, and cloud:
• Onboard: Amazon FreeRTOS
• At the edge: AWS Greengrass
• At the PoP: Lambda@Edge
• In the AWS Cloud: Lambda
32. Features of Greengrass
• Security: AWS-grade security
• Data and state sync: local Device Shadows
• Local triggers: local Message Broker
• Local actions: local Lambda functions
• Machine learning inference: local execution of ML models
• Protocol adapters: local messaging with other devices
• Over-the-air updates: easily update Greengrass Core
• Local resource access: Lambdas interact with peripherals
• Amazon FreeRTOS: works together out of the box
33. Benefits of AWS Greengrass
• Respond quickly to local events
• Operate offline
• Simplified device programming
• Reduce the cost of IoT applications
• AWS-grade security
35. Images: Universal, Ubiquitous, and Essential
There are 3,700,000,000 internet users in 2017
1,200,000,000,000 photos will be taken in 2017 (9% YoY growth)
Source: InfoTrends, Worldwide
36. Amazon Rekognition
Extract rich metadata from visual content:
• Object and Scene Detection
• Facial Analysis
• Face Comparison
• Facial Recognition
• Celebrity Recognition
• Image Moderation
37. Why Use Rekognition?
Object & Scene Detection
• Photo-sharing apps can power smart searches and quickly find cherished memories, such as weddings, hiking, or sunsets
Facial Analysis
• Retail businesses can understand the demographics and sentiment of in-store customers
Face Comparison
• Hotels & hospitality businesses can provide seamless access for guests and VIPs
Facial Recognition
• Provide secondary authentication for existing applications
38. Object & Scene Detection
Identify objects and scenes and provide confidence scores. Object and scene detection makes it easy for you to add features that search, filter, and curate large image libraries.
Example DetectLabels output for a single photo: Flower Arrangement, Chair, Coffee Table, Living Room, Indoors, Furniture, Cushion, Vase, Maple, Villa, Plant, Garden, Water, Swimming Pool, Tree, Potted Plant, Backyard, Patio
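Because DetectLabels attaches a confidence score to each label, a typical first step is filtering by a confidence floor. A small sketch follows; the sample response is hand-written to mirror the API's JSON shape, not real service output.

```python
# Keep only high-confidence labels from a DetectLabels-style response.
# The sample response below is hand-written, mirroring the API's shape.

def labels_above(response, min_confidence=90.0):
    return [label["Name"] for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

sample_response = {
    "Labels": [
        {"Name": "Swimming Pool", "Confidence": 98.2},
        {"Name": "Patio", "Confidence": 91.5},
        {"Name": "Maple", "Confidence": 62.0},
    ]
}
```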
39. Facial Analysis
Analyze facial characteristics in multiple dimensions with DetectFaces:
• Bounding box
• Emotions expressed (e.g., Happy, Surprised) with confidence scores
• General attributes: Smile, EyesOpen, Beard, Mustache
• Facial pose: Pitch, Roll, Yaw
• Facial landmarks: EyeLeft, EyeRight, Nose, RightPupil, LeftPupil, MouthRight, LeftEyeBrowUp
• Demographic data: age range (e.g., 29–45), gender
• Image quality: brightness, sharpness
42. Celebrity Recognition & Image Moderation
Newly released Rekognition features:
• RecognizeCelebrities: recognize thousands of famous individuals
• DetectModerationLabels: detect explicit and suggestive content
43. Interfacing with Rekognition
Optimizing your input & requests for best performance:
• S3 input for API calls: max image size of 15 MB
• 5 MB limit for non-S3 (Base64-encoded) API calls
• Minimum image resolution (x or y) of 80 pixels
• Image data supported in PNG or JPG format
• Max number of faces in a single face collection is 1 million
• The max matching faces the search API returns is 4096
• A face should occupy 5%+ of the image for detection
• Collections are for faces!
Use Amazon CloudWatch to observe & issue alerts on Rekognition metrics
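Several of these limits can be checked client-side before spending an API call. A hedged pre-flight sketch follows; the helper and its messages are assumptions for illustration, not part of boto3 or any AWS SDK.

```python
# Pre-flight checks mirroring the Rekognition input limits listed above.
# This helper is illustrative, not part of any AWS SDK.

MAX_S3_IMAGE_BYTES = 15 * 1024 * 1024      # 15 MB for S3-based calls
MAX_INLINE_IMAGE_BYTES = 5 * 1024 * 1024   # 5 MB for Base64-encoded calls
MIN_DIMENSION_PX = 80

def validate_image(size_bytes, width_px, height_px, via_s3=True):
    """Return a list of limit violations; an empty list means OK to submit."""
    problems = []
    limit = MAX_S3_IMAGE_BYTES if via_s3 else MAX_INLINE_IMAGE_BYTES
    if size_bytes > limit:
        problems.append("image exceeds size limit")
    if min(width_px, height_px) < MIN_DIMENSION_PX:
        problems.append("image resolution below 80 px minimum")
    return problems
```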
45. Rekognition APIs: Overview
Rekognition's computer vision API operations can be grouped into non-storage API operations and storage-based API operations.
Non-storage API operations: CompareFaces, DetectFaces, DetectLabels, DetectModerationLabels, GetCelebrityInfo, RecognizeCelebrities
Storage-based API operations: CreateCollection, DeleteCollection, DeleteFaces, IndexFaces, ListCollections, ListFaces, SearchFaces, SearchFacesByImage
Sample response fragment (truncated):
{
  "FaceMatches": [
    {"Face": {"BoundingBox": {
      "Height": 0.2683333456516266,
      "Left": 0.5099999904632568,
      "Top": 0.1783333271741867,
      "Width": 0.17888888716697693},
      …
46. What Can You Do with Amazon Rekognition?
• Search for people, objects, scenes, and concepts across millions of images
• Filter inappropriate or specific content
• Redact identities from images of faces
• Verify identities by matching against reference faces
• Recognize individuals by matching faces to a collection
• Analyze user traffic hotspots and journey paths by demographics and sentiment
47. Searchable Image Library
Real estate property search (Property Search → Amazon Elasticsearch):
1. A user captures an image for their property listing.
2. The mobile app uploads the image to S3.
3. A Lambda function is triggered and calls Rekognition.
4. Rekognition retrieves the image from S3 and returns labels for the property and amenities.
5. Lambda pushes the labels and confidence scores to Elasticsearch.
6. Other users can then search properties by landmarks, category, etc.
Flow: Photo Upload → Amazon S3 → AWS Lambda → Detect Objects & Scenes
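The Lambda step in this flow is essentially a reshaping job: turn Rekognition labels into a document the search index can serve. A sketch of that transformation follows; the document field names ("listing_id", "labels", etc.) are assumptions, not a fixed schema.

```python
# Reshape a DetectLabels-style response into a search-index document,
# as the property-search Lambda above might do. Field names such as
# "listing_id" and "labels" are illustrative.

def build_search_document(listing_id, s3_key, response, min_confidence=80.0):
    labels = [
        {"name": label["Name"], "confidence": round(label["Confidence"], 1)}
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]
    return {"listing_id": listing_id, "image": s3_key, "labels": labels}
```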
49. Face-Based User Verification
Confirm user identities by comparing their live image with a reference image:
1. The application captures a live image of each employee as they scan their access card.
2. The application retrieves the user's badge image from S3.
3. Rekognition compares the live image and the badge image, and returns a similarity score.
4. If the similarity score is over 92%, the application returns a green status; if not, an alert is issued to security staff.
Components: Image Capture → Amazon S3 → CompareFaces → Application
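The decision in the final step is a simple threshold over the CompareFaces similarity score. A sketch using the 92% threshold from the flow above; the status strings are assumptions, while the FaceMatches/Similarity fields follow the API's response shape.

```python
# Decide access status from a CompareFaces-style response using the
# 92% similarity threshold described above. Status strings are invented.

SIMILARITY_THRESHOLD = 92.0

def access_decision(compare_faces_response):
    matches = compare_faces_response.get("FaceMatches", [])
    best = max((m["Similarity"] for m in matches), default=0.0)
    return "green" if best >= SIMILARITY_THRESHOLD else "alert_security"
```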
50. Face-Based User Verification
Confirm user identities by comparing their live image with a reference image:
• S3 encryption of badge images: SSE-S3, SSE-KMS, SSE-C
• Prevent tampering with bucket policies & IAM read-only permissions
• Extend by using Rekognition collections
• CloudTrail: logging & auditing with tamper-proof log signatures
• Tie notifications into SNS/SES, custom CloudWatch Logs metrics, or Elasticsearch with alerts
Services involved: AWS KMS, AWS CloudTrail, AWS Lambda, Amazon S3, Amazon SNS, AWS CloudFormation, Amazon CloudWatch, Amazon SES
51. Facial Recognition
Identify individuals by matching a live image to a collection of images of known persons:
1. The end user takes a photo in the photo app; the image is sent (via Amazon S3) to SearchFacesByImage.
2. Rekognition searches the face collection for matches to the reference image and returns an array of face metadata for potential face matches, ordered by similarity.
3. Matches are looked up in a person details table; if source images are required, they are retrieved from S3.
4. The photo app displays search results to the end user.
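Resolving matches against the person details table is a join over the returned FaceIds, preserving the similarity ordering of the response. A sketch in which the details "table" is an in-memory dict and all names are invented:

```python
# Resolve SearchFacesByImage matches to person records, keeping the
# API's similarity ordering. The details "table" and names are invented.

PERSON_DETAILS = {
    "face-1": {"name": "DJ Nova"},
    "face-2": {"name": "DJ Flux"},
}

def resolve_matches(search_response, min_similarity=80.0):
    results = []
    for match in search_response.get("FaceMatches", []):
        if match["Similarity"] < min_similarity:
            continue
        person = PERSON_DETAILS.get(match["Face"]["FaceId"])
        if person:
            results.append({"name": person["name"],
                            "similarity": match["Similarity"]})
    return results
```

In production the dict would be a DynamoDB or RDS lookup keyed by FaceId.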
52. Collections and Access Patterns
Logging: visitor logs, digital libraries
• Easily find specific images from a digital library
• Find certain images by using a reference image
Social tagging: photo storage and sharing
• One collection per application user
• Automated friend tagging
Person verification: employee gate check
• One collection for each person to be verified
• Detection of stolen/shared IDs
53. Rekognition APIs: Advanced Usage
Decision trees and processing pipelines
Why?
• Many use cases require more than a single operation to arrive at actionable data
How?
• S3 event notifications, Lambda, Step Functions
• DynamoDB for persistent pipeline storage
• Augmenting results with 3rd-party AI/ML
• OpenCV, MXNet, etc. on EC2 Spot, ECS, AI/ML AMI
Sample use cases
• Person of interest near a celebrity
• Multi-pass motion detection enhancement
• Subjects leaving a location without possessions
Example: DetectLabels finds a "person" → IndexFaces
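A two-stage pipeline like the "person" example can be expressed as plain orchestration over the client, which makes it easy to swap in Step Functions or a Lambda chain later. In the sketch below, the stub stands in for boto3's Rekognition client, and its canned responses are invented.

```python
# Two-stage pipeline sketch: detect labels first, index faces only when
# a person is present. The client is injected so a stub can stand in
# for a real boto3 Rekognition client; its responses are invented.

def person_pipeline(client, image, collection_id):
    labels = client.detect_labels(Image=image)
    names = {label["Name"].lower() for label in labels.get("Labels", [])}
    if "person" not in names:
        return None  # nothing worth indexing
    return client.index_faces(CollectionId=collection_id, Image=image)

class StubRekognition:
    """Stands in for a Rekognition client in this sketch."""
    def detect_labels(self, Image):
        return {"Labels": [{"Name": "Person", "Confidence": 99.0}]}
    def index_faces(self, CollectionId, Image):
        return {"FaceRecords": [{"Face": {"FaceId": "face-123"}}]}
```

With a real client (`boto3.client("rekognition")`), the same function runs unchanged.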
55. Automating Footage Tagging with Amazon Rekognition
• Built in three weeks
• Indexed against 99,000 people
• Index created in one day
• Saved ~9,000 hours a year in manual curation costs
• Live video with frame sampling
Previously, only about half of all footage was indexed due to the immense time required by manual processes.
57. Visual Search
Open Influence is a market leader in the influencer marketing space and enables global brands and agencies to identify relevant influencers.
• Real-time visual search, powered by Rekognition, enables Open Influence to tag millions of social images accurately
• Using Rekognition allowed Open Influence to cut down the time it takes to source relevant influencers from 2–3 days to minutes
58. Metadata Tagging
Scripps Networks Interactive is a leading developer of engaging lifestyle content.
• Instead of manually tagging media assets, Rekognition enables Scripps Networks Interactive to save time and increase productivity with automated metadata tagging
60. Resources
Product information:
• Product page: https://aws.amazon.com/deeplens/
• Blog posts: https://aws.amazon.com/blogs/machine-learning/category/artificial-intelligence/aws-deeplens/
• Developer community projects: https://aws.amazon.com/deeplens/community-projects/
Help getting started with DeepLens:
10-minute tutorials: the step-by-step guides are available now. Note: on Friday 5/4 there will be video versions on YouTube for each of these that are easier to follow:
1. How to Configure Your New AWS DeepLens
https://aws.amazon.com/getting-started/tutorials/configure-aws-deeplens/
2. How to Create and Deploy a Deep Learning Project with AWS DeepLens
https://aws.amazon.com/getting-started/tutorials/create-deploy-project-deeplens/
3. How to Extend a Deep Learning Project with AWS DeepLens
https://aws.amazon.com/getting-started/tutorials/extend-deeplens-project/
4. How to Build an AWS DeepLens Project Using Amazon SageMaker
https://aws.amazon.com/getting-started/tutorials/build-deeplens-project-sagemaker/
General questions:
Check out the AWS DeepLens FAQs or the AWS DeepLens Developer Forum.