Make your startup's offering stand out in the market with AWS Machine Learning services

Amazon Web Services
Fausto Palma
AWS Solution Architect
© 2020, Amazon Web Services, Inc. or its Affiliates.
What is Machine Learning (ML)?
[Diagram: in traditional programming, a hand-written program (algorithm) maps input data to output. In machine learning, a training algorithm uses training data (X, Y) to turn an untrained model into a trained model (training); the trained model then maps new input data to output predictions (inference).]
© 2020, Amazon Web Services, Inc. or its Affiliates.
What is Machine Learning (ML)?
“The field of study that gives computers the ability to
learn without being explicitly programmed”
(Arthur Samuel)
Machine learning pioneer
[Diagram: three learning paradigms]
§ supervised: training data contains both inputs X and labels Y; training produces a trained model
§ unsupervised: training data contains only X
§ reinforcement learning: the model learns by interacting with an environment
© 2020, Amazon Web Services, Inc. or its Affiliates.
The reach of ML is growing
© 2020, Amazon Web Services, Inc. or its Affiliates.
Machine Learning applications (examples)

Problem type | Description | Example
Ranking | Helping users find the most relevant thing | Ranking algorithm within Amazon Search
Recommendation | Giving users the thing they may be most interested in | Recommendations across the website (Amazon's Choice)
Classification | Figuring out what kind of thing something is | Product classification for our catalog (High-Low Dress, Straight Dress, Striped Skirt, Graphic Shirt)
Regression | Predicting a numerical value of a thing | Predicting sales for specific ASINs (seasonality, out of stock, promotions)
Clustering | Putting similar things together | Close-matching for near-duplicates
Anomaly Detection | Finding uncommon things | Fruit freshness, before and after (Good, Damage, Serious Damage, Decay)
© 2020, Amazon Web Services, Inc. or its Affiliates.
The AWS ML Stack
Broadest and most complete set of Machine Learning capabilities

AI SERVICES (vision, speech, text, search, chatbots, personalization, forecasting, fraud, development, contact centers):
Amazon Rekognition, Amazon Polly, Amazon Transcribe (+Medical), Amazon Comprehend (+Medical), Amazon Translate, Amazon Textract, Amazon Kendra, Amazon Lex, Amazon Personalize, Amazon Forecast, Amazon Fraud Detector, Amazon CodeGuru, Contact Lens for Amazon Connect

ML SERVICES (Amazon SageMaker, with the SageMaker Studio IDE):
Ground Truth, ML Marketplace, Neo, Augmented AI, Built-in algorithms, Notebooks, Experiments, Model training & tuning, Debugger, Autopilot, Model hosting, Model Monitor

ML FRAMEWORKS & INFRASTRUCTURE:
Deep Learning AMIs & Containers, Deep Graph Library, GPUs & CPUs, Elastic Inference, Inferentia, FPGA
© 2020, Amazon Web Services, Inc. or its Affiliates.
Amazon Rekognition
Sample APIs and resources: Amazon Rekognition (+ Custom Labels)
Amazon Rekognition Image
CompareFaces
CreateCollection
DeleteCollection
DeleteFaces
DescribeCollection
DetectFaces
DetectLabels
DetectModerationLabels
DetectText
GetCelebrityInfo
IndexFaces
ListCollections
ListFaces
RecognizeCelebrities
SearchFaces
SearchFacesByImage
Amazon Rekognition Custom Labels
CreateProject
CreateProjectVersion
DescribeProjects
DescribeProjectVersions
DetectCustomLabels
StartProjectVersion
StopProjectVersion
Amazon Rekognition Video Stored Video
GetCelebrityRecognition
GetContentModeration
GetFaceDetection
GetFaceSearch
GetLabelDetection
GetPersonTracking
StartCelebrityRecognition
StartContentModeration
StartFaceDetection
StartFaceSearch
StartLabelDetection
StartPersonTracking
Amazon Rekognition Video Streaming Video
CreateStreamProcessor
DeleteStreamProcessor
DescribeStreamProcessor
ListStreamProcessors
StartStreamProcessor
StopStreamProcessor
aws rekognition detect-labels --image '{"S3Object":{"Bucket":"bucket","Name":"image"}}'
DetectLabels

Request:
{
    "Image": {
        "Bytes": blob,
        "S3Object": {
            "Bucket": "string",
            "Name": "string",
            "Version": "string"
        }
    },
    "MaxLabels": number,
    "MinConfidence": number
}

Response:
{
    "LabelModelVersion": "string",
    "Labels": [
        {
            "Confidence": number,
            "Instances": [
                {
                    "BoundingBox": {
                        "Height": number,
                        "Left": number,
                        "Top": number,
                        "Width": number
                    },
                    "Confidence": number
                }
            ],
            "Name": "string",
            "Parents": [
                { "Name": "string" }
            ]
        }
    ],
    "OrientationCorrection": "string"
}
Some sample calls…
Check the documentation:
https://docs.aws.amazon.com/rekognition/
AWS SDK available for:
C++, Go, Java, JavaScript,
.NET, Node.js, PHP,
Python, Ruby
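To make the request and response shapes above concrete, here is a minimal sketch of calling DetectLabels from Python with boto3; the bucket and object names are placeholders you would replace with your own.

import boto3

rekognition = boto3.client('rekognition')

# hypothetical bucket and key, shown only for illustration
response = rekognition.detect_labels(
    Image={'S3Object': {'Bucket': 'my-bucket', 'Name': 'my-image.jpg'}},
    MaxLabels=10,
    MinConfidence=75)

for label in response['Labels']:
    print(label['Name'], round(label['Confidence'], 1))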
© 2020, Amazon Web Services, Inc. or its Affiliates.
Simple example with Amazon Rekognition
[Architecture: a picture uploaded to S3 triggers a Lambda function, which calls Amazon Rekognition to analyze it and sends the result as an email through Amazon SNS.]
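A hedged sketch of the Lambda function in this flow: it reads the uploaded object from the S3 event, calls Amazon Rekognition, and publishes the detected labels to an SNS topic. The topic ARN is assumed to arrive in an environment variable named TOPIC_ARN, which is a placeholder for this example.

import json
import os
import boto3

rekognition = boto3.client('rekognition')
sns = boto3.client('sns')

def lambda_handler(event, context):
    # bucket and key of the uploaded picture, taken from the S3 event notification
    s3_info = event['Records'][0]['s3']
    bucket = s3_info['bucket']['name']
    key = s3_info['object']['key']

    labels = rekognition.detect_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        MaxLabels=10, MinConfidence=80)['Labels']

    # send the label names by email through the SNS topic subscription
    sns.publish(
        TopicArn=os.environ['TOPIC_ARN'],   # assumed environment variable
        Subject='Labels detected in ' + key,
        Message=json.dumps([l['Name'] for l in labels]))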
© 2020, Amazon Web Services, Inc. or its Affiliates.
Artificial Intelligence
[Architecture components:]
- S3: Web UI
- S3: Media storage
- Elasticsearch: Search index
- Amazon Rekognition Video: detect objects, scenes, faces, & celebrities
- AWS Elemental MediaConvert: transcode videos
- Transcribe, Comprehend, Lambda
- API Gateway: REST API (backed by Lambda)
- Step Functions: orchestrate analysis
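As a rough sketch of the Transcribe and Comprehend pieces of this pipeline (job names and URIs below are placeholders, and in the real solution Step Functions coordinates these calls):

import boto3

transcribe = boto3.client('transcribe')
comprehend = boto3.client('comprehend')

# start an asynchronous transcription of an uploaded media file (placeholder names)
transcribe.start_transcription_job(
    TranscriptionJobName='my-media-job',
    Media={'MediaFileUri': 's3://my-media-bucket/video.mp4'},
    MediaFormat='mp4',
    LanguageCode='en-US')

# once the transcript text is available, analyze it with Comprehend
transcript_text = '...'   # fetched from the transcription job output
entities = comprehend.detect_entities(Text=transcript_text, LanguageCode='en')
sentiment = comprehend.detect_sentiment(Text=transcript_text, LanguageCode='en')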
© 2020, Amazon Web Services, Inc. or its Affiliates.
The AWS ML Stack (recap)
Broadest and most complete set of Machine Learning capabilities: AI Services, ML Services (Amazon SageMaker), and ML Frameworks & Infrastructure, as listed on the full stack slide earlier.
© 2020, Amazon Web Services, Inc. or its Affiliates.
Use Amazon SageMaker Studio to update models
and see impact on model quality
© 2020, Amazon Web Services, Inc. or its Affiliates.
SageMaker training process
[Diagram of the SageMaker training flow: (1) calling model.fit() from the SageMaker notebook invokes the SageMaker service; (2) the service launches EC2 training instances with attached EBS volumes; (3) the training Docker container is pulled from the Elastic Container Registry; (4) the container runs on the instances; (5) the training data (X, Y) is read from the S3 bucket; (6) training produces the trained model; (7) the model is saved in S3.]
© 2020, Amazon Web Services, Inc. or its Affiliates.
Training process on SageMaker
1. 17 built-in algorithms: matrix factorization, regression, principal component analysis, k-means clustering, gradient boosted trees, and more (a minimal k-means sketch follows below)
2. Bring your own script (Amazon SageMaker managed container)
3. Bring your own container (you build the Docker container)
4. Subscribe to Algorithms and Model Packages on AWS Marketplace
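As an illustration of option 1, a minimal sketch of training one of the built-in algorithms (k-means) through the SageMaker Python SDK; the IAM role ARN, output path, and the random NumPy training matrix are assumptions made only for this example.

import numpy as np
from sagemaker import KMeans

# assumed IAM role and output location
role = 'arn:aws:iam::123456789012:role/MySageMakerRole'

kmeans = KMeans(role=role,
                instance_count=1,
                instance_type='ml.m5.xlarge',
                k=10,
                output_path='s3://my-bucket/kmeans-output/')

train_data = np.random.rand(1000, 16).astype('float32')   # placeholder training matrix
kmeans.fit(kmeans.record_set(train_data))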
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
1. Prepare your own script
- Configure script input parameters
- Build your model
- Setup data loaders and transformations
- Build your training loop
- Save the model
- Load the model for deploying
2. Create a sagemaker.pytorch.PyTorch estimator
3. Call the estimator’s fit method
4. Deploy an endpoint
5. Call the endpoint to get predictions
(Step 1 is written in the PyTorch script file; steps 2-5 run from the SageMaker notebook.)
https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#train-a-model-with-pytorch
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
import torch.nn as nn
import torch.nn.functional as F

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py#L118
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # two convolution + pooling stages followed by three fully connected layers
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)   # one output score per CIFAR-10 class

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)     # flatten the feature maps
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
1. Prepare your own script
Build your model
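A quick, optional sanity check (assuming the Net class above is defined): feed one CIFAR-10-sized image through the network and confirm it returns ten class scores.

import torch

net = Net()
scores = net(torch.randn(1, 3, 32, 32))   # one 32x32 RGB image
print(scores.shape)                       # torch.Size([1, 10])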
© 2020, Amazon Web Services, Inc. or its Affiliates.
Artificial Neuron
[Diagram: a single artificial neuron. Each input x1 … xn is multiplied by its weight w1 … wn; the products are summed together with a bias b, and the activation function σ is applied: y = σ(w1·x1 + w2·x2 + … + wn·xn + b).]
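The same computation written out as a minimal sketch in PyTorch; the input, weight, and bias values are arbitrary illustrations.

import torch

x = torch.tensor([0.2, 0.5, 0.1])       # inputs x1..xn
w = torch.tensor([0.4, -0.3, 0.8])      # weights w1..wn
b = torch.tensor(0.1)                   # bias

y = torch.sigmoid(torch.dot(w, x) + b)  # y = sigma(sum_i w_i * x_i + b)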
© 2020, Amazon Web Services, Inc. or its Affiliates.
Artificial NN
[Diagram: a feed-forward neural network. Inputs x1, x2, x3 feed layers of neurons, each computing a weighted sum ∑ followed by an activation σ, and the final layer produces outputs y1, y2, y3.]
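In PyTorch such a fully connected network can be written as a stack of linear layers and activations; a small sketch with arbitrary layer sizes (3 inputs, two hidden layers, 3 outputs), not taken from the slides:

import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(3, 16), nn.Sigmoid(),   # hidden layer 1: each unit computes a sum, then sigma
    nn.Linear(16, 16), nn.Sigmoid(),  # hidden layer 2
    nn.Linear(16, 3),                 # output layer: y1, y2, y3
)

y = mlp(torch.randn(1, 3))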
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
(The Net model code from the earlier "Build your model" slide is repeated on this slide.)
1. Prepare your own script
Build your model
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
import torch
import torchvision
import torchvision.transforms as transforms

# `args` holds the script's command-line arguments (parsed with argparse, as shown later)
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])   # scale images to [-1, 1]

trainset = torchvision.datasets.CIFAR10(root=args.data_dir, train=True,
                                        download=False, transform=transform)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=args.batch_size,
                                           shuffle=True, num_workers=args.workers)

testset = torchvision.datasets.CIFAR10(root=args.data_dir, train=False,
                                       download=False, transform=transform)
test_loader = torch.utils.data.DataLoader(testset, batch_size=args.batch_size,
                                          shuffle=False, num_workers=args.workers)
1. Prepare your own script
Setup data loaders and transformations
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
1. Prepare your own script
Build your training loop
# `Net`, `train_loader`, and `args` come from the earlier snippets; `device` is the torch.device selected elsewhere in the script
model = Net()
model = model.to(device)

criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

for epoch in range(0, args.epochs):
    running_loss = 0.0
    for i, data in enumerate(train_loader):
        # get the inputs
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
© 2020, Amazon Web Services, Inc. or its Affiliates.
Training loop
[Diagram: one training step. The network's weights (w11 … wmn) and biases (b1 … bm) map X training to Y prediction; the prediction differs (≠) from Y training, and the loss function L measures that difference. The gradient ∂L/∂w indicates in which direction and by how much to change each parameter to reduce L; every weight is updated by ∂L/∂w multiplied by the learning rate.]
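A tiny worked example of one such update step in PyTorch (one scalar parameter and a squared-error loss), just to make the arithmetic of ∂L/∂w and the learning rate concrete:

import torch

w = torch.tensor(2.0, requires_grad=True)       # a single trainable parameter
x, y_true = torch.tensor(3.0), torch.tensor(9.0)

y_pred = w * x                                  # prediction: 6.0
loss = (y_pred - y_true) ** 2                   # L = (6 - 9)^2 = 9
loss.backward()                                 # dL/dw = 2 * x * (w*x - y) = -18

learning_rate = 0.01
with torch.no_grad():
    w -= learning_rate * w.grad                 # w becomes 2.0 - 0.01 * (-18) = 2.18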
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
(The training loop code from the previous code slide is repeated here.)
1. Prepare your own script
Build your training loop
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
(Recap of the step list above; the next slides show how to configure the script input parameters and how to save and load the model.)
https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#train-a-model-with-pytorch
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=50)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)
    parser.add_argument('--use-cuda', type=bool, default=False)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--validation', type=str, default=os.environ['SM_CHANNEL_VALIDATION'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train, args.validation, and args.test, train a model, write the model to args.model_dir.
1. Prepare your own script
Configure script input parameters
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
import argparse
import os
import torch

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # default to the value in environment variable `SM_MODEL_DIR`. Using args makes the script more portable.
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    args, _ = parser.parse_known_args()

    # ... train `model`, then save it to `model_dir`
    with open(os.path.join(args.model_dir, 'model.pth'), 'wb') as f:
        torch.save(model.state_dict(), f)
1. Prepare your own script
Save the model
import os
import torch

def model_fn(model_dir):
    model = Your_Model()
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f))
    return model
Load the model for deploying
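Besides model_fn, the SageMaker PyTorch serving container also lets you optionally override how requests are handled. A hedged sketch of a predict_fn follows; the default handlers already cover common cases, so this is only needed for custom behavior.

import torch

def predict_fn(input_data, model):
    # input_data is the deserialized request (for example a tensor built by input_fn)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device).eval()
    with torch.no_grad():
        return model(input_data.to(device))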
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
from sagemaker.pytorch import PyTorch

pytorch_estimator = PyTorch(entry_point='pytorch-train.py',
                            instance_type='ml.p2.xlarge',
                            instance_count=1,
                            framework_version='1.5.0',
                            py_version='py3',
                            hyperparameters={'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})

2. Create a sagemaker.pytorch.PyTorch Estimator

3. Call the estimator's fit method

pytorch_estimator.fit({'train': 's3://my-data-bucket/path/to/my/training/data',
                       'validation': 's3://my-data-bucket/path/to/my/validation/data',
                       'test': 's3://my-data-bucket/path/to/my/test/data'})
SageMaker
notebook
© 2020, Amazon Web Services, Inc. or its Affiliates.
Bring your own script with PyTorch
# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = pytorch_estimator.deploy(instance_type='ml.m5.xlarge',
                                     initial_instance_count=1)
# `data` is a NumPy array or a Python list.
# `response` is a NumPy array.
response = predictor.predict(data)
4. Deploy an endpoint
5. Call the endpoint to get predictions
SageMaker
notebook
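Outside the notebook (for example from a backend service), the same endpoint can be called through the SageMaker runtime API. A sketch with a placeholder endpoint name, assuming the endpoint accepts and returns JSON:

import json
import boto3

runtime = boto3.client('sagemaker-runtime')

response = runtime.invoke_endpoint(
    EndpointName='my-pytorch-endpoint',        # placeholder name
    ContentType='application/json',
    Body=json.dumps([[0.1, 0.2, 0.3]]))

result = json.loads(response['Body'].read())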
Amazon SageMaker Deployment Options
[Diagram: a trained model can be deployed to Hosting Services (endpoints) or run through Batch Transform; compiled with Neo it can run anywhere, including devices such as AWS DeepRacer and AWS DeepLens; models can also be distributed through the AWS Marketplace.]
Deployment / Hosting
[Diagram: the trained model saved in S3 and the inference container from the Elastic Container Registry are deployed to Amazon SageMaker ML compute instances in an Auto Scaling group spanning Availability Zones 1-3, behind Elastic Load Balancing, as SageMaker Endpoints; the client sends input data (request) through Amazon API Gateway to the model endpoint and receives the prediction (response).]
© 2020, Amazon Web Services, Inc. or its Affiliates.
Train once and run anywhere with 2x performance
Amazon SageMaker Neo
Neo
Broad framework support
Broad hardware support
Open-source Neo-AI device runtime and compiler
1/10th the size of original frameworks
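A hedged sketch of compiling the trained estimator with Neo through the SageMaker Python SDK's compile_model call, reusing the pytorch_estimator from the earlier slides; the target instance family, input shape, and output path below are assumptions made for the example.

compiled_model = pytorch_estimator.compile_model(
    target_instance_family='ml_c5',
    input_shape={'input0': [1, 3, 32, 32]},     # one CIFAR-10-sized image
    output_path='s3://my-bucket/neo-output/',
    framework='pytorch',
    framework_version='1.5')

predictor = compiled_model.deploy(initial_instance_count=1,
                                  instance_type='ml.c5.xlarge')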
© 2020, Amazon Web Services, Inc. or its Affiliates.
The AWS ML Stack (recap)
Broadest and most complete set of Machine Learning capabilities: AI Services, ML Services (Amazon SageMaker), and ML Frameworks & Infrastructure, as listed on the full stack slide earlier.
© 2020, Amazon Web Services, Inc. or its Affiliates.
Wide selection of options for cost-effective inference
- CPU INSTANCES (C5): small models, low throughput
- ELASTIC INFERENCE (network attached inference accelerator, e.g. an eia1.medium attached to an M5 instance): mid-sized models, low-latency budget with tolerance limits
- GPU INSTANCES (P3, G4): large models, high throughput, and low-latency access to CUDA
- CUSTOM CHIP (Inf1): high throughput, high performance, and lowest cost in the cloud
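With the SageMaker SDK, these choices mostly come down to the instance type (and optional accelerator) passed at deploy time. A sketch reusing the earlier pytorch_estimator, with assumed instance and accelerator types:

# CPU instance with a network-attached Elastic Inference accelerator (assumed sizes)
predictor_ei = pytorch_estimator.deploy(initial_instance_count=1,
                                        instance_type='ml.m5.xlarge',
                                        accelerator_type='ml.eia1.medium')

# GPU instance for large models and low-latency inference
predictor_gpu = pytorch_estimator.deploy(initial_instance_count=1,
                                         instance_type='ml.g4dn.xlarge')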
© 2020, Amazon Web Services, Inc. or its Affiliates.
Inf1 instances are built from the ground up by AWS to provide high-performance, cost-effective inference.
https://aws.amazon.com/ec2/instance-types/inf1
- AWS Inferentia: custom built for ML inference
- AWS custom 2nd Gen Intel Xeon Scalable Processors
- AWS Nitro
- 100 Gbps networking
- High performance, low cost
© 2020, Amazon Web Services, Inc. or its Affiliates.
AWS Training & Certification
https://www.aws.training: Free on-demand courses to help you build new cloud skills
Curriculum: Exploring the Machine Learning Toolset
https://www.aws.training/Details/Curriculum?id=27155
Curriculum: Developing Machine Learning Applications
https://www.aws.training/Details/Curriculum?id=27243
Curriculum: Machine Learning Security
https://www.aws.training/Details/Curriculum?id=27273
Curriculum: Demystifying AI/ML/DL
https://www.aws.training/Details/Curriculum?id=27241
Video: AWS Foundations: Machine Learning Basics
https://www.aws.training/Details/Video?id=49644
Curriculum: Conversation Primer: Machine Learning Terminology
https://www.aws.training/Details/Curriculum?id=27270
For more info on AWS T&C visit: https://aws.amazon.com/it/training/
© 2020, Amazon Web Services, Inc. or its Affiliates.
Thanks!
Appendix – other useful links
Using PyTorch script documentation:
https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#train-a-model-with-pytorch
GitHub SageMaker examples:
https://github.com/aws/amazon-sagemaker-examples
GitHub example of using a PyTorch script:
https://github.com/aws/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk/pytorch_cnn_cifar10
Note: the SageMaker SDK was recently updated to version 2.0, and some examples are still written for the previous SDK. See this link for details:
https://sagemaker.readthedocs.io/en/stable/v2.html
You can temporarily downgrade with the following terminal command:
pip install sagemaker==1.72.0 -U
Upgrading to the latest SageMaker SDK can be done by executing:
pip install --upgrade sagemaker