This document provides an overview of machine learning capabilities on AWS. It begins with introductions to machine learning concepts and the benefits of performing machine learning in the cloud. It then describes various AWS machine learning services like Amazon SageMaker for building, training, and deploying models. The rest of the document explores Amazon SageMaker in more detail, demonstrating how to train models using built-in algorithms or custom containers and deploy them for inference.
AWS Machine Learning Week at the San Francisco Loft: Build Deep Learning Applications with TensorFlow and SageMaker
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon SageMaker.
MCL 322: Optimizing Training on Apache MXNet - Julien SIMON
Techniques and tips to optimize training on Apache MXNet
Infrastructure performance: storage and I/O, GPU throughput, distributed training, CPU-based training, cost
Model performance: data augmentation, initializers, optimizers, etc.
Level 400: you should be familiar with Deep Learning and MXNet
Deep Learning for Developers (December 2017) - Julien SIMON
Talk @ Code Europe, Poland, December 5th, 2017
- An introduction to Deep Learning
- An introduction to Apache MXNet
- Demos using Jupyter notebooks on Amazon SageMaker
- Resources
Optimizing training on Apache MXNet (January 2018) - Julien SIMON
This document provides tips and techniques for optimizing training on Apache MXNet. It discusses optimizing infrastructure performance through efficient data storage and distribution, maximizing GPU usage through techniques like batch size tuning, and optimizing model performance through techniques like data augmentation, learning rate scheduling, and optimizer selection. The document demonstrates these concepts through examples and recommends experimenting to find the best approach for each problem.
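The learning-rate scheduling mentioned above can be sketched as a simple step-decay rule; the function name and the default values below are illustrative, not taken from the deck.

```python
def step_decay(base_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Return the learning rate for a given epoch: multiply the base
    rate by `drop` once every `epochs_per_drop` epochs."""
    return base_lr * (drop ** (epoch // epochs_per_drop))

# A base rate of 0.1 halves to 0.05 at epoch 10 and to 0.025 at epoch 20.
schedule = [step_decay(0.1, e) for e in (0, 10, 20)]
```

In practice, MXNet ships built-in schedulers that are passed to the optimizer; the sketch only shows the underlying arithmetic.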
What is Deep Learning
Rise of Deep Learning
Phases of Deep Learning - Training and Inference
AI & Limitations of Deep Learning
Apache MXNet History, Apache MXNet concepts
How to use Apache MXNet and Spark together for Distributed Inference.
Amazon SageMaker is a new cloud service that quickly connects the data, algorithms, and frameworks needed for machine learning, making it easy to build ML applications. In this session, we run hands-on labs that execute several of the most commonly used algorithms using training data stored in Amazon S3. For this, we try out well-known open-source frameworks such as TensorFlow, Keras, Apache MXNet, and Gluon.
This document provides an overview of recurrent neural networks (RNNs) and long short-term memory (LSTM) networks. It discusses how RNNs can be used for sequence modeling tasks like sentiment analysis, machine translation, and speech recognition by incorporating context or memory from previous steps. LSTMs are presented as an improvement over basic RNNs that can learn long-term dependencies in sequences using forget gates, input gates, and output gates to control the flow of information through the network.
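The gating described above can be sketched as a single LSTM time step in NumPy; the stacked forget/input/candidate/output weight layout is one common convention, and all sizes here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W: (4*hidden, input+hidden), b: (4*hidden,).
    The four gate blocks are forget, input, candidate, output."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0*hidden:1*hidden])   # forget gate: what to drop from memory
    i = sigmoid(z[1*hidden:2*hidden])   # input gate: what to write
    g = np.tanh(z[2*hidden:3*hidden])   # candidate values
    o = sigmoid(z[3*hidden:4*hidden])   # output gate: what to expose
    c = f * c_prev + i * g              # new cell state (long-term memory)
    h = o * np.tanh(c)                  # new hidden state (short-term output)
    return h, c

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
h, c = lstm_step(x, np.zeros(4), np.zeros(4),
                 rng.standard_normal((16, 7)) * 0.1, np.zeros(16))
```

Because the hidden state is an output-gated tanh of the cell state, its entries always stay strictly inside (-1, 1).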
This document discusses how Amazon SageMaker can be used to train machine learning models on large datasets using hosted Jupyter notebooks. It notes that DigitalGlobe plans to use SageMaker to train models on petabytes of Earth observation imagery so that users can create and deploy models within one scalable environment. The document also quotes the CTO of Maxar Technologies saying they will use SageMaker to build and deploy novel AI algorithms at scale to solve complex problems.
Build, Train & Deploy Your ML Application on Amazon SageMaker - Amazon Web Services
This session covers a step-by-step walkthrough of a typical Machine Learning (ML) process: from asking the right questions, collecting the data, and looking at the data, to picking the right algorithms, training and evaluating ML models with Amazon SageMaker, and bringing them live into production. A series of hands-on demos illustrates these steps so that you can start building your first Machine Learning application right after this session.
Build, train, and deploy Machine Learning models at scale (May 2018) - Julien SIMON
This document discusses Amazon SageMaker, a fully managed service that allows users to build, train, and deploy machine learning models at scale. It provides pre-built algorithms and frameworks to simplify the ML workflow. Models can be trained on ML-optimized Amazon EC2 instances such as P3 (GPU) and C5 (latest-generation CPU) instances. SageMaker also allows users to bring their own custom Docker containers. It was used by DigitalGlobe to extract information from satellite imagery using ML models. Demos of SageMaker's capabilities were shown.
Build, train, and deploy Machine Learning models at scale (May 2018) - Julien SIMON
The document discusses Amazon SageMaker, a fully managed service that allows users to build, train and deploy machine learning models at scale. It provides pre-built algorithms and frameworks, managed hosting, one-click deployment and hyperparameter tuning capabilities. It also supports bringing your own custom algorithms by allowing users to run their own Docker containers. The document highlights how SageMaker simplifies and automates ML workflows and provides examples of customers using it at scale for image and data analysis.
Deep Learning for Developers (expanded version, 12/2017) - Julien SIMON
This document provides an introduction to deep learning and neural networks. It discusses key concepts like artificial intelligence, machine learning, deep learning, and common network architectures. It also introduces Apache MXNet and demos using Jupyter notebooks on Amazon SageMaker. Deep learning uses neural networks to teach machines from complex data without explicit programming. Common network types discussed are convolutional neural networks, long short-term memory networks, and generative adversarial networks. MXNet is an open-source deep learning library for building models and scaling them across GPUs. Demos show classifying images and generating digits with neural networks.
Find out more about:
• Techniques and tips to optimize training on Apache MXNet
• Infrastructure performance: storage and I/O, GPU throughput, distributed training, CPU-based training, cost
• Model performance: data augmentation, initializers, optimizers, etc.
• Level 400: you should be familiar with Deep Learning and MXNet
This document provides an overview of Apache MXNet, an open-source library for deep learning. It discusses MXNet's capabilities such as high performance scaling across GPUs, support for mobile and IoT models, and multiple language syntax. It also demonstrates MXNet through Jupyter notebooks on MNIST data and introduces Gluon, a high-level API for MXNet. Resources for learning more about MXNet, deep learning on AWS, and the presenter's blog are provided.
Deep Learning with Apache MXNet (September 2017) - Julien SIMON
The document provides an overview of deep learning with Apache MXNet. It discusses key concepts like neural networks, training processes, convolutional neural networks, and Apache MXNet. It also outlines examples of using MXNet, including building a first network, using pre-trained models, learning from scratch on datasets like MNIST, and fine-tuning models on CIFAR-10. The document concludes by mentioning an example of using MXNet with Keras and a quick reference to Sockeye.
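A "first network" for MNIST-style digits typically starts with a forward pass like the following; this is a generic single-layer softmax classifier in NumPy, not code from the deck, with 28x28 inputs assumed.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W, b):
    """Single-layer classifier: flatten each 28x28 image into 784 inputs,
    project to 10 class scores, normalize the scores into probabilities."""
    return softmax(x.reshape(-1, 784) @ W + b)

rng = np.random.default_rng(0)
probs = forward(rng.random((2, 28, 28)),            # batch of 2 fake images
                rng.standard_normal((784, 10)) * 0.01,
                np.zeros(10))
```

Each row of `probs` is a distribution over the 10 digit classes, which is what a cross-entropy training loop would consume.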
Best Practices for Running Amazon EC2 Spot Instances with Amazon EMR - AWS On... - Amazon Web Services
Learning Objectives:
- Learn how to run Amazon EMR clusters on Spot instances and significantly reduce the cost of processing vast amounts of data on managed Hadoop clusters
- Understand key EC2 Spot Instances concepts and common usage patterns for maximum scale and cost optimization for Big Data workloads
- See a few customer examples that show how to leverage the full scale of the AWS cloud for faster results
AWS Webcast - Amazon Elastic Map Reduce Deep Dive and Best Practices - Amazon Web Services
Amazon Elastic MapReduce (EMR) is one of the largest Hadoop operators in the world. Since its launch five years ago, our customers have launched more than 15 million Hadoop clusters inside of EMR. In this webinar, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters and other Amazon EMR architectural patterns. We talk about how to scale your cluster up or down dynamically and introduce you to ways you can fine-tune your cluster. We also share best practices to keep your Amazon EMR cluster cost efficient.
MCL333: Building Deep Learning Applications with TensorFlow on AWS - Amazon Web Services
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. One of the key reasons for this progress is the availability of highly flexible and developer friendly deep learning frameworks. In this workshop, we provide an overview of deep learning, focusing on getting started with the TensorFlow framework on AWS.
Apache Hadoop and Spark on AWS: Getting started with Amazon EMR - Pop-up Loft... - Amazon Web Services
Amazon EMR is a managed service that makes it easy for customers to use big data frameworks and applications like Apache Hadoop, Spark, and Presto to analyze data stored in HDFS or on Amazon S3, Amazon’s highly scalable object storage service. In this session, we will introduce Amazon EMR and the greater Apache Hadoop ecosystem, and show how customers use them to implement and scale common big data use cases such as batch analytics, real-time data processing, interactive data science, and more. Then, we will walk through a demo to show how you can start processing your data at scale within minutes.
AWS re:Invent 2018 - AIM401 - Deep Learning using TensorFlow - Julien SIMON
The document provides an overview of Amazon SageMaker and TensorFlow. It discusses how TensorFlow can be used on Amazon SageMaker for deep learning applications. Key points include:
- Amazon SageMaker is a fully managed service that allows users to build, train and deploy machine learning models.
- TensorFlow is an open-source machine learning framework that can be used with Amazon SageMaker.
- Amazon SageMaker supports TensorFlow out of the box with optimized containers for training and inference. This allows users to easily build and deploy TensorFlow models on SageMaker.
AWS Summit London 2014 | From One to Many - Evolving VPC Design (400) - Amazon Web Services
In this advanced technical session, you will learn how to use AWS to build and deploy virtual data centres as fast as you can design them. Learn how to combine CloudFormation templates with best-practice techniques in use by AWS customers today to optimise the design and implementation of your VPCs.
An overview of Amazon EMR and its benefits for a wide variety of use cases, and how to get started alongside Apache Zeppelin for interactive data analytics and document collaboration.
Exploring How to Train Large Models with Amazon SageMaker - Daekeun Kim, AWS AI/ML Specialist Solutions Architect / Youngjoon Choi... - Amazon Web Services Korea
To train large deep learning models, Amazon SageMaker provides new distributed training features and a fast distributed training environment. In particular, by adding just a few lines to existing TensorFlow/PyTorch code, you can easily migrate to the Amazon SageMaker environment and shorten training time. It also surfaces resource utilization through its monitoring features, which can be used to optimize training speed. We walk through the benefits of Amazon SageMaker distributed training in detail with example code and demos.
Best Practices for Managing Hadoop Framework Based Workloads (on Amazon EMR) ... - Amazon Web Services
Learning Objectives:
- Learn how to use Amazon EMR for easy, fast, and cost-effective processing of vast amounts of data across dynamically scalable Amazon EC2 instances.
- Learn how using EC2 Spot can significantly reduce the cost of running your clusters.
- Learn how Amazon EMR Instance Fleets can make it easier to quickly obtain and maintain your desired capacity for your clusters.
Nearly 1,000 takeaways ordered per minute by hungry consumers, near-real-time confirmation from restaurants, and delivery of the food just 45 minutes later: that is a hard technical challenge.
AWS allows the many small engineering teams at JUST EAT to take responsibility for meeting that challenge as they build and operate a platform that delivers a takeaway experience for consumers to love.
Learn how we migrated our e-commerce platform to AWS and organise both our platform and teams around the twin goals of rapid change and high availability. Watch as, during the session, we deploy changes and break things live in production, and see how the JUST EAT platform is designed around AWS to recover quickly and automatically.
Amazon SageMaker is a fully managed machine learning service which facilitates seamless adoption of machine learning across various industries. Jayesh walks us through the details of SageMaker, with a demo, in this talk!
Building, Training and Deploying Custom Algorithms with Amazon SageMaker - Amazon Web Services
This document discusses how to build, train, and deploy custom machine learning algorithms using Amazon SageMaker. It provides an overview of the key SageMaker services: Notebook Instances for exploratory data analysis and model building; built-in and custom algorithms; the ML Training service for training models; and the ML Hosting service for deploying models. It then walks through an example of using SageMaker with the fast.ai library for deep learning. Resources are also provided for learning more about fast.ai and accessing the demo source code.
This document summarizes a presentation about deploying custom machine learning models using Amazon SageMaker. It discusses:
1. An overview of machine learning, Amazon SageMaker, and how SageMaker works to build, train, test, tune, and deploy models.
2. The process for deploying a fully custom ML model with SageMaker, including building the model, defining inference code, creating a SageMaker container, and deploying the model as an endpoint.
3. A demo of this process showing how to create a model, endpoint configuration, and endpoint to deploy a custom model and invoke it via an API.
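SageMaker's framework containers express the inference code from step 2 as a small set of conventional functions (model_fn, input_fn, predict_fn, output_fn). The sketch below wires them together with a stub model that simply doubles its inputs, assuming JSON requests and responses; the stub is hypothetical, while the four-function convention and the /opt/ml/model mount point are SageMaker's.

```python
import json

def model_fn(model_dir):
    """Load the model artifact; here a stub that doubles each feature."""
    return lambda features: [2 * v for v in features]

def input_fn(request_body, content_type="application/json"):
    """Deserialize the request payload into model features."""
    assert content_type == "application/json"
    return json.loads(request_body)["features"]

def predict_fn(features, model):
    """Run the model on the deserialized features."""
    return model(features)

def output_fn(prediction, accept="application/json"):
    """Serialize the prediction into the response body."""
    return json.dumps({"prediction": prediction})

# SageMaker mounts the model artifact under /opt/ml/model inside the container.
model = model_fn("/opt/ml/model")
body = output_fn(predict_fn(input_fn('{"features": [1, 2, 3]}'), model))
```

In a fully custom container, as in the demo, the same logic sits behind the container's /invocations HTTP route rather than these named hooks.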
Setting up custom machine learning environments on AWS - AIM204 - Chicago AWS... - Amazon Web Services
Sometimes, you might need to set up your own deep learning environments for domain-specific performance optimization and integration with custom applications. AWS offers prepackaged, optimized Amazon Machine Images (AMIs) and Docker container images that make it easy to quickly deploy these custom environments by letting you skip the complicated process of building and optimizing your environments from scratch. In this session, you learn about how to use AWS Deep Learning AMIs and AWS Deep Learning Containers to create custom machine learning environments with TensorFlow and Apache MXNet frameworks.
This slide deck gives an overview of the Azure Machine Learning service. It highlights the benefits of the Azure Machine Learning workspace, Automated Machine Learning, and integration with notebook scripts.
AWS re:Invent 2018 - ENT321 - SageMaker Workshop - Julien SIMON
This document provides an overview of Amazon SageMaker, an AWS service for building, training, and deploying machine learning models. The agenda includes loading data from S3, training and deploying models using built-in algorithms and Automatic Model Tuning, running predictions, and an introduction to using deep learning with SageMaker. Labs will cover using XGBoost, hyperparameter tuning, batch predictions, and TensorFlow. Related sessions at the conference include a talk on deep learning applications with TensorFlow.
Machine Learning can seem harder than it is because the process of developing, training, and deploying models in production is too complicated and slow. Amazon SageMaker is a fully managed service that enables developers and data scientists to design, implement, and deploy Machine Learning models at any scale. Amazon SageMaker offers a choice of high-performance machine learning algorithms and preconfigured frameworks such as Apache MXNet, TensorFlow, PyTorch, and Chainer; alternative frameworks or algorithms can also be used via Docker containers. In this session we dive deep into the use of Amazon SageMaker, including some practical examples.
Train ML Models Using Amazon SageMaker with TensorFlow - SRV336 - Chicago AWS... - Amazon Web Services
Amazon SageMaker is a fully managed platform that enables developers and data scientists to build, train, and deploy machine learning (ML) models in production applications easily and at scale. In this chalk talk, we dive deep into training an ML model based on the TensorFlow framework. We discuss the specifics of training a model through Amazon SageMaker by taking an algorithm and running it on a training cluster in an auto-scaling group. This session showcases the scalability of training that is possible with Amazon SageMaker, which reduces the time and cost of training runs.
Build, Train, and Deploy Machine Learning for the Enterprise with Amazon Sage... - Amazon Web Services
The document provides an overview of Amazon SageMaker, a fully managed platform for building, training, and deploying machine learning models. It discusses how SageMaker allows users to load and prepare training data, choose from built-in algorithms or custom frameworks, train and tune models, and deploy models for production. The agenda then outlines labs that will cover loading data from S3, training and deploying with built-in algorithms like XGBoost, tuning hyperparameters, and making predictions from deployed models.
I want my model to be deployed! (another story of MLOps) - AZUG FR
Speaker: Paul Peton
Putting machine learning into production remains a challenge even though the algorithms have been around for a very long time. Here are some common blockers:
– the choice of programming language
– the difficulty of scaling
– fear of black boxes on the part of users
Azure Machine Learning is a new service that allows you to control the deployment steps on the appropriate resources (Web App, ACI, AKS) and, especially, to automate the whole process thanks to the Python SDK.
End-to-End Machine Learning with Amazon SageMakerSungmin Kim
Sungmin Kim, an AWS Solutions Architect, discusses Amazon SageMaker for end-to-end machine learning. SageMaker provides a fully managed service for building, training, and deploying machine learning models in the cloud. It offers tools for labeling data, running automated machine learning, training models with built-in algorithms or custom code, tuning hyperparameters, and deploying models for inference through endpoints. SageMaker aims to make machine learning more accessible and productive for developers through its integrated development environment called Amazon SageMaker Studio.
Using Amazon SageMaker to build, train, and deploy your ML ModelsAmazon Web Services
by Neel Mitra, Solutions Architect, AWS
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
Automated machine learning (automated ML) automates feature engineering, algorithm and hyperparameter selection to find the best model for your data. The mission: Enable automated building of machine learning with the goal of accelerating, democratizing and scaling AI.
This presentation covers some recent announcements of technologies related to Automated ML, and especially for Azure. The demonstrations focus on Python with Azure ML Service and Azure Databricks.
This presentation is the fourth of four related to ML.NET and Automated ML. The presentation will be recorded with video posted to this YouTube Channel: http://bit.ly/2ZybKwI
The document provides information about an experienced machine learning solutions architect. It includes details about their experience and qualifications, including 12 AWS certifications and over 6 years of AWS experience. It also discusses their vision for MLOps and experience producing machine learning models at scale. Their role at Inawisdom as a principal solutions architect and head of practice is mentioned.
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and overall cost of building production machine learning systems.
Level: 200-300
Speaker: Randall Hunt - Sr. Technical Evangelist, AWS
Building Machine Learning Inference Pipelines at Scale (July 2019)Julien SIMON
Talk at OSCON, Portland, 18/07/2019
Real-life Machine Learning applications require more than a single model. Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc. Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker)
AI Stack on AWS: Amazon SageMaker and BeyondProvectus
Looking to learn more about AWS AI stack? Join experts from Provectus & AWS to find out how to use Amazon SageMaker (with combination with other tools and services) to enable enterprise-wide AI.
Companies are looking to scale and become more productive when it comes to AI and data initiatives. They seek to launch AI projects more rapidly, which, among many other factors, requires a robust machine learning infrastructure. In this webinar, you will learn how to create a canonical SageMaker workflow, expand the SageMaker workflow to a holistic implementation, enhance and expand the implementation using best practices for feature store, data versioning, ML pipeline orchestration, and model monitoring.
Agenda
- Introductions
- Amazon SageMaker Overview
- Real-World Use Case
- Data Lake for Machine Learning
- Amazon SageMaker Experiments
- Orchestration Beyond SageMaker Experiments
- Amazon SageMaker Debugger
- Amazon SageMaker Model Monitor
- Webinar Takeaways
Intended audience
Technology executives & decision makers, manager-level tech roles, data engineers & data scientists, ML practitioners & ML engineers, and developers
Presenters
- Stepan Pushkarev, Chief Technology Officer, Provectus
- Pritpal Sahota, Technical Account Manager, Provectus
- Christopher A. Burns, Sr. AI/ML Solution Architect, AWS
Feel free to share this presentation with your colleagues and don't hesitate to reach out to us at info@provectus.com if you have any questions!
REQUEST WEBINAR: https://provectus.com/ai-stack-on-aws-sagemaker-and-beyond-mar-2020/
In this workshop, we introduce you to the open source deep learning framework Apache MXNet, and show you how to install it on a Raspberry Pi. Then, using a camera and a pre-trained object detection model, we show real-life objects to the Pi and listen to what it thinks the objects are, thanks to the text-to-speech capabilities of Amazon Polly. To participate in this workshop, attendees will need to bring a laptop and headphones.
2. Jayesh Bapu Ahire
➢ Organizer, Twilio India Community, AWS UG Pune, Elasticsearch UG Pune, Alexa UG Nashik
➢ Research Assistant, Stanford AI Lab
➢ Research Associate, Tsinghua AI Lab & ETH Research
➢ Author, Blogger, Speaker, Student, Poet
7. Data Preprocessing → Select Algo & Framework → Train & Tune Model → Integrate & Deploy
8. Machine Learning in Cloud
● The cloud's pay-per-use model
● Easy for enterprises to experiment, scale, and go into production
● Intelligent capabilities accessible without requiring advanced skills in AI
● Doesn't require deep knowledge of AI, machine learning theory, or a team of data scientists
9. AI & ML capabilities of AWS
● ML Frameworks + Infrastructure: frameworks + interfaces + infrastructure
● ML Services: Amazon SageMaker (Build + Train + Deploy)
● AI Services: Personalize, Forecast, Rekognition, Comprehend, Textract, Polly, Lex, Translate, Transcribe
11. Reduce Complexity: Fully Managed, Quick Test, Pre-optimized Algorithms, Bring Your Own Algorithm, Distributed Training
12. Build, Train, Deploy
Build:
● Collect & prepare training data: data labelling & pre-built notebooks for common problems
● Choose & optimize your ML algorithm: built-in, high-performance algorithms and hundreds of ready-to-use algorithms in AWS Marketplace
Train:
● Set up & manage environments for training: one-click training using Amazon EC2 On-Demand or Spot instances
● Train & tune model: train once, run anywhere & model optimization
Deploy:
● Deploy model in production: one-click deployment
● Scale & manage the production environment: fully managed with auto-scaling for 75% less
13. Machine Learning end-to-end pipeline using Amazon SageMaker
1. Build: pre-built algorithms & notebooks; data labeling with Ground Truth; AWS Marketplace for ML
2. Train: one-click model training and tuning; SageMaker Neo; SageMaker RL
3. Deploy: one-click deployment and hosting
19. Amazon SageMaker: Open Source Containers
● Customize them
● Run them locally for development and testing
● Run them on SageMaker for training and prediction at scale
https://github.com/aws/sagemaker-tensorflow-containers
https://github.com/aws/sagemaker-mxnet-containers
20. Amazon SageMaker: Bring Your Own Container
● Prepare the training code in a Docker container
● Upload the container image to Amazon Elastic Container Registry (ECR)
● Upload the training dataset to Amazon S3/FSx/EFS
● Invoke the CreateTrainingJob API to execute a SageMaker training job
The SageMaker training job pulls the container image from Amazon ECR, reads the training data from the data source, configures the training job with hyperparameter inputs, trains a model, and saves the model to model_dir so that it can be deployed for inference later.
https://github.com/aws/sagemaker-container-support
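The steps above rely on SageMaker's container contract: hyperparameters and input channels are mounted under /opt/ml inside the container, and anything written to /opt/ml/model is uploaded as the model artifact. A minimal sketch of a container entry point (the base path is parameterized here only so the sketch can run outside a container; real training jobs always use /opt/ml):

```python
import json
from pathlib import Path

# SageMaker's container contract (paths as mounted inside the container):
#   /opt/ml/input/config/hyperparameters.json  - hyperparameters, as strings
#   /opt/ml/input/data/<channel>/              - training data, per channel
#   /opt/ml/model/                             - write model artifacts here

def train(base="/opt/ml"):
    base = Path(base)
    # Hyperparameters arrive as a JSON object whose values are all strings
    hp_file = base / "input" / "config" / "hyperparameters.json"
    hps = json.loads(hp_file.read_text()) if hp_file.exists() else {}
    epochs = int(hps.get("epochs", "1"))

    # Read every file in the 'training' channel
    data_dir = base / "input" / "data" / "training"
    records = []
    for f in sorted(data_dir.glob("*")):
        records.extend(f.read_text().splitlines())

    # ... fit a model here; this sketch just records what it saw ...
    model = {"epochs": epochs, "num_records": len(records)}

    # Artifacts saved under /opt/ml/model are uploaded to S3 by SageMaker
    out = base / "model"
    out.mkdir(parents=True, exist_ok=True)
    (out / "model.json").write_text(json.dumps(model))
    return model
```

The same contract is what lets you run the container locally for testing before pushing it to ECR.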
21. Distributed Training At Scale on Amazon SageMaker
● Training on Amazon SageMaker can automatically distribute processing across a number of nodes, including P3 instances
● You can choose from two data distribution types for training ML models:
○ Fully Replicated: this will pass every file in the input to every machine
○ Sharded S3 Key: this will separate and distribute the files in the input across the training nodes
Overall, sharding can run faster, but it depends on the algorithm.
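The two distribution types can be sketched in a few lines of plain Python (a conceptual illustration, not SageMaker SDK code; the round-robin split below merely stands in for however SageMaker assigns S3 keys to nodes):

```python
def distribute(s3_keys, num_nodes, distribution="FullyReplicated"):
    """Return the list of input files each training node would receive."""
    if distribution == "FullyReplicated":
        # Every node downloads the full data set
        return [list(s3_keys) for _ in range(num_nodes)]
    if distribution == "ShardedByS3Key":
        # Files are split across nodes, so each node sees ~1/num_nodes
        # of the data and downloads (and processes) far less
        return [s3_keys[i::num_nodes] for i in range(num_nodes)]
    raise ValueError(distribution)

keys = [f"s3://bucket/train/part-{i:03d}" for i in range(6)]
replicated = distribute(keys, 2)                  # 6 files on each node
sharded = distribute(keys, 2, "ShardedByS3Key")   # 3 files on each node
```

This is why sharding can be faster: each node reads a fraction of the data, but the algorithm must be able to learn from partial data on each node.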
22. Amazon SageMaker: Local Mode Training
Enabling experimentation speed
● Train with local notebooks
● Train on notebook instances
● Iterate faster on a small sample of the dataset locally, with no waiting for a new training cluster to be built each time
● Emulate CPU (single and multi-instance) and GPU (single instance) in local mode
● Go distributed with a single line of code
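In the SageMaker Python SDK, local mode is selected simply by the instance type. A sketch of the idea, shown as plain keyword arguments rather than a live Estimator call (parameter names vary between SDK versions, and a real call needs AWS credentials):

```python
def estimator_args(local=False, instance_count=1):
    """Build illustrative keyword arguments for a SageMaker Estimator.

    In the SageMaker Python SDK, the 'local' (or 'local_gpu') instance type
    runs training in Docker on the notebook machine itself, while any
    'ml.*' type launches a managed training cluster. Going distributed is
    just a matter of raising the instance count: one line of code.
    """
    return {
        "instance_type": "local" if local else "ml.p3.2xlarge",
        "instance_count": instance_count,
    }

dev = estimator_args(local=True)          # fast iteration on a data sample
prod = estimator_args(instance_count=4)   # distributed managed training
```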
23. Automatic Model Tuning on Amazon SageMaker
Hyperparameter Optimization
● Amazon SageMaker automatic model tuning predicts the hyperparameter values likely to be most effective at improving fit.
● Automatic model tuning can be used with the Amazon SageMaker:
○ Built-in algorithms
○ Pre-built deep learning frameworks
○ Bring-your-own-algorithm containers
https://github.com/awslabs/amazon-sagemaker-examples/tree/master/hyperparameter_tuning
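Conceptually, the tuner samples candidate hyperparameter values from a range and keeps the best-scoring training job. A toy random-search sketch in plain Python (SageMaker actually uses a smarter Bayesian search, and the quadratic objective below is only a stand-in for a real validation metric):

```python
import random

def objective(learning_rate):
    # Stand-in for a validation metric; it peaks at learning_rate = 0.1
    return -(learning_rate - 0.1) ** 2

def tune(metric_fn, lo, hi, max_jobs=20, seed=0):
    """Sample max_jobs candidate values from [lo, hi] and keep the best."""
    rng = random.Random(seed)
    candidates = [rng.uniform(lo, hi) for _ in range(max_jobs)]
    return max(candidates, key=metric_fn)

best_lr = tune(objective, lo=0.0001, hi=1.0)
```

In SageMaker you instead declare the ranges and the objective metric, and the service runs the training jobs for you.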
24. Amazon SageMaker: Accelerating ML Training
Faster start times and training job execution time
● Two modes: File Mode and Pipe Mode (the input mode parameter in sagemaker.estimator.Estimator)
● File Mode: S3 data source or file system data source
○ When using S3 as the data source, the training data set is downloaded to EBS volumes
○ Use a file system data source (Amazon EFS or Amazon FSx for Lustre) for faster startup and execution time
○ Different data formats supported: CSV, protobuf, JSON, libsvm (check the algo docs!)
● Pipe Mode streams the data set to the training instances
○ This allows you to process large data sets, and training starts faster
○ The dataset must be in recordio-encoded protobuf or CSV format
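The practical difference between the modes can be illustrated with plain Python file handling (conceptual only): File Mode materializes the whole dataset before training starts, while Pipe Mode streams records so training can begin immediately and the full dataset never has to fit on the volume.

```python
def file_mode(path):
    """Stage everything first, then hand it all over (like File Mode)."""
    with open(path) as f:
        return f.readlines()          # whole dataset held at once

def pipe_mode(path):
    """Yield records as they arrive (like Pipe Mode): the consumer can
    start working on the first record before the last one is read."""
    with open(path) as f:
        for line in f:
            yield line
```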
25. Amazon SageMaker: Fully-Managed Spot Training
Reduce training costs at scale
● Use Managed Spot Training on SageMaker to reduce training costs by up to 90%
● Managed Spot Training is available in all training configurations:
○ All instance types supported by Amazon SageMaker
○ All models: built-in algorithms, built-in frameworks, and custom models
○ All configurations: single instance training, distributed training, and automatic model tuning
● Setting it up is extremely simple:
○ If you're using the console, just switch the feature on.
○ If you're working with the Amazon SageMaker SDK, just set train_use_spot_instances to true in the Estimator constructor.
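The "up to 90%" figure is simple arithmetic over what you are billed: it corresponds to spot capacity priced at one tenth of on-demand. An illustrative calculation (the prices below are hypothetical; real spot discounts vary by instance type, region, and time):

```python
def spot_savings_pct(seconds, on_demand_per_hour, spot_per_hour):
    """Percentage saved by running the same billable seconds on spot capacity.

    Illustrative arithmetic only: real jobs may also be interrupted and
    resumed, which managed spot training handles for you via checkpoints.
    """
    on_demand_cost = seconds / 3600 * on_demand_per_hour
    spot_cost = seconds / 3600 * spot_per_hour
    return round((1 - spot_cost / on_demand_cost) * 100, 1)

# A spot price of one tenth the on-demand price gives the 90% figure
savings = spot_savings_pct(seconds=7200,
                           on_demand_per_hour=3.06,   # hypothetical price
                           spot_per_hour=0.306)       # hypothetical price
```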
26. Amazon SageMaker: Secure Machine Learning
● No retention of customer data
● SageMaker provides encryption in transit
● Encryption at rest everywhere
● Compute isolation: instances allocated for computation are never shared with others
● Network isolation: all compute instances run inside private service-managed VPCs
● Secure, fully managed infrastructure: Amazon SageMaker takes care of patching and keeping instances up to date
● Notebook security: Jupyter notebooks can be operated without internet access and bound to secure customer VPCs
27. Amazon SageMaker Training: Getting Started
To train a model in Amazon SageMaker, you will need the following:
● A dataset. Here we will use the MNIST (Modified National Institute of Standards and
Technology database) dataset. This dataset provides a training set of 50,000 example
images of handwritten single-digit numbers, a validation set of 10,000 images, and a test
dataset of 10,000 images.
● An algorithm. Here we will use the Linear Learner algorithm provided by Amazon SageMaker
● An Amazon Simple Storage Service (Amazon S3) bucket to store the training data and the
model artifacts
● An Amazon SageMaker notebook instance to prepare and process data and to train and
deploy a machine learning model.
● A Jupyter notebook to use with the notebook instance
● For model training, deployment, and validation, I will use the high-level Amazon
SageMaker Python SDK
28. Amazon SageMaker Training: Getting Started
● Create the S3 bucket
● Create an Amazon SageMaker Notebook instance by going here:
https://console.aws.amazon.com/sagemaker/
● Choose Notebook instances, then choose Create notebook instance.
● On the Create notebook instance page, provide the Notebook instance name and
choose ml.t2.medium for the instance type (the least expensive one). For IAM role,
choose Create a new role, then choose Create role.
● Choose Create notebook instance.
In a few minutes, Amazon SageMaker launches an ML compute instance
and attaches an ML storage volume to it. The notebook instance has a
preconfigured Jupyter notebook server and a set of Anaconda libraries.
29. How To Train a Model With Amazon SageMaker
To train a model in Amazon SageMaker, you create a training job. The training job
includes the following information:
● The URL of the Amazon Simple Storage Service (Amazon S3) bucket or the file
system ID of the file system where you've stored the training data.
● The compute resources that you want Amazon SageMaker to use for model
training. Compute resources are ML compute instances that are managed by
Amazon SageMaker.
● The URL of the S3 bucket where you want to store the output of the job.
● The Amazon Elastic Container Registry path where the training code is stored.
30. Linear Learner with MNIST dataset example
● Provide the S3 bucket and prefix that you want to use for training and model
artifacts. These should be in the same Region as the notebook instance,
training, and hosting
● The IAM role ARN used to give training and hosting access to your data
● Download the MNIST dataset
● The Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped
protobuf, whereas the data we have is a pickled NumPy array on disk.
● This data conversion will be handled by the Amazon SageMaker Python SDK,
imported as sagemaker
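A minimal sketch of that conversion, assuming train_set holds the (images, labels) tuple loaded from the pickled MNIST file, and using a placeholder bucket and prefix:

```python
import io

import boto3
import numpy as np
import sagemaker.amazon.common as smac

# Assumes train_set = (images, labels) from the pickled MNIST file.
vectors = np.array(train_set[0]).astype("float32")
labels = np.array(train_set[1]).astype("float32")

# Serialize to recordIO-wrapped protobuf in memory.
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, vectors, labels)
buf.seek(0)

# Upload to S3 (placeholder bucket and key).
boto3.resource("s3").Bucket("my-bucket").Object(
    "linear-mnist/train/recordio-pb-data"
).upload_fileobj(buf)
```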
31. Train the model
Create and Run a Training Job with Amazon SageMaker Python SDK
● To train a model in Amazon SageMaker, you can use
○ Amazon SageMaker Python SDK or
○ AWS SDK for Python (Boto 3) or
○ AWS console
● For this exercise, I will use the notebook instance and the Python SDK
● The Amazon SageMaker Python SDK includes the generic
sagemaker.estimator.Estimator class, which can be used with any
algorithm.
● To run a model training job, import the Amazon SageMaker Python SDK and get
the Linear Learner container
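Putting these steps together, a sketch using the v1-style SDK (the bucket and hyperparameter values are illustrative, not prescriptive):

```python
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri

sess = sagemaker.Session()

# Look up the Linear Learner container image for the current region.
container = get_image_uri(sess.boto_region_name, "linear-learner")

linear = sagemaker.estimator.Estimator(
    container,
    role=sagemaker.get_execution_role(),
    train_instance_count=1,
    train_instance_type="ml.c4.xlarge",
    output_path="s3://my-bucket/linear-mnist/output",  # placeholder bucket
    sagemaker_session=sess,
)

# Illustrative hyperparameters for MNIST (784 = 28x28 pixels per image).
linear.set_hyperparameters(
    feature_dim=784,
    predictor_type="binary_classifier",
    mini_batch_size=200,
)

# Launch the training job against the recordIO-protobuf data uploaded earlier.
linear.fit({"train": "s3://my-bucket/linear-mnist/train/"})
```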