GluonNLP is a deep learning toolkit for Natural Language Processing. These slides cover the motivation behind the creation of the toolkit and what is available in it. Go try it at https://gluon-nlp.mxnet.io!
Workshop: Build Deep Learning Applications with TensorFlow and SageMaker - Amazon Web Services
by Ahmad Khan, Sr. Solutions Architect, AWS
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding and recommendation engines. In this workshop, you’ll learn how to get started with the TensorFlow deep learning framework using Amazon SageMaker, a platform to easily build, train and deploy models at scale. You’ll learn how to build a model using TensorFlow by setting up a Jupyter notebook to get started with image and object recognition. You’ll also learn how to quickly train and deploy a model through Amazon SageMaker.
Building a Recommender System Using Amazon SageMaker's Factorization Machine ... - Amazon Web Services
Machine Learning Week at the San Francisco Loft: Building a Recommender System Using Amazon SageMaker's Factorization Machine Algorithm
Factorization Machines are a powerful algorithm in the click prediction and recommendation space. Amazon SageMaker has a nearly infinitely scalable implementation that we'll show you how to use to build a recommender of your own.
Speaker: David Arpin - AI Platform Selections Leader, AI Platforms
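To make the session concrete, here is a minimal sketch of the second-order factorization machine scoring function that algorithms like SageMaker's implement: a global bias, linear weights, and pairwise feature interactions factorized through k-dimensional latent vectors. The weights below are random stand-ins, not a trained model.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order FM prediction for one feature vector x.

    Uses the O(n*k) identity:
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    """
    linear = w0 + w @ x
    s = V.T @ x                              # shape (k,): per-factor weighted sums
    interactions = 0.5 * np.sum(s ** 2 - (V ** 2).T @ (x ** 2))
    return linear + interactions

rng = np.random.default_rng(0)
n, k = 6, 3                                  # 6 features, 3 latent factors
x = rng.normal(size=n)
w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))

# Naive O(n^2) double loop over feature pairs, for comparison.
naive = w0 + w @ x + sum(
    (V[i] @ V[j]) * x[i] * x[j] for i in range(n) for j in range(i + 1, n)
)
print(abs(fm_score(x, w0, w, V) - naive) < 1e-9)
```

The factorized form is what makes FMs practical on sparse click data: the pairwise term costs O(n·k) instead of O(n²).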
Build Text Analytics Solutions with Amazon Comprehend and Amazon Translate - Amazon Web Services
by Pratap Ramamurthy, Partner Solutions Architect, AWS
Natural language holds a wealth of information like user sentiment and conversational intent. In this session, we'll demonstrate the capabilities of Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. We'll show you how to build a VOC (Voice of the Customer) application and integrate it with other AWS services including AWS Lambda, Amazon S3, Amazon Athena, Amazon QuickSight, and Amazon Translate. We’ll also show you additional methods for NLP available through Amazon Sagemaker.
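Comprehend's DetectSentiment API returns a sentiment label plus per-class confidence scores. The sketch below parses a response of that shape; the dict and the 0.5 threshold are illustrative assumptions, not real service output.

```python
# Hand-made example shaped like a Comprehend DetectSentiment response.
sample_response = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.93, "Negative": 0.02, "Neutral": 0.04, "Mixed": 0.01,
    },
}

def dominant_sentiment(resp, min_score=0.5):
    """Return the highest-scoring sentiment, or 'NEUTRAL' if none is confident."""
    scores = resp["SentimentScore"]
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label.upper() if score >= min_score else "NEUTRAL"

print(dominant_sentiment(sample_response))  # POSITIVE
```

In a VOC pipeline, a function like this would sit between the Comprehend call and storage in Amazon S3 or Athena.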
Training Chatbots and Conversational Artificial Intelligence Agents with Amaz... - Amazon Web Services
Building a conversational AI experience that can respond to a wide variety of inputs and situations depends on gathering high-quality, relevant training data. Dialog with humans is an important part of this training process. In this session, learn how researchers at Facebook use Amazon Mechanical Turk within the ParlAI (pronounced “parlay”) framework for training and evaluating AI models to perform data collection, human training, and human evaluation. Learn how you can use this interface to gather high-quality training data to build next-generation chatbots and conversational agents.
Create a Serverless Searchable Media Library (AIM342-R1) - AWS re:Invent 2018 - Amazon Web Services
Companies have ever-growing media libraries, making them increasingly difficult to index and search. In this session, we describe how to maintain your library by using Amazon Rekognition, Amazon Transcribe, and Amazon Comprehend to perform automatic metadata extraction from image, video, and audio files. We show you how to then use this metadata to build a serverless media library that can be filtered by image tags, celebrities, and more.
Building, Training, and Deploying fast.ai Models Using Amazon SageMaker (AIM4... - Amazon Web Services
In a short space of time, fast.ai has become a popular deep learning library, driven by the success of the fast.ai Massive Open Online Course (MOOC). It has allowed software developers to achieve, in the span of a few weeks, state-of-the-art results in domains such as computer vision (CV), natural language processing (NLP), and structured data machine learning. In this chalk talk, we go into the details of building, training, and deploying fast.ai-based models using Amazon SageMaker.
Keras vs Tensorflow vs PyTorch | Deep Learning Frameworks Comparison | Edureka - Edureka!
** AI & Deep Learning with Tensorflow Training: https://www.edureka.co/ai-deep-learni... **
This Edureka PPT on "Keras vs TensorFlow vs PyTorch" will provide you with a crisp comparison among the top three deep learning frameworks. It provides detailed and comprehensive knowledge about Keras, TensorFlow, and PyTorch and which one to use for what purposes. The following topics will be covered in this PPT:
Introduction to Keras, TensorFlow, PyTorch
Parameters of Comparison
Level of API
Speed
Architecture
Ease of Code
Debugging
Community Support
Datasets
Popularity
Suitable use cases
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Smarter Event-Driven Edge with Amazon SageMaker & Project Flogo (AIM204-S) - ... - Amazon Web Services
A single device can produce thousands of events every second. In traditional implementations, all data is transmitted back to a server or gateway for scoring by a machine learning (ML) model. This data is also stored in a data repository for later use by data scientists. In this session, we explore data science techniques for dealing with time series data leveraging Amazon SageMaker. We also look at modeling applications using deterministic rules with streaming pipelines for data prep, and model inferencing using deep learning frameworks directly onto edge devices or onto AWS Lambda using Project Flogo, an open-source event-driven framework. This session is brought to you by AWS partner, TIBCO Software Inc.
Let’s Talk about Reinforcement Learning with Amazon SageMaker RL (AIM399) - A... - Amazon Web Services
Reinforcement learning has emerged as an exciting new technique in the world of machine learning (ML), where your ML models can achieve specific outcomes without the need for pre-labeled training data. Join us in this chalk talk as we discuss the newly announced Amazon SageMaker RL, which takes a different approach to training ML models. We dive deep into scenarios where there isn’t a right answer; instead, there is an optimal outcome for a given problem. At the end of this chalk talk, you will be familiar with Amazon SageMaker RL and understand how to use reinforcement learning for your businesses and build intelligent applications.
Using Amazon ML Services for Video Transcription & Translation: Machine Learn... - Amazon Web Services
Machine Learning Workshops at the San Francisco Loft
Using Amazon ML Services for Video Transcription and Translation
In this hands-on workshop, participants will use AWS ML services to generate transcripts from audio files, use NLP to analyze those transcripts, and produce subtitles in multiple languages. Using ML, you can keep pace with the proliferation of audio/video content across businesses. Asset managers can unlock hidden value in existing media libraries by finding precise moments when particular keywords or phrases are spoken; video publishers can benefit from subtitle and localized files for reaching global audiences; and IT organizations can utilize transcription data to improve organizational governance.
Level: 200-300
Osemeke Isibor, Solutions Architect, AWS
With the launch of several new Machine Learning (ML) services on AWS, now is your chance to learn how to quickly apply ML to solve real-world business problems, no prior ML experience necessary. During this session, you will learn about vision services to analyze your images and video for facial comparison, object detection and detecting text (Amazon Rekognition and Amazon Rekognition Video), building conversational interfaces for chatbots (Amazon Lex), and core language services for converting audio to text (Amazon Transcribe), converting text to speech (Amazon Polly), identifying topics and themes in text (Amazon Comprehend) and translating between two languages (Amazon Translate).
Mike Gillespie - Build Intelligent Applications with AWS ML Services (200).pdf - Amazon Web Services
Organizations are increasingly turning to machine learning to build intelligent applications and get more insights out of their data in real-time. In this session, you’ll learn about AWS Machine Learning APIs for computer vision and language, and how to get started with these pre-trained services: Amazon Rekognition, Amazon Comprehend, Amazon Transcribe, Amazon Translate, Amazon Polly, and Amazon Lex. We’ll also show how these services connect to AWS’s comprehensive data platform and services to drive the success of your machine learning projects.
NEW LAUNCH! Amazon Rekognition Video eliminates manual cataloging of video wh... - Amazon Web Services
During this session, we will provide an overview of Amazon Rekognition Video, a deep learning powered video analysis service that tracks people, detects activities, and recognizes objects, celebrities, and inappropriate content. Amazon Rekognition Video can detect and recognize faces in live streams. Rekognition Video also analyzes existing video stored in Amazon S3 and returns specific labels of activities, people and faces, and objects with time stamps so you can easily locate the scene. For people and faces, it also returns the bounding box, which is the specific location of the person or face in the frame. We will also cover different use cases for Amazon Rekognition Video in applications such as security and public safety, and media and entertainment.
Supercharge Any Alexa Skill by Understanding What Games Do (ALX403-R2) - AWS ... - Amazon Web Services
Games push the boundaries of tech, and we can learn from them for a wide range of use cases. In this session, we look at what makes an Alexa quiz game, interactive fiction, and companion games work. Learn how to build engaging experiences with dialog management and entity resolution (Alexa NLU), session persistence (Amazon DynamoDB), production value (SSML, Amazon S3), as well as state handling, multi-modal displays, interceptors (ASK SDK), and cross-device interactions (Gadgets, AWS IoT for PC games).
SageMaker Algorithms: Infinitely Scalable Machine Learning - Amazon Web Services
Amazon SageMaker is a fully-managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models, at any scale. Amazon SageMaker provides high-performance, machine learning algorithms optimized for speed, scale, and accuracy, to perform training on petabyte-scale data sets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker Notebook, which will be made available to attendees.
Level: 300-400
Speaker: Binoy Das - Partner Solutions Architect, AWS
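The webinar above distinguishes streaming from batch algorithms. A minimal illustration of the difference: an online update computes the mean in one pass over a stream with O(1) memory, matching the batch result without ever holding the whole data set.

```python
def streaming_mean(stream):
    """One-pass (online) mean over an iterable of numbers."""
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n       # incremental update, O(1) memory
    return mean

data = [2.0, 4.0, 6.0, 8.0]
batch = sum(data) / len(data)        # batch: needs the whole data set at once
print(streaming_mean(iter(data)) == batch)
```

SageMaker's built-in algorithms apply the same idea at scale: model state is updated as shards of data stream past, rather than requiring repeated full-dataset passes.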
Add Intelligence to Applications with AWS ML Services
Organizations are increasingly turning to machine learning to build intelligent applications and get more insights out of their data in real-time. In this session, you’ll learn about AWS Machine Learning APIs for computer vision and language, and how to get started with these pre-trained services: Amazon Rekognition, Amazon Comprehend, Amazon Transcribe, Amazon Translate, Amazon Polly, and Amazon Lex. We’ll also show how these services connect to AWS’s comprehensive data platform and services to drive the success of your machine learning projects.
Level: 200
Speaker: Yash Pant - Enterprise Solutions Architect, AWS
Building Text Analytics Applications on AWS using Amazon Comprehend - AWS Onl... - Amazon Web Services
Learning Objectives:
- Get an introduction to Natural Language Processing (NLP)
- Learn benefits of new approaches to analytics and technologies that help empower better decisions, e.g., NLP, data prep
- Build a text analytics solution with Amazon Comprehend and Amazon Relational Database Service in a step by step demo
Improve Your Customer Experience with Machine Translation (AIM321) - AWS re:I... - Amazon Web Services
Machine Translation powers Amazon’s international expansion. Sign up to learn how you can leverage Amazon Translate to increase customer satisfaction, cut down response times, and build a more efficient customer support operation. For example, you can add real-time translation to chat, email, and helpdesk so an English-speaking agent can communicate with customers in their preferred language, or translate your knowledge base into multiple languages to make it accessible to customers and employees around the world.
ML Best Practices: Prepare Data, Build Models, and Manage Lifecycle (AIM396-S... - Amazon Web Services
In this session, we cover best practices for enterprises that want to use powerful open-source technologies to simplify and scale their machine learning (ML) efforts. Learn how to use Apache Spark, the data processing and analytics engine commonly used at enterprises today, for data preparation as it unifies data at massive scale across various sources. We train models using TensorFlow, and we use MLflow to track experiment runs between multiple users within a reproducible environment. We then manage the deployment of models to production. We show you how MLflow can be used with any existing ML library and incrementally incorporated into an existing ML development process. This session is brought to you by AWS partner, Databricks.
Kubernetes is making good on the promise of changing the datacenter from a group of computers into "a computer" itself. This presentation outlines the new features in the Kubernetes 1.1 and 1.2 releases.
AWS DeepLens Workshop: Building Computer Vision Applications - BDA201 - Anahe... - Amazon Web Services
In this workshop, learn how to build and deploy computer vision models using the AWS DeepLens deep learning-enabled video camera. Learn to build a machine learning model from scratch using Amazon SageMaker, and get hands-on experience with AWS DeepLens by extending that model to build an end-to-end AI application using Amazon Rekognition. Attendees also learn about use cases built by the community which integrate other AWS services and extend the functionality of AWS DeepLens. Please note, you must have an AWS account to participate in this workshop. If setting up a new account, please do so at least 24 hours in advance of the workshop.
Integrate Machine Learning into Your Spring Application in Less than an Hour - VMware Tanzu
SpringOne 2020
Integrate Machine Learning into Your Spring Application in Less than an Hour
Hermann Burgmeier, Senior Software Engineer at Amazon
Qing Lan, Software Development Engineer at AWS
Mikhail Shapirov, Senior Partner Solutions at Amazon Web Services, Inc
Vaibhav Goel, Sr. Software Development Engineer at Amazon
You’ve taken your first steps into Node.js. You’ve learned how to initialize your projects, you’ve played with some dependencies, and you’re ready to get into some serious Node work. In this session, we’ll dive further into Node as a framework. We’ll learn how to master Node’s inherently asynchronous nature, take advantage of Node’s events and streams capabilities, and learn about sophisticated Node deployments at scale. Participants will leave with a richer understanding of what Node has to offer and higher confidence in dealing with some of Node’s more difficult concepts.
Accelerate machine-learning workloads using Amazon EC2 P3 instances - CMP204 ... - Amazon Web Services
Organizations are tackling exponentially complex questions across advanced scientific, energy, high-tech, and medical fields. Machine learning (ML) makes it possible to quickly explore a multitude of scenarios and generate the best answers, ranging from image, video, and speech recognition to autonomous vehicle systems and weather prediction. In this interactive chalk talk, we discuss the latest advancements in compute to support your ML goals. We also discuss how, for data scientists, researchers, and developers who want to speed development of their ML applications, Amazon Elastic Compute Cloud (Amazon EC2) P3 instances are the most powerful, cost-effective, and versatile GPU-compute instances available.
From Monolith to Microservices (And All the Bumps along the Way) (CON360-R1) ... - Amazon Web Services
Applications built on a microservices-based architecture and packaged as containers bring several benefits to your organization. In this session, Duolingo, a popular language-learning platform and an Amazon ECS customer, describes its journey from a monolith to a microservices architecture. We highlight the hurdles you may encounter, discuss how to plan your migration to microservices, and explain how you can use Amazon ECS to manage this journey.
Machine Learning on AWS for Developers, Data Scientists, and Experts - AWS Germany
In this talk, we give an overview, with examples, of the current tools for machine learning (ML) on AWS. The overview covers everything from easy-to-use, fully managed ML services for developers, through ML platforms for data scientists, to ML-optimized infrastructure and software components. Examples and online demos show how easily ML methods can be used on AWS.
Moderator: Christian Petters, Solutions Architect, AWS
Build Deep Learning Applications Using Apache MXNet, Featuring Workday (AIM40... - Amazon Web Services
The Apache MXNet deep learning framework is used for developing, training, and deploying diverse AI applications, including computer vision, speech recognition, and natural language processing at scale. In this session, learn how to get started with MXNet on the Amazon SageMaker machine learning platform. Hear from Workday about how they built computer vision and natural language processing (NLP) models using MXNet to automatically extract information from paper documents, such as expense receipts and populate data records. Workday also shares its experience using Sockeye, an MXNet toolkit for quickly prototyping sequence-to-sequence NLP models.
The PPT contains the following content:
1. What is Google Cloud Study Jam
2. What is cloud computing
3. Fundamentals of cloud computing
4. What is Generative AI
5. Fundamentals of Generative AI
6. Brief overview of Google Cloud Study Jam
7. Networking session
Build Deep Learning Applications Using Apache MXNet - Featuring Chick-fil-A (... - Amazon Web Services
The Apache MXNet deep learning framework is used for developing, training, and deploying diverse AI applications, including computer vision, speech recognition, natural language processing, and more at scale. In this session, learn how to get started with Apache MXNet on the Amazon SageMaker machine learning platform. Chick-fil-A shares how they got started with MXNet on Amazon SageMaker to measure waffle fry freshness and how they leverage AWS services to improve the Chick-fil-A guest experience.
Recent Advances in Natural Language ProcessingApache MXNet
This talk goes over recent progress in the Natural Language Processing field in terms of language representation. Starting with the classic tf-idf, we cover word2vec, ELMo, BERT, GPT-2, and XLNet.
This deck was used for an ACNA2019 talk.
Slides: Thomas Delteil
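As a reminder of the baseline the talk starts from, tf-idf fits in a few lines of plain Python. This is a minimal sketch only; the toy corpus and the smoothed-idf choice are my own illustration, not taken from the slides:

```python
import math
from collections import Counter

# Toy corpus (made up for illustration); each document is a list of tokens.
docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs are pets".split(),
]

# Document frequency: in how many documents each term appears.
df = Counter(term for doc in docs for term in set(doc))
n_docs = len(docs)

def tfidf(doc):
    """Map each term in `doc` to its tf-idf weight (raw tf, smoothed idf)."""
    tf = Counter(doc)
    return {t: tf[t] * math.log((1 + n_docs) / (1 + df[t])) for t in tf}

weights = tfidf(docs[0])
# "the" appears in two of three documents, so its idf is low;
# "mat" appears in only one document, so it scores higher.
assert weights["mat"] > weights["the"]
```

Terms that occur everywhere get a low idf and are down-weighted; rare, document-specific terms dominate the representation, which is exactly the limitation the contextual models later in the deck address.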
Fine-tuning BERT for Question AnsweringApache MXNet
This deck covers the problem of fine-tuning a pre-trained BERT model for the task of Question Answering. Check out the GluonNLP model zoo here for models and tutorials: http://gluon-nlp.mxnet.io/model_zoo/bert/index.html
Slides: Thomas Delteil
Introduction to object tracking with Deep LearningApache MXNet
This presentation introduces the problem of tracking objects with deep learning and quickly goes over specific implementations.
Go build! https://mxnet.apache.org
Slides: Thomas Delteil
This presentation introduces the topic of computer vision, especially through the lens of Deep Learning.
Go build! https://gluon-cv.mxnet.io
Slides: Thomas Delteil
Image Segmentation: Approaches and ChallengesApache MXNet
These slides go over the problem of deep semantic segmentation, covering the different approaches taken, from hourglass autoencoders to pyramid networks.
Slides by Thomas Delteil
Deep Learning With Apache MXNet On Video by Ben Taylor @ ziff.aiApache MXNet
This talk goes over using Apache MXNet on video streams, such as security footage from Ring or live Xbox video data, to perform inference and indexing. This can be used to classify video events, detect anomalies in normal behavior, and search. The talk focuses on using FFmpeg to feed Apache MXNet models for fast inference throughput and performance, and also discusses the difference between frame-level inference and frame-buffer inference (comprehending a temporal video event).
Links to videos on the slides:
IntelAct: Winner, Visual Doom AI Competition, Full Deathmatch: https://www.youtube.com/watch?v=947bSUtuSQ0
GPU assisted call of duty processing, prep for AI auto-play: https://www.youtube.com/watch?v=gTXOYzSC_ZE
Presented at https://www.meetup.com/deep-learning-with-mxnet/events/258901722/
What is Deep Learning
Rise of Deep Learning
Phases of Deep Learning - Training and Inference
AI & Limitations of Deep Learning
Apache MXNet History, Apache MXNet concepts
How to use Apache MXNet and Spark together for Distributed Inference.
Various open-source projects around Apache MXNet let you build an end-to-end pipeline: build models in any of 7 supported languages, or use Keras if you are already a Keras user.
Get state-of-the-art pre-trained models, with code and examples, using GluonCV and GluonNLP.
Use ONNX to create and save models right from MXNet so you can port them to any framework.
Use Apache 2.0 Licensed MXNet Model Server to deploy your models.
Use TVM to optimize for your own hardware.
In this talk ONNX (Open Neural Network eXchange) is introduced, and the ONNX Model Zoo is used as the base for fine-tuning with AWS SageMaker and Apache MXNet's Gluon API. With a fine-tuned model trained on Caltech101, AWS Greengrass is discussed for edge deployments and the TVM stack is suggested as a method for optimising the inference of models on edge devices.
Presented by: Thom Lane at Linaro Connect Vancouver 2018 on 19th September 2018.
Distributed Inference with MXNet and SparkApache MXNet
Deep Learning has become ubiquitous with the abundance of data and the commoditization of compute and storage. Pre-trained models are readily available for many use cases. Distributed inference has many applications, such as pre-computing results offline and backfilling historic data with predictions from state-of-the-art models. Inference on large-scale datasets comes with many of the challenges prevalent in distributed data processing. This presentation shows how to efficiently run deep learning prediction on large data sets, leveraging Apache Spark and Apache MXNet (incubating).
This presentation describes two major papers on multivariate time series using deep neural networks. The first paper, DeepAR, was developed at Amazon to handle forecasting at scale, where the same model can be applied to millions of products. DeepAR is implemented as a built-in algorithm of Amazon SageMaker. A code example is provided.
The second paper, Long- and Short-Term Temporal Patterns with Deep Neural Networks, was developed at CMU and introduces a novel way to detect both short-term and long-term seasonality in data through the introduction of a skip-RNN.
A Gluon implementation of the paper is provided in the presentation.
Inference at the edge is of ever-increasing importance for companies, so it is crucial to be able to make models smaller. Compressing models can be lossless or can result in a loss of accuracy. This presentation provides a survey of compression techniques for deep learning models. It then describes different architectures of AWS IoT/Greengrass that combine on-device inference and GPU inference in a hub model. Additionally, the presentation introduces MXNet, which has a small footprint and is efficient for both inference and training in distributed settings.
Building Content Recommendation Systems using MXNet GluonApache MXNet
The Netflix competition triggered a flurry of research on recommendation engines. This presentation provides a survey of techniques and models for creating a recommender system. It covers Matrix Factorisation, Factorisation Machines, Distributed Factorisation Machines, and DSSM networks, and provides code examples for developing Matrix Factorisation in Gluon. The presentation ends with tips and tricks for large-scale, real-time recommender engines.
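The Gluon code from the presentation is not reproduced here, but the core idea of matrix factorisation can be sketched with plain-Python SGD. The toy ratings, hyperparameters, and update rule below are a minimal illustration, not the talk's implementation:

```python
import random

# Toy user-item ratings (hypothetical data): (user, item, rating).
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 4.0), (2, 2, 2.0)]
n_users, n_items, k = 3, 3, 2        # k latent factors per user/item
lr, reg, epochs = 0.05, 0.02, 200    # SGD hyperparameters

random.seed(0)
P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]  # user factors
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]  # item factors

def predict(u, i):
    """Predicted rating = dot product of user and item factor vectors."""
    return sum(P[u][f] * Q[i][f] for f in range(k))

# Stochastic gradient descent on squared error with L2 regularisation.
for _ in range(epochs):
    for u, i, r in ratings:
        err = r - predict(u, i)
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)
            Q[i][f] += lr * (err * pu - reg * qi)

# After training, the high observed rating ranks above the low one.
assert predict(0, 0) > predict(1, 2)
```

The same per-observation update is what a Gluon version expresses via autograd and a trainer; frameworks add minibatching, GPUs, and distributed training on top of this loop.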
Deep Q-Learning has in recent years shown great promise for solving general-purpose problems. One area where DQN and its variants have been successful is playing Atari games. This presentation provides a gentle introduction to Reinforcement Learning and Q-Learning. It then describes a series of improvements on Q-Learning, covering Deep Q-Learning, Deep Double Q-Learning, and Deep Bayesian Q-Learning, and provides implementation references for the models using MXNet.
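Tabular Q-learning, the starting point for the deep variants the talk describes, fits in a few lines. The corridor environment and hyperparameters below are invented for this sketch and are not from the presentation:

```python
import random

# Toy environment (made up for this sketch): a corridor of 5 cells;
# the agent starts in cell 0 and earns reward 1 for reaching cell 4.
N_STATES, ACTIONS = 5, (0, 1)        # action 0 = left, action 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def greedy(s):
    """Pick the best-known action, breaking ties randomly."""
    if Q[s][0] == Q[s][1]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[s][a])

random.seed(0)
for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy moves right in every non-terminal state.
assert all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1))
```

DQN replaces the table `Q` with a neural network trained on the same bootstrapped target, which is what makes Atari-scale state spaces tractable.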
Building Applications with Apache MXNetApache MXNet
This deck quickly walks through the fundamentals of Deep Learning and describes how the symbolic engine of MXNet implements such networks. It then introduces Gluon and provides code examples. The last section of the presentation covers the latest developments in the Gluon family of tools, including GluonNLP, an NLP toolkit with SOTA implementations of NLP algorithms; GluonCV, a computer vision toolkit with SOTA implementations of vision algorithms; and an MXNet backend for Keras.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
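The base-pairing idea in the last point can be illustrated in code. This is a deliberately simplified model assuming perfect Watson-Crick pairing and a made-up sequence; real small-RNA targeting tolerates mismatches and G:U wobble pairs:

```python
# Watson-Crick base pairing for RNA: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Return the reverse complement of an RNA sequence."""
    return "".join(PAIR[base] for base in reversed(rna))

def find_target_site(small_rna, mrna):
    """Index in `mrna` where `small_rna` pairs perfectly, or -1 if absent.

    A small RNA silences an mRNA by base pairing with a complementary
    site, so we search the mRNA for the small RNA's reverse complement.
    """
    return mrna.find(reverse_complement(small_rna))

# Hypothetical 21-nt siRNA and a toy mRNA built to contain its target site.
sirna = "AUGGCUACGAUCGGAUCCAAG"
mrna = "GGGAGA" + reverse_complement(sirna) + "AAAAAA"
assert find_target_site(sirna, mrna) == 6   # site starts right after "GGGAGA"
```

The perfect-match search mirrors siRNA behaviour; miRNA targeting, which permits mismatches, would need an alignment-style scoring function instead of an exact string search.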
Discovery of siRNA:
The first small RNA:
In 1993 Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing of another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi (non-coding RNA):
miRNA
- Length: 23-25 nt
- Trans-acting
- Binds its target mRNA with mismatches
- Causes translation inhibition
siRNA
- Length: 21 nt
- Cis-acting
- Binds its target mRNA as a perfectly complementary sequence
piRNA (Piwi-interacting RNA)
- Length: 25-36 nt
- Expressed in germ cells
- Regulates transposon activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kD) RNA-protein binding complex that triggers mRNA degradation in response to siRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: an endonuclease of the RNase III family
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN :
1. PAZ (PIWI/Argonaute/Zwille): recognises the target mRNA.
2. PIWI (P-element induced wimpy testis): cleaves the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
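Cytoplasmic segregation and heteroplasmy can be illustrated with a toy drift simulation; the organelle count and the resampling model below are simplifying assumptions for illustration, not from the slides:

```python
import random

def daughter_fraction(mutant_frac, n_organelles, rng):
    """Mutant fraction in one daughter cell after division.

    Each of the daughter's organelles is drawn at random from the
    parent's pool, so the mutant fraction drifts between divisions.
    """
    mutants = sum(rng.random() < mutant_frac for _ in range(n_organelles))
    return mutants / n_organelles

rng = random.Random(42)
frac, n = 0.5, 20            # start heteroplasmic: 50% mutant organellar DNA
history = [frac]
while 0.0 < frac < 1.0:      # drift until one genome type is fixed
    frac = daughter_fraction(frac, n, rng)
    history.append(frac)

# Random cytoplasmic segregation eventually makes the lineage homoplasmic
# (all-mutant or all-wild-type), as seen in variegated plant sectors.
assert history[-1] in (0.0, 1.0)
```

Because 0% and 100% are absorbing states, repeated random partitioning explains how a heteroplasmic cell line can give rise to uniformly mutant or uniformly wild-type descendants.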
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io's surface using adaptive optics at visible wavelengths.
Brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
What are greenhouse gases and how many gases affect the Earth?moosaasad1975
What are greenhouse gases, how do they affect the Earth and its environment, and what does the future hold for the environment and the Earth as the weather and the climate change?
15. Fixed Bucketing + Length-aware Batch Size
[Diagram: buckets of increasing sequence length; the batch size shrinks with bucket length (e.g. 18, 11, 8) so that the batch-size-to-length ratio stays roughly constant]
Average Padding = 1.8
Better throughput! ✌️
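The scheme on this slide can be sketched as follows; this is a simplified illustration with made-up lengths and a hypothetical 320-token budget, not GluonNLP's own sampler:

```python
# Fixed bucketing + length-aware batch size: pad each sequence up to its
# bucket's length bound, and give buckets of short sequences a larger
# batch size so every batch holds roughly the same number of padded tokens.
def bucket_batches(lengths, bucket_width=10, tokens_per_batch=320):
    buckets = {}
    for idx, n in enumerate(lengths):
        bound = ((n - 1) // bucket_width + 1) * bucket_width  # padded length
        buckets.setdefault(bound, []).append(idx)
    batches = []
    for bound, idxs in sorted(buckets.items()):
        batch_size = max(1, tokens_per_batch // bound)        # length-aware
        for i in range(0, len(idxs), batch_size):
            batches.append((bound, idxs[i:i + batch_size]))
    return batches

lengths = [7, 9, 8, 18, 17, 29, 30, 28]
batches = bucket_batches(lengths)
# Sequences of length <= 10 could be batched 32 at a time under the
# 320-token budget, while length-30 sequences get a batch size of only 10.
assert batches == [(10, [0, 1, 2]), (20, [3, 4]), (30, [5, 6, 7])]
```

Keeping tokens-per-batch roughly constant is what wastes less compute on padding and yields the throughput gain the slide claims.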
16. Improvement over published results
Table 3: AWD Language Model. AWD [1] model on WikiText2, test perplexity: GluonNLP 66.9 (250 epochs) vs. PyTorch 67.8 (250 epochs); diff -0.9.
Table 1: fastText n-gram embedding scores, trained on the Text8 dataset, evaluated on WordSim353.
Table 2: Machine translation model BLEU scores under the same standard and settings.
17. Machine Translation: Google Neural Machine Translation (GNMT)
Encoder: bidirectional LSTM + residual connections
Decoder: LSTM + residual connections + MLP attention
• GluonNLP: BLEU 26.22 on IWSLT2015, 10 epochs, beam size = 10
• Tensorflow/nmt: BLEU 26.10 on IWSLT2015, beam size = 10
Wu, Yonghui, et al. "Google's neural machine translation system: Bridging the gap between human and machine translation." arXiv preprint arXiv:1609.08144 (2016).
18. Machine Translation: Transformer
• Encoder: 6 layers of self-attention + feed-forward
• Decoder: 6 layers of masked self-attention with attention over the encoder output, + feed-forward
• GluonNLP: BLEU 26.81 on WMT2014 en-de, 40 epochs
• Tensorflow/t2t: BLEU 26.55 on WMT2014 en-de
Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems. 2017.
19. Transfer learning: ELMo (Embeddings from Language Models)
• Feature-based approach
• Pre-training a bidirectional language model
• Character embeddings + stacked bidirectional LSTMs
• GluonNLP tutorial
Deep contextualized word representations, Peters et al., 2018
20. Transfer learning: BERT (Bidirectional Encoder Representations from Transformers)
• Fine-tuning approach
• Pre-training: masked language model + next-sentence prediction
• Stacked transformer encoder + BPE
• GluonNLP tutorial
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Devlin et al., 2018
First-call deck for a high-level introduction to Apache MXNet.
This is the core value proposition of GluonNLP:
- SOTA results and reproducing scripts for baselines.
- APIs that reduce implementation complexity.
- Tutorials to get people started in NLP.
NLP provides dynamic-graph workloads: the motivation for static memory in Gluon, dynamic graph optimization, and a rounded-up GPU memory pool.
Over 300 pre-trained word embeddings.
Intrinsic evaluation tools and datasets; embedding training.
Transformer: 13.36 without static memory vs. 59.02 with static memory.