Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. In this session, we'll show you how to combine it with Apache Spark to build efficient machine learning pipelines.
Driving AI Innovation with Machine Learning powered by AWS. AI is opening up new insights and efficiencies in enterprises of every industry. Learn how enterprises are using AWS’ machine learning capabilities combined with its deep storage, compute, analytics, and security services to deliver intelligent applications today. Strategies to develop ML expertise within your org will also be discussed.
Introducing AWS DataSync - Simplify, automate, and accelerate online data tra... (Amazon Web Services)
SFTP is used for the exchange of data across many industries, including financial services, healthcare, and retail. In this session, we will introduce you to AWS Transfer for SFTP, a service that helps you easily migrate file transfer workflows to AWS without needing to modify applications or manage SFTP servers. We will demonstrate the product and talk about how to migrate your users so they continue to use their existing SFTP clients and credentials, while the data they access is stored in Amazon S3. You will also learn how FINRA is using this new service in conjunction with their Data Lake on AWS.
With cloud, you have the flexibility to acquire and use IT resources and services on demand, which represents a major shift from traditional approaches to managing cost. A key first step on your organization's cloud journey is to establish best practices for cost management in the cloud. AWS' cost optimization techniques help our customers understand cost drivers and effectively manage the cost of running existing application workloads or new ones in the cloud.
AWS delivers an integrated suite of services that provide everything needed to quickly and easily build and manage a data lake for analytics. AWS-powered data lakes can handle the scale, agility, and flexibility required to combine different types of data and analytics approaches to gain deeper insights, in ways that traditional data silos and data warehouses cannot. In this session, we will show you how you can quickly build a data lake on AWS that ingests, catalogs, and processes incoming data and makes it ready for analysis. Using a live demo, we demonstrate the capabilities of AWS-provided analytics services such as AWS Glue, Amazon Athena, and Amazon EMR, and show how to build a data lake on AWS step by step.
This slide deck explores the impact of MSA on API strategies and designs and the possible changes in API design and deployment, API security, control and monitoring, and CI/CD.
Watch recording: https://wso2.com/library/webinars/2018/09/apis-in-a-microservice-architecture
The benefits of running databases in the cloud are compelling, but how do you get the data there? In this session we will explore how to use the AWS Database Migration Service and the AWS Schema Conversion Tool to help you migrate, or continuously replicate, your on-premises databases to AWS.
Speaker: Jarrod Spiga, Solutions Architect, Amazon Web Services
Learn how to get insight and understanding into where your AWS costs are going by using automated tag management of your AWS resources.
See the accompanying webinar at https://www.youtube.com/watch?v=m762X3eGyKQ
AWS Glue is a fully managed, serverless extract, transform, and load (ETL) service that makes it easy to move data between data stores. AWS Glue simplifies and automates the difficult and time-consuming tasks of data discovery, conversion mapping, and job scheduling so you can focus more of your time on querying and analyzing your data using Amazon Redshift Spectrum and Amazon Athena. In this session, we introduce AWS Glue, provide an overview of its components, and share how you can use AWS Glue to automate discovering your data, cataloging it, and preparing it for analysis.
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is an upcoming managed service for running Kubernetes on AWS. This session will provide an overview of Amazon EKS, why we built it, and how it works.
First Steps with Apache Kafka on Google Cloud Platform (Confluent)
Speakers: Jay Smith, Cloud Customer Engineer, Google Cloud + Gwen Shapira, Product Manager, Confluent
Curious about Apache Kafka®? Find out why you would want to use the de facto standard for real-time streaming, the easiest way to get started and how to leverage the extensive Apache Kafka ecosystem. In this chat, we'll talk about three common use cases, review stream processing patterns and discuss integration with important GCP services such as BigQuery. We'll also demo how to implement real-time clickstream analytics on Confluent Cloud, fully managed Apache Kafka as a service.
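As a concrete starting point for the demo scenario above, connecting a Kafka client to a fully managed cluster such as Confluent Cloud is mostly a matter of configuration. A minimal properties sketch (the broker endpoint and the API key/secret are placeholders you would obtain from your own cluster):

```properties
# Client configuration for a fully managed Kafka cluster (Confluent Cloud)
bootstrap.servers=pkc-XXXXX.us-central1.gcp.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";
```

The same properties file works for producers, consumers, and Kafka Streams applications, which is what makes the managed service a drop-in target for existing Kafka clients.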
CI/CD for Your Machine Learning Pipeline with Amazon SageMaker (DVC303) - AWS... (Amazon Web Services)
Amazon SageMaker is a powerful tool that enables us to build, train, and deploy at scale our machine learning-based workloads. With help from AWS CI/CD tools, we can speed up this pipeline process. In this talk, we discuss how to integrate Amazon SageMaker into a CI/CD pipeline as well as how to orchestrate with other serverless components.
This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
Regardless of whether you do nothing, build your own, or buy from AWS or another CSP, someone from finance will come back to you and ask what happened to their money. In this session we will cover cloud ROI: the key economic drivers for moving to the cloud, and the tips and tricks for cost optimization on AWS.
Democratizing Data Quality Through a Centralized Platform (Databricks)
Bad data leads to bad decisions and broken customer experiences. Organizations depend on complete and accurate data to power their business, maintain efficiency, and uphold customer trust. With thousands of datasets and pipelines running, how do we ensure that all data meets quality standards, and that expectations are clear between producers and consumers? Investing in shared, flexible components and practices for monitoring data health is crucial for a complex data organization to rapidly and effectively scale.
At Zillow, we built a centralized platform to meet our data quality needs across stakeholders. The platform is accessible to engineers, scientists, and analysts, and seamlessly integrates with existing data pipelines and data discovery tools. In this presentation, we will provide an overview of our platform’s capabilities, including:
Giving producers and consumers the ability to define and view data quality expectations using a self-service onboarding portal
Performing data quality validations using libraries built to work with Apache Spark
Dynamically generating pipelines that can be abstracted away from users
Flagging data that doesn’t meet quality standards at the earliest stage and giving producers the opportunity to resolve issues before use by downstream consumers
Exposing data quality metrics alongside each dataset to provide producers and consumers with a comprehensive picture of health over time
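At its core, the validation capability described above boils down to evaluating declared expectations against a dataset and reporting which ones fail. A minimal sketch in plain Python (a hypothetical API for illustration, not Zillow's actual platform):

```python
# Sketch of declarative data-quality checks: producers declare named
# expectations, the framework evaluates them against the data and
# returns the names of the expectations that failed.

def expect_not_null(column):
    return lambda rows: all(r.get(column) is not None for r in rows)

def expect_in_range(column, lo, hi):
    return lambda rows: all(
        lo <= r[column] <= hi for r in rows if r.get(column) is not None
    )

def validate(rows, expectations):
    """Return the names of expectations that failed."""
    return [name for name, check in expectations.items() if not check(rows)]

rows = [
    {"zip": "98101", "price": 450_000},
    {"zip": "98109", "price": -5},  # bad record: negative price
]
expectations = {
    "zip_not_null": expect_not_null("zip"),
    "price_in_range": expect_in_range("price", 0, 50_000_000),
}
failed = validate(rows, expectations)  # -> ["price_in_range"]
```

Flagging failures this early, before downstream consumers read the data, is what allows producers to resolve issues before they propagate.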
How Can I Build a Landing Zone & Extend my Operations into AWS to Support my ... (Amazon Web Services)
AWS Landing Zone accelerates customer adoption of the cloud by providing a prescriptive set of instructions for deploying an AWS-recommended foundation of interrelated AWS accounts, networks, and core services. AWS Landing Zone provides prescriptive guidance and best practice templates that a customer can deploy into their initial AWS environment with confidence that it will grow to meet future business needs including security and regulatory compliance requirements. Learn More: https://aws.amazon.com/government-education/
Application Load Balancer and the integration with Auto Scaling and ECS - Pop-... (Amazon Web Services)
Classic Load Balancer and Application Load Balancer automatically distribute incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about load balancer configuration and day-to-day management, as well as use in conjunction with Auto Scaling and ECS. We explain how to make decisions about the service and share best practices and useful tips for success.
Apache Kafka in Financial Services - Use Cases and Architectures (Kai Wähner)
The Rise of Event Streaming in Financial Services - Use Cases, Architectures and Examples powered by Apache Kafka.
The New FinServ Enterprise Reality: Every company is a software company. Innovate OR be Disrupted. Learn how Event Streaming with Apache Kafka and its ecosystem help...
More details:
https://www.kai-waehner.de/apache-kafka-financial-services-industry-banking-finserv-payment-fraud-middleware-messaging-transactions
https://www.kai-waehner.de/blog/2020/04/15/apache-kafka-machine-learning-banking-finance-industry/
https://www.kai-waehner.de/blog/2020/04/24/mainframe-offloading-replacement-apache-kafka-connect-ibm-db2-mq-cdc-cobol/
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definition and schema) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL. AWS Glue generates the code to execute your data transformations and data loading processes.
Level: Intermediate
Speakers:
Ryan Malecky - Solutions Architect, EdTech, AWS
Rajakumar Sampathkumar - Sr. Technical Account Manager, AWS
In this session we will introduce key ETL features of AWS Glue and cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We will also discuss how to build scalable, efficient, and serverless ETL pipelines.
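The "point AWS Glue at your data" step described above is typically expressed as a crawler that scans an S3 path and populates the Data Catalog. A minimal CloudFormation sketch (the bucket path, database name, and the GlueServiceRole role it references are placeholders):

```yaml
Resources:
  RawDataCrawler:
    Type: AWS::Glue::Crawler
    Properties:
      Name: raw-data-crawler
      Role: !GetAtt GlueServiceRole.Arn     # hypothetical IAM role with Glue and S3 access
      DatabaseName: raw_data                # Data Catalog database to populate
      Targets:
        S3Targets:
          - Path: s3://example-bucket/raw/
      Schedule:
        ScheduleExpression: cron(0 2 * * ? *)   # crawl nightly at 02:00 UTC
```

Once the crawler has run, the discovered tables are immediately queryable from Athena and Redshift Spectrum, and can serve as sources for Glue ETL jobs.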
Using AWS IoT & Amazon SageMaker to Improve Manufacturing Operations - SVC204... (Amazon Web Services)
Predictive maintenance holds great promise for improving industrial operations across many industries, including mining, manufacturing, oil and gas, and commercial agriculture. Industrial companies want to reap the benefits of IoT applications, but there is a lot to learn before getting started. In this session, we discuss how you can improve manufacturing plant efficiency with predictive maintenance and asset condition monitoring using AWS services, including AWS IoT Core, AWS IoT Greengrass, and Amazon SageMaker. We'll also be joined on stage by Reliance Steel & Aluminum Co., the largest metals service center operator in North America, to hear how they are improving their manufacturing plant efficiency with preventative maintenance and asset management using AWS services including AWS IoT Core, AWS IoT Greengrass, and Amazon SageMaker.
Building Serverless Analytics Pipelines with AWS Glue (ANT308) - AWS re:Inven... (Amazon Web Services)
Organizations need to gain insight and knowledge from a growing number of IoT, API, clickstream, and unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue and cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL pipelines for your data lake. We also discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Building Machine Learning inference pipelines at scale | AWS Summit Tel Aviv ... (Amazon Web Services)
Real-life machine learning (ML) workloads typically require more than training and predicting: data often needs to be pre-processed and post-processed, sometimes in multiple steps. Thus, developers and data scientists have to train and deploy not just a single algorithm, but a sequence of algorithms that will collaborate in delivering predictions from raw data. In this session, we'll first show you how to use Apache Spark MLlib to build ML pipelines, and we'll discuss scaling options when datasets grow huge. We'll then show how to implement inference pipelines on Amazon SageMaker, using Apache Spark and Scikit-learn, as well as ML algorithms implemented by Amazon.
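The pre-process → predict → post-process sequencing described above is, at its core, function composition. A framework-neutral sketch in plain Python (the individual steps are trivial stand-ins for real feature engineering and real models):

```python
# Framework-neutral sketch of an inference pipeline: each stage is a
# callable, and the pipeline applies them in order to the raw input.
# The concrete steps here are stand-ins, not real models.

def make_pipeline(*steps):
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

preprocess  = lambda raw: [float(v) for v in raw.split(",")]   # parse a CSV row
predict     = lambda feats: sum(feats) / len(feats)            # stand-in "model"
postprocess = lambda score: {"label": "high" if score > 1.0 else "low",
                             "score": score}

pipeline = make_pipeline(preprocess, predict, postprocess)
result = pipeline("0.5,2.5,3.0")  # -> {"label": "high", "score": 2.0}
```

Spark MLlib's `Pipeline` and SageMaker inference pipelines formalize the same idea: a declared sequence of stages deployed and invoked as a single unit.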
Building Machine Learning Inference Pipelines at Scale (July 2019) (Julien Simon)
Talk at OSCON, Portland, 18/07/2019
Real-life Machine Learning applications require more than a single model. Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc. Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker)
Build, Deploy, and Serve Machine-Learning Models on Streaming Data Using Amaz... (Amazon Web Services)
As data grows exponentially in organizations, there is an increasing need to use machine learning (ML) to gather insights from this data at scale and to use those insights to perform real-time predictions on incoming data. In this workshop, we walk you through how to train an Apache Spark model using an Amazon SageMaker notebook that is pointed at Apache Livy running on an Amazon EMR Spark cluster. We also show you how to host the Spark model on Amazon SageMaker to serve a RESTful inference API. Finally, we show you how to use the RESTful API to serve real-time predictions on streaming data from Amazon Kinesis Data Streams.
Build, Train, and Deploy ML Models with Amazon SageMaker (AIM410-R2) - AWS re... (Amazon Web Services)
Come and help build the most accurate text classification model possible. A fully managed machine learning (ML) platform, Amazon SageMaker enables developers and data scientists to build, train, and deploy ML models using built-in or custom algorithms. In this workshop, you learn how to leverage Keras/TensorFlow deep learning frameworks to build a text classification solution using custom algorithms on Amazon SageMaker. You package custom training code in a Docker container, test it locally, and then use Amazon SageMaker to train a deep learning model. You then try to iteratively improve the model to achieve a higher level of accuracy. Finally, you deploy the model in production so different applications within the company can start leveraging this ML classification service. Please note that to actively participate in this workshop, you need an active AWS account with admin-level IAM permissions to Amazon SageMaker, Amazon Elastic Container Registry (Amazon ECR), and Amazon S3.
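To illustrate the container-packaging step described above, a minimal Dockerfile for a custom SageMaker training image might look like the following. This is a sketch under the assumptions that a `train.py` script (with a `#!/usr/bin/env python` shebang) reads its data from the conventional `/opt/ml` input paths and writes the model to `/opt/ml/model`; the base image and dependency versions are placeholders:

```dockerfile
FROM python:3.9-slim

# Frameworks used by the training script (pin versions in practice)
RUN pip install --no-cache-dir tensorflow

# SageMaker runs the container as "docker run <image> train",
# so install the training script as an executable named "train" on PATH.
COPY train.py /opt/program/train
RUN chmod +x /opt/program/train
ENV PATH="/opt/program:${PATH}"
```

Testing the image locally with `docker run <image> train` against a mounted `/opt/ml` directory, as the workshop suggests, catches most packaging mistakes before a paid training job is launched.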
Big Data Meets Machine Learning: Architecting Spark Environment for Data Scie... (Amazon Web Services)
In this code-level session, we show you how to integrate your Apache Spark application with Amazon SageMaker. We'll include details on how networking, architecture, and code execute across the components of the solution. We will also dive deep into how to perform data exploration and feature engineering on the Spark cluster, starting training jobs from Spark, integrating training jobs in Spark pipelines, and more. Amazon SageMaker, our fully managed machine learning platform, comes with pre-built algorithms and popular deep learning frameworks. Amazon SageMaker also includes an Apache Spark library that you can use to easily train models from your Spark clusters.
Introducing AWS Glue: a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. In this session, we will introduce key ETL features of AWS Glue and cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. Join us as we walk through the process of building scalable, efficient, and serverless ETL pipelines.
Speaker: Craig Roach, Solutions Architect, AWS
Build, Train, and Deploy Machine Learning for the Enterprise with Amazon Sage... (Amazon Web Services)
Machine learning (ML) is rapidly being adopted by enterprises, enabling them to be nimble and align technical solutions to solve real-world business problems. ML use cases include diagnosis and research in healthcare, financial fraud detection, natural language understanding (NLU), and accurate statistics in sports. Amazon SageMaker is a fully managed platform that enables developers to build, train, and deploy enterprise-scale ML models quickly and easily. In this workshop, we build an ML model using Amazon SageMaker's built-in algorithms and frameworks. We train the model to achieve a high level of accurate predictions, then we deploy the model in production to achieve best results. Gain an understanding of how Amazon SageMaker removes the complexity and barriers to using and deploying ML models.
Serverless AI with Scikit-Learn (GPSWS405) - AWS re:Invent 2018 (Amazon Web Services)
Take advantage of serverless technologies for artificial intelligence (AI) by making a prediction on the fly. There is no model hosting and no servers to maintain. In this session, we show how to train a model in scikit-learn, an open source machine learning library for Python. Then we load and call the trained model from an AWS Lambda function, and finally we demonstrate how to load the library and send the data for prediction.
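The load-and-call pattern described above can be sketched in plain Python. In the real session the pickle holds a fitted scikit-learn estimator; here a trivial stand-in object keeps the sketch dependency-free, and the handler name follows the usual Lambda convention:

```python
# Sketch of serving a pickled model from an AWS Lambda-style handler.
# The model is loaded once at module import, so warm invocations reuse it.
# StandInModel replaces the fitted scikit-learn estimator for illustration;
# predict() mimics the sklearn predict(X) -> labels interface.
import json
import pickle

class StandInModel:
    def predict(self, features):
        return [1 if sum(f) > 0 else 0 for f in features]

# In Lambda, the pickle would be bundled with the deployment package or
# fetched from S3 into /tmp; here we round-trip it in memory.
MODEL = pickle.loads(pickle.dumps(StandInModel()))

def lambda_handler(event, context):
    features = json.loads(event["body"])      # e.g. '[[0.2, 0.5], [-1.0, 0.1]]'
    predictions = MODEL.predict(features)
    return {"statusCode": 200, "body": json.dumps(predictions)}

resp = lambda_handler({"body": "[[0.2, 0.5], [-1.0, 0.1]]"}, None)
# resp["body"] -> "[1, 0]"
```

Loading at module scope rather than inside the handler is the key design choice: it amortizes the deserialization cost across invocations, which is what makes per-request prediction latency viable without any hosted model server.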
End to End Model Development to Deployment Using SageMaker (Amazon Web Services)
In this session we develop an image classification model (a convolutional neural network, or CNN). We start with some theory about CNNs and explore how they learn from an image, then proceed to a hands-on lab. We use Amazon SageMaker to develop the model in Python, train it, and finally create an endpoint and run inference against it. We use a custom Conda kernel for this exercise and look at leveraging SageMaker features like Lifecycle Configurations to help us prepare the notebook before launch. Finally, we deploy the model to production and monitor various endpoint performance parameters, such as the endpoint's CPU/memory usage and model inference metrics.
Level: 200-300
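Once a SageMaker endpoint like the one described above is in service, clients call it through the SageMaker Runtime API. The sketch below keeps the request construction as a pure function (so it can be exercised without AWS credentials); the endpoint name and payload shape are hypothetical.

```python
import json

def build_invoke_args(endpoint_name, instances):
    """Build the keyword arguments for SageMaker Runtime's invoke_endpoint.
    Kept as a pure function so it can be tested without AWS credentials."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps({"instances": instances}),
    }

# Against a real endpoint (the name below is hypothetical) the call would be:
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(
#       **build_invoke_args("image-classifier-endpoint", [[0.1, 0.2, 0.3]]))
#   predictions = json.loads(response["Body"].read())
```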
Building Serverless ETL Pipelines with AWS Glue - AWS Summit Sydney 2018 - Amazon Web Services
In this session we will introduce key ETL features of AWS Glue and cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We will also discuss how to build scalable, efficient, and serverless ETL pipelines.
Ben Thurgood, Solutions Architect, Amazon Web Services
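A typical Glue ETL step renames and selects fields on each record. The pure-Python miniature below mimics that mapping so it runs without a Glue/Spark environment; a real job would use the `awsglue` library as shown in the comments, and the field names are hypothetical.

```python
def apply_mapping(records, mappings):
    """Rename and select fields according to (source, target) pairs,
    mimicking what Glue's ApplyMapping transform does to each record."""
    return [
        {target: rec[source] for source, target in mappings if source in rec}
        for rec in records
    ]

# In a real Glue job the equivalent step uses the awsglue library:
#   from awsglue.transforms import ApplyMapping
#   mapped = ApplyMapping.apply(frame=dynamic_frame, mappings=[
#       ("userid", "long", "user_id", "long"),
#       ("name", "string", "name", "string"),
#   ])

raw = [{"userid": 42, "name": "ada", "debug_flag": True}]
clean = apply_mapping(raw, [("userid", "user_id"), ("name", "name")])
```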
BDA301 Working with Machine Learning in Amazon SageMaker: Algorithms, Models,... - Amazon Web Services
Today, organizations are using machine learning (ML) to address a host of business challenges, from product recommendations and pricing predictions, to tracking disease progression and demand forecasting. Until recently, developing these ML models took a significant amount of time and effort, and it required expertise in this field. In this session, we introduce you to Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models more quickly and easily. We walk through the features and benefits of Amazon SageMaker and discuss the uniquely designed ML algorithms that allow for optimized model training, to get you to production fast.
Machine Learning and Amazon SageMaker: Algorithms, Models, and Inference - MCL... - Amazon Web Services
Today, organizations are using machine learning (ML) to address a host of business challenges, from product recommendations and price prediction to tracking disease progression and demand forecasting. Until recently, developing these ML models took a significant amount of time and effort and required expertise in the field. In this session, we introduce Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models more quickly and easily. We walk through the features and benefits of Amazon SageMaker and discuss the purpose-built ML algorithms that allow for optimized model training, to get you to production fast.
Automate all your EMR-related activities - Eitan Sela
This presentation was part of "AWS Big Data Demystified #5 | Automate all your EMR related activities" meetup.
In this presentation I shared, from my own experience, how we automated EMR cluster creation for scheduled ETL Spark jobs, submitted ad-hoc Spark steps, and created EMR clusters on developer request through Slack, with the help of the chatbot developed at WeissBeerger.
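Submitting a Spark step to a running EMR cluster, as described above, goes through the `add_job_flow_steps` API. The helper below builds the step definition as a pure function; the cluster ID and S3 paths in the comments are hypothetical.

```python
def build_spark_step(name, script_s3_path, extra_args=()):
    """Build an EMR step definition that submits a Spark job through
    command-runner.jar, in the shape expected by emr.add_job_flow_steps."""
    return {
        "Name": name,
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     script_s3_path, *extra_args],
        },
    }

# With AWS credentials configured (cluster id and S3 path are hypothetical):
#   import boto3
#   emr = boto3.client("emr")
#   emr.add_job_flow_steps(
#       JobFlowId="j-XXXXXXXXXXXXX",
#       Steps=[build_spark_step("nightly-etl", "s3://my-bucket/jobs/etl.py")])
```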
AWS Machine Learning Week SF: End to End Model Development Using SageMaker - Amazon Web Services
AWS Machine Learning Week at the San Francisco Loft: End to End Model Development Using SageMaker
Presenter: Kris Skrinak
Enabling Deep Learning in IoT Applications with Apache MXNet - Amazon Web Services
by Pratap Ramamurthy, SDM, and Hagay Lupesko, SDM
Many state-of-the-art deep learning models have hefty compute, storage, and power requirements that make them impractical or difficult to use on resource-constrained devices. In this TechTalk, you'll learn why Apache MXNet, an open-source library for deep learning, is IoT-friendly in many ways. In addition, you'll learn how services like Amazon SageMaker, AWS Lambda, AWS Greengrass, and AWS DeepLens make it easy to deploy MXNet models on edge devices.
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
Level: 200-300
Speaker: Randall Hunt - Sr. Technical Evangelist, AWS
How to build Forecasting services leveraging ML and deep learn... algorithms - Amazon Web Services
Forecasting is an important process for a great many companies and is used in various areas to try to accurately predict the growth and distribution of a product, the use of resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
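One common pre-processing step for data with a time component is serializing each series into JSON Lines, the input layout used by forecasting algorithms such as SageMaker's DeepAR. The sketch below is a minimal illustration; the sales figures are made up.

```python
import json
from datetime import date

def to_jsonlines(series, start):
    """Serialize each time series as one JSON object per line, the layout
    used by forecasting algorithms such as SageMaker's DeepAR
    ({"start": ..., "target": [...]})."""
    return "\n".join(
        json.dumps({"start": start.isoformat(), "target": target})
        for target in series
    )

# Hypothetical daily demand for two products.
daily_sales = [[12, 15, 11, 18], [7, 9, 8, 10]]
payload = to_jsonlines(daily_sales, date(2020, 1, 1))
```

The resulting `payload` would then be uploaded to S3 as the training channel for the forecasting job.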
Big Data for Startups: how to build Big Data applications in Server... mode - Amazon Web Services
The variety and volume of data created every day keeps accelerating and represents an unrepeatable opportunity to innovate and create new startups.
However, managing large amounts of data can appear complex: building large-scale Big Data clusters seems like an investment accessible only to established companies. But the elasticity of the cloud, and serverless services in particular, allow us to break through these limits.
Let's see how it is possible to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release speed and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetize Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Lea... services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployments of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; until now they have often involved manual activities, occasionally leading to application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to the detection of fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14, from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, fully leveraging the potential of the AWS cloud while protecting existing VMware investments.
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing considerable gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, compounded by performance risks that can be introduced when moving applications out of on-premises data centers.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will learn how to build a complete serverless application that uses QLDB's capabilities.
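Applications talk to QLDB through PartiQL statements executed inside a driver-managed transaction. The session uses NodeJS; the sketch below shows the same idea with the Python driver in the comments, where the ledger name, table, and document fields are all hypothetical.

```python
def build_insert_statement(table):
    """PartiQL INSERT statement for QLDB's execute_statement; the document
    is passed as a bound parameter (?) rather than interpolated into SQL."""
    return f"INSERT INTO {table} ?"

# With the pyqldb driver and an existing ledger (names are hypothetical):
#   from pyqldb.driver.qldb_driver import QldbDriver
#   driver = QldbDriver("supply-chain")
#   driver.execute_lambda(lambda txn: txn.execute_statement(
#       build_insert_statement("Shipments"),
#       {"sku": "ABC-123", "status": "in-transit"}))
```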
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths debunked - Amazon Web Services
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive deep into the architecture and demonstrate how to fully leverage the potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
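Deploying a container on ECS starts by registering a task definition. The helper below builds the request body as a pure function so it can be tested offline; the family name, image, and sizing are illustrative, not values from the session.

```python
def build_task_definition(family, image, cpu=256, memory=512):
    """Build the request body for ecs.register_task_definition describing a
    single-container, Fargate-compatible task."""
    return {
        "family": family,
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": str(cpu),
        "memory": str(memory),
        "containerDefinitions": [
            {"name": family, "image": image, "essential": True},
        ],
    }

# Registering it (the client call needs AWS credentials):
#   import boto3
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**build_task_definition("web", "nginx:latest"))
```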