Apache Kafka is a popular stream-processing platform, but it’s no secret that it can be tough to set up, manage, and scale. Amazon Managed Streaming for Kafka (Amazon MSK) can help remove some of that toil for you. In this session, you learn about new Amazon MSK features and capabilities. You also get a glimpse under the hood, giving you a better understanding of how Amazon MSK operationalizes Apache Kafka so you don't have to. We compare and contrast Amazon Kinesis Data Streams and Apache Kafka (with/without MSK) and show how to lift-and-shift your workload into Amazon MSK with minimal downtime.
Peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns of our Memcached and Redis offerings and how customers have used them for in-memory operations and achieved improved latency and throughput for applications. During this session, we review best practices, design patterns, and anti-patterns related to Amazon ElastiCache.
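One of the most common ElastiCache design patterns covered in sessions like this is cache-aside (lazy loading). A minimal Python sketch, using a plain dict to stand in for a Redis or Memcached client; the `db_lookup` function and key names are illustrative, not part of any AWS API:

```python
import time

class CacheAside:
    """Cache-aside (lazy loading) with a simple TTL, as commonly used
    in front of a database with ElastiCache. `loader` stands in for the
    authoritative data source (e.g. an RDS query)."""

    def __init__(self, loader, ttl_seconds=300):
        self.loader = loader
        self.ttl = ttl_seconds
        self.backend = {}  # key -> (value, expires_at); a stand-in for Redis/Memcached

    def get(self, key):
        hit = self.backend.get(key)
        if hit is not None and hit[1] > time.time():
            return hit[0]                      # cache hit: skip the database
        value = self.loader(key)               # cache miss: read through to the DB
        self.backend[key] = (value, time.time() + self.ttl)
        return value

# usage sketch: only the first read reaches the database
calls = []
def db_lookup(key):
    calls.append(key)
    return f"row-for-{key}"

cache = CacheAside(db_lookup, ttl_seconds=60)
cache.get("user:42")   # miss: hits the database
cache.get("user:42")   # hit: served from cache
```

The TTL bounds staleness, which is the usual trade-off of lazy loading versus write-through.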
Architecture Patterns for Multi-Region Active-Active Applications (ARC209-R2)...Amazon Web Services
Do you need your applications to extend across multiple regions? Whether for disaster recovery, data sovereignty, data locality, or extremely high availability, many AWS customers choose to deploy services across regions. Join us as we explore how to design and succeed with active-active multi-region architectures. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Learn to Use Databricks for Data Science - Databricks
Data scientists face numerous challenges throughout the data science workflow that hinder productivity. As organizations continue to become more data-driven, a collaborative environment is more critical than ever: one that provides easier access and visibility into the data, reports and dashboards built against the data, reproducibility, and insights uncovered within the data. Join us to hear how Databricks' open and collaborative platform simplifies data science by enabling you to run all types of analytics workloads, from data preparation to exploratory analysis and predictive analytics, at scale, all on one unified platform.
High Performance Data Streaming with Amazon Kinesis: Best Practices (ANT322-R...Amazon Web Services
The document discusses best practices for high performance data streaming using Amazon Kinesis. It covers introducing Amazon Kinesis and its capabilities, different consumer types (standard vs enhanced fan-out), considerations for scaling a Kinesis data stream, and Comcast's streaming data platform called Headwaters. The key points are how to address producer limits, consumer limits, shard management, consumption speed, and maximum acceptable latency.
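The shard-management questions above come down to how records are routed: Kinesis takes the MD5 hash of the partition key and matches the resulting 128-bit integer against each shard's hash-key range. A sketch of that routing for an evenly split stream (real streams can have uneven ranges after resharding):

```python
import hashlib

def shard_for_key(partition_key, num_shards):
    """Mimic Kinesis record routing: MD5 of the partition key is read as
    a 128-bit integer and matched to a shard's hash-key range. Assumes
    the stream's hash space is split evenly across shards."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(h // range_size, num_shards - 1)
```

This is why a low-cardinality partition key creates hot shards: every record with the same key deterministically lands on the same shard, no matter how many shards you add.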
This document introduces infrastructure as code (IaC) using Terraform and provides examples of deploying infrastructure on AWS including:
- A single EC2 instance
- A single web server
- A cluster of web servers using an Auto Scaling Group
- Adding a load balancer using an Elastic Load Balancer
It also discusses Terraform concepts and syntax like variables, resources, outputs, and interpolation. The target audience is people who deploy infrastructure on AWS or other clouds.
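Terraform itself is written in HCL, but the concepts the document lists (variables, resources, and interpolation) can be sketched in Python: resolve `${var.name}` references inside a resource's arguments. The resource attributes and variable names below are illustrative, modeled on a typical `aws_instance` example:

```python
import re

variables = {"instance_type": "t2.micro", "name": "web"}

resource = {  # stand-in for an aws_instance resource block
    "ami": "ami-0c55b159cbfafe1f0",
    "instance_type": "${var.instance_type}",
    "tags": "${var.name}-server",
}

def interpolate(value, variables):
    """Replace ${var.x} references with variable values, mirroring
    Terraform's classic interpolation syntax."""
    return re.sub(r"\$\{var\.(\w+)\}", lambda m: variables[m.group(1)], value)

resolved = {k: interpolate(v, variables) for k, v in resource.items()}
```

Changing a variable in one place and having every reference resolve consistently is the core value of parameterized infrastructure code.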
Migrating Your Databases to AWS - Deep Dive on Amazon RDS and AWS Database Mi...Amazon Web Services
The document discusses migrating databases to AWS using Amazon Relational Database Service (RDS) and AWS Database Migration Service (DMS). It outlines that RDS provides a managed relational database service and discusses engines, availability, scaling, backups and security. It then discusses DMS for migrating or replicating databases to AWS targets like RDS and Redshift. The Schema Conversion Tool is also covered for converting schemas during migrations. Real customer examples like Expedia migrating from SQL Server to AWS are provided to illustrate use cases.
AWS Core Services Overview, Immersion Day Huntsville 2019Amazon Web Services
The document provides an overview of AWS core services including compute, storage, database, analytics, machine learning, IoT, and mobile services. It discusses AWS' breadth and depth of services across infrastructure, application services, management tools, and developer tools. It also highlights AWS' leadership in cloud computing with the largest customer base and most comprehensive set of services and features.
Metrics-Driven Performance Tuning for AWS Glue ETL Jobs (ANT326) - AWS re:Inv...Amazon Web Services
AWS Glue provides a horizontally scalable platform for running ETL jobs against a wide variety of data sources. In this builder's session, we cover techniques for understanding and optimizing the performance of your jobs using AWS Glue job metrics. Learn how to identify bottlenecks on the driver and executors, identify and fix data skew, tune the number of DPUs, and address common memory errors.
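The data-skew diagnosis described above boils down to comparing partition sizes: if one Spark partition is far larger than the mean, one executor does most of the work while the rest sit idle. A minimal sketch (partition sizes here are made-up record counts of the kind job metrics surface):

```python
def skew_ratio(partition_sizes):
    """Ratio of the largest partition to the mean partition size.
    A ratio far above 1 indicates data skew."""
    mean = sum(partition_sizes) / len(partition_sizes)
    return max(partition_sizes) / mean

# e.g. records per partition, as surfaced by executor/job metrics
balanced = [100, 110, 95, 105]
skewed   = [100, 100, 100, 4000]
```

Once a high ratio is confirmed, the usual fixes are repartitioning on a higher-cardinality key or salting the skewed key.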
AWS Summit London 2019 - Containers on AWS - Massimo Ferre'
This document discusses various options for running containers on AWS, including EC2 instances, ECS, EKS, Lambda, and Fargate. It provides examples of deploying a sample application called Yelb using each option. EKS is highlighted as providing a managed Kubernetes control plane while allowing customers to manage their own worker nodes. ECS is noted as having deep integration with other AWS services. The document concludes that EKS is well suited for hybrid deployments while ECS provides a more out-of-the-box experience through tighter AWS platform integration.
Prometheus is an open-source monitoring system that collects metrics from instrumented systems and applications and allows for querying and alerting on metrics over time. It is designed to be simple to operate, scalable, and provides a powerful query language and multidimensional data model. Key features include no external dependencies, metrics collection by scraping endpoints, time-series storage, and alerting handled by the AlertManager with support for various integrations.
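The scrape-based collection described above uses a simple text exposition format: one metric per line, with optional labels and a numeric value. A small parser for the common case (escaping and timestamps are omitted for brevity; the sample line follows the well-known `http_requests_total` example):

```python
import re

def parse_sample(line):
    """Parse one line of the Prometheus text exposition format into
    (metric name, labels dict, value). Common case only: no escaping,
    no timestamps."""
    m = re.match(r'^(\w+)(?:\{(.*)\})?\s+([0-9.eE+-]+)$', line.strip())
    name, raw_labels, value = m.group(1), m.group(2), float(m.group(3))
    labels = {}
    if raw_labels:
        for key, val in re.findall(r'(\w+)="([^"]*)"', raw_labels):
            labels[key] = val
    return name, labels, value

sample = 'http_requests_total{method="post",code="200"} 1027'
```

The labels are what make the data model multidimensional: each unique label combination is its own time series.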
Big Data means big hardware, and the less of it we can use to do the job properly, the better the bottom line. Apache Kafka makes up the core of our data pipelines at many organizations, including LinkedIn, and we are on a perpetual quest to squeeze as much as we can out of our systems, from Zookeeper, to the brokers, to the various client applications. This means we need to know how well the system is running, and only then can we start turning the knobs to optimize it. In this talk, we will explore how best to monitor Kafka and its clients to assure they are working well. Then we will dive into how to get the best performance from Kafka, including how to pick hardware and the effect of a variety of configurations in both the broker and clients. We’ll also talk about setting up Kafka for no data loss.
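The "no data loss" setup mentioned at the end combines a handful of producer and broker settings. The keys below are real Kafka configuration names; the values are typical choices rather than the only valid ones, and the checker is just a sketch of how the settings must fit together:

```python
# Settings commonly recommended for a no-data-loss Kafka deployment.
producer_config = {
    "acks": "all",                    # wait for all in-sync replicas to ack
    "enable.idempotence": "true",     # avoid duplicates on producer retry
    "retries": "2147483647",          # retry transient failures indefinitely
}
broker_config = {
    "replication.factor": 3,
    "min.insync.replicas": 2,         # acks=all must reach at least 2 replicas
    "unclean.leader.election.enable": "false",  # never elect an out-of-sync leader
}

def durability_ok(producer, broker):
    """Sanity-check that the settings fit together: acks=all is only
    meaningful with min.insync.replicas >= 2, and the replication factor
    must exceed it so one broker can fail without blocking writes."""
    return (
        producer["acks"] == "all"
        and broker["min.insync.replicas"] >= 2
        and broker["replication.factor"] > broker["min.insync.replicas"]
        and broker["unclean.leader.election.enable"] == "false"
    )
```

The headroom between `replication.factor` and `min.insync.replicas` is the key operational detail: with 3 and 2, a single broker failure degrades redundancy but keeps the topic writable.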
Jay Kreps is a Principal Staff Engineer at LinkedIn where he is the lead architect for online data infrastructure. He is among the original authors of several open source projects including a distributed key-value store called Project Voldemort, a messaging system called Kafka, and a stream processing system called Samza. This talk gives an introduction to Apache Kafka, a distributed messaging system. It will cover both how Kafka works, as well as how it is used at LinkedIn for log aggregation, messaging, ETL, and real-time stream processing.
Centralizing DNS Management in a Multi-Account Environment (NET322-R2) - AWS ...Amazon Web Services
DNS management and consistent naming across multiple VPCs and multiple accounts can often be a challenge. In this session, we implement a solution that provides a unified namespace across on-premises and AWS environments. Bring your laptop.
Virtual Flink Forward 2020: Netflix Data Mesh: Composable Data Processing - J...Flink Forward
Netflix processes trillions of events and petabytes of data a day in the Keystone data pipeline, which is built on top of Apache Flink. As Netflix has scaled up original productions annually enjoyed by more than 150 million global members, data integration across the streaming service and the studio has become a priority. Scalably integrating data across hundreds of different data stores in a way that enables us to holistically optimize cost, performance and operational concerns presented a significant challenge. Learn how we expanded the scope of the Keystone pipeline into the Netflix Data Mesh, our real-time, general-purpose, data transportation platform for moving data between Netflix systems. The Keystone Platform’s unique approach to declarative configuration and schema evolution, as well as our approach to unifying batch and streaming data and processing will be covered in depth.
Building Serverless Analytics Pipelines with AWS Glue (ANT308) - AWS re:Inven...Amazon Web Services
The document discusses AWS Glue and how it is used by Realtor.com for building serverless analytics pipelines. It provides an overview of AWS Glue, its features and improvements. It then discusses how Realtor.com uses a template-driven transformation approach with AWS Glue to process and transform raw data into structured data for analytics. Templates are implemented as Python code and allow generalized processing of different data formats and volumes.
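The template-driven approach described above can be sketched as a registry of transform callables keyed by data format, so one generic driver processes many different inputs. The template name and record shape below are made up for illustration; Realtor.com's actual templates are not public:

```python
# Toy version of a template-driven transformation layer.
TEMPLATES = {}

def template(fmt):
    """Decorator that registers a transform under a format name."""
    def register(fn):
        TEMPLATES[fmt] = fn
        return fn
    return register

@template("csv_listing")
def transform_listing(record):
    # normalize one raw record into the structured analytics schema
    price = float(record["price"])
    return {"listing_id": record["id"], "price_usd": round(price, 2)}

def run(fmt, records):
    """Generic driver: look up the template for a format and apply it."""
    transform = TEMPLATES[fmt]
    return [transform(r) for r in records]

rows = run("csv_listing", [{"id": "a1", "price": "350000.5"}])
```

Adding support for a new data format then means registering one new template, not writing a new pipeline.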
Amazon DynamoDB Under the Hood: How We Built a Hyper-Scale Database (DAT321) ...Amazon Web Services
Come to this session to learn how Amazon DynamoDB was built as the hyper-scale database for internet-scale applications. In January 2012, Amazon launched DynamoDB, a cloud-based NoSQL database service designed from the ground up to support extreme scale, with the security, availability, performance, and manageability needed to run mission-critical workloads. This session discloses for the first time the underpinnings of DynamoDB, and how we run a fully managed nonrelational database used by more than 100,000 customers. We cover the underlying technical aspects of how an application works with DynamoDB for authentication, metadata, storage nodes, streams, backup, and global replication.
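One of the storage-node mechanics this session covers is replicated writes: each item lives on multiple storage nodes, and a write is acknowledged once enough replicas have it. A toy quorum sketch (the classes are illustrative, not DynamoDB's real implementation; the 3-replica, 2-ack shape mirrors the common description of its replication):

```python
class StorageNode:
    """Stand-in for a DynamoDB storage node holding item copies."""
    def __init__(self):
        self.items = {}

    def put(self, key, value):
        self.items[key] = value
        return True  # ack

def quorum_write(nodes, key, value, quorum=2):
    """Write to every replica; succeed once a quorum has acknowledged."""
    acks = sum(1 for n in nodes if n.put(key, value))
    return acks >= quorum

nodes = [StorageNode() for _ in range(3)]
ok = quorum_write(nodes, "user#42", {"name": "Ada"})
```

Acknowledging at a quorum rather than all replicas is what lets a single slow or failed node not stall writes.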
MLOps and Reproducible ML on AWS with Kubeflow and SageMaker - Provectus
Looking to implement MLOps using AWS services and Kubeflow? Come and learn about machine learning from the experts of Provectus and Amazon Web Services (AWS)!
Businesses recognize that machine learning projects are important, but success requires more than just building and deploying models. Successful ML projects entail a complete lifecycle involving ML, DevOps, and data engineering, and are built on top of solid ML infrastructure.
AWS and Amazon SageMaker provide a foundation for building machine learning infrastructure, while Kubeflow is a great open-source project that does not get enough credit in the AWS community. In this webinar, we show how to design and build an end-to-end ML infrastructure on AWS.
Agenda
- Introductions
- Case Study: GoCheck Kids
- Overview of AWS Infrastructure for Machine Learning
- Provectus ML Infrastructure on AWS
- Experimentation
- MLOps
- Feature Store
Intended Audience
Technology executives & decision makers, manager-level tech roles, data engineers & data scientists, ML practitioners & ML engineers, and developers
Presenters
- Stepan Pushkarev, Chief Technology Officer, Provectus
- Qingwei Li, ML Specialist Solutions Architect, AWS
Feel free to share this presentation with your colleagues and don't hesitate to reach out to us at info@provectus.com if you have any questions!
REQUEST WEBINAR: https://provectus.com/webinar-mlops-and-reproducible-ml-on-aws-with-kubeflow-and-sagemaker-aug-2020/
A Thorough Comparison of Delta Lake, Iceberg and Hudi - Databricks
Recently, a set of modern table formats such as Delta Lake, Hudi, and Iceberg has emerged. Together with the Hive Metastore, these table formats aim to solve long-standing problems of the traditional data lake with features like ACID transactions, schema evolution, upserts, time travel, and incremental consumption.
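The time-travel feature these formats share rests on one idea: every commit creates a new immutable snapshot, and readers can query any historical version. A toy illustration (the real formats track data files in a transaction log; here a snapshot is just a tuple of rows):

```python
class VersionedTable:
    """Toy versioned table: commits append snapshots, reads pick a version."""

    def __init__(self):
        self.snapshots = [()]  # version 0: empty table

    def commit(self, rows):
        # a commit never mutates old snapshots; it appends a new one
        self.snapshots.append(tuple(rows))

    def read(self, version=None):
        if version is None:
            version = len(self.snapshots) - 1  # latest by default
        return list(self.snapshots[version])

t = VersionedTable()
t.commit(["row1"])            # version 1
t.commit(["row1", "row2"])    # version 2
```

Because old snapshots are immutable, concurrent readers are never affected by in-flight writes, which is also what makes the ACID guarantees tractable on object storage.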
Applying DevOps to Databricks can be a daunting task. In this talk, it is broken down into bite-size chunks. Common DevOps subject areas are covered, including CI/CD (Continuous Integration/Continuous Deployment), IaC (Infrastructure as Code), and build agents.
We explore how to apply DevOps to Databricks (in Azure), primarily using Azure DevOps tooling. As many Spark/Databricks users are Python users, we focus on the Databricks REST API (using Python) to perform our tasks.
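Calling the Databricks REST API from Python is mostly about constructing an authenticated request. To keep this sketch self-contained it only builds the request rather than sending it; the workspace host and token below are placeholders, while `/api/2.0/jobs/list` is a real Jobs API endpoint:

```python
def databricks_request(host, token, endpoint):
    """Build the URL and auth header for a Databricks REST API call.
    Databricks uses bearer-token auth with a personal access token."""
    url = f"https://{host}/api/2.0/{endpoint}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = databricks_request(
    "adb-1234567890123456.7.azuredatabricks.net",  # placeholder workspace host
    "dapiXXXXXXXX",                                # placeholder access token
    "jobs/list",
)
# sending it would be e.g. requests.get(url, headers=headers)
```

In a CI/CD pipeline the token would come from a secret store (e.g. an Azure DevOps variable group), never from source control.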
AWS Fargate is a technology for Amazon ECS and EKS* that allows you to run containers without having to manage servers or clusters. Join us to learn more about how Fargate works, why we built it, and how you can get started using it to run containers today.
Introducing AWS DataSync - Simplify, automate, and accelerate online data tra...Amazon Web Services
SFTP is used for the exchange of data across many industries, including financial services, healthcare, and retail. In this session, we will introduce you to AWS Transfer for SFTP, a service that helps you easily migrate file transfer workflows to AWS, without needing to modify applications or manage SFTP servers. We will demonstrate the product and talk about how to migrate your users so they continue to use their existing SFTP clients and credentials, while the data they access is stored in S3. You will also learn how FINRA is using this new service in conjunction with their Data Lake on AWS.
Best Practices for Building a Data Lake in Amazon S3 and Amazon Glacier, with...Amazon Web Services
Learn how to build a data lake for analytics in Amazon S3 and Amazon Glacier. In this session, we discuss best practices for data curation, normalization, and analysis on Amazon object storage services. We examine ways to reduce or eliminate costly extract, transform, and load (ETL) processes using query-in-place technology, such as Amazon Athena and Amazon Redshift Spectrum. We also review custom analytics integration using Apache Spark, Apache Hive, Presto, and other technologies in Amazon EMR. You'll also get a chance to hear from Airbnb & Viber about their solutions for Big Data analytics using S3 as a data lake.
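The query-in-place pruning mentioned above depends on how objects are laid out: Athena and Redshift Spectrum skip data using Hive-style partition paths, so a `WHERE year = ... AND month = ...` predicate only scans one S3 prefix. A sketch of generating such keys (the table and file names are made up):

```python
def partitioned_key(table, year, month, filename):
    """Build a Hive-style partitioned S3 object key: partition columns
    are encoded as name=value path segments that engines can prune on."""
    return f"{table}/year={year:04d}/month={month:02d}/{filename}"

key = partitioned_key("events", 2019, 7, "part-0000.parquet")
# objects would live under e.g. s3://my-data-lake/events/year=2019/month=07/
```

Combined with a columnar format like Parquet, partition pruning is often the single biggest lever for reducing Athena scan costs.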
AWS Security, Identity, & Compliance - An Overview: AWS Security Week at the San Francisco Loft
Presenter: William Reid, CISM, FIP
Head of Security and Compliance Solution Architecture, AWS
Prometheus: Monitoring, by Pravin Magdum of Crevise. The presentation was given at the #doppa17 DevOps++ Global Summit 2017. All copyrights are reserved by the author.
A deep dive into Amazon MSK - ADB206 - Chicago AWS Summit - Amazon Web Services
Apache Kafka is a popular, open-source technology for collecting, processing, and analyzing streaming data in real-time. Amazon MSK is a fully managed service that removes the complexities of managing Kafka clusters so that you can focus on building real-time applications. In this session, we provide an overview of Amazon MSK and then discuss how to get started. We then look at some best practices and top tips, and walk through how to decide whether to choose Amazon MSK, Amazon Kinesis, or a mix of both to address your data streaming use cases.
[NEW LAUNCH!] Introducing Amazon Managed Streaming for Kafka (Amazon MSK) (AN...Amazon Web Services
Discover the power of running Apache Kafka on a fully managed AWS service. In this session, we describe how Amazon Managed Streaming for Kafka (Amazon MSK) runs Apache Kafka clusters for you, demo Amazon MSK and a migration, show you how to get started, and walk through other important details about the new service.
Building well architected .NET applications - SVC209 - Atlanta AWS Summit - Amazon Web Services
Customers have a wide range of choices for designing and deploying .NET applications on AWS. In this session, we discuss key points for designing and building .NET applications using the AWS Well-Architected Framework. The AWS Well-Architected Framework provides a consistent approach for customers to build secure, high-performing, resilient, and efficient infrastructure that scales with your needs over time. We cover traditional .NET application architectures, .NET CI/CD architectures, and modern .NET architectures that leverage containers and serverless technologies on AWS.
Architecting SAP on Amazon Web Services - SVC216 - Chicago AWS Summit - Amazon Web Services
In this session, meet SAP-on-AWS experts to discuss what it takes to implement an SAP landscape on AWS. Starting with a typical customer requirement and sizing, we talk through the process involved in choosing the right organization and procedures to deploy SAP on AWS, from compute configurations and storage to security, management, and monitoring.
Fast-Track Your Application Modernisation Journey with Containers - AWS Summi...Amazon Web Services
The document discusses containers and orchestration platforms like Amazon ECS and Amazon EKS. It introduces the Mythical Misfits application that will be used in the hands-on lab. The lab will involve setting up environments for the monolithic and microservices versions of the application using containers and either ECS or EKS. Participants will build Docker images, deploy to ECS or EKS clusters, split the application into microservices, enable monitoring and logging, and automate deployments.
Perfecting the Media Workflow Experience on AWS - Ben Masek, Head of Worldwide Media Business Development, AWS (Amazon Web Services Korea)
Trends in the media market are shifting rapidly alongside digital transformation. This session introduces media trends for 2019 and beyond, and presents examples of companies pursuing new revenue streams through a variety of media strategies.
Getting Started with ARM-Based EC2 A1 Instances - CMP302 - Anaheim AWS Summit (Amazon Web Services)
This document outlines a workshop on getting started with ARM-based EC2 instances. It covers prerequisites such as an AWS account and a Cloud9 IDE, then explores creating and running a standard web app on both x86 and ARM processors using services like CloudFormation, CodeBuild, CodeCommit, CodeDeploy, and CodePipeline. The document also provides an overview of Amazon EC2 A1 instances powered by AWS Graviton processors and the ARM software ecosystem they support. Finally, it discusses concepts like microservices, DevOps, and continuous delivery/deployment.
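Pipelines that build for both x86 and ARM usually start by normalizing the architecture string the host reports. A small illustrative helper (not part of the workshop's actual code) might look like this:

```python
def normalize_arch(machine: str) -> str:
    """Map a `uname -m` / platform.machine() string to a build-target label.

    The mapping covers the two families an x86-vs-A1 pipeline targets;
    anything else raises so a build fails fast rather than producing a
    wrong image.
    """
    arm = {"aarch64", "arm64"}
    x86 = {"x86_64", "amd64"}
    m = machine.lower()
    if m in arm:
        return "arm64"   # EC2 A1 / Graviton instances report aarch64
    if m in x86:
        return "x86_64"
    raise ValueError(f"unsupported architecture: {machine}")
```

A CodeBuild project can feed its own `platform.machine()` through such a helper to tag images consistently across the two instance families.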
Running Amazon Elastic Compute Cloud (Amazon EC2) Workloads at Scale - CMP202... (Amazon Web Services)
Amazon EC2 Fleet makes it easy to optimize compute performance and cost by blending Amazon EC2 Spot, On-Demand, and Reserved Instances purchasing models. In this session, we learn how to use the power of Amazon EC2 Fleet with AWS services such as AWS Auto Scaling, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Container Service for Kubernetes (Amazon EKS), Amazon EMR, AWS Batch, AWS Thinkbox Deadline, and AWS OpsWorks to programmatically optimize costs while maintaining high performance and availability. We also discuss cost-optimization patterns for workloads such as containers, web services, CI/CD, and big data.
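The cost-blending idea behind EC2 Fleet reduces to simple arithmetic: the fleet's hourly cost is the count-weighted sum over each purchasing model. A sketch with illustrative (not current) prices:

```python
def blended_hourly_cost(pools):
    """Weighted hourly cost of a fleet mixing purchase models.

    Each pool is a dict with "count" (instances) and "price" ($/hour).
    """
    return sum(p["count"] * p["price"] for p in pools)

# Prices below are made up for illustration, not AWS list prices.
fleet = [
    {"model": "on-demand", "count": 2,  "price": 0.0960},
    {"model": "reserved",  "count": 4,  "price": 0.0600},
    {"model": "spot",      "count": 10, "price": 0.0288},
]
total = blended_hourly_cost(fleet)  # vs. 16 * 0.0960 for all on-demand
```

Shifting more of the interruption-tolerant portion of a workload onto the Spot pool pushes the blended rate down without touching the capacity total, which is the trade-off the session explores.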
Architecting Security and Governance Through Policy Guardrails in Amazon EKS... (Amazon Web Services)
Amazon EKS makes it easy to run Kubernetes on AWS without managing master nodes or etcd. Kubernetes offers a powerful abstraction layer for managing containerized infrastructure, but it presents unique challenges for AWS media customers. In this session, we share lessons from Synamedia, and we discuss its reasons for moving to EKS and the security and governance implications of migrating workloads. Learn about the approach and benefits of establishing security and governance with Open Policy Agent (OPA), which uses Kubernetes validating and mutating admission controllers to establish policy guardrails for container registries, ingress, load balancers, and other objects within EKS.
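To make the guardrail concrete: OPA policies are written in Rego and evaluated by the admission webhook, but a registry-allowlist decision itself is simple. This Python sketch mirrors that logic for illustration only; it is not how OPA is actually deployed.

```python
def admit_pod(pod, allowed_registries):
    """Validating-admission check: admit a pod only if every container
    image is pulled from an approved registry."""
    for container in pod.get("spec", {}).get("containers", []):
        image = container["image"]
        # "reg.example.com/app:1" -> "reg.example.com"
        registry = image.split("/")[0]
        if registry not in allowed_registries:
            return (False, f"image {image} is not from an approved registry")
    return (True, "ok")

# A pod pulling from an unapproved registry is rejected with a reason.
allowed, reason = admit_pod(
    {"spec": {"containers": [{"image": "docker.io/nginx:latest"}]}},
    {"reg.example.com"},
)
```

In a real cluster the same decision is returned as an AdmissionReview response, so the API server rejects the pod before it is ever scheduled.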
This document discusses building well-architected .NET applications on AWS. It introduces the AWS Well-Architected Framework and its pillars of security, reliability, performance efficiency, cost optimization, and operational excellence. It covers hosting options for .NET applications like virtual machines, containers, and serverless. It also discusses modernization strategies like rehosting, replatforming, and refactoring workloads to AWS.
Accelerating Product Development with High Performance Computing - CMP301 - S... (Amazon Web Services)
The document discusses using AWS for high performance computing (HPC). It describes how Amazon ran its own HPC cluster on AWS for product-development simulations, demonstrating the benefits of AWS's flexible, scalable infrastructure for HPC workloads. The document outlines the AWS services that can be used to build HPC solutions, including compute, storage, automation, and data analytics tools, and provides examples of how Amazon simplified its HPC cluster management and optimized costs when running simulations on AWS.
This document summarizes a presentation about breaking up monolithic applications into microservices using containers. The presentation covers when and why to use microservices, how Amazon moved from a monolithic architecture to microservices, considerations for moving to microservices, common pitfalls to avoid, and tools for automating infrastructure with microservices like AWS Fargate and AWS CDK. It also addresses common questions around microservices development.
Modernizing Legacy Applications with Amazon EKS - MAD301 - Chicago AWS Summit (Amazon Web Services)
The document discusses Amazon EKS (Elastic Kubernetes Service). It provides an overview of Amazon EKS architecture, how customers are using EKS for enterprise app migration, microservices, and machine learning. It also summarizes the key components of EKS including the control plane, worker nodes, networking, and security features.
The document discusses using AWS Fargate to build a highly scalable containerized system. It provides an overview of AWS container services and shows how Fargate allows containers to be run without needing to provision or manage servers. It demonstrates how to use AWS services like ECS, load balancers, and auto scaling to build and scale applications running on Fargate. Specific examples are also given around networking, logging, monitoring, alerts, and CI/CD best practices for Fargate.
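The auto scaling piece follows the usual target-tracking intuition: scale the task count in proportion to how far the observed metric sits from its target. A hedged sketch of that arithmetic (not the actual Application Auto Scaling algorithm, which also applies cooldowns and smoothing):

```python
import math

def desired_task_count(current_count, metric_value, target_value,
                       min_count=1, max_count=100):
    """Target-tracking rule of thumb: if observed CPU (or any metric) is
    60% above target, run ~60% more tasks, clamped to [min, max]."""
    if current_count == 0:
        return min_count
    desired = math.ceil(current_count * (metric_value / target_value))
    return max(min_count, min(max_count, desired))

# 4 tasks at 80% CPU against a 50% target -> scale out to 7 tasks.
new_count = desired_task_count(4, metric_value=80, target_value=50)
```

With Fargate the scaling action is just a change to the service's desired count; there is no node capacity to manage alongside it.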
Secure and Fast microVM for Serverless Computing Using Firecracker (Arun Gupta)
Firecracker is a lightweight virtualization technology developed by Amazon that provides security and isolation of virtual machines with the speed and density of containers. It uses KVM virtualization and has a minimal guest device model to provide fast launch times of less than 125ms per microVM while using under 5MB of memory per microVM. Firecracker is open source and designed to securely run thousands of multitenant microVMs on a single host through its REST API and by leveraging statistical multiplexing of resources.
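Firecracker guests are configured through small JSON resources sent to its REST API before InstanceStart. The fragment below bundles an illustrative machine config, boot source, and root drive into one document for readability; the real API takes each as a separate PUT, and the file paths are placeholders.

```json
{
  "machine-config": { "vcpu_count": 1, "mem_size_mib": 128 },
  "boot-source": {
    "kernel_image_path": "/path/to/vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "/path/to/rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ]
}
```

The small surface area of this guest model (no emulated BIOS, no legacy devices) is exactly what enables the sub-125ms launch times and low per-microVM memory overhead described above.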
- The document discusses Amazon Web Services (AWS) networking services including Amazon Virtual Private Cloud (VPC), security groups, Elastic Compute Cloud (EC2) instance types, container services, serverless computing, and Elastic Load Balancing.
- It provides an overview of each service's capabilities and use cases to help users choose the right AWS services for their workload and infrastructure needs.
- Examples, resources for further reading, and benefits are outlined for each service to aid in understanding and adopting AWS networking offerings.
What's New in Amazon Aurora - ADB203 - Anaheim AWS Summit (Amazon Web Services)
Amazon Aurora is a fully managed MySQL and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. It is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. This session provides an overview of Aurora, explores recently announced features such as Aurora Serverless, multi-master clusters, and Performance Insights, and helps you get started.
What's New in Amazon Aurora - ADB203 - Atlanta AWS Summit (Amazon Web Services)
Amazon Aurora is a fully managed MySQL and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. It is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. This session provides an overview of Aurora, explores recently announced features such as Aurora Serverless, multi-master clusters, and Performance Insights, and helps you get started.
Similar to: Ditching the Overhead - Moving Apache Kafka Workloads into Amazon MSK - ADB301 - Chicago AWS Summit
How to Build Forecasting Services Using ML and Deep Learning Algorithms... (Amazon Web Services)
Forecasting is an important process for a great many companies, used in many contexts to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we show how to pre-process data that contains a temporal component, and then apply an algorithm that produces an accurate forecast from the type of data analyzed.
Big Data for Startups: How to Build Serverless Big Data Applications... (Amazon Web Services)
The variety and volume of data created every day keep accelerating, representing an unrepeatable opportunity to innovate and create new startups.
Managing large amounts of data can seem complex, however: building large-scale Big Data clusters looks like an investment only established companies can afford. But the elasticity of the cloud and, in particular, serverless services let us break through these limits.
Let's see how to develop Big Data applications quickly, without worrying about infrastructure, devoting all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over that period we learned how changing our approach to application development let us significantly increase agility and release velocity and, ultimately, allowed us to build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the one used by Amazon.com itself.
How to Spend up to 90% Less with Containers and Spot Instances (Amazon Web Services)
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, yielding average savings of 70% compared to On-Demand Instances. In this session we look at the characteristics of Spot Instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
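A simplified view of the Spot-selection problem behind those savings: given several candidate pools, pick the cheapest one that can cover the needed capacity. Real allocation strategies also weigh interruption rates and diversify across pools; this sketch, with made-up pool data, keeps only the price dimension.

```python
def cheapest_viable_pool(pools, capacity_needed):
    """Pick the lowest-priced Spot pool that can cover the needed capacity.

    Each pool is {"pool": name, "price": $/hour, "available": instances}.
    Returns None when no single pool is large enough.
    """
    viable = [p for p in pools if p["available"] >= capacity_needed]
    if not viable:
        return None
    return min(viable, key=lambda p: p["price"])

# Illustrative pool data, not real Spot prices or capacities.
pools = [
    {"pool": "c5.large/us-east-1a", "price": 0.034, "available": 20},
    {"pool": "c5.large/us-east-1b", "price": 0.031, "available": 5},
    {"pool": "m5.large/us-east-1a", "price": 0.038, "available": 50},
]
choice = cheapest_viable_pool(pools, capacity_needed=10)
```

Stateless containerized workloads tolerate the None case (or an interruption mid-run) by simply rescheduling tasks onto another pool, which is why containers and Spot pair so well.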
In recent months, many customers have been asking us how to monetize Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta therefore invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Learning Services (Amazon Web Services)
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services while also letting you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of EC2 Instances (Amazon Web Services)
With the traditional approach to IT, DevOps techniques were hard to implement for many years: they often involved manual activities that occasionally caused application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, ensuring greater system reliability and delivering significant improvements in business continuity.
AWS offers AWS OpsWorks, a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows Workloads (Amazon Web Services)
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we explore what AWS services make possible for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14 from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, along with performance risks that can be introduced when moving applications out of on-premises data centers.
Build Your First Serverless Ledger-Based App with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we see how to build a complete serverless application that uses QLDB's capabilities.
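The "cryptographically verifiable" property rests on hash chaining: each entry's digest folds in the digest before it, so altering any entry changes the final value. QLDB's real journal verification uses a Merkle audit path; this stdlib sketch only illustrates the underlying idea, with invented entry strings.

```python
import hashlib

def chain_digest(entries):
    """Fold a sequence of ledger entries into one running SHA-256 digest.

    Any change to any entry, however small, yields a different final
    digest, which is what makes tampering detectable.
    """
    digest = b"\x00" * 32  # genesis value, chosen arbitrarily for the sketch
    for entry in entries:
        digest = hashlib.sha256(digest + entry.encode()).digest()
    return digest.hex()

good = chain_digest(["credit:100", "debit:30"])
tampered = chain_digest(["credit:100", "debit:31"])
assert good != tampered  # a one-character edit changes the digest
```

A verifier who holds only the expected final digest can replay the log and confirm nothing was altered, without trusting the party that stored it.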
With the rise of microservices architectures and rich mobile and web applications, APIs matter more than ever for delivering a great user experience. In this session we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dive into several scenarios, seeing how AppSync can help solve these use cases by building modern APIs with real-time and offline data-update capabilities.
We also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
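The core GraphQL idea the session builds on: the client names exactly the fields it wants and gets back only those, with one resolver per field. Real GraphQL parses a typed query document; this toy sketch skips parsing and resolves a flat field list (the sports-score data is invented for illustration).

```python
def execute(query_fields, resolvers, context):
    """Resolve a flat set of requested fields, GraphQL-style: each field
    name maps to a resolver function that pulls data out of the context."""
    return {field: resolvers[field](context) for field in query_fields}

# One resolver per schema field; in AppSync these would be attached
# to data sources such as DynamoDB tables or Lambda functions.
resolvers = {
    "score":  lambda ctx: ctx["match"]["score"],
    "minute": lambda ctx: ctx["match"]["minute"],
}
ctx = {"match": {"score": "1-0", "minute": 57, "venue": "San Siro"}}
result = execute(["score"], resolvers, ctx)  # only the requested field comes back
```

The same per-field shape is what makes real-time subscriptions natural: a score update pushes just the changed fields to subscribed clients rather than a whole document.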
Oracle Databases and VMware Cloud™ on AWS: Debunking the Myths (Amazon Web Services)
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, along with performance risks that can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads and accelerate the transformation to the cloud; they dive into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 serving 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5,000 clients across Southeast Asia, using AWS to scale.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
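A serverless backend of this shape bottoms out in small handler functions behind API Gateway's proxy integration: the event carries the request, the return value is the HTTP response. A minimal illustrative handler follows; the route and payload names are assumptions, not taken from the document.

```python
import json

def handler(event, context=None):
    """Lambda-style handler for API Gateway's proxy integration:
    route on the request path, return a statusCode/body response."""
    if event.get("path") == "/hello":
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }
    # Anything else falls through to a 404.
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```

Because the handler is a plain function of a dict, it can be unit-tested locally with no servers or AWS resources, which is part of the cost-effectiveness argument the abstract makes.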
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien... (Amazon Web Services)
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
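Pre-trained APIs like Amazon Comprehend's DetectSentiment return a label plus a per-class score map. The helper below parses a response of that shape to pick the dominant class; the field names mirror Comprehend's documented response, but the scores here are invented.

```python
def top_sentiment(response):
    """Pick the dominant sentiment label from a DetectSentiment-shaped
    response, i.e. the class with the highest score."""
    scores = response["SentimentScore"]
    return max(scores, key=scores.get)

# Invented example response in the Comprehend shape.
resp = {"SentimentScore": {"Positive": 0.91, "Negative": 0.02,
                           "Neutral": 0.06, "Mixed": 0.01}}
label = top_sentiment(resp)
```

In an intelligent call center this label, computed per utterance, is what routes a frustrated caller to a human agent without any custom ML model being trained.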
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.