Architecture Patterns for Multi-Region Active-Active Applications (ARC209-R2)...Amazon Web Services
Do you need your applications to extend across multiple regions? Whether for disaster recovery, data sovereignty, data locality, or extremely high availability, many AWS customers choose to deploy services across regions. Join us as we explore how to design and succeed with active-active multi-region architectures. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Have you ever wondered what the relative differences are between two of the more popular open source, in-memory data stores and caches? In this session, we will describe those differences and, more importantly, provide live demonstrations of the key capabilities that could have a major impact on your architectural Java application designs.
Behind the Scenes: Exploring the AWS Global Network (NET305) - AWS re:Invent ...Amazon Web Services
The AWS Global Network provides a secure, highly available, and high- performance infrastructure for customers. In this session, we walk through the architecture of various parts of the AWS network such as Availability Zones, AWS Regions, our Global Network connecting AWS Regions to each other and our Edge Network which provides Internet connectivity. We explain how AWS services such as AWS Direct Connect and Amazon CloudFront integrate with our Global Network to provide the best experience for our customers. We also dive into how the AWS Global Network connects to the rest of the Internet through peering at a global scale. If you are curious about how AWS network infrastructure can support large-scale cat photo distribution or how Internet routing works, this session answers those questions. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum and optimizing your overall capital expense can be challenging. This session presents AWS features and services along with disaster recovery architectures that you can leverage when building highly available and disaster-resilient strategies.
Highlights of AWS re:Invent 2023 (Announcements and Best Practices) - Emprovise
Highlights of AWS re:Invent 2023 in Las Vegas. Contains new announcements, deep dives into existing services, best practices, and recommended design patterns.
Amazon DynamoDB Under the Hood: How We Built a Hyper-Scale Database (DAT321) ...Amazon Web Services
Come to this session to learn how Amazon DynamoDB was built as the hyper-scale database for internet-scale applications. In January 2012, Amazon launched DynamoDB, a cloud-based NoSQL database service designed from the ground up to support extreme scale, with the security, availability, performance, and manageability needed to run mission-critical workloads. This session discloses for the first time the underpinnings of DynamoDB, and how we run a fully managed nonrelational database used by more than 100,000 customers. We cover the underlying technical aspects of how an application works with DynamoDB for authentication, metadata, storage nodes, streams, backup, and global replication.
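The partitioning described above can be sketched in miniature: DynamoDB hashes an item's partition key to decide which storage node owns it. The following is an illustrative sketch only; the node count, hash choice, and `route_item` helper are hypothetical stand-ins, not DynamoDB's actual internals, which use partition maps and replication groups.

```python
import hashlib

def route_item(partition_key: str, num_storage_nodes: int) -> int:
    """Map a partition key to a storage node by hashing it.

    Illustrative only: a bare modulo over a node count stands in for
    DynamoDB's real partition-map lookup.
    """
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_storage_nodes

# The same key always lands on the same node; different keys spread out.
node_for_user = route_item("user#12345", num_storage_nodes=8)
```

Because routing depends only on the key, any request router can locate an item's storage node without coordination.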
Introducing AWS DataSync - Simplify, automate, and accelerate online data tra...Amazon Web Services
SFTP is used for the exchange of data across many industries, including financial services, healthcare, and retail. In this session, we will introduce you to AWS Transfer for SFTP, a service that helps you easily migrate file transfer workflows to AWS, without needing to modify applications or manage SFTP servers. We will demonstrate the product and talk about how to migrate your users so they continue to use their existing SFTP clients and credentials, while the data they access is stored in S3. You will also learn how FINRA is using this new service in conjunction with their Data Lake on AWS.
Amazon Aurora is a relational database built for the cloud and is compatible with MySQL and PostgreSQL. It combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. In this session, we cover some of the key innovations in the Aurora database engine and storage layers. We explain recently announced features, such as Aurora Serverless, Aurora Multi-Master, and Aurora Parallel Query, and we discuss best practices and optimal configurations. See why Aurora is a great fit for new application development and for migrations from overpriced, restrictive commercial databases.
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances. In this technical session, we conduct a detailed analysis of the differences among the three types of Amazon EBS block storage: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. We discuss how to maximize Amazon EBS performance, with a special eye towards low-latency, high-throughput applications like databases. We discuss Amazon EBS encryption and share best practices for Amazon EBS snapshot management. Throughout, we share tips for success.
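A useful rule of thumb when weighing the volume types above: sustained throughput is bounded by IOPS times I/O size. A quick illustration of the arithmetic (the numbers are examples, not EBS limits):

```python
def throughput_mib_per_s(iops: int, io_size_kib: int) -> float:
    """Throughput = IOPS x I/O size. 1 MiB = 1024 KiB."""
    return iops * io_size_kib / 1024

# A database doing 4,000 IOPS of 16 KiB reads moves 62.5 MiB/s, so
# provisioning IOPS alone does not guarantee high throughput when
# the workload's I/O size is small.
db_throughput = throughput_mib_per_s(4000, 16)
```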
Best Practices for ETL with Apache NiFi on Kubernetes - Albert Lewandowski, G...GetInData
Did you like it? Check out our E-book: Apache NiFi - A Complete Guide
https://ebook.getindata.com/apache-nifi-complete-guide
Apache NiFi is one of the most popular services for running ETL pipelines, even though it's not the youngest technology. The talk covers all the details of migrating pipelines from an old Hadoop platform to Kubernetes, managing everything as code, monitoring NiFi's corner cases, and making it a robust solution that is user-friendly even for non-programmers.
Author: Albert Lewandowski
Linkedin: https://www.linkedin.com/in/albert-lewandowski/
___
Getindata is a company founded in 2014 by ex-Spotify data engineers. From day one our focus has been on Big Data projects. We bring together a group of best and most experienced experts in Poland, working with cloud and open-source Big Data technologies to help companies build scalable data architectures and implement advanced analytics over large data sets.
Our experts have vast production experience in implementing Big Data projects for Polish as well as foreign companies including i.a. Spotify, Play, Truecaller, Kcell, Acast, Allegro, ING, Agora, Synerise, StepStone, iZettle and many others from the pharmaceutical, media, finance and FMCG industries.
https://getindata.com
Disaster Recovery for Multi-Region Apache Kafka Ecosystems at Uber - Confluent
Speaker: Yupeng Fu, Staff Engineer, Uber
High availability and reliability are important requirements for Uber services, which must tolerate datacenter failures in a region and fail over to another region. In this talk, we present the active-active Apache Kafka® deployment at Uber and how it facilitates disaster recovery across regions for Uber services. In particular, we highlight the key components, including topic replication, topic aggregation, and offset sync, and then walk through several use cases of the disaster recovery strategy using active-active Kafka. Lastly, we present several interesting challenges and the planned future work.
Yupeng Fu is a staff engineer in Uber Data Org leading the streaming data platform. Previously, he worked at Alluxio and Palantir, building distributed data analysis and storage platforms. Yupeng holds a B.S. and an M.S. from Tsinghua University and did his Ph.D. research on databases at UCSD.
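The offset-sync component mentioned above has to translate consumer offsets between clusters, because the same message generally lands at a different offset in each region. A minimal sketch of the idea, assuming periodic replication checkpoints (the checkpoint format and `translate_offset` helper are hypothetical, not Uber's implementation):

```python
# Checkpoints pair a source-cluster offset with the corresponding
# destination-cluster offset, recorded periodically during replication.
checkpoints = [(0, 0), (100, 97), (200, 195)]  # (src_offset, dst_offset)

def translate_offset(src_offset: int, checkpoints: list) -> int:
    """Find the latest checkpoint at or before src_offset and resume there.

    Resuming from the checkpointed destination offset may re-deliver a
    few messages, which is safe for at-least-once consumers.
    """
    best = 0
    for src, dst in checkpoints:
        if src <= src_offset:
            best = dst
        else:
            break
    return best

resume_at = translate_offset(150, checkpoints)
```

After a regional failover, a consumer that had reached source offset 150 resumes in the other region from checkpoint (100, 97), trading a small amount of re-delivery for correctness.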
A closer look at the MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. We’ll explore how Aurora uses the AWS cloud to provide high reliability, high durability, and high throughput.
Speakers:
Steve Abraham - Principal Database Specialist Solutions Architect, AWS
Peter Dachnowicz - Sr. Technical Account Manager, AWS
Deep Dive on Amazon EBS Elastic Volumes - March 2017 AWS Online Tech TalksAmazon Web Services
Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage for use with Amazon EC2 instances. In this technical session, we will present and demonstrate how you can increase capacity, tune performance, and modify volume types on the fly with the latest Amazon EBS innovation, Elastic Volumes. You will learn how Elastic Volumes can significantly reduce both operational complexity and downtime, enabling you to right-size your deployment and dynamically adapt as your business needs change. We will describe best practices and share tips for success throughout.
Learning Objectives:
- Learn how to increase capacity, tune performance, and modify volume types
- Learn how you can automate modifications to align with changing business needs.
- Review the different Amazon EBS volume types and receive best practices for each.
Amazon Web Services gives you fast access to flexible and low cost IT resources, so you can rapidly scale and build virtually any big data application including data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, and internet-of-things processing regardless of volume, velocity, and variety of data.
https://aws.amazon.com/webinars/anz-webinar-series/
Introduction to Amazon Route 53 Resolver for Hybrid Cloud (NET215) - AWS re:I...Amazon Web Services
Amazon Route 53 Resolver provides recursive DNS for your Amazon VPC and on-premises networks over VPN or AWS Direct Connect. This session will review common use cases for Route 53 Resolver and go in depth on how it works.
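The core of the Resolver's hybrid behavior is rule matching: the most specific forwarding rule whose domain suffix matches the query wins, and unmatched names fall through to recursive resolution. A small sketch of that matching logic (the rule table and `pick_target` helper are illustrative, not the service's API):

```python
# Forwarding rules: domain suffix -> target resolver IP (hypothetical values).
rules = {
    "corp.example.com": "10.0.0.2",   # forward to an on-premises DNS server
    "example.com": "172.16.0.2",      # forward to a shared VPC resolver
}

def pick_target(qname: str, rules: dict, default: str = "recursive") -> str:
    """Return the target of the most specific matching rule, else recurse."""
    best = None
    for suffix, target in rules.items():
        if qname == suffix or qname.endswith("." + suffix):
            if best is None or len(suffix) > len(best[0]):
                best = (suffix, target)
    return best[1] if best else default

target = pick_target("db.corp.example.com", rules)
```

Here `db.corp.example.com` matches both rules, but the longer suffix wins, so the query is forwarded to the on-premises server; a name like `aws.amazon.com` matches nothing and is resolved recursively.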
In this presentation we will offer an overview of the fundamental concepts of graph databases, data representation, and query technologies.
We will also focus on the property graph model on AWS and Apache TinkerPop Gremlin. Then we'll talk about using Amazon Neptune: creating a cluster, loading data, and running some sample queries.
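A property graph query like the Gremlin ones mentioned above is, at its core, a traversal over labeled edges. A tiny Python sketch of the idea (the adjacency structure and `out` helper are illustrative stand-ins, not the Gremlin API):

```python
# Adjacency list: vertex -> list of (edge_label, neighbor) pairs.
graph = {
    "alice": [("knows", "bob"), ("bought", "book1")],
    "bob": [("bought", "book1"), ("bought", "book2")],
}

def out(graph: dict, vertex: str, label: str) -> list:
    """Follow outgoing edges with the given label (like Gremlin's out(label))."""
    return [dst for lbl, dst in graph.get(vertex, []) if lbl == label]

# "What did people Alice knows buy?" - a two-hop traversal,
# the shape of a simple recommendation query.
recommendations = [item
                   for friend in out(graph, "alice", "knows")
                   for item in out(graph, friend, "bought")]
```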
AWS Neptune - A Fast and Reliable Graph Database Built for the Cloud - Amazon Web Services
Dickson Yue, Solutions Architect, AWS
Amazon Neptune is a fully managed graph database service built from the ground up to handle rich, highly connected data. Come learn how to transform your business with Amazon Neptune, and hear about diverse use cases such as recommendation engines, knowledge graphs, fraud detection, social networks, network management, and life sciences.
Deep Dive on Amazon S3 Storage Classes: Creating Cost Efficiencies across You...Amazon Web Services
"Amazon S3 supports a range of storage classes that can help you cost-effectively store data without impacting performance or availability. Each of our storage classes offer different data-access levels, retrieval times, and costs to support various use cases. In this session, Amazon S3 experts dive deep into the different Amazon S3 storage classes, their respective attributes, and when you should use them.
"
DNS is critical network infrastructure, and securing it against attacks like DDoS, NXDOMAIN floods, hijacking, and malware/APT is very important to protecting any business.
Big Data Analytics Architectural Patterns and Best Practices (ANT201-R1) - AW...Amazon Web Services
In this session, we discuss architectural principles that help simplify big data analytics.
We'll apply these principles to the various stages of big data processing: collect, store, process, analyze, and visualize. We'll discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on.
Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
by Isaiah Weiner, Sr. Manager of Solutions Architecture, AWS
Companies are using AWS to create and deploy efficient, fast, and cost-effective backup and restore capabilities to protect critical IT systems without incurring the infrastructure expense of a second physical site. In this session, we will talk about cloud-based services AWS provides to enable robust backup and rapid recovery of your IT infrastructure and data.
Case Study: Stream Processing on AWS using Kappa Architecture - Joey Bolduc-Gilbert
In the summer of 2016, XpertSea decided to migrate its operations to AWS and to build a data processing system that is able to scale to the extent of our ambitions. Come see how we built our platform inspired by Kappa Architecture, able to support connected devices located all-around the globe and state-of-the-art machine learning algorithms.
Many customers want a disaster recovery environment, and they want to use this environment daily and know that it's in sync with and can support a production workload. This leads them to an active-active architecture. In other cases, users like Netflix and Lyft are distributed over large geographies. In these cases, multi-region active-active deployments are not optional. Designing these architectures is more complicated than it appears, as data generated at one end needs to be synced with data at the other end. There are also consistency issues to consider. One needs to make trade-off decisions on cost, performance, and consistency. Further complicating matters, the variety of data stores used in the architecture results in a variety of replication methods. In this session, we explore how to design an active-active multi-region architecture using AWS services, including Amazon Route 53, Amazon RDS multi-region replication, AWS DMS, and Amazon DynamoDB Streams. We discuss the challenges, trade-offs, and solutions.
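One concrete consistency trade-off in such an architecture is conflict resolution when both regions write the same item. A common approach is last-writer-wins on a timestamp, with a deterministic tie-breaker so every region converges on the same winner. A minimal sketch, assuming each regional record carries an `updated_at` timestamp (the record shape and `resolve` helper are illustrative):

```python
def resolve(record_a: dict, record_b: dict) -> dict:
    """Last-writer-wins: keep the record with the newer update timestamp.

    Ties are broken deterministically by region name so that both
    regions pick the same winner regardless of comparison order.
    """
    key_a = (record_a["updated_at"], record_a["region"])
    key_b = (record_b["updated_at"], record_b["region"])
    return record_a if key_a >= key_b else record_b

us = {"region": "us-east-1", "updated_at": 1700000010, "cart": ["book"]}
eu = {"region": "eu-west-1", "updated_at": 1700000025, "cart": ["book", "pen"]}
winner = resolve(us, eu)  # the EU write is newer, so it wins
```

Note the cost of this simplicity: the losing write (the US cart) is silently discarded, which is exactly the kind of trade-off the session abstract alludes to.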
Building Multi-Region, Active-Active Serverless Backends - Adrian Hornsby
Starting from an understanding of reliability and availability, this talk walks you through the why and the how of building multi-region, active-active applications, and especially why serverless is a great fit.
Manufacturing companies collect vast troves of process data for tracking purposes. Using this data with advanced analytics can optimize operations, saving time and money. In this session, we explore the latest analytics capabilities to support your goals for optimizing the manufacturing plant floor. Learn how to build dashboards that connect to prediction models driven by sensors across manufacturing processes. Learn how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, AWS Identity and Access Management, and AWS Lambda. We also review a reference architecture that supports data ingestion, event rules, analytics, and the use of machine learning for manufacturing analytics.
Enterprises require that their mission-critical business applications, such as Microsoft, SAP, and Oracle workloads, are up and running 24x7. Whatever the workload, the requirements are the same: availability, security, and flexibility are key. In this session we will walk through practical examples of how AWS customers operate heavily mission-critical applications in the cloud. Through real-world customer examples, you will learn how enterprises deploy mission-critical workloads in a highly redundant manner and apply security controls that provide increased visibility and control of your applications.
ARC306_High Resiliency & Availability Of Online Entertainment Communities Usi...Amazon Web Services
With the increase in popularity of online engagement as a means of entertainment, a wide range of communities has emerged. These communities need to be highly available and resilient at scale; a loss of availability can be fatal to the products customers use. We will share the process you should use to develop architectural principles that will allow you to reap the benefits of reduced complexity.
Design, Build, and Modernize Your Web Applications with AWS - Donnie Prakoso
Cloud makes it super easy for you to spin up your desired IT resources. But the true value of cloud lies in its capability to provide you a set of building blocks for your applications. Join us in this hands-on session to understand how to use Amazon Virtual Private Cloud (VPC) and Amazon Elastic Compute Cloud (EC2) along with Amazon EC2 Auto Scaling and Elastic Load Balancing to design your scalable architecture and build your applications in no time. Moreover, we will discover how to modernize your application with the help of our serverless service, AWS Lambda.
Scale Your Website and Mobile Applications on AWS to 10 Million Users - Amazon Web Services
With cloud computing, you gain a number of advantages, such as the ability to scale a web application or website on demand. If you are building a web application and want to start using cloud computing, you may be wondering, "Where do I start?" Join us in this session to understand best practices for scaling your infrastructure from zero to millions of users. We will show you the best ways to build your business with AWS services, help you make smarter choices, and explain how to scale your infrastructure in the cloud.
Airbnb has served over 200,000,000 customers across 191 countries and is one of the largest database consumers on AWS. They have heavily adopted MySQL and have recently completed a migration to Amazon Aurora. In this session, Airbnb shares their story, including design considerations for operating at Airbnb scale, tips, tricks, and advice for other startups, and thoughts on why they decided to run on Aurora.
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from one to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Learn How to Build Serverless Applications Using the AWS Serverless Platform... - Amazon Web Services
What if you could build a web application that could support true web-scale traffic without having to ever provision or manage a single server?
In this session, you will learn how to build a serverless website that scales automatically using services like AWS Lambda, Amazon API Gateway, and Amazon S3. We will review several frameworks that can help you build serverless applications, such as the AWS Serverless Application Model (AWS SAM), Chalice, and ClaudiaJS.
We will cover:
- Learn the basics of AWS Lambda and Amazon API Gateway
- Understand how to build a web application using these AWS services
- Learn to architect a serverless application
- Gain an overview of frameworks for building serverless applications
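As a taste of what the session covers, a minimal Lambda handler behind an API Gateway proxy integration can be sketched as follows; the greeting logic and the `name` parameter are illustrative, not from the webinar itself.

```python
import json

# Minimal AWS Lambda handler for an API Gateway proxy integration.
# The event/response shapes follow the API Gateway proxy contract;
# the greeting itself is a placeholder for your application logic.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind Amazon API Gateway (for example via an AWS SAM template), a GET request with `?name=...` would return the JSON greeting.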
This webinar is a Level 100 session and is suited for:
- Developers
- Solution architects and engineers
- Technical managers
Speakers:
Stephen Liedig, Public Sector Solution Architect, Amazon Web Services
Q&A:
Ed Lima, Solutions Architect, Amazon Web Services
This is your chance to learn directly from top CTOs and Cloud Architects from some of the most innovative AWS customers. In this lightning round session, we'll have an action-packed hour, jumping straight to the architecture and technical detail for some of the most innovative data storage solutions of 2017. Hear how Insitu collects and analyzes data from drone flights in the field with AWS Snowball Edge. See how iRobot collects and analyzes IoT data from their robotic vacuums, mops, and pool cleaners. Learn how Viber maintains a petabyte-scale data lake on Amazon S3. Understand how Alert Logic scales their massive SaaS cloud security solution on Amazon S3 & Amazon Glacier.
AWS Database and Analytics State of the Union - 2017 - DAT201 - re:Invent 2017 - Amazon Web Services
In this session, we discuss the evolution of database and analytics services in AWS, the new database and analytics services and features we launched this year, and our vision for continued innovation in this space. We are witnessing an unprecedented growth in the amount of data collected, in many different forms. Storage, management, and analysis of this data require database services that scale and perform in ways not possible before. AWS offers a collection of database and other data services—including Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR—to process, store, manage, and analyze data. In this session, we provide an overview of AWS database and analytics services and discuss how customers are using these services today.
From Mainframe to Microservices: Vanguard’s Move to the Cloud - ENT331 - re:Invent - Amazon Web Services
Maintaining control of sensitive data is critical in the highly regulated financial investments environment that Vanguard operates in. This need for data control complicated Vanguard's move to the cloud. They needed to expand globally to provide a great user experience while at the same time maintaining their mainframe-based backend data architecture. In this session, Vanguard discusses the creative approach they took to decouple their monolithic backend architecture to empower a microservices architecture while maintaining compliance with regulations. They also cover solutions implemented to successfully meet their requirements for security, latency, and end-state consistency.
Delivering video to all consumers on all devices used to be complex, time-consuming, and expensive to build. Now it is fast and easy to implement video-on-demand workflows on AWS and distribute video content to a global audience. Companies large and small, across industries, can deliver streaming video without complex professional video tools. In this session, learn how to build complex video workflows entirely in code using AWS services.
How to Build Forecasting Services Using ML and Deep Learning Algorithms... - Amazon Web Services
Forecasting is an important process for many companies and is used in a variety of areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session, we show how to pre-process data that contains a time component and then apply an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: How to Create Serverless Big Data Applications... - Amazon Web Services
The variety and volume of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment only established companies can afford. But the elasticity of the cloud and, in particular, Serverless services let us break through these limits.
We will see how to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session, we present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over this period we learned how changing our approach to application development allowed us to dramatically increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we look at the characteristics of Spot Instances and how easily they can be used on AWS. We also learn how Spreaker uses Spot Instances to run different kinds of applications, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Learning Services... - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides services that are ready to use and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to choose among the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities that occasionally led to application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant business continuity improvements.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows Workloads - Amazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
Build Your First Serverless Ledger-Based App with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions or to track products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering a great user experience. In this session we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dive into several scenarios, showing how AppSync can help address these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: Myths to Debunk - Amazon Web Services
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating your cloud transformation; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
2. Session objectives
1. Understand how to think about application availability
2. Understand how AWS makes services highly available
3. Understand why you might need Multi-Region Active-Active architecture
4. Review AWS services that are useful for Multi-Region Active-Active design
44. Inter-Region VPC Peering (New)
[Diagram: a customer HQ network connects to a shared-services VPC in the Oregon region; VPCs within the Oregon and Ireland regions peer with their local shared-services VPC, and the two regions are peered with each other.]
• Cross-region peered connections run over the Amazon backbone and are encrypted.
• Peered VPCs must have no overlapping IP address space.
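The "no overlapping IP address space" constraint can be checked up front with Python's standard ipaddress module — a minimal sketch; the CIDR blocks below are illustrative:

```python
import ipaddress

# Inter-region VPC peering requires that the two VPCs' CIDR blocks
# do not overlap; this helper mirrors that constraint.
def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Distinct address space (e.g. an Oregon VPC and an Ireland VPC): safe to peer.
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False
# Overlapping ranges: the peering connection cannot be established.
print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))  # True
```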
47. Amazon RDS Multi-AZ deployment
[Diagram: a primary instance in Availability Zone 1 and a standby in Availability Zone 2, each in its own VPC subnet behind a security group, reached at mydb1.abc45345.eu-west-1.rds.amazonaws.com:3306, with synchronous physical replication from primary to standby.]
• Standbys ensure zero data loss in the event of the master's failure.
• Always have a standby. Always!!!
• Also note: cross-AZ failovers are automatic and fast, whereas cross-region failovers take time and need nontrivial planning.
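Because a Multi-AZ failover keeps the same endpoint DNS name and simply repoints it to the standby, clients mostly need reconnection logic. A minimal retry sketch, where `connect_fn` stands in for a real database driver call and the attempt/delay values are illustrative:

```python
import time

# Retry a connection while an RDS Multi-AZ failover completes.
# Sketch only: the ConnectionError contract and the retry budget
# are assumptions, not tuned production settings.
def connect_with_retry(connect_fn, attempts=5, delay=0.01):
    last_err = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except ConnectionError as err:
            last_err = err      # DNS may still point at the failed master
            time.sleep(delay)   # back off while failover finishes
    raise last_err
```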
61. Error handling in Lambda
1. For stream-based event sources (Amazon Kinesis Streams and DynamoDB Streams), AWS Lambda polls your stream and invokes your Lambda function.
2. Therefore, if a Lambda function fails, AWS Lambda attempts to process the erring batch of records until the data expires from the stream.
3. The exception is treated as blocking, and AWS Lambda will not read any new records from the stream until the failed batch of records either expires or is processed successfully.
This ensures that AWS Lambda processes the stream events in order.
http://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html
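The blocking, in-order semantics above can be illustrated with a small simulation. `drain_stream`, the batch values, and the bounded retry count are illustrative stand-ins: real Lambda keeps retrying until the records expire from the stream, whereas this sketch gives up after a few attempts.

```python
# Simulate Lambda's stream processing: a failed batch is retried before
# any later batch is read, so stream order is preserved.
def drain_stream(batches, process_batch, max_retries_per_batch=3):
    processed = []
    for batch in batches:
        for attempt in range(max_retries_per_batch):
            try:
                process_batch(batch)
                processed.append(batch)
                break
            except Exception:
                if attempt == max_retries_per_batch - 1:
                    # A poison batch blocks everything behind it: later
                    # batches must not overtake the failed one.
                    return processed
    return processed
```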
65. Global service
• Load balancers and/or servers across multiple regions globally
[Diagram: users for Region A, Region B, and Region C are each routed to their corresponding region.]
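Routing each user to their "home" region is, at its simplest, a lookup for the lowest-latency region. A toy sketch; the region names and latency figures below are made up for illustration:

```python
# Pick the region with the lowest measured latency for a given user.
def best_region(latencies_ms):
    return min(latencies_ms, key=latencies_ms.get)

# Illustrative per-user latency measurements (milliseconds).
user_latencies = {"us-west-2": 42.0, "eu-west-1": 118.0, "ap-southeast-1": 201.0}
print(best_region(user_latencies))  # us-west-2
```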
67. Traffic flow: Quick overview of rules
• Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
• Failover routing policy – Use when you want to configure active-passive failover.
• Geolocation routing policy – Use when you want to route traffic based on the location of your users.
68. Traffic flow: Quick overview of rules
• Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from one resource in one location to resources in another.
• Latency routing policy – Use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency.
• Multivalue answer routing policy – Use when you want Amazon Route 53 to respond to DNS queries with up to eight healthy records selected at random.
• Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
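For example, the weighted policy's behavior — splitting traffic across resources in the proportions you specify — can be simulated with the standard library; the endpoint names and weights here are illustrative:

```python
import random

# Choose an endpoint according to Route 53-style record weights.
def pick_endpoint(weighted_endpoints, rng=random):
    endpoints = [name for name, _ in weighted_endpoints]
    weights = [weight for _, weight in weighted_endpoints]
    return rng.choices(endpoints, weights=weights, k=1)[0]

# A 90/10 blue/green split, as you might configure during a deployment.
records = [("blue.example.com", 90), ("green.example.com", 10)]
```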