This document summarizes a presentation about Amazon Aurora. It explains how Aurora combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora is a MySQL- and PostgreSQL-compatible database delivered as a managed service that automates administrative tasks, and it uses a distributed, self-healing storage system to provide high availability and durability across Availability Zones.
Materials from the 29th study session, "A Beginner's Introduction to PostgreSQL Recovery"
See also http://www.interdb.jp/pgsql (Coming soon!)
Aimed at beginners. Explains how PostgreSQL's WAL, CHECKPOINT, and online backup mechanisms work.
After reading this, continue with → http://www.slideshare.net/satock/29shikumi-backup
Deep Dive on the Amazon Aurora PostgreSQL-compatible Edition - DAT402 - re:Invent - Amazon Web Services
Amazon Aurora is a fully-managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The initial launch of Amazon Aurora delivered these benefits for MySQL. We have now added PostgreSQL compatibility to Amazon Aurora. In this session, Amazon Aurora experts discuss best practices to maximize the benefits of the Amazon Aurora PostgreSQL-compatible edition in your environment.
Performance Schema is a powerful diagnostic instrument for:
- Query performance
- Complicated locking issues
- Memory leaks
- Resource usage
- Problematic behavior caused by inappropriate settings
- And more
It comes with hundreds of options that let you tune precisely what to instrument. More than 100 consumers store the collected data.
In this tutorial, we will try out all the important instruments. We will provide a test environment and a few typical problems that could hardly be solved without Performance Schema. You will not only learn how to collect and use this information but also gain hands-on experience with it.
Tutorial at Percona Live Austin 2019
PostgreSQL is a very popular and feature-rich DBMS. At the same time, PostgreSQL has a set of annoying wicked problems, which haven't been resolved in decades. Miraculously, with just a small patch to PostgreSQL core extending this API, it appears possible to solve wicked PostgreSQL problems in a new engine made within an extension.
Big Data means big hardware, and the less of it we can use to do the job properly, the better the bottom line. Apache Kafka makes up the core of our data pipelines at many organizations, including LinkedIn, and we are on a perpetual quest to squeeze as much as we can out of our systems, from ZooKeeper, to the brokers, to the various client applications. This means we need to know how well the system is running, and only then can we start turning the knobs to optimize it. In this talk, we will explore how best to monitor Kafka and its clients to ensure they are working well. Then we will dive into how to get the best performance from Kafka, including how to pick hardware and the effect of a variety of configurations in both the broker and clients. We'll also talk about setting up Kafka for no data loss.
Introducing KRaft: Kafka Without ZooKeeper With Colin McCabe | Current 2022 (Hosted by Confluent)
Apache Kafka without Zookeeper is now production ready! This talk is about how you can run without ZooKeeper, and why you should.
Deep internals of the InnoDB purge mechanism (by I Goo Lee)
The document discusses InnoDB's purge mechanism in MySQL. It explains that purge is needed to reclaim disk space used by deleted or updated data and to prevent performance degradation from long history lists. It then describes how purge works for update undo records, maintaining the before images of updated rows in undo pages to support transaction isolation. Purge eventually removes old undo records after transactions commit or roll back.
My talk for the "MySQL, MariaDB and Friends" devroom at FOSDEM on February 2, 2019
Born in 2010 in MySQL 5.5.3 as "a feature for monitoring server execution at a low level" and grown in the 5.6 series with performance fixes and DBA-facing features, Performance Schema is a mature tool in MySQL 5.7, used by humans and by more and more monitoring products. It has become more popular over the years. In this talk I will give an overview of Performance Schema, focusing on its tuning, performance, and usability.
Performance Schema helps to troubleshoot query performance, complicated locking issues, memory leaks, resource usage, problematic behavior caused by inappropriate settings, and much more. It comes with hundreds of options that let you tune precisely what to instrument. More than 100 consumers store the collected data.
Performance Schema is a potent tool, and a very complicated one at the same time. It does not affect performance in most cases, but it can slow down the server dramatically if configured without care. It collects a lot of data, and sometimes this data is hard to read.
This talk will start with an introduction to how Performance Schema is designed, so you will understand why it slows down the server in some cases and does not affect your queries in others. Then we will discuss which information you can retrieve from Performance Schema and how to do it effectively.
I will also cover its companion sys schema and graphical monitoring tools.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability, and durability than was previously available using conventional monolithic database techniques. In this session, we dive deep into some of the key innovations behind Amazon Aurora, discuss best practices and migration from other databases to Amazon Aurora, and share early customer experiences from the field.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
AWS re:Invent 2016: Getting Started with Amazon Aurora (DAT203) - Amazon Web Services
Amazon Aurora is a MySQL-compatible relational database engine with the speed, reliability, and availability of high-end commercial databases at one-tenth the cost. This session introduces you to Amazon Aurora, explores the capabilities and features of Aurora, explains common use cases, and helps you get started with Aurora. Debanjan Saha, general manager for Aurora, explains how Aurora differs from other commonly available databases while staying compatible with MySQL and providing a high-end, cost-effective alternative to commercial and open-source database engines. In addition, Linda Xu, data architect at Ticketmaster, walks you through Ticketmaster's journey to Amazon Aurora, starting with evaluation through production migration of a critical Ticketmaster database to Amazon Aurora. Ticketmaster is one of the world's top 10 e-commerce companies and the global market leader in ticketing. In this session, Linda discusses how Aurora lets Ticketmaster provide better services to their fans, customers, and clients, and helps reduce the cost and operational burden while giving greater flexibility to support heavy traffic spikes.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
This document provides an overview of Amazon Aurora and discusses its performance advantages over traditional databases. Aurora delivers the performance and availability of commercial databases at 1/10th the cost by leveraging simple open source architecture. The document describes how Aurora achieves high performance through its distributed, asynchronous architecture and integration with other AWS services. It also discusses how Aurora provides high availability through its quorum-based storage system and ability to handle failures without stopping writes or restarting the database. Finally, the document shares benchmark results and customer use cases that demonstrate Aurora's ability to scale to large workloads and datasets at significantly lower costs than alternative solutions.
AWS re:Invent 2016: Workshop: Stretching Scalability: Doing more with Amazon ... - Amazon Web Services
Easy scalability is a powerful feature of Amazon Aurora. Scalability in its actual definition refers to being able to get larger or smaller depending on the need. Amazon Aurora allows you to easily achieve this by scaling the database instance up or down and adding or removing read replicas. Scaling across regions brings additional resilience to your architectures and can boost your application performance due to geographic proximity. You can perform all of these scaling operations through the Aurora console. You can also automate instance and read scaling using Lambda functions or scripts based on the usage pattern you define. You can extend the automation by feeding your database usage data from Aurora Enhanced Monitoring into machine learning to provide more sophisticated predictive patterns to drive your automation. In this session, we will do a deep dive into how scalability works in Aurora and how to make the best use of it to reduce cost, increase application performance, and architect resilient applications.
You should have good database knowledge and at least some experience with Amazon RDS or Amazon Aurora and should bring your own laptop.
AWS December 2015 Webinar Series - Amazon Aurora: Introduction and Migration - Amazon Web Services
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is available through Amazon RDS as a fully managed database service.
This webinar introduces you to Amazon Aurora, explains common use cases for the service, and discusses methods to migrate your MySQL databases that are on Amazon RDS, Amazon EC2 or on-premises to Amazon Aurora.
Learning Objectives:
How Amazon Aurora is different and similar to traditional databases
Reliability and availability design in Aurora
How Amazon Aurora delivers up to 5x MySQL performance on similar hardware
Learn the scalability in Amazon Aurora: scaling instance size and database size, horizontal scaling with read replicas
Who Should Attend:
IT Managers, DBAs, Enterprise and Solution Architects, DevOps Engineers, and Developers
Dive deep into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
Aurora is Amazon's cloud database that provides enterprise-grade capabilities at lower costs than traditional databases. It offers speed and availability through a distributed, fault-tolerant storage system and automatic scaling of storage and compute resources. Aurora provides cross-region replication for high availability and data locality. Engineering Aurora requires experience in databases, storage systems, and distributed systems.
(DAT207) Amazon Aurora: The New Amazon Relational Database Engine - Amazon Web Services
In July, AWS announced the launch of Amazon Aurora, a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Relational databases are a cornerstone of the enterprise IT landscape, powering business-critical applications of many kinds. Though they have been around for a while, current commercial relational databases have lagged behind in innovation. Amazon Aurora, a managed database service built for the cloud, is intended to change that. It targets the high-performance needs of business-critical applications with an emphasis on cost-effectiveness. In this session, we will look into how Aurora fits the needs of applications built and bought by enterprises to power their business. You will learn about the overall architecture, capabilities, and cost-effectiveness of Aurora, comparing it to current commercial database offerings. We will explore best practices for enterprises adopting Aurora for existing and new workloads, as well as strategies, tools, and techniques for migrating existing databases to Aurora. You will also hear from Expedia, one of the world's leading travel companies, on how they are using Amazon Aurora to power applications with high-performance database needs.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
Build on Amazon Aurora with MySQL Compatibility (DAT348-R4) - AWS re:Invent 2018 - Amazon Web Services
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. Join this session, and get started with the MySQL-compatible edition, discuss your existing application running on Aurora, or learn about recently announced features, such as Serverless or Parallel Query.
AWS June 2016 Webinar Series - Amazon Aurora Deep Dive - Optimizing Database ... - Amazon Web Services
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is a disruptive technology in the database space, bringing a new architectural model and distributed system techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share customer experiences from the field.
Learning Objectives:
Learn how Amazon Aurora delivers 5x the performance and 1/10th the cost
Learn best practices for using Amazon Aurora
(1) Amazon Redshift is a fully managed data warehousing service in the cloud that makes it simple and cost-effective to analyze petabytes of structured and semi-structured data. (2) It provides fast query performance by using massively parallel processing and columnar storage techniques. (3) Customers like NTT Docomo, Nasdaq, and Amazon have been able to analyze petabytes of data faster and at a lower cost using Amazon Redshift than with their previous on-premises solutions.
This document provides an overview of Amazon Redshift presented by Pavan Pothukuchi and Chris Liu. The agenda includes an introduction to Redshift, its benefits, use cases, and Coursera's experience using Redshift. Some key benefits highlighted are that Redshift is fast, inexpensive, fully managed, secure, and innovates quickly. Example use cases from NTT Docomo and Nasdaq are discussed. Chris Liu then discusses Coursera's experience moving from no data warehouse to using Redshift over three years, including their current ecosystem involving Redshift, other AWS services, and business intelligence applications. Lessons learned around thinking in Redshift, communicating with users, surprises, and reflections are also shared.
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar Series - Amazon Web Services
Relational databases are a cornerstone of the enterprise IT landscape, powering business-critical applications of many kinds. Though they have been around for a while, current commercial relational databases have lagged behind in innovation. Amazon Aurora, a managed database service built for the cloud, is intended to change that. It fulfils the high-performance, high-availability needs of business-critical applications with an emphasis on cost-effectiveness. In this session, we will look into how Aurora fits the needs of applications built and bought by enterprises to power their business.
Learning Objectives:
• Explore the overall architecture, capabilities, and cost-effectiveness of Aurora and see how it compares to commercial database offerings
• Learn best practices for enterprises adopting Aurora for existing and new workloads, as well as strategies, tools, and techniques for migrating existing databases to Aurora
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016 - Amazon Web Services
This session provides the attendee with an overview of Amazon RDS across different database types and then dives deep into the benefits and performance of Amazon Aurora.
AWS January 2016 Webinar Series - Amazon Aurora for Enterprise Database Appli... - Amazon Web Services
Amazon Aurora is a relational database service built from the ground up for the cloud. It is fully managed by AWS and provides enterprise-class availability, security, and performance while being simple and cost-effective. Aurora is designed to automatically scale throughput and storage, provide continuous backups, automated patching and replication across availability zones. It offers up to 15 low-latency read replicas and supports databases up to 64TB in size. Customers like Expedia and Alfresco are using Aurora to power their mission critical workloads at scale in a cost-effective manner compared to commercial databases.
2. What is Amazon Aurora?
Database reimagined for the cloud
Speed and availability of high-end commercial databases
Simplicity and cost-effectiveness of open source databases
Drop-in compatibility with MySQL and PostgreSQL
Simple pay as you go pricing
Delivered as a managed service
4. Scale-out, distributed architecture
A master and read replicas (each with its own SQL, transaction, and caching layers) run across Availability Zone 1, Availability Zone 2, and Availability Zone 3, all attached to a shared storage volume built from storage nodes with SSDs.
Purpose-built, log-structured distributed storage system designed for databases
Storage volume is striped across hundreds of storage nodes distributed over 3 different availability zones
Six copies of data, two copies in each availability zone, to protect against AZ+1 failures
Plan to apply the same principles to other layers of the stack
5. Leveraging the cloud ecosystem
Lambda: invoke Lambda events from stored procedures/triggers.
S3: load data from S3; store snapshots and backups in S3.
IAM: use IAM roles to manage database access control.
CloudWatch: upload system metrics and audit logs to CloudWatch.
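To make these ecosystem hooks concrete, here is a minimal Python sketch of the S3 bulk-load and Lambda-invocation paths. The cluster endpoint, credentials, bucket, table, and Lambda ARN are hypothetical, and the cluster is assumed to have IAM roles that allow it to reach S3 and Lambda; LOAD DATA FROM S3 and the mysql.lambda_async procedure are Aurora MySQL extensions whose availability depends on the engine version.

    import pymysql

    # Hypothetical connection details; the cluster needs IAM roles that allow
    # access to the S3 bucket and the Lambda function.
    conn = pymysql.connect(
        host="my-aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
        user="admin", password="secret", database="app", autocommit=True)

    with conn.cursor() as cur:
        # Bulk-load a CSV file directly from S3 (Aurora MySQL extension to LOAD DATA).
        cur.execute("""
            LOAD DATA FROM S3 's3://example-bucket/orders.csv'
            INTO TABLE orders
            FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'
        """)
        # Fire an asynchronous Lambda invocation from SQL, as a trigger or stored
        # procedure would (on Aurora MySQL versions that support it).
        cur.execute("""
            CALL mysql.lambda_async(
                'arn:aws:lambda:us-east-1:123456789012:function:notify_order',
                '{"order_id": 42}')
        """)
    conn.close()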
6. Automate administrative tasks
You: schema design, query construction, query optimization.
AWS: automatic fail-over, backup & recovery, isolation & security, industry compliance, push-button scaling, automated patching, advanced monitoring, routine maintenance.
Aurora takes care of your time-consuming database management tasks, freeing you to focus on your applications and business.
7. Aurora customer adoption
Aurora is used by ¾ of the top 100 AWS customers
Fastest growing service in AWS history
8. Who is moving to Aurora and why?
Customers using commercial engines: one tenth of the cost, no licenses; comparable performance and availability; integration with the cloud ecosystem; migration tooling and services.
Customers using MySQL engines: higher performance, up to 5x; better availability and durability; reduced cost, up to 60%; easy migration with no application change.
9. Data store for high-performance applications
A large genealogy company achieved <10 ms read latency and an order-of-magnitude reduction in projected costs by migrating DNA analysis and matching (millions of reads and writes) from Cassandra (>100 nodes) to Aurora (~10 clusters).
Data sharded across ~10 Aurora R3.XLarge clusters, with sufficient room for vertical scaling
OLAP + OLTP: DNA matching algorithms require millions of reads and batch updates
13. Write performance / read performance
MySQL SysBench results on R3.8XL (32 cores / 244 GB RAM): 5x faster than RDS MySQL 5.6 & 5.7.
Five times higher throughput than stock MySQL, based on industry-standard benchmarks.
[Charts compare write throughput (scale up to ~150,000) and read throughput (scale up to ~700,000) for Aurora, MySQL 5.6, and MySQL 5.7.]
14. Aurora scaling
With user connections (SysBench throughput), up to 8x faster:
Connections | Amazon Aurora | RDS MySQL w/ 30K IOPS
50 | 40,000 | 10,000
500 | 71,000 | 21,000
5,000 | 110,000 | 13,000

With number of tables, up to 11x faster:
Tables | Amazon Aurora | MySQL I2.8XL local SSD | RDS MySQL w/ 30K IOPS (single AZ)
10 | 60,000 | 18,000 | 25,000
100 | 66,000 | 19,000 | 23,000
1,000 | 64,000 | 7,000 | 8,000
10,000 | 54,000 | 4,000 | 5,000

With database size (SysBench), up to 21x faster:
DB size | Amazon Aurora | RDS MySQL w/ 30K IOPS
1 GB | 107,000 | 8,400
10 GB | 107,000 | 2,400
100 GB | 101,000 | 1,500
1 TB | 26,000 | 1,200

With database size (TPC-C), up to 136x faster:
DB size | Amazon Aurora | RDS MySQL w/ 30K IOPS
80 GB | 12,582 | 585
800 GB | 9,406 | 69
15. How did we achieve this?
Do less work: do fewer I/Os, minimize network packets, cache prior results, offload the database engine.
Be more efficient: process asynchronously, reduce the latency path, use lock-free data structures, batch operations together.
Databases are all about I/O. Network-attached storage is all about packets/second. High-throughput processing is all about context switches.
16. Aurora I/O profile
Types of write: log, binlog, data, double-write, and FRM files.
MySQL with replica: the primary instance in AZ 1 and the replica instance in AZ 2 each write all five types to Amazon EBS (with an EBS mirror), plus backups to Amazon S3.
Amazon Aurora: the primary in AZ 1 and replicas in AZ 2 and AZ 3 issue only log records, as asynchronous 4/6-quorum distributed writes, with backups to Amazon S3.
MySQL I/O profile for a 30-minute SysBench run: 780K transactions; 7,388K I/Os per million txns (excludes mirroring, standby); average 7.4 I/Os per transaction.
Aurora I/O profile for a 30-minute SysBench run: 27,378K transactions (35x more); 0.95 I/Os per transaction (6x amplification), 7.7x less.
17. Aurora lock management
[Diagram contrasts the MySQL lock manager with the Aurora lock manager for concurrent scan, insert, and delete operations on lock chains.]
Same locking semantics as MySQL
Concurrent access to lock chains
Multiple scanners in individual lock chains
Lock-free deadlock detection
Needed to support many concurrent sessions and high update throughput
19. Online DDL: Aurora vs. MySQL
MySQL: full table copy; rebuilds all indexes; needs temporary space for DML operations; DDL operation impacts DML throughput; table lock applied to apply DML changes. [Diagram: B-tree index with root and leaf pages.]
Amazon Aurora: DDL is recorded per table as (table name, operation, column name, timestamp), e.g. Table 1 / add-col / column-abc / t1, Table 2 / add-col / column-qpr / t2, Table 3 / add-col / column-xyz / t3. Schema versioning is used to decode each block, and a modify-on-write primitive upgrades blocks to the latest schema. Currently supports adding a NULLable column at the end of a table; adding a column anywhere, and with a default, is coming soon.
22. 6-way replicated storage: survives catastrophic failures
Six copies across three availability zones
4 out of 6 write quorum; 3 out of 6 read quorum
Peer-to-peer replication for repairs
Volume striped across hundreds of storage nodes
[Diagrams show the SQL/transaction/caching tiers over AZ 1-3: one failure scenario retains read and write availability, a larger one still retains read availability.]
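The 4-of-6 write and 3-of-6 read quorums can be sanity-checked with textbook quorum arithmetic. The sketch below is illustrative only, not Aurora code; it simply verifies the overlap and fault-tolerance properties claimed above.

    # Illustrative check of the quorum arithmetic only; not Aurora code.
    V = 6    # total copies: two per Availability Zone across three AZs
    V_w = 4  # write quorum: 4 of 6
    V_r = 3  # read quorum: 3 of 6

    assert V_r + V_w > V   # 3 + 4 > 6: every read quorum overlaps the latest write
    assert 2 * V_w > V     # 2 * 4 > 6: two conflicting writes cannot both succeed

    assert V - 2 >= V_w    # losing a whole AZ (2 copies) still allows writes
    assert V - 3 >= V_r    # losing an AZ plus one more copy still allows reads
    print("quorum properties hold")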
23. Up to 15 promotable read replicas
[Diagram: a master and read replicas share the distributed storage volume; the replicas sit behind a reader endpoint.]
► Up to 15 promotable read replicas across multiple availability zones
► Redo-log-based replication leads to low replica lag – typically < 10 ms
► Reader endpoint with load balancing and auto-scaling *NEW*
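As a usage illustration, the sketch below sends writes to the cluster (writer) endpoint and reads to the reader endpoint with a plain MySQL client. The endpoint names, credentials, and table are placeholders, not values from the presentation.

    import pymysql

    WRITER = "my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com"     # cluster endpoint
    READER = "my-cluster.cluster-ro-xxxx.us-east-1.rds.amazonaws.com"  # reader endpoint

    def connect(host):
        return pymysql.connect(host=host, user="admin", password="secret",
                               database="app", autocommit=True)

    # Writes go to the cluster (writer) endpoint.
    writer = connect(WRITER)
    with writer.cursor() as cur:
        cur.execute("INSERT INTO events (msg) VALUES (%s)", ("hello",))
    writer.close()

    # Read-only traffic uses the reader endpoint, which load-balances
    # across the available replicas.
    reader = connect(READER)
    with reader.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM events")
        print(cur.fetchone()[0])
    reader.close()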
25. Cross-region read replicas: faster disaster recovery and enhanced data locality
Promote a read replica to a master for faster recovery in the event of a disaster
Bring data close to your customers' applications in different regions
Promote to a master for easy migration
26. Availability is about more than HW failures
You also incur availability disruptions when you:
1. Patch your database software – Zero Downtime Patch
2. Perform large-scale database reorganizations – Fast Cloning
3. Recover from DBA errors requiring database restores – Online Point-in-Time Restore
27. Zero downtime patching
Before ZDP: when the old DB engine is swapped for the new DB engine, networking state and application state are lost, and user sessions terminate during patching.
With ZDP: application state and networking state are preserved across the engine swap (the storage service is unaffected), so user sessions remain active through patching.
28. Database backtrack
Backtrack brings the database to a point in time without requiring a restore from backups
• Backtrack from an unintentional DML or DDL operation
• Backtrack is not destructive; you can backtrack multiple times to find the right point in time
[Timeline diagram: the database advances through t0-t4; rewinding to t1 and later to t3 makes the changes after those points invisible.]
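Backtrack can also be triggered programmatically. Below is a minimal boto3 sketch, assuming a hypothetical cluster name and that the cluster was created with a backtrack window enabled.

    from datetime import datetime, timedelta, timezone
    import boto3

    rds = boto3.client("rds")

    # Rewind the cluster 15 minutes, e.g. to undo an accidental DML/DDL change.
    rds.backtrack_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",
        BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=15),
        UseEarliestTimeOnPointInTimeUnavailable=True,  # clamp if the exact time is unavailable
    )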
30. Security and compliance
Encryption to secure data at rest using customer-managed keys
• AES-256; hardware accelerated
• All blocks on disk and in Amazon S3 are encrypted
• Key management via AWS KMS
Encrypted cross-region replication and snapshot copy; SSL to secure data in transit
Advanced auditing and logging without any performance impact
Database activity monitoring *NEW*
[Diagram: customer master key(s) in KMS protect the data keys (Data Key 1-4) used by the database engine and storage nodes.]
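Encryption at rest is selected when the cluster is created. A minimal boto3 sketch follows, with a hypothetical cluster identifier and customer-managed KMS key alias; networking and credential details are omitted for brevity.

    import boto3

    rds = boto3.client("rds")
    rds.create_db_cluster(
        DBClusterIdentifier="my-encrypted-cluster",
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="choose-a-strong-password",
        StorageEncrypted=True,            # AES-256 encryption at rest
        KmsKeyId="alias/my-aurora-key",   # customer-managed key in AWS KMS
    )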
31. Aurora auditing
MariaDB server_audit plugin: each event (DDL, DML, Query, DCL, Connect) is turned into an event string and written to the audit file sequentially.
Aurora native audit support: events create their strings in parallel and push them onto a latch-free queue, with multiple writers flushing to file; Aurora can sustain over 500K audit events/sec.
SysBench select-only workload on an 8xlarge instance:
Setting | MySQL 5.7 | Aurora | Advantage
Audit off | 95K | 615K | 6.47x
Audit on | 33K | 525K | 15.9x
32. Database activity monitoring
Continuously monitor activity in your DB clusters by sending audit logs from Amazon Aurora to Amazon CloudWatch Logs.
Search: look for specific events across log files.
Metrics: measure activity in your Aurora DB cluster.
Visualizations: create activity dashboards.
Alarms: get notified or take actions.
Export to S3 for long-term archival; analyze logs using Amazon Athena; visualize logs with Amazon QuickSight.
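Once audit logs are flowing into CloudWatch Logs, they can be searched from code as well as from the console. A minimal boto3 sketch; the log group name follows the usual /aws/rds/cluster/<name>/audit pattern but should be treated as an assumption for your environment.

    import boto3

    logs = boto3.client("logs")
    resp = logs.filter_log_events(
        logGroupName="/aws/rds/cluster/my-aurora-cluster/audit",  # assumed log group
        filterPattern="DROP",   # e.g. look for destructive DDL statements
        limit=50,
    )
    for event in resp["events"]:
        print(event["timestamp"], event["message"])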
33. Industry certifications
Amazon Aurora gives each database instance IP firewall protection
Aurora offers transparent encryption at rest and SSL protection for data in transit
Amazon VPC lets you isolate and control network configuration and connect securely to your IT infrastructure
AWS Identity and Access Management provides resource-level permission controls
[Slide also shows compliance certification badges, several marked *New*.]
34. Performance Insights
Dashboard showing load on the database
• Easy
• Powerful
Identifies the source of bottlenecks
• Top SQL
Adjustable time frame
• Hour, day, week, month
• Up to 35 days of data
[Dashboard plots database load against the Max CPU line.]
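The same load metric shown on the dashboard can be pulled through the Performance Insights API. A minimal boto3 sketch follows, using a hypothetical DbiResourceId for the instance.

    from datetime import datetime, timedelta, timezone
    import boto3

    pi = boto3.client("pi")
    end = datetime.now(timezone.utc)
    resp = pi.get_resource_metrics(
        ServiceType="RDS",
        Identifier="db-ABCDEFGHIJKL123456",         # DbiResourceId of the instance (hypothetical)
        MetricQueries=[{"Metric": "db.load.avg"}],  # average active sessions (database load)
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        PeriodInSeconds=60,
    )
    for point in resp["MetricList"][0]["DataPoints"]:
        print(point["Timestamp"], point.get("Value"))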
35. Amazon Aurora is easy to use
Automated storage management, security and compliance, advanced monitoring, database migration.
36. Simplify storage management
Continuous, incremental backups to Amazon S3
Instantly create user snapshots—no performance impact
Automatic storage scaling up to 64 TB—no performance impact
Automatic restriping, mirror repair, hot spot management, encryption
Up to 64 TB of storage, auto-incremented in 10 GB units
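User snapshots can be taken from the console, CLI, or SDK. A minimal boto3 sketch with hypothetical identifiers:

    import boto3

    rds = boto3.client("rds")
    rds.create_db_cluster_snapshot(
        DBClusterSnapshotIdentifier="my-aurora-cluster-before-release-42",
        DBClusterIdentifier="my-aurora-cluster",
    )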
37. Fast database cloning
Create a copy of a database without duplicate storage costs
• Creation of a clone is nearly instantaneous – we don't copy data
• Data copy happens only on write – when original and cloned volume data differ
Typical use cases:
• Clone a production DB to run tests
• Reorganize a database
• Save a point-in-time snapshot for analysis without impacting the production system
[Diagram: a production database and its clones (including clones of clones) serve production applications, dev/test applications, and benchmarks.]
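Clones are created through the point-in-time-restore API with the copy-on-write restore type. A minimal boto3 sketch with hypothetical cluster and instance names; note that a DB instance still has to be added before the clone accepts connections.

    import boto3

    rds = boto3.client("rds")
    # The clone shares storage pages with the source; pages are copied only on write.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="my-aurora-cluster-clone",
        SourceDBClusterIdentifier="my-aurora-cluster",
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )
    # Add a DB instance so the clone can accept connections.
    rds.create_db_instance(
        DBInstanceIdentifier="my-aurora-clone-instance-1",
        DBClusterIdentifier="my-aurora-cluster-clone",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
    )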
39. Leverage the MySQL and AWS ecosystems
MySQL ecosystem: query and monitoring, business intelligence, and data integration tools (source: Amazon).
"We ran our compatibility test suites against Amazon Aurora and everything just worked." - Dan Jewett, Vice President of Product Management at Tableau
AWS ecosystem: Lambda, IAM, CloudWatch, S3, Route 53, KMS, VPC, SWF.
41. Amazon Aurora migration options
From where | Recommended option
RDS | Console-based automated snapshot ingestion and catch-up via binlog replication.
EC2, on premises | Binary snapshot ingestion through S3 and catch-up via binlog replication.
EC2, on premises, RDS | Schema conversion using SCT and data migration via DMS.
(The first two options apply to MySQL source databases; the SCT/DMS path covers other source engines.)
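For the MySQL migration paths above, the catch-up step uses the RDS-provided replication stored procedures. A minimal sketch follows, assuming hypothetical hosts and credentials and the binlog file/position recorded when the source snapshot was taken.

    import pymysql

    aurora = pymysql.connect(
        host="my-new-aurora.cluster-xxxx.us-east-1.rds.amazonaws.com",
        user="admin", password="secret", autocommit=True)

    with aurora.cursor() as cur:
        # Point the new Aurora cluster at the existing MySQL source, using the
        # binlog coordinates recorded when the snapshot was taken...
        cur.execute(
            "CALL mysql.rds_set_external_master("
            "'source-mysql.example.com', 3306, "
            "'repl_user', 'repl_password', "
            "'mysql-bin-changelog.000123', 4, 0)")
        # ...and replicate until the application is cut over.
        cur.execute("CALL mysql.rds_start_replication")
    aurora.close()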
42. Amazon Aurora saves you money
1/10th of the cost of commercial databases
Cheaper than even MySQL
43. Cost of ownership: Aurora vs. MySQL
MySQL configuration hourly cost
Instances: primary r3.8XL, standby r3.8XL, and two replica r3.8XLs at $1.33/hr each
Storage: 6 TB / 10K PIOPS volumes for the primary and standby, 6 TB / 5K PIOPS volumes for the replicas (the 10K PIOPS volumes are shown at $2.42/hr)
Instance cost: $5.32/hr
Storage cost: $8.30/hr
Total cost: $13.62/hr
44. Cost of ownership: Aurora vs. MySQL
Aurora configuration hourly cost
Instances: primary r3.8XL and two replica r3.8XLs at $1.62/hr each
Storage: single shared 6 TB volume at $4.43/hr
Instance cost: $4.86/hr
Storage cost: $4.43/hr
Total cost: $9.29/hr (31.8% savings)
*At a macro level, Aurora saves over 50% in storage cost compared to RDS MySQL.
No idle standby instance
Single shared storage volume
No PIOPS – pay-for-use I/O
Reduction in overall IOPS
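The savings figure follows directly from the two configurations. A small worked check using the hourly prices quoted on these slides (illustrative, not current pricing):

    # Hourly prices as quoted on the slides (illustrative, not current pricing).
    mysql_instances = 4 * 1.33   # primary + standby + 2 replicas (r3.8XL)
    mysql_storage = 8.30         # four provisioned-IOPS volumes
    mysql_total = mysql_instances + mysql_storage          # 5.32 + 8.30 = 13.62

    aurora_instances = 3 * 1.62  # primary + 2 replicas, no idle standby
    aurora_storage = 4.43        # single shared volume, pay-per-use I/O
    aurora_total = aurora_instances + aurora_storage       # 4.86 + 4.43 = 9.29

    savings = 1 - aurora_total / mysql_total
    print(f"MySQL ${mysql_total:.2f}/hr vs. Aurora ${aurora_total:.2f}/hr "
          f"-> {savings:.1%} savings")                     # ~31.8%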
45. Cost of ownership: Aurora vs. MySQL
Further opportunity for savings
Use a smaller instance size: downsize the primary and two replicas from r3.8XL to r3.4XL at $0.81/hr each
Pay-as-you-go storage: single shared 6 TB volume at $4.43/hr
Instance cost: $2.43/hr
Storage cost: $4.43/hr
Total cost: $6.86/hr (49.6% savings)
Storage IOPS assumptions:
1. Average IOPS is 50% of max IOPS
2. 50% savings from shipping logs vs. full pages
46. Higher performance, lower cost
Fewer instances needed
Smaller instances can be used
No need to pre-provision storage
No additional storage for read replicas
Safe.com lowered their bill by 40% by switching from sharded MySQL to a single Aurora instance.
Double Down Interactive (gaming) lowered their bill by 67% while also achieving better latencies (most queries ran faster) and lower CPU utilization.
47. Higher performance, lower cost
"Our application usage had grown exponentially over the last year. We were looking for horizontal scaling of our database to address the increased load. Amazon Aurora's relatively low replication lag has helped us handle current load and positions us well for future growth."
7x database connections, 10x CPU utilization, 2x response time
62. Some other Aurora sessions
DAT301 – Deep Dive on Amazon Aurora MySQL-compatible Edition – #1 Wednesday 4:45pm, #2 Friday 11:30am
DAT315 – A Practitioner's Guide on Migrating to, and Running on, Amazon Aurora – Thursday 4pm
DAT334 – Amazon Aurora Performance Optimization – Wednesday 12pm
DAT331 – Airbnb Runs on Amazon Aurora – Wednesday 1pm
DAT336 – Amazon Aurora Storage Demystified: How It All Works – Wednesday 4pm
DAT338 – Migrating from Oracle to Amazon Aurora – Thursday 5:30pm
DAT402 – Deep Dive on the Amazon Aurora PostgreSQL-compatible Edition – Wednesday 4pm