DOWNLOAD SO ANIMATIONS WORK PROPERLY. THANK YOU!
Amazon Aurora is a cloud relational database built from the ground up with a new, ingenious architecture. This video is part of a series.
This is Section 2.0 of my Amazon Aurora Deep Dive, focused primarily on its architecture, with 9 videos to complement this slide deck. It begins at https://youtu.be/Cnz6mSzca1Y . In this Section I cover the architecture from Section 1.0 of my Amazon Aurora Deep Dive videos at a deeper level, as well as many more amazing feature innovations:
• The Amazon Aurora Database Cluster architecture & how it works, plus more samples of Aurora Clusters using Replicas as well as Cross-Region Read Replicas
• How Amazon Aurora compares to MySQL for read scaling, Aurora's Endpoints, & how Aurora's replication compares to MySQL insofar as lag time is concerned
• A DEEP dive into Amazon Aurora's Logging, Storage, Caching, & Indexing & how that's accomplished, covering log-structured storage, B-tree indexes, & Aurora's garbage collection
• How Aurora accomplishes Instant Crash Recovery & Survivable Caches
• A DEEP dive into Amazon Aurora's Input/Output (IO) architecture (you'll find this amazing!): how Aurora has fewer IOs/Second, how Network-Attached Storage optimizes Packets/Second, & how Aurora does more asynchronous processing
• IO traffic in RDS MySQL compared with IO traffic in BOTH an Aurora DATABASE & its storage nodes
• Traditional database commits vs. Aurora's asynchronous group commits, & Aurora's creative Adaptive Thread Pool
• IO traffic in read replicas of MySQL vs. Amazon Aurora, & how to understand Aurora's scalability (up or out) & elasticity
• Aurora's fault tolerance & restoration architecture: how it accomplishes fast, predictable failovers, its backup & restore features, read replica priority tiers, & how to simulate failures using SQL for testing
• How Aurora's performance is blazingly fast via enhancements, plus benchmarking
• Aurora's security architecture
NOTE: This presentation was made a couple of years ago, but it still gives you a great foundation from which to progress in your understanding.
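As a taste of the failover behavior described above: Aurora read replicas carry a promotion priority tier (0-15; lower tiers are promoted first, with ties broken by instance size). Here is a minimal sketch of that selection logic; the function name and the data shape are illustrative, not Aurora's actual implementation.

```python
# Hedged sketch of Aurora's documented failover-target selection:
# promote the replica with the lowest priority tier, breaking ties
# by choosing the largest instance. Field names are illustrative.

def pick_failover_target(replicas):
    """replicas: list of dicts with 'id', 'tier', and 'size_gib' keys."""
    if not replicas:
        return None
    # Sort by tier ascending, then by size descending.
    best = sorted(replicas, key=lambda r: (r["tier"], -r["size_gib"]))[0]
    return best["id"]

replicas = [
    {"id": "replica-a", "tier": 1, "size_gib": 64},
    {"id": "replica-b", "tier": 0, "size_gib": 32},
    {"id": "replica-c", "tier": 0, "size_gib": 128},
]
print(pick_failover_target(replicas))  # replica-c: tier 0 and largest
```

In a real cluster you set the tier per instance (e.g. via the console or API) so that your preferred standby wins the promotion.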
Cloud Developer Conference May 2011 SiliconIndia: Design for Failure - High ..., by Harish Ganesan
These slides were presented at the SiliconIndia Cloud Developer Conference in May 2011. The presentation concentrates on architecting High Availability solutions using AWS.
RMG206 Introduction to Amazon Elastic Beanstalk - AWS re:Invent 2012, by Amazon Web Services
Are you looking to build the next viral Facebook application or mobile game? Are you worried about the viral growth of your web application? Are you tired of managing servers and installing software? This session introduces AWS Elastic Beanstalk, the easiest way to deploy and manage web applications on AWS. We’ll show you how you can write your application and let Elastic Beanstalk do the rest.
(APP201) Going Zero to Sixty with AWS Elastic Beanstalk | AWS re:Invent 2014, by Amazon Web Services
AWS Elastic Beanstalk provides an easy way for you to quickly deploy, manage, and scale applications in the AWS cloud. This session shows you how to deploy your code to AWS Elastic Beanstalk, easily enable or disable application functionality, and perform zero-downtime deployments through interactive demos and code samples for both Windows and Linux.
Are you new to AWS Elastic Beanstalk? Get up to speed for this session by first completing the 60-minute Fundamentals of AWS Elastic Beanstalk lab in the self-paced Lab Lounge.
Developing applications on Amazon Web Services (AWS) or moving your business into the cloud is more straightforward than you think. Whether you are a developer eager to learn new skills, a solutions architect who wants to solve existing technology problems, or an IT professional who wants access to cost-effective, on-demand computing resources, this workshop is for you.
These slides feature some of the most popular Amazon Web Services: Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), Amazon CloudFront, Amazon Elastic Block Store (EBS) and Amazon Relational Database Service (RDS).
Amazon EC2 YouTube Recording: http://youtu.be/TORzO9Oc9oU
Amazon EC2 Demo: http://youtu.be/kMExnVKhmYc
In this presentation, Jeff Barr introduces AWS, with a focus on EC2, and then shows how to use AWS Elastic Beanstalk with Git-based deployment of a PHP application.
Wordpress site scaling architecture on cloud infrastructure with AWS, by Le Kien Truc
Wordpress site scaling architecture on cloud infrastructure with AWS. The architecture includes the database, CDN, and deployment model. It's just a high-level concept design.
AWS Summit 2013 | Auckland - Your First Week with Amazon EC2, by Amazon Web Services
Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity in the cloud and is often the starting point for your first week using AWS. This session will introduce these concepts, along with the fundamentals of EC2, by employing an agile approach that is made possible by the cloud. Attendees will experience the reality of what a first week on EC2 looks like from the perspective of someone deploying an actual application on EC2. You will follow them as they progress from deploying their entire application from an EC2 AMI on day 1 to more advanced features and patterns available in EC2 by day 5. Throughout the process we will identify cloud best practices that can be applied to your first week on EC2 and beyond.
This tutorial is an overview of Elastic Beanstalk. It includes an introduction to Elastic Beanstalk, its working architecture, basic operation, a console demo, and a summary. The tutorial begins with an introduction to Elastic Beanstalk: an overview of the service, how it manages applications, and its basic features.
Next is a section on the working architecture, which explains the basic architecture and workflow of Elastic Beanstalk in detail, along with the benefits of using it, such as root access and easy configuration.
It also covers the environments Elastic Beanstalk can work with, such as Docker and Node.js, as well as sample policies. The last section includes a demo of the Elastic Beanstalk console and a summary of the practices which take place "under the hood".
Disaster Recovery Site on AWS - Minimal Cost Maximum Efficiency (STG305) | AW..., by Amazon Web Services
Implementation of a disaster recovery (DR) site is crucial for the business continuity of any enterprise. Due to the fundamental nature of features like elasticity, scalability, and geographic distribution, DR implementation on AWS can be done at 10-50% of the conventional cost. In this session, we do a deep dive into proven DR architectures on AWS and the best practices, tools and techniques to get the most out of them.
Amazon AWS
What is EC2?
EC2 zones
How to create instance on EC2?
SSH access of EC2
Public vs. Internal vs. Elastic IP
EC2 Security Group
EC2 demo app (Ruby)
What is S3 bucket?
S3 demo app (Ruby)
Deploy, Scale and Manage your Application with AWS Elastic Beanstalk, by Amazon Web Services
AWS Elastic Beanstalk provides an easy way to quickly deploy, manage, and scale applications in the AWS cloud. Through interactive demos, this session will discuss the best practices for deploying and scaling your application, provisioning additional AWS resources, and performance tuning. We will also do a deep dive into the recently launched Elastic Beanstalk features and cover some of the best practices for using Elastic Beanstalk. This session will benefit both new and experienced users of Elastic Beanstalk.
AWS CloudFormation template with single & redundant system, by Naoya Hashimoto
* Use CloudFormation to create Stacks composed of VPC, Internet Gateway, Route Table, ELB, EC2 Instance, EBS Volumes
* Single pattern: EC2 instances (web server and DB server) in the same AZ
* Redundant pattern: EC2 instances (web server and DB server) across multiple AZs
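The two patterns above can be sketched as a minimal CloudFormation fragment. This is illustrative only: the resource names, AMI ID, and referenced parameters (PublicSubnetA, DBPassword) are placeholders, and the MultiAZ flag is what separates the redundant pattern from the single pattern.

```yaml
# Illustrative fragment of a stack with a web instance and an RDS database.
# Placeholder IDs and parameter references must be supplied by the template.
Resources:
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx          # placeholder AMI
      InstanceType: t2.micro
      SubnetId: !Ref PublicSubnetA   # assumed parameter/resource
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.micro
      AllocatedStorage: "20"
      MultiAZ: true                  # redundant pattern; false = single pattern
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword  # assumed parameter
```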
PowerPoint slides for a presentation for the Queensland SQL Server User Group that covered application candidates/use cases, SQL performance considerations including road tests of new SQL 2014 performance features on AWS EC2 instances, security, HA/DR, and licensing.
AWS Webcast - Implementing Windows and SQL Server for High Availability on AWS, by Amazon Web Services
This webinar is on high availability features for Microsoft Windows Server and SQL Server running on the AWS Cloud. Windows Server Failover Clustering (WSFC) and SQL AlwaysOn Availability Groups are part of the underpinnings for many enterprise-class solutions, including Microsoft SharePoint and .NET applications.
AWS Summit 2013 | Singapore - Your First Week with Amazon EC2, by Amazon Web Services
Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity in the cloud and is often the starting point for your first week using AWS. This session will introduce these concepts, along with the fundamentals of EC2, by employing an agile approach that is made possible by the cloud. Attendees will experience the reality of what a first week on EC2 looks like from the perspective of someone deploying an actual application on EC2. You will follow them as they progress from deploying their entire application from an EC2 AMI on day 1 to more advanced features and patterns available in EC2 by day 5. Throughout the process we will identify cloud best practices that can be applied to your first week on EC2 and beyond.
AWS Cloud Design Patterns (a.k.a. CDP) are generally repeatable solutions to commonly occurring problems in cloud architecting. In this session, we introduce CDP and explain how you can apply CDPs in practical scenarios such as photo sharing, e-commerce, and web site campaigns.
In this webinar we will take you on a journey, starting with the basics of key creation and security groups and ending with an Auto Scaling application driven by dynamic policies.
Learning Objectives:
• Understand how to use Amazon EC2 beyond a simple single instance use case
• Learn about instance bootstrapping, AMIs and Elastic IPs
• Discover how to create an Elastic Load Balancer and integrate it with Auto Scaling
• Learn how to create Auto Scaling configurations and the tools you need to drive Auto Scaling policies
• Find out how to create an Amazon RDS database and how to test failover between Availability Zones
Who Should Attend:
• Existing Amazon EC2 users, Developers, Engineers and Solutions Architects
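The objectives above center on dynamic Auto Scaling policies. As a hedged illustration (not AWS's implementation), the arithmetic behind a target-tracking-style policy can be sketched as follows; the target value, metric, and group size bounds are assumptions.

```python
import math

# Sketch of target-tracking scaling arithmetic: desired capacity grows
# or shrinks proportionally to how far the observed metric (e.g. average
# CPU) sits from the target, clamped to the group's min/max size.

def desired_capacity(current, metric, target, min_size=1, max_size=10):
    if metric <= 0:
        return min_size  # no load observed; shrink to the floor
    raw = math.ceil(current * metric / target)
    return max(min_size, min(max_size, raw))

print(desired_capacity(4, 80.0, 50.0))  # 7 -> scale out under high CPU
print(desired_capacity(4, 20.0, 50.0))  # 2 -> scale in when load drops
```

The ceiling keeps scale-out conservative (never under-provisions relative to the ratio), which mirrors the general behavior described for target tracking.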
AWS has a number of services that help enterprise customers deploy solutions that meet high performance, security, and reliability requirements. SQL Server is no exception. In this session, we will explore the different options that exist today to help enterprises meet those types of requirements. Another key capability in AWS is flexibility. Multiple options exist for how enterprises can deploy SQL Server in AWS. We will talk in detail about how to choose between a managed database model like Relational Database Service (RDS) or core compute model like Elastic Compute Cloud (EC2). Finally, we’ll wrap up with an exploration of different operational aspects of SQL Server in AWS.
Deploying a Disaster Recovery Site on AWS: Minimal Cost with Maximum Efficiency, by Amazon Web Services
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with Disaster Recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your Disaster Recovery plan and discuss example scenarios showing how to recover from a disaster.
Amazon Aurora Relational Database Built for the AWS Cloud, Version 1 Series, by DataLeader.io
DOWNLOAD THE PRESENTATION TO SEE THE ANIMATIONS PROPERLY.
Amazon Aurora has been the fastest growing service in AWS history since 2016!
Amazon Aurora is a cloud relational database built from the ground up with a new, ingenious architecture. This video is part of a series.
Section 1.0 here on Amazon Aurora has 16 videos! Skip over the quizzes if you'd like. Amazon Aurora has been the fastest growing Service in AWS history since September 2016 & STILL IS TODAY (2/9/2019)! I cover what makes Amazon Aurora so unique & perfect for analytics that must use a relational database. I describe how it came to be, its features, its business value, & some comparisons between Amazon Aurora & Amazon RDS for MySQL (Aurora now supports PostgreSQL, & there's also a Serverless version!). I cover its high performance & why/how it accomplishes that, a high-level view of Amazon Aurora's architecture, its ability to scale both up & out, its high availability & durability & how that's achieved, how to secure it, & a few ways to take advantage of different pricing options. It also covers Database Storage & Input/Output (IO), backups, AWS' "Simple Monthly Calculator" (which has been updated since making this video), & how its pricing compares to SQL Server.
AWS re:Invent 2016: Workshop: Stretching Scalability: Doing more with Amazon ..., by Amazon Web Services
Easy scalability is a powerful feature of Amazon Aurora. Scalability in its actual definition refers to being able to get larger or smaller depending on need. Amazon Aurora allows you to achieve this easily by scaling the database instance up or down and adding or removing read replicas. Scaling across regions brings additional resilience to your architectures and can boost your application performance due to geographic proximity. You can perform all of these scaling operations through the Aurora console. You can also automate instance and read scaling using Lambda functions or scripts based on the usage pattern you define. You can extend the automation by feeding your database usage data from Aurora Enhanced Monitoring into machine learning to provide more sophisticated predictive patterns to drive your automation. In this session we will do a deep dive into how scalability works in Aurora and how to make the best use of it to reduce your cost, increase application performance, and architect resilient applications.
You should have good database knowledge and at least some experience with Amazon RDS or Amazon Aurora and should bring your own laptop.
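A minimal sketch of the Lambda-style automation the workshop describes for read scaling. The CPU thresholds and function name are assumptions for illustration; the 15-replica ceiling is Aurora's documented per-cluster limit.

```python
# Illustrative decision logic for automated Aurora read scaling: add a
# replica under sustained high average CPU, remove one when load is low.
# Thresholds are assumed values, not AWS recommendations.

def scaling_action(replica_count, avg_cpu, high=70.0, low=20.0, max_replicas=15):
    if avg_cpu > high and replica_count < max_replicas:
        return "add-replica"
    if avg_cpu < low and replica_count > 1:
        return "remove-replica"
    return "no-op"

print(scaling_action(2, 85.0))   # add-replica
print(scaling_action(5, 10.0))   # remove-replica
print(scaling_action(3, 45.0))   # no-op
```

In practice a Lambda function would compute avg_cpu from Enhanced Monitoring or CloudWatch data and call the RDS API to create or delete the replica instance.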
Backing up Amazon EC2 with Amazon EBS Snapshots - June 2017 AWS Online Tech T..., by Amazon Web Services
Learning Objectives:
- Learn how to use snapshots effectively to back up EC2 Instances
- Learn how to tag snapshots and leverage tagging for tracking costs
- Learn how to automate snapshot management
We've made it easy to make a simple point-in-time backup of your Amazon EC2 Instances. In this tech talk, you will learn how to use Amazon EBS snapshots to back up your Amazon EC2 environment. We will review the basics of how snapshots work as well as how to tag snapshots, track costs, and automate snapshots leveraging AWS Lambda. We will describe best practices and share tips for success throughout.
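The automation idea above can be sketched as a small retention filter. This is a hedged illustration: the record shape loosely mirrors the EC2 DescribeSnapshots response (SnapshotId, StartTime), but in a real Lambda you would feed it from boto3 rather than inline data, and the retention window is an assumption.

```python
from datetime import datetime, timedelta, timezone

# Select snapshots older than a retention window, e.g. as the deletion
# step of a snapshot-lifecycle Lambda. Data here is inline for clarity.

def expired_snapshots(snapshots, retention_days, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

now = datetime(2017, 6, 30, tzinfo=timezone.utc)
snaps = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2017, 5, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2017, 6, 25, tzinfo=timezone.utc)},
]
print(expired_snapshots(snaps, retention_days=14, now=now))  # ['snap-old']
```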
Introduction to Amazon Web Services - How to Scale your Next Idea on AWS: A ..., by Amazon Web Services
Building Powerful Web Applications in the AWS Cloud: A Love Story - design patterns in web-based cloud architecture. Jinesh Varia gave this talk at Cloud Connect and several other places.
http://aws.typepad.com/aws/2011/03/building-powerful-web-applications-in-the-aws-cloud-a-love-story.html
Amazon Aurora Getting Started Guide - Level 0, by kartraj
Introduction to Amazon Aurora
Amazon Aurora: applying a service-oriented architecture to the database
Aurora Makes it Easy to Run Your Databases
Aurora simplifies storage management
Aurora simplifies Data Security
Aurora is Highly Available
Amazon EC2 forms the backbone of the compute platform for hundreds of thousands of AWS customers, but understanding how to fully utilize EC2 and related services can be challenging.
In this webinar, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Learning Objectives:
Understand how to use Amazon EC2 beyond a simple single instance use case
Learn about instance bootstrapping, AMIs and Elastic IPs
Discover how to create an Elastic Load Balancer and integrate it with Auto Scaling
Learn how to create Auto Scaling configurations and the tools you need to drive Auto Scaling policies
Find out how to create an Amazon RDS database and how to test failover between Availability Zones
Who Should Attend:
Existing Amazon EC2 users, Developers, Engineers and Solutions Architects
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
For more training on AWS, visit: https://www.qa.com/amazon
AWS Loft | London - Amazon EC2: Masterclass by Ian Massingham, Chief Evangelist EMEA, April 18, 2016
AWS re:Invent 2016: Amazon Aurora Best Practices: Getting the Best Out of You..., by Amazon Web Services
Amazon Aurora is a fully managed relational database engine that provides higher performance, availability and durability than previously possible using conventional monolithic database architectures. Since launching a year ago, we have continued adding many new features and capabilities to Aurora. In this session, AWS Aurora experts will discuss the best practices that will help you put these capabilities to the best use. You will also hear from Amazon Aurora customer Intercom on the best practices they adopted for moving live databases with over two billion rows to a new datastore in Amazon Aurora with almost no downtime or lost records.
Intercom was founded to provide a fundamentally new way for Internet businesses to communicate with customers at scale. For growing startups like Intercom, it's natural for the load on datastores to grow on a weekly basis. The usual solution to this problem is to get a bigger box from AWS. But very soon you reach a point where a bigger box is not an option anymore. You will learn about the benefits of moving to such a datastore, the problems it introduced, and all about the new ability to scale that was not there before.
Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora delivers performance and availability similar to commercial databases, with up to five times the performance of MySQL, at one tenth the price of a commercial database.
In this webinar, learn how using Amazon Aurora allows you to: migrate existing MySQL databases for up to a 5x increase in throughput performance, automatically grow database volumes up to 64TB, automatically replicate 6 copies of data across 3 Availability Zones, and transparently fail over to ensure high availability.
Learning Objectives:
• How to migrate existing MySQL databases to Amazon Aurora
• How Amazon Aurora delivers up to 5x MySQL performance on similar hardware
• How to automatically grow database volumes up to 64TB
• How Amazon Aurora protects you data through automated replication and transparent failover
Who Should Attend:
• Security Administrators, IT auditors, Devops Engineers and Developers
An Introduction to Amazon Aurora Cloud-native Relational DatabaseDataLeader.io
DOWNLOAD THE PRESENTATION TO SEE THE ANIMATIONS PROPERLY. Slide 20 has embedded audio that explains the video on this slide.
Overview of "what makes Amazon Aurora" the database of choice for analytics? Watch this to find out! It's a brilliant & effective architectural change!
Presented at the Microsoft SQL Saturday on behalf of AWS
Microsoft DigiGirlz, Teaching Teens About Databases (Trick!)DataLeader.io
PLEASE DOWNLOAD SO ANIMATIONS WORK PROPERLY.
I created this slide presentation to teach teen girls about databases! How did they LOVE the demo? I made a boyfriend table, a girlfriend table, & a "who dated who" table!
It's now presented at all Microsoft DigiGirlz events (with "the hotties" being updated over time.)
How to Build Composite Applications with PRISMDataLeader.io
Created by Emil Stoychev (The Silverlight Show) from Bulgaria at my Microsoft "Pre-MIX!!" ROCK! event.
Topics covered include design & technical concepts in PRISM, composite apps vs. monolithic apps, prism is a set of guidelines not a framework.
Technical Concepts:
1. Bootstrapper is responsible for app initialization
2. CAL includes UnityBootstrapper
3. XAML
4. Configure RegionAdapter Mappings
5. Creating the Shell
6. Initializing Modules
7. Windows Presentation Foundation (WPF)
8. Module Options
1. Design Concepts: modularity, dependency injection container, multi-targeting
2. UI Composition: commanding, eventing
3. View Composition = View Injection = View Discovery
Microsoft Kinect & the Microsoft MIX11 Game PreviewDataLeader.io
Dave Drach, Managing Director for Emerging Businesses at Microsoft gave this presentation at my Microsoft Pre-MIX11 event ROCK!
1. Emerging Business Team Mission: Building Opportunity with VC & Startup Communities
2. Microsoft BizSpark: Global program designed to help accelerate the success of early stage startups
3. Building for the Kinect: the XBOX Dev Kit, integrates XBOX with PC Development environment
4. Kinect Sensor: a hybrid device with input devices, space control is done through a tilt monitor
5. XBOX Studio Overview
6. Human Depth Sensing: Object pattern similarity determines disparity
7. Kinect Depth Sensing: IR pattern similarity determines disparity
8. The Kinect Play Space
9. Player Framing
10. Tilting the Playing Field
11. Provided Data: Depth & Segmentation Map
12. Depth Map Format
13. Skeleton Tracking & Depth
14. Comparing Depth Map to Skeleton
15. Gaussian Filtering
16. Audio Overview
17. Kinect Audio Routing
18. Talking to Your Kinect
19. Biometric Data for Player Recognition
A Microsoft Silverlight User Group Starter Kit Made Available for Everyone to...DataLeader.io
PLEASE DOWNLOAD DECK SO THE ANIMATIONS WORK PROPERLY.
David Silverlight & Kim Schmidt presented this to the Phoenix Silverlight User Group prior to the Silverlight 4 release. The first slide has music, click it. It's the Black Eyed Peas singing "Let's Get it Started!"
The "Silverlight User Group Starter Kit" shown in the presentation was created by these rockstar developers: Kim Schmidt, David Silverlight, Victor Gaudioso, Cigdem Patlak, Colin Blair, John O'Keefe, Al Pascual, Jose Luis Latorre Millas, Edu Couchez, Caleb Jenkins, David Kelley, & Ariel Leroux. It's a fully-functional out-of-the-box user group site to customize.
Some functionality:
1. MVVM-based architecture
2. Streaming live presentations
3. Making use of OOB functionality
4. Remote interaction
5. RIA Services
6. Print & Webcam: Webcam takes picture, puts it on an entry badge you can print to be admitted to the meeting & can print the directions to the meeting
7. Login/Registration
8. Live Chat: ask questions of the presenter or selected person
9. Leave feedback
Architecture:
1. Microsoft Silverlight 4
2. Microsoft Expression Blend 4
3. RIA Services
4. Entity Framework
5. MVVM using SimpleMVVM
6. SQL Server Express
7. Membership using standard .NET Membership Provider
Registration Page: User Info, About You, Your Social Networks
Demo 1: Authentication & Social Networking
Demo 2: MVVM, RIA Services, & Print Event Pass
Demo 3: Video & Webcam Support
BLOOPERS AT THE END!
Building Applications with the Microsoft Kinect SDKDataLeader.io
David Silverlight's powerpoint presentation on the Kinect for Windows SDK. Feb. 29, 2012
NUI = Natural User Interface: it's an invisible interface, the content is the interface, removing the proxy, direct manipulation, gestural interfaces
Kinect for Windows SDK:
1. Kinect explorer
2. Installing & using the Kinect sensor
3. Setting up your dev environment
4. Skeletal tracking fundamentals
5. Working with depth data
6. Audio fundamentals
7. Camera fundamentals
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Analysis insight about a Flyball dog competition team's performanceroli9797
Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Adjusting primitives for graph : SHORT REPORT / NOTESSubhajit Sahu
Graph algorithms, like PageRank Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Enhanced Enterprise Intelligence with your personal AI Data Copilot.pdfGetInData
Recently we have observed the rise of open-source Large Language Models (LLMs) that are community-driven or developed by the AI market leaders, such as Meta (Llama3), Databricks (DBRX) and Snowflake (Arctic). On the other hand, there is a growth in interest in specialized, carefully fine-tuned yet relatively small models that can efficiently assist programmers in day-to-day tasks. Finally, Retrieval-Augmented Generation (RAG) architectures have gained a lot of traction as the preferred approach for LLMs context and prompt augmentation for building conversational SQL data copilots, code copilots and chatbots.
In this presentation, we will show how we built upon these three concepts a robust Data Copilot that can help to democratize access to company data assets and boost performance of everyone working with data platforms.
Why do we need yet another (open-source ) Copilot?
How can we build one?
Architecture and evaluation
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdfEnterprise Wired
In this guide, we'll explore the key considerations and features to look for when choosing a Trusted analytics platform that meets your organization's needs and delivers actionable intelligence you can trust.
1. Amazon Aurora Deep Dive
Video 2.0: Amazon Aurora’s Architecture
“Invent and Simplify”
By Kim Schmidt
Amazon RDS
https://www.amazon.jobs/principles *Image courtesy of AWS properties
2. Section 2.1: Amazon Aurora’s Architecture
“The Unique Architecture of Amazon Aurora, created by re-designing the RDB for TODAY where AWS exists”
*Image courtesy of AWS properties
3. Architectural High-Level View from Video Chapter 1.0: Video Section 1.3
https://youtu.be/Cnz6mSzca1Y?list=PL2EgRNBrbhC4lbTLy9cZnHG2JvEd8wZCr
*Image courtesy of AWS properties
8. Quiz Time for Section 2.1: Amazon Aurora’s Unique Architecture
Q: Which of the following is NOT true about Amazon Aurora’s Unique Architecture?
a) An Amazon Aurora cluster volume is SSD-based
b) Amazon Aurora’s storage latency can cause data drift between the Primary Instance and the Read Replicas
c) An Amazon Aurora cluster volume spans 3 Availability Zones with 2 storage nodes per Availability Zone, totaling 6 nodes
d) Amazon Aurora uses a service-oriented architecture applied to a database for storage and logging
A: b) Amazon Aurora’s storage latency can cause data drift between the Primary Instance and the Read Replicas
Explanation:
In MySQL, each replica has its own independent storage layer that has to be monitored & managed, and replication latency can cause data drift between the Master & the Replica depending on how much traffic goes through. Amazon Aurora’s storage is shared, so the Replicas can focus 100% on Reads, not Writes.
9. This Concludes
Section 2.1: Amazon Aurora’s Unique Architecture
Coming Up Next is Section 2.2:
Amazon Aurora’s Clusters and Replicas
17. Quiz Time for Section 2.2: Amazon Aurora’s Clusters and Read Replicas
Q: Which of the following is NOT true about Amazon Aurora’s Clusters and Read Replicas?
a) For Cross-Region Replication, you need to ensure that the VPC and the DB Subnet Groups exist in the target region
b) Multiple Read Replicas can be placed in the same Availability Zone
c) As of October, 2016, Amazon Aurora is available in 7 Regions
d) Ensure that binary logging in the Parameter Group for the cluster you’re launching a Cross-Region Aurora Replica from is set to “STATEMENT”
A: d) Ensure that binary logging in the Parameter Group for the cluster you’re launching a Cross-Region Aurora Replica from is set to “STATEMENT”
Explanation:
For Cross-Region Replication, the Parameter Group’s binary logging should be set to “MIXED”. The reason is that Amazon Aurora cross-region replication uses MySQL binary log replication to replay changes on the cross-region Read Replica DB cluster.
18. This Concludes
Section 2.2: Amazon Aurora’s Clusters
and Replicas
Coming Up Next is Section 2.3:
Amazon Aurora’s Endpoints and Replication
25. Quiz Time for Section 2.3: Amazon Aurora’s Endpoints and Replication
Q: What is the Recommended and Safest Endpoint to Connect to Your Database Cluster?
a) Instance Endpoints because they allow direct connection to the Replicas
b) Reader Endpoints to be safe during a failover
c) Reader Endpoints to ensure load-balancing across Read Replicas
d) Cluster Endpoints because your application will reconnect to the new Primary Instance once failover is complete
Answer: d) Cluster Endpoints because your application will reconnect
to the new Primary Instance once failover is complete
Explanation:
During a failover, Aurora repoints the Cluster Endpoint to the newly promoted Primary Instance as it replaces the failed instance, so Aurora continues serving requests at the Cluster Endpoint with minimal interruption of service.
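The failover behavior above is why applications should connect through the Cluster Endpoint and simply retry on connection errors. A minimal sketch in plain Python (the hostname and the `connect`/`query` callables are hypothetical, not an AWS API):

```python
import time

CLUSTER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"  # hypothetical

def run_with_failover_retry(connect, query, retries=5, backoff=1.0):
    """Retry through a failover: the Cluster Endpoint's DNS is repointed
    to the newly promoted Primary, so reconnecting is usually enough."""
    for attempt in range(retries):
        try:
            conn = connect(CLUSTER_ENDPOINT)     # always the cluster endpoint
            return query(conn)
        except ConnectionError:
            time.sleep(backoff * (attempt + 1))  # wait for failover/DNS to settle
    raise RuntimeError("failover did not complete within retry budget")

# Simulated usage: the first two connect attempts fail (mid-failover), third succeeds
calls = {"n": 0}
def fake_connect(host):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("primary down")
    return "connection"

result = run_with_failover_retry(fake_connect, lambda conn: "ok", backoff=0.0)
```

The key design point is that the application never hard-codes an instance endpoint; it only ever retries against the one stable cluster name.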
26. This Concludes
Section 2.3: Understanding Amazon Aurora’s
Endpoints and Replication
Coming Up Next is Section 2.4:
Understanding Amazon Aurora’s Logging, Storage,
Caching, and Indexing
30. Amazon Aurora’s Log-Structured Storage - 3
Records are APPENDED to storage – existing records are never updated
B-Tree indexes hold pointers to the latest version of a record
This means you’re going to have stale data, which is periodically removed through a garbage collection process
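The append-only model described above can be illustrated with a toy sketch (plain Python, not Aurora code): writes append new record versions, an index tracks the latest version of each key, and garbage collection drops the superseded versions.

```python
class AppendOnlyStore:
    """Toy model of log-structured storage: records are appended, never
    updated in place; an index points at the latest version of each key."""

    def __init__(self):
        self.log = []      # append-only list of (key, value) records
        self.index = {}    # key -> offset of the latest version in the log

    def put(self, key, value):
        self.log.append((key, value))          # append, never overwrite
        self.index[key] = len(self.log) - 1    # index now points at newest version

    def get(self, key):
        return self.log[self.index[key]][1]    # follow the index pointer

    def garbage_collect(self):
        """Drop stale versions the index no longer references."""
        live = [(k, v) for off, (k, v) in enumerate(self.log)
                if self.index.get(k) == off]
        self.log = live
        self.index = {k: off for off, (k, _) in enumerate(self.log)}

store = AppendOnlyStore()
store.put("a", 1)
store.put("a", 2)   # the old version of "a" becomes stale, not overwritten
store.put("b", 9)
stale_before = len(store.log)   # 3 records in the log, one of them stale
store.garbage_collect()
```

After garbage collection only the two live versions remain, which mirrors how stale record versions are periodically reclaimed.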
32. Some Benefits of Amazon Aurora’s Log-Structured Storage – (5)
*Image courtesy of AWS properties
33. Some Benefits of Amazon Aurora’s Log-Structured Storage – (6)
*Image courtesy of AWS properties
34. Quiz Time for Section 2.4: Understanding Amazon Aurora’s Logging, Storage, Caching & Indexing
Q: What is NOT true about Amazon Aurora’s Logging, Storage, Caching, & Indexing?
a) B-Trees don’t have primary keys; they have corresponding leaves
b) It has a garbage collection mechanism
c) The database itself is its own write-ahead log
d) Caching is outside the database process and can survive a database restart
A: a) B-Trees don’t have primary keys; they have corresponding leaves
Explanation:
B-Trees do have primary keys. They point to other sections that eventually point to leaf nodes.
35. This Concludes Section 2.4: Understanding
Amazon Aurora’s Logging, Storage,
Caching, and Indexing
Coming Up Next is Section 2.5:
Understanding Amazon Aurora’s I/O
43. I/O Traffic in Amazon Aurora’s Read Replicas
*Image courtesy of AWS properties
44. Quiz Time for Section 2.5: Understanding Amazon Aurora’s I/Os
Q: What isn’t a way that Amazon Aurora has improved I/O?
a) I/O Traffic is improved by sending smaller packets from the Primary Instance to the Replicas
b) Amazon Aurora storage nodes work synchronously from the peer-to-peer gossip stage onward, to get the heavy work done upfront
c) I/O is improved through the way database commits occur
d) I/O is improved through connection threading via epoll()
A: b) Amazon Aurora storage nodes work synchronously from the peer-to-peer gossip stage onward, to get the heavy work done upfront
Explanation:
For each storage node, the synchronous work is done once the update queue acknowledges back to the Primary Instance; the remaining steps are asynchronous, including the peer-to-peer gossiping.
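The commit behavior behind this answer can be sketched as a toy group commit against Aurora’s documented 4-of-6 write quorum: a batch of transactions is flushed together, each is acknowledged as soon as a quorum of storage nodes confirms it, and the stragglers finish asynchronously. Plain Python, illustrative only:

```python
def write_is_durable(acks, copies=6, quorum=4):
    """Aurora acknowledges a write once a quorum (4 of 6 storage node
    copies) confirms it; the remaining copies catch up asynchronously."""
    return acks >= quorum

def group_commit(pending_txns, acks_per_txn):
    """Toy group commit: flush a batch of transactions together and
    report which ones have reached the write quorum."""
    return [txn for txn in pending_txns if write_is_durable(acks_per_txn[txn])]

# t1 fully replicated, t2 at exactly quorum, t3 still below quorum
committed = group_commit(["t1", "t2", "t3"], {"t1": 6, "t2": 4, "t3": 3})
```

Here `t3` would be acknowledged later, once a fourth node confirms it, without ever blocking the batch.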
45. This Concludes Section 2.5: Understanding
Amazon Aurora’s I/O
Coming Up Next is Section 2.6: Understanding
Amazon Aurora’s Scalability and Elasticity
Architectures
50. Quiz Time for Section 2.6: Understanding Amazon Aurora’s Scalability and Elasticity Architectures
Q: What is the BEST takeaway from this short chapter on Amazon Aurora’s Scalability and Elasticity?
a) Scalability is NOT Elasticity
b) Elasticity is defined as the degree to which a system is able to adapt to workload changes
c) The goal of being scalable is to be able to be available to your customers as demand for your application grows.
d) Amazon Aurora auto-scales up to meet any peak in traffic and then auto-scales down when demand for your application is less, providing both great scalability and elasticity
A: d) Amazon Aurora auto-scales up to meet any peak in traffic and then auto-scales down when demand for your application is less, providing both great scalability and elasticity
Explanation:
Amazon Aurora is the first truly scalable and elastic Relational Database System, solving decades of limitations and frustrations. Answer “d” is the best answer to this specific question about Amazon Aurora’s scalability & elasticity, whereas answer “a” is a generalized statement.
51. This Concludes Section 2.6: Understanding
Amazon Aurora’s Scalability and Elasticity
Architectures
Coming Up Next is Section 2.7: Understanding
Amazon Aurora’s Fault Tolerance
and Restoration Architecture
57. Amazon Aurora’s Backup and Restore - 1
Automated backups are always enabled on Amazon S3
You can take database snapshots at any time with no performance impact
If the database fails, Amazon Aurora automatically attempts to recover your database in a healthy AZ with no data loss
In the unlikely event that your data is unavailable within Amazon Aurora storage, you can restore from a database snapshot or perform a “point-in-time” restore operation to a new instance
You can share snapshots with a different AWS account, upon which the recipient can use the snapshot to restore the database – useful when sharing the same database between various environments (production, dev/testing, staging, etc.)
You can choose to make your snapshots public
You can share manual snapshots with up to 20 AWS account IDs; beyond that, you can either make the snapshot public or request an increase in your quota
You can share snapshots in all Regions where Amazon Aurora is available – however, the sharing must be within the same Region as the account that shared it
58. Amazon Aurora’s Backup and Restore - 2
Amazon Aurora supports 2 kinds of replicas: Aurora Read Replicas, and MySQL Read Replicas based on MySQL’s binlog-based replication engine
You can set up cross-region Aurora Replicas, and you can add Replicas to that Cross-Region Replica
You can choose to promote the Cross-Region Replica to be the new Primary Instance; however, thereafter the pre-existing Cross-Region Replication will no longer exist
You can prioritize certain replicas as failover targets by assigning a Priority Tier – if replicas share the same tier, Aurora promotes the Replica that is largest in size
You can modify Priority Tiers at any time without triggering a failover
You can prevent certain Replicas from being promoted to the Primary Instance by assigning lower-priority Tiers (0-15, 0 = highest priority)
You can recover your data by creating a new Aurora DB cluster from the backup data that Aurora retains, or from a DB snapshot you saved
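The failover-target selection rule in the bullets above (lowest Priority Tier number wins; ties go to the largest replica) can be sketched as a one-liner in plain Python. The replica records here are hypothetical, not an AWS API response:

```python
def pick_failover_target(replicas):
    """Choose the replica Aurora would promote: lowest Priority Tier
    number wins (0 = highest priority); ties go to the largest replica."""
    return min(replicas, key=lambda r: (r["tier"], -r["size_gb"]))

replicas = [
    {"name": "replica-a", "tier": 1, "size_gb": 100},
    {"name": "replica-b", "tier": 0, "size_gb": 50},
    {"name": "replica-c", "tier": 0, "size_gb": 200},
]
# tier 0 beats tier 1; within tier 0 the larger replica-c wins
target = pick_failover_target(replicas)
```

Encoding the rule as a sort key makes it easy to predict failover behavior when planning tier assignments.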
60. Simulating Failures Using SQL - 1
To force crash an Amazon Aurora instance, use ALTER SYSTEM CRASH:
ALTER SYSTEM CRASH [ INSTANCE | DISPATCHER | NODE ];
To test disk congestion for an Aurora database cluster, use ALTER SYSTEM SIMULATE DISK CONGESTION:
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT
DISK CONGESTION BETWEEN minimum AND maximum MILLISECONDS
[ IN DISK index | NODE index ]
FOR INTERVAL quantity [ YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND ];
61. Simulating Failures Using SQL - 2
To simulate a disk failure for an Aurora database cluster, use the ALTER SYSTEM SIMULATE DISK
FAILURE:
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT DISK FAILURE
[ IN DISK index | NODE index ]
FOR INTERVAL quantity [ YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND];
To test an Aurora Replica failure, use ALTER SYSTEM SIMULATE READ REPLICA FAILURE:
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT
READ REPLICA FAILURE [ TO ALL | TO "replica name" ]
FOR INTERVAL quantity [ YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND ];
62. Restoring Data - 1
Create a new Aurora DB cluster from the backup data Aurora retains
From a manual snapshot
To determine the latest or earliest restorable time for a DB instance, look for the Latest Restorable Time or Earliest Restorable Time values on the RDS console. The latest restorable time for a DB cluster is the most recent point you can restore your cluster from, typically within 5 minutes of the current time. The earliest restorable time specifies how far back within the backup retention period you can restore your cluster volume
You can determine when the restore of a DB cluster is complete by checking the Latest Restorable Time or Earliest Restorable Time values. Both values return NULL until the restore operation is complete.
65. Quiz Time for Section 2.7: Understanding Amazon Aurora’s Fault Tolerance and Restoration Architectures
Q: What is NOT true about Amazon Aurora’s Fault Tolerance and Restoration?
a) You can simulate failure of any number of Amazon Aurora’s Read Replicas simultaneously
b) The lowest numbered Priority Tier will be promoted to the Primary Instance upon failure
c) The MariaDB ODBC/JDBC drivers automate the process of DNS propagation to the new Primary Instance upon failure
d) You can share Amazon Aurora snapshots among different AWS Accounts
A: a) You can simulate failures of any number of Amazon Aurora’s Read Replicas simultaneously
Explanation:
You can only simulate failures on “all” Read Replicas or “one” individual Read Replica at a time
66. This Concludes Section 2.7: Understanding
Amazon Aurora’s
Fault Tolerance and Restoration Architecture
Coming Up Next is Section 2.8:
Amazon Aurora’s Performance
68. Amazon Aurora’s Performance Enhancements
*Image courtesy of AWS properties
“Fast Insert”: LOAD DATA and INSERT INTO…SELECT…
You can monitor the following metrics to determine the effectiveness of “fast insert” for your database cluster:
• “aurora_fast_insert_cache_hits”: A counter that’s incremented when the cached cursor is successfully retrieved and verified
• “aurora_fast_insert_cache_misses”: A counter that’s incremented when the cached cursor is no longer valid & Aurora performs a normal index traversal
You can retrieve the current value of the fast insert metrics using this command:
mysql> show global status like 'Aurora_fast_insert%';
You’ll get an output similar to the following:
+---------------------------------+-----------+
| Variable_name                   | Value     |
+---------------------------------+-----------+
| Aurora_fast_insert_cache_hits   | 3597400   |
| Aurora_fast_insert_cache_misses | 436494748 |
+---------------------------------+-----------+
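From counters like the sample output above you can derive a hit ratio to judge whether "fast insert" is helping your workload. A trivial sketch in plain Python (the helper name is illustrative):

```python
def fast_insert_hit_ratio(hits, misses):
    """Share of inserts served from the cached cursor; a low ratio suggests
    'fast insert' isn't benefiting this workload's insert pattern."""
    total = hits + misses
    return hits / total if total else 0.0

# Counters taken from the sample SHOW GLOBAL STATUS output above
ratio = fast_insert_hit_ratio(3597400, 436494748)   # well under 1%
```

A ratio this low means almost every insert falls back to a normal index traversal, so the cached-cursor optimization is rarely applying.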
69. This Concludes Section 2.8: Understanding
Amazon Aurora’s Performance
Coming Up Next is Section 2.9:
Amazon Aurora’s Benchmarking
71. MySQL Sysbench Write and Read Performance Benchmarks
*Image courtesy of AWS properties
72. To Reproduce These Results…
*Image courtesy of AWS properties
https://d0.awsstatic.com/product-marketing/Aurora/RDS_Aurora_Performance_Assessment_Benchmarking_v1-2.pdf
78. How Are These Amazing Results Achieved?
*Image courtesy of AWS properties
79. New Amazon Aurora Update from Jeff Barr’s Blog
Lambda Function Invocation – The stored procedures you create within your Amazon Aurora databases can now invoke AWS Lambda functions
Load Data From S3 – You can now import data stored in an Amazon S3 bucket into a table in an Amazon Aurora database
https://aws.amazon.com/blogs/aws/
https://aws.amazon.com/blogs/aws/amazon-aurora-update-call-lambda-functions-from-stored-procedures-load-data-from-s3/
Jeff Barr
80. Quiz Time for Section 2.9: Amazon Aurora’s Benchmarking
Q: Which statement below BEST summarizes how Amazon Aurora’s Amazing Performance is Achieved?
a) The Aurora Team architected Amazon Aurora to achieve 107K writes/sec and 585K reads/sec!
b) The Aurora Team architected Amazon Aurora to achieve a higher number of network packets/sec without affecting performance
c) The Aurora Team architected Amazon Aurora to do less work more efficiently
d) The Aurora Team architected Amazon Aurora so that all data fits in both the data dictionary and buffer cache!
A: c) The Aurora Team architected Amazon Aurora to do less work more efficiently
Explanation:
In essence, the Amazon Aurora team reduced I/O in as many places as they could to provide incredible, predictable performance
81. This Concludes Section 2.9: Amazon Aurora’s
Benchmarking
Coming Up Next is the Last Section of 2.0,
Section 2.10: Understanding Amazon Aurora’s
Security Architecture
86. This Concludes
Section 2.10: Understanding
Amazon Aurora’s Security Architecture
And the 2.0 Video “Amazon Aurora’s Architecture”
Coming Up Next is
Section 3.0: Amazon Aurora’s
Configuration and Management,
In the Next Video
I ran out of time before I could finish any more Sections. Apologies!