by Mikhail Prudnikov, Sr. Solutions Architect, AWS
In-memory data stores, such as ElastiCache for Redis, enable applications where response times are measured in microseconds. We’ll look at how to design and deploy high-performance applications using ElastiCache, Aurora, DynamoDB, DAX, and Lambda, then try it ourselves in a hands-on lab. You’ll need a laptop with a Firefox or Chrome browser.
by Rich Alberth, Solutions Architect, AWS
Modernizing your database environment can bring many benefits, from avoiding technical debt to reducing expenses. AWS Database Migration Service makes modernization easy, enabling you to change database versions (and even database engines) and schema topologies while avoiding downtime. We’ll look at some models for modernization, then do a hands-on exercise to migrate and consolidate MySQL databases to Amazon Aurora. You’ll need a laptop with a Firefox or Chrome browser.
Building with AWS Databases: Match Your Workload to the Right Database (DAT30... - Amazon Web Services
We have recently seen some convergence of different database technologies. Many customers are evaluating heterogeneous migrations as their database needs have evolved or changed. Evaluating the best database to use for a job isn't as clear as it was ten years ago. We'll discuss the ideal use cases for relational and nonrelational data services, including Amazon ElastiCache for Redis, Amazon DynamoDB, Amazon Aurora, Amazon Neptune, and Amazon Redshift. This session digs into how to evaluate a new workload for the best managed database option. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
by Rich Alberth, Solutions Architect, AWS
If you need to query relationships between data, you need a graph database. We’ll take a close look at Amazon Neptune, explore the differences between property graphs and RDF, then do graph data queries using Apache TinkerPop. You’ll need a laptop with a Firefox or Chrome browser.
by Jon McCamant, Sr. Technical Delivery Manager & Mikhail Prudnikov, Sr. Solutions Architect, AWS
A decade ago, relational databases were used for nearly every use case. Today, new technologies are enabling a revolution in databases, creating new options for document, key:value, in-memory, search, and graph capabilities that do not use relational tables. We’ll discuss this revolution in database options and who is using them.
Deep Dive on PostgreSQL Databases on Amazon RDS (DAT324) - AWS re:Invent 2018 - Amazon Web Services
In this session, we provide an overview of the PostgreSQL options available on AWS, and do a deep dive on Amazon Relational Database Service (Amazon RDS) for PostgreSQL, a fully managed PostgreSQL service, and Amazon Aurora, a PostgreSQL-compatible database with up to 3x the performance of standard PostgreSQL. Learn about the features, functionality, and many innovations in Amazon RDS and Aurora, which give you the background to choose the right service to solve different technical challenges, and the knowledge to easily move between services as your requirements change over time.
by Taz Sayed, Sr. Technical Account Manager, AWS, and Marie Yap, Enterprise Solutions Architect, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
by J. Bako, Solutions Architect, AWS
Graph databases are purpose-built to store and navigate relationships. They have advantages for many use cases: social networking, recommendation engines, fraud detection, and others where you need to create relationships between data and quickly query these relationships. Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. We’ll discuss when you should use a graph database and look at how to use Neptune.
by Peter Dalton, Principal Consultant, AWS, and Taz Sayed, Sr. Technical Account Manager, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
The document discusses Amazon's approach to microservices and some of the data design challenges that arise. It describes how Amazon implements a microservices architecture with single-purpose services that communicate over APIs. It then covers some of the challenges of distributed computing with microservices like transactions across databases and eventual consistency. It provides recommendations for managing data with microservices like using correlation IDs, having each service own rollback logic, and leveraging event-driven architectures. Finally, it discusses challenges around data aggregation for reporting and choosing appropriate data stores.
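The correlation-ID recommendation above can be sketched in a few lines. This is an illustrative helper, not Amazon's implementation; the header name is a common convention, not a fixed standard:

```python
import uuid

def with_correlation_id(headers=None):
    """Return request headers carrying a correlation ID: reuse the
    inbound ID if one is present, so every service in a call chain
    logs the same ID; otherwise mint a fresh one at the edge."""
    headers = dict(headers or {})
    headers.setdefault("X-Correlation-Id", str(uuid.uuid4()))
    return headers
```

Each service passes the returned headers to its downstream API calls and includes the ID in every log line, so one request can be traced across many single-purpose services.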
DAT324_Expedia Flies with DynamoDB Lightning Fast Stream Processing for Trave... - Amazon Web Services
Building rich, high-performance streaming data systems requires fast, on-demand access to reference data sets to implement complex business logic. In this talk, Expedia discusses the architectural challenges the company faced, and how DAX + DynamoDB fits into the overall architecture and met their design requirements. You will also hear how DAX enabled Expedia to add caching to their existing applications in hours, a task that previously took much longer. Attendees will walk away with three key outputs: 1) Expedia’s overall architectural patterns for streaming data; 2) how they uniquely leverage DynamoDB, DAX, Apache Spark, and Apache Kafka to solve these problems; and 3) the value that DAX provides and how it enabled them to improve performance and throughput and reduce costs, all without having to write any new code.
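The "in hours" point rests on DAX being API-compatible with DynamoDB: the read path does not change, only the client object handed to it. A hedged sketch (table and key names here are hypothetical, not Expedia's):

```python
def lookup_reference(table, key_name, key_value):
    """Fetch one reference-data item by primary key. `table` can be a
    plain boto3 DynamoDB Table or a DAX-backed one: the get_item call
    is identical, which is why adding DAX needs no new read logic."""
    resp = table.get_item(Key={key_name: key_value})
    return resp.get("Item")  # None when the key is absent
```

With boto3 this would be called as `lookup_reference(boto3.resource("dynamodb").Table("airports"), "code", "SEA")`; the DAX Python client is designed as a drop-in for the same resource interface, so the function is untouched when caching is added.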
Data Transformation Patterns in AWS - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to accelerate common data transformations from a variety of data sources
- Learn how to efficiently orchestrate transformation jobs
- Learn best practices and methodologies in data preparation for analytics
by Andre Hass, Specialist Technical Account Manager, AWS
A closer look at Amazon Redshift, the fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. We'll show how to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution.
10 Hacks for Optimizing MySQL in the Cloud - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS
- Learn how to implement a well-designed and tested DR strategy using RDS MySQL Multi-AZ, Read Replicas, and more
- Learn how to utilize AWS Global Infrastructure benefits to build a well-architected MySQL database framework
Build on Amazon Aurora with MySQL Compatibility (DAT348-R4) - AWS re:Invent 2018 - Amazon Web Services
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. Join this session, and get started with the MySQL-compatible edition, discuss your existing application running on Aurora, or learn about recently announced features, such as Serverless or Parallel Query.
ElastiCache Deep Dive: Design Patterns for In-Memory Data Stores (DAT302-R1) ... - Amazon Web Services
In this session, we provide a behind the scenes peek to learn about the design and architecture of Amazon ElastiCache. See common design patterns with our Redis and Memcached offerings and how customers use them for in-memory data processing to reduce latency and improve application throughput. We review ElastiCache best practices, design patterns, and anti-patterns.
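One of the most common in-memory patterns behind these latency improvements is cache-aside (lazy loading). A minimal sketch, assuming a redis-py-style client (any object exposing `get` and `set(..., ex=ttl)` works); the names are illustrative:

```python
import json

def cache_aside_get(cache, key, loader, ttl_seconds=300):
    """Cache-aside (lazy loading): check the cache first, and only on
    a miss call the slower loader (e.g. a database query), then
    populate the cache with a TTL so stale entries expire."""
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)   # cache hit: skip the database
    value = loader(key)             # cache miss: hit the database
    cache.set(key, json.dumps(value), ex=ttl_seconds)
    return value
```

With ElastiCache for Redis, `cache` would typically be `redis.Redis(host=<cluster endpoint>)`; the anti-pattern the session warns about is the inverse (caching everything eagerly), which wastes memory on keys that are never read.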
by Darin Briskman, Database, Analytics, and Machine Learning, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into Amazon Redshift data warehouse; Data Lake services including Amazon EMR, Amazon Athena, & Amazon Redshift Spectrum; Log Analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
Best Practices for Migrating Oracle Databases to the Cloud - AWS Online Tech ... - Amazon Web Services
Learning Objectives:
- Learn how to migrate Oracle databases to the cloud
- Learn how to run additional components of the Oracle stack on AWS
- Get acquainted with other database options on AWS
by Sid Chauhan, Solutions Architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Data Warehousing and Data Lake Analytics, Together - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn how to discover and prepare your data lake for analytics
- See how you can query across your data warehouse and data lake without moving data
- Understand use cases that give you freedom to store data where you want and analyze it when you need it
The document discusses Amazon's use of AWS analytics technologies. It describes Amazon's enterprise data warehouse, which stores over 5 petabytes of integrated data from multiple sources. It faces challenges from rapid data growth and limited IT budgets. Amazon is addressing this by building a data lake called "Andes" that stores data in S3 and enables analytics using services like Redshift, EMR, and Athena. This provides scalability and choices for SQL, machine learning, and other analytic approaches.
by Mikhail Prudnikov, Sr. Solutions Architect, AWS
Elasticsearch is a popular open-source distributed search and analytics engine, widely used for log analytics and text search – and increasingly used as a primary data store. Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch. We’ll take a look at how to use Elasticsearch Service to manage these different use cases.
Aurora Serverless: Scalable, Cost-Effective Application Deployment (DAT336) -... - Amazon Web Services
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Aurora (MySQL-compatible edition) where the database automatically starts up, shuts down, and scales up or down capacity based on your application's needs. It enables you to run your database in the cloud without managing any database instances. Aurora Serverless is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. In this session, we explore these use cases, take a look under the hood, and delve into the future of serverless databases. We also hear a case study from a customer building new functionality on top of Aurora Serverless.
by Manish Mohite, Solutions Architect, AWS
How do you get data from your sources into your Redshift data warehouse? We'll show how to use AWS Glue and Amazon Kinesis Firehose to make it easy to automate the work to get data loaded.
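Because the Firehose-to-Redshift path ultimately runs a COPY from buffered S3 objects, records are commonly framed as newline-delimited JSON. A minimal sketch under that assumption; the stream and field names are hypothetical, and the `put_record` call requires AWS credentials and an existing delivery stream:

```python
import json

def to_firehose_record(row):
    """Serialize one row as newline-delimited JSON, a framing that
    lets Firehose concatenate records into S3 objects that Redshift
    can load with a JSON-format COPY."""
    return (json.dumps(row) + "\n").encode("utf-8")

def send_row(row, stream="orders-to-redshift"):
    """Push one row into a (hypothetical) delivery stream; Firehose
    buffers records and loads them into Redshift on its own schedule."""
    import boto3  # imported lazily: only needed for the live call
    boto3.client("firehose").put_record(
        DeliveryStreamName=stream,
        Record={"Data": to_firehose_record(row)},
    )
```

The newline matters: without it, Firehose concatenates records into one unparseable blob in S3.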
Building Serverless Analytics Pipelines with AWS Glue (ANT308) - AWS re:Inven... - Amazon Web Services
The document discusses AWS Glue and how it is used by Realtor.com for building serverless analytics pipelines. It provides an overview of AWS Glue, its features and improvements. It then discusses how Realtor.com uses a template-driven transformation approach with AWS Glue to process and transform raw data into structured data for analytics. Templates are implemented as Python code and allow generalized processing of different data formats and volumes.
ABD304-R - Best Practices for Data Warehousing with Amazon Redshift & Spectrum - Amazon Web Services
This document provides an overview and best practices for using Amazon Redshift and Redshift Spectrum for data warehousing. It covers the history and development of Redshift, key concepts like columnar storage, compression, sorting and distribution styles. It provides examples and recommendations for table design, workload management, and query optimization techniques.
by Ben Willett, Solutions Architect, AWS
How do you get data from your sources into your Redshift data warehouse? We'll show how to use AWS Glue and Amazon Kinesis Firehose to make it easy to automate the work to get data loaded.
by Ben Willett, Solutions Architect, AWS
Organizations use reports, dashboards, and analytics tools to extract insights from their data, monitor performance, and support decision making. To support these tools, data must be collected and prepared for use. We'll look at two approaches: the structured, centralized data repository of a Data Warehouse and the less-structured repository of a Data Lake. We'll compare these approaches, examine the services that support each, and explore how they work together.
Building Data Lakes That Cost Less and Deliver Results Faster - AWS Online Te... - Amazon Web Services
Learning Objectives:
- Get an inside look at Amazon S3 Select and how it helps to accelerate application performance
- Learn about how Amazon Glacier Select helps you extend your data lake to archival storage
- Understand how different applications can leverage these features
What's New for AWS Purpose Built, Non-relational Databases - DAT204 - re:Inve... - Amazon Web Services
In this session, Shawn Bice, VP of NoSQL and QuickSight, will cover what's new in AWS non-relational data services, such as Amazon DynamoDB, Amazon ElastiCache, and Amazon Elasticsearch. We will discuss how developers might select different data services to solve different aspects of an application, and demo scenarios showing which application use cases lend themselves well to which data services. If you’re a developer building massively scaled applications that require flexibility and consistent millisecond performance, and you're trying to understand which non-relational data service you might use, this is a great introductory session.
Building low latency apps with a serverless architecture and in-memory data I... - AWS Germany
In-memory data stores such as ElastiCache for Redis enable applications with response times measured in microseconds. Using Aurora, DynamoDB, DAX, Lambda, and ElastiCache, we explored how to design and deploy high-performance applications. Learn more here: https://aws.amazon.com/products/databases/
RET305 - Turbo Charge Your E-Commerce Site w/Amazon Cache and Search Solutions - Amazon Web Services
In this retail-focused workshop, we review and solve some of the common technical challenges that retailers face. These include scaling their backend databases to accommodate fluctuating demand and enabling full-text product search to achieve more relevant product search results. Bring your laptops, because after reviewing the proposed solutions, you can get hands-on with Amazon ElastiCache for Redis and see how easy it is to reduce the cost of and pressure on your backend database while dramatically improving performance. We also show how you can leverage the Amazon Elasticsearch Service for building a full-text search solution.
FINRA's Managed Data Lake: Next-Gen Analytics in the Cloud - ENT328 - re:Inve... - Amazon Web Services
FINRA faced challenges with their on-premises data infrastructure, including difficulty tracking data, limited scalability, and high costs. They migrated to a managed data lake on AWS to address these issues. This provided centralized data management with a catalog, separation of storage and compute, encryption, and cost optimization. It enabled faster analytics through Presto querying, machine learning model development, and reduced TCO by 30% compared to their on-premises environment. Lessons learned included embracing disruption, automating infrastructure, and treating infrastructure as code. FINRA is exploring additional AWS services like Athena, Lambda, and Step Functions to continue improving their analytics capabilities.
A decade ago, relational databases were used for nearly every use case. Today, new technologies are enabling a revolution in databases, creating new options for document, key-value, in-memory, search, and graph capabilities that do not use relational tables. We’ll discuss this revolution in database options and who is using them.
The document discusses non-relational databases and how they enable new types of applications. It provides information on key-value, document, and graph databases and how they differ from traditional relational databases. It also describes Amazon DynamoDB, Amazon ElastiCache, Amazon Elasticsearch Service, and Amazon Neptune as examples of non-relational database services on AWS.
In this session, we discuss the evolution of database and analytics services in AWS, the new database and analytics services and features we launched this year, and our vision for continued innovation in this space. We are witnessing an unprecedented growth in the amount of data collected, in many different forms. Storage, management, and analysis of this data require database services that scale and perform in ways not possible before. AWS offers a collection of database and other data services—including Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR—to process, store, manage, and analyze data. In this session, we provide an overview of AWS database and analytics services and discuss how customers are using these services today.
Tinder and DynamoDB: It's a Match! Massive Data Migration, Zero Down Time - D...Amazon Web Services
Are you considering a massive data migration? Do you worry about downtime during a migration? Dr. JunYoung Kwak, Tinder’s Lead Engineering Manager, will share his insights on how Tinder successfully migrated critical user data to DynamoDB with zero downtime. Join us to learn how Tinder leverages DynamoDB performance and scalability to meet the needs of their growing global user base.
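Zero-downtime migrations of the kind described above commonly pass through a dual-write phase: writes go to both the old and the new store while reads stay on the old one until the target is backfilled and verified. The sketch below illustrates only that general pattern; plain dicts stand in for the databases, and Tinder's actual production approach may differ in detail.

```python
# Illustrative dual-write sketch for a zero-downtime migration. Plain dicts
# stand in for the source and target databases; this shows the pattern only,
# not any specific production system.
old_store, new_store = {}, {}

def write(key, value):
    old_store[key] = value  # the old store remains the source of truth
    new_store[key] = value  # shadow write to the target (e.g. DynamoDB)

def read(key):
    return old_store.get(key)  # reads cut over only after verification

write("user:1", {"name": "Ada"})
stores_match = old_store == new_store  # both stores now hold the same data
```

Once the shadow writes and a backfill have brought the target into agreement with the source, reads can be switched over with no downtime.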
The document discusses migrating big data workloads from on-premises environments to AWS. It describes deconstructing current workloads, identifying challenges with on-premises architectures, and how to migrate components to AWS services like Amazon EMR and Amazon S3. The document also shares the experience of Vanguard migrating their big data workload to AWS.
Database Week at the San Francisco Loft: ElastiCache & Redis
Redis is an open source, in-memory data store that delivers sub-millisecond response times enabling millions of requests per second to power real-time applications. It can be used as a fast database, cache, message broker, and queue. Amazon ElastiCache delivers the ease-of-use and power of Redis along with the availability, reliability, scalability, security, and performance suitable for the most demanding applications. We’ll take a close look at Redis and how to use it to power different use cases.
Speaker: Ben Willett - Sr. Solutions Architect, AWS
Speaker: Samir Karande - Sr. Manager, Solutions Architecture, AWS
Speakers:
Smitty Weygant - Solutions Architect, AWS
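The abstract above calls out Redis's use as a message broker and queue; that use case rests on its list commands. The sketch below is illustrative only: a `deque` mimics the semantics of LPUSH and RPOP, whereas against ElastiCache you would issue the same operations through a Redis client.

```python
from collections import deque

# Illustrative only: a deque mimics the semantics of the Redis list
# commands (LPUSH at the head, RPOP from the tail) that let Redis serve
# as a simple FIFO work queue. Against ElastiCache you would issue the
# same operations through a Redis client instead.
queue = deque()

def lpush(q, item):
    q.appendleft(item)  # like Redis LPUSH: add to the head of the list

def rpop(q):
    return q.pop() if q else None  # like Redis RPOP: remove from the tail

lpush(queue, "job-1")
lpush(queue, "job-2")
first_job = rpop(queue)  # jobs come out oldest-first (FIFO)
```

Pushing at one end and popping at the other yields first-in, first-out delivery, which is what makes a Redis list usable as a work queue.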
This document discusses Amazon ElastiCache, a fully managed in-memory cache and database service. It provides Redis and Memcached compatible data stores that can be used for fast databases, caches, and other use cases. The document outlines key features of ElastiCache like security, high availability, scalability, and common usage patterns. It also provides an example of how GE uses ElastiCache Redis to power its Predix platform and make it easy for developers to create Redis clusters.
Connecting the dots - How Amazon Neptune and Graph Databases can transform yo...Amazon Web Services
This document discusses Amazon Neptune, a fully managed graph database service. It provides an overview of graph databases and their advantages over traditional databases for modeling connected data. It then describes Amazon Neptune's key features, like automatic scaling, high availability across Availability Zones, integration with open standards like Gremlin and SPARQL, and ease of use on AWS. Examples are given showing how to model and query graph data using Gremlin and SPARQL. Finally, it discusses Amazon Neptune's architecture and roadmap for general availability later in 2018.
We have recently seen some convergence of different database technologies. Many customers are evaluating heterogeneous migrations as their database needs have evolved or changed. Evaluating the best database to use for a job isn’t as clear as it was ten years ago. In this session, we discuss the ideal use cases for relational and nonrelational data services, including Amazon ElastiCache for Redis, Amazon DynamoDB, Amazon Aurora, and Amazon Redshift. This session digs into how to evaluate a new workload for the best managed database option.
The document discusses non-relational databases and their advantages over traditional relational databases for modern cloud applications. It introduces Amazon DynamoDB, Amazon Elasticsearch Service, Amazon ElastiCache, and Amazon Neptune as fully managed non-relational database services on AWS. For each service, it provides an overview of features and highlights recent updates and enhancements. It also discusses how different non-relational database models like key-value, document, and graph databases can be used together based on data needs in a polyglot persistence approach.
BDA306 - Building a Modern Data Warehouse: Deep Dive on Amazon Redshift - Amazon Web Services
In this session, we take a deep dive on Amazon Redshift architecture and the latest performance enhancements that give you faster insights into your data. We also cover Redshift Spectrum, a feature of Redshift that enables you to analyze data across Redshift and your Amazon S3 data lake to deliver unique insights not possible by analyzing independent data silos. A customer is joining us to share how they were able to extend their data warehouse to their data lake to encompass multiple data sources and data formats. This modern architecture helps them tie together data sources to get actionable insights across their business units.
ABD307 - Deep Analytics for Global AWS Marketing Organization - Amazon Web Services
To meet the needs of the global marketing organization, the AWS marketing analytics team built a scalable platform that allows the data science team to deliver custom econometric and machine learning models for end user self-service. To meet data security standards, we use end-to-end data encryption and different AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, Amazon EMR with Apache Spark and Auto Scaling. In this session, you see real examples of how we have scaled and automated critical analysis, such as calculating the impact of marketing programs like re:Invent and prioritizing leads for our sales teams.
Nearly everything in IT - servers, applications, websites, connected devices, and other things - generate discrete, time-stamped records of events called logs. Processing and analyzing these logs to gain actionable insights is log analytics. We'll look at how to use centralized log analytics across multiple sources with Amazon Elasticsearch Service.
Level: Intermediate
Speaker: Karan Desai - Solutions Architect, AWS
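The core of log analytics as described above is parsing time-stamped records and aggregating them. A small sketch of that step follows; the log format and sample lines are illustrative, not any particular service's output, though at scale this is the kind of aggregation a pipeline feeding Amazon Elasticsearch Service performs.

```python
import re
from collections import Counter

# Parse time-stamped log lines into fields, then aggregate by level.
# The format below (date, time, level, message) is illustrative only.
LOG_LINE = re.compile(r"^(?P<date>\S+) (?P<time>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

logs = [
    "2018-11-27 10:00:01 INFO request served",
    "2018-11-27 10:00:02 ERROR upstream timeout",
    "2018-11-27 10:00:03 INFO request served",
]

matches = [m for line in logs if (m := LOG_LINE.match(line))]
levels = Counter(m.group("level") for m in matches)  # e.g. error-rate counts
```

Counting events per severity level is one of the simplest actionable insights; the same parsed fields also support time-bucketed queries and full-text search once indexed.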
A Look Under the Hood – How Amazon.com Uses AWS Services for Analytics at Mas...Amazon Web Services
Amazon’s consumer business continues to grow, and so does the volume of data and the number and complexity of the analytics done in support of the business. In this session, we talk about how Amazon.com uses AWS technologies to build a scalable environment for data and analytics. We look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel, scalable compute engines such as Amazon EMR and Amazon Redshift.
How to build forecasting services using ML and deep learning algorithms - Amazon Web Services
Forecasting is an important process for many companies and is used in many contexts to accurately predict a product's growth and distribution, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then apply an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to build Big Data applications the Serverless way - Amazon Web Services
The variety and volume of data created every day is growing ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and Serverless services in particular, let us break through these limits.
Let's see, then, how to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. In that period we learned how changing our approach to application development let us dramatically increase agility and release velocity, and ultimately allowed us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Learning services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, including through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of your EC2 instances - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; until now they have often involved manual activities, occasionally leading to application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows workloads - Amazon Web Services
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event on Wednesday, October 14th from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, compounded by the performance risks that can arise when moving applications out of on-premises data centers.
Create your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-type functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will find out how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering a great end-user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths to debunk - Amazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, compounded by the performance risks that can arise when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive deeper into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017 to connect ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
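In the stack described above, the backend logic is a Lambda function invoked through API Gateway. The sketch below shows a minimal handler following the API Gateway proxy-integration convention; the hard-coded greeting is illustrative, and a real handler would read items from DynamoDB via boto3 instead.

```python
import json

# Minimal sketch of a Lambda handler behind an API Gateway proxy
# integration. A real handler would fetch data from DynamoDB via boto3
# rather than returning the hard-coded greeting below.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoking the handler directly with a synthetic event, as a unit test would:
response = handler({"queryStringParameters": {"name": "builder"}}, None)
```

Because the handler is a plain function of an event dict, it can be exercised locally without any AWS infrastructure, which keeps the serverless stack easy to test.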
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.