(DAT303) Oracle on AWS and Amazon RDS: Secure, Fast, and Scalable - Amazon Web Services
AWS and Amazon RDS provide advanced features and architectures that enable graceful migration, high performance, elastic scaling, and high availability for Oracle database workloads. Learn best practices for realizing the benefits of the cloud while reducing costs, by running Oracle on AWS in a variety of single- and multi-instance topologies. This session teaches you to take advantage of features unique to AWS and Amazon RDS to free your databases from the confines of the conventional data center.
Deep Dive on Amazon EC2 Instances - January 2017 AWS Online Tech Talks - Amazon Web Services
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We will also provide an overview of the newest instances announced at re:Invent, including the latest generation of Memory Optimized (R4) and Compute Optimized (C5) instances, new Storage Optimized High I/O I3 instances, and new larger T2 instances. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Learning Objectives:
• Get an overview of the EC2 instance platform, key platform features, and the concept of instance generations
• Learn about the latest generation of Amazon EC2 Instances
• Learn best practices around instance selection to optimize performance
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
NEW LAUNCH! Introducing PostgreSQL compatibility for Amazon Aurora - Amazon Web Services
After we launched Amazon Aurora, a cloud-native relational database with region-wide durability, high availability, fast failover, up to 15 read replicas, and up to five times the performance of MySQL, many of you asked us whether we could deliver the same features - but with PostgreSQL compatibility. We are now delivering a preview of Amazon Aurora with this functionality: we have built a PostgreSQL-compatible edition of Amazon Aurora, sharing the core Amazon Aurora innovations with the object-oriented capabilities, language interfaces, JSON compatibility, ANSI SQL:2008 compliance, and broad functional richness of PostgreSQL. Amazon Aurora will provide full PostgreSQL compatibility while delivering more than twice the performance of the community PostgreSQL database on many workloads. In this session, we will discuss the newest addition to Amazon Aurora in detail.
Amazon RDS for Microsoft SQL: Performance, Security, Best Practices (DAT303) ... - Amazon Web Services
Come learn about architecting high-performance applications and production workloads using Amazon RDS for SQL Server. Understand how to migrate your data to an Amazon RDS instance, apply security best practices, and optimize your database instance and applications for high availability.
Amazon EC2 changes the economics of computing and provides you with complete control of your computing resources. It is designed to make web-scale cloud computing easier for developers. In this session, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies. We will also discuss tools and best practices that will help you build failure resilient applications that take advantage of the scale and robustness of AWS regions.
What’s New in Amazon RDS for Open-Source and Commercial Databases - Amazon Web Services
In the past year, Amazon Relational Database Service has continued to expand functionality, scalability, availability and ease of use for all supported database engines (PostgreSQL, MySQL, MariaDB, Oracle and Microsoft SQL Server). We’ll take a close look at RDS use cases and new capabilities, splitting the time between open-source and commercial database engines.
Consolidate MySQL Shards Into Amazon Aurora Using AWS Database Migration Serv... - Amazon Web Services
If you’re running a MySQL database at scale, there’s a good chance you’re sharding your database deployment. Sharding is a useful way to increase the scale of your deployment, but it has drawbacks like higher costs, high administration overhead and lower elasticity. It’s harder to grow or shrink a sharded database deployment to match your traffic patterns. In this session, we will discuss and demonstrate how to use AWS Database Migration Service to consolidate multiple MySQL shards into an Amazon Aurora cluster to reduce cost, improve elasticity and make it easier to manage your database.
Learning Objectives:
Learn how to scale your MySQL database at reduced cost and higher elasticity, by consolidating multiple shards into one Amazon Aurora cluster.
Deploying a Disaster Recovery Site on AWS: Minimal Cost with Maximum Efficiency - Amazon Web Services
In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with Disaster Recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your Disaster Recovery plan and discuss example scenarios showing how to recover from a disaster.
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current-generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances. Other Compute options such as ECS and Lambda for processing in the cloud will be introduced and explained at a high level.
Amazon Redshift is a fast, petabyte-scale, fully managed data warehouse that makes it simple and cost-effective to analyze all of your data with your existing business intelligence tools. This session examines Redshift's distributed processing architecture, which enables massively parallel processing, and walks through hands-on best practices for integrating and loading data from a variety of data sources and formats.
Speaker: 김상필, Solutions Architect, Amazon Web Services
Database Migration – Simple, Cross-Engine and Cross-Platform Migration - Amazon Web Services
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases.
AWS Webcast - High Availability SQL Server with Amazon RDS - Amazon Web Services
Amazon RDS for Microsoft SQL Server makes it easy to set up, operate, and scale SQL Server deployments in the cloud. Amazon RDS Multi-AZ deployments provide enhanced availability and durability, making them a natural fit for production database workloads.
Review this webinar to learn more about this easy way to achieve highly available operation of SQL Server. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Amazon RDS performs an automatic failover to the standby, with no administrator intervention required, so that your application can resume database operations as soon as the failover is complete.
This session gives a brief introduction to Amazon DynamoDB, the NoSQL database service, and covers the newly announced time-to-live (TTL) data management feature and the new in-memory caching capability (Amazon DynamoDB Accelerator).
Speaker: Pranav Nambiar, Lead Product Manager for Amazon DynamoDB, Amazon Web Services
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability, and durability than was previously available using conventional monolithic database techniques. In this session, we dive deep into some of the key innovations behind Amazon Aurora, discuss best practices and migration from other databases to Amazon Aurora, and share early customer experiences from the field.
Big data is now a familiar concept, but how to apply it to your business and get the most out of it still calls for careful thought. Storing, analyzing, and visualizing valuable data easily is an important step toward gaining business insight.
This session introduces how to build simpler and faster big data analytics services using the wide range of data analysis tools AWS provides, such as AWS Elastic MapReduce, Amazon Redshift, and Amazon Kinesis.
AWS provides the AWS Database Migration Service and the AWS Schema Conversion Tool to help customers move their existing databases to the cloud with ease. This session uses hands-on exercises to show how to migrate an Oracle database to Amazon Aurora with these tools.
Speakers: John Winford, Senior Technical Manager, Amazon Web Services
김상필, Solutions Architect, Amazon Web Services
AWS re:Invent 2016: Workshop: Using the Database Migration Service (DMS) for ... - Amazon Web Services
AWS Database Migration Service (DMS) can help you do much more. You can use DMS to consolidate multiple databases into a single database or split a single database into multiple databases. You can also use DMS for data distribution to multiple systems. For both of these use cases your source database can be outside of AWS (on premises) or in AWS (EC2 or RDS). DMS can also be used for near real-time replication of data. Replication can be done to one or more targets within AWS, in the same region or across regions. You can also replicate data from databases within AWS to databases outside of AWS. In this session we will discuss all these usage patterns and help you try them out yourselves.
Prerequisites:
You should have good database knowledge and at least some experience with Amazon RDS or Amazon Aurora.
Participants should have an AWS account established and available for use during the workshop.
Please bring your own laptop.
February 2016 Webinar Series - Introduction to AWS Database Migration Service - Amazon Web Services
AWS Database Migration Service helps you migrate databases to AWS easily and securely with minimal downtime to the source database. AWS Database Migration Service can be used for both homogeneous and heterogeneous database migrations, from on-premises to RDS or EC2 as well as from EC2 to RDS.
In this webinar, we will provide an introduction to AWS Database Migration Service and go through the details of how you can use it today for your database migration projects. We will also discuss the AWS Schema Conversion Tool, which helps you convert your database schema and code for cross-database (heterogeneous) migrations.
Learning Objectives:
Understand what AWS Database Migration Service is
Learn how to start using AWS Database Migration Service
Understand homogeneous and heterogeneous migrations
Learn about the AWS Schema Conversion Tool
Who Should Attend:
IT Managers, DBAs, Solution Architects, Engineers and Developers
Migrating Your Oracle Database to PostgreSQL - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Learn about the capabilities of the PostgreSQL database
- Learn about PostgreSQL offerings on AWS
- Learn how to migrate from Oracle to PostgreSQL with minimal disruption
Migrate from SQL Server or Oracle into Amazon Aurora using AWS Database Migra... - Amazon Web Services
As organizations look to improve application performance and decrease costs, they are increasingly looking to migrate from commercial database engines into open source. Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. In this webinar, we will cover how to use Database Migration Service (DMS) to go about the migration, and how to use the schema conversion tool to convert schemas into Amazon Aurora. We’ll then follow with a quick demo of the entire process, and close with tips and best practices.
Learning Objectives:
Understand how the AWS Database Migration Service can help you migrate from a commercial database into Amazon Aurora to improve application performance and decrease database costs.
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents.
Migrating Databases to AWS for Business Critical Applications and Analytics - Amazon Web Services
Migrating business critical applications to a new environment can be difficult and expensive. The short duration of maintenance windows often dictates the use of costly tools to perform change data capture (CDC) from the source to target databases so that the switch over process happens as quickly as possible. Amazon Web Services recently introduced the Database Migration Service (DMS) that supports the migration of databases from on-premises to the cloud with CDC support. This session will explain how DMS provides a simple and cost effective way to migrate business critical applications to Amazon Web Services. It will also cover how DMS enables new workloads for analytics, dev/test and heterogeneous database migrations.
Migrate from Oracle to Amazon Aurora using AWS Schema Conversion Tool & AWS D... - Amazon Web Services
• Understand the issues with commercial database pricing and licensing.
• Learn about the benefits of Amazon Aurora for improving performance and decreasing costs.
• See how AWS Database Migration Service helps with your migration.
• See how AWS Schema Conversion Tool makes conversions simple and quick.
If you’re looking to improve application performance and availability and decrease database costs, it’s time to replace your expensive Oracle databases with an open-source compatible solution. Amazon Aurora is a MySQL-compatible relational database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. You'll learn how to use the AWS Database Migration Service to migrate your data with minimal downtime, and how the AWS Schema Conversion Tool converts your Oracle schemas and procedural code into Amazon Aurora. We’ll follow with a quick demo of the entire process.
Amazon RDS makes it easy to set up, operate, and scale Oracle Database deployments in the cloud. In this webinar, we'll discuss practical ways of migrating applications to Amazon RDS for Oracle. Customer case studies will illustrate how customers moved to Amazon RDS for Oracle and how they benefited.
Migrating Databases to the Cloud with AWS Database Migration Service (DAT207)... - Amazon Web Services
Learn how to convert and migrate your relational databases, nonrelational databases, and data warehouses to the cloud. AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) can help with homogeneous migrations as well as migrations between different database engines, such as Oracle or SQL Server, to Amazon Aurora. Hear from Verizon about how they intend to migrate critical databases to Amazon Aurora with PostgreSQL compatibility from their current on-premises Oracle databases, and learn how they intend to deal with challenges such as conversion of legacy code and complex data types, supporting business resiliency, and maintaining data synchronization during the transition phase.
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with ... - Amazon Web Services
Learn how you can migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases using AWS Database Migration Service. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents. Best of all, we spend most of the time demonstrating the product and showing use cases designed to help your business.
Database Migration Service Through Case Studies: A Tool for Database and Data Migration, Consolidation, Separation, and Analysis - Speaker: ... - Amazon Web Services Korea
Database Migration Service (DMS) supports migrating a variety of databases beyond relational engines. Drawing on real customer cases, this session looks at how DMS is used to migrate, consolidate, and split databases, and at the role it also plays in data ingest for analytics.
Amazon ElastiCache - Fully Managed, Redis & Memcached Compatible Service (Lev... - Amazon Web Services Korea
Amazon ElastiCache is a fully managed service compatible with Redis and Memcached that improves the performance of modern applications in real time at optimal cost. Learn ElastiCache best practices for getting the best performance and optimizing your service.
Internal Architecture of Amazon Aurora (Level 400) - Speaker: 정달영, APAC RDS Speci... - Amazon Web Services Korea
Amazon Aurora is a relational database built for the cloud. Aurora delivers the performance and availability of commercial databases along with the simplicity and cost-effectiveness of open-source databases. Aimed at advanced Aurora users, this session covers Aurora's internal architecture and performance optimization.
[Keynote] Choosing AWS Databases Wisely - Speaker: 강민석, Korea Database SA Manager, WWSO, A... - Amazon Web Services Korea
For a long time, relational databases were the most widely used and appeared in almost every application. That made choosing a database for an application architecture easier, but it limited the kinds of applications you could build. A relational database is like a Swiss Army knife: it can do many things, but it is not perfectly suited to any particular task. With the arrival of cloud computing, it became possible to build more elastic and scalable applications economically, which changed what is technically feasible and led to the rise of purpose-built databases. Developers no longer have to default to a relational database; they can carefully consider an application's requirements and choose a database that fits them.
Demystify Streaming on AWS - Speaker: 이종혁, Sr Analytics Specialist, WWSO, AWS :::... - Amazon Web Services Korea
Real-time analytics is a growing set of use cases among AWS customers. Join this session to learn how streaming data technologies let you analyze data immediately, move data between systems in real time, and get actionable insights faster. It covers common streaming data use cases, the steps to easily enable real-time analytics in your business, and how AWS helps you use AWS streaming data services such as Amazon Kinesis.
Amazon EMR - Enhancements on Cost/Performance, Serverless - Speaker: 김기영, Sr Anal... - Amazon Web Services Korea
Amazon EMR is a managed service that makes it easy to run analytics applications using open-source frameworks such as Apache Spark, Hive, Presto, Trino, HBase, and Flink. The Amazon EMR runtimes for Spark and Presto include optimizations that deliver more than twice the performance of open-source Apache Spark and Presto. Amazon EMR Serverless is a new deployment option for Amazon EMR that lets data engineers and analysts run petabyte-scale analytics in the cloud easily and cost-effectively. Join this session to explore Amazon EMR and EMR Serverless through concepts, design patterns, and live demos, and to see how easy it is to run Spark and Hive workloads and to use Amazon EMR integrations with Amazon EMR Studio and Amazon SageMaker Studio.
Amazon OpenSearch - Use Cases, Security/Observability, Serverless and Enhance... - Amazon Web Services Korea
Learn more about the latest Amazon OpenSearch capabilities, including easily ingesting log and metric data, using the OpenSearch search APIs, and building visualizations with OpenSearch Dashboards. Learn about OpenSearch observability features for debugging application issues, and how Amazon OpenSearch Service lets you focus on your search or monitoring problems instead of worrying about infrastructure management.
Enabling Agility with Data Governance - Speaker: 김성연, Analytics Specialist, WWSO,... - Amazon Web Services Korea
Data governance is the process of managing data across its entire lifecycle to ensure it is accurate and complete and that the people who need it can access it. Join this session to learn how AWS provides comprehensive data governance across its analytics services, from data preparation and integration to data access, data quality, and metadata management. Learn more about streaming on AWS.
Amazon Redshift Deep Dive - Serverless, Streaming, ML, Auto Copy (New feature... - Amazon Web Services Korea
Join this session for a deep dive into what's new in Amazon Redshift. It is a good fit for anyone who wants to learn, through details and demos, about new Amazon Redshift capabilities such as data sharing, Amazon Redshift Serverless, Redshift streaming ingestion, Redshift ML, and auto-copy.
From Insights to Action, How to build and maintain a Data Driven Organization... - Amazon Web Services Korea
Data is the foundation of innovation and transformation. The innovation that drives business transformation is not a point-in-time strategy or solution; it is an iterative, collective plan for growth. Companies that take this approach to innovation usually put data at the heart of their strategy and business culture. Developing this approach requires leaders to treat data as an organizational asset and to empower the organization to use data for better business outcomes. Learn how AWS and Amazon have built mechanisms that use data and analytics to create scalable business efficiencies and solve customers' most complex problems.
[Keynote] Accelerating Business Outcomes with AWS Data - Speaker: Saeed Gharadagh... - Amazon Web Services Korea
Data plays a pivotal role in digital transformation focused on the success of end customers. Every company is using data as an asset to drive use cases and solve demanding outcomes. The power of AWS cloud technologies and analytics solutions lets customers accelerate their transformation journeys. This session covers how enterprise customers harness the power of data in the cloud to achieve their transformation goals and deliver the outcomes they need.
LG전자 (LG Electronics) - Ensuring Stable Database Upgrades with Amazon Aurora and RDS Blue/Green Deployments - Speaker: 이은경 책임, L... - Amazon Web Services Korea
LG ThinQ is LG Electronics' platform brand spanning its home appliances and services, a single app for convenient control, smart care, and smart shopping. Because the ThinQ platform is offered as a global service, the team needed to minimize maintenance time and service impact, so blue/green deployments, which require no application redeployment during a database version upgrade, were the best choice.
KB국민카드 (KB Kookmin Card) - A Cloud-Based Analytics Platform Innovation Journey - Speakers: 박창용 과장, Data Strategy Division, AI Innovation Department, KB Card │ 강병억, Soluti... - Amazon Web Services Korea
On-premises analytics platforms have limits in many respects, including the cost of expanding capacity, the cost of managing resources, and the lead time to procure and configure new resources. KB Kookmin Card designed and adopted a cloud-based analytics platform that overcomes the limits of its existing platform while creating synergy with it. This case study shares KB Kookmin Card's data innovation journey and know-how.
SK Telecom - The TANGO Network Management Project's Journey to Open-Source Databases - Speaker: 박승전, Project Manager, ... - Amazon Web Services Korea
SK Telecom's network management project, TANGO, was built and operated on Oracle. Growing numbers of users and growing data created the need for flexible, cost-efficient infrastructure, which led the team to evaluate and carry out a move to the cloud. This session covers the evaluation, preparation, and execution of TANGO's cloud adoption, along with lessons learned and future plans.
코리안리 (Korean Re) - Building a Data Analytics Platform: The Beginning and the Challenges - Speaker: 김석기 그룹장, Data Business Center, 메가존클라우드 (MegazoneCloud) ::: AWS ... - Amazon Web Services Korea
In 2022, Korean Re was simultaneously migrating its core business systems (transactional and informational systems) to the AWS cloud and building an application for profit-and-loss analysis on AWS. After the migration it therefore also needed a data analytics platform with interoperability and compatibility across those systems. To select a platform suited to Korean Re's IT environment, the team ran proofs of concept comparing an AWS-native analytics platform with third-party analytics platforms (Cloudera, Databricks) and ultimately chose the AWS-native analytics platform. Working with MegazoneCloud, Korean Re built the AWS-based data analytics platform over four months starting in October 2022 (three months of build plus one month of stabilization and training) and continues to expand how it is used.
LG이노텍 (LG Innotek) - Transforming a Data Analytics Platform with Amazon Redshift Serverless - Speaker: 유재상 선임, LG이노... - Amazon Web Services Korea
LG Innotek is a global materials and components company leading the world market, and it uses Amazon Redshift as the core service of its data analytics platform. To handle continuous data growth and the need for a more flexible architecture as workloads expand, the team adopted Amazon Redshift Serverless, announced by AWS in 2022; this session offers a look at that real-world, cost-optimized architecture improvement.
2. What to Expect from the Session
• Learn about migrating databases with minimal downtime to Amazon RDS, Amazon Redshift and Amazon Aurora
• Discuss database migrations to same and different engines
• Learn about converting schemas and stored code from Oracle and SQL Server to MySQL and Aurora
• One more thing~
3. Embracing the cloud demands a cloud data strategy
• How will my on-premises data migrate to the cloud?
• How can I make it transparent to my customers?
• Afterwards, how will on-premises and cloud data interact?
• How can I integrate my data assets within AWS?
• Can I get help moving off of commercial databases?
4. Historically, Migration = Cost, Time
• Commercial Migration / Replication software
• Complex to set up and manage
• Legacy schema objects, PL/SQL or T-SQL code
• Application downtime
6. Purposes of data migration
One-time data migration
Between on premises and AWS
Between Amazon EC2 and Amazon RDS
Ongoing Replication
Replicate on premises to AWS
Replicate AWS to on premises
Replicate OLTP to BI
Replicate for query offloading
7. Ways to migrate data
Bulk Load
AWS Database Migration Service
Oracle Import/Export
Oracle Data Pump Network Mode
Oracle SQL*Loader
Oracle Materialized Views
CTAS / INSERT over dblink
Ongoing Replication
AWS Database Migration Service
Oracle Data Pump Network Mode
Oracle Materialized Views
Oracle GoldenGate
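As one concrete illustration of the bulk-load options above, the CTAS / INSERT over dblink approach can be scripted from any Oracle client. The sketch below is a minimal example, assuming a database link named ONPREM_LINK already exists on the target (Amazon RDS or EC2) Oracle database and points back to the on-premises source; the connection string and table names are hypothetical.

import cx_Oracle

# Hypothetical connection to the target Oracle database on RDS or EC2.
conn = cx_Oracle.connect("admin", "secret", "target-host.example.com:1521/ORCL")
cur = conn.cursor()

# Bulk-copy a table over the pre-created database link (CTAS).
cur.execute("CREATE TABLE orders AS SELECT * FROM orders@ONPREM_LINK")

# For a table that already exists on the target, INSERT over the same link works too.
cur.execute("INSERT INTO order_items SELECT * FROM order_items@ONPREM_LINK")
conn.commit()
conn.close()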
8. High-speed database migration prior to AWS DMS
[Diagram: an on-premises Linux host exports the 500 GB Oracle DB with Data Pump (compressed to ~175 GB, ~2.5 hours), sends the dump files over Tsunami UDP to an EC2 instance in an AWS Availability Zone (~2.5 hours), copies them into the RDS Oracle DATA_PUMP_DIR (~3.5 hours), and imports them (~4 hours); with the stages overlapped, the total time is ~7 hours.]
9. Start your first migration in 10 minutes or less
Keep your apps running during the migration
Replicate within, to or from Amazon EC2 or RDS
Move data to the same or different database engine
Sign up for preview at aws.amazon.com/dms
AWS Database Migration Service
11. Keep your apps running during the migration
[Diagram: application users on the customer premises connect over the Internet or VPN to AWS, where the AWS Database Migration Service replication instance sits between the source and target databases.]
• Start a replication instance
• Connect to source and target databases
• Select tables, schemas, or databases
• Let AWS Database Migration Service create tables, load data, and keep them in sync
• Switch applications over to the target at your convenience
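A rough sketch of these steps with the AWS SDK for Python (boto3) follows. It is illustrative only: every identifier, hostname, schema name, and credential below is a placeholder assumption, not a value from this talk.

import json
import boto3

dms = boto3.client("dms")

# 1. Start a replication instance.
ri = dms.create_replication_instance(
    ReplicationInstanceIdentifier="demo-dms-instance",
    ReplicationInstanceClass="dms.t2.medium",
    AllocatedStorage=50,
)["ReplicationInstance"]
# (In practice, wait for the instance to become available before creating tasks.)

# 2. Connect to the source and target databases.
source = dms.create_endpoint(
    EndpointIdentifier="oracle-source", EndpointType="source", EngineName="oracle",
    ServerName="onprem-db.example.com", Port=1521, DatabaseName="ORCL",
    Username="dms_user", Password="secret",
)["Endpoint"]
target = dms.create_endpoint(
    EndpointIdentifier="aurora-target", EndpointType="target", EngineName="aurora",
    ServerName="aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com", Port=3306,
    Username="admin", Password="secret",
)["Endpoint"]

# 3. Select the tables/schemas to migrate, then let DMS create tables,
#    load data, and keep source and target in sync (full load plus CDC).
mappings = {"rules": [{
    "rule-type": "selection", "rule-id": "1", "rule-name": "include-hr",
    "object-locator": {"schema-name": "HR", "table-name": "%"},
    "rule-action": "include",
}]}
task = dms.create_replication_task(
    ReplicationTaskIdentifier="hr-full-load-and-cdc",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=target["EndpointArn"],
    ReplicationInstanceArn=ri["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(mappings),
)["ReplicationTask"]

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)

Once the task reports that the full load is complete and ongoing changes are being applied, applications can be switched over to the target whenever it is convenient.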
12. After migration, use for replication and data integration
• Replicate data in on-premises databases to AWS
• Replicate OLTP data to Amazon Redshift
• Integrate tables from third-party software into your reporting or core OLTP systems
• Hybrid cloud is a stepping stone in migration to AWS
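For the Amazon Redshift replication use case above, only the target endpoint and the task type change from the previous sketch. Again this is a hedged illustration: the cluster address, database name, and credentials are assumptions, and 'dms', 'source', 'ri', and 'mappings' are reused from that sketch.

import json

# Redshift target endpoint; a real one typically also needs IAM permissions
# for the S3 staging that DMS uses behind the scenes.
redshift_target = dms.create_endpoint(
    EndpointIdentifier="redshift-target", EndpointType="target", EngineName="redshift",
    ServerName="analytics.example.us-east-1.redshift.amazonaws.com", Port=5439,
    DatabaseName="analytics", Username="admin", Password="secret",
)["Endpoint"]

# CDC-only task: replicate ongoing OLTP changes into the data warehouse.
dms.create_replication_task(
    ReplicationTaskIdentifier="oltp-to-redshift-cdc",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=redshift_target["EndpointArn"],
    ReplicationInstanceArn=ri["ReplicationInstanceArn"],
    MigrationType="cdc",
    TableMappings=json.dumps(mappings),
)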
13. Cost-effective and no upfront costs
• T2 pricing starts at $0.018 per Hour for T2.micro
• C4 pricing starts at $0.154 per Hour for C4.large
• 50GB GP2 storage included with T2 instances
• 100GB GP2 storage included with C4 instances
• Data transfer inbound and within AZ is free
• Data transfer across AZs starts at $0.01 per GB
[Diagram: the storage included with the replication instance is used by the replication engine for swap space, logs, and cache.]
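As a back-of-the-envelope example using the rates quoted above (pricing varies by region and changes over time, so treat this purely as a sketch):

# Rough cost of a migration task that runs for three days on a dms.t2.micro
# replication instance, plus 100 GB of cross-AZ data transfer.
hours = 3 * 24
instance_cost = hours * 0.018        # $0.018 per hour  -> about $1.30
cross_az_transfer = 100 * 0.01       # $0.01 per GB     -> about $1.00
print(round(instance_cost + cross_az_transfer, 2))  # about $2.30 in total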
16. Migrate off Oracle and SQL Server
Move your tables, views, stored procedures and DML to MySQL, MariaDB, and Amazon Aurora
Know exactly where manual edits are needed
Download at aws.amazon.com/dms
AWS Schema Conversion Tool
17. Get help with converting tables, views, and code
Schemas
Tables
Indexes
Views
Packages
Stored Procedures
Functions
Triggers
Sequences
User Defined Types
Synonyms
21. RAC on Amazon EC2 would be useful
• Test / dev / non-prod; allow testing to cover RAC-related regression cases
• Scale out and back elastically; a good match for the cloud
• Scale beyond the largest instances
• Low-RTO redundancy at the host/instance level; Application Continuity for near zero downtime
• Test scaling limits; a given workload scales only to n nodes on RAC
• Some applications “require” RAC
• Some customers don’t want to re-engineer everything just to move to AWS
• Customers want it!
22. [no text on this slide]
23. Why no RAC on EC2?
[Diagram: an EBS volume attached to a single EC2 instance, with shared storage crossed out - EC2/EBS does not provide the shared storage that RAC requires.]
28. Sign Up for AWS Database Migration Service
• Sign up for AWS Database Migration Service Preview now:
• aws.amazon.com/dms
• Download the AWS Schema Conversion Tool:
• aws.amazon.com/dms
Introduce self and Sergei (Senior Product Manager)
Migration from on-premises and traditionally hosted databases to AWS managed database services…
Not only how to migrate between same engines, but also between different, like…
But before you can migrate data anywhere, you need a schema; tables and objects into which to load the data.
We’ll talk about how you can convert database objects, in order to support moving between engines.
These days, we’re hearing a lot of customers tell us they want to move their on-premises applications into the cloud.
But moving applications is simpler than moving the databases they depend on.
Applications are usually stateless, and can be moved fairly easily using a lift and shift approach.
(CLICK) But databases are stateful, and they require more care. To move databases to AWS, requires a data migration strategy.
(CLICK) And when it comes to designing those strategies, customers want to be able to do it with the least possible inconvenience and visibility to their users.
(CLICK) And once an application is migrated to AWS, it’s not the end of the story. Often customers have several applications, some in the cloud and some on premises or in hosted environments. Customers need to be able to synchronize their data between on-premises and cloud-based applications.
(CLICK) And the same goes for applications within AWS. Those applications often share data, and customers want to be able to synchronize and replicate data between the various databases they maintain within AWS.
(CLICK) And one other thing: customers moving applications to the cloud, often see it as an opportunity to break free from commercial databases, which tend to have a heavy licensing burden. We often hear customers asking us for a way to convert their commercial databases into AWS solutions, such as RDS MySQL, Postgres, Aurora and Redshift.
Announcing preview of AWS DMS
Explain what it basically is: A managed hosted data replication service engineered to support graceful migration from legacy database systems into the next generation managed databases at AWS.
There are multiple reasons why you’d want to migrate your data.
Read-only replication: reporting, read scaling
Read/write replication: multi-master
There is a multitude of approaches for migrating your data to AWS.
Choose the method based on Data set size, Network Connectivity (access, latency, and bandwidth), ability to sustain downtime for source DB, Need for continuous data synchronization.
If you can take a day or two of downtime, and you don’t have to do the migration process several times, you want to do a Bulk Load.
If you need to minimize downtime, or when your dataset is large and you can’t shut down access to the source while you are migrating your data, you want to consider Ongoing Replication.
In the past we recommended the following high-performance technique for moving large databases.
Use DataPump and export files in parallel.
Use a box that has multiple disks, to parallelize IO
Compression reduces 500 GB to 175 GB
Time to export 500GB is ~2.5 hours
Transport compressed files to EC2 instance using UDP and the Tsunami server.
Install Tsunami on both the source data host and the target EC2 host
Using UDP you can achieve higher rates for transferring files that using TCP
Start Upload when first files from DataPump become available.
Upload in Parallel. No need to wait till all 18 files are done to start upload
Time to upload 175GB is ~2.5 hours
Step 3.
Transfer Files to Amazon RDS DB instance
Amazon RDS DB instance has an externally accessible Oracle Directory Object DATA_PUMP_DIR
Use UTL_FILE to move data files to Amazon RDS DATA_PUMP_DIR
BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;  -- open the remote file in DATA_PUMP_DIR for binary write
BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;  -- write one chunk of the dump file and flush
BEGIN utl_file.fclose(perl_global.fh); END;  -- close the remote file handle
Transfer files as they are received; no need to wait until all 18 files have arrived on the EC2 instance. Start the transfer to the RDS instance as soon as the first file is received.
Total time to transfer files to RDS: ~3.5 hours
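The three UTL_FILE calls above are driven from a client-side script (the original used Perl, hence the perl_global package that holds the file handle between calls). A minimal Python equivalent of the same pattern might look like the sketch below; it assumes that helper package exists on the RDS instance, and the connection string and file name are placeholders.

import cx_Oracle

CHUNK = 32767  # utl_file.put_raw accepts at most 32 KB of RAW per call

conn = cx_Oracle.connect("admin", "secret", "rds-endpoint.example.com:1521/ORCL")
cur = conn.cursor()

# Open the remote file in DATA_PUMP_DIR for binary write; the handle is kept
# in the helper package so the later calls can reuse it.
cur.execute("BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;",
            dirname="DATA_PUMP_DIR", fname="expdp_01.dmp", chunk=CHUNK)

# Stream the local dump file up in 32 KB chunks.
with open("expdp_01.dmp", "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        cur.execute("BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;", data=data)

# Close the remote file handle.
cur.execute("BEGIN utl_file.fclose(perl_global.fh); END;")
conn.close()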
Step 4.
Import data into the Amazon RDS instance
• Import from within Amazon RDS instance using DBMS_DATAPUMP package
• Submit a job using PL/SQL script
Total Time to Import Data into Amazon RDS: ~4 hours
But because we do everything staged and have 18 distinct files, the total duration is ~7 hours.
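The DBMS_DATAPUMP import described above can likewise be submitted from a client script as an anonymous PL/SQL block. A minimal sketch under the same assumptions (placeholder connection details; dump and log file names chosen for illustration):

import cx_Oracle

conn = cx_Oracle.connect("admin", "secret", "rds-endpoint.example.com:1521/ORCL")
cur = conn.cursor()

# Submit a schema-mode import job that reads the dump files already staged
# in DATA_PUMP_DIR on the RDS instance and writes a log file next to them.
cur.execute("""
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open(operation => 'IMPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle => h, filename => 'expdp_%U.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file(handle => h, filename => 'import.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.start_job(handle => h);
END;""")
conn.close()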
Background
-------------------
Open port 46224 for Tsunami communication
Tsunami UDP Protocol: A fast user-space file transfer protocol that uses TCP control and UDP data for transfer over very high speed long distance networks (≥ 1 Gbps and even 10 GE), designed to provide more throughput than possible with TCP over the same networks.
Tsunami Servers: http://sourceforge.net/projects/tsunami-udp/files/latest/download?_test=goal
Optimize the Data Pump Export
• Reduce the data set to optimal size, avoid indexes
• Use compression and parallel processing
• Use multiple disks with independent I/O
Optimize Data Upload
• Use Tsunami for UDP-based file transfer
• Use large Amazon EC2 instance with SSD or PIOPS volume
• Use multiple disks with independent I/O
• You could use multiple Amazon EC2 instances for parallel upload
Optimize Data File Upload to RDS
• Use the largest Amazon RDS DB instance possible during the import process
• Avoid using Amazon RDS DB instance for any other load during this time
• Provision enough storage in the Amazon RDS DB instance for the uploaded files and imported data
* Like all AWS services, it is easy and straightforward to get started. You can get started with your first migration task in 10 min or less.
You simply connect it to your source and target databases, and it copies the data over, and begins replicating changes from source to target.
*That means that you can keep your apps running during the migration, then switch over at a time that is convenient for your business.
* In addition to one-time database migration, you can also use DMS for ongoing data replication. Replicate within, to or from AWS EC2 or RDS databases
For instance, after migrating your database, use the AWS Database Migration Service to replicate data into your Redshift data warehouses, cross-region to other RDS instances, or back to on-premises.
*Again, it is heterogeneous. With DMS, you can move data between engines. It supports Oracle, Microsoft SQL Server, MySQL, PostgreSQL, MariaDB, Amazon Aurora, and Amazon Redshift.
* If you would like to sign up for the preview of DMS, go to…
Let’s take a look at how to use the database migr. Service…
From the landing page, just click “get started”.
That will take you to page that describes how DMS works to migrate your data; how you connect it to a source database and target database, then define replication tasks to move the data.
Using the AWS Database Migration Service to migrate data to AWS is simple.
(CLICK) Start by spinning up a DMS instance in your AWS environment
(CLICK) Next, from within DMS, connect to both your source and target databases
(CLICK) Choose what data you want to migrate. DMS lets you migrate tables, schemas, or whole databases
Then sit back and let DMS do the rest. (CLICK) It creates the tables, loads the data, and best of all, keeps them synchronized for as long as you need
That replication capability, which keeps the source and target data in sync, allows customers to switch applications (CLICK) over to point to the AWS database at their leisure. DMS eliminates the need for high-stakes extended outages to migrate production data into the cloud. DMS provides a graceful switchover capability.
But DMS is for much more than just migration.
(CLICK) DMS enables customers to adopt a hybrid approach to the cloud, maintaining some applications on premises, and others within AWS.
There are dozens of compelling use cases for a hybrid cloud approach using DMS.
(CLICK) For customers just getting their feet wet, AWS is a great place to keep up-to-date read-only copies of on-premises data for reporting purposes. AWS services like Aurora, Redshift and RDS are great platforms for this.
(CLICK) With DMS, you can maintain copies of critical business data from third-party or ERP applications, like employee data from Peoplesoft, or financial data from Oracle E-Business Suite, in the databases used by the other applications in your enterprise. In this way, it enables application integration in the enterprise.
(CLICK) Another nice thing about the hybrid cloud approach is that it lets customers become familiar with AWS technology and services gradually. DMS enables that. Moving to the cloud is much simpler if you have a way to link the data and applications that have moved to AWS with those that haven’t.
With the AWS Database Migration Service you pay for the migration instance that moves your data from your source database to your target database. (CLICK) (Actually talk to points) Each database migration instance includes storage sufficient to support the needs of the replication engine, such as swap space, logs, and cache. (CLICK) (actually talk to points) Inbound data transfer is free. (CLICK) Additional charges only apply (CLICK) if you decide to allocate additional storage for data migration logs or when you replicate your data to a database in another region or on-premises.
AWS Database Migration Service currently supports the T2 and C4 instance classes. T2 instances are low-cost standard instances designed to provide a baseline level of CPU performance with the ability to burst above the baseline. They are suitable for developing, configuring and testing your database migration process, and for periodic data migration tasks that can benefit from the CPU burst capability.
C4 instances are designed to deliver the highest level of processor performance and achieve significantly higher packet per second (PPS) performance, lower network jitter, and lower network latency. You should use C4 instances if you are migrating large databases and are looking to minimize the migration time.
Elaborate on heterogeneous use cases
Database engine migration – cost savings; move to fully managed and scalable cloud-native, enterprise-class engines like Aurora
Low-cost reporting, analytics and BI for systems on commercial OLTP (MySQL, Postgres, Aurora)
Data integration – data such as customer accounts can be presented not only on the master platform, but also in applications that are based on non-commercial engines
But you can’t just pick up an Oracle table and put it down in MySQL. You can’t run an Oracle PL/SQL package on Postgres.
To migrate or replicate data between engines, you need a way to convert the schema, to build a set of tables and objects on the destination that is native to that engine.
We’ve been working on that problem.
Introduce Sergei
The AWS Schema Conversion Tool is a development environment that you download to your desktop and use to save time when migrating from Oracle and SQL Server to next-generation cloud databases such as Amazon Aurora.
You can convert database objects such as tables, indexes, views, stored procedures, and Data Manipulation Language (DML) statements like SELECT, INSERT, DELETE, UPDATE.