This document provides an overview of Amazon Web Services' (AWS) managed database services, including Amazon Relational Database Service (RDS), Amazon DynamoDB, Amazon ElastiCache, and Amazon Redshift. It discusses the benefits of fully managed database services over self-managed options, compares features and use cases for each service, and describes how billing works, with emphasis on each service's free tier offering.
Database migration: simple, cross-engine and cross-platform migrations with AWS Database Migration Service (Amazon Web Services)
Learn how you can migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases using AWS Database Migration Service. We'll discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We'll also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents. Best of all, we'll spend most of the time demonstrating the product and showing use cases designed to help your business.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Getting started with the hybrid cloud: enterprise backup and recovery - Toronto (Amazon Web Services)
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing B&R processes. Services mentioned: S3, Glacier, Snowball, third-party partners, Storage Gateway, and ingestion services.
Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity in the cloud and makes web scale computing easier for customers. Amazon EC2 provides a wide variety of compute instances suited to every imaginable use case, from static websites to high performance supercomputing on-demand, available via highly flexible pricing options. Amazon EC2 works with Amazon Elastic Block Store (Amazon EBS) and Auto Scaling to make it easy for you to get the performance and availability you need for your applications. This session will introduce the key features and different instance types offered by Amazon EC2, demonstrate how you can get started and provide guidance on choosing the right types of instance and purchasing options.
Database performance is a key factor for keeping your app running fast. As the amount of data and request throughput grow, it becomes harder to keep it nimble and performant. In this webinar, we will discuss using Redis, a popular NoSQL key-value store, to run your data fast at scale. Redis can be used as a cache in front of a database or as a fast in-memory data store. We will also cover how you can use ElastiCache for Redis to easily set up a large multi-terabyte Redis environment and operate it with zero management.
Learning Objectives:
• Understand fast data
• Understand the capabilities of Amazon ElastiCache for Redis to cache relational and NoSQL databases
• Use Amazon ElastiCache for Redis as a primary data store
Who Should Attend:
• Developers, Data Architects
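The cache-in-front-of-a-database pattern the webinar describes is usually called cache-aside: read from the cache first, and on a miss fetch from the database and populate the cache. A minimal sketch follows; a plain dict stands in for Redis so the example is self-contained (in practice you would point a redis-py client at an ElastiCache endpoint), and `slow_database_query` is a made-up placeholder, not a real API.

```python
# Cache-aside pattern sketched with a plain dict standing in for Redis.
cache = {}                      # stand-in for a Redis instance
CACHE_HITS = {"count": 0}       # simple hit counter for illustration

def slow_database_query(user_id):
    """Stand-in for a round trip to the primary database."""
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                      # 1. try the cache first
        CACHE_HITS["count"] += 1
        return cache[key]
    row = slow_database_query(user_id)    # 2. on a miss, hit the database
    cache[key] = row                      # 3. populate the cache for next time
    return row
```

The first lookup for a given user misses and falls through to the database; repeated lookups are served from memory, which is what keeps read-heavy workloads fast at scale.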
AWS and its partners offer a wide range of tools and features to help you meet your security objectives. These tools mirror the familiar controls you deploy within your on-premises environments. AWS provides security-specific tools and features across network security, configuration management, access control, and data security. In addition, AWS provides monitoring and logging tools that can provide full visibility into what is happening in your environment. In this session, you will be introduced to the range of security tools and features that AWS offers, and the latest security innovations coming from AWS.
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates.
Amazon Web Services offers a broad set of global cloud-based products, including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates.
Presented by: Guy Kfir, Senior Account Manager, Amazon Web Services
Customer Guest: David Costa, CTO, Fredhopper
Building an Amazon Data Warehouse and Using Business Intelligence Analytics Tools (Amazon Web Services)
It has never been easier or more affordable to use AWS to solve business problems and uncover new opportunities with data. Now, businesses of all sizes and across all industries can take advantage of big data technologies and easily collect, store, process, analyze, and share their data. Gain a thorough understanding of what AWS offers across the big data lifecycle and learn architectural best practices for applying these technologies to your projects. We will also deep dive into how to use AWS services such as Kinesis, DynamoDB, Redshift, and QuickSight to optimize logging, build real-time applications, and analyze and visualize data at any scale.
Cost Optimising Your Architecture: Practical Design Steps for Developer Savings (Amazon Web Services)
This session uses practical examples aimed at architects and developers. Using code and AWS CloudFormation in concert with services such as Amazon EC2, Amazon ECS, Lambda, Amazon RDS, Amazon SQS, Amazon SNS, Amazon S3 and more, we demonstrate the financial advantages of different architectural decisions. Attendees will walk away with concrete examples, as well as a new perspective on how they can build systems economically and effectively.
Speaker: Simon Elisha, Head of Solution Architecture, ANZ Public Sector, Amazon Web Services
Level 300
Getting Started with Managed Database Services on AWS - September 2016 Webinar Series (Amazon Web Services)
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Learning Objectives:
• Overview of managed database services available on AWS
• How to combine them for high-performance, cost-effective architectures
• Learn how to choose between the AWS database services based on the use case
Who Should Attend:
• IT Managers, DBAs, Enterprise and Solution Architects, DevOps Engineers, and Developers
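The choice the session describes follows directly from how each service is characterised above: relational workloads map to RDS, NoSQL to DynamoDB, in-memory caching to ElastiCache, and data warehousing to Redshift. A minimal sketch of that mapping, purely illustrative (`choose_database` and the use-case keys are our names, not an AWS API):

```python
# Use-case-to-service mapping, following the characterisations in the
# session abstract. Illustrative only; not an AWS SDK interface.
SERVICES = {
    "relational": "Amazon RDS",        # managed relational database
    "nosql": "Amazon DynamoDB",        # fully managed NoSQL
    "caching": "Amazon ElastiCache",   # fast in-memory cache
    "warehouse": "Amazon Redshift",    # petabyte-scale data warehouse
}

def choose_database(use_case):
    """Return the managed service the abstract pairs with a use case."""
    try:
        return SERVICES[use_case]
    except KeyError:
        raise ValueError(f"no managed service mapped for {use_case!r}")
```

Real architectures often combine several of these, e.g. ElastiCache in front of RDS for hot reads, with Redshift fed from the same data for analytics.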
AWS APAC Webinar Week - Launching Your First Big Data Project on AWS (Amazon Web Services)
Want to get ramped up on how to use Amazon's big data services and launch your first big data application on AWS?
Join us on a journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3.
In this session we review architecture design patterns for big data solutions on AWS, and give you access to everything you need so that you can rebuild and customize the application yourself.
AWS January 2016 Webinar Series - Amazon Aurora for Enterprise Database Applications (Amazon Web Services)
Relational databases are a cornerstone of the enterprise IT landscape, powering business-critical applications of many kinds. Though they have been around for a while, current commercial relational databases have lagged behind in innovation. Amazon Aurora, a managed database service built for the cloud, is intended to change that. It targets the high-performance needs of business-critical applications with an emphasis on cost-effectiveness.
In this session, we will look into how Aurora fits the needs of applications built and bought by enterprises to power their business.
Learning Objectives:
Learn about the overall architecture, capabilities, and cost-effectiveness of Aurora, comparing it to current commercial database offerings
Explore best practices for enterprises adopting Aurora for existing and new applications, as well as strategies, tools, and techniques for migrating existing databases to Aurora
Who Should Attend:
IT Managers, DBAs, Enterprise and Solution Architects, DevOps Engineers and Developers
Scientists, developers, and many other technologists from many different industries are taking advantage of Amazon Web Services to meet the challenges of the increasing volume, variety, and velocity of digital information. Amazon Web Services offers an end-to-end portfolio of cloud computing resources to help you manage big data by reducing costs, gaining a competitive advantage and increasing the speed of innovation.
In this presentation from a webinar focusing on running Data Analytics on AWS, AWS Technical Evangelist Ian Massingham discusses the role that AWS services can play in helping you derive value from your data. Topics include stream processing with Amazon Kinesis, processing data with Amazon Elastic MapReduce (EMR) and its ecosystem of tools, and running large-scale data warehouses on AWS with Redshift.
Topics covered in this session:
• Discover how AWS customers are extracting value from Big Data
• Understand the role that AWS services could play in helping you to manage your data
• Learn about running Hadoop on AWS with Amazon EMR and its ecosystem of tools for data processing and analysis
See a recording of this webinar on YouTube here: http://youtu.be/ueRarqsCbJM
See past and future webinars in the Journey Through the Cloud series here: http://aws.amazon.com/campaigns/emea/journey/
For a deep dive into specific AWS services, you might also be interested in the Masterclass webinar series, which you can find here: http://aws.amazon.com/campaigns/emea/masterclass/
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is designed to be compatible with MySQL 5.6, so that existing MySQL applications and tools can run without requiring modification. AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
Presented by: Danilo Poccia, Technical Evangelist, Amazon Web Services
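Because Aurora is designed to be wire-compatible with MySQL 5.6, the claim that "existing MySQL applications and tools can run without requiring modification" often reduces, in the simplest case, to changing only the connection endpoint. A sketch under that assumption; the hostnames are made up for illustration and `build_mysql_dsn` is our helper, not part of any AWS SDK:

```python
# Sketch: switching a MySQL application to Aurora can be a hostname change.
# Endpoints below are hypothetical examples.
def build_mysql_dsn(host, db, user, port=3306):
    """Assemble a standard MySQL connection string."""
    return f"mysql://{user}@{host}:{port}/{db}"

# Before: self-managed MySQL
before = build_mysql_dsn("mysql01.internal.example.com", "shop", "app")

# After: same application code, only the host swapped for a
# (hypothetical) Aurora cluster endpoint
after = build_mysql_dsn(
    "mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com", "shop", "app"
)
```

Everything else about the connection (port, database name, credentials model, SQL dialect) stays the same, which is what keeps the migration low-friction.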
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents.
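The homogeneous/heterogeneous distinction above comes down to whether source and target share an engine family: same family means data can move as-is, while crossing families requires schema conversion (the job of the Schema Conversion Tool). A small sketch of that rule; the engine-family table is illustrative and follows the pairings named in the abstract, not an AWS API:

```python
# Homogeneous vs heterogeneous migrations, per the abstract's examples.
# Family table is illustrative; Aurora was MySQL 5.6-compatible at the
# time of this talk, and MariaDB is MySQL-compatible.
ENGINE_FAMILY = {
    "oracle": "oracle",
    "sql server": "sqlserver",
    "postgresql": "postgresql",
    "mysql": "mysql",
    "mariadb": "mysql",
    "aurora": "mysql",
}

def migration_kind(source, target):
    """Classify a migration by comparing engine families."""
    same = ENGINE_FAMILY[source.lower()] == ENGINE_FAMILY[target.lower()]
    return "homogeneous" if same else "heterogeneous"
```

So Oracle-to-Oracle is homogeneous, while Oracle-to-Aurora or SQL Server-to-MariaDB are heterogeneous and need schema conversion first.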
Evolution of Geospatial Workloads on AWS - AWS PS Summit Canberra (Amazon Web Services)
Geospatial workloads are often amongst the first to move to AWS in government. This session will cover some common topics in GIS, including optimizing for license costs, leveraging native cloud capabilities, and running GIS "desktop" software on the AWS cloud.
Speaker: Herman Coomans, Solutions Architect, Amazon Web Services
Level: 200
Customers using AWS benefit from over 1,800 security and compliance controls built into the AWS platform and operations. In this session, you will learn how to take advantage of the advanced security features of the AWS platform to gain the visibility, agility, and control needed to be more secure in the cloud than in legacy environments. We'll take a look at several reference architectures for common workloads and highlight the innovative ways customers are using AWS to manage security more efficiently. After attending this session, you will be familiar with the shared security responsibility model and how you can inherit controls from the rich compliance and accreditation programs maintained by AWS.
AWS re:Invent 2016: Getting Started with Amazon Aurora (DAT203) - Amazon Web Services
Amazon Aurora is a MySQL-compatible relational database engine with the speed, reliability, and availability of high-end commercial databases at one-tenth the cost. This session introduces you to Amazon Aurora, explores the capabilities and features of Aurora, explains common use cases, and helps you get started with Aurora. Debanjan Saha, general manager for Aurora, explains how Aurora differs from other commonly available databases while staying compatible with MySQL and providing a high-end, cost-effective alternative to commercial and open-source database engines. In addition, Linda Xu, data architect at Ticketmaster, walks you through Ticketmaster's journey to Amazon Aurora, starting with evaluation through production migration of a critical Ticketmaster database to Amazon Aurora. Ticketmaster is one of the world's top 10 e-commerce companies and the global market leader in ticketing. In this session, Linda discusses how Aurora lets Ticketmaster provide better services to their fans, customers, and clients, and helps reduce the cost and operational burden while giving greater flexibility to support heavy traffic spikes.
AWS re:Invent 2016: AWS Database State of the Union (DAT320) - Amazon Web Services
Raju Gulabani, vice president of AWS Database Services, discusses the evolution of database services on AWS and the new database services and features launched this year, and shares the vision for continued innovation in this space. We are witnessing unprecedented growth in the amount of data collected, in many different shapes and forms. Storage, management, and analysis of this data require database services that scale and perform in ways not possible before. AWS offers a collection of database and data services, such as Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR, to process, store, manage, and analyze data. In this session, we provide an overview of AWS database services and discuss how our customers are using these services today.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates.
Presented by: Guy Kfir, Senior Account Manager, Amazon Web Services
Customer Guest: David Costa, CTO, Fredhopper
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Building an Amazon Datawarehouse and Using Business Intelligence Analytics ToolsAmazon Web Services
Using AWS has never been easier or more affordable to solve business problems and uncover new opportunities using data. Now, businesses of all sizes and across all industries can take advantage of big data technologies and easily collect, store, process, analyze, and share their data. Gain a thorough understanding of what AWS offers across the big data lifecycle and learn architectural best practices for applying these technologies to your projects. We will also deep dive into how to use AWS services such as Kinesis, DynamoDB, Redshift, and Quicksight to optimize logging, build real-time applications, and analyze and visualize data at any scale.
Cost Optimising Your Architecture Practical Design Steps for Developer Saving...Amazon Web Services
This session uses practical examples aimed at architects and developers. Using code and AWS CloudFormation in concert with services such as Amazon EC2, Amazon ECS, Lambda, Amazon RDS, Amazon SQS, Amazon SNS, Amazon S3 and more, we demonstrate the financial advantages of different architectural decisions. Attendees will walk away with concrete examples, as well as a new perspective on how they can build systems economically and effectively.
Speaker: Simon Elisha, Head of Solution Architecture, ANZ Public Sector, Amazon Web Services
Level 300
Getting Started with Managed Database Services on AWS - September 2016 Webina...Amazon Web Services
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Learning Objectives:
• Overview of managed database services available on AWS
• How to combine them for high-performance cost effective architectures
• Learn how to choose between the AWS database services based on the use case
Who Should Attend:
• IT Managers, DBAs, Enterprise and Solution Architects, IT Managers, DBAs, Enterprise and Solution Architects, Devops Engineers and Developers
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
AWS APAC Webinar Week - Launching Your First Big Data Project on AWSAmazon Web Services
Want to get ramped up on how to use Amazon's big data services and launch your first big data application on AWS?
Join us on a journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3.
In this session we review architecture design patterns for big data solutions on AWS, and give you access to everything you need so that you can rebuild and customize the application yourself.
AWS January 2016 Webinar Series - Amazon Aurora for Enterprise Database Appli...Amazon Web Services
Relational databases are a cornerstone of the enterprise IT landscape, powering business-critical applications of many kinds. Though they have been around for a while, current commercial relational databases have lagged behind in innovation. Amazon Aurora, a managed database service built for the cloud, is intended to change that. It targets the high-performance needs of business-critical applications with an emphasis on cost-effectiveness.
In this session, we will look into how Aurora fits the needs of applications built and bought by enterprises to power their business.
Learning Objectives:
Learn about the overall architecture, capabilities, and cost-effectiveness of Aurora, comparing it to current commercial database offerings
Explore best practices for enterprises adopting Aurora for existing and new applications, as well as strategies, tools, and techniques for migrating existing databases to Aurora
Who Should Attend:
IT Managers, DBAs, Enterprise and Solution Architects , DevOps Engineers and Developers
Scientists, developers, and many other technologists from many different industries are taking advantage of Amazon Web Services to meet the challenges of the increasing volume, variety, and velocity of digital information. Amazon Web Services offers an end-to-end portfolio of cloud computing resources to help you manage big data by reducing costs, gaining a competitive advantage and increasing the speed of innovation.
In this presentation from a webinar focusing on running Data Analytics on AWS, AWS Technical Evangelist, Ian Massingham, discusses the role that AWS services can play in helping you to derive value from your data. Topics include stream processing with Amazon Kinesis, processing data with Amazon Elastic MapReduce (EMR)and its ecosystem of tools and running large scale data warehouses on AWS with Redshift.
Topics covered in this session:
• Discover how AWS customers are extracting value from Big Data
• Understand the role that AWS services could play in helping you to manage your data
• Learn about running Hadoop on AWS Amazon EMR and its ecosystem of tools for data processing and analysis
See a recording of this webinar on YouTube here: http://youtu.be/ueRarqsCbJM
See past and future webinars in the Journey Through the Cloud series here: http://aws.amazon.com/campaigns/emea/journey/
For a deep dive into specific AWS services, you might also be interested in the Masterclass webinar series, which you can find here: http://aws.amazon.com/campaigns/emea/masterclass/
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is designed to be compatible with MySQL 5.6, so that existing MySQL applications and tools can run without requiring modification. AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
Presented by: Danilo Poccia, Technical Evangelist, Amazon Web Services
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents.
Evolution of Geospatial Workloads on AWS - AWS PS Summit Canberra Amazon Web Services
Geospatial workloads are often amongst the first to move to AWS in government. This session will cover some common topics in GIS, including optimizing for license costs, leveraging native cloud capabilities, and running GIS "desktop" software on the AWS cloud.
Speaker: Herman Coomans, Solutions Architect, Amazon Web Services
Level: 200
Customers using AWS benefit from over 1,800 security and compliance controls built into the AWS platform and operations. In this session, you will learn how to take advantage of the advanced security features of the AWS platform to gain the visibility, agility, and control needed to be more secure in the cloud than in legacy environments. We'll take a look at several reference architectures for common workloads and highlight the innovative ways customers are using AWS to manage security more efficiently. After attending this session, you will be familiar with the shared security responsibility model and how you can inherit controls from the rich compliance and accreditation programs maintained by AWS.
AWS re:Invent 2016: Getting Started with Amazon Aurora (DAT203)Amazon Web Services
Amazon Aurora is a MySQL-compatible relational database engine with the speed, reliability, and availability of high-end commercial databases at one-tenth the cost. This session introduces you to Amazon Aurora, explores the capabilities and features of Aurora, explains common use cases, and helps you get started with Aurora. Debanjan Saha, general manager for Aurora, explains how Aurora differs from other commonly available databases while staying compatible with MySQL and providing a high-end, cost-effective alternative to commercial and open-source database engines. In addition, Linda Xu, data architect at Ticketmaster, walks you through Ticketmaster's journey to Amazon Aurora, starting with evaluation through production migration of a critical Ticketmaster database to Amazon Aurora. Ticketmaster is one of the world's top 10 e-commerce companies and the global market leader in ticketing. In this session, Linda discusses how Aurora lets Ticketmaster provide better services to their fans, customers, and clients, and helps reduce the cost and operational burden while giving greater flexibility to support heavy traffic spikes.
AWS re:Invent 2016: AWS Database State of the Union (DAT320)Amazon Web Services
Raju Gulabani, vice president of AWS Database Services (AWS), discusses the evolution of database services on AWS and the new database services and features we launched this year, and shares our vision for continued innovation in this space. We are witnessing an unprecedented growth in the amount of data collected, in many different shapes and forms. Storage, management, and analysis of this data requires database services that scale and perform in ways not possible before. AWS offers a collection of such database and other data services like Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR to process, store, manage, and analyze data. In this session, we provide an overview of AWS database services and discuss how our customers are using these services today.
Using Amazon CloudSearch With Databases - CloudSearch Meetup 061913Michael Bohlig
Presentation on using Amazon CloudSearch with databases. What to use when? How can you use CloudSearch with a database? Tom Hill, Solutions Architect, Amazon CloudSearch
AWS re:Invent 2016: Global Traffic Management with Amazon Route 53 Traffic Fl...Amazon Web Services
As companies grow and expand their global footprint, it becomes increasingly critical to make systems highly available while also improving responsiveness to end-users. Companies are choosing to place their applications closer to end-users to improve performance, which introduces the complications of how to route end-user traffic to the most appropriate endpoints and how to most efficiently route traffic within internal systems.
In this session, learn how customers are using Route 53's Traffic Flow service for global traffic management, improving performance and availability for end users while reducing IT management cost. We will walk through how to use Traffic Flow to manage traffic to your applications' globally-distributed endpoints to optimize for constraints such as endpoint load, the health of your resources, geographic restrictions, and Internet latency. We'll demonstrate how you can configure multiple routing policies and take advantage of code control and versioning for easier management of your DNS and traffic management configuration.
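The routing policies mentioned here can be pictured with a short sketch. The snippet below is a toy model of a weighted routing decision combined with health checks, not Route 53's actual implementation; the endpoint names, the weights, and the `pick_endpoint` helper are all hypothetical.

```python
import random

def pick_endpoint(weighted_endpoints, healthy):
    """Toy weighted-routing decision: each endpoint is chosen with
    probability proportional to its weight, and endpoints that fail
    their health check are excluded from consideration."""
    candidates = [(ep, w) for ep, w in weighted_endpoints.items() if healthy.get(ep)]
    if not candidates:
        raise RuntimeError("no healthy endpoints")
    total = sum(w for _, w in candidates)
    r = random.uniform(0, total)
    upto = 0.0
    for ep, w in candidates:
        upto += w
        if r <= upto:
            return ep
    return candidates[-1][0]  # guard against float rounding at the top end

# Hypothetical setup: eu-west-1 gets ~3x the traffic of us-east-1.
endpoints = {"eu-west-1": 3, "us-east-1": 1}
health = {"eu-west-1": True, "us-east-1": True}
print(pick_endpoint(endpoints, health) in endpoints)  # True
```

If `eu-west-1` were marked unhealthy, every request would fall through to `us-east-1`, which is the failover behavior a health-checked weighted policy is meant to give you.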
In this session we will discuss how you can leverage the new cross-platform AWS Mobile Services to build a highly scalable and reliable mobile app, powered by the AWS Cloud. We will explore core functionality like authentication and authorization of users, data synchronization, backend infrastructure without the need to manage servers, understanding your user behavior, engaging your users and bringing your users back to your app. No matter if you are building the next great social app, or a front-office enterprise mobile app, this session will discuss best practices and reference architectures for building reliable and scalable mobile apps.
DevOps as a Pathway to AWS | AWS Public Sector Summit 2016Amazon Web Services
The concept of DevOps is a powerful one for federal agencies, promising to provide the responsiveness and speed needed to keep pace with rapidly changing mission requirements. In terms of cloud adoption, DevOps accelerates the development of new, cloud-native applications while building the operational capabilities needed to manage more dynamic environments. During this session, we will review specific options for implementing DevOps using Amazon Web Services (AWS), including development of new Platform-as-a-Service capabilities and rapid migration of enterprise systems.
Easily develop mobile apps powered by AWS services using a single console. Whether you are creating a brand new mobile app or adding features to an existing app, AWS Mobile Hub lets you leverage the features, scalability, reliability, and low cost of AWS in minutes. AWS Mobile Hub walks you through feature selection and configuration. It then automatically provisions the AWS services required to power these features, and generates working quickstart apps for iOS and Android that use your provisioned services.
Test on the same devices your customers use. Run tests across a large selection of physical devices. Unlike emulators, physical devices provide a more accurate understanding of how users interact with your app by taking into account factors such as memory, CPU usage, location, and modifications done by manufacturers and carriers to the firmware and software.
Presented by: Danilo Poccia, Technical Evangelist, Amazon Web Services
Join us to learn how the APN can accelerate and support your cloud business strategy. The session will highlight the various routes to market, programs and resources available to AWS Customers and Partners looking to grow and develop their business on AWS.
#EarthOnAWS: How the Cloud Is Transforming Earth Observation | AWS Public Sec...Amazon Web Services
Making earth observation data available in the cloud is accelerating scientific discovery and enabling the creation of new products. Attend and learn how the cloud lets earth scientists, researchers, startups, and GIS professionals gather and analyze earth observation data without worrying about limitations of bandwidth, storage, memory, or processing power. Join us and learn how earth science data projects are becoming more scalable, agile, and efficient with AWS on-demand IT infrastructure.
Join us for a live session based on our popular Masterclass series of online events. Amazon S3 hosts over 2 trillion objects and is used for storing a wide range of data, from system backups to digital media. In this session we will explain the features of Amazon S3 from static website hosting, through server side encryption to Amazon Glacier integration. We will dive deep into the feature sets of Amazon S3 to give a rounded overview of its capabilities, looking at common use cases, APIs and best practice.
Application Delivery Patterns for Developers - Technical 401Amazon Web Services
Every developer has gone through the frustration of creating new features, fixing bugs, or refactoring beautiful code, then waiting for it to reach the promised land of production. Come and learn how to get your changes into the hands of your customers with more speed, reliability, security, and quality.
We will dive deep into architectures for continuous delivery pipelines, apply lean principles, and build intelligence into your pipeline.
Speaker: Shiva Narayanaswamy, Solutions Architect, Amazon Web Services
Featured Customer - REA Group
Faster Time to Science - Scaling BioMedical Research in the Cloud with SciOps...Amazon Web Services
Medical researchers are constantly looking for ways to conduct more experiments, innovate at a faster rate, and derive meaningful research outcomes more quickly. One of the major barriers to achieving this is long processing times due to giant datasets. A combined industry and research partnership, large-scale on-demand compute, and the cloud have been key to making inroads into solving this very common challenge.
DiUS and the Walter Eliza Hall Institute of Medical Research (WEHI) have been working on approaches to accelerate the capture, processing and analysis of bioimagery and microscopy data used in the research labs at WEHI. In this talk, Pavi and Lachlan will share a case study starting with a background on microscope development and a synopsis of state-of-the-art microscopy techniques requiring large scale compute. The session will then launch into a discussion of scaling complex image analysis using Fiji, a bio-science image analysis package and dealing with ever-growing bioimaging datasets.
You will learn about the development of tailored high performance compute (HPC) platforms on AWS to enable this kind of research as well as the 'convention-over-configuration' framework developed by DiUS as a repeatable solution. Lower level technical considerations around network integration, efficient data movement and cluster compute approaches using CfnCluster on AWS will also be discussed in detail.
Speakers: Lachlan Whitehead, PhD, BioImage Analyst and Microscopy, Walter and Eliza Hall Institute of Medical Research & Pavi De Alwis, Senior Software Engineer, DiUS
What do companies with internal platforms have to change to succeed in the cloud? The four pillars at the heart of IT solutions in the cloud are reliability, performance efficiency, security, and cost optimization. This talk discusses cloud well-architected patterns and the tools that facilitate the development and automate the DevOps process. The talk also provides concrete examples of serverless architecture and migration adoption.
In this session, we walk through the Amazon VPC network presentation and describe the problems we were trying to solve when we created it. Next, we walk through how these problems are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we provide an overview of the solution that we've implemented and discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features like VPC endpoints.
Amazon Simple Work Flow Engine (SWF): How Beamr uses SWF for video optimizati...Amazon Web Services
Amazon Simple Workflow Service (SWF) helps developers build, run, and scale background jobs that have parallel or sequential steps. Hi, we are Beamr, a Tel Aviv-based startup doing media optimization. Running on AWS, we decided to use SWF to orchestrate our video processing workflow. In this lecture, Dan Julius, Beamr's VP of R&D, will explain how SWF helps Beamr manage workflow progress, what challenges it solved, and what things you should keep in mind when using this service.
Selecting the Right AWS Database Solution - AWS 2017 Online Tech TalksAmazon Web Services
• Get an overview of managed database services available on AWS
• Learn how to combine them for high-performance cost effective architectures
• Learn how to choose between the AWS database services based on your use case
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be economical. We will cover how each service might help support your application and how to get started.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started. We will also have with us Jeongsang Baek, the VP of Engineering from IGAWorks, Korea’s No.1 mobile business platform, who will walk us through their architecture and share with us the key insights that they gained from using the various AWS database technologies to deliver a reliable, efficient and cost-effective experience.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
Speaker:
Shaun Pearce, AWS Solutions Architect
AWS June Webinar Series - Getting Started: Amazon RedshiftAmazon Web Services
Amazon Redshift is a fast, fully-managed, petabyte-scale data warehouse service that costs less than $1,000 per TB per year. In this presentation, you'll get an overview of Amazon Redshift, including how it uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. Learn how, with just a few clicks in the AWS Management Console, you can set up a fully functional data warehouse, ready to accept data without learning any new languages and plugging in easily to the existing business intelligence tools and applications you use today. This webinar is ideal for anyone looking to gain deeper insight into their data without the usual challenges of time, cost, and effort.
In this webinar, you will:
• Understand what Amazon Redshift is and how it works
• Create a data warehouse interactively through the AWS Management Console
• Load data into your new Amazon Redshift data warehouse from S3
Who should attend: IT professionals, developers, line-of-business managers
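The columnar-storage point can be made concrete with a toy example. This is an illustration of the general idea, not Redshift's on-disk format: an aggregate over one field only has to read that field's column, while a row layout drags every field of every record along.

```python
# Row-oriented layout: each record is stored together, so an aggregate
# over one field still walks every record in full.
rows = [
    {"order_id": 1, "region": "EU", "amount": 120.0},
    {"order_id": 2, "region": "US", "amount": 80.0},
    {"order_id": 3, "region": "EU", "amount": 200.0},
]

# Column-oriented layout: each field is stored contiguously, so a query
# like SUM(amount) reads only the single column it needs.
columns = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.0, 80.0, 200.0],
}

row_total = sum(r["amount"] for r in rows)  # touches every field of every row
col_total = sum(columns["amount"])          # touches one column only
print(row_total == col_total == 400.0)  # True
```

On three rows the difference is invisible; at hundreds of gigabytes per column it is the difference between scanning one column and scanning the whole table, which is a large part of why columnar warehouses are fast for analytics.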
In this presentation, you will get a look under the covers of Amazon Redshift, a fast, fully-managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance, and you'll hear from a customer about their use case, taking advantage of fast performance on enormous datasets while leveraging economies of scale on the AWS platform.
Getting Started with Big Data and HPC in the Cloud - August 2015Amazon Web Services
How can you use Big Data to grow your business and discover new opportunities? When organizations effectively capture, analyze, visualize and apply big data insights to their business goals, they differentiate themselves from their competitors and outperform them in terms of operational efficiency and the bottom line. With Amazon Web Services, businesses and researchers can easily fulfill their high performance computing (HPC) requirements with the added benefit of ad-hoc provisioning, pay-as-you-go pricing and faster time-to-results. Join this session to understand how to run HPC applications in the AWS cloud, and to learn about the different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse) and Amazon Kinesis (Streaming), when to use them and how they work together.
In this session, you get an overview of Amazon Redshift, a fast, fully-managed, petabyte-scale data warehouse service. We'll cover how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also discuss new features, architecture best practices, and share how customers are using Amazon Redshift for their Big Data workloads.
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016Amazon Web Services
This session provides the attendee with an overview of Amazon RDS across different database types and then dives deep into the benefits and performance of Amazon Aurora.
(SOV202) Choosing Among AWS Managed Database Services | AWS re:Invent 2014Amazon Web Services
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We'll cover how each service might help support your application, how much each service costs, and how to get started.
Introduction to Amazon Redshift and What's Next (DAT103) | AWS re:Invent 2013Amazon Web Services
Amazon Redshift is a fast, fully-managed, petabyte-scale data warehouse service that costs less than $1,000 per terabyte per year—less than a tenth the price of most traditional data warehousing solutions. In this session, you get an overview of Amazon Redshift, including how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. Finally, we announce new features that we've been working on over the past few months.
How to build Forecasting services leveraging ML and deep learn...Amazon Web Services
Forecasting is an important process for a great many companies and is used in various areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Server...Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services let us break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation with the goal of increasing its pace of innovation. Over that period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
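The savings figures quoted above come down to straightforward arithmetic, which the sketch below spells out. The prices are hypothetical and `spot_savings` is an illustrative helper, not an AWS API.

```python
def spot_savings(on_demand_hourly, spot_hourly, hours):
    """Fractional saving from running a workload on Spot instead of
    On-Demand capacity (illustrative arithmetic, not real prices)."""
    on_demand_cost = on_demand_hourly * hours
    spot_cost = spot_hourly * hours
    return (on_demand_cost - spot_cost) / on_demand_cost

# Hypothetical prices: $0.10/h On-Demand vs $0.03/h Spot, for one
# month (720 hours) of a stateless container fleet.
print(round(spot_savings(0.10, 0.03, 720), 2))  # 0.7
```

A 70% average saving, as above, assumes the workload tolerates interruption (which is why the abstract stresses stateless container designs); the advertised "up to 90%" end of the range corresponds to deeper Spot discounts on some instance types.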
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployments of...Amazon Web Services
With the traditional approach to IT, for many years it was difficult to implement DevOps techniques, which until now have often involved manual activities, occasionally leading to application downtime and interruptions to user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads Amazon Web Services
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Crea la tua prima serverless ledger-based app con QLDB e NodeJSAmazon Web Services
Molte aziende oggi, costruiscono applicazioni con funzionalità di tipo ledger ad esempio per verificare lo storico di accrediti o addebiti nelle transazioni bancarie o ancora per tenere traccia del flusso supply chain dei propri prodotti.
Alla base di queste soluzioni ci sono i database ledger che permettono di avere un log delle transazioni trasparente, immutabile e crittograficamente verificabile, ma sono strumenti complessi e onerosi da gestire.
Amazon QLDB elimina la necessità di costruire sistemi personalizzati e complessi fornendo un database ledger serverless completamente gestito.
In questa sessione scopriremo come realizzare un'applicazione serverless completa che utilizzi le funzionalità di QLDB.
Con l’ascesa delle architetture di microservizi e delle ricche applicazioni mobili e Web, le API sono più importanti che mai per offrire agli utenti finali una user experience eccezionale. In questa sessione impareremo come affrontare le moderne sfide di progettazione delle API con GraphQL, un linguaggio di query API open source utilizzato da Facebook, Amazon e altro e come utilizzare AWS AppSync, un servizio GraphQL serverless gestito su AWS. Approfondiremo diversi scenari, comprendendo come AppSync può aiutare a risolvere questi casi d’uso creando API moderne con funzionalità di aggiornamento dati in tempo reale e offline.
Inoltre, impareremo come Sky Italia utilizza AWS AppSync per fornire aggiornamenti sportivi in tempo reale agli utenti del proprio portale web.
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
Molte organizzazioni sfruttano i vantaggi del cloud migrando i propri carichi di lavoro Oracle e assicurandosi notevoli vantaggi in termini di agilità ed efficienza dei costi.
La migrazione di questi carichi di lavoro, può creare complessità durante la modernizzazione e il refactoring delle applicazioni e a questo si possono aggiungere rischi di prestazione che possono essere introdotti quando si spostano le applicazioni dai data center locali.
In queste slide, gli esperti AWS e VMware presentano semplici e pratici accorgimenti per facilitare e semplificare la migrazione dei carichi di lavoro Oracle accelerando la trasformazione verso il cloud, approfondiranno l’architettura e dimostreranno come sfruttare a pieno le potenzialità di VMware Cloud ™ on AWS.
Amazon Elastic Container Service (Amazon ECS) è un servizio di gestione dei container altamente scalabile, che semplifica la gestione dei contenitori Docker attraverso un layer di orchestrazione per il controllo del deployment e del relativo lifecycle. In questa sessione presenteremo le principali caratteristiche del servizio, le architetture di riferimento per i differenti carichi di lavoro e i semplici passi necessari per poter velocemente migrare uno o più dei tuo container.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
2. Today’s agenda
• Why managed database services?
• A non-relational managed database
• A relational managed database
• A managed in-memory cache
• A managed data warehouse
• What to do next
4. Options for running your database
• Self-Managed—You are responsible for the hardware, OS, security, updates, backups, replication, etc., but have full control over it.
• EC2 Instances—You only need to focus on database-level updates, patches, replication, backups, etc., and don’t have to worry about the hardware or the OS installation.
• Fully Managed—You get features such as backup and replication as a packaged service and don’t have to bother with patching and updates.
6. A managed service for each major DB type
• In-Memory Key-Value Store: Amazon ElastiCache
• Data Warehouse: Amazon Redshift
• SQL Database Engines: Amazon RDS
• Document and Key-Value Store: Amazon DynamoDB
12. Amazon Aurora: Fast, available, and MySQL-compatible
[Diagram: SQL, transaction, and caching layers over storage replicated across AZ 1, AZ 2, and AZ 3, backed by Amazon S3]
• 5x faster than MySQL on the same hardware
• Sysbench: 100K writes/sec and 500K reads/sec
• Designed for 99.99% availability
• 6-way replicated storage across 3 AZs
• Scale to 64 TB and 15 read replicas
13. Amazon RDS is simple and fast to scale
• Database instance types offer a range of CPU and memory selections
• Scale up or down among instance types on demand
• Database storage is scalable on demand
14. Amazon RDS offers fast, predictable storage
• General Purpose (SSD) for most workloads
• Provisioned IOPS (SSD) for OLTP workloads up to 30,000 IOPS
• Magnetic for small workloads with infrequent access
15. High availability with Multi-AZ deployments
Enterprise-grade fault tolerance solution for production databases
16. Choose Read Replicas for greater scalability
• Bring data close to your customers’ applications in different regions
• Relieve pressure on your master node, which supports both reads and writes
• Promote a read replica to a master for faster recovery in the event of disaster
17. Choose cross-region replication for enhanced data locality and even more ease of migration
• Even faster recovery in the event of disaster
• Bring data close to your customers
• Promote to a master for easy migration
18. Choose cross-region snapshot copy for even greater durability and ease of migration
• Copy a database snapshot to a different AWS region
• Warm standby for disaster recovery
• Base for migration to a different region
19. How Amazon RDS backups work
Automated backups
• Restore your database to a point in time
• Enabled by default
• Choose a retention period, up to 35 days
Manual snapshots
• Build a new database instance from a snapshot when needed
• Initiated by you
• Persist until you delete them
Both are stored in Amazon S3
20. You pay for the resources that you use
Monthly bill = N (number of nodes) × duration for which the nodes were used (price depends on type of node) + storage consumed in GB (price depends on type of storage)
Free tier (for first 12 months)
• 750 micro DB instance hours
• 20 GB of DB storage
• 20 GB for backups
• 10 million I/O operations
Further details at http://aws.amazon.com/rds/pricing/
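The billing formula above can be sketched as a short calculation. The hourly and per-GB prices below are illustrative placeholders, not current AWS rates:

```python
# Sketch of the slide's RDS billing formula: instance-hours plus storage.
# The prices used here are hypothetical, for illustration only.
def rds_monthly_bill(num_instances, hours_used, price_per_hour,
                     storage_gb, price_per_gb_month):
    """Monthly bill = N x hours x hourly price + GB x storage price."""
    return (num_instances * hours_used * price_per_hour
            + storage_gb * price_per_gb_month)

# Example: one instance running all month (720 h) at a hypothetical
# $0.017/hour, with 20 GB of storage at a hypothetical $0.115/GB-month.
bill = rds_monthly_bill(1, 720, 0.017, 20, 0.115)
print(f"${bill:.2f}")  # $14.54
```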
24. Amazon DynamoDB: a managed document and key-value store
• Simple and fast to deploy
• Simple and fast to scale—to millions of IOPS
• Data is automatically replicated
• Fast, predictable performance, backed by SSD storage
• Secondary indexes offer fast lookups
• No cost to get started; pay only for what you consume
25. Popular use cases
• Ad Tech: ad serving, retargeting, ID lookup, user profile management, session tracking, RTB
• IoT: tracking state, metadata, and readings from millions of devices; real-time notifications
• Gaming: recording game details, leaderboards, session information, usage history, and logs
• Mobile & Web: storing user profiles, session details, personalization settings, entity-specific metadata
26. Automatic replication for rock-solid durability and availability
Writes
• Replicated continuously to 3 AZs
• Persisted to disk (custom SSD)
Reads
• Strongly or eventually consistent
• No latency trade-off
27. Amazon DynamoDB is a schemaless database
[Diagram: a table contains items; each item holds attributes (name-value pairs)]
28. Each item must include a key
• Hash key (DynamoDB maintains an unordered index)
29. Each item must include a key
• Hash key
• Range key (DynamoDB maintains a sorted index)
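The hash/range key model above can be illustrated with a tiny in-memory sketch (plain Python, not the DynamoDB API): items are grouped by hash key, and within each hash key the range keys are kept sorted, which is what makes range queries cheap.

```python
from collections import defaultdict
import bisect

# Toy model of DynamoDB's key scheme: hash_key -> sorted (range_key, item) list
table = defaultdict(list)

def put_item(hash_key, range_key, item):
    # insort keeps each hash key's items ordered by range key
    bisect.insort(table[hash_key], (range_key, item))

def query(hash_key, lo, hi):
    """Return items for one hash key whose range key falls in [lo, hi]."""
    return [item for rk, item in table[hash_key] if lo <= rk <= hi]

put_item("user#1", 20240103, {"event": "login"})
put_item("user#1", 20240101, {"event": "signup"})
put_item("user#2", 20240102, {"event": "login"})
print(query("user#1", 20240101, 20240131))
# [{'event': 'signup'}, {'event': 'login'}] -- sorted by range key
```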
31. Global secondary indexes = “pivot charts” for your table
• Choose which attributes to project (if any)
32. Define the desired performance using provisioned throughput
• Read capacity units
• Write capacity units
• 1 RPS sustained is more than 2.5 million requests in a month
33. DynamoDB: What are capacity units?
• One write capacity unit = one write per second, up to 1 KB
• One read capacity unit = one strongly consistent read per second up to 4 KB, or two eventually consistent reads per second
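A minimal sketch of the capacity-unit arithmetic defined above, assuming (as DynamoDB does) that item sizes are rounded up to the 1 KB write / 4 KB read increments:

```python
import math

def write_capacity_units(writes_per_sec, item_kb):
    # Each write consumes one unit per 1 KB (rounded up) of item size.
    return writes_per_sec * math.ceil(item_kb / 1.0)

def read_capacity_units(reads_per_sec, item_kb, eventually_consistent=False):
    # Each strongly consistent read consumes one unit per 4 KB (rounded up).
    units = reads_per_sec * math.ceil(item_kb / 4.0)
    # Two eventually consistent reads cost one unit.
    return math.ceil(units / 2) if eventually_consistent else units

# 100 strongly consistent reads/sec of 6 KB items -> 2 units each = 200 RCU
print(read_capacity_units(100, 6))        # 200
print(read_capacity_units(100, 6, True))  # 100
print(write_capacity_units(50, 2.5))      # 50 * ceil(2.5 KB) = 150
```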
34. Simple app architecture with Amazon DynamoDB
Clients → Elastic Load Balancing → Amazon EC2 app instances (business logic) → DynamoDB
35. How DynamoDB billing works
Monthly bill = storage consumed in GB (plus 100 bytes per item) + charge for write capacity units per hour + charge for read capacity units per hour
≈ 5 GB × $0.25 + 21 × 720 hrs × $0.0065/10 + 35 × 720 hrs × $0.0065/50
≈ $14.36
Assumes the database is accessed only from within its AWS region
Further details at http://aws.amazon.com/dynamodb/pricing/
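The slide's worked example can be reproduced directly; the rates below ($0.25/GB-month of storage, $0.0065/hr per 10 write units, $0.0065/hr per 50 read units) are the ones the slide lists, not current pricing, and the per-item 100-byte overhead is ignored here:

```python
# Reproducing the slide's example bill from its listed rates.
def dynamodb_monthly_bill(storage_gb, wcu, rcu, hours=720):
    storage = storage_gb * 0.25            # $0.25 per GB-month
    writes = wcu * hours * 0.0065 / 10     # $0.0065/hr per 10 write units
    reads = rcu * hours * 0.0065 / 50      # $0.0065/hr per 50 read units
    return storage + writes + reads

# 5 GB, 21 write units, 35 read units for a 720-hour month
print(f"${dynamodb_monthly_bill(5, 21, 35):.2f}")  # ≈ $14.35 as computed here
```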
36. How DynamoDB billing works (with free tier)
Monthly bill = storage consumed in GB (plus 100 bytes per item) + charge for write capacity units per hour + charge for read capacity units per hour
≈ (5 − 25) GB × $0.25 + (21 − 25) × 720 hrs × $0.0065/10 + (35 − 25) × 720 hrs × $0.0065/50
(each term floors at zero once the free tier is subtracted)
Free tier (for first 12 months)
• 25 GB storage
• 25 units write capacity
• 25 units read capacity
Assumes the database is accessed only from within its AWS region
Further details at http://aws.amazon.com/dynamodb/pricing/
37. How DynamoDB billing works (with free tier)
Monthly bill ≈ 0 (storage) + 0 (writes) + 10 × 720 hrs × $0.0065/50
≈ $0.94
Only the 10 read capacity units above the free tier are billed
Assumes the database is accessed only from within its AWS region
Further details at http://aws.amazon.com/dynamodb/pricing/
39. NoSQL vs. SQL for a new app: how to choose?
SQL
• Strong schema; complex relationships, transactions, and joins
• Scaling is difficult
• Focus on consistency over scale and availability
NoSQL
• Schema-less; easy reads and writes; simple data model
• Scaling is easy
• Focus on performance and availability at any scale
41. Amazon Redshift: a lot faster, a lot cheaper, a whole lot simpler
• Relational data warehouse
• Massively parallel; petabyte scale
• Fully managed
• HDD and SSD platforms
• $1,000/TB/year; starts at $0.25/hour
42. Who uses Amazon Redshift?
Traditional enterprise DW
• Reduce costs by extending DW rather than adding HW
• Migrate completely from existing DW systems
• Respond faster to business; provision in minutes
Companies with big data
• Improve performance by an order of magnitude
• Make more data available for analysis
• Access business data via standard reporting tools
SaaS companies
• Add analytic functionality to applications
• Scale DW capacity as demand grows
• Reduce HW and SW costs by an order of magnitude
43. Amazon Redshift architecture
Leader node (JDBC/ODBC endpoint)
• Simple SQL endpoint
• Stores metadata
• Optimizes query plan
• Coordinates query execution
Compute nodes (interconnected over 10 GigE HPC networking)
• Local columnar storage
• Parallel/distributed execution of all queries, loads, backups, restores, resizes
• Ingestion, backup, and restore flow to and from Amazon S3
Start at just $0.25/hour, grow to 2 PB (compressed)
• DC1: SSD; scale 160 GB–326 TB
• DS2: HDD; scale 2 TB–2 PB
44. Amazon Redshift dramatically reduces I/O
• With row storage, you do unnecessary I/O
• To get the total amount, you have to read everything
ID   Age  State  Amount
123  20   CA     500
345  25   WA     250
678  40   FL     125
957  37   WA     375
Techniques: column storage, data compression, zone maps, direct-attached storage
45. Amazon Redshift dramatically reduces I/O
• With column storage, you only read the data you need
ID   Age  State  Amount
123  20   CA     500
345  25   WA     250
678  40   FL     125
957  37   WA     375
Techniques: column storage, data compression, zone maps, direct-attached storage
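The row-versus-column contrast on slides 44 and 45 can be sketched with the same four-row table, counting how many fields each layout touches to sum the Amount column:

```python
# Toy illustration of the row-vs-column I/O difference, using the
# slide's four-row table (not a real storage engine).
rows = [
    {"id": 123, "age": 20, "state": "CA", "amount": 500},
    {"id": 345, "age": 25, "state": "WA", "amount": 250},
    {"id": 678, "age": 40, "state": "FL", "amount": 125},
    {"id": 957, "age": 37, "state": "WA", "amount": 375},
]

# Row storage: summing Amount still touches every field of every row
# (16 fields read to use 4 values).
total_row = sum(r["amount"] for r in rows)

# Column storage: the same query scans only the Amount column
# (exactly the 4 values it needs).
amount_column = [r["amount"] for r in rows]
total_col = sum(amount_column)

print(total_row, total_col)  # 1250 1250 -- same answer, far less I/O
```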
47. Amazon Redshift dramatically reduces I/O
Zone maps:
• Track the minimum and maximum value for each block
• Skip over blocks that don’t contain relevant data
[Diagram: blocks of sorted values, each annotated with its min/max, e.g. min 10 / max 324, min 375 / max 623, min 637 / max 959]
Techniques: column storage, data compression, zone maps, direct-attached storage
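The zone-map idea above can be sketched in a few lines, using the slide's example block boundaries: keep a (min, max) pair per block and skip any block that cannot contain the value being searched for.

```python
# Sketch of a zone map over sorted blocks (values from the slide's diagram).
blocks = [
    [10, 13, 14, 26, 100, 245, 324],
    [375, 393, 417, 512, 549, 623],
    [637, 712, 809, 834, 921, 959],
]
# Blocks are sorted, so (first, last) gives each block's (min, max).
zone_map = [(b[0], b[-1]) for b in blocks]

def find(value):
    """Return (found, blocks_scanned); skip blocks via the zone map."""
    scanned = 0
    for (lo, hi), block in zip(zone_map, blocks):
        if lo <= value <= hi:   # only scan blocks that may contain the value
            scanned += 1
            if value in block:
                return True, scanned
    return False, scanned

print(find(512))   # (True, 1): two of the three blocks were skipped
```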
48. Amazon Redshift dramatically reduces I/O
Direct-attached storage (DW.HS1.8XL):
• > 2 GB/sec scan rate
• Optimized for data processing
• High disk density
DW.HS1.XL:
Techniques: column storage, data compression, zone maps, direct-attached storage
49. Fully managed, continuous/incremental backups
• Multiple copies within cluster
• Continuous and incremental backups to Amazon S3
• Continuous and incremental backups across regions
• Streaming restore
50. Amazon Redshift offers rock-solid fault tolerance
Tolerates:
• Disk failures
• Node failures
• Network failures
• AZ/region level disasters
51. You pay for what you use
Monthly bill = N (number of nodes) × duration for which the nodes were used (price depends on type of node)
Free tier (2-month free trial)
• 750 DC1.Large hours per month
Further details at https://aws.amazon.com/redshift/pricing/
52. Redshift has a large ecosystem
• Business Intelligence
• Data Integration
• Systems Integrators
56. Popular use cases
• Caching layer for performance or cost optimization of an underlying database
• Storage of ephemeral key-value data
• High-performance application patterns such as leaderboards (for gaming users), session management, event counters, in-memory lists
60. How ElastiCache billing works
Monthly bill = N (number of nodes) × duration for which the nodes were used (price depends on type of node)
Free tier (for first 12 months)
• 750 micro cache node hours
Further details at http://aws.amazon.com/elasticache/pricing/