For anyone building a cloud service, it is essential to know how to design for scalability so the service can absorb future workload growth. In this session, we introduce how to design a scalable cloud service, including an introduction to the relevant AWS services and best practices.
1. The document discusses building web applications on AWS, highlighting its benefits like on-demand access without upfront costs, low costs, global reach, and automatic scaling.
2. It provides examples of companies from startups to enterprises using AWS for a variety of applications beyond just web like mobile, analytics, backup/DR, and even NASA's Mars Rover.
3. The key aspects of designing for AWS are availability, automation, latency, and scale through services like auto-scaling, load balancing, and scalable data stores.
Dive deep into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
This document provides an overview of scalable architecture strategies on AWS. It discusses:
1. Scaling the infrastructure seamlessly by adding more resources as needed to support growth in users and traffic, without performance drops or practical limits.
2. How Sanlih E-Television used AWS to support its online strategy and estimated 30% savings over other cloud providers due to AWS's stability, competitive pricing, and ability to integrate internet and mobile services.
3. Different strategies for scaling architectures on AWS including separating databases from application servers, using caching, offloading static content to S3, and implementing auto-scaling and load balancing.
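The caching strategy in item 3 is most often the cache-aside (lazy loading) pattern. A minimal sketch follows, using a plain dict with expiry timestamps as a stand-in for ElastiCache; the class and parameter names are illustrative, not from the session.

```python
import time

class CacheAsideStore:
    """Cache-aside (lazy loading): check the cache first, fall back to
    the database on a miss, then populate the cache for next time.
    A dict with expiry timestamps stands in for ElastiCache here."""

    def __init__(self, db_lookup, ttl_seconds=300):
        self.db_lookup = db_lookup   # slow, authoritative source
        self.ttl = ttl_seconds
        self.cache = {}              # key -> (value, expires_at)
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self.cache.get(key)
        if entry is not None and entry[1] > time.time():
            self.hits += 1           # fresh cached value: skip the DB
            return entry[0]
        self.misses += 1
        value = self.db_lookup(key)  # miss or expired: hit the database
        self.cache[key] = (value, time.time() + self.ttl)
        return value
```

The trade-off is that the first read of any key still pays the database round trip; only repeat reads within the TTL are accelerated.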
An overview of the Amazon ElastiCache managed service, with examples of how it can increase performance, lower costs, and augment other databases and database services to make applications faster, simpler, and cheaper to run.
Building and scaling your containerized microservices on Amazon ECS - Amazon Web Services
This document provides an overview of using Amazon EC2 Container Service (ECS) to build and scale containerized microservices. It discusses microservices concepts, introduces ECS as a container management system, outlines some ECS best practices around version control, load balancing, resource usage, and alerts. It also describes how to use the AWS CLI to automate container lifecycles on ECS including creating clusters, registering tasks, deploying services, scaling, and deleting resources.
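The container lifecycle the abstract describes (create a cluster, register a task definition, deploy a service, scale it) can be sketched with boto3 rather than the AWS CLI the session uses. All resource names and container settings below are illustrative assumptions, and the `ecs` argument is any object exposing the boto3 ECS client methods, so a stub can be substituted for testing:

```python
def deploy_microservice(ecs, cluster, family, image, service, desired_count=2):
    """Sketch of the ECS lifecycle: create a cluster, register a task
    definition, create a service, then scale it. `ecs` is a boto3 ECS
    client (or a test stub with the same methods)."""
    ecs.create_cluster(clusterName=cluster)
    task_def = ecs.register_task_definition(
        family=family,
        containerDefinitions=[{
            "name": family,
            "image": image,        # e.g. a tagged image from a registry
            "memory": 256,
            "essential": True,
        }],
    )
    ecs.create_service(
        cluster=cluster,
        serviceName=service,
        taskDefinition=family,
        desiredCount=1,
    )
    # Scaling is just a desired-count update on the running service.
    ecs.update_service(cluster=cluster, service=service,
                       desiredCount=desired_count)
    return task_def
```

In production this would run with `boto3.client("ecs")` and real IAM credentials; the point of the sketch is the order of operations, which mirrors the CLI-based automation the session walks through.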
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Deep Dive on Elastic File System - February 2017 AWS Online Tech Talks - Amazon Web Services
Organizations face significant challenges moving their applications to the cloud when they require a standard file system interface for accessing their cloud data. In this technical session, we will explore the world’s first cloud-scale file system and its targeted use cases. Attendees will learn about the Amazon Elastic File System (EFS) features and benefits, how to identify applications that are appropriate for use with Amazon EFS, and details about its performance and security models. We will highlight and demonstrate how to deploy Amazon EFS in one of our most common use cases and will share tips for success throughout.
Learning Objectives:
• Recognize why and when to use Amazon EFS
• Understand key technical/security concepts
• Learn how to leverage EFS’s performance
• See a demo of EFS in action
• Review EFS’s economics
Database Migration – Simple, Cross-Engine and Cross-Platform Migration - Amazon Web Services
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases.
Managing Data with Volume, Velocity, and Variety with Amazon ElastiCache for Redis - Amazon Web Services
Learn how to use Amazon ElastiCache with AWS IoT and AWS Lambda to create serverless solutions that let you rapidly make use of large and multisource data sets.
AWS re:Invent 2016: Introduction to Managed Database Services on AWS (DAT307) - Amazon Web Services
Which database is best suited for your use case? Should you choose a relational database or NoSQL or a data warehouse for your workload? Would a managed service like Amazon RDS, Amazon DynamoDB, or Amazon Redshift work better for you, or would it be better to run your own database on Amazon EC2? FanDuel has been running its fantasy sports service on Amazon Web Services (AWS) since 2012. You will learn best practices and insights from FanDuel’s successful migrations from self-managed databases on EC2 to fully-managed database services.
AWS 201 - A Walk through the AWS Cloud: App Hosting on AWS - Games, Apps and ... - Amazon Web Services
The document provides an overview of app hosting on AWS. It discusses key principles such as focusing on your business rather than infrastructure management, automating and scaling infrastructure, designing for failure, loosely coupling services, and iterating based on data. Specific AWS services are highlighted like EC2, EBS, ELB, RDS, DynamoDB, ElastiCache, Elastic Beanstalk, CloudFormation, Route 53, SQS, SWF, and EMR. Case studies are presented on how companies like NASA, Gumi, and Media Molecule use these AWS services.
Amazon RDS allows customers to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS provides you six familiar database engines to choose from, including Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session we will take a closer look at the capabilities of RDS and all the different options available. We will do a deep dive into how RDS works and how Aurora differs from the rest of the engines.
Amazon Web Services provides startups with the low cost, easy to use infrastructure needed to scale and grow any size business. Attend this session and learn how to migrate your startup to AWS and make the most out of the platform.
Today, it is critical that IT teams are able to easily, consistently deploy to production. Running Docker containers on Amazon Web Services makes it possible to engineer a compliant and DevOps-friendly environment from the ground up. Spring Venture Group successfully migrated to AWS with Docker containers and leveraged Logicworks to migrate to AWS and automate infrastructure build-out and deployment. Join our webinar to learn how Spring Venture Group, an innovative insurance brokerage, reduced risk and improved deployment velocity with Logicworks, AWS, and Docker.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
Accelerate your Business with SAP on AWS - AWS Summit Cape Town 2017 - Amazon Web Services
Michael Needham, a senior manager at AWS, presented on accelerating businesses with SAP on AWS. He discussed how AWS provides scalable, cost-effective infrastructure for SAP workloads. Rob Enslin, president of SAP Global Operations, praised how AWS provisioned a 14 TB HANA system in an "unbelievable" time, helping deliver simplicity. AWS offers a variety of compute instances certified for SAP and manages all infrastructure, allowing customers to focus on innovation.
(DAT204) NoSQL? No Worries: Build Scalable Apps on AWS NoSQL Services - Amazon Web Services
In this session, we discuss the benefits of NoSQL databases and take a tour of the main NoSQL services offered by AWS—Amazon DynamoDB and Amazon ElastiCache. Then, we hear from two leading customers, Expedia and Mapbox, about their use cases and architectural challenges, and how they addressed them using AWS NoSQL services, including design patterns and best practices. You will walk out of this session having a better understanding of NoSQL and its powerful capabilities, ready to tackle your database challenges with confidence.
It’s been an exciting year for Amazon Aurora, the database with MySQL-compatible and PostgreSQL-compatible database engines. Amazon Aurora combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. In this deep dive session, we’ll discuss best practices and explore new features, including high availability options, new integrations with AWS services, and performance management with Amazon RDS Performance Insights.
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS... - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
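The "data bus" the session describes (ingest, store, process, visualize) reduces to staged transformation of a batch. A toy sketch under that framing follows; the stage functions and the click-count example are illustrative assumptions, with in-memory lists standing in for services like Kinesis, S3, and EMR:

```python
from collections import Counter

def run_pipeline(events, stages):
    """Run a batch of events through the data-bus stages in order:
    each stage transforms the batch and hands it to the next."""
    batch = list(events)
    for stage in stages:
        batch = stage(batch)
    return batch

# Illustrative stages for a tiny click-count pipeline.
def ingest(raw):
    return [r.strip() for r in raw if r.strip()]  # drop blank records

def store(rows):
    return rows                                   # e.g. S3 in practice

def process(rows):
    return Counter(rows)                          # e.g. EMR in practice

def visualize(counts):
    return sorted(counts.items(), key=lambda kv: -kv[1])
```

The criteria the session lists (latency, request rate, item size, durability) decide which real service backs each stage; the stage boundaries themselves stay the same.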
This document provides guidance on scaling a web application from 1 user to over 10 million users on AWS. It recommends starting simply with a single EC2 instance and Route 53, then adding redundancy with multiple instances, load balancing, and SQL databases. As users grow over 1,000 techniques like caching, NoSQL, and auto scaling are introduced. Above 500,000 users more services are split out and automated. Reaching over 1 million requires database sharding or federation. The key strategies emphasized are redundancy, automation, splitting services, and leveraging managed AWS services over custom solutions.
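The sharding step mentioned for the million-user mark usually means deterministic routing of a partition key to one of several independent databases. A minimal sketch, with dicts standing in for the database instances and all names illustrative:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Deterministic shard routing: hash the partition key and take it
    modulo the shard count. MD5 is used only for even distribution,
    not for security."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

class ShardedUserStore:
    """Toy illustration of horizontal sharding: each shard is a dict
    standing in for an independent database instance."""

    def __init__(self, num_shards=4):
        self.shards = [{} for _ in range(num_shards)]

    def put(self, user_id, record):
        self.shards[shard_for(user_id, len(self.shards))][user_id] = record

    def get(self, user_id):
        return self.shards[shard_for(user_id, len(self.shards))].get(user_id)
```

Note that naive modulo routing reshuffles most keys when the shard count changes, which is why real deployments reach for consistent hashing or managed partitioning (as DynamoDB provides) instead.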
AWS re:Invent 2016: Event Handling at Scale: Designing an Auditable Ingestion... - Amazon Web Services
How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates Amazon API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
What’s New in Amazon RDS for Open-Source and Commercial Databases - Amazon Web Services
In the past year, Amazon RDS has continued to expand functionality, scalability, availability and ease of use for all supported database engines: PostgreSQL, MySQL, MariaDB, Oracle and Microsoft SQL Server. We’ll take a close look at RDS use cases and new capabilities, splitting the time between open-source and commercial database engines.
(HLS402) Getting into Your Genes: The Definitive Guide to Using Amazon EMR, A... - Amazon Web Services
The document describes Thermo Fisher Scientific's iterative journey to build a scalable cloud platform using Amazon Web Services for storing and analyzing large scientific datasets. They started with DynamoDB for storage but added S3 to store larger objects and ElastiCache for faster queries. They also implemented Amazon EMR for real-time analysis, improving performance over 10x compared to desktop tools. The platform now enables analyzing millions of records within minutes to provide insights for scientific applications.
This is an introduction to Amazon Redshift, covering the essentials you need to deploy your data warehouse in the cloud so that you can achieve faster analytics and save costs.
大數據運算媒體業案例分享 (Big Data Compute Case Sharing for the Media Industry) - Amazon Web Services
This document discusses big data and analytics on AWS. It defines big data as large, diverse, and growing volumes of data that are difficult to capture, curate, manage and process with traditional database systems. It notes that the majority of data is now unstructured and that data volumes are growing exponentially. The document outlines the AWS big data platform, which supports batch processing, real-time analytics and machine learning. It provides recommendations on which AWS data stores and analytics services to use depending on data type, access patterns, volume and other attributes.
This introductory seminar explains Cloud Computing and Amazon Web Services (AWS) in great detail.
The presenter, Simone Brunozzi (@simon), is an AWS Technology Evangelist.
Recommended for business/technical audiences.
The document discusses several Amazon Web Services security tools including Amazon Inspector, AWS Config, and AWS Trusted Advisor. Amazon Inspector is a vulnerability assessment service that automates security scans for EC2 instances. AWS Config allows users to automate the evaluation of AWS resource configurations against security best practices. AWS Trusted Advisor monitors AWS infrastructure and identifies security gaps and cost optimization opportunities based on known best practices.
Digital Transformation through Product and Service Innovation - Amazon Web Services
Today large enterprises are under pressure to innovate faster than ever, drive down costs, and deliver increased value to their organisations through more responsive and flexible IT. Organisations that are shifting to a data-driven, insight-powered culture will be in the best position to defend, differentiate and disrupt in their respective industries, potentially expanding their business with new products and revenue sources. Learn how some leading companies are leveraging data – from IoT sources, social sources, enterprise, partners, competitors, and consumers – to unlock new sources of insight.
Derek Ewell, Partner Solutions Architect, Amazon Web Services, ASEAN
Senthil Ramani, Managing Director, APAC Resources, Digital Business Lead, Accenture
This document discusses Amazon Web Services (AWS) Internet of Things (IoT). It describes key AWS IoT services like the message broker, rules engine, device shadows, and registry. The message broker securely connects devices to AWS using protocols like MQTT. The rules engine routes messages between devices and AWS services. Device shadows store device state information. The registry stores device identity and attribute information. AWS IoT aims to simplify and accelerate IoT development by connecting devices to AWS services and providing security and management features.
This document provides an overview of Amazon Redshift data warehousing capabilities. It discusses how Redshift is fast, inexpensive, fully managed, secure, and innovates quickly. It describes how to get started with Redshift, provision clusters, model data, load and query data, and monitor performance. It also provides an example of how MakerBot uses Redshift as part of its "Dream Stack" along with other AWS services for analytics.
This document summarizes Yaşarcan Yılmaz's background and experience in big data and machine learning engineering. It then describes Insider, the company he works for, which uses personalization, predictive segmentation, and real-time technologies to boost loyalty and growth for its customers. Insider collects over 5 TB of data per month from 600 million unique users, and describes some of its predictive analysis and segmentation capabilities such as predicting customer lifetime value, purchase likelihood, and interest-based customer segments. It also outlines Insider's fast and big data architecture using AWS services like Kinesis, Lambda, S3, EMR and Spark.
Amazon CloudFront Best Practices and Anti-patterns - Abhishek Tiwari
This document outlines best practices and anti-patterns for using Amazon CloudFront. It begins with an overview of CloudFront and its key capabilities as a content delivery network. It then discusses important CloudFront concepts and provides details on best practices for caching, object invalidation, versioning, compression, expiration settings, domain sharding, and origin server configurations. Anti-patterns around expensive and unmanageable cache invalidation approaches are also presented. The document aims to help users optimize CloudFront performance and manageability.
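One of the versioning practices the document recommends, as an alternative to expensive cache invalidation, is embedding a content hash in each object key so a changed file gets a new URL and stale CDN copies simply age out. A sketch under that assumption (the naming scheme itself is illustrative, not taken from the deck):

```python
import hashlib

def versioned_key(path: str, content: bytes) -> str:
    """Content-hash versioning for CDN objects: embed a short digest in
    the object key so changed content gets a fresh URL, making CloudFront
    invalidation unnecessary for routine deployments."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    if "." in path:
        stem, ext = path.rsplit(".", 1)     # keep the extension last
        return f"{stem}.{digest}.{ext}"
    return f"{path}.{digest}"
```

Because the key is a pure function of the content, unchanged files keep their URLs (and stay cached), while any edit produces a new key that the CDN fetches fresh from the origin.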
AWS Webcast - Amazon Web Services for Development and Test - Amazon Web Services
An easy way to get started using Amazon Web Services is by deploying development and test workloads. This webinar outlines some of the challenges that customers face with development and test workloads and how AWS can help address those challenges. In addition, we will provide an overview of AWS and highlight some of the key services that you can use for development and test, as well as showing a demonstration.
This document discusses Amazon Web Services and provides information about Kien Nguyen, an AWS Cloud leader at SETA International Vietnam. It covers Amazon S3 for simple storage and Amazon CloudFront for content delivery. The document notes that AWS currently has 13 regions and 35 availability zones, and will add 4 more regions and 9 more availability zones next year. It also provides links to join the AWS Vietnam meetup and Slack groups to learn more about architecting for high availability.
Learn the fundamentals of Amazon DynamoDB and see the DynamoDB console first-hand as we walk through a demo of building a serverless web application using this high-performance key-value and JSON document store.
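As a hedged illustration of the key-value modeling the session above covers, the snippet below builds a DynamoDB item with a composite partition/sort key. The 'USER#'/'ORDER#' prefixes and field names are an assumed single-table convention for this example, not part of the DynamoDB API:

```python
def make_order_item(user_id, order_ts, total):
    """Build a DynamoDB item using a composite primary key:
    a partition key that groups all of a user's rows together and
    a sort key that keeps the user's orders ordered by timestamp."""
    return {
        "PK": f"USER#{user_id}",
        "SK": f"ORDER#{order_ts}",
        "total": total,
    }

item = make_order_item("42", "2016-06-01T12:00:00Z", 19.99)
# With boto3 this dict could be passed to Table.put_item(Item=item);
# a Query on PK = 'USER#42' then returns that user's orders in SK order.
print(item["PK"], item["SK"])
```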
This document discusses testing frameworks on AWS cloud. It covers load testing using custom scripts to simulate thousands of users, vulnerability testing using the BlazeClan VAS tool, availability testing using Chaos Monkey to randomly terminate instances, and the features of the BlazeClan solution including pre-built scripts, quick start options, and reporting and analytics capabilities. The solution aims to help customers test applications on AWS cloud faster and more efficiently.
The Connected Home: Managing and Innovating with Offline Devices - Amazon Web Services
AWS Internet of Things (IoT) is a managed cloud platform that can support billions of devices and trillions of messages, and can process and route those messages to Amazon Web Services endpoints and to other devices reliably and securely. In this session we look at patterns and architectures for developing connected applications using Amazon Web Services IoT platform. We will dive into demo applications that tie together physical IoT devices, web browsers, identity providers, and mobile devices to create smart, connected applications using Amazon Web Services.
Markku Lepisto, Principal Technology Evangelist, Amazon Web Services, APAC
AWS offers you the ability to add additional layers of security to your data at rest in the cloud, providing access control as well scalable and efficient encryption features. Flexible key management options allow you to choose whether to have AWS manage the encryption keys or to keep complete control over the keys yourself. In this session, you will learn how to secure data when using AWS services. We will discuss data encryption using Key Management Service, S3 access controls, edge and host access security, and database platform security features.
Intro to Amazon WorkSpaces - AWS June 2016 Webinar Series - Amazon Web Services
IT organizations are facing increasing pressure to secure corporate data, be more agile and responsive, and keep end users productive – all while saving money. Traditional on-premises VDI solutions have proved to be expensive and complex, don’t scale well, and have not helped organizations be more agile. This webinar will explain how Amazon WorkSpaces can help you support a diverse and dynamic workforce, quickly achieve global scale, and improve your overall security position by keeping sensitive data off end user devices. Come and learn how you can offer your users a desktop experience they will love while also saving money and freeing up IT resources to focus on strategic projects.
Learning Objectives: • Understand how Amazon WorkSpaces, a managed desktop computing service, helps customers be more agile • Get an overview of how Amazon WorkSpaces supports a modern workforce while improving security • Learn about how customers are using Amazon WorkSpaces today
Many applications are network I/O bound, including common database-based applications and service-based architectures. But operating systems and applications are often untuned to deliver high performance. This session uncovers hidden issues that lead to low network performance, and shows you how to overcome them to obtain the best network performance possible.
Configuration Management with AWS OpsWorks by Amir Golan, Senior Product Man... - Amazon Web Services
This document discusses AWS OpsWorks, a service that allows users to model and manage their applications and infrastructure on AWS. It provides capabilities like configuring instances using Chef, managing the lifecycle of instances through events like setup, configure and deploy, controlling access management with IAM, monitoring resource health with CloudWatch, and analyzing logs. OpsWorks can be integrated with AWS CodePipeline for continuous delivery of applications.
Netflix on Cloud - combined slides for Dev and Ops - Adrian Cockcroft
This document contains slides from a presentation given by Adrian Cockcroft on Netflix's use of cloud computing on Amazon Web Services (AWS). The summary includes:
1) Netflix moved most of its infrastructure to AWS to leverage AWS's scale and features rather than building its own datacenters, as capacity growth was unpredictable and datacenters were inflexible.
2) Netflix uses many AWS services including EC2, S3, EBS, EMR and more. It deployed a large movie encoding farm on EC2, stores content on S3, uses EMR/Hadoop for log analysis, and a CDN for content delivery.
3) Netflix has learned that cloud tools don't always scale for large
AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on Amazon Web Services. Developers can simply upload their application code and the service automatically handles all the details such as resource provisioning, load balancing, auto-scaling, and monitoring. This session shows you how to connect your Git repository with Amazon Web Services, deploy your code to AWS Elastic Beanstalk, easily enable or disable application functionality, and perform zero-downtime deployments through interactive demos and code samples.
Timothee Cruse, Solutions Architect, Amazon Web Services, ASEAN
Continuous Deployment Practices, with Production, Test and Development Enviro... - Amazon Web Services
With AWS companies now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100% API driven enables businesses to use lean methodologies and realize these benefits. This in turn leads to greater success for those who make use of these practices. In this session we'll talk about some key concepts and design patterns for Continuous Deployment and Continuous Integration, two elements of lean development of applications and infrastructures.
Join AWS at this session to understand how to architect an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud.
Speakers:
Andreas Chatzakis, AWS Solutions Architect
Pete Mounce, Senior Developer, JustEat
Scaling the Platform for Your Startup - Startup Talks June 2015 - Amazon Web Services
Join AWS at this session to understand how to architect an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud.
1) The document provides guidance on building a scalable architecture for a startup using AWS services. It outlines an approach from the initial launch through scaling up as the business grows.
2) Key services discussed include EC2, RDS, DynamoDB, S3, CloudFront, ElastiCache, ELB, Auto Scaling and Elastic Beanstalk. The document emphasizes building stateless, scalable components and leveraging managed AWS services.
3) As traffic increases, the architecture scales out individual tiers, adds read replicas, and uses Auto Scaling to dynamically scale the number of instances based on demand. Elastic Beanstalk is also introduced as a way to simplify deploying scalable applications.
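The dynamic scaling mentioned in point 3 can be approximated with the proportional rule behind target-tracking scaling. This is a simplified local model (the bounds and example numbers are invented), not the actual Auto Scaling service logic:

```python
import math

def desired_capacity(current, metric_value, target_value, lo=1, hi=20):
    """Simplified proportional rule behind target-tracking scaling:
    resize the fleet so the per-instance metric (e.g. average CPU)
    moves back toward the target, clamped to the group's min/max."""
    wanted = math.ceil(current * metric_value / target_value)
    return max(lo, min(hi, wanted))

# 4 instances at 80% average CPU with a 50% target -> scale out to 7.
print(desired_capacity(current=4, metric_value=80, target_value=50))
```

Scale-in follows the same formula in reverse: at 20% average CPU the same fleet would shrink toward 2 instances.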
Learn about the patterns and techniques a business should be using in building their infrastructure on Amazon Web Services to be able to handle rapid growth and success in the early days. From leveraging highly scalable AWS services, to architecting best patterns, there are a number of smart choices you can make early on to help you overcome some typical infrastructure issues.
Presenter: Chris Munns,Solutions Architect, Amazon Web Services
This document provides guidance on scaling infrastructure on AWS for handling large numbers of users, from 1 user to over 10 million users. It discusses starting simply with a single EC2 instance and database, then expanding horizontally and vertically by adding more instances, separating tiers, using auto-scaling, and implementing a service-oriented architecture. As the number of users grows from thousands to millions, it recommends techniques like database read replicas, DynamoDB, ElastiCache, SQS/SNS, and database sharding or federation. Monitoring, metrics, and outsourcing management are also emphasized as critical pieces for large-scale applications.
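The ElastiCache technique recommended above is usually implemented as cache-aside. The sketch below uses a plain dict as a stand-in for Redis/Memcached and an invented lookup function in place of the real database call:

```python
import time

cache = {}  # stand-in for ElastiCache (Redis or Memcached)
TTL = 60    # seconds before a cached entry expires

def slow_database_query(user_id):
    # placeholder for the real RDS/DynamoDB lookup
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: try the cache first, fall back to the database
    on a miss, then populate the cache so later reads skip the DB."""
    entry = cache.get(user_id)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit
    value = slow_database_query(user_id)     # cache miss
    cache[user_id] = (value, time.time() + TTL)
    return value

print(get_user(7))  # miss: hits the "database", then caches
print(get_user(7))  # hit: served from the cache
```

The TTL bounds staleness; hot keys are served from memory, which is what takes read pressure off the database tier as user counts grow.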
Scaling on AWS for the First 10 Million Users at Websummit Dublin - Amazon Web Services
Ian Massingham gave a presentation on scaling applications on AWS from initial launch to over 1 million users. He began by discussing foundational AWS services and database options. He then walked through examples of scaling an application from 1 user to over 500,000 users by leveraging services like EC2, RDS, DynamoDB, ElastiCache, S3, CloudFront, and Auto Scaling. Key strategies included separating components across instances, adding redundancy, implementing caching, and leveraging auto scaling to dynamically scale resources based on demand. Massingham concluded by discussing strategies for scaling beyond 500,000 users such as service-oriented architectures and workload distribution across availability zones.
Scaling on AWS for the First 10 Million Users at Websummit Dublin - Ian Massingham
In this talk from the Dublin Websummit 2014 AWS Technical Evangelist Ian Massingham discusses the techniques that AWS customers can use to create highly scalable infrastructure to support the operation of large scale applications on the AWS cloud.
Includes a walk-through of how you can evolve your architecture as your application becomes more popular and you need to scale up your infrastructure to support increased demand.
This document provides an overview of best practices for scaling infrastructure on AWS from 1 user to 10 million users. It discusses starting with a single EC2 instance, then expanding horizontally by adding more instances and vertically by increasing instance sizes. As users grow from 1,000 to 500,000, the document recommends separating databases from web servers, using read replicas, caching with ElastiCache, and auto scaling. From 500,000 to 1 million users, it suggests moving to a service-oriented architecture and leveraging other AWS services. Scaling from 5 to 10 million users may require database sharding or moving some functions to NoSQL databases.
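The read-replica step above requires the application to split its traffic; a minimal router (an assumed pattern, not something RDS provides) might look like this, with the endpoint names as placeholders for real database hosts:

```python
import itertools

class ReplicaRouter:
    """Minimal read/write splitting: writes always go to the primary,
    reads round-robin across the read replicas."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self._reads = itertools.cycle(replicas)

    def endpoint_for(self, sql):
        is_read = sql.lstrip().lower().startswith("select")
        return next(self._reads) if is_read else self.primary

router = ReplicaRouter("primary-db", ["replica-1", "replica-2"])
print(router.endpoint_for("SELECT * FROM users"))    # replica-1
print(router.endpoint_for("SELECT * FROM orders"))   # replica-2
print(router.endpoint_for("INSERT INTO users ..."))  # primary-db
```

Note that replication is asynchronous, so replicas lag the primary slightly; reads that must see their own writes should still be routed to the primary.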
AWS Summit 2014 Melbourne - Breakout 5
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
Presenter: Craig Dickson, Solutions Architect, Amazon Web Services
Scaling up to your first 10 million users - Pop-up Loft Tel Aviv - Amazon Web Services
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
AWS Summit London 2014 | Scaling on AWS for the First 10 Million Users (200) - Amazon Web Services
This mid-level technical session will provide an overview of the techniques that you can use to build high-scalability applications on AWS. Take a journey from 1 user to 10 million users and understand how your application's architecture can evolve and which AWS services can help as you increase the number of users that you serve.
Building and Managing Scalable Applications on AWS: 1 to 500K users - Amazon Web Services
This presentation session from the Cloud Management, Services and Applications Theatre at Cloud Expo Europe 2014 explores the techniques and AWS services that you can use in order to build high-scalability web applications on AWS. It also features a great overview of a high-scalability mobile application built by Myriad Group, an AWS customer, that serves over 41 million users.
Kalibrr is a startup that provides an online talent assessment platform. They launched their minimum viable product (MVP) on AWS in March 2013, seeing user growth from 0 to 25,000 in two months. AWS allowed Kalibrr to scale easily and provided reliability with no downtime. Kalibrr uses EC2 instances to host their web servers, SES for email, S3 for content storage, ELB for load balancing, and Route 53 for DNS management. AWS's scalability, ease of use, and reliability helped Kalibrr launch their MVP successfully and support further growth.
AWS Summit Auckland 2014 | Scaling on AWS for the First 10 Million Users - Amazon Web Services
You have attended AWS training and gathered all the relevant information about AWS services, but how do you now show the value of the AWS Cloud to your business? This session will run through how to build a business case for the cloud, including TCO and cost comparisons.
Why Scale Matters and How the Cloud is Really Different (at scale) - Amazon Web Services
This document discusses how various companies scale their services and applications on AWS to handle large user loads and data volumes. It provides examples of Animoto handling over 1 billion files saved per day and Airbnb having over 9 million guests. It then outlines an approach for scaling an application from 1 user to millions by starting with EC2 instances, adding services like S3, DynamoDB, ElastiCache and auto-scaling groups. The document emphasizes using AWS managed services to avoid re-inventing solutions for tasks like queuing, storage and databases.
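The queuing advice above (use a managed service like SQS instead of re-inventing it) comes down to decoupling producers from workers. The sketch below uses Python's in-process Queue as a stand-in for SQS, with the message fields invented for illustration:

```python
from queue import Queue

jobs = Queue()  # stand-in for an SQS queue

def producer(order_id):
    """Web tier: enqueue the work instead of doing it inline,
    so the request returns quickly."""
    jobs.put({"order_id": order_id})

def worker():
    """Worker tier: pull and process one message. With real SQS
    this would be receive_message / delete_message calls against
    the queue, and workers could be autoscaled on queue depth."""
    msg = jobs.get()
    return f"processed order {msg['order_id']}"

producer(101)
print(worker())  # processed order 101
```

Because the queue absorbs bursts, the web and worker tiers can scale independently, which is the point of the managed-service approach described above.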
AWS Summit Sydney 2014 | Scaling on AWS for the First 10 Million Users - Amazon Web Services
The document discusses how to build cloud-enabled apps that can scale on AWS. It covers scaling vertically by increasing instance sizes, scaling horizontally by adding more instances, using auto-scaling to dynamically scale based on demand, distributing load with an ELB, scaling databases using read replicas and sharding, and taking advantage of managed database services like RDS and DynamoDB for easier administration. It also discusses decomposing applications into small, stateless components and using infrastructure as code for continuous deployment and agility.
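The sharding technique mentioned above hinges on a deterministic key-to-shard mapping; a minimal hash-based sketch follows (the shard count and key format are assumptions for illustration):

```python
import hashlib

def shard_for(user_id, num_shards=4):
    """Deterministically map a key to a shard using a stable hash.
    (Avoid Python's built-in hash() here: it is salted per process,
    so the same key could map to different shards across restarts.)"""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % num_shards

# Every request for the same user lands on the same database shard.
print(shard_for("alice"), shard_for("alice"))  # same shard twice
```

A modulo mapping like this is simple but makes resharding expensive (most keys move when num_shards changes); consistent hashing is the usual refinement when shard counts need to grow.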
AWS Summit Stockholm 2014 – T1 – Architecting highly available applications o... - Amazon Web Services
This session teaches you how to architect scalable, highly available, and secure applications on AWS. In this session, we cover the differences between traditional and cloud-based availability, how to apply AWS availability options to workloads, architectural design patterns for automating fault tolerance, and examples of highly available architectures.
How to build forecasting services leveraging ML and deep learn... algorithms - Amazon Web Services
Forecasting is an important process for a great many companies and is used in a variety of areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to build Big Data applications in Server... mode - Amazon Web Services
The variety and volume of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and Serverless services in particular, let us break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering stand out in the market with Machine Lea... services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, partly through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... - Amazon Web Services
With the traditional approach to IT, it was difficult for many years to adopt DevOps techniques, which until now have often involved manual activities, occasionally causing application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, ensuring greater system reliability and delivering significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Do you want to learn about the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We will cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14th from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering a great experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, seeing how AppSync can address these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Database and VMware Cloud™ on AWS: debunking the myths - Amazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can, however, create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads while accelerating the transformation to the cloud; they dig into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
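A Lambda backend of the kind described above is ultimately just a handler function. The sketch below follows the API Gateway proxy integration's event/response shape; the greeting logic itself is invented for illustration:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler behind API Gateway (proxy
    integration): read a name from the query string and return an
    API Gateway-shaped response dict."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally we can invoke it with a fake event; in production API
# Gateway builds the event and Lambda supplies the context object.
print(handler({"queryStringParameters": {"name": "dev"}}, None))
```

Because there is no server to manage, scaling this backend is a matter of API Gateway and Lambda running more concurrent invocations, which is the core of the serverless stack described above.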
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien... - Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and the container lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
4. A scalable architecture
• Can support growth in users, traffic, data size
• Without practical limits
• Without a drop in performance
• Seamlessly - just by adding more resources
• Efficiently - in terms of cost per user
12. We need a bigger server
• Add larger & faster storage (EBS)
• Use the right instance type
• Easy to change instance sizes
• Not our long term strategy
• Will hit a ceiling eventually
• No fault tolerance
13. Separating web and DB
• More capacity
• Scale each tier individually
• Tailor instance for each tier
– Instance type
– Storage
• Security
– Security groups
– DB in a private VPC subnet
14. But how do I choose which DB technology I need? SQL? NoSQL?
15. Why start with a Relational DB?
• SQL is versatile & feature-rich
• Lots of existing code, tools, knowledge
• Clear patterns to scalability*
• Reality: eventually you will have a polyglot data layer
– There will be workloads where NoSQL is a better fit
– Use the right tool for each workload
* for read-heavy apps
16. Key Insight: Relational Databases are Complex
• Our experience running Amazon.com taught us that relational databases can be a pain to manage and operate with high availability
• Poorly managed relational databases are a leading cause of lost sleep and downtime in the IT world!
• Especially for startups with small teams
19. Offload static content
• Amazon S3: highly available hosting that scales
– Static files (JavaScript, CSS, images)
– User uploads
• S3 URLs – serve directly from S3
• Let the web server focus on dynamic content
20. Amazon CloudFront
• Worldwide network of edge locations
• Cache on the edge
– Reduce latency
– Reduce load on origin servers
– Static and dynamic content
– Even a few seconds of caching for popular content can have a huge impact
• Connection optimizations
– Optimize transfer route
– Reuse connections
– Benefits even non-cacheable content
22. Database caching
• Faster response from RAM
• Reduce load on database
(Diagram: the application server (1) returns the result if the data is in the Amazon ElastiCache cache; (2) if not, it reads from the RDS database and (3) stores the result in the cache)
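The cache-aside flow on this slide can be sketched as follows. The `cache` and `db` objects here are plain in-memory stand-ins for ElastiCache and RDS, and the function name is illustrative, not an AWS API:

```python
# Cache-aside (lazy loading): check the cache first, fall back to the
# database on a miss, then populate the cache for subsequent reads.
# `cache` and `db` are in-memory stand-ins for ElastiCache and RDS.

cache = {}                      # stand-in for an ElastiCache node
db = {"user:42": "Alice"}       # stand-in for an RDS table

def get(key):
    if key in cache:            # 1. if data is in the cache, return it
        return cache[key]
    value = db.get(key)         # 2. if not, read from the database
    if value is not None:
        cache[key] = value      # 3. and store the result in the cache
    return value
```

The first call for a key hits the database; every subsequent call is served from RAM until the cached entry is invalidated or expires.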
25. High Availability
(Diagram: the starting point in a single Availability Zone a - Amazon Route 53 DNS service for www.example.com, Amazon CloudFront, one web server, an RDS DB instance, ElastiCache node 1, and an S3 bucket for static assets)
26. High Availability
(Diagram: a second web server is added in Availability Zone b; the rest of the architecture is unchanged)
27. High Availability
(Diagram: Elastic Load Balancing is added in front of the two web servers to distribute requests across both Availability Zones)
28. Elastic Load Balancing
• Managed Load Balancing Service
• Fault tolerant
• Health Checks
• Distributes traffic across AZs
• Elastic – automatically scales its capacity
29. High Availability
(Diagram: the architecture with Elastic Load Balancing in place - the data layer is still a single point of failure)
30. High Availability
(Diagram: an RDS DB standby is added in Availability Zone b alongside the primary RDS DB instance in Zone a)
31. Data layer HA
(Diagram: the RDS DB instance in Availability Zone a with its standby in Zone b; ElastiCache still has a single node)
32. Data layer HA
(Diagram: ElastiCache node 2 is added in Availability Zone b, so the cache now spans both zones)
33. User sessions
• Problem: Often stored on local disk (not shared)
• Quickfix: ELB session stickiness
• Solution: DynamoDB
(Diagram: behind Elastic Load Balancing, a user logged in on one web server appears logged out when a later request reaches the other web server)
34. Amazon DynamoDB
• Managed document and key-value store
• Simple to launch and scale
• To millions of IOPS
• Both reads and writes
• Consistent, fast performance
• Durable: perfect for storage of session data
https://github.com/aws/aws-dynamodb-session-tomcat
http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-dynamodb-session-handler.html
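The idea behind those session handlers can be sketched as follows: sessions are keyed by ID in a shared key-value store so any web server behind the load balancer can read them. The `table` dict is an in-memory stand-in for a DynamoDB table, and the class and its methods are illustrative, not part of the AWS SDK:

```python
import uuid

# Sessions live in a shared key-value store rather than on a web
# server's local disk, so any server behind the load balancer can
# service any request. `table` stands in for a DynamoDB table.
table = {}

class SessionStore:
    def create(self, data):
        session_id = str(uuid.uuid4())   # unique session ID
        table[session_id] = dict(data)   # write to the shared store
        return session_id

    def load(self, session_id):
        return table.get(session_id)     # readable from any server

store = SessionStore()
```

Because every server reads and writes the same table, session stickiness at the load balancer is no longer required.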
36. Replace guesswork with elastic IT
(Chart: startups pre-AWS had to guess capacity - fixed traditional capacity either falls short of demand, leaving unhappy customers, or exceeds it, wasting money; with the AWS cloud, capacity tracks demand)
37. Scaling the web tier
(Diagram: the highly available baseline - Route 53, Elastic Load Balancing, one web server per Availability Zone, RDS primary and standby, ElastiCache nodes 1 and 2, and an S3 bucket for static assets)
38. Scaling the web tier
(Diagram: two more web servers are launched, one in each Availability Zone)
39. Scaling the web tier
(Diagram: the new web servers are attached to Elastic Load Balancing, which scales its own capacity automatically)
40. Automatic resizing of compute clusters based on demand
• Control: define minimum and maximum instance pool sizes and when scaling and cooldown occur
• Integrated with Amazon CloudWatch: use metrics gathered by CloudWatch to drive scaling
• Instance types: run Auto Scaling for On-Demand and Spot Instances; compatible with VPC
aws autoscaling create-auto-scaling-group
--auto-scaling-group-name MyGroup
--launch-configuration-name MyConfig
--min-size 4
--max-size 200
--availability-zones us-west-2c us-west-2b
(Diagram: Amazon CloudWatch triggers the Auto Scaling policy)
42. Sanlih E-Television Uses AWS to Support Online Strategy
Sanlih E-Television is a nationwide cable TV network delivering some of the most popular TV channels in Taiwan.
“I estimate that we’ve saved 30% by selecting AWS over other cloud service providers.”
— Andy Wang, Chief Information Officer, Sanlih E-Television
• Wanted to take advantage of online and streaming platforms to build on leading position in the market
• Had to ensure IT infrastructure could handle demand and deliver content
• Began running streaming service, website and mobile apps on AWS
• Successfully integrated internet and mobile into channel mix
• Saved time and money due to stability of AWS platform and competitive pricing of services
43. Netflix Delivers Billions of Hours of Content per Month Using AWS
Netflix is one of the world’s leading Internet television networks, with over 57 million members in nearly 50 countries.
“Our success with AWS can be attributed to the scalability, elasticity, and global availability of AWS services.”
— Eva Tse, Director, Big Data Platform, Netflix
• Needed flexible IT infrastructure to experiment, analyze, and grow its business worldwide
• Using AWS to measure its users’ streaming experiences through its analytics platform
• Reports a reduction from weeks to seconds in testing time for new features
• Netflix operates a 10 PB data ‘warehouse’ on Amazon S3 comprised of hundreds of millions of objects
• Designed to deliver billions of hours of content monthly using tens of thousands of instances across three regions
45. What does this mean in practice?
• Only store transient data on local disk
• Needs to persist beyond a single HTTP request?
– Then store it elsewhere
(Diagram: user uploads go to Amazon S3, user sessions to Amazon DynamoDB, and application data to Amazon RDS)
46. Having decomposed into small, loosely coupled, stateless building blocks… you can now scale out with ease.
47. Having decomposed into small, loosely coupled, stateless building blocks… we can also scale back with ease.
48. Take the shortcut
• While this architecture is simple, you still need to deal with:
– Configuration details
– Deploying code to multiple instances
– Maintaining multiple environments (Dev, Test, Prod)
– Maintaining different versions of the application
• Solution: Use AWS Elastic Beanstalk
49. AWS Elastic Beanstalk (EB)
• Easily deploy, monitor, and scale three-tier web applications and services
• Infrastructure provisioned and managed by EB
• You maintain control
• Preconfigured application containers
• Easily customizable
• Support for multiple platforms
51. The AWS platform
(Diagram: AWS services by category, with Your Applications on top of the AWS Global Infrastructure)
• Compute: EC2, ELB, Auto Scaling, Lambda, ECS
• Network: VPC, Direct Connect, Route 53
• Storage: EBS, S3, Glacier, CloudFront
• Database: DynamoDB, RDS, ElastiCache
• Analytics: Kinesis, Data Pipeline, Redshift, EMR
• Mobile: Push Notifications, Mobile Analytics, Cognito, Cognito Sync
• Application: SQS, SWF, AppStream, Elastic Transcoder, SES, CloudSearch, SNS
• Deployment & Management: Elastic Beanstalk, OpsWorks, CloudFormation, CodeDeploy, CodePipeline, CodeCommit
• Security & Administration: CloudWatch, Config, CloudTrail, IAM, Directory Service, KMS
• Enterprise Applications: WorkSpaces, WorkMail, WorkDocs
52. AWS building blocks
• Inherently scalable & highly available (automated): Elastic Load Balancing, Amazon CloudFront, Amazon Route 53, Amazon S3, Amazon SQS, Amazon SES, Amazon CloudSearch, AWS Lambda, …
• Scalable & highly available (configurable): Amazon DynamoDB, Amazon Redshift, Amazon RDS, Amazon ElastiCache, …
• Scalable & highly available with the right architecture: Amazon EC2, Amazon VPC
53. Stay focused as you scale your team
(Chart: with on-premise infrastructure roughly 70% of effort goes to managing the “undifferentiated heavy lifting” and 30% to your business; with AWS cloud-based infrastructure about 30% goes to configuring your cloud assets, leaving 70% of your time to focus on your business)
55. No limit
(Diagram: the fully scaled architecture - Amazon Route 53 DNS service for www.example.com, Elastic Load Balancing, multiple web servers across Availability Zones a and b, an RDS DB instance with a standby and several read replicas, ElastiCache nodes 1-4, DynamoDB, an S3 bucket for static assets, plus CloudSearch, Lambda, SES, and SQS)
56. A quick review
• Keep it simple and stateless
• Make use of managed self-scaling services
• Multi-AZ and AutoScale your EC2 infrastructure
• Use the right DB for each workload
• Cache data at multiple levels
• Simplify operations with deployment tools
So let's avoid this by building a scalable architecture.
A scalable architecture can grow without practical limits simply by adding more resources.
We also care about cost efficiency, so this is something else our architecture should achieve.
Let's start from day 1. Maybe a couple of developers working on their idea.
You will need a server to host your app for testing and sharing with friends and family or some early enthusiasts.
You sign up for AWS, and with a few clicks you have a server.
You set up that single server - an EC2 instance - to test your code and run a private beta.
You install your DB and web server of choice, you upload your code, and you are good to go for now.
Soon after that you are ready to open access to your product for a public beta.
If things go well you will soon need a bigger server, and that is easy on AWS.
You can add more and faster storage with EBS, and you can stop the instance, change its size, and start it again with more RAM, CPU, etc.
Of course that is not our long-term strategy - you will eventually hit a ceiling. Plus, having everything in a single very large server is not great in terms of fault tolerance or cost efficiency.
So as a first step let's go ahead and move the database to its own dedicated instance.
We have 2 servers so instantly a lot more capacity.
But we can also select a different instance type tailored to each workload.
Of course this is also better in terms of security – e.g. we can really lock down access to the db server.
And this is usually the point where someone will ask me: which database should I use? There are two main types of databases that are popular: relational databases and NoSQL databases.
And my default answer is that you should start with a Relational database.
There will be exceptions and later on we will talk about those and how those technologies scale.
But relational databases will work well for most apps. They offer more features, and there are more developers with experience writing apps for them.
So start with that; the reality is that you can always add NoSQL later for the right workloads.
But we know from experience that managing Relational databases is hard especially at scale.
Databases are a frequent cause of downtime in the IT world!
This is especially true for startups with limited resources
You won't have access to consultants to help you.
So instead of managing your database on your own on an EC2 instance, you can use Amazon's Relational Database Service:
RDS
And RDS solves that problem for you. With a few clicks you can have a DB server running MySQL, Oracle, SQL Server, or Postgres.
And AWS handles all the provisioning and hardware replacement, makes it easy to migrate to a larger server when you need it, and handles backups, security patches, etc., so that you can build your application on top of a robust database implementation.
Now we could start scaling those 2 tiers straight away.
But let’s take a step back and implement some quick wins early on in the process.
Low effort changes that will give us a lot of room to breathe and cost efficiency as we grow.
First we want to store any static assets like CSS files and images on Amazon Simple Storage Service (S3).
S3 not only stores those files but can also act as a highly scalable hosting service.
Instead of serving those assets through your web server, you offload this task to S3 URLs.
This will reduce the load on your web server, which can now focus on generating dynamic content.
Secondly we want to use CloudFront, which is a Content Delivery Network.
It can reduce latency for users around the world by caching both static and dynamic content on the edge locations of the AWS global infrastructure.
In some cases even a few seconds of caching for very popular pages can result in a huge reduction of load for your web server.
Even for non-cacheable content CloudFront will provide network optimizations.
So what we are doing here is using CloudFront to serve the whole application.
We can specify a different origin depending on specific file path patterns. In this example we fetch content from S3 or EC2.
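The path-pattern routing described here can be sketched like this. The patterns and origin names are illustrative assumptions, not a CloudFront API - in practice you configure cache behaviors on the distribution itself:

```python
from fnmatch import fnmatch

# Ordered (pattern, origin) pairs, mimicking CloudFront cache behaviors:
# the first matching path pattern decides which origin serves a request.
# Patterns and origin names here are illustrative, not an AWS API.
BEHAVIORS = [
    ("/static/*", "s3-static-assets"),   # CSS, JS, images from S3
    ("/uploads/*", "s3-user-uploads"),   # user uploads from S3
    ("*", "ec2-web-servers"),            # default: dynamic content from EC2
]

def pick_origin(path):
    for pattern, origin in BEHAVIORS:
        if fnmatch(path, pattern):
            return origin
    return None
```

Ordering matters: the catch-all `*` behavior must come last, just as CloudFront's default cache behavior is evaluated after the specific ones.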
Then we can apply caching on one more layer – between the application server and the DB server.
Any frequent queries to the db where the results do not change very often can have their results cached and served from an in memory cache.
This will provide a better experience and reduce the load on your database.
You can install something like Memcached or Redis on a set of EC2 instances, but similarly to what we described for the database, you can use a managed service called ElastiCache that lets you run those engines without the operational overhead.
OK so you are done with the beta, have refined your product but you want to get more sincere feedback and iterate fast.
The best way to do this is to start offering paid membership of some sort. Paid customers will typically give you the best feedback. They are demanding and are the ones that already think your product is worth paying for. It is now very important that you introduce high availability to your architecture. A hardware failure should not impact your end users.
Here is the current architecture which has multiple single points of failure.
E.g. if the web server crashes, your app won't work.
We add a second Web server from the same AMI but on a separate AZ. Each AWS region has multiple AZs that are physically distinct locations. This allows you to build extremely robust architectures that utilize multiple data centers.
Because we have multiple Web servers we need to distribute http requests with elastic load balancing.
And you don't need multiple ELB instances, because ELB is not a single server. It is itself a managed and fault-tolerant service.
ELB will also automatically scale its own capacity to process incoming requests depending on traffic.
For the database assuming we are using RDS you can enable the multi AZ feature that will launch a secondary node in a different AZ.
In an event of failure RDS will automatically fail over to that instance maintaining the hostname so that you don't need to manually modify your app config.
Similarly, for the cache we expand our cluster across 2 AZs.
In the case of Memcached each of those nodes stores a portion of the keys, so the impact of failure is reduced; only part of our cache will become cold.
In the case of Redis we can easily configure ElastiCache to set up master-slave replication and automatic failover.
A problem we have to face when moving from one to 2 servers is how we manage user sessions. Typically most runtime environments store those on the local file system, which is not shared. A user who signs in on one server will be logged out on a subsequent HTTP request that might be serviced by server 2.
A quick fix here is to use an ELB feature called session stickiness. This will send a particular user to the same backend server every time. We will see later on why this is not our long-term solution and why it is better to move this to DynamoDB.
And DynamoDB is a managed NoSQL data store on AWS that stores your data durably in multiple AZs. It also has consistently fast performance, so it is ideal for the storage of session data.
In fact, for PHP and Tomcat environments there are drop-in replacement session handlers that you can use to achieve that.
Going further on our journey let's assume your startup has seen some good traction and is ready to invest on marketing campaigns which could help it go viral.
In traditional hosting environments that is a nice but difficult problem to have. You need to guess how many servers to buy or rent, and you might order too many, or too few. In AWS you can go to the console and add more web servers as required.
You can add for example 2 more web servers
And attach them to Elastic Load Balancing
Elastic Load Balancing itself will scale automatically.
But this is not something you want to do manually.
Even during the same day you have variance in your capacity requirements so you want to automatically adjust the number of servers in your fleet to be as close as possible to your actual needs.
Auto Scaling is a service that allows you to do that.
You configure a minimum and a maximum number of servers, and you set a rule that defines when to add servers and when to remove them.
E.g. when CPU utilization is high for more than 5 minutes.
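A rule like that can be sketched as a pure function over recent metrics. The thresholds, the 5-sample window, and the step size of one instance are illustrative assumptions, not Auto Scaling defaults:

```python
# Decide a new fleet size from recent CPU samples (one per minute).
# Scale out when CPU has been high for the whole window, scale in when
# it has been low; thresholds and window are illustrative assumptions.
HIGH, LOW, WINDOW = 70.0, 25.0, 5

def desired_capacity(current, cpu_samples, min_size=2, max_size=20):
    recent = cpu_samples[-WINDOW:]
    if len(recent) == WINDOW and all(c > HIGH for c in recent):
        current += 1                      # sustained high CPU: add a server
    elif len(recent) == WINDOW and all(c < LOW for c in recent):
        current -= 1                      # sustained low CPU: remove one
    return max(min_size, min(max_size, current))  # clamp to the pool limits
```

Clamping to `min_size`/`max_size` mirrors the minimum and maximum pool sizes you define on the Auto Scaling group; in the real service CloudWatch alarms drive the decision instead of a local list of samples.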
STORY BACKGROUND
Sanlih E-Television is a leading cable TV company in Taiwan with about 25 percent of the national viewing audience. The network operates six channels: 24-hour news, drama, lifestyle and pop, international, finance, and music television.
Amazon EC2 to run the website, Amazon RDS and DynamoDB for database services, Amazon Kinesis for real-time application monitoring and clickstream analytics.
AWS used to support its Internet platforms strategy including TV, online news apps, e-commerce, and OTT content.
SOLUTION & BENEFITS
AWS services (EC2) for online campaigns related to its programs, including popular dramas, and for sending out news flashes to mobile devices
Adopting Amazon Elasticsearch Service and Amazon Elastic MapReduce (Amazon EMR) for deeper insights into customer engagement through the company’s multiple online channels.
Saved 30% over other cloud service providers, 50% over on-premises solutions
CONTENT TAGS
Main use case: Website/Web App
Additional use case(s): Big Data
Keywords (separated by commas): broadcast, TV, cable TV network, online platform, e-commerce, TV channels, multiplatform, new media, mobile, streaming services
All AWS Services used by the customer: Amazon EC2, Amazon RDS, Amazon DynamoDB, Amazon Elasticsearch Service, Amazon Kinesis, Amazon Elastic MapReduce
Benefits Realized: Options are: Flexibility, Lower Cost, Lower Time To Market, User Experience
STORY BACKGROUND
Netflix is one of the world’s leading Internet television networks, with over 57 million members in nearly 50 countries.
The company is using AWS to measure and understand its users’ streaming experiences through its analytics platform. Also using AWS to deliver billions of hours of content per month to users worldwide.
By using AWS, Netflix can reduce its testing times from weeks to seconds and store more than 10 PB of information––hundreds of millions of objects––on Amazon S3.
SOLUTION
[Main use case]. Big Data
[Additional use cases]. Analytics and Business Intelligence (BI); Content Delivery; Database and Data Warehouse; Development and Test
[Keywords separated by commas]. EMR, Analytics, S3, Data Warehouse, Testing, User experience, Hadoop, DevOps
[List all AWS Services used by the customer]. Using Amazon EC2, Amazon EMR, Amazon S3, DynamoDB
BENEFITS
Reduced testing time from weeks to seconds by launching instances instead of procuring servers.
Netflix operates a 10 PB data ‘warehouse’ on Amazon S3 comprised of hundreds of millions of objects.
Designed to deliver billions of hours of content monthly using tens of thousands of instances across three regions.
Moving organization to a DevOps model to promote fast ways to test and experiment new features.
[Benefits Realized]. Availability, Better Performance, Lower Time To Market, Scalability/Elasticity, Speed, User Experience
This sounds very easy, and it is, as long as you have a stateless architecture on your web servers.
What does this mean?
Anything that needs to persist beyond the life of a single HTTP request should be stored in shared storage - not on the web server itself.
E.g. in our example we have already done the hard work.
We store user uploads on S3, user sessions on DynamoDB, and everything else perhaps on an RDS database.
With that we can simply add more servers when we need them.
They will immediately serve new and existing users - we are not using session stickiness.
But more importantly we can terminate any of them at any time - none of them stores any important data that is not saved elsewhere.
And the architecture I described is simple, but you still need to learn about AWS Auto Scaling, deploy your app to multiple servers, maintain different environments for development, testing and production, and maintain multiple versions of your app; maybe you also want to do A/B testing.
With Elastic Beanstalk you just provide your code as a zip file and the service will configure ELB, launch servers in an Auto Scaling group, and deploy your code. It is a free service - you only pay for the resources it launches for you - it supports multiple runtimes, and it is very customizable.
You can move a lot faster and hide some complexity by using an automated service like Elastic Beanstalk.
Another characteristic of scalable architectures is loose coupling. You can use SQS - Amazon's queuing service - to achieve that.
If you have tasks that can be performed asynchronously, you can place those in SQS instead of having your users wait for them to be performed. You can use SQS as a buffer that protects your backend systems from sudden spikes, because the backend can process the queue at its own pace - so you don't need to scale up aggressively.
You also move latency out of highly responsive request paths, and you can hide any performance or availability issues from your end users.
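The buffering pattern can be sketched with Python's standard-library queue standing in for SQS: producers enqueue work during a spike and return immediately, while a worker drains the queue at its own pace (the processing step is a labeled stand-in for real work):

```python
import queue
import threading

# A stdlib queue stands in for SQS: producers enqueue work during a
# traffic spike and return immediately; a single worker drains it at
# its own pace, so the backend never sees the burst directly.
tasks = queue.Queue()
results = []

def worker():
    while True:
        job = tasks.get()
        if job is None:                     # sentinel: no more work
            break
        results.append(f"processed {job}")  # stand-in for real work
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(100):    # sudden spike of 100 requests
    tasks.put(i)        # enqueue returns immediately; the user never waits
tasks.put(None)         # tell the worker to stop once the queue is drained
t.join()
```

The producer side stays fast no matter how slow the consumer is; with SQS the queue is also durable and shared across machines, which a single-process `queue.Queue` of course is not.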
A few days ago the AWS Lambda service became available and this even allows you to offload the processing of asynchronous tasks to a managed execution layer so that you don’t even need to have ec2 instances to run this code.
And now that we have loads of users it is important we increase our pace and add new features.
Many times when you add functionality you might need to introduce new components to your setup. Perhaps you want to implement advanced search features. Or you want to send push notifications or implement video transcoding.
In those cases your first question should be whether there is an aws service that already achieves that and is already designed to scale instead of figuring out how to implement it on your own on ec2.
We have seen how services like EC2 give you the freedom to architect in myriad ways, but your app needs to be built in a certain way to take advantage of their elasticity.
And it is important to realize that the higher level services – you can think of them as building blocks – are already implemented to scale so that you don't have to architect from scratch.
In fact some of those do this automatically for you.
These services are available with a few clicks. And as long as you can use such services you can keep the size of your team small and still achieve great outcomes for your customers.
Even later if you have lots of revenue and you can hire engineers it is always better if they focus on the things that differentiate you and not on how to manage a search cluster.
If we follow the same concept we can keep on scaling with no practical limits.
In summary, the main points from today's session are the following:
You want to keep things as simple as possible and create a stateless web architecture.
Distribute your resources across multiple AZs and use Auto Scaling for your EC2 infrastructure.
But do try to use managed services on AWS as much as possible, and select the right DB for the right job.
Caching will help you be more efficient, and automated deployment tools can help you be operationally efficient.
In terms of next steps, there is a lot of documentation online, but I would also highly recommend you sign up for AWS Business Support, as it can be an extension of your team.