Not having to worry about servers can save you time and effort. Today, with a serverless platform, you can globally distribute your web application to run in dozens of data centers across the planet, with your customers served from the one nearest to them. Join this session to learn how you can combine forces: Drupal as a powerful headless CMS, AWS Lambda@Edge providing serverless compute functionality, and Amazon CloudFront accelerating content through its global network.
In this presentation, we look at architecture, integration examples, and best practices for some of the most popular use cases across different channels such as web, mobile, and social media. Learn more: https://aws.amazon.com/cloudfront/
This document summarizes Netflix's global cloud edge architecture. Key points include:
- Netflix uses edge services and a global cloud infrastructure to deliver content to over 1000 device types in over 40 countries.
- Zuul is an open source framework that Netflix uses for dynamic routing, authentication, testing, and security across its edge services.
- Netflix's edge scripting tier allows device teams to rapidly deploy scripts that control endpoints, content formatting, and APIs for different devices.
- RxJava and Hystrix help make the edge service API asynchronous, fault tolerant, and able to handle high concurrency.
- Netflix's delivery pipeline uses techniques like canary analysis, debugging, and load testing to continuously and automatically deploy changes.
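RxJava and Hystrix are Java libraries, but the core idea they bring to an edge API (give each dependency call a time budget and a fallback, so one slow backend cannot stall the whole request) can be sketched in a few lines of Python. This is an illustrative sketch of the pattern only, not Netflix's implementation; the function names are made up:

```python
import concurrent.futures

def with_fallback(call, fallback, timeout_s=0.5):
    """Run `call` with a time budget; on timeout or error, serve `fallback`.

    This mirrors the Hystrix idea: isolate a dependency call and degrade
    gracefully instead of propagating the failure to the caller.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call)
        try:
            return future.result(timeout=timeout_s)
        except Exception:  # timeout or downstream failure
            return fallback()

# A failing dependency falls back to a canned response (hypothetical names).
def flaky_recommendations():
    raise RuntimeError("downstream service unavailable")

result = with_fallback(flaky_recommendations, lambda: ["popular titles"])
```

In the real edge tier this isolation is combined with circuit breaking and bounded thread pools per dependency; the sketch shows only the fallback half.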
In a world where compute is paramount, it is all too easy to overlook the importance of storage and IO in the performance and optimization of Spark jobs.
(BDT303) Running Spark and Presto on the Netflix Big Data Platform - Amazon Web Services
In this session, we discuss how Spark and Presto complement the Netflix big data platform stack that started with Hadoop, and the use cases that Spark and Presto address. Also, we discuss how we run Spark and Presto on top of the Amazon EMR infrastructure; specifically, how we use Amazon S3 as our data warehouse and how we leverage Amazon EMR as a generic framework for data-processing cluster management.
Apache Flink: API, runtime, and project roadmap - Kostas Tzoumas
The document provides an overview of Apache Flink, an open source stream processing framework. It discusses Flink's programming model using DataSets and transformations, real-time stream processing capabilities, windowing functions, iterative processing, and visualization tools. It also provides details on Flink's runtime architecture, including its use of pipelined and staged execution, optimizations for iterative algorithms, and how the Flink optimizer selects execution plans.
Keep Your Cache Always Fresh with Debezium! with Gunnar Morling | Kafka Summi... - HostedbyConfluent
The saying goes that there are only two hard things in computer science: cache invalidation and naming things. Well, it turns out the first one is actually solved ;)
Join us for this session to learn how to keep read views of your data in distributed caches close to your users, always kept in sync with your primary data stores via change data capture. You will learn how to:
- Implement a low-latency data pipeline for cache updates based on Debezium, Apache Kafka, and Infinispan
- Create denormalized views of your data using Kafka Streams and make them accessible via plain key look-ups from a cache cluster close by
- Propagate updates between cache clusters using cross-site replication
We'll also touch on some advanced concepts, such as detecting and rejecting writes to the system of record which are derived from outdated cached state, and show in a demo how all the pieces come together, of course connected via Apache Kafka.
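Debezium, Kafka, and Infinispan do the heavy lifting in the talk; the core flow (change events from the primary store streamed into a read-side cache) can be simulated in-process with plain Python. The event shape below is illustrative, not Debezium's actual change-event envelope:

```python
# A tiny in-process simulation of CDC-driven cache maintenance: every
# write to the "primary store" emits a change event, and a consumer
# applies those events to a read-side cache.

primary_store = {}
change_log = []   # stands in for a Kafka topic
cache = {}        # stands in for the cache cluster

def write(key, value):
    primary_store[key] = value
    change_log.append({"op": "upsert", "key": key, "value": value})

def delete(key):
    primary_store.pop(key, None)
    change_log.append({"op": "delete", "key": key})

def apply_change_events():
    """The 'consumer': keeps the cache in sync with the primary store."""
    while change_log:
        event = change_log.pop(0)
        if event["op"] == "upsert":
            cache[event["key"]] = event["value"]
        else:
            cache.pop(event["key"], None)

write("user:1", {"name": "Ada"})
write("user:2", {"name": "Grace"})
delete("user:1")
apply_change_events()
# the cache now mirrors the primary store without explicit invalidation calls
```

The point of the pattern is that application code never invalidates the cache directly; the change stream is the single source of updates, which is what keeps the read views "always fresh".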
Organizations are struggling to make sense of their data within antiquated data platforms. Snowflake, the data warehouse built for the cloud, can help.
Architecting for the Cloud using NetflixOSS - Codemash Workshop - Sudhir Tonse
This document provides an overview and agenda for a presentation on architecting for the cloud using the Netflix approach. Some key points:
- Netflix has over 40 million members streaming over 1 billion hours per month of content across over 40 countries.
- Netflix runs on AWS and handles billions of requests per day across thousands of instances globally.
- The presentation will discuss how to build your own platform as a service (PaaS) based on Netflix's open source libraries, including platform services, libraries, and tools.
- The Netflix approach focuses on microservices, automation, and resilience to support rapid iteration on cloud infrastructure.
Apache Cassandra is a free, distributed, open source, and highly scalable NoSQL database that is designed to handle large amounts of data across many commodity servers. It provides high availability with no single point of failure, linear scalability, and tunable consistency. Cassandra's architecture allows it to spread data across a cluster of servers and replicate across multiple data centers for fault tolerance. It is used by many large companies for applications that require high performance, scalability, and availability.
Elasticsearch is a free and open source distributed search and analytics engine. It allows documents to be indexed and searched quickly and at scale. Elasticsearch is built on Apache Lucene and uses RESTful APIs. Documents are stored in JSON format across distributed shards and replicas for fault tolerance and scalability. Elasticsearch is used by many large companies due to its ability to easily scale with data growth and handle advanced search functions.
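Elasticsearch decides which primary shard holds a document by hashing its routing value (the document ID by default) modulo the number of primary shards. The idea, greatly simplified (Elasticsearch actually uses a murmur3 hash; md5 is used here only for determinism):

```python
import hashlib

def shard_for(doc_id: str, num_primary_shards: int) -> int:
    """Simplified shard routing: hash the document ID, mod the shard count.
    Elasticsearch uses murmur3 internally; md5 here is just a stable stand-in."""
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_primary_shards

# The same ID always routes to the same shard, which is why the number of
# primary shards cannot be changed without reindexing.
assert shard_for("doc-42", 5) == shard_for("doc-42", 5)
```

Replicas then mirror each primary shard on other nodes, which is where the fault tolerance in the summary above comes from.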
How Orange Financial combat financial frauds over 50M transactions a day usin... - StreamNative
You will learn how Orange Financial combats financial fraud across more than 50M transactions a day using Apache Pulsar. This presentation was shared at the Strata Data Conference in New York, US, in September 2019.
Introduction to Presto at Treasure Data - Taro L. Saito
Presto is a distributed SQL query engine that was developed by Facebook to make SQL queries scalable for large datasets. It translates SQL queries into multiple parallel tasks that can process data across many servers without using intermediate storage. This allows Presto to handle millions of records per second. Presto is now open source and used by many companies for interactive analysis of petabyte-scale datasets.
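The fan-out described above (one query split into parallel tasks whose partial results are merged without intermediate storage) can be sketched for a simple count/sum aggregation. This is a pure-Python stand-in for the idea, not Presto's execution engine:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_aggregate(split):
    """Each worker computes a partial (count, sum) over its data split."""
    return len(split), sum(split)

def parallel_sum_count(data, num_tasks=4):
    # Divide the data into splits, one per parallel task.
    splits = [data[i::num_tasks] for i in range(num_tasks)]
    with ThreadPoolExecutor(max_workers=num_tasks) as pool:
        partials = list(pool.map(partial_aggregate, splits))
    # The coordinator merges the partial results in memory,
    # with no intermediate storage between the stages.
    count = sum(c for c, _ in partials)
    total = sum(s for _, s in partials)
    return count, total

count, total = parallel_sum_count(list(range(1, 101)))
# count == 100, total == 5050
```

Aggregations like COUNT and SUM decompose cleanly into partials, which is exactly what makes this streaming, storage-free merge possible.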
ClickHouse Query Performance Tips and Tricks, by Robert Hodges, Altinity CEO - Altinity Ltd
1. ClickHouse uses a MergeTree storage engine that stores data in compressed columnar format and partitions data into parts for efficient querying.
2. Query performance can be optimized by increasing threads, reducing data reads through filtering, restructuring queries, and changing the data layout such as partitioning strategy and primary key ordering.
3. Significant performance gains are possible by optimizing the data layout, such as keeping an optimal number of partitions, using encodings to reduce data size, and skip indexes to avoid unnecessary I/O. Proper indexes and encodings can greatly accelerate queries.
ClickHouse Deep Dive, by Aleksei Milovidov - Altinity Ltd
This document provides an overview of ClickHouse, an open source column-oriented database management system. It discusses ClickHouse's ability to handle high volumes of event data in real-time, its use of the MergeTree storage engine to sort and merge data efficiently, and how it scales through sharding and distributed tables. The document also covers replication using the ReplicatedMergeTree engine to provide high availability and fault tolerance.
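The MergeTree idea, where each data part is stored sorted by primary key and background merges combine small parts into larger sorted parts, maps directly onto a k-way merge of sorted runs. A sketch of that one mechanism (illustration only, not ClickHouse internals):

```python
import heapq

# Each "part" is already sorted by the primary key (here, an integer key).
part_1 = [(1, "a"), (4, "d"), (9, "i")]
part_2 = [(2, "b"), (3, "c"), (8, "h")]
part_3 = [(5, "e"), (6, "f"), (7, "g")]

# A background merge produces one larger part, still sorted by key, in a
# single streaming pass over the inputs - no full re-sort is needed.
merged_part = list(heapq.merge(part_1, part_2, part_3))
# keys come out as 1..9 in order
```

Because every part is sorted, range scans over the primary key stay cheap both before and after merges; merging only reduces the number of parts a query has to touch.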
This document discusses XML and XPath injection vulnerabilities. It begins with an overview of XML basics like structure and components. It then covers different types of XML injections like in node attributes, node values, and CDATA sections. Next, it discusses XPath basics like syntax and functions. The document outlines techniques for XPath injection vulnerabilities, including blind XPath injection to extract XML file structure. It concludes with recommendations for XPath injection tools and references.
Building a Versatile Analytics Pipeline on Top of Apache Spark with Mikhail C... - Databricks
It is common for consumer Internet companies to start off with popular third-party tools for their analytics needs. Then, as the user base and the company grow, they end up building their own analytics data pipeline and query engine to cope with their data scale, satisfy custom data enrichment and reporting needs, and achieve high data quality. That's exactly the path taken at Grammarly, the popular online proofreading service.
In this session, Grammarly will share how they improved business and marketing analytics, previously done with Mixpanel, by building their own in-house analytics engine and application on top of Apache Spark. Chernetsov will touch upon several Spark tweaks and gotchas that they experienced along the way:
– Outputting data to several storages in a single Spark job
– Dealing with Spark memory model, building a custom spillable data-structure for your data traversal
– Implementing a custom query language with parser combinators on top of the Spark SQL parser
– A custom query optimizer and analyzer for when you want something that is not exactly SQL
– Flexible-schema storage and query against multi-schema data with schema conflicts
– Custom aggregation functions in Spark SQL
Agenda:
1. Data Flow Challenges in an Enterprise
2. Introduction to Apache NiFi
3. Core Features
4. Architecture
5. Demo - Simple Lambda Architecture
6. Use Cases
7. Q & A
This document introduces Apache Cassandra, a distributed column-oriented NoSQL database. It discusses Cassandra's architecture, data model, query language (CQL), and how to install and run Cassandra. Key points covered include Cassandra's linear scalability, high availability and fault tolerance. The document also demonstrates how to use the nodetool utility and provides guidance on backing up and restoring Cassandra data.
MaxScale uses an asynchronous and multi-threaded architecture to route client queries to backend database servers. Each thread creates its own epoll instance to monitor file descriptors for I/O events, avoiding locking between threads. Listening sockets are added to a global epoll file descriptor that notifies threads when clients connect, allowing connections to be distributed evenly across threads. This architecture improves performance over the previous single epoll instance approach.
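MaxScale's per-thread epoll design is C code, but its shape (each worker thread owns a private readiness instance, so no locks are shared between threads for I/O notification) can be sketched with Python's portable selectors module, which uses epoll on Linux. A sketch of the idea only, not MaxScale's implementation:

```python
import selectors
import socket
import threading

results = {}

def worker(idx, conn):
    # Each worker thread creates its OWN selector (epoll on Linux), so
    # readiness notification involves no cross-thread locking - the
    # one-instance-per-thread design described above.
    sel = selectors.DefaultSelector()
    sel.register(conn, selectors.EVENT_READ)
    events = sel.select(timeout=5)  # block until the "client" sends data
    if events:
        results[idx] = conn.recv(1024)
    sel.close()
    conn.close()

# One connected socket pair per worker stands in for client connections.
pairs = [socket.socketpair() for _ in range(2)]
threads = [threading.Thread(target=worker, args=(i, p[0]))
           for i, p in enumerate(pairs)]
for t in threads:
    t.start()
for i, p in enumerate(pairs):
    p[1].sendall(b"query %d" % i)  # the "clients" send their queries
for t in threads:
    t.join()
# results == {0: b"query 0", 1: b"query 1"}
```

The distribution step MaxScale adds on top (a shared listening socket that hands new connections to threads evenly) is omitted here; the sketch shows only the lock-free per-thread event loop.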
Data Saturday Oslo: Azure Purview - Erwin de Kreuk
Azure Purview provides unified data governance capabilities including automated data discovery, classification, and lineage visualization. It helps organizations overcome data governance silos, comply with regulations, and increase data agility. The key components of Azure Purview include the Data Map for automated metadata extraction and lineage, the Data Catalog for data discovery and governance, and Insights for monitoring data usage. It supports governance of data across cloud and on-premises environments in a serverless and fully managed platform.
Are your Oracle databases highly available? Have you deployed Real Application Clusters (RAC), Data Guard, or Failover Clusters, and are you well protected against server failures? Great: the prerequisites for a highly available environment are in place. However, to ensure that backend infrastructure failures also remain transparent to the client, an appropriate configuration is required.
This lecture will discuss the Oracle technologies that can be used to achieve automatic client failover functionality. What are the advantages, but also the limitations of these technologies?
CQL is a structured query language for Cassandra that replaces the earlier programmatic client APIs. It features a familiar SQL-like syntax for querying Cassandra in a user-friendly way. Key CQL commands include CREATE KEYSPACE, CREATE COLUMNFAMILY, SELECT, UPDATE, DELETE, BATCH, and DROP. Consistency levels can be specified for commands using USING, with valid options ranging from CONSISTENCY ZERO to CONSISTENCY DCQUORUMSYNC.
This document discusses Apache NiFi and how it was used to create a new composable data flow system for Schlumberger in just 10 man hours. The previous system was very complex, took over 100 man years to create, and was difficult to change. NiFi allows for easy visualization of the data flow, debugging of issues, and rapid creation of new processors. It also enables quick testing of data flows using curated test data sets and live data in Docker containers. Next steps discussed include further exploring use cases for rig data ingestion with NiFi to provide data provenance and understand the chain of custody of data as it moves through the system.
This is the presentation I gave at JavaDay Kiev 2015 on the architecture of Apache Spark. It covers the memory model, the shuffle implementations, data frames, and some other high-level topics, and can be used as an introduction to Apache Spark.
AWS Lambda enables you to run code without provisioning or managing servers. AWS Lambda@Edge enables you to write Lambda functions once and execute them wherever your viewers are located. In this session, we walk through examples of web applications that use the serverless programming model for authentication, customization, and security. We show you how to design and deploy intelligent web applications using Lambda@Edge and Amazon CloudFront.
The document discusses using AWS Lambda functions with Amazon CloudFront (Lambda@Edge) to run code at the edge of the CloudFront network. Some key points:
- Lambda@Edge allows running Lambda functions in response to CloudFront events, including viewer requests and origin requests. This can run code closer to users for lower latency.
- Functions can perform authentication, content generation/processing, and other edge logic. They have access to request/response objects and can modify caching behavior.
- Examples include redirecting unauthenticated users, dynamically selecting origins, and template rendering to generate responses from data sources on origin requests before caching.
- With Lambda@Edge, the same code can run globally across CloudFront.
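The points above can be made concrete with a minimal viewer-request function. The event shape follows CloudFront's Lambda@Edge event structure; the cookie name `session` and the `/login` path are assumptions for illustration only:

```python
# A minimal Lambda@Edge viewer-request handler: unauthenticated viewers
# are redirected to /login, everyone else passes through to the cache.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    cookies = headers.get("cookie", [])
    # Hypothetical session check: look for a "session=" cookie.
    authed = any("session=" in c.get("value", "") for c in cookies)
    if authed:
        return request  # pass the request through unchanged
    return {            # short-circuit with a generated redirect response
        "status": "302",
        "statusDescription": "Found",
        "headers": {
            "location": [{"key": "Location", "value": "/login"}],
        },
    }

# Local smoke test with a mock CloudFront event:
mock_event = {"Records": [{"cf": {"request": {"uri": "/private", "headers": {}}}}]}
response = handler(mock_event, None)
```

Returning the request object lets CloudFront continue to the cache or origin; returning a response object generates the reply at the edge, which is how the redirect and template-rendering examples above work.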
Taking Serverless to the Edge - SRV330 - Chicago AWS Summit - Amazon Web Services
Learning Objectives:
- Deep dive on Lambda@Edge and the most recent developments
- Learn how to run your Lambda functions at multiple locations closer to the end viewer
- Learn how it can apply to your web development use cases
Deep Dive into AWS X-Ray: Monitor Modern Applications (DEV324) - AWS re:Inven... - Amazon Web Services
Are you spending hours trying to understand how customers are impacted by performance issues and faults in your service-oriented applications? In this session, we show you how customers are using AWS X-Ray to reduce mean time to resolution, get to the root cause faster, and determine customer impact. In addition, one of our X-Ray customers, ConnectWise, presents a case study and best practices on how it is leveraging X-Ray in its production environment. We also show you how to use X-Ray with applications built using AWS services, such as Amazon Elastic Container Service for Kubernetes (Amazon EKS), AWS Fargate, Amazon Elastic Container Service (Amazon ECS), and AWS Lambda to achieve the above.
This session will highlight the most impactful announcements made at AWS re:Invent 2017 while giving you ideas on key use cases for new services and features. We’ll cover major themes of the conference, the new services and features within those themes and how they work together to make it faster and easier to build functionality into your app.
Ensuring Your Windows Server Workloads Are Well-Architected - AWS Online Tech... - Amazon Web Services
Learning Objectives:
- Learn about common architecture patterns for network design, Microsoft Active Directory, and business productivity solutions like Dynamics AX, CRM, and Microsoft SharePoint
- Explore common scenarios for legacy and custom .NET, .NET Core with Microsoft SQL deployments and migrations
- Gain insights on simplifying your IT infrastructure and managing your Microsoft workloads in a familiar environment
If you are still running servers for your website backend, come and see how you can remove server operations from your task list and focus on developing the best code and product.
In this session, we will take a common website architecture and show how we can use AWS Lambda, AWS AppSync, and other services to build smarter and better systems.
How to build scalable and resilient applications in the cloud - AWS Summit Ca... - Amazon Web Services
Speaker: Adrian Hornsby, AWS
Level: 300
Ever wondered how companies delivering global services, like Amazon or Netflix, architect and test their software systems? If you are curious and want to learn how they do it, this session is for you! We will deep dive into availability, reliability, and large-scale architectures, and give an introduction to chaos engineering, a discipline that promotes breaking things on purpose in order to learn how to build more resilient systems.
Wildrydes Serverless Workshop Tel Aviv - Boaz Ziniman
This document summarizes a serverless computing workshop on building a web application called Wild Rydes. The workshop will provide an overview of serverless computing and AWS services, including AWS Lambda, Amazon DynamoDB, Amazon API Gateway, Amazon Cognito, and Amazon S3. Attendees will complete four labs to build components of the application, including hosting a static website on S3, managing user registration with Cognito, creating a backend with Lambda and DynamoDB, and building a REST API with API Gateway.
Build Modern Applications that Align with Twelve-Factor Methods (API303) - AW... - Amazon Web Services
Twelve-Factor designs improve component reuse and resilience for developers building large-scale software-as-a-service (SaaS) applications. In recent years, the Twelve-Factor guidelines have become a source of best practices for both developers and operations engineers, regardless of the application’s use case and at nearly any scale. In this workshop, create a modern app to see how the Twelve-Factor Application guidelines align with serverless best practices. Learn how to address those Twelve-Factor guidelines that don’t directly align with serverless architectures or are interpreted differently, and practice by implementing examples using AWS Lambda, AWS Step Functions, Amazon API Gateway, and the AWS Code services. Bring a laptop (Windows/OSX/Linux all supported). Tablets are not appropriate. We also recommend installing the current version of Chrome or Firefox.
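One Twelve-Factor guideline that maps directly onto serverless deployments is factor III, config in the environment: the same artifact runs unchanged across dev, staging, and production. A minimal sketch (the variable names TABLE_NAME, STAGE, and TIMEOUT_S are illustrative, not an AWS convention):

```python
import os

def load_config(env=os.environ):
    # All deployment-specific values come from the environment, with safe
    # defaults for local development - no per-environment code branches.
    return {
        "stage": env.get("STAGE", "dev"),
        "table_name": env.get("TABLE_NAME", "app-table-dev"),
        "timeout_s": int(env.get("TIMEOUT_S", "5")),
    }

# In production the platform injects the variables; locally they default.
config = load_config({"STAGE": "prod", "TABLE_NAME": "app-table-prod"})
```

In a Lambda function these values would typically be supplied as function environment variables, so promoting a build between stages changes configuration without changing code.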
Eliminate Migration Confusion: Speed Migration with Automated Tracking (ENT31... - Amazon Web Services
The document discusses automating the migration of applications and infrastructure from on-premises to AWS. It covers the benefits of automating discovery, migration, and tracking migrations. Examples of automation architectures are presented, including using AWS services like Application Discovery Service, AWS Migration Hub, and Lambda. The overall goal is to speed up migrations through repeatable, traceable automation.
Customizing Content Delivery with Lambda@Edge (CTD415-R1) - AWS re:Invent 2018 - Amazon Web Services
Join us for a hands-on session on using AWS Lambda@Edge and Amazon CloudFront to deliver a high-performance, personalized experience to your internet users across the globe. Walk away with a working setup combining Amazon S3, Amazon DynamoDB, and CloudFront with Lambda@Edge to build websites that are simultaneously hosted across AWS locations around the world. Learn how to extend application architecture for scale and performance. We show you how to reduce origin costs by rendering web page content at the edge with open source libraries, such as jQuery and Mustache. We explore architecture, configuration, and DevOps with real examples of how AWS customers are using Lambda@Edge for their websites.
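A minimal sketch of the edge-rendering idea, assuming a viewer-request trigger: Python's stdlib `string.Template` stands in for Mustache-style templating, and the `x-demo-city` header is a made-up stand-in for whatever data drives personalization in a real deployment:

```python
from string import Template

# Stdlib templating stands in for the open source libraries named in the talk.
PAGE = Template("<html><body><h1>Hello from $city!</h1></body></html>")

def handler(event, context):
    """Viewer-request handler that renders HTML at the edge,
    so the request never reaches the origin."""
    request = event["Records"][0]["cf"]["request"]
    # Hypothetical header; real setups might use CloudFront's geo headers.
    city = request.get("headers", {}).get(
        "x-demo-city", [{"value": "the edge"}]
    )[0]["value"]
    return {
        "status": "200",
        "statusDescription": "OK",
        "headers": {"content-type": [{"key": "Content-Type", "value": "text/html"}]},
        "body": PAGE.substitute(city=city),
    }
```

Returning a response dict from a viewer-request function is what lets the page be served directly from the edge location, which is where the origin-cost reduction comes from.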
Resiliency and Availability Design Patterns for the Cloud - Amazon Web Services
We have traditionally built robust software systems by trying to avoid mistakes, by dodging failures when they occur in production, or by testing parts of the system in isolation from one another. Modern methods and techniques take a very different approach based on resiliency, which promotes embracing failure instead of trying to avoid it. Resilient architectures enhance observability and leverage well-known patterns such as graceful degradation, timeouts, and circuit breakers, as well as newer patterns like cell-based architecture and shuffle sharding. In this session, we review the most useful patterns for building resilient software systems and show the audience how they can benefit from them.
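As a concrete illustration of one of these patterns, here is a minimal circuit-breaker sketch, not tied to any particular library: after a run of consecutive failures the circuit opens and calls fail fast, then a trial call is allowed through once a cool-down elapses:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive
    failures, fail fast while open, allow one trial call ("half-open")
    after `reset_after` seconds."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Failing fast while the circuit is open is what protects callers from piling up on an already unhealthy downstream dependency, which is the core of the pattern.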
The document discusses scaling applications from basic to advanced architectures on AWS. It begins with simple static websites hosted on S3 and evolves to include services like Route53, EC2, databases, authentication with Cognito, load balancing, auto-scaling, caching, and asynchronous processing. The final architectures shown are serverless, event-driven, and use microservices.
As serverless architectures become more popular, customers need a framework of patterns to help them identify how they can leverage AWS to deploy their workloads without managing servers or operating systems. This session describes reusable serverless patterns while considering costs. For each pattern, we provide operational and security best practices and discuss potential pitfalls and nuances. We also discuss the considerations for moving an existing server-based workload to a serverless architecture. The patterns use services like AWS Lambda, Amazon API Gateway, Amazon Kinesis Streams, Amazon Kinesis Data Analytics, Amazon DynamoDB, Amazon S3, AWS Step Functions, AWS Config, AWS X-Ray, and Amazon Athena. This session helps you recognize candidates for serverless architectures in your own organizations and understand areas of potential savings and increased agility.
This document discusses serverless architectures and how they can be applied to websites. It begins with an overview of what serverless means, including not having to provision or manage servers, automatic scaling, built-in availability and fault tolerance, and no costs for idle capacity. It then shows how the traditional three-tier web application architecture maps to a serverless implementation using AWS Lambda for the logic tier, API Gateway, DynamoDB, S3, and CloudFront. The document provides examples of security and authorization approaches as well as multi-region support. It concludes by demonstrating a serverless shopping site and inviting the audience to consider what they might build using these approaches.
Serverless Architectural Patterns and Best Practices (ARC305-R2) - AWS re:Inv... - Amazon Web Services
The document discusses serverless architectures and best practices. It covers topics like serverless foundations, web applications, stream processing, data lakes, and machine learning. It provides an overview of AWS serverless offerings and architectural patterns for building serverless applications and processing streaming data with services like AWS Lambda, Amazon API Gateway, Amazon Kinesis, Amazon S3, and AWS Step Functions.
How to build Forecasting services using ML algorithms and deep learn... - Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we show how to pre-process data that contains a time component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data for Startups: how to build Big Data applications in Server... mode - Amazon Web Services
The variety and volume of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, Serverless services let us break through these limits.
Let's see, then, how to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and show how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over that period we learned how changing our approach to application development allowed us to dramatically increase agility and release velocity, and ultimately enabled us to build more reliable and scalable applications. In this session we explain how we define modern applications, and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
Container adoption keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
Amazon ECS, Amazon EKS, and Kubernetes on EC2 can all take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we explore the characteristics of Spot Instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
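As one concrete illustration (not taken from the session), ECS can be pointed at Spot capacity through a capacity provider strategy. The Fargate Spot variant below keeps one task on regular capacity and weights the remainder toward Spot; for EC2-backed clusters, a capacity provider tied to a Spot Auto Scaling group plays the same role. The weights are placeholders:

```json
{
  "capacityProviderStrategy": [
    { "capacityProvider": "FARGATE", "base": 1, "weight": 1 },
    { "capacityProvider": "FARGATE_SPOT", "weight": 3 }
  ]
}
```

The `base` keeps a minimum footprint on On-Demand capacity so the service stays available even if Spot capacity is reclaimed, while the `weight` ratio sends most scale-out tasks to the cheaper pool.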
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering stand out in the market with Machine Lea... services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we look at how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques has been difficult for many years: until now they have often involved manual activities, occasionally causing application downtime that interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and yielding significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, like Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a fast pace. In this webinar we explore the possibilities AWS services offer for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply-chain flow of their products.
Underlying these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we discover how to build a complete serverless application that uses QLDB's capabilities.
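For a flavor of what such an application's data access looks like, here is a sketch in QLDB's PartiQL dialect; the table and field names are hypothetical, while the built-in `history()` function is what exposes the immutable, verifiable log:

```sql
-- Create a ledger table and insert a document (hypothetical names)
CREATE TABLE Transactions;
INSERT INTO Transactions VALUE {'accountId': 'A-1001', 'amount': -50, 'type': 'debit'};

-- Current state of the account's transactions
SELECT * FROM Transactions WHERE accountId = 'A-1001';

-- Every revision ever made, with ledger metadata: the audit trail
SELECT * FROM history(Transactions);
```

The point of the last query is that no application code has to build the audit trail: every committed revision is retained by the ledger itself and can be cryptographically verified against the journal digest.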
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering an exceptional user experience. In this session we learn how to tackle modern API design challenges with GraphQL, an open source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dig into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data update capabilities.
We also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
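As an illustration (not Sky Italia's actual schema), an AppSync GraphQL schema for live score updates might pair a mutation with a subscription through the `@aws_subscribe` directive, which is how AppSync pushes real-time data to connected clients; all type and field names here are invented:

```graphql
type Score {
  matchId: ID!
  home: Int!
  away: Int!
}

type Query {
  getScore(matchId: ID!): Score
}

type Mutation {
  updateScore(matchId: ID!, home: Int!, away: Int!): Score
}

type Subscription {
  onScoreUpdated(matchId: ID): Score
    @aws_subscribe(mutations: ["updateScore"])
}
```

Every time `updateScore` runs, AppSync fans the result out over WebSockets to all clients subscribed to `onScoreUpdated`, with no broker or connection management in application code.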
Oracle Databases and VMware Cloud™ on AWS: debunking the myths - Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, gaining significant advantages in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to facilitate and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they also dive into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien... - Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
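A small runnable sketch of how an application might act on Amazon Comprehend's `DetectSentiment` output, in the spirit of the intelligent call-center use case. The response dict mirrors the documented shape (`Sentiment` plus a `SentimentScore` map), but the routing rule and threshold are invented for illustration:

```python
def route_ticket(sentiment_response, escalate_below=0.3):
    """Decide call-center routing from a Comprehend DetectSentiment-style
    response: escalate to a human when the text reads negative, or when
    confidence in a positive/neutral read is low. The threshold is a
    made-up example, not a recommended value."""
    sentiment = sentiment_response["Sentiment"]
    scores = sentiment_response["SentimentScore"]
    calm = scores.get("Positive", 0.0) + scores.get("Neutral", 0.0)
    if sentiment == "NEGATIVE" or calm < escalate_below:
        return "human-agent"
    return "self-service-bot"

# In production the response would come from
# boto3.client("comprehend").detect_sentiment(Text=..., LanguageCode="en")
sample = {
    "Sentiment": "NEGATIVE",
    "SentimentScore": {"Positive": 0.02, "Negative": 0.91, "Neutral": 0.05, "Mixed": 0.02},
}
print(route_ticket(sample))  # -> human-agent
```

Keeping the parsing and routing logic separate from the API call makes this decision testable with canned responses, without touching the service.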
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
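For orientation, here is a minimal example of the task definition ECS orchestrates: the unit that names the container image, sizes it, and exposes its ports. All names, sizes, and the image URI below are placeholders:

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-web-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

Registering a task definition like this, then pointing an ECS service at it, is typically all it takes to move an existing Docker container onto the orchestration layer the session describes.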