Amazon S3 supports a wide range of storage classes to help you cost-effectively store your data. Each of the S3 Storage Classes is designed to support different use cases while reliably protecting your data. In this session, Amazon S3 experts discuss the different S3 Storage Classes, their respective key features, and the unique use cases they support. We then dive deep into S3’s newest storage class, S3 Intelligent-Tiering—the first cloud storage class that automatically optimizes storage costs for data with changing access patterns. S3 Intelligent-Tiering moves objects between two storage tiers as their access patterns change, and it is ideal for data sets whose access patterns are unknown or hard to predict. Attend this session to learn more about creating cost efficiencies with Amazon S3, when to use which storage class, and how S3 Intelligent-Tiering automates cost savings for you.
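To make the tiering idea concrete, here is a minimal sketch (not from the session) of a lifecycle rule that moves objects into S3 Intelligent-Tiering. The bucket prefix and rule ID are made-up placeholders; the dict matches the shape that boto3's `s3.put_bucket_lifecycle_configuration(LifecycleConfiguration=...)` expects.

```python
# Sketch: an S3 Lifecycle rule that transitions objects to S3 Intelligent-Tiering.
# Prefix and rule ID are placeholders, not real resources.

def intelligent_tiering_rule(prefix, days=0):
    """Build a lifecycle rule moving objects under `prefix` to INTELLIGENT_TIERING."""
    return {
        "ID": f"tier-{prefix.strip('/') or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": days, "StorageClass": "INTELLIGENT_TIERING"}
        ],
    }

# Transition everything under logs/ immediately on upload.
config = {"Rules": [intelligent_tiering_rule("logs/")]}
```

New uploads can also be placed in the class directly by passing `StorageClass="INTELLIGENT_TIERING"` to `put_object`; the lifecycle rule covers objects that already exist.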
Introducing S3 Batch Operations: Managing Billions of Objects in Amazon S3 at... – Amazon Web Services
Amazon S3’s newest management feature, S3 Batch Operations, makes it simple to manage billions of objects with a single API request or a few clicks in the S3 Management Console. With this new feature, customers can change object properties and execute core storage management tasks across any number of objects stored in Amazon S3. Supported operations include copying objects between buckets, replacing tag sets, modifying access controls, applying retention dates, and restoring archived objects from Amazon Glacier. Customers can also use S3 Batch Operations to invoke AWS Lambda functions to execute more complex operations. Attend this session to learn more about S3 Batch Operations and how it can save up to 90% of the time spent managing your S3 objects at scale.
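As a rough illustration (not taken from the session), the "single API request" above is `CreateJob` on the S3 control plane. This sketch builds the request body for a tag-replacement job; the account ID, ARNs, and ETag are placeholders, and the dict mirrors boto3's `s3control.create_job(**job_args)`.

```python
# Sketch of an S3 Batch Operations tagging job request. All identifiers are
# placeholders; in real use they would point at an existing manifest and IAM role.

def batch_tagging_job(account_id, manifest_arn, manifest_etag, role_arn):
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,
        "Operation": {  # replace the tag set of every object in the manifest
            "S3PutObjectTagging": {
                "TagSet": [{"Key": "project", "Value": "archive-2019"}]
            }
        },
        "Manifest": {  # CSV manifest listing the target objects
            "Spec": {"Format": "S3BatchOperations_CSV_20180820",
                     "Fields": ["Bucket", "Key"]},
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        "Priority": 10,
        "RoleArn": role_arn,
        "Report": {"Enabled": False},
    }

job = batch_tagging_job("111122223333",
                        "arn:aws:s3:::example-bucket/manifest.csv",
                        "etag-placeholder",
                        "arn:aws:iam::111122223333:role/batch-ops-role")
```

Swapping the `Operation` block for `S3PutObjectCopy`, `S3InitiateRestoreObject`, or `LambdaInvoke` yields the other job types the abstract lists.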
EFS Performance: Maximizing Performance for Linux/Unix File Systems (STG314-R...
Amazon EFS delivers highly available and highly durable file systems that are distributed across an unconstrained number of storage servers and enables massively parallel access. This means that highly parallelized workloads can drive high levels of aggregate throughput and operations per second. In this chalk talk, we diagram different architectures that leverage this distributed data storage design, and we share best practices around selecting the appropriate performance and throughput mode, configuring clients, ingesting data, and monitoring performance.
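The throughput-mode choice mentioned above is set when the file system is created. Here is a minimal sketch (an illustration, not session material) of the kwargs involved; the shape follows boto3's `efs.create_file_system(**kwargs)`.

```python
# Sketch: kwargs for creating an EFS file system in bursting or provisioned
# throughput mode. Values are illustrative defaults, not recommendations.

def efs_create_args(mibps=None):
    """Bursting mode by default; pass a MiB/s value for provisioned throughput."""
    args = {"PerformanceMode": "generalPurpose", "Encrypted": True}
    if mibps is None:
        args["ThroughputMode"] = "bursting"
    else:
        args["ThroughputMode"] = "provisioned"
        args["ProvisionedThroughputInMibps"] = float(mibps)
    return args
```

Bursting throughput scales with the amount of data stored, so provisioned mode mainly helps workloads whose throughput needs exceed what their stored data would earn.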
Protect & Manage Amazon S3 & Amazon Glacier Objects at Scale (STG316-R1) - AW...
As your data repository grows on AWS using the object storage services Amazon S3 and Amazon Glacier, it becomes increasingly helpful to use particular features to help protect and manage your objects. In this chalk talk, you have the opportunity to speak directly with the AWS engineering team that builds and maintains features like Cross-Region Replication, S3 Storage Class Analysis, S3 Inventory, S3 Lifecycle, Amazon Glacier Vault Lock, and others. Bring your feedback, questions, and expertise to discuss innovative ways to protect data from corruption or malicious and accidental deletion, managing the data lifecycle to reduce costs, identifying wasted storage, and much more.
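Of the features listed above, Cross-Region Replication is a good one to see in configuration form. This is a hedged sketch (the ARNs are placeholders) of the dict passed to boto3's `s3.put_bucket_replication(ReplicationConfiguration=...)`; versioning must be enabled on both buckets for CRR to work.

```python
# Sketch of a Cross-Region Replication configuration. Role and bucket ARNs are
# placeholders; they would name a real IAM role and destination bucket.

def crr_config(role_arn, dest_bucket_arn, prefix=""):
    return {
        "Role": role_arn,
        "Rules": [{
            "ID": "replicate-" + (prefix or "all"),
            "Prefix": prefix,          # replicate only this key prefix
            "Status": "Enabled",
            "Destination": {
                "Bucket": dest_bucket_arn,
                "StorageClass": "STANDARD_IA",  # cheaper class in the replica region
            },
        }],
    }

cfg = crr_config("arn:aws:iam::111122223333:role/crr-role",
                 "arn:aws:s3:::replica-bucket", "logs/")
```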
Get the latest on what we've been developing in Amazon S3. In this session, we share best practices for performance optimization, security, data protection, storage management, and much more. We discuss ways to optimize key naming to increase throughput, apply the appropriate AWS Identity and Access Management (IAM) and encryption configurations, and take advantage of object tagging and other features to enhance security.
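Two of the practices above, encryption configuration and object tagging, can be combined on a single upload. A minimal sketch (bucket, key, and KMS key ID are made-up placeholders) of the kwargs for boto3's `s3.put_object(**kwargs)`:

```python
# Sketch: uploading an object with SSE-KMS encryption and tags in one request.
# All names are placeholders for illustration.
from urllib.parse import urlencode

def tagged_encrypted_put(bucket, key, body, kms_key_id, tags):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_id,
        # PutObject takes tags as a URL-encoded query string
        "Tagging": urlencode(tags),
    }

req = tagged_encrypted_put("example-bucket", "data/report.csv", b"...",
                           "1234abcd-placeholder", {"team": "analytics"})
```

The tags can then drive IAM policy conditions and lifecycle rules, which is what makes tagging a security and management feature rather than mere metadata.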
Migrate Workloads with Large Storage and I/O Demands (GPSTEC311) - AWS re:Inv...
When you consider migrating your on-premises storage workloads to AWS, it's important to consider both performance and features. In this session, you learn how to use I/O profiling before you move your workload to AWS so that you understand your performance needs. Learn to translate your performance and feature requirements into solutions that might include AWS services and partner solutions. In addition, we show you how to keep monitoring your storage workload once you're running on AWS.
Deep Dive on Amazon S3: Manage Operations Across Amazon S3 Objects at Scale (...
As your data stores grow, managing and operating on your stored objects becomes increasingly difficult to scale. In this session, AWS experts demonstrate Amazon S3 features you can use to perform and manage operations across any number of objects, from hundreds to billions, stored in Amazon S3. Learn how to monitor performance, ensure compliance, automate actions, and optimize storage across all your Amazon S3 objects. We also provide relevant use cases that demonstrate the full range of Amazon S3 capabilities and options, such as copying objects across buckets to create development environments, restricting access to sensitive data, or restoring many objects from Amazon Glacier.
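One of the bulk operations named above, restoring many objects from Amazon Glacier, hinges on a small `RestoreRequest` structure. A hedged sketch of that payload (the tier names are the real API values; everything else about the call is illustrative), matching boto3's `s3.restore_object(Bucket=..., Key=..., RestoreRequest=...)`:

```python
# Sketch: the RestoreRequest body for bringing an archived object back from
# Glacier into S3 for a limited number of days.

def restore_request(days=7, tier="Bulk"):
    """Bulk is the cheapest retrieval tier; Standard and Expedited are faster."""
    assert tier in ("Bulk", "Standard", "Expedited")
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}
```

Issuing this request per key in a loop works for hundreds of objects; at millions of objects it is the kind of task S3 Batch Operations is meant to take over.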
One Data Lake, Many Uses: Enabling Multi-Tenant Analytics with Amazon EMR (AN...
One of the benefits of having a data lake is that the same data can be consumed by multiple tenant groups—an efficient way to share a persistent Amazon EMR cluster. The same business data can be safely used for many different analytics and data processing needs. In this session, we discuss steps to make an Amazon EMR cluster multi-tenant for analytics, best practices for a multi-tenant cluster, and solutions to common challenges. We also address the security and governance aspects of a multi-tenant Amazon EMR cluster.
Using Amazon S3 and Amazon Glacier for Backup or Archive Storage (STG339) - A...
Whether you’re using Amazon S3 and Amazon Glacier as a backup target for database dumps, building a fully SEC-compliant archive, or something in between, AWS object storage offers a number of capabilities to ensure that your data stays retained, protected, and compliant with the rules of your business. This interactive session covers established best practices and new features to help you meet your retention requirements while minimizing storage costs.
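For the compliance-style retention described above, S3 Object Lock is the relevant mechanism. Here is a minimal sketch (an illustration, with placeholder defaults) of a bucket-default retention rule, shaped for boto3's `s3.put_object_lock_configuration(ObjectLockConfiguration=...)`; note that Object Lock must be enabled when the bucket is created.

```python
# Sketch: a default Object Lock retention rule for an immutable archive bucket.
# The mode and period here are illustrative, not a compliance recommendation.

def object_lock_config(mode="COMPLIANCE", days=365):
    """COMPLIANCE retention cannot be shortened by any user, including root;
    GOVERNANCE allows users with a special permission to override it."""
    assert mode in ("COMPLIANCE", "GOVERNANCE")
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": mode, "Days": days}},
    }
```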
Data Lake Implementation: Processing and Querying Data in Place (STG204-R1) -...
Flexibility is key when building and scaling a data lake. The analytics solutions you use in the future will almost certainly be different from the ones you use today, and choosing the right storage architecture gives you the agility to quickly experiment and migrate with the latest analytics solutions. In this session, we explore best practices for building a data lake in Amazon S3 and Amazon Glacier that leverages a wide array of AWS, open-source, and third-party analytics tools. We explore use cases for traditional analytics tools, including Amazon EMR and AWS Glue, as well as query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select.
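Of the query-in-place tools above, S3 Select is the simplest to show: it runs a SQL expression against one object and returns only the matching bytes. A hedged sketch (bucket, key, and query are placeholders) of the kwargs for boto3's `s3.select_object_content(**kwargs)`:

```python
# Sketch: an S3 Select request that filters a CSV object server-side,
# so only matching rows cross the network. Names are illustrative.

def select_args(bucket, key, sql):
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": sql,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"},
                               "CompressionType": "NONE"},
        "OutputSerialization": {"JSON": {}},
    }

args = select_args("example-data-lake", "events/2018/11/events.csv",
                   "SELECT s.user_id FROM s3object s WHERE s.country = 'DE'")
```

Glacier Select takes an almost identical expression through a retrieval job, which is why the two are usually presented together.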
Aurora Serverless: Scalable, Cost-Effective Application Deployment (DAT336) -...
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Aurora (MySQL-compatible edition) where the database automatically starts up, shuts down, and scales up or down capacity based on your application's needs. It enables you to run your database in the cloud without managing any database instances. Aurora Serverless is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. In this session, we explore these use cases, take a look under the hood, and delve into the future of serverless databases. We also hear a case study from a customer building new functionality on top of Aurora Serverless.
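As a rough sketch of what "autoscaling configuration" means in practice (not session material; identifiers are placeholders), an Aurora Serverless cluster is created with `EngineMode="serverless"` plus a scaling configuration, following the shape of boto3's `rds.create_db_cluster(**kwargs)`:

```python
# Sketch: parameters for an Aurora Serverless (MySQL-compatible) cluster.
# The capacity bounds and pause timeout are illustrative values.

def serverless_cluster_args(cluster_id, min_acu=2, max_acu=16):
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora",            # MySQL-compatible edition
        "EngineMode": "serverless",
        "ScalingConfiguration": {
            "MinCapacity": min_acu,    # Aurora capacity units (ACUs)
            "MaxCapacity": max_acu,
            "AutoPause": True,         # pause compute entirely when idle
            "SecondsUntilAutoPause": 300,
        },
    }
```

The auto-pause setting is what makes the "infrequent, intermittent" workloads above cost-effective: a paused cluster bills only for storage.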
Get the Most out of Your Amazon Elasticsearch Service Domain (ANT334-R1) - AW...
The document discusses strategies for optimizing an Amazon Elasticsearch deployment to handle tenant data from a sports technology platform with thousands of organizations. It describes several iterations tried, including using a single index, separate indexes per tenant, and combining tenants into shared indexes. The final approach involved zero-downtime reindexing of tenant data to migrate organizations between indices in order to reduce shard counts and optimize performance and costs.
Build Your Own Log Analytics Solutions on AWS (ANT323-R) - AWS re:Invent 2018
With Amazon Elasticsearch Service's simplicity come many opportunities to use it as a back end for real-time application and infrastructure monitoring. With this wealth of opportunities comes sprawl: developers in your organization are deploying Amazon Elasticsearch Service for many different workloads and purposes. Should you centralize into one Amazon Elasticsearch Service domain? What are the tradeoffs in scale and cost? How do you control access to the data and dashboards? How do you structure your indexes: single-tenant or multi-tenant? In this session, we explore whether, when, and how to centralize logging across your organization to minimize cost and maximize value, and we learn how Autodesk has built a unified log analytics solution using Amazon Elasticsearch Service.
Migrating Your NoSQL Database to Amazon DynamoDB (DAT314) - AWS re:Invent 2018
AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) can help migrate databases from many supported data sources to supported targets. In this session, we review how the combination of AWS DMS and AWS SCT can help migrate your NoSQL databases, such as MongoDB and Cassandra, to Amazon DynamoDB. We provide an overview of AWS DMS and AWS SCT, and we demonstrate migrating a sample Cassandra database into DynamoDB.
Build Data Engineering Platforms with Amazon EMR (ANT204) - AWS re:Invent 2018
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
Extending Analytics Beyond the Data Warehouse, ft. Warner Bros. Analytics (AN...
Companies have valuable data that they might not be analyzing due to the complexity, scalability, and performance issues of loading the data into their data warehouse. With the right tools, you can extend your analytics to query data in your data lake—with no loading required. Amazon Redshift Spectrum extends the analytic power of Amazon Redshift beyond data stored in your data warehouse to run SQL queries directly against vast amounts of unstructured data in your Amazon S3 data lake. This gives you the freedom to store your data where you want, in the format you want, and have it available for analytics when you need it. Join a discussion with an Amazon Redshift lead engineer to ask questions and learn more about how you can extend your analytics beyond your data warehouse.
Big Data Analytics Architectural Patterns and Best Practices (ANT201-R1) - AW...
This document discusses big data analytics architectural patterns and best practices. It covers collecting and storing data from various sources, processing and analyzing data using tools like Amazon Redshift, Amazon Athena and Amazon EMR, and selecting the appropriate tools based on factors like data structure, access patterns, and data temperature. It also discusses stream/real-time analytics tools and machine learning approaches.
Querying Data in Place with AWS Object Storage Features and Analytics Tools (...
AWS offers tools and services that make analyzing and processing petabytes of data in the cloud faster, simpler, and more cost effective. In this chalk talk, AWS experts provide an overview of our querying data-in-place services, such as Amazon S3 Select, Amazon Glacier Select, Amazon Athena, and Amazon Redshift Spectrum. We explore best practices around using them with other analytics services (like Amazon EMR and AWS Glue) and third-party tools to build data lakes in Amazon S3 and Amazon Glacier and deploy other analytics solutions. Our AWS experts also provide sample use cases.
Tape Is a Four Letter Word: Back Up to the Cloud in Under an Hour (STG201) - ...
Tape backups. Yes, they're still a thing. If you want to stop using tapes but need to store immutable backups for compliance or operational reasons, attend this session to learn how to make an easy switch to a cloud-based virtual tape library (VTL). AWS Storage Gateway provides a seamless drop-in replacement for tape backups with its Tape Gateway. It works with the major backup software products, so you simply change the target for your backups, and they go to a VTL that stores virtual tapes on Amazon S3 and Amazon Glacier. Come see how it works.
Migrate Your Hadoop/Spark Workload to Amazon EMR and Architect It for Securit...
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop/Spark to AWS in order to save costs, increase availability, and improve performance. In this session, AWS customers Airbnb and Guardian Life discuss how they migrated their workloads to Amazon EMR. This session focuses on key motivations to move to the cloud. It details key architectural changes and the benefits of migrating Hadoop/Spark workloads to the cloud.
Metrics-Driven Performance Tuning for AWS Glue ETL Jobs (ANT331) - AWS re:Inv...
AWS Glue provides a horizontally scalable platform for running ETL jobs against a wide variety of data sources. In this builders session, we cover techniques for understanding and optimizing the performance of your jobs using Glue job metrics. Learn how to identify bottlenecks on the driver and executors, identify and fix data skew, tune the number of DPUs, and address common memory errors.
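To make the "tune the number of DPUs" step concrete, here is a hypothetical helper (not from the session; the backlog metric and scaling policy are assumptions) that decides the capacity for the next run. The returned kwargs follow boto3's `glue.start_job_run(**kwargs)`; the metric value would come from Glue job metrics via `get_job_run` or CloudWatch.

```python
# Sketch: metric-driven DPU scaling for a Glue ETL job. The doubling policy
# is an illustrative heuristic, not an AWS recommendation.

def next_run_args(job_name, executor_backlog, current_dpus, max_dpus=100):
    """Increase capacity while executors report a backlog of pending tasks."""
    dpus = current_dpus
    if executor_backlog > 0:
        dpus = min(current_dpus * 2, max_dpus)  # double until the backlog clears
    return {"JobName": job_name, "AllocatedCapacity": dpus}
```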
This session provides IT pros and application owners an overview of AWS options for building hybrid storage architectures or even entirely migrating datacenter storage to the AWS cloud. The AWS Storage Gateway connects existing on-premises block, file or tape storage systems to AWS cloud storage over the WAN in a hybrid model. The AWS Snow family of physical devices can capture, pre-process and migrate data into and out of AWS without any network connection at all. Join us to learn how you can close down datacenters, reduce storage footprints, and build solutions for tiering, data lakes, backup, disaster recovery, and migration.
Build on Amazon Aurora with MySQL Compatibility (DAT348-R4) - AWS re:Invent 2018
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. Join this session, and get started with the MySQL-compatible edition, discuss your existing application running on Aurora, or learn about recently announced features, such as Serverless or Parallel Query.
Data Privacy & Governance in the Age of Big Data: Deploy a De-Identified Data...
Come to this session to learn a new approach to reducing risk and costs while increasing productivity, organizational agility, and customer experience, resulting in a competitive advantage and revenue growth. We share how a de-identified data lake on AWS can help you comply with General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) requirements by addressing the problem at its root.
Building Serverless Applications Using AWS AppSync and Amazon Neptune (SRV307...
In this session, learn how to build a data-driven, serverless calorie-tracker application with real-time, offline, and data-syncing capabilities. The application provides an overview of your progress toward the calorie-intake goal you've set, the remaining recommended intake, and a breakdown of calories consumed. Use Amazon Cognito to build sign-up and sign-in capabilities as well as federated login with Facebook. The application integrates with AWS AppSync to provide real-time data from multiple data sources through GraphQL, as well as offline capability. AWS AppSync makes it easy to access this data and provide the exact information your application needs. As a bonus, learn to use Amazon Neptune, a fully managed graph database, to build a personalized recommendation engine for calorie intake.
Build Your First Big Data Application on AWS (ANT213-R1) - AWS re:Invent 2018
Do you want to increase your knowledge of AWS big data web services and launch your first big data application on the cloud? In this session, we walk you through simplifying big data processing as a data bus comprising ingest, store, process, and visualize. You will build a big data application using AWS managed services, including Amazon Athena, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. Along the way, we review architecture design patterns for big data applications and give you access to a take-home lab so you can rebuild and customize the application yourself. To get the most from this session, bring your own laptop and have some familiarity with AWS services.
by Peter Dalton, Principal Consultant, AWS, and Taz Sayed, Sr. Technical Account Manager, AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into the Amazon Redshift data warehouse; data lake services, including Amazon EMR, Amazon Athena, and Amazon Redshift Spectrum; log analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
Lock It Down: Configure End-to-End Security & Access Control on Amazon EMR (A...
Amazon EMR helps you process all your data for analytics, but with great scale comes great responsibility—you need to make sure that data is secured by design. In this chalk talk, we walk through how to configure your environment to take full advantage of comprehensive security controls, including identifying sensitive data, encrypting data and managing keys, authenticating and authorizing users, utilizing fine-grained access controls, and using audit logs to demonstrate compliance.
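The encryption items above are bundled into an EMR security configuration, a JSON document attached to the cluster. A minimal sketch (the KMS key ARN is a placeholder, and in-transit encryption is deliberately left off since it needs TLS certificates), shaped for boto3's `emr.create_security_configuration(Name=..., SecurityConfiguration=...)`:

```python
# Sketch: an EMR security configuration enabling at-rest encryption of
# S3 data written through EMRFS. The key ARN is a placeholder.
import json

def emr_security_configuration(kms_key_arn):
    return json.dumps({
        "EncryptionConfiguration": {
            "EnableAtRestEncryption": True,
            "AtRestEncryptionConfiguration": {
                "S3EncryptionConfiguration": {
                    "EncryptionMode": "SSE-KMS",
                    "AwsKmsKey": kms_key_arn,
                },
            },
            # In-transit encryption requires provisioning TLS certificates,
            # so it is omitted from this sketch.
            "EnableInTransitEncryption": False,
        }
    })
```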
Data Transformation Patterns in AWS - AWS Online Tech Talks
Learning Objectives:
- Learn how to accelerate common data transformations from a variety of data
- Learn how to efficiently orchestrate transformation jobs
- Learn best practices and methodologies in data preparation for analytics
Deep Dive on Amazon S3 Storage Classes: Creating Cost Efficiencies across You...
Amazon S3 supports a range of storage classes that can help you cost-effectively store data without impacting performance or availability. Each of our storage classes offers different data-access levels, retrieval times, and costs to support various use cases. In this session, Amazon S3 experts dive deep into the different Amazon S3 storage classes, their respective attributes, and when you should use them.
Cost-effective Data Management with S3 Batch Operations and the S3 Storage Classes
Abstract: As your data lake grows, it becomes increasingly important to manage objects at scale and optimize storage costs and resources. In this session, AWS experts provide an overview of S3’s capabilities that let you manage data at the object, bucket, and account levels. Learn about and watch demos of S3 Batch Operations, a new feature that lets you take action across thousands, millions, and even billions of objects with a single API request or a few clicks in the S3 Management Console. Also learn cost-optimization best practices for storing objects across the S3 Storage Classes. This includes an overview of our newest storage class, S3 Intelligent-Tiering – the first cloud storage class to automatically deliver cost savings by moving objects with unknown or changing access patterns between two storage tiers: one optimized for frequent access and a lower-cost tier optimized for infrequent access.
Data Lake Implementation: Processing and Querying Data in Place (STG204-R1) -...Amazon Web Services
Flexibility is key when building and scaling a data lake. The analytics solutions you use in the future will almost certainly be different from the ones you use today, and choosing the right storage architecture gives you the agility to quickly experiment and migrate with the latest analytics solutions. In this session, we explore best practices for building a data lake in Amazon S3 and Amazon Glacier for leveraging an entire array of AWS, open source, and third-party analytics tools. We explore use cases for traditional analytics tools, including Amazon EMR and AWS Glue, as well as query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select.
Aurora Serverless: Scalable, Cost-Effective Application Deployment (DAT336) -...Amazon Web Services
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Aurora (MySQL-compatible edition) where the database automatically starts up, shuts down, and scales up or down capacity based on your application's needs. It enables you to run your database in the cloud without managing any database instances. Aurora Serverless is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. In this session, we explore these use cases, take a look under the hood, and delve into the future of serverless databases. We also hear a case study from a customer building new functionality on top of Aurora Serverless.
Get the Most out of Your Amazon Elasticsearch Service Domain (ANT334-R1) - AW...Amazon Web Services
The document discusses strategies for optimizing an Amazon Elasticsearch deployment to handle tenant data from a sports technology platform with thousands of organizations. It describes several iterations tried, including using a single index, separate indexes per tenant, and combining tenants into shared indexes. The final approach involved zero-downtime reindexing of tenant data to migrate organizations between indices in order to reduce shard counts and optimize performance and costs.
Build Your Own Log Analytics Solutions on AWS (ANT323-R) - AWS re:Invent 2018Amazon Web Services
With Amazon Elasticsearch Service's simplicity comes a multitude of opportunity to use it as a back end for real-time application and infrastructure monitoring. With this wealth of opportunities comes sprawl - developers in your organization are deploying Amazon Elasticsearch Service for many different workloads and many different purposes. Should you centralize into one Amazon Elasticsearch Service domain? What are the tradeoffs in scale and cost? How do you control access to the data and dashboards? How do you structure your indexes - single tenant or multi-tenant? In this session, we'll explore whether, when, and how to centralize logging across your organization to minimize cost and maximize value and learn how Autodesk has built a unified log analytics solution using Amazon Elasticsearch Service.
Migrating Your NoSQL Database to Amazon DynamoDB (DAT314) - AWS re:Invent 2018Amazon Web Services
AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) can help migrate databases from many supported data sources to supported targets. In this session, we review how the combination of AWS DMS and AWS SCT can help migrate your NoSQL databases, such as MongoDB and Cassandra, to Amazon DynamoDB. We provide an overview of AWS DMS and AWS SCT, and we demonstrate migrating a sample Cassandra database into DynamoDB.
Build Data Engineering Platforms with Amazon EMR (ANT204) - AWS re:Invent 2018Amazon Web Services
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
Extending Analytics Beyond the Data Warehouse, ft. Warner Bros. Analytics (AN...Amazon Web Services
Companies have valuable data that they might not be analyzing due to the complexity, scalability, and performance issues of loading the data into their data warehouse. With the right tools, you can extend your analytics to query data in your data lake—with no loading required. Amazon Redshift Spectrum extends the analytic power of Amazon Redshift beyond data stored in your data warehouse to run SQL queries directly against vast amounts of unstructured data in your Amazon S3 data lake. This gives you the freedom to store your data where you want, in the format you want, and have it available for analytics when you need it. Join a discussion with an Amazon Redshift lead engineer to ask questions and learn more about how you can extend your analytics beyond your data warehouse.
Big Data Analytics Architectural Patterns and Best Practices (ANT201-R1) - AW...Amazon Web Services
This document discusses big data analytics architectural patterns and best practices. It covers collecting and storing data from various sources, processing and analyzing data using tools like Amazon Redshift, Amazon Athena and Amazon EMR, and selecting the appropriate tools based on factors like data structure, access patterns, and data temperature. It also discusses stream/real-time analytics tools and machine learning approaches.
Querying Data in Place with AWS Object Storage Features and Analytics Tools (...Amazon Web Services
AWS offers tools and services that make analyzing and processing petabytes of data in the cloud faster, simpler, and more cost effective. In this chalk talk, AWS experts provide an overview of our querying data-in-place services, such as Amazon S3 Select, Amazon Glacier Select, Amazon Athena, and Amazon Redshift Spectrum. We explore best practices around using them with other analytics services (like Amazon EMR and AWS Glue) and third-party tools to build data lakes in Amazon S3 and Amazon Glacier and deploy other analytics solutions. Our AWS experts also provide sample use cases.
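As a concrete sketch of "querying data in place," the snippet below builds the parameters for an S3 Select request that pushes a SQL filter down to the object itself, so only matching rows leave S3. The bucket, key, and query are hypothetical, and the actual boto3 call is left as a comment because it requires AWS credentials.

```python
def build_s3_select_request(bucket, key, sql):
    """Parameters for an S3 Select call that filters a gzipped CSV
    object server-side, streaming back only the matching rows."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": sql,
        "InputSerialization": {
            "CSV": {"FileHeaderInfo": "USE"},  # first row holds column names
            "CompressionType": "GZIP",
        },
        "OutputSerialization": {"JSON": {}},   # results come back as JSON lines
    }

# Hypothetical data set: only rows for one origin airport are returned.
params = build_s3_select_request(
    "my-data-lake", "flights/2018/flights.csv.gz",
    "SELECT s.origin, s.dest FROM S3Object s WHERE s.origin = 'SEA'",
)
# With credentials you would run:
#   boto3.client("s3").select_object_content(**params)
```

The same shape (expression plus input/output serialization) also applies to Amazon Glacier Select against archives.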
Tape Is a Four Letter Word: Back Up to the Cloud in Under an Hour (STG201) - ...Amazon Web Services
Tape backups. Yes, they're still a thing. If you want to stop using tapes but need to store immutable backups for compliance or operational reasons, attend this session to learn how to make an easy switch to a cloud-based virtual tape library (VTL). AWS Storage Gateway provides a seamless drop-in replacement for tape backups with its Tape Gateway. It works with the major backup software products, so you simply change the target for your backups, and they go to a VTL that stores virtual tapes on Amazon S3 and Amazon Glacier. Come see how it works.
Migrate Your Hadoop/Spark Workload to Amazon EMR and Architect It for Securit...Amazon Web Services
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop/Spark to AWS in order to save costs, increase availability, and improve performance. In this session, AWS customers Airbnb and Guardian Life discuss how they migrated their workloads to Amazon EMR. This session focuses on key motivations to move to the cloud. It details key architectural changes and the benefits of migrating Hadoop/Spark workloads to the cloud.
Metrics-Driven Performance Tuning for AWS Glue ETL Jobs (ANT331) - AWS re:Inv...Amazon Web Services
AWS Glue provides a horizontally scalable platform for running ETL jobs against a wide variety of data sources. In this builders session, we cover techniques for understanding and optimizing the performance of your jobs using Glue job metrics. Learn how to identify bottlenecks on the driver and executors, identify and fix data skew, tune the number of DPUs, and address common memory errors.
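Glue job metrics are published to Amazon CloudWatch, so tuning starts with pulling them. The sketch below builds the parameters for a CloudWatch GetMetricStatistics call against a per-job Glue metric; the job name is hypothetical, and the metric/dimension names follow the Glue job-metrics naming scheme but should be verified against your own jobs.

```python
from datetime import datetime, timedelta

def glue_metric_query(job_name, metric="glue.driver.aggregate.bytesRead"):
    """Parameters for a CloudWatch GetMetricStatistics call that pulls a
    per-job Glue metric (here, total bytes read across executors)."""
    now = datetime(2018, 11, 27, 12, 0, 0)  # fixed timestamp for reproducibility
    return {
        "Namespace": "Glue",
        "MetricName": metric,
        "Dimensions": [
            {"Name": "JobName", "Value": job_name},
            {"Name": "JobRunId", "Value": "ALL"},
            {"Name": "Type", "Value": "gauge"},
        ],
        "StartTime": now - timedelta(hours=1),
        "EndTime": now,
        "Period": 60,               # one datapoint per minute
        "Statistics": ["Maximum"],
    }

query = glue_metric_query("my-etl-job")
# With credentials: boto3.client("cloudwatch").get_metric_statistics(**query)
```

Comparing driver versus per-executor values of metrics like this is one way to spot the data skew and memory pressure the session covers.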
This session provides IT pros and application owners an overview of AWS options for building hybrid storage architectures or even entirely migrating datacenter storage to the AWS cloud. The AWS Storage Gateway connects existing on-premises block, file or tape storage systems to AWS cloud storage over the WAN in a hybrid model. The AWS Snow family of physical devices can capture, pre-process and migrate data into and out of AWS without any network connection at all. Join us to learn how you can close down datacenters, reduce storage footprints, and build solutions for tiering, data lakes, backup, disaster recovery, and migration.
Build on Amazon Aurora with MySQL Compatibility (DAT348-R4) - AWS re:Invent 2018Amazon Web Services
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. Join this session, and get started with the MySQL-compatible edition, discuss your existing application running on Aurora, or learn about recently announced features, such as Serverless or Parallel Query.
Data Privacy & Governance in the Age of Big Data: Deploy a De-Identified Data...Amazon Web Services
Come to this session to learn a new approach to reducing risk and costs while increasing productivity, organizational agility, and customer experience, resulting in a competitive advantage and revenue growth. We share how a de-identified data lake on AWS can help you comply with General Data Protection Regulation (GDPR) and California Consumer Privacy Act requirements by addressing the issue at its root.
Building Serverless Applications Using AWS AppSync and Amazon Neptune (SRV307...Amazon Web Services
In this session, learn how to build a data-driven, serverless calorie-tracker application with real-time, offline, and data-syncing capabilities. The application provides an overview of your progress toward the calorie-intake goal you've set, the recommended intake remaining, and a breakdown of calories consumed. Use Amazon Cognito to build sign-up and sign-in capabilities as well as federated login with Facebook. The application integrates with AWS AppSync to provide real-time data from multiple data sources through GraphQL technology as well as offline capability. AWS AppSync makes it easy to access this data and provide the exact information your application needs. As a bonus, learn to use Amazon Neptune, a fully managed graph database, to build a personalized recommendation engine for calorie intake.
Build Your First Big Data Application on AWS (ANT213-R1) - AWS re:Invent 2018Amazon Web Services
Do you want to increase your knowledge of AWS big data web services and launch your first big data application on the cloud? In this session, we walk you through simplifying big data processing as a data bus comprising ingest, store, process, and visualize. You will build a big data application using AWS managed services, including Amazon Athena, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. Along the way, we review architecture design patterns for big data applications and give you access to a take-home lab so you can rebuild and customize the application yourself. To get the most from this session, bring your own laptop and have some familiarity with AWS services.
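To make the "ingest" stage of that data bus concrete, the sketch below shapes a single record for a Kinesis PutRecord call: the payload is an opaque blob and the partition key controls shard routing. The stream name and event fields are hypothetical, and the live call is shown as a comment.

```python
import json

def kinesis_record(event):
    """Shape of one record for a Kinesis PutRecord call. Records with the
    same partition key land on the same shard, preserving their order."""
    return {
        "StreamName": "clickstream",          # hypothetical stream name
        "Data": json.dumps(event).encode(),   # sent as an opaque byte blob
        "PartitionKey": str(event["user_id"]),
    }

rec = kinesis_record({"user_id": 42, "action": "page_view"})
# With credentials: boto3.client("kinesis").put_record(**rec)
```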
by Peter Dalton, Principal Consultant AWS and Taz Sayed, Sr Technical Account Manager AWS
AWS Data & Analytics Week is an opportunity to learn about Amazon's family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into the Amazon Redshift data warehouse; data lake services including Amazon EMR, Amazon Athena, and Amazon Redshift Spectrum; log analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
Lock It Down: Configure End-to-End Security & Access Control on Amazon EMR (A...Amazon Web Services
Amazon EMR helps you process all your data for analytics, but with great scale comes great responsibility—you need to make sure that data is secured by design. In this chalk talk, we walk through how to configure your environment to take full advantage of comprehensive security controls: including identifying sensitive data, encrypting data and managing keys, authenticating and authorizing users, utilizing fine-grained access controls, and using audit logs to demonstrate compliance.
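As an illustration of the encryption piece of those controls, the dictionary below follows the shape of an EMR security configuration that enables encryption in transit and at rest. The KMS key ARN is a placeholder, and the structure should be checked against the EMR documentation before use.

```python
import json

# Hypothetical KMS key ARN.
kms_key = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"

security_config = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": True,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            # Encrypt EMRFS data written to S3 with a KMS-managed key.
            "S3EncryptionConfiguration": {
                "EncryptionMode": "SSE-KMS",
                "AwsKmsKey": kms_key,
            },
            # Encrypt cluster-local disks (HDFS, scratch space).
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": kms_key,
            },
        },
    }
}
# With credentials: boto3.client("emr").create_security_configuration(
#     Name="encrypted-cluster",
#     SecurityConfiguration=json.dumps(security_config))
```

A named security configuration like this is then referenced when launching clusters, so every cluster inherits the same encryption posture.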
Data Transformation Patterns in AWS - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn how to accelerate common data transformations from a variety of data
- Learn how to efficiently orchestrate transformation jobs
- Learn best practices and methodologies in data preparation for analytics
Deep Dive on Amazon S3 Storage Classes: Creating Cost Efficiencies across You...Amazon Web Services
Amazon S3 supports a range of storage classes that can help you cost-effectively store data without impacting performance or availability. Each of our storage classes offers different data-access levels, retrieval times, and costs to support various use cases. In this session, Amazon S3 experts dive deep into the different Amazon S3 storage classes, their respective attributes, and when you should use them.
Cost-effective-Data-Management-with-S3-Batch-Operations-and-the-S3-Storage-Cl...Amazon Web Services
Cost-effective Data Management with S3 Batch Operations and the S3 Storage Classes Abstract: As your data lake grows, it becomes increasingly important to manage objects at scale and optimize storage costs and resources. In this session, AWS experts provide an overview of S3's capabilities that let you manage data at the object, bucket, and account levels. Learn about and watch demos for S3 Batch Operations, a new feature that lets you take action across thousands, millions, and even billions of objects with a single API request or a few clicks in the S3 Management Console. Also learn cost-optimization best practices for storing objects across the S3 Storage Classes. This includes an overview of our newest storage class, S3 Intelligent-Tiering, the first cloud storage class to automatically deliver cost savings by moving objects with unknown or changing access patterns between two storage tiers: one optimized for frequent access and a lower-cost tier optimized for infrequent access.
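To show what "a single API request" looks like, the sketch below builds the request body for an S3 Batch Operations job that applies a tag to every object listed in a CSV manifest. The ARNs, ETag, and tag values are placeholders, and the live S3 Control call is left as a comment.

```python
def build_batch_job(account_id, manifest_arn, manifest_etag, role_arn):
    """Request body for an S3 Batch Operations CreateJob call that PUTs
    a new tag on every object listed in a CSV manifest."""
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,   # require explicit confirmation to run
        "Operation": {
            "S3PutObjectTagging": {
                "TagSet": [{"Key": "project", "Value": "archive-2018"}]
            }
        },
        "Manifest": {
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        "Report": {"Enabled": False},
        "Priority": 10,
        "RoleArn": role_arn,
    }

job = build_batch_job(
    "111122223333",
    "arn:aws:s3:::my-bucket/manifest.csv",
    "example-etag",
    "arn:aws:iam::111122223333:role/batch-ops-role",
)
# With credentials: boto3.client("s3control").create_job(**job)
```

Swapping the `Operation` block for a copy, ACL, retention, or Lambda-invoke operation covers the other batch actions the session describes.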
Cost-Effective Data Management with S3 Batch Operations and the S3 Storage Cl...Amazon Web Services
The document summarizes a presentation about cost-effective data management using Amazon S3 storage classes and batch operations. It discusses different storage classes and their use cases, how to optimize data placement using storage class analysis and lifecycle policies, and introduces S3 Intelligent Tiering for automated cost savings. It also outlines how S3 Batch Operations allows managing billions of objects with a single request to save on operations and development time.
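The lifecycle-policy side of that cost optimization can be sketched as a plain configuration document. The rule below, with a hypothetical bucket and prefix, moves objects into S3 Intelligent-Tiering after 30 days and archives them to Glacier after a year; the live call is shown as a comment.

```python
lifecycle_policy = {
    "Rules": [
        {
            "ID": "tier-then-archive",
            "Filter": {"Prefix": "logs/"},   # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [
                # Let S3 manage hot/cold tiering automatically after 30 days...
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                # ...then archive to Glacier after a year.
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 2555},    # delete after roughly 7 years
        }
    ]
}
# With credentials: boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_policy)
```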
Drive Down the Cost of your Data Lake by Using the Right Data TieringBoaz Ziniman
Amazon S3 supports a wide range of storage classes to help you cost-effectively store your data. Each of the S3 Storage Classes is designed to support different use cases while reliably protecting your data. In this session, we will look into the different S3 Storage Classes, their respective key features, and the use cases they support, while focusing on the newest storage class, S3 Intelligent-Tiering, the first cloud storage class that automatically optimizes storage costs for data with changing access patterns.
The document discusses building data lakes on AWS. It describes how data lakes extend the traditional data warehouse approach by allowing storage of both structured and unstructured data at massive scales. Amazon S3 provides durable, available, scalable, and easy-to-use storage for the data lake. AWS Glue crawls data to create a data catalog and can automate ETL processes. Amazon Athena and Amazon EMR enable interactive analysis and big data processing through SQL and Spark. The data lake architecture on AWS supports a variety of analytical use cases.
The document introduces two new Amazon S3 features: Amazon S3 Select, which allows users to filter and analyze object data directly in S3 using standard SQL expressions, and Amazon S3 One Zone-IA storage class, which stores object data within a single availability zone at lower costs than S3 Standard storage. It provides overviews and demos of each feature.
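The One Zone-IA class is selected per object at write time, so the choice can be expressed as a tiny helper. The sketch below, with hypothetical bucket and key names, picks One Zone-IA only for data you could regenerate, since it is stored in a single Availability Zone at a lower cost than S3 Standard.

```python
def upload_params(bucket, key, recreatable):
    """Parameters for an S3 PutObject call: choose One Zone-IA for data
    you can regenerate (single-AZ, lower cost), S3 Standard otherwise."""
    return {
        "Bucket": bucket,
        "Key": key,
        "StorageClass": "ONEZONE_IA" if recreatable else "STANDARD",
    }

thumb = upload_params("my-media", "thumbnails/img-001.jpg", recreatable=True)
master = upload_params("my-media", "masters/img-001.raw", recreatable=False)
# With credentials: boto3.client("s3").put_object(Body=data, **thumb)
```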
AWS Storage Leadership Session: What's New in Amazon S3, Amazon EFS, Amazon E...Amazon Web Services
Mai-Lan Tomsen Bukovec, VP of Amazon S3, introduces the latest innovations across all AWS storage services. In this keynote address, we announce new storage capabilities, and we talk about features and services that make AWS storage unique. We focus on new innovations in object storage, file storage, block storage, and data transfer services. You also hear from executives from companies that are major AWS storage customers, Sony and Expedia, about how they're using AWS storage to create a competitive advantage in their businesses.
Best Practices for Amazon S3 and Amazon Glacier (STG203-R2) - AWS re:Invent 2018Amazon Web Services
Learn best practices for Amazon S3 performance optimization, security, data protection, storage management, and much more. In this session, we look at common Amazon S3 use cases and ways to manage large volumes of data within Amazon S3. We discuss the latest performance improvements and how they impact previous guidance. We also talk about the Amazon S3 data resilience model and how architecture for the AWS Regions and Availability Zones impact architecture for fault tolerance.
Adding to its existing AI services, AWS continues to bridge the gap for developers to build ML solutions without requiring data science expertise. In this session, learn about the new services announced at re:Invent (Amazon Forecast, Amazon Textract, and Amazon Personalize) and get a preview of what to expect when building time-series models, OCR, and recommendation engines with little to no data science experience.
Machine Learning: Beyond the Hype. Presentation slides from Darin Briskman, Chief Technical Evangelist, Amazon Web Services at the Canadian Executive Cloud & DevSecOps Summit. May 4, 2018 in Toronto and May 11, 2018 in Vancouver. Hosted by TriNimbus
by Mikhail Prudnikov, Sr. Solutions Architect, AWS
In-memory data stores, such as ElastiCache for Redis, enable applications where response times are measured in microseconds. We'll look at how to design and deploy high-performance applications using ElastiCache, Aurora, DynamoDB, DAX, and Lambda, then do a hands-on lab to build one ourselves. You'll need a laptop with a Firefox or Chrome browser.
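The core pattern behind pairing ElastiCache with a database is cache-aside: check memory first, fall back to the database on a miss, then populate the cache with a TTL. The sketch below implements it with a plain dict standing in for both ElastiCache and the backing store (Aurora or DynamoDB), so the logic is visible without any AWS dependency.

```python
import time

class CacheAside:
    """Cache-aside pattern: serve reads from the in-memory store when
    possible; on a miss, read the backing database and cache the value
    with a TTL. The dicts stand in for ElastiCache and Aurora/DynamoDB."""

    def __init__(self, db, ttl_seconds=60):
        self.db = db              # stand-in for the backing database
        self.ttl = ttl_seconds
        self.cache = {}           # stand-in for ElastiCache for Redis
        self.hits = self.misses = 0

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]       # fresh cached value: microsecond-class read
        self.misses += 1
        value = self.db[key]      # fall through to the database
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

store = CacheAside({"user:1": "Ada"})
first = store.get("user:1")   # miss: loaded from the database, then cached
second = store.get("user:1")  # hit: served from memory
```

With a real Redis endpoint, `self.cache` would be replaced by `SET key value EX ttl` / `GET key` calls; the control flow is identical.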
Get the latest on what we've been developing in Amazon S3. In this session, learn about new advances in S3 performance, security, data protection, storage management, and much more. We'll discuss how to apply the appropriate bucket policies and encryption configurations to enhance security, use S3 Select to accelerate queries, and take advantage of object tagging for data classification.
This document contains multiple sections and diagrams related to Amazon Web Services (AWS) including:
- Diagrams showing synchronous and asynchronous workflows between AWS services like API Gateway, Lambda, DynamoDB, and S3.
- A template in AWS Serverless Application Model (SAM) format defining an AWS Lambda function and API Gateway endpoint.
- Images related to continuous integration/continuous delivery (CI/CD) pipelines in AWS.
The document discusses managed NoSQL databases, including Amazon DynamoDB, Amazon Neptune, and Amazon ElastiCache. It provides an overview of each service, highlighting key features such as DynamoDB being a fast and flexible key-value and document database, Neptune being a fully managed graph database, and ElastiCache providing an in-memory cache. It also discusses why organizations are adopting non-relational databases to address needs for massive scale, low latency, and schema flexibility for highly connected internet applications.
Social Media Analytics using Amazon QuickSight - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Connect to AWS and non-AWS data sources
- Prepare data by joining tables, using SQL queries, adding calculated fields, changing field names and data types, and other techniques
- Create charts and graphs with various chart types and filtering capabilities
This document discusses AWS services for startups. It includes:
1. An overview of Amazon Elastic Container Service (ECS) and AWS Fargate for deploying containerized applications without managing infrastructure.
2. A diagram of a sample CI/CD pipeline using CodePipeline, CodeBuild, CodeDeploy, and CodeCommit to continuously deploy code changes.
3. Growth hacking techniques like using AWS Analytics SDK and KPI metrics to optimize the customer experience.
Query your data in S3 with SQL and optimize for cost and performanceAWS Germany
Streaming services allow you to ingest and analyze events continuously in real time. One of Big Data's principles is to store raw data as long as possible, so you can answer future questions. If the data is permanently stored in Amazon Simple Storage Service (S3), it can be queried at any time with Amazon Athena without spinning up a database.
This session shows step by step how the data should be structured so that both costs and response times are reduced when using Athena. The details and effects of compression, partitions, and column storage formats are compared. Finally, AWS Glue is used as a fully managed service for Extract Transform Load (ETL) to derive optimized views from the raw data for frequently issued queries.
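The structuring the session describes can be sketched concretely: Hive-style partitioned keys let Athena prune whole prefixes, and a columnar format like Parquet reduces bytes scanned (and therefore cost). The table name, schema, and bucket below are hypothetical.

```python
def partitioned_key(prefix, year, month, day, filename):
    """Hive-style partition layout (year=/month=/day=) that Athena can
    prune when the query filters on those columns."""
    return f"{prefix}/year={year}/month={month:02d}/day={day:02d}/{filename}"

# DDL for an Athena table over that layout, stored as Parquet.
ddl = """
CREATE EXTERNAL TABLE events (user_id string, action string)
PARTITIONED BY (year int, month int, day int)
STORED AS PARQUET
LOCATION 's3://my-data-lake/events/'
""".strip()

key = partitioned_key("events", 2018, 11, 27, "part-0000.snappy.parquet")
# A query filtering on year/month/day then scans only one day's prefix:
#   SELECT action, count(*) FROM events
#   WHERE year = 2018 AND month = 11 AND day = 27 GROUP BY action
```

An AWS Glue ETL job is a natural place to write the raw stream back out in this partitioned, columnar form.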
Build Data Lakes and Analytics on AWS: Patterns & Best PracticesAmazon Web Services
With over 90% of today's data generated in the last two years, the rate of data growth is showing no sign of slowing down. In this session, we step through the challenges and best practices for capturing data, understanding what data you own, driving insights, and predicting the future using AWS services. We frame the session and demonstrations around common pitfalls of building data lakes and how to successfully drive analytics and insights from data. We also discuss architecture patterns that bring together key AWS services, including Amazon S3, AWS Glue, Amazon Athena, Amazon Kinesis, and Amazon Machine Learning. Discover the real-world application of data lakes for roles including data scientists and business users.
Stephen Moon, Sr. Solutions Architect, Amazon Web Services
James Juniper, Solution Architect for the Geo-Community Cloud, Natural Resources Canada
Similar to Optimizing Costs in Amazon S3: Creating Cost Efficiencies with Amazon S3 Storage Classes & Introducing S3 Intelligent-Tiering (STG398-R) - AWS re:Invent 2018
How to Build Forecasting Services Leveraging ML and Deep Learn...Amazon Web Services
Forecasting is an important process for many companies and is used in a variety of areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session, we show how to pre-process data that contains a time component and then use an algorithm that produces an accurate forecast from the type of data being analyzed.
Big Data for Startups: How to Create Big Data Applications in Server...Amazon Web Services
The variety and volume of data created every day is accelerating faster and faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters appears to be an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services let us break through these limits.
We will therefore see how it is possible to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session, we present the service's main features and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. During this period, we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, enabled us to build more reliable and scalable applications. In this session, we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances Amazon Web Services
The use of containers continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session, we explore the characteristics of Spot Instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot Instances to run applications of various types, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Market Offering Unique with Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, including through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; they often involved manual activities, occasionally causing application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and delivering significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances through Chef and Puppet workloads.
Learn how to leverage AWS OpsWorks to ensure the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows WorkloadsAmazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it's important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar, we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14, from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a wide range of AWS services, fully leveraging the potential of the AWS cloud while protecting existing VMware investments.
Build Your First Serverless Ledger-Based App with QLDB and NodeJSAmazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions or to track the supply chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build custom, complex systems by providing a fully managed, serverless ledger database.
In this session, we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great user experience. In this session, we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We dive into several scenarios, showing how AppSync can help solve these use cases by building modern APIs with real-time and offline data-update capabilities.
We also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
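To make the GraphQL piece tangible, the strings below sketch the two operation types an AppSync API typically exposes: a query for on-demand reads and a subscription for the real-time updates mentioned above. The schema fields (a calorie-tracker-style example) are hypothetical, not Sky Italia's actual API.

```python
# A query: the client asks for exactly the fields it needs, nothing more.
QUERY = """
query DailyProgress($date: AWSDate!) {
  dailySummary(date: $date) {
    caloriesConsumed
    caloriesRemaining
    meals { name calories }
  }
}
""".strip()

# A subscription: AppSync pushes each new event to connected clients,
# which is how real-time updates (e.g., live sports scores) are delivered.
SUBSCRIPTION = """
subscription OnMealLogged {
  onMealLogged { name calories loggedAt }
}
""".strip()
```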
Oracle Databases and VMware Cloud™ on AWS: Debunking the MythsAmazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, achieving significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, along with performance risks that can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to facilitate and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive into the architecture and demonstrate how to fully leverage the potential of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer that controls deployment and lifecycle. In this session, we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.