In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives.
This AWS Lambda tutorial will give you a clear understanding of how a serverless compute service works. Towards the end, we will also create a full-fledged project using AWS Lambda. Below are the topics covered in this tutorial:
1. AWS Compute Domain
2. Why AWS Lambda?
3. What is AWS Lambda?
4. AWS SDKs
5. Using AWS Lambda with Eclipse IDE
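As a minimal sketch of what the tutorial builds toward, a Python Lambda handler looks like the following (the function name, event shape, and sample values are illustrative assumptions, not taken from the tutorial):

```python
import json

def lambda_handler(event, context):
    # Lambda passes the triggering event as a dict plus a context object;
    # the return value becomes the invocation result (here, shaped like an
    # API Gateway proxy response).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }

# Locally, you can exercise the handler by calling it directly with a sample event.
result = lambda_handler({"name": "AWS"}, None)
print(result["body"])
```

Invoking the handler locally like this is a quick way to test logic before deploying; in AWS, the same function would be wired to a trigger such as API Gateway.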
KB Kookmin Bank Has Started - An Easy, Fast Strategy for Adopting Cloud Governance - 강병억, AWS Solutions Architect / 장강홍, Deputy General Manager, Cloud Platform Division, ...Amazon Web Services Korea
This session introduces a multi-account cloud governance strategy that lets you apply the safety measures required for using cloud services easily and quickly, even as diverse workloads are added. It also looks at how KB Kookmin Bank came to adopt the cloud, how it addressed the regulatory requirements that financial companies must meet in order to adopt the cloud, and how it is effectively preparing to expand its cloud workloads using the cloud governance environment it has built.
Key Steps for Setting up your AWS Journey for Success - BusinessAmazon Web Services
When building anything, its longevity begins with establishing a solid foundation. On AWS, you will need to ensure your application is built on top of best practices. We will help you make the best use of AWS Support and Training, set up your accounts correctly, and adopt strategies to help you optimise your AWS spend.
Speakers: David Ly, Account Manager and Nathan Besh, Technical Account Manager, Amazon Web Services
Featured Customer - Domain
Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (SRV355-R1) -...Amazon Web Services
Building and deploying serverless applications introduces new challenges for developers whose development workflows are optimized for traditional VM-based applications. In this session, we discuss a method for automating the deployment of serverless applications running on AWS Lambda. First, we cover how you can model and express serverless applications using the open source AWS Serverless Application Model (AWS SAM). Then, we discuss how you can use CI/CD tooling from AWS CodePipeline and AWS CodeBuild, and how to bootstrap the entire toolset using AWS CodeStar. We also cover best practices to embed in your deployment workflow specific to serverless applications.
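As a hedged illustration of the AWS SAM modeling the session describes (resource names, runtime, and paths below are assumptions, not taken from the talk), a minimal SAM template for a Lambda function behind an API Gateway endpoint looks roughly like:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler   # module.function in the code package
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api                 # implicit API Gateway REST endpoint
          Properties:
            Path: /hello
            Method: get
```

In a CI/CD pipeline of the kind the session covers, CodeBuild would package this template and CodePipeline would deploy the resulting CloudFormation change set.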
Maintaining control of sensitive data is critical in the highly regulated financial investments environment that Vanguard operates in. This need for data control complicated Vanguard's move to the cloud. They needed to expand globally to provide a great user experience while at the same time maintaining their mainframe-based backend data architecture. In this session, Vanguard discusses the creative approach they took to decouple their monolithic backend architecture to empower a microservices architecture while maintaining compliance with regulations. They also cover solutions implemented to successfully meet their requirements for security, latency, and end-state consistency.
Speaker: Jon Austin, Enterprise Solutions Architect, AWS
Quickly Building a Cloud-Based Architecture with Core AWS Services - 문종민, Solutions Architect, AWS :: AWS Summit Seo...Amazon Web Services Korea
Quickly Building a Cloud-Based Architecture with Core AWS Services
문종민, Solutions Architect, AWS
For those new to AWS, this session introduces from a technical perspective the most central of AWS's 150+ services: compute, storage, networking, and more. We look at how you can quickly build systems on AWS using services such as Amazon EC2, S3, RDS, and VPC, whether building new services in the cloud or migrating existing data center workloads.
Microsoft Azure Redis Cache is based on the popular open source Redis Cache. It gives you access to a secure, dedicated Redis cache, managed by Microsoft. A cache created using Azure Redis Cache is accessible from any application within Microsoft Azure.
Learn about AWS' philosophy and recommended best practices for building microservices applications, how AWS services like Lambda and API Gateway benefit developers building microservices apps, and how customers are using these two and other AWS services to deliver their microservices apps.
Serverless computing allows you to build and run applications without the need for provisioning or managing servers. With serverless computing, you can build web, mobile, and IoT backends; run stream processing or big data workloads; run chatbots, and more. In this session, you'll learn how to get started with serverless computing with AWS Lambda, which lets you run code without provisioning or managing servers. We'll introduce you to the basics of building with Lambda and how you can benefit from features such as continuous scaling, built-in high availability, integrations with AWS and third-party apps, and subsecond metering pricing. We'll also introduce you to the broader portfolio of AWS services that help you build serverless applications with Lambda, including Amazon API Gateway, Amazon DynamoDB, AWS Step Functions, and more.
When moving data to the cloud, customers need to understand the optimal methods for different use cases, the types of data they are moving, and the network resources available, among other factors. AWS migration and transfer solutions range from data migration with limited connectivity, hybrid cloud storage, and frequent B2B file transfers to online and offline data transfers. In this session, we show you how you can accelerate and simplify data migration and transfer to and from the AWS Cloud.
Hands-On! AWS Hybrid Networking (AWS Direct Connect and VPN Demo Session) - 강동환, AWS Solutions Architect :: A...Amazon Web Services Korea
Watch the session recording: https://youtu.be/yMgwrkqfcbg
Take an in-depth look, through live demos, at the various network connectivity options that link the AWS Cloud and on-premises environments as one. Demos of configuring VPN, Direct Connect, Direct Connect Gateway, Public VIF, Transit Gateway, and more let you see a variety of scenarios you can apply yourself.
AWS Graviton processors provide workload-optimized solutions across CPU, memory, storage, and networking. For workloads that search for optimal combinations under limited resources, such as planning problems, choosing compute-optimized instances is especially effective. In this session, we share a case in which a delivery-route optimization solution used AWS Graviton instances to achieve higher performance at lower cost, along with practical know-how for adopting AWS Graviton.
This AWS Certification tutorial will walk you through all the certifications offered by AWS, the important topics to learn, and the exam pattern. It will also cover the job trends and the demand for each certification in the market. This AWS Certification tutorial is ideal for those who want to become an AWS Certified Professional.
Below are the topics covered in this tutorial:
1. Amazon Web Services
2. AWS Job Trends
3. AWS Certifications
4. AWS Exam
5. How to Prepare for your AWS Exam?
6. AWS Learning Path
#awscertification #amazoncloud #awstraining #awsjobs
Introduction to Amazon SageMaker Model Training :: 최영준, Solutions Architect AI/ML Expert, AWS :: AWS AIML Special Webinar, Amazon Web Services Korea
For those new to Amazon SageMaker Training and Processing, this session explains how they work and provides a guide you can follow. Users create an Amazon SageMaker notebook, then run training code on a separately defined training cluster of GPUs or high-performance CPUs, enabling efficient model training, data preprocessing, inference post-processing, and model evaluation. We also introduce how to use Amazon SageMaker Experiments to systematically organize training experiments and compare evaluation metrics.
Network visibility into the traffic traversing your AWS infrastructure - SVC2...Amazon Web Services
Having visibility into the Amazon VPC infrastructure is a foundational element that any cloud administrator needs to maintain and operate an AWS infrastructure that is secure and functional. Visibility into your AWS infrastructure becomes increasingly important as it scales, because it gives you the ability to make key planning decisions and maintain security. This session is intended for anyone wanting to learn about network visibility on AWS, and it includes information about partners and real-life customer use cases. Come see how you, too, can gain insights into the network traffic that is traversing your AWS infrastructure.
Amazon S3 and Amazon Glacier provide developers and IT teams with secure, durable, highly-scalable object storage with no minimum fees or setup costs. In this webcast, we will provide an introduction to each service, dive deep into key features of Amazon S3 and Amazon Glacier, and explore different use cases that these services optimize.
Learning Objectives:
• Business value of Amazon S3 and Amazon Glacier
• Leveraging S3 for web applications, media delivery, big data analytics and backup
• Leveraging Amazon Glacier to build cost effective archives
• Understand the life cycle management of AWS’s storage services
Who Should Attend:
• Developers, DevOps Engineers, Engineers and System Administrators
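The life cycle management mentioned in the objectives above is typically expressed as a set of rules that transition objects to colder storage and eventually expire them. A minimal sketch in Python (the bucket name, prefix, and day counts are illustrative; the commented-out boto3 call shows where the rules would be applied):

```python
# Lifecycle rules that move objects under logs/ to Glacier after 90 days
# and delete them after 365 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3 (not executed here; requires AWS credentials and a real bucket):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )

rule = lifecycle_configuration["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])
```

The transition day count must precede the expiration day count, which the rule above respects (90 before 365).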
Eric Durand once again takes us on a journey through storage solutions for digital media, using the AWS Cloud.
This presentation was delivered at AWS Toronto, during the Media and Entertainment Symposium.
SRV403 Deep Dive on Object Storage: Amazon S3 and Amazon GlacierAmazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide – with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. See how Amazon Athena runs serverless analytics on your data and hear about expedited and bulk retrievals from Amazon Glacier. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts.
Deep Dive on Object Storage: Amazon S3 and Amazon Glacier | AWS Public Sector...Amazon Web Services
In this session, storage experts will walk you through Amazon S3 and Amazon Glacier, bulk data repositories that can deliver 99.999999999% durability and scale past trillions of objects worldwide - with cost points competitive against tape archives. Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. See how Amazon Athena runs "query in place" analytics on your data and hear about the new expedited and bulk retrievals from Amazon Glacier. Learn how AWS customers have built solutions that turn their data from a cost into a strategic asset, and bring your toughest questions straight to our experts. Learn More: https://aws.amazon.com/government-education/
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big DataAmazon Web Services
Analyzing large data sets requires significant compute and storage capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud model, where applications can easily scale up and down based on demand. Learn how Amazon S3 can help scale your big data platform. Hear from Redfin and Twitter about how they build their big data platforms on AWS and how they use S3 as an integral piece of their big data platforms.
Darry Osborne takes us on a journey across AWS Cloud-based storage solutions. He explains S3, Glacier, and Snowball, and ends with Snowmobile, for petabyte-scale data migration. He also talks about use cases and customer stories. Presented in Montreal at the AWS Innovate show.
Deep Dive on Amazon S3 - March 2017 AWS Online Tech TalksAmazon Web Services
Learn about new and existing Amazon S3 features that can help you better protect your data, save on cost, and improve usability, security, and performance. We will cover a wide variety of Amazon S3 features and go into depth on several newer features with configuration and code snippets, so you can apply the learnings on to your object storage workloads.
Learning Objectives:
• Review best practices to reduce costs, protect against data loss, and increase performance in Amazon S3
• Learn about new S3 storage management features that help you align storage with business needs
• Understand data security capabilities available in S3 that help protect against malicious or accidental deletion or other data loss
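Two of the protection features the objectives above point to, versioning and default server-side encryption, are enabled with small configuration payloads. A sketch in Python (the bucket name is illustrative; the boto3 calls are commented out so the snippet stays self-contained):

```python
# Request payloads for two common S3 data-protection settings.

# Versioning keeps prior object versions, protecting against accidental
# deletion or overwrite.
versioning_configuration = {"Status": "Enabled"}

# Default bucket encryption applies SSE-S3 (AES-256) to new objects.
encryption_configuration = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    ]
}

# With boto3 (not executed here; requires AWS credentials and a real bucket):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(Bucket="my-example-bucket",
#                          VersioningConfiguration=versioning_configuration)
# s3.put_bucket_encryption(Bucket="my-example-bucket",
#                          ServerSideEncryptionConfiguration=encryption_configuration)
```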
Review this content as Amazon Web Services' (AWS) experts share best practices that are helping libraries save money, be more flexible and cope with the ever-increasing volume of data they are facing.
We will introduce you to AWS Cloud services and explore typical library use cases on AWS with a particular focus on storage and archiving use cases that provide exceptional durability and cost savings.
Deep Dive On Object Storage: Amazon S3 and Amazon Glacier - AWS PS Summit Can...Amazon Web Services
Learn about the different ways you can accelerate data transfer into S3 and get a close look at new tools to secure and manage your data more efficiently. Discover how AWS customers have built solutions that turn their data into a strategic asset.
Speakers: Ben Thurgood, Solutions Architect, Amazon Web Services, with Timothy Eckersley, Enterprise Architect, NSW Pathology
Level: 300
Today organizations find themselves in a data rich world with a growing need for increased agility and accessibility of all this data for analysis and deriving keen insights to drive strategic decisions. Creating a data lake helps you to manage all the disparate sources of data you are collecting (in its original format) and extract value. In this session, learn how to architect and implement a data lake in the AWS Cloud. Learn about best practices as we walk through architectural blueprints.
The goal of this course is to give you an in-depth knowledge of Amazon S3 and hands-on practice using it, so you can use it in your own projects or organization. This course covers the basics as well as the more advanced parts that sometimes get left out, such as command line commands and detailed security policy examples.
For full video course please visit:
https://www.udemy.com/aws-foundations-amazon-s3-mastery-bootcamp/?couponCode=SLIDESHARE
Active Archiving with Amazon S3 and Tiering to Amazon Glacier - March 2017 AW...Amazon Web Services
Most organizations have data that they need to retain but access infrequently, if ever. In cases where this data needs to be accessible at a moment’s notice, it’s hard to save money by moving to archival storage because access times on these platforms are slower. Now, customers are using Amazon S3 & Glacier for “Active Archiving” to reduce storage costs while maintaining the flexibility of instant access. In this tech talk, we’ll show you how to implement Active Archiving with AWS Object Storage services, and we’ll provide some real-world examples of how AWS customers are saving money with these capabilities today.
Learning Outcomes:
• Define Active Archiving, and understand how it is different from traditional cold archiving
• Review the cost modeling tools available to determine if Active Archiving is a good fit for your organization
• Learn about best practices for using AWS Object Storage features & functionality to enable Active Archiving
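The cost-modeling idea in the outcomes above can be sketched with simple arithmetic. Assuming illustrative per-GB-month prices (placeholders, not current AWS pricing), this compares keeping rarely accessed data in S3 Standard versus tiering it to Glacier:

```python
def monthly_storage_cost(gb, price_per_gb_month):
    """Flat monthly storage cost for a given data volume."""
    return gb * price_per_gb_month

# Illustrative prices only -- check the AWS pricing pages for real numbers.
S3_STANDARD = 0.023   # $/GB-month (assumed)
GLACIER = 0.004       # $/GB-month (assumed)

archive_gb = 10_000   # 10 TB of rarely accessed data

standard_cost = monthly_storage_cost(archive_gb, S3_STANDARD)
glacier_cost = monthly_storage_cost(archive_gb, GLACIER)
savings = standard_cost - glacier_cost

print(f"S3 Standard: ${standard_cost:.2f}/month")
print(f"Glacier:     ${glacier_cost:.2f}/month")
print(f"Savings:     ${savings:.2f}/month")
```

A real model would also include retrieval fees and request costs, which is exactly the trade-off the talk's "is Active Archiving a good fit" analysis weighs.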
AWS May Webinar Series - Getting Started: Storage with Amazon S3 and Amazon G...Amazon Web Services
If you are interested to know more about AWS Chicago Summit, please use the following to register: http://amzn.to/1RooPPL
Amazon S3 and Amazon Glacier provide developers and IT teams with secure, durable, highly-scalable object storage with no minimum fees or setup costs. In this webcast, we will provide an introduction to each service, dive deep into key features of Amazon S3 and Amazon Glacier, and explore different use cases that these services optimize.
Learning Objectives:
• Business value of Amazon S3 and Amazon Glacier
• Leveraging S3 for web applications, media delivery, big data analytics and backup
• Leveraging Amazon Glacier to build cost effective archives
• Understand the life cycle management of AWS' storage services
Data migration at petabyte scale is now a simple service from AWS. You can easily migrate large volumes of data from on-premises environments to the cloud, quickly get started with the cloud as a backup target, or burst workloads between your on-premises environments and the AWS Cloud. Learn about AWS Snowball, AWS Snowball Edge, AWS Snowmobile and AWS Storage Gateway, and understand which one is the right fit for your requirements. We will go through customer use cases, review the different applications used, and help you cut IT spend and management time on hardware and backup solutions.
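A quick way to decide between online transfer and a Snowball-style device is to estimate how long the network path would take. A sketch (the link speed, utilization, and data size are example values, not thresholds from the session):

```python
def transfer_days(terabytes, mbps, utilization=0.8):
    """Days needed to move `terabytes` over a `mbps` link at the given utilization."""
    bits = terabytes * 8 * 10**12            # decimal TB -> bits
    seconds = bits / (mbps * 10**6 * utilization)
    return seconds / 86400

# Moving 100 TB over a 1 Gbps link at 80% utilization:
days = transfer_days(100, 1000)
print(f"{days:.1f} days")  # prints "11.6 days"
```

When the estimate runs to weeks or months, shipping a physical device such as Snowball (or Snowmobile at petabyte scale) usually wins.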
In this session, storage experts will walk you through the object storage offering, Amazon S3, a bulk data repository that can deliver 99.999999999% durability and scale past trillions of objects worldwide. Learn about the different ways you can accelerate data transfer to S3 and get a close look at some of the new tools available for you to secure and manage your data more efficiently. Announced at re:Invent 2016, see how you can use Amazon Athena with S3 to run serverless analytics on your data and as a bonus, walk away with some code snippets to use with S3. Hear AWS customers talk about the solutions they have built with S3 to turn their data into a strategic asset, instead of just a cost center. And bring your toughest questions to our experts on hand and walk away that much smarter on how to use object storage from AWS.
With the advent of high definition, on-demand digital media, media and entertainment companies are challenged to evolve their IT infrastructure fast enough to keep up with the demands of their customers. Producing, editing and distributing media assets cost-effectively requires an automated supply chain workflow supported by significant IT infrastructure.
In this Amazon Web Services (AWS) webinar you can learn how you can make use of the economical, elastic, and on-demand compute and storage capacity that AWS offers to address the challenges faced by media & entertainment companies.
You can view a recording of this webinar on YouTube here: http://youtu.be/257u5gWuDdM
Big Data adoption success using AWS Big Data Services - Pop-up Loft TLV 2017Amazon Web Services
In today’s session we will share an overview of the typical challenges encountered when adopting Big Data, and how the AWS Big Data platform allows you to tackle these challenges and leverage the right analytical/Big Data solutions to make your strategy successful (whiteboard presentation).
Similar to Deep Dive on Object Storage: Amazon S3 and Amazon Glacier
How to Build Forecasting Services Using ML and Deep Learning Algorithms - Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we illustrate how to pre-process data containing a time component and then use an algorithm that produces an accurate forecast based on the type of data analyzed.
Big Data for Startups: How to Create Big Data Applications in Serverless Mode - Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents an unrepeatable opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud, and serverless services in particular, lets us break through these limits.
We will see how you can develop Big Data applications quickly, without worrying about infrastructure, dedicating all your resources to developing your ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to dramatically increase agility and release speed and, ultimately, to build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances - Amazon Web Services
The use of containers continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot instances, leading to average savings of 70% compared to On-Demand instances. In this session we explore the characteristics of Spot instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
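The savings figures quoted in sessions like this are straightforward to model. A sketch with illustrative hourly prices (assumed values; real Spot prices vary by instance type, region, and time):

```python
def spot_savings_pct(on_demand_price, spot_price):
    """Percentage saved by running on Spot instead of On-Demand."""
    return (1 - spot_price / on_demand_price) * 100

# Illustrative hourly prices (assumed, not actual AWS pricing).
on_demand = 0.10
spot = 0.03

print(f"Savings: {spot_savings_pct(on_demand, spot):.0f}%")  # prints "Savings: 70%"
```

Because Spot capacity can be reclaimed, the stateless, flexible workloads described above are the ones that capture this discount safely.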
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate the adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Learning Services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; they often involved manual activities that occasionally caused application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and delivering significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that automates and simplifies the management and deployment of EC2 instances by means of Chef and Puppet.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads — Amazon Web Services
Do you want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, fully exploiting the potential of the AWS cloud while protecting your existing VMware investments.
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads, however, can create complexity when modernizing and refactoring applications, along with performance risks that can be introduced when moving applications out of on-premises data centers.
Build your first serverless ledger-based app with QLDB and NodeJS — Amazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to operate.
Amazon QLDB removes the need to build custom, complex systems by providing a fully managed serverless ledger database.
In this session we will see how to build a complete serverless application that uses the features of QLDB.
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering a great user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, seeing how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths debunked — Amazon Web Services
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive into the architecture and show how to fully exploit the potential of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and the related lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
2. • Technical Evangelist, Developer Advocate,
… Software Engineer
• Own bed in Finland
• Previously:
• Solutions Architect @AWS
• Lead Cloud Architect @Dreambroker
• Director of Engineering, Software Engineer, DevOps, Manager, ... @Hdm
• Researcher @Nokia Research Center
• and a bunch of other stuff.
• Climber; likes ginger shots.
3. What to Expect from the Session
• What you need to know about S3 on AWS.
• Architectural design patterns with S3.
• Best practices & tips.
• Tools to help you.
5. Amazon S3 today
Amazon S3 holds trillions of objects and regularly peaks at millions of requests per second.
Trillion = 1,000,000,000,000 (one million million; 10^12; SI prefix: tera-) in American and British English.
Trillion = 1,000,000,000,000,000,000 (one million million million; 10^18; SI prefix: exa-) in many non-English-speaking countries.
https://en.wikipedia.org/wiki/Trillion
6. Netflix delivers billions of hours of content from Amazon S3.
SmugMug stores billions of photos and images on Amazon S3.
Airbnb handles over 10 PB of user images on Amazon S3.
Soundcloud currently stores 2.5 PB of data on Amazon Glacier.
Nasdaq uses Amazon S3 to support years of historical tick data down to the millisecond.
7. “We currently log 20 terabytes of new data each day, and have around 10 petabytes of data in S3.” (2014)
FINRA stores over 700 TB of data on Amazon S3 for low-cost, durable, scalable storage and uses Amazon EMR for scalable compute workloads using Hive, Presto, and Spark.
Sony moved over 1M hours of video from magnetic tape to Glacier for digital preservation.
8. Choice of storage classes on S3
• Active data → S3 Standard
• Infrequently accessed data → Standard - Infrequent Access
• Archive data → Amazon Glacier
9. Choice of storage classes on S3
S3 Standard (active data)
• Big data analysis
• Content distribution
• Static website hosting
Standard - IA (infrequently accessed data)
• Backup & archive
• Disaster recovery
• File sync & share
• Long-retained data
Amazon Glacier (archive data)
• Long term archives
• Digital preservation
• Magnetic tape replacement
11. Back up data to Amazon S3
(Diagram: Amazon Route 53 resolves http://example.com to a traditional server in the on-premises infrastructure, which copies its data to an Amazon S3 bucket.)
12. Data collection into Amazon S3
AWS Direct Connect · AWS Snowball · AWS Snowball Edge · AWS Snowmobile · ISV Connectors · Amazon Kinesis Firehose · S3 Transfer Acceleration · AWS Storage Gateway
13. Fun fact
Since October 2015, AWS Snowball has moved over 5 billion objects into Amazon S3, and AWS Snowball appliances have traveled a distance equal to circling the world more than 100 times.
19. Amazon Storage Partner Solutions
aws.amazon.com/backup-recovery/partner-solutions/
Note: Represents a sample of storage partners
• Primary Storage — solutions that leverage file, block, object, and streamed data formats as an extension to on-premises storage
• Backup and Recovery — solutions that leverage Amazon S3 for durable data backup
• Archive — solutions that leverage Amazon Glacier for durable and cost-effective long-term data backup
22. Protect your data from the “oops”
Versioning
• Protects from unintended user deletes and application failures
• New version with every upload
• Easy retrieval; roll back to previous versions
• Three versioning states: default, versioning-enabled, suspended
MFA (multi-factor authentication) protection on delete
• Requires additional authentication to:
• Change the versioning state of your bucket
• Permanently delete an object version
26. Amazon CloudFront (CDN)
• Cache content at the edge.
• Lower load on origin.
• Dynamic and static content.
• Custom SSL certificates.
• Low TTLs.
27. Faster upload over long distances: S3 Transfer Acceleration
(Diagram: uploader → AWS edge location → optimized throughput → S3 bucket)
• Change your endpoint, not your code
• No firewall changes or client software
• Longer distance, larger files, more benefit
• Faster or free
• 82 global edge locations
Try it at S3speedtest.com
31. AWS SDKs
• Automatically switching to multipart transfers when a file is over a specific size threshold
• Uploading/downloading a file in parallel
• Progress callbacks to monitor transfers
• Retries
33. Distributing key names
Add randomness to the beginning of the key name
– E.g. with a hash or reversed timestamp (ssmmhhddmmyy)
<my_bucket>/521335461-2013_11_13.jpg
<my_bucket>/465330151-2013_11_13.jpg
<my_bucket>/987331160-2013_11_13.jpg
<my_bucket>/465765461-2013_11_13.jpg
<my_bucket>/125631151-2013_11_13.jpg
<my_bucket>/934563160-2013_11_13.jpg
<my_bucket>/532132341-2013_11_13.jpg
<my_bucket>/565437681-2013_11_13.jpg
<my_bucket>/234567460-2013_11_13.jpg
<my_bucket>/456767561-2013_11_13.jpg
<my_bucket>/345565651-2013_11_13.jpg
<my_bucket>/431345660-2013_11_13.jpg
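Prefixing keys with randomness, as in the listing above, can be done with a short hash of the object name. This is a sketch; the helper name and 8-character prefix length are arbitrary choices for illustration.

```python
import hashlib

def hashed_key(object_name: str, hash_len: int = 8) -> str:
    # Prepend a short hex hash so sequential names (e.g. dates)
    # spread across S3 index partitions.
    prefix = hashlib.md5(object_name.encode("utf-8")).hexdigest()[:hash_len]
    return f"{prefix}-{object_name}"

# e.g. hashed_key("2013_11_13.jpg") -> "<8 hex chars>-2013_11_13.jpg"
```

Because the prefix is derived deterministically from the name, the same object name always maps to the same key, so readers can recompute it without a lookup table.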
34. Organize your data with object tags
Manage data based on what it is, as opposed to where it’s located
Up to 10 tags per object
• Tag your objects with key-value pairs
• Write policies once, based on the type of data
• Put an object with tags, or add tags to existing objects
44. Amazon Athena: SQL Query on S3
• No loading of data
• Serverless
• Supports text, CSV, TSV, JSON, AVRO
• Columnar formats: Apache ORC & Parquet
• Access via Console or JDBC driver
• $5 per TB scanned from S3
49. Amazon Redshift Spectrum
Run SQL queries directly against data in S3 using thousands of nodes
• Fast at exabyte scale
• Elastic & highly available
• On-demand, pay-per-query
• High concurrency: multiple clusters access the same data
• No ETL: query data in place using open file formats
• Full Amazon Redshift SQL support
50. S3 Inventory
Save time · daily or weekly delivery · delivered to an S3 bucket · CSV file output
Fields: Bucket, Key, Version Id, Is Latest, Delete Marker, Size, Last Modified, ETag, StorageClass, Multipart Uploaded, Replication Status
51. Indexing S3 content using Elasticsearch
(Diagram: source data lands in an S3 bucket; an ObjectCreate event invokes an AWS Lambda function, which indexes a record into Amazon Elasticsearch Service; search queries then go to Elasticsearch.)
54. Amazon S3 with event-driven workflow
(Diagram: source data lands in Amazon S3; in response to events, S3 can invoke a Lambda function, publish to an SNS topic, or send a message to an SQS queue.)
55. Event-driven validation layer on Amazon S3
(Diagram: source data lands in the data staging layer at /data/source-raw; an input validation and conversion layer built on AWS Lambda writes it to /data/source-validated.)
56. Event-driven photo manipulation with Lambda
(Diagram: users upload photos to an S3 source bucket; a Lambda function triggered on PUTs, e.g. to resize images, writes the results to an S3 destination bucket.)
59. Cognito support for identity
(Diagram: users sign in with a username and password or through a SAML identity provider; Amazon Cognito user pools authenticate the user and get AWS credentials, which the app uses to call API Gateway, Lambda, DynamoDB, and S3.)
60. Leverage Amazon S3 directly from the app
(Diagram: MyApp’s “Save Pic” obtains credentials from Amazon Cognito, then puts the object directly to Amazon S3.)
you have a lot to cover and you are happy to field questions after the talk.
A trillion is 1,000,000,000,000, also known as 10 to the 12th power, or one million million. It’s such a large number that it’s hard to get your head around it, so sometimes “trillion” just means “wow, a lot.”
To easily process and analyze 50 Gigabytes of data daily, Airbnb uses Amazon Elastic MapReduce (Amazon EMR). Airbnb is also using Amazon Simple Storage Service (Amazon S3) to house backups and static files, including 10 terabytes of user pictures.
SoundCloud uses a combination of Amazon Simple Storage Service (Amazon S3) and Amazon Glacier as its storage solution. The audio files are placed in Amazon S3 and distributed from there via the SoundCloud website. All files are also copied to Amazon Glacier, to ensure that the data is available at all times, even in the event of a disaster. The company currently stores 2.5 PB of data on Amazon Glacier.
Early in the 2 ½ year migration of FINRA’s Market Regulation Portfolio to the AWS Cloud, FINRA developed a system on AWS to replace an on-premises solution that allowed analysts to query this trade activity. This solution provided fast random access across trillions of trade records, which would quickly grow to over 700 TB of data.
All 3 storage classes are highly durable, designed for 11 9s of durability.
Standard is designed for active data and hot workloads: high performance, designed for 4 9s of availability. It is the general-purpose storage class; for new, frequently accessed data, start with Standard. Starting at 2.3¢/GB.
As data ages, it is accessed less. Standard-IA is designed for colder, less frequently accessed data. It offers the same high performance, high throughput, and low latency as S3 Standard, with 3 9s of availability. Starting at 1.25¢/GB, 45% lower in storage cost; there is a $0.01/GB retrieval cost.
As the data ages further, no one actively interacts with it, but you need it for record keeping. Glacier is designed for long-term archival storage at 4/10 of a cent per GB, with 3 retrieval options ranging from minutes to hours; you choose depending on how quickly you need the data.
lifecycle
For data that is less frequently accessed, you can leverage Amazon S3 Standard-IA to save on cost while still benefiting from the same great durability and performance as S3 Standard.
In addition to transitioning your data to Standard-IA as its characteristics change, you can also use Standard-IA for new data that fits the bill for infrequent access. For example, you can use the Standard-IA storage class to store detailed application logs that you analyse infrequently, and save on storage cost.
If your storage is for big data analysis, content distribution, or static website hosting, consider S3 Standard, which is designed for active, hot analytics workloads (e.g. Netflix and FINRA).
For backup and archive or disaster recovery, you generally don’t access the data until the rare occasion when you need it, and when you need it, you need it right away; Standard-IA is designed for those use cases. Consider putting such data directly into Standard-IA.
Glacier is for long-term archive: Sony moved over 1M hours of video from magnetic tape to Glacier for digital preservation.
As data ages, you access the objects less.
Asynchronous: Storage Gateway.
You may need a variety of tools to transfer data in and out of the cloud, suited to the value, time, and workflow demands of the move.
To stream data into S3, use Kinesis Firehose, a fully managed streaming service.
To connect your on-premises infrastructure with the AWS cloud, use AWS Storage Gateway; the added file gateway supports the NFS protocol so you can connect to your S3 bucket as an NFS mount point.
To migrate PBs of data and do some local processing as the data arrives, use Snowball Edge: 100 TB of storage capacity with processing power.
If you have to migrate hundreds of PB to exabytes of data quickly and securely onto the cloud, AWS Snowmobile can help: a secure data truck with 100 PB of capacity that can be dispatched to your site for high-speed data migration.
Snowball Edge adds a native S3-compatible endpoint on the appliance, a clustered mode where multiple Snowball Edges act as a single durable storage pool, the ability to run Lambda functions as data is copied to the appliance, a hosted File Storage Gateway to provide NAS capabilities, and integration with customers’ VPCs to connect to AWS resources.
Export: you never want to leave, but if you do, we get your data out just as easily as we got it in.
For a dedicated network connection from your premises to AWS, consider AWS Direct Connect.
Up to 100 PB per Snowmobile.
Snowmobile uses multiple layers of security designed to protect your data including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an optional escort security vehicle while in transit.
All data is encrypted with 256-bit encryption keys managed by KMS
Visualize so you can take a data driven approach to manage your storage
You can configure a storage class analysis policy to monitor an entire bucket, a prefix, or object tag. Once infrequent access pattern is observed, you can easily create a new lifecycle age policy based on the results.
Storage class analysis observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class.
You can export S3 Analysis data to use it in the tools of your choice such as Amazon Quicksight
If you have data on-premises, you probably have software from one of these industry leaders. Some of these can help you migrate workload into AWS so your applications can run on AWS or in a hybrid architecture model. They make it easy for you to migrate or scale your growth leveraging S3 and Glacier as cloud storage targets.
Set policies by bucket, prefix, or tags
Set policies for current version or non-current versions
Actions can be combined
Protects from accidental overwrites and deletes.
Ability to recover from unintended user deletes or application logic failures, with no performance penalty.
Versioning keeps all versions: new uploads are stored separately; on delete, the latest version is maintained and a delete marker is added.
You can retrieve deleted objects or roll back to previous versions.
3 states: default (no versions saved; deleted objects cannot be retrieved); versioning-enabled (as discussed, versions of overwritten or deleted objects are saved); suspended (all saved versions are maintained, but new versions are not created).
New Regions coming soon: Hong Kong, Ningxia, Paris, Stockholm.
Use cases:
Compliance - store data hundreds of miles apart
Lower latency - distribute data to regional customers
Security - create remote replicas managed by separate AWS accounts
After enabling CRR, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS region. You choose the destination region, replicate the whole bucket or a prefix, and optionally specify the storage class.
How it works:
Only replicates new PUTs: once configured, all new uploads into the source bucket will be replicated.
Entire bucket or prefix based
1:1 replication between any 2 regions
Versioning required
Deletes and lifecycle actions are not replicated
CloudFront allows you to cache static content at the CloudFront edge for faster delivery from a local PoP to the end user; in other words, your static content gets cached close to a user and is delivered locally, reducing download times for the website overall.
There are over 60 CloudFront cache PoPs around the world, as we mentioned earlier.
CloudFront helps lower the load on your origin infrastructure.
You can front static content, as discussed, and dynamic content as well.
For dynamic content, CloudFront proxies and accelerates your connection back to your dynamic origin; you would set a 0 TTL on your dynamic content so CloudFront always goes back to origin to fetch it.
User-generated content use cases: you have customers uploading to a centralized bucket from all over the world.
You transfer data on a regular basis across continents: frequent uploads from distributed locations.
S3-XA leverages the AWS edge network: when you enable Transfer Acceleration, your data is routed to the closest edge location, so it travels a shorter distance on the public internet and the majority of the distance on an optimized network on the Amazon backbone. There are 68 edge locations.
S3-XA uses standard TCP and HTTP, so it does not require any firewall exceptions or custom software installation.
Setup is easy: enable Transfer Acceleration on the bucket in the console, and update the endpoint.
Faster or free: there is no cost for using XA if the upload is not faster. In the event the network performs the same as a normal upload, you don’t pay for XA.
Although the primary use is upload/ingestion to S3, we have seen customers use it for downloads, e.g. sharing video on an individual basis with very few people pulling the video; downloading through XA is a fast path.
Use XA to improve upload availability over spotty internet connections: when uploading files across the globe over poor connectivity, XA increases upload availability.
Users associate performance with availability: if an upload or download takes a long time, they assume it’s not working properly, and customers may not be patient and may cancel. XA helps transfers finish faster.
XA also helps when downloading files from S3 that are not pulled down frequently (a single user versus many): customers saw a benefit pulling files from S3 in the quickest way possible by leveraging S3-XA. For files that are frequently accessed, CloudFront, which caches your data at the edge, is recommended.
This is how the flow of a request through S3 XA looks:
The client’s request hits Route 53, which resolves the accelerate endpoint to the best PoP latency-wise. From there, S3 Transfer Acceleration selects the fastest path and sends data over persistent HTTPS connections to an EC2 proxy fleet in the same AWS region as the S3 bucket. We maximize the send and receive windows here to maximize the customer’s utilization of the available bandwidth. From there, the request is finally sent to S3.
The service achieves acceleration thanks to:
Routing optimized to keep traffic on the Amazon network as much as possible
TCP optimizations along the path to maximize data transfer
Persistent connections to minimize connection setup and maximize connection reuse
Frame.io is a video editing and collaboration company. They allow globally distributed professionals to work collaboratively and edit videos in the cloud. The ability to get the contents uploaded to the cloud as quickly as possible shortens the time it takes for collaboration to happen.
Hudl is a video analytics company used by many professional and amateur sports teams to create, edit and share videos. Coaches use this to improve game play and recruiting purposes. The ability to get these videos uploaded quickly allows coaches and their assistant to analyze the footage earlier and make quicker adjustments.
Viocorp is a cloud based video platform provider headquartered in Sydney. It provides corporates and brands with an easy to use suite of solutions to reach and engage their audiences with online video.
Multipart upload allows you to upload a large object as a set of parts.
You can upload parts in parallel or in any order, and pause and resume.
When you complete the MPU, S3 puts the parts together and creates the object for you.
+ When uploading large objects over a stable connection with significant bandwidth, you can parallelize uploading parts to maximize your network throughput: multi-threaded performance.
+ When uploading objects over a spotty network (mobile), you only need to retry the parts that were interrupted, increasing resiliency to network errors.
The third performance topic is getting a higher request rate out of S3 by distributing key names. You need to consider your key naming scheme if you plan to spike above 100 RPS on a bucket. S3 routinely scales to millions of requests per second: key names are spread across different partitions, and S3 partitions automatically over time as your request rate grows. The indexing layer is sorted in alphanumeric order, so if you name your objects starting with a date, all of those keys are stored in the same partition; when you make hundreds or thousands of requests per second to this set of keys, you may get throttled.
Maximize the distribution of your keys: we recommend placing your hash immediately after the bucket name.
Your transactions can then be distributed across partitions, and you can scale up your RPS.
You do not want the bucket name as part of your hash.
Prepend the key name with a short hash.
Alternatively, store the object under the hash of its name, and keep the object name in the metadata.
When storing on S3, you are used to organizing your data by location (bucket and prefix). You can now also organize your data based on what it is, with object tags. S3 tags are key-value pairs; you can apply as many as 10 tags to an object. With tags, you can create IAM policies to manage access, set up S3 Lifecycle policies to transition/expire, and customize storage metrics and analytics.
There are 2 ways to tag your objects: on put-object, or with put-tag. Tags can be created, updated, or deleted at any time during the lifetime of the object.
Tags are key-value pairs, case sensitive.
Maximum 10 tags per object.
Maximum key length: 128 Unicode characters.
Maximum value length: 256 Unicode characters.
Narrative: So how much is this data worth? Well, it depends…
Recent data is highly valuable
If you act on it in time
Perishable Insights (M. Gualtieri, Forrester)
Old + Recent data is more valuable
If you have the means to combine them
Narrative: Processing real-time data as it arrives lets you make decisions much faster and get the most value from your data. But building your own custom applications to process streaming data is complicated and resource intensive. You need to train or hire developers with the right skill sets, wait for months for the applications to be built and fine-tuned, and then operate and scale the application as the business grows.
All of this takes lots of time and money, and, at the end of the day, many companies just never get there, settle for the status quo, and live with information that is hours or days old.
A foundation of highly durable data storage and streaming of any type of data
A metadata index and workflow which helps us categorise and govern data stored in the data lake
A search index and workflow which enables data discovery
A robust set of security controls – governance through technology, not policy
An API and user interface that expose these features to internal and external users
Access via AWS Management Console.
Log into the Console
Create a table
Type in a Hive DDL Statement
Use the console Add Table wizard
Athena uses Hive DDL to define tables.
When you create a table for Athena, you are essentially just creating metadata and, as you run queries, the schema is applied to the data.
Read only query service on S3, your S3 data won’t change by using Athena
Run your query
The console will display the results; results are also written as CSV back to S3, to a location you specify.
For context, Athena uses Presto with full standard SQL support, it can handle complex analysis, including large joins, window functions, and arrays.
Amazon S3 inventory is one of the tools Amazon S3 provides to help manage your storage. You can simplify and speed up business workflows and big data jobs using the Amazon S3 inventory, which provides a scheduled alternative to the Amazon S3 synchronous List API operation. Amazon S3 inventory provides a comma-separated values (CSV) flat-file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or a shared prefix (that is, objects that have names that begin with a common string).
Notifications can be published to different targets:
SNS topic, for broadcasting events to a large number of clients: push, email, mobile alerts.
SQS queue, for processing S3 events asynchronously in a flexible manner; a good choice for triggering workflows that pull from a queue.
Lambda, which automatically executes code when an S3 event occurs, without provisioning servers or managing instances.
Apply custom logic to process content being uploaded into S3.
Watermarking / thumbnailing
Transcoding
Indexing and deduplication
Aggregation and filtering
Pre processing
Content validation
WAF updates