In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
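To make the data-bus framing concrete, here is a minimal sketch of the ingest stage in Python with boto3. The stream name, region, and event shape are illustrative assumptions, not details from the session; the stream must already exist and credentials must be configured.

```python
# Hedged sketch: writing one event into an Amazon Kinesis stream (the
# "ingest" stage of the data bus). Stream name, region, and event shape
# are hypothetical placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"user_id": "42", "action": "page_view", "ts": 1455000000}

kinesis.put_record(
    StreamName="clickstream",                 # assumed, pre-created stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],            # spreads records across shards
)
```

From there, a consumer (for example, the Kinesis Client Library, AWS Lambda, or Amazon EMR) would read the stream for the store and process stages.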
Big Data and Architectural Patterns on AWS - Pop-up Loft Tel Aviv - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
(BDT310) Big Data Architectural Patterns and Best Practices on AWS - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
February 2016 Webinar Series - Architectural Patterns for Big Data on AWS - Amazon Web Services
With an ever-increasing set of technologies to process big data, organizations often struggle to understand how to build scalable and cost-effective big data applications.
In this webinar, we simplify big data processing as a pipeline comprising various stages, and then show you how to choose the right technology for each stage based on criteria such as data structure, along with design patterns and best practices.
Learning Objectives:
Understand key AWS Big Data services including S3, Amazon EMR, Kinesis, and Redshift
Learn architectural patterns for Big Data
Hear best practices for building Big Data applications on AWS
Who Should Attend:
Architects, developers and data scientists who are looking to start a Big Data initiative
Day 4 - Big Data on AWS - Redshift, EMR & the Internet of Things - Amazon Web Services
Big Data is everywhere these days. But what is it, and how can you use it to fuel your business? Data is as important to organizations as labour and capital, and organizations that can effectively capture, analyze, visualize, and apply big data insights to their business goals can differentiate themselves from their competitors and outperform them in operational efficiency and the bottom line.
Join this session to understand the different AWS Big Data and Analytics services such as Amazon Elastic MapReduce (Hadoop), Amazon Redshift (Data Warehouse) and Amazon Kinesis (Streaming), when to use them and how they work together.
Reasons to attend:
- Learn how AWS can help you process and make better use of your data with meaningful insights.
- Learn about Amazon Elastic MapReduce, a managed Hadoop framework, and Amazon Redshift, a fully managed petabyte-scale data warehouse solution.
- Learn about real-time data processing with Amazon Kinesis.
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Presented by: Arie Leeuwesteijn, Principal Solutions Architect, Amazon Web Services
Customer Guest: Sander Kieft, Sanoma
This session is for IT pros working with compliance managers to deliver solutions that lower costs and still meet compliance demands. You will learn how to move large scale data stores to the cloud, while remaining compliant with existing regulations. Services mentioned: S3, Glacier and the Vault Lock feature, Snowball, ingestion services.
AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205) - Amazon Web Services
Join us for this general session where AWS big data experts present an in-depth look at the current state of big data. Learn about the latest big data trends and industry use cases. Hear how other organizations are using the AWS big data platform to innovate and remain competitive. Take a look at some of the most recent AWS big data announcements, as we kick off the Big Data re:Source Mini Con.
Big Data Architectural Patterns and Best Practices on AWS - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
AWS re:Invent 2016: Achieving Agility by Following Well-Architected Framework... - Amazon Web Services
The AWS Well-Architected Framework enables customers to understand best practices around security, reliability, performance, and cost optimization when building systems on AWS. This approach helps customers make informed decisions and weigh the pros and cons of application design patterns for the cloud. In this session, you'll learn how National Instruments used the Well-Architected Framework to follow AWS guidelines and best practices. By developing a strategy based on the AWS Well-Architected Framework, National Instruments was able to triple the number of applications running in the cloud without additional head count, significantly increase the frequency of code deployments, and reduce deployment times from two weeks to a single day. As a result, National Instruments was able to deliver a more scalable, dynamic, and resilient LabVIEW platform with agility.
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data - Amazon Web Services
Analyzing large data sets requires significant compute and storage capacity that can vary in size based on the amount of input data and the analysis required. This characteristic of big data workloads is ideally suited to the pay-as-you-go cloud model, where applications can easily scale up and down based on demand. Learn how Amazon S3 can help scale your big data platform. Hear from Redfin and Twitter about how they build their big data platforms on AWS and how they use S3 as an integral piece of their big data platforms.
In this session, we cover some of the recently announced features. We then talk about using S3 event sources, the various S3 storage classes, cross-region replication, and VPC endpoints.
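As a hedged illustration of the storage classes mentioned above, the following sketch writes an object to S3 in the Standard-Infrequent Access class; the bucket and key names are hypothetical.

```python
# Hedged sketch: choosing an S3 storage class per object with boto3.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-big-data-bucket",              # assumed, pre-created bucket
    Key="logs/2016/01/events.json",
    Body=b'{"event": "example"}',
    StorageClass="STANDARD_IA",               # infrequent-access tier for colder data
)

obj = s3.get_object(Bucket="my-big-data-bucket", Key="logs/2016/01/events.json")
print(obj["Body"].read())
```

Bucket lifecycle rules can also transition objects between storage classes automatically, which is usually preferable to setting the class on every request.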
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS... - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Data migration at petabyte scale is now a simple service from AWS. You can easily migrate large volumes of data from on-premises environments to the cloud, quickly get started with the cloud as a backup target, or burst workloads between your on-premises environments and the AWS Cloud. Learn about AWS Snowball, AWS Snowball Edge, AWS Snowmobile and AWS Storage Gateway, and understand which one is the right fit for your requirements. We will go through customer use cases, review the different applications used, and help you cut IT spend and management time on hardware and backup solutions.
BDA402 Deep Dive: Log analytics with Amazon Elasticsearch Service - Amazon Web Services
Everything generates logs. Applications, infrastructure, security ... everything. Keeping track of the flood of log data is a big challenge, yet critical to your ability to understand your systems and troubleshoot (or prevent) issues. In this session, we will use both Amazon CloudWatch and application logs to show you how to build an end-to-end log analytics solution. First, we cover how to configure an Amazon Elasticsearch Service domain and ingest data into it using Amazon Kinesis Firehose, demonstrating how easy it is to transform data with Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data, and configure a secure analytics environment. We demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we dive deep into the Elasticsearch query DSL and review approaches for generating custom, ad-hoc reports.
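The ingestion step described above can be sketched in a few lines; the delivery stream name below is a placeholder, and the stream is assumed to be already configured to deliver into an Amazon Elasticsearch Service domain.

```python
# Hedged sketch: pushing a log line into an Amazon Kinesis Firehose
# delivery stream. The stream (assumed name below) must already be set
# up to deliver into an Amazon Elasticsearch Service domain.
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

log_line = {"level": "ERROR", "service": "api", "message": "timeout calling db"}

firehose.put_record(
    DeliveryStreamName="logs-to-elasticsearch",   # hypothetical name
    Record={"Data": (json.dumps(log_line) + "\n").encode("utf-8")},
)
```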
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I... - Amazon Web Services
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
ENT316 Keeping Pace With The Cloud: Managing and Optimizing as You Scale - Amazon Web Services
With cloud maturity come operational efficiencies and endless potential for innovation and business growth. However, the complexities of governing cloud infrastructure can impede progress without the right strategy; visibility, accountability, and actionable insights are among the most valuable considerations. The AWS cloud enables convenience and cost savings for organizations that know how to leverage its full potential. Amazon EC2 Reserved Instances (RIs), in particular, present a tremendous opportunity to save significantly on capacity when scaling, but there are many considerations to fully reaping their benefits. In this session, CloudCheckr CTO Patrick Gartlan will present issues that every organization runs into when scaling, provide best practices for combating them, and help you show your boss how RIs save money and speed you up.
This session is brought to you by AWS Summit Chicago sponsor, CloudCheckr.
AWS re:Invent 2016: Running Lean Architectures: How to Optimize for Cost Effi... - Amazon Web Services
Whether you’re a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We cover how to effectively combine Amazon EC2 On-Demand, Reserved, and Spot instances to handle different use cases; leveraging Auto Scaling to match capacity to workload; choosing the optimal instance type through load testing; taking advantage of Multi-AZ support; and using Amazon CloudWatch to monitor usage and automatically shut off resources when they are not in use. We discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely by leveraging AWS high-level services. We also showcase simple tools to help track and manage costs, including Cost Explorer, billing alerts, and AWS Trusted Advisor. This session is your pocket guide for running cost effectively in the Amazon Cloud.
Attendees of this session receive a free 30-day trial of enterprise-level Trusted Advisor.
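One idea from the abstract, using Amazon CloudWatch to shut off resources that are not in use, can be sketched as follows. The CPU threshold, one-hour window, and stop action are illustrative assumptions, not guidance from the session.

```python
# Hedged sketch: stop running EC2 instances whose average CPU over the
# past hour looks idle. Threshold and window are arbitrary assumptions.
import datetime
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

now = datetime.datetime.utcnow()
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - datetime.timedelta(hours=1),
            EndTime=now,
            Period=3600,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points and points[0]["Average"] < 2.0:  # assumed idle threshold
            ec2.stop_instances(InstanceIds=[instance_id])
```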
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Explore Amazon DynamoDB capabilities and benefits in detail and learn how to get the most out of your DynamoDB database. We go over best practices for schema design with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, DynamoDB Streams, and more. We also provide lessons learned from operating DynamoDB at scale, including provisioning DynamoDB for IoT.
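As a hedged sketch of the kind of schema design the abstract alludes to (for example, IoT readings), the snippet below writes and queries items in a table keyed by device id and timestamp; the table name and attributes are hypothetical.

```python
# Hedged sketch: a time-series access pattern in DynamoDB. Assumes a
# pre-created table "DeviceReadings" with partition key device_id (S)
# and sort key ts (N); all names are placeholders.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DeviceReadings")

table.put_item(Item={"device_id": "sensor-17", "ts": 1455000000, "temp_c": 21})

response = table.query(
    KeyConditionExpression=Key("device_id").eq("sensor-17"),
    ScanIndexForward=False,   # newest first
    Limit=10,
)
for item in response["Items"]:
    print(item)
```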
AWS Data Transfer Services: Data Ingest Strategies Into the AWS Cloud - Amazon Web Services
Different types and sizes of data require different strategies. In this session, learn about the various features and services available for migrating data, be it small ongoing transactional data or large multi-petabyte volumes. Come learn how customers are using the latest network, streaming and large scale ingest features for their cloud data migrations to AWS storage services.
AWS re:Invent 2016: JustGiving: Serverless Data Pipelines, Event-Driven ETL, ... - Amazon Web Services
Organizations need to gain insight and knowledge from a growing number of Internet of Things (IoT), application programming interface (API), clickstream, unstructured, and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. Building scalable big data pipelines with automated extract-transform-load (ETL) and machine learning processes can address these limitations. JustGiving is the world’s largest social platform for online giving. In this session, we describe how we created several scalable and loosely coupled event-driven ETL and ML pipelines as part of our in-house data science platform called RAVEN. You learn how to leverage AWS Lambda, Amazon S3, Amazon EMR, Amazon Kinesis, and other services to build serverless, event-driven, data and stream processing pipelines in your organization. We review common design patterns, lessons learned, and best practices, with a focus on serverless big data architectures with AWS Lambda.
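In the event-driven style the abstract describes, one pipeline stage is often just an AWS Lambda function fired by an S3 object-created event. The sketch below is a generic illustration, not JustGiving's RAVEN code; bucket names and the trivial filter are placeholders.

```python
# Hedged sketch: a Lambda handler for an S3 "object created" event that
# reads the new object, applies a trivial transform, and writes the
# result to an output bucket. All names are illustrative.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        rows = [json.loads(line) for line in body.splitlines() if line.strip()]
        cleaned = [r for r in rows if "user_id" in r]      # placeholder filter

        s3.put_object(
            Bucket="etl-output-bucket",                    # hypothetical bucket
            Key=key.replace("raw/", "clean/"),
            Body="\n".join(json.dumps(r) for r in cleaned).encode("utf-8"),
        )
```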
AWS re:Invent 2016: Workshop: Addressing Your Business Needs with AWS (ARC210) - Amazon Web Services
Come and participate with other AWS customers as we focus on the overall experience of using AWS to solve business problems. This is a great opportunity to collaborate with existing and prospective AWS users to validate your thinking and direction with AWS peers, discuss the resources that aid AWS solution design, and give direct feedback on your experience building solutions on AWS.
Cost Optimising Your Architecture: Practical Design Steps for Developer Saving... - Amazon Web Services
This session uses practical examples aimed at architects and developers. Using code and AWS CloudFormation in concert with services such as Amazon EC2, Amazon ECS, Lambda, Amazon RDS, Amazon SQS, Amazon SNS, Amazon S3 and more, we demonstrate the financial advantages of different architectural decisions. Attendees will walk away with concrete examples, as well as a new perspective on how they can build systems economically and effectively.
Speaker: Simon Elisha, Head of Solution Architecture, ANZ Public Sector, Amazon Web Services
Level 300
C* Summit 2013: Large Scale Data Ingestion, Processing and Analysis: Then, No... - DataStax Academy
The presentation aims to highlight the challenges posed by large-scale and near real-time data processing problems. In the past, such problems were solved using conventional technologies, primarily a database and a JMS queue, but these solutions had their limits and presented serious problems in terms of scale and redundancy. The new breed of products, such as Cassandra and Kafka, being innately distributed in their design, aims to tackle such challenges in a very elegant manner. The presentation showcases industry use cases of this genre and describes solutions of increasing sophistication.
Instructor: Xiaoyong Han, Solution Architect, AWS
Data collection and storage are a primary challenge for any big data architecture. In this webinar, gain a thorough understanding of AWS solutions for data collection and storage, and learn architectural best practices for applying those solutions to your projects. This session will also include a discussion of popular use cases and reference architectures. In this webinar, you will learn:
• Overview of the different types of data that customers are handling to drive high-scale workloads on AWS, and how to choose the best approach for your workload
• Optimization techniques that improve performance and reduce the cost of data ingestion (a minimal batching sketch follows this list)
• Leveraging Amazon S3, Amazon DynamoDB, and Amazon Kinesis for storage and data collection
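A minimal sketch of the batching idea, under the assumption that reducing per-request overhead is the optimization in view; the stream name is a placeholder, and production code should retry any entries counted in FailedRecordCount.

```python
# Hedged sketch: batching many small records into one Kinesis PutRecords
# call instead of one request per record. Stream name is a placeholder.
import json
import boto3

kinesis = boto3.client("kinesis")

events = [{"sensor": i, "value": i * 1.5} for i in range(100)]

records = [
    {"Data": json.dumps(e).encode("utf-8"), "PartitionKey": str(e["sensor"])}
    for e in events
]

response = kinesis.put_records(StreamName="ingest-stream", Records=records)
print("failed:", response["FailedRecordCount"])   # retry these in real code
```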
Database and Analytics on the AWS Cloud - AWS Innovate Toronto - Amazon Web Services
Antoine Genereux, AWS Solutions Architect, takes us on a tour of database solutions available for the AWS Cloud, and powerful analytics and business intelligence reporting tools.
Big Data adoption success using AWS Big Data Services - Pop-up Loft TLV 2017 - Amazon Web Services
In today’s session, we will share an overview of the typical challenges of Big Data adoption and how the AWS Big Data platform allows you to tackle these challenges and leverage the right analytical/Big Data solutions to succeed with your strategy (whiteboard presentation).
AWS Webcast - Managing Big Data in the AWS Cloud_20140924 - Amazon Web Services
This presentation deck covers specific services such as Amazon S3, Kinesis, Redshift, Elastic MapReduce, and DynamoDB, including their features and performance characteristics. It also covers architectural designs for the optimal use of these services based on dimensions of your data source (structured or unstructured data, volume, item size, and transfer rates) and application considerations for latency, cost, and durability. Finally, it shares customer success stories and resources to help you get started.
Big Data Architectural Patterns and Best Practices on AWS - Amazon Web Services
by Dario Rivera, Solutions Architect, AWS
The world is producing an ever-increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Big Data Architectural Patterns and Best Practices on AWS - Amazon Web Services
In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. John Pignata, AWS Startup Solutions Architect, will discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. He will provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
AWS November Webinar Series - Architectural Patterns & Best Practices for Big... - Amazon Web Services
The world is producing an ever-increasing volume, velocity, and variety of data. For many consumers, batch analytics is no longer enough; they need sub-second analysis on fast-moving data. AWS delivers many technologies for solving big data problems. But what services should you use, why, when, and how?
If you missed this popular presentation at re:Invent, attend this webinar where we simplify big data processing as a pipeline comprising various stages: ingest, store, process, analyze & visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, and durability. Finally, we provide a reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems.
Learning Objectives:
Understand key AWS Big Data services including S3, Amazon EMR, Kinesis, and Redshift
Learn architectural patterns for Big Data
Hear best practices for building Big Data applications on AWS
Didn’t make it to re:Invent? Here’s another chance to attend this popular presentation
Who Should Attend:
Architects, developers and data scientists who are looking to start a Big Data initiative
Working with big volumes of data is a complicated task, but it's even harder if you have to do everything in real time and figure it all out yourself. Over the past decades, many open-source projects have helped solve problems within the data analytics lifecycle around ingestion, storage, processing, and visualisation of data. This session uses practical examples to discuss architectural best practices and lessons learned when solving real-time analytics and data-visualisation decision-making problems with open source at scale, with the power of Amazon Web Services. It furthermore dives into a demo, using source code from AWS Labs to visualise live data streams at scale.
Olivier Klein, Solutions Architect, Amazon Web Services, Greater China
AWS APAC Webinar Week - Big Data on AWS: Redshift, EMR, & IoT - Amazon Web Services
The world is producing an ever-increasing volume, velocity, and variety of data, including data from devices. As we step into the era of the Internet of Things (IoT), for many consumers batch analytics is no longer enough; they need sub-second analysis on fast-moving data. AWS delivers many technologies for solving big data and IoT problems. But what services should you use, why, when, and how? In this webinar, we simplify big data processing as a pipeline comprising various stages: ingest, store, process, analyze, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, and durability. Finally, we provide a reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems.
by Sid Chauhan, Solutions architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
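For a flavor of how such data-lake queries look in practice, here is a hedged sketch that runs a SQL statement over S3 data through Amazon Athena; the database, table, and results bucket are assumptions and must exist beforehand (for example, defined via DDL or an AWS Glue crawler).

```python
# Hedged sketch: running a SQL query over data-lake files in S3 with
# Amazon Athena. Database, table, and output bucket are placeholders.
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT action, count(*) AS n FROM events GROUP BY action",
    QueryExecutionContext={"Database": "datalake"},            # assumed database
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:                      # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```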
by Mamoon Chowdry, Solutions Architect
AWS Data & Analytics Week is an opportunity to learn about Amazon’s family of managed analytics services. These services provide easy, scalable, reliable, and cost-effective ways to manage your data in the cloud. We explain the fundamentals and take a technical deep dive into the Amazon Redshift data warehouse; data lake services including Amazon EMR, Amazon Athena, and Amazon Redshift Spectrum; log analytics with Amazon Elasticsearch Service; and data preparation and placement services with AWS Glue and Amazon Kinesis. You'll learn how to get started, how to support applications, and how to scale.
Build Data Lakes and Analytics on AWS: Patterns & Best Practices - BDA305 - A... - Amazon Web Services
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
Neel Mitra - Solutions Architect, AWS
Roger Dahlstrom - Solutions Architect, AWS
by Avijit Goswami, Sr. Solutions Architect, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
2016 Utah Cloud Summit: Big Data Architectural Patterns and Best Practices on AWS
1. Martin Schade, Solutions Architect, Amazon Web Services
January 2016
Big Data Architectural Patterns and Best Practices on AWS
2. What to Expect from the Session
Big data challenges
How to simplify big data processing
What technologies should you use?
• Why?
• How?
Reference architecture
Design patterns
5. Plethora of Tools
Amazon Glacier, Amazon S3, Amazon DynamoDB, Amazon RDS, Amazon EMR, Amazon Redshift, AWS Data Pipeline, Amazon Kinesis, Cassandra, Amazon CloudSearch, Kinesis-enabled apps, AWS Lambda, Amazon ML, Amazon SQS, Amazon ElastiCache, DynamoDB Streams
6. Is there a reference architecture?
What tools should I use?
How?
Why?
7. Architectural Principles
• Decoupled “data bus”
• Data → Store → Process → Answers
• Use the right tool for the job
• Data structure, latency, throughput, access patterns, cost
• Use Lambda architecture ideas
• Immutable (append-only) log, batch/speed/serving layer
• Leverage AWS managed services
• No/low admin
• Big data ≠ big cost
8. Simplify Big Data Processing
ingest/collect → store → process/analyze → consume/visualize
Time to Answer (Latency)
Throughput
Cost
15. Why Is Amazon S3 Good for Big Data?
• Natively supported by big data frameworks (Spark, Hive, Presto, etc.)
• No need to run compute clusters for storage (unlike HDFS)
• Can run transient Hadoop clusters & Amazon EC2 Spot instances
• Multiple distinct (Spark, Hive, Presto) clusters can use the same data
• Unlimited number of objects
• Very high bandwidth – no aggregate throughput limit
• Highly available – can tolerate AZ failure
• Designed for 99.999999999% durability
• Tiered storage (Standard, Standard-IA, Amazon Glacier) via lifecycle policies
• Secure – SSL, client/server-side encryption at rest
• Low cost
16. What about HDFS & Amazon Glacier?
• Use HDFS for very frequently accessed (hot) data
• Use Amazon S3 Standard for frequently accessed data
• Use Amazon S3 Standard-IA for infrequently accessed data
• Use Amazon Glacier for archiving cold data
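To make the tiering above concrete, here is a minimal boto3 sketch (the bucket name and prefix are hypothetical) that encodes a hot → warm → cold lifecycle as an S3 lifecycle policy:

```python
# Minimal sketch (hypothetical bucket/prefix): transition objects to
# Standard-IA after 30 days, to Glacier after 90, and expire after a year.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-big-data-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-archive",
                "Filter": {"Prefix": "logs/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```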
17. Collect → Store (diagram)
Mobile apps, iOS/Android apps, web apps, and Logstash feed the store tier: file data into file storage (Amazon S3, Amazon Glacier), stream data into stream storage (Amazon Kinesis, Apache Kafka), transactional data into the database + search tier (SQL: Amazon RDS; NoSQL: Amazon DynamoDB; cache: Amazon ElastiCache), and search data into Amazon ES.
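As a rough illustration of the stream-data path above, a producer might put records into Amazon Kinesis like this (the stream name and record fields are hypothetical):

```python
# Minimal sketch of the "stream data → stream storage" path: a producer
# putting clickstream records into an Amazon Kinesis stream.
import json
import boto3

kinesis = boto3.client("kinesis")

def put_clickstream_event(user_id: str, url: str) -> None:
    kinesis.put_record(
        StreamName="clickstream",  # hypothetical stream name
        Data=json.dumps({"user_id": user_id, "url": url}).encode("utf-8"),
        PartitionKey=user_id,  # records for one user land on one shard
    )

put_clickstream_event("u-123", "/products/42")
```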
19. Best Practice — Use the Right Tool for the Job
Database + Search Tier
• Search: Amazon Elasticsearch Service, Amazon CloudSearch
• Cache: Redis, Memcached
• SQL: Amazon Aurora, MySQL, PostgreSQL, Oracle, SQL Server
• NoSQL: Cassandra, Amazon DynamoDB, HBase, MongoDB
20. What Data Store Should I Use?
• Data structure → Fixed schema, JSON, key-value
• Access patterns → Store data in the format you will access it
• Data / access characteristics → Hot, warm, cold
• Cost → Right cost
21. Data Structure and Access Patterns
Access Patterns → What to use?
• Put/Get (key, value) → Cache, NoSQL
• Simple relationships (1:N, M:N) → NoSQL
• Cross-table joins, transactions, SQL → SQL
• Faceting, search → Search
Data Structure → What to use?
• Fixed schema → SQL, NoSQL
• Schema-free (JSON) → NoSQL, Search
• (Key, value) → Cache, NoSQL
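A minimal sketch of the Put/Get (key, value) row above, assuming a hypothetical DynamoDB table keyed on user_id:

```python
# Key-value put/get against DynamoDB: store data in the shape you will
# read it back in, then fetch it with a single-key lookup.
import boto3

ddb = boto3.resource("dynamodb")
table = ddb.Table("user-preferences")  # hypothetical table name

# Put: write the item exactly as the application will consume it.
table.put_item(Item={"user_id": "u-123", "theme": "dark", "locale": "en-US"})

# Get: the single-key access pattern NoSQL stores optimize for.
resp = table.get_item(Key={"user_id": "u-123"})
print(resp.get("Item"))
```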
24. Process / Analyze
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making.
Examples
• Interactive dashboards → Interactive analytics
• Daily/weekly/monthly reports → Batch analytics
• Billing/fraud alerts, 1-minute metrics → Real-time analytics
• Sentiment analysis, prediction models → Machine learning
26. What Stream Processing Technology Should I Use?
• Spark Streaming – scale/throughput: ~ nodes; real-time; manageability: yes (Amazon EMR); fault tolerance: single AZ; languages: Java, Python, Scala
• Apache Storm – scale/throughput: ~ nodes; real-time; manageability: do it yourself; fault tolerance: configurable; languages: any language via Thrift
• Amazon Kinesis Client Library – scale/throughput: ~ nodes; real-time; manageability: Amazon EC2 + Auto Scaling; fault tolerance: multi-AZ; languages: Java, plus .NET, Python, Ruby, Node.js via MultiLangDaemon
• AWS Lambda – scale/throughput: automatic; real-time; manageability: AWS managed; fault tolerance: multi-AZ; languages: Node.js, Java
• Amazon EMR (Hive, Pig) – scale/throughput: ~ nodes; batch; manageability: yes (Amazon EMR); fault tolerance: single AZ; languages: Hive, Pig, streaming languages
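To ground the real-time column, here is a minimal PySpark (DStream) sketch of Spark Streaming; the socket host and port are placeholders for any line-oriented stream source:

```python
# Count words arriving on a socket in 10-second micro-batches -- the
# classic Spark Streaming pattern behind the "real-time, ~ nodes" row.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="StreamingWordCount")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)  # placeholder source
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each micro-batch's counts

ssc.start()
ssc.awaitTermination()
```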
27. What Data Processing Technology Should I Use?
• Amazon Redshift – query latency: low; durability: high; data volume: 2 PB max; managed: yes; storage: native; SQL compatibility: high
• Impala – query latency: low; durability: high; data volume: ~ nodes; managed: yes (EMR); storage: HDFS / S3A*; SQL compatibility: medium
• Presto – query latency: low; durability: high; data volume: ~ nodes; managed: yes (EMR); storage: HDFS / S3; SQL compatibility: high
• Spark – query latency: low; durability: high; data volume: ~ nodes; managed: yes (EMR); storage: HDFS / S3; SQL compatibility: low (Spark SQL)
• Hive – query latency: medium (Tez) to high (MapReduce); durability: high; data volume: ~ nodes; managed: yes (EMR); storage: HDFS / S3; SQL compatibility: medium (HQL)
28. What about ETL?
Store → ETL → Analyze
https://aws.amazon.com/big-data/partner-solutions/
30. Collect → Store → Analyze → Consume (full data-bus diagram)
• Collect: mobile apps, iOS/Android apps, web apps, IoT applications, and Logstash (logging) produce transactional, file, stream, and search data
• Store: file storage (Amazon S3, Amazon Glacier), stream storage (Amazon Kinesis, Apache Kafka), SQL (Amazon RDS), NoSQL (Amazon DynamoDB), cache (Amazon ElastiCache), search (Amazon ES)
• Analyze: stream processing (Amazon Kinesis, AWS Lambda), batch and interactive processing on Amazon Elastic MapReduce (Impala, Pig, Streaming), Amazon ML, Amazon Redshift, ETL
• Consume: analysis & visualization (Amazon QuickSight), notebooks, predictions, apps & APIs, IDE
• Each path is annotated by data temperature (hot, warm, cold) and speed (fast, slow)
31. Consume
• Predictions
• Analysis and Visualization (Amazon QuickSight)
• Notebooks
• IDE
• Applications & APIs
Consumed by business users (visualization) and by data scientists and developers (notebooks, IDE) at the end of the Store → ETL → Analyze → Consume pipeline.
33. Reference Architecture
The same end-to-end Collect → Store → Analyze → Consume data bus shown on slide 30, presented as the complete reference architecture.
36. Multi-Stage Decoupled “Data Bus”
• Multiple stages
• Storage decoupled from processing
Store → Process → Store → Process
37. Multiple Processing Applications (or Connectors) Can Read from or Write to Multiple Data Stores
Example: AWS Lambda processes an Amazon Kinesis stream into Amazon DynamoDB, while the Amazon Kinesis S3 Connector writes the same stream to Amazon S3.
38. Processing Frameworks (KCL, Storm, Hive, Spark, etc.) Could Read from Multiple Data Stores
Example: an Amazon Kinesis stream lands in Amazon S3 (via the Amazon Kinesis S3 Connector) and Amazon DynamoDB (via AWS Lambda); Storm, Hive, and Spark then read from any of these stores.
41. Lambda Architecture
• Batch layer: data → Amazon Kinesis → Amazon Kinesis S3 Connector → Amazon S3, processed by Amazon EMR (Presto, Hive, Pig, Spark) and Amazon Redshift to produce batch answers
• Speed layer: KCL, AWS Lambda, Spark Streaming, or Storm consume the same stream for real-time answers, with Amazon ML for predictions
• Serving layer: Amazon ElastiCache, Amazon DynamoDB, Amazon RDS, and Amazon ES serve answers to applications
42. Summary
• Build decoupled “data bus”
• Data → Store ↔ Process → Answers
• Use the right tool for the job
• Latency, throughput, access patterns
• Use Lambda architecture ideas
• Immutable (append-only) log, batch/speed/serving layer
• Leverage AWS managed services
• No/low admin
• Be cost conscious
• Big data ≠ big cost
Siva Raghupathy leads the Americas Big Data Solutions Architecture team at AWS. He guides developers and architects in building successful big data solutions on AWS. Previously, as a principal technical program manager for AWS database services, he gathered emerging NoSQL requirements and wrote the first version of the DynamoDB product specification. Later, as a development manager for Amazon Relational Database Service (RDS), he drove several enhancements. Prior to AWS, he spent several years at Microsoft.
Abstract:
The world is producing an ever increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many services for solving big data problems. But what service should you use, why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architecture, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Volume – 100 to 150 TB a day
Velocity – 1 million reads and writes per second is becoming the norm
Variety →
IoT / log data / streaming data
Transactional data
File data
Fixed schema: CSV, Parquet, Avro
Schema-free: JSON, key-value
Small files, large files
Hourly server logs: were your systems misbehaving 1 hour ago?
Weekly/monthly bill: what did you spend this billing cycle?
Daily customer-preferences report from your web site’s click stream: what deal or ad to try next time?
Daily fraud reports: was there fraud yesterday?
Real-time alerts: what went wrong now
Real-time spending caps: prevent overspending now
Real-time analysis: what to offer the current customer now
Real-time detection: block fraudulent use now
I need to harness big data, fast
I want more happy customers
I want to save/make more money
Before we go into solving the big data architecture, I want to introduce some tried-and-tested architecture principles.
Here at AWS we believe you should use the right tool for the job – instead of using a big Swiss Army knife to drive a screw, it is best to use a screwdriver. This is especially important for big data architectures. We’ll talk about this more.
Decoupled architecture (http://whatis.techtarget.com/definition/decoupled-architecture) – in general, a decoupled architecture is a framework for complex work that allows components to remain completely autonomous and unaware of each other. This approach has been tried and battle-tested.
Managed services – this is relatively new. Should I install Cassandra or MongoDB or CouchDB on AWS? You obviously can; sometimes there are good reasons for doing this, and many customers still do. Netflix is a great example: they run multi-region Cassandra and are a poster child for how to do this. But for most customers, delegating this task to AWS makes more sense – you are better off spending your time building features for your customers rather than building highly scalable distributed systems.
Lambda Architecture – Nathan Marz designed this generic architecture, addressing common requirements, at Twitter.
throughput = f (volume, request rate)
latency
Cost
Event to action/answer latency?
http://calculator.s3.amazonaws.com/index.html#r=IAD&key=calc-BE3BA3E4-1AC5-4E7A-B542-015056D8EDAF
Kinesis -> $52.14 per month
SQS -> $133.42 per month for puts or $400/month (put, get, delete)
DynamoDB -> $3809.88 per month (10TB of storage cost itself is $2500/month)
Cost (100 rps × 35 KB)
$52/month
$133/month * 2 = $266/month
?
Amazon DynamoDB Service (US-East)
Provisioned Throughput Capacity: $120
Indexed Data Storage: $2560.90
DynamoDB Streams: $1.30
Amazon SQS Service (US-East)
Pricing Example
Let’s assume that our data producers put 100 records per second in aggregate, and each record is 35KB. In this case, the total data input rate is 3.4MB/sec (100 records/sec*35KB/record). For simplicity, we assume that the throughput and data size of each record are stable and constant throughout the day. Please note that we can dynamically adjust the throughput of our Amazon Kinesis stream at any time.
We first calculate the number of shards needed for our stream to achieve the required throughput. As one shard provides a capacity of 1MB/sec data input and supports 1000 records/sec, four shards provide a capacity of 4MB/sec data input and support 4000 records/sec. So a stream with four shards satisfies our required throughput of 3.4MB/sec at 100 records/sec.
We then calculate our monthly Amazon Kinesis costs using Amazon Kinesis pricing in the US-East Region:
Shard Hour: One shard costs $0.015 per hour, or $0.36 per day ($0.015*24). Our stream has four shards so that it costs $1.44 per day ($0.36*4). For a month with 31 days, our monthly Shard Hour cost is $44.64 ($1.44*31).
PUT Payload Unit (25KB): As our record is 35KB, each record contains two PUT Payload Units. Our data producers put 100 records or 200 PUT Payload Units per second in aggregate. That is 267,840,000 records or 535,680,000 PUT Payload Units per month. As one million PUT Payload Units cost $0.014, our monthly PUT Payload Units cost is $7.499 ($0.014*535.68).
Adding the Shard Hour and PUT Payload Unit costs together, our total Amazon Kinesis costs are $1.68 per day, or $52.14 per month. For $1.68 per day, we have a fully-managed streaming data infrastructure that enables us to continuously ingest 4MB of data per second, or 337GB of data per day in a reliable and elastic manner.
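That arithmetic is mechanical enough to script. A minimal sketch, assuming the 2016 US-East prices quoted above ($0.015 per shard-hour, $0.014 per million 25 KB PUT payload units):

```python
# Reproduce the Kinesis sizing and cost calculation from the notes above.
import math

records_per_sec = 100
record_kb = 35

# Shards: each supports 1 MB/s of data input and 1000 records/s.
mb_per_sec = records_per_sec * record_kb / 1024            # ~3.4 MB/s
shards = max(math.ceil(mb_per_sec), math.ceil(records_per_sec / 1000))  # -> 4

# Shard-hour cost for a 31-day month.
shard_cost = shards * 0.015 * 24 * 31                       # ~$44.64

# PUT payload units: a 35 KB record consumes two 25 KB units.
units_per_record = math.ceil(record_kb / 25)                # -> 2
units_per_month = records_per_sec * units_per_record * 86400 * 31
put_cost = units_per_month / 1e6 * 0.014                    # ~$7.50

print(f"{shards} shards, ${shard_cost + put_cost:.2f}/month")  # ~$52.14
```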
No need to run compute clusters for storage (unlike HDFS)
Can run transient Hadoop clusters & Amazon EC2 Spot Instances
Multiple distinct (Spark, Hive, Presto) clusters can use the same data
Highly durable, highly available, highly scalable
Secure
Designed for 99.999999999% durability
https://aws.amazon.com/s3/storage-classes/
Lifecycle policies – objects can be migrated to S3 Standard-IA, archived to Amazon Glacier, or deleted after a specified period of time!
No need to run a Hadoop cluster for storage (unlike HDFS)
Unlimited number of objects
Object size up to 5TB
Very high bandwidth
Versioning
Lifecycle policies to archive to Glacier
Designed for 99.999999999% durability
Server-side encryption
Highly available – SLA 99.99 (Standard)
Highly scalable
Low cost
Event notification
Cross-region replication
Natively supported by almost all big data frameworks & tools
2 x 2 Matrix
Structured
Level of query (from none to complex)
Draw down the slide
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making.
Examples:
Data Warehousing: Monthly bill, Interactive dashboards
Machine Learning: Sentiment Analysis, Prediction
Stream processing: Alerts, 1 minute metrics, etc.
Forecasting Web Page Views: http://www.jmlr.org/papers/volume9/li08a/li08a.pdf
Sentiment analysis (Methods and Observations): http://nlp.stanford.edu/sentiment/
https://en.wikipedia.org/wiki/Data_analysis
Add connector
Directed acyclic graphs (DAGs)?
Exactly-once processing & DAGs – how do you do this?
https://storm.apache.org/documentation/Rationale.html
http://www.slideshare.net/ptgoetz/apache-storm-vs-spark-streaming
Cost:
Redshift – Moderate
Impala -
Presto – Low
S3A* is an open-source connector. It is not in EMR 1.2.1; using a bootstrap action you can install 2.2 (we have a bootstrap action).
Query Speed
Redshift – Extremely fast SQL queries
Spark, Impala – Extremely Fast to Fast Hive QL
Hive, Tez – Moderately Fast to Slow Hive QL
Data Volume?
UDFs?
Manageability?
http://yahoodevelopers.tumblr.com/post/85930551108/yahoo-betting-on-apache-hive-tez-and-yarn
https://amplab.cs.berkeley.edu/benchmark/
http://nerds.airbnb.com/redshift-performance-cost/
Similar to multi-tier web-app-data architectures
Concept of a “data bus” or “data pipeline”
Best practices
BDT318 - Netflix Keystone: How Netflix Handles Data Streams Up to 8 Million Events Per Second
Use Kinesis or Kafka for stream storage
Use the appropriate stream processing tool:
KCL – simple, only for Kinesis
Apache Storm – general purpose
Spark Streaming – windowing, MLlib, stateful
AWS Lambda – fully managed, no servers, stateless (see the sketch after this list)
Use Amazon SNS for alerts
Use Amazon ML for predictions
Use a managed database/cache for state management
Use a visualization tool for KPI visualization
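A minimal sketch combining two of these recommendations: a Kinesis-triggered AWS Lambda function that publishes Amazon SNS alerts. The topic ARN and the alert rule on an "amount" field are hypothetical.

```python
# Lambda handler for a Kinesis event source: decode each record and
# publish an SNS alert when a (hypothetical) spending threshold is crossed.
import base64
import json
import boto3

sns = boto3.client("sns")
ALERT_TOPIC = "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # hypothetical

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("amount", 0) > 10000:  # hypothetical alert rule
            sns.publish(
                TopicArn=ALERT_TOPIC,
                Subject="Spending alert",
                Message=json.dumps(payload),
            )
```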
Lambda Architecture best practices
Use Kinesis or Kafka for stream storage
Maintain an immutable (append-only) raw master dataset in Amazon S3 – use the Amazon Kinesis S3 connector or Firehose (see the sketch after this list)
Use appropriate batch processing technologies for creating batch views
Use appropriate stream processing technology for real-time views
Use a serving layer with an appropriate database to serve downstream applications
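One way to keep the raw master dataset append-only, as suggested above, is to push every raw event through Amazon Kinesis Firehose, which batches and delivers it to S3. A minimal sketch with a hypothetical delivery stream name:

```python
# Append-only raw master data: every event is added via Firehose,
# never updated in place, so S3 holds the immutable log.
import json
import boto3

firehose = boto3.client("firehose")

def append_raw_event(event: dict) -> None:
    firehose.put_record(
        DeliveryStreamName="raw-master-to-s3",  # hypothetical stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

append_raw_event({"user_id": "u-123", "action": "click", "ts": 1454284800})
```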