by Darin Briskman, Technical Evangelist, AWS
SQL is a powerful tool for querying data, but it doesn't cover everything you might need. Sometimes the precision of SQL is a limitation that can be overcome by using the flexibility and inherent ranking of search. Learn how to use AWS services to create fully managed solutions using Amazon Aurora and Amazon Elasticsearch Service to combine the power of query and search. Level: 200
by Kwesi Edwards, Business Development Manager, AWS
Database migration doesn’t need to be difficult or time-consuming. Learn how AWS Database Migration Service provides an easy, secure migration from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon DynamoDB, and EC2 databases, with minimal downtime. We’ll also see how the AWS Schema Conversion Tool automatically converts your schema and a majority of the custom code, so you can get up and running in the cloud quickly and inexpensively. We’ll discuss alternative data migration strategies for special use cases. Level 200
by Darin Briskman, Technical Evangelist, AWS
Elasticsearch is the most popular open-source search and analytics engine - it's easy to use, but not always easy to configure and manage. Learn about Amazon's fully managed service that provides easier deployment, operation, and scaling for Elasticsearch. Level: 200
ABD324_Migrating Your Oracle Data Warehouse to Amazon Redshift Using AWS DMS ...Amazon Web Services
Customers that have Oracle Data warehouses find them complex and expensive to manage. Most are struggling with data load and performance issues. They are looking to migrate to something which is easy to manage, cost effective, and improves their query performance. Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools. Migrating your Oracle data warehouse to Amazon Redshift can substantially improve query and data load performance, increase scalability, and save costs. This workshop leverages AWS Database Migration Service and AWS Schema Conversion Tool to migrate an existing Oracle data warehouse to Amazon Redshift. When migrating your database from one engine to another, you have two major things to consider: the conversion of the schema and code objects, and the migration and conversion of the data itself. You can convert schema and code with AWS SCT and migrate data with AWS DMS. AWS DMS helps you migrate your data easily and securely with minimal downtime. Prerequisites: Have an AWS account with IAM admin permissions and sufficient limits for AWS resources above, with a comfortable working knowledge of AWS console, relational databases (Oracle) and Amazon Redshift.
SQL is a powerful tool for querying data, but it doesn't cover everything you might need. Sometimes the precision of SQL is a limitation that can be overcome by using the flexibility and inherent ranking of search. Learn how to use AWS services to create fully managed solutions using Amazon Aurora and Amazon Elasticsearch Service to combine the power of query and search.
DynamoDB queries enable consistent low latency at any workload, using the partition key, sort key, local secondary indexes, and global secondary indexes. Amazon Elasticsearch Service enables flexible search, including ranking and aggregation. Adding Elasticsearch to DynamoDB opens new capabilities to combine the power of query and search. Learn how Amazon.com uses this combination and how you can use it, too
Visualize your data in Data Lake with AWS Athena and AWS Quicksight Hands-on ...Amazon Web Services
Level 200: Visualize Your Data in Data Lake with AWS Athena and AWS Quicksight
Nowadays, enterprises are building data lakes that store lots of structured and unstructured data for analysis. But it takes a lot of time to build the data modeling and infrastructure required. How to make quick data queries without servers and databases is the next big question for every enterprise.
In this workshop, eCloudvalley, the first and only Premier Consulting Partner in GCR, will demonstrate how to use serverless architecture to visualize your data using Amazon Athena and Amazon Quicksight.
You can easily query and visualize the data in your S3 buckets and get business insights with the combination of these two services. You can also build business reports with other tools such as AWS IoT and Amazon Kinesis Firehose.
Reasons to Attend:
Learn how to quickly query thousands of data objects on S3 with serverless Amazon Athena
Learn how to use Amazon QuickSight to retrieve information from your database quickly and create detailed reports
Serverless Big Data Analytics with Amazon Athena and QuickSight - Amazon Web Services
Check out how you can easily query raw data in various formats in Amazon S3, transform it into a canonical form, analyze it, and build dashboards to get more insights from your data.
Matt Nowina, AWS Toronto Enterprise Solutions Architect, takes us on an overview of Cloud Storage solutions, data migration solutions and strategies, and practical examples of how to move data to the Cloud.
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes, and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
Migrating Your Oracle Database to PostgreSQL - AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn about the capabilities of the PostgreSQL database
- Learn about PostgreSQL offerings on AWS
- Learn how to migrate from Oracle to PostgreSQL with minimal disruption
ABD206-Building Visualizations and Dashboards with Amazon QuickSightAmazon Web Services
Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from our data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization will help you gain a deeper understanding in a much quicker timeframe. In this session, we will show you how to quickly and easily visualize your data using Amazon QuickSight. We will show you how you can connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data.
Getting Started with the Hybrid Cloud: Enterprise Backup and RecoveryAmazon Web Services
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing B&R processes. Services mentioned: S3, Glacier, Snowball, 3rd party partners, storage gateway, and ingestion services.
Learning Objectives:
- Learn AWS architectural best practices
- Measure your architecture against pillars of security, reliability, cost, performance, and operations
- Build a plan to remediate and improve your architecture
Modernising your Applications on AWS: AWS SDKs and Application Web Services –...Amazon Web Services
In this session, you learn about the easy-to-use abstractions included in the AWS SDK for .NET and the AWS SDK for Java. We demonstrate how the AWS Toolkits for Visual Studio and Eclipse help streamline the application development process. You also see how to make use of AWS services such as DynamoDB, SQS and S3 in your applications running on AWS. We deep-dive on Amazon DynamoDB to showcase the features of the SDKs and how you can build advanced applications on top of DynamoDB, using the AWS SDKs.
SRV334-Making Things Right with AWS Config Rules and AWS LambdaAmazon Web Services
Custom rules created with AWS Config and AWS Lambda enable organizations to inspect, assess, and remediate changes to AWS resources. These tools provide the development speed and flexibility required for your team to quickly start and finish a job before it becomes an issue for your client. In this workshop, you practice using AWS Lambda to design and implement the AWS Config rules that you think an organization should have ready at a moment’s notice before their next client contacts them about an issue.
When evaluating and planning migrating your data from on premises to the Cloud, you might encounter physical limitations. Amazon offers a suite of tools to help you surmount these limitations by moving data using networks, roads, and technology partners. In this session, we discuss how to move large amounts of data into and out of the Cloud in batches, increments, and streams.
Citrix Moves Data to Amazon Redshift Fast with Matillion ETLAmazon Web Services
Matillion ETL, easily deployable from Amazon Web Services (AWS) Marketplace, helps Citrix collate and summarize data and augment it with more traditional business data from Microsoft SQL Server for additional context. Join our webinar to learn how organizations of any size can move data to the cloud quickly, accurately, and affordably with Matillion ETL.
Join our webinar to learn:
How Citrix moved data to Amazon Redshift with speed and accuracy.
How to make informed, business-critical decisions by analyzing data with Amazon Redshift.
How to speed time-to-value for your analytics initiatives using Matillion’s push-down ELT architecture.
Best Practices for Migrating Oracle Databases to the Cloud - AWS Online Tech ...Amazon Web Services
Learning Objectives:
- Learn how to migrate Oracle databases to the cloud
- Learn how to run additional components of the Oracle stack on AWS
- Get acquainted with other database options on AWS
Database and Analytics on the AWS Cloud - AWS Innovate TorontoAmazon Web Services
Antoine Genereux, AWS Solutions Architect, takes us on a tour of database solutions available for the AWS Cloud, and powerful analytics and business intelligence reporting tools.
Building Data Lakes That Cost Less and Deliver Results Faster - AWS Online Te...Amazon Web Services
Learning Objectives:
- Get an inside look at Amazon S3 Select and how it helps to accelerate application performance
- Learn about how Amazon Glacier Select helps you extend your data lake to archival storage
- Understand how different applications can leverage these features
The benefits of running databases in the cloud are compelling but how do you get the data there? In this session we will explore how to use the AWS Database Migration Service and the AWS Schema Conversion Tool to help you to migrate, or continuously replicate, your on-premise databases to AWS.
Speaker: Jarrod Spiga, Solutions Architect, Amazon Web Services
Artificial Intelligence on the AWS Cloud - AWS Innovate OttawaAmazon Web Services
• Overview of AI services
• Deploy a data science environment with the AWS Deep Learning AMI
• Leverage the power of Intel GPU’s with NVIDIA Tesla M60 GPUs
• Train and deploy Deep Learning models at scale
by Jon Handler, Principal Solutions Architect and Sanjay Dhar, Solutions Architect, AWS
Nearly everything in IT - servers, applications, websites, connected devices, and other things - generates discrete, time-stamped records of events called logs. Processing and analyzing these logs to gain actionable insights is log analytics. We'll look at how to use centralized log analytics across multiple sources with Amazon Elasticsearch Service.
BDA402 Deep Dive: Log Analytics with Amazon Elasticsearch ServiceAmazon Web Services
Everything generates logs. Applications, infrastructure, security ... everything. Keeping track of the flood of log data is a big challenge, yet critical to your ability to understand your systems and troubleshoot (or prevent) issues. In this session, we will use both Amazon CloudWatch and application logs to show you how to build an end-to-end log analytics solution. First, we cover how to configure an Amazon Elasticsearch Service domain and ingest data into it using Amazon Kinesis Firehose, demonstrating how easy it is to transform data with Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data and configure a secure analytics environment. We demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we dive deep into the Elasticsearch query DSL and review approaches for generating custom, ad-hoc reports.
Need to start querying data instantly? Amazon Athena is an interactive query service that makes it easy to run interactive queries on data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing your data immediately.
In this presentation, we will show you how easy Amazon Athena makes it to query your data stored in S3.
AWS Analytics Immersion Day - Build BI System from Scratch (Day1, Day2 Full V...Sungmin Kim
How to build Business Intelligence System from scratch on AWS (Day1, Day2)
------------------------------------------------------------------------------------------
These are the complete presentation materials from the Online AWS Analytics Immersion Day, held online over two days, March 18-19 (Wed-Thu), 2020.
The materials were created to explain how AWS Analytics services can be used in the process of designing a BI (Business Intelligence) system.
Target Audience
-------------------
The Online Analytics Immersion Day is aimed at the following audiences:
- Those who know the basic concepts of AWS Analytics services (e.g. Kinesis, Athena, Redshift, EMR, etc.) but want to learn how to use them and how to build a data analytics system
- Those who have built a data analytics system before but want to know how to evaluate and validate their system from an architecture perspective
Building analytics applications requires more than just one good service. It requires the ability to capture a vast amount of data and react to data changes in real time. It requires flexible tools that enable end users to work in the way they can be most productive, and that address the needs of both data consumers and data scientists. This analysis won't just be about data exploration and reports, but must be able to support the largest scale, complex machine and deep learning models imaginable. Across it all, strong governance, security, and cataloguing are essential. In this session, come hear how to build a full stack analytics application using AWS Services. We'll see how to capture static and dynamic data in real time, and react to data changes. We'll see AWS Services which perform analytics from drag-and-drop, through simple query-on-files, and into exascale data science. At the end, we'll have a data lake architecture that will meet the demands of the most sophisticated analytics customers for many years to come.
AWS Speaker: Ian Robinson, Specialist Solution Architect, Big Data and Analytics, EMEA - Amazon Web Services
Log Analytics with Amazon Elasticsearch Service and Amazon Kinesis - March 20...Amazon Web Services
Log analytics is a common big data use case that allows you to analyze log data from websites, mobile devices, servers, sensors, and more for a wide variety of applications including digital marketing, application monitoring, fraud detection, ad tech, gaming, and IoT. In this tech talk, we will walk you step-by-step through the process of building an end-to-end analytics solution that ingests, transforms, and loads streaming data using Amazon Kinesis Firehose, Amazon Kinesis Analytics and AWS Lambda. The processed data will be saved to an Amazon Elasticsearch Service cluster, and we will use Kibana to visualize the data in near real-time.
Learning Objectives:
1. Reference architecture for building a complete log analytics solution
2. Overview of the services used and how they fit together
3. Best practices for log analytics implementation
AWS re:Invent 2016: Real-Time Data Exploration and Analytics with Amazon Elas...Amazon Web Services
Elasticsearch is a fully featured search engine used for real-time analytics, and Amazon Elasticsearch Service makes it easy to deploy Elasticsearch clusters on AWS. With Amazon ES, you can ingest and process billions of events per day, and explore the data using Kibana to discover patterns. In this session, we use Apache web logs as example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data into it using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we dive deep into the Elasticsearch query DSL and review approaches for generating custom, ad-hoc reports.
Amazon Kinesis Firehose was launched at the end of 2015 and provides an easy way for customers to ingest streaming data into Amazon Redshift, Amazon Elasticsearch Service, and Amazon S3. In 2016, we introduced new features that help customers implement in-line processing within Kinesis Firehose using AWS Lambda. Amazon Kinesis Firehose makes it easy to load real-time, streaming data into AWS without having to build custom stream processing applications. This session is designed for developers, data engineers, and analysts who are looking to load, analyze, and get powerful insights from real-time streaming data using existing analytics tools. We will explore key features and walk through how to use Kinesis Firehose to ingest streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
From Data Collection to Actionable Insights in 60 Seconds: AWS Developer Work...Amazon Web Services
From Data Collection to Actionable Insights in 60 Seconds: AWS Developer Workshop - Web Summit 2018
Columnar data formats such as Parquet and ORC are designed to optimize both query performance and costs for analytics scenarios. On the other hand, serverless computing platforms such as AWS Lambda allow you to run highly scalable applications without provisioning or managing servers. The combination of columnar storage and serverless computing can drastically simplify many of the pain points related to big data analytics, data collection, data exploration, and ETL orchestration, while at the same time reducing the total cost of ownership.
Speaker: Alex Casalboni - Technical Evangelist, AWS
Your data has value for multiple business functions in your organization. Shorten your time to analytics and make faster, better decisions based on data.
In this session you will learn how you can access your data from a myriad of tools such as multiple EMR clusters, Athena & Redshift.
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30...Amazon Web Services
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes, and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
Understanding AWS Managed Database and Analytics Services | AWS Public Sector...Amazon Web Services
The world is creating more data in more ways than ever before. The average internet user in 2017 generates 1.5GB of data per day, with the rate doubling every 18 months. A single autonomous vehicle can generate 4TB per day. Each smart manufacturing plant generates 1PB per day. Storing, managing, and analyzing this data requires integrated database and analytic services that provide reliability and security at scale. AWS offers a range of managed data services that let customers focus on making data useful, including Amazon Aurora, RDS, DynamoDB, Redshift, Spectrum, ElastiCache, Kinesis, EMR, Elasticsearch Service, and Glue. In this session, we discuss these services, share our vision for innovation, and show how our customers use these services today. Learn More: https://aws.amazon.com/government-education/
Data warehousing is a critical component for analysing and extracting actionable insights from your data. Amazon Redshift allows you to deploy a scalable data warehouse in a matter of minutes and start analysing your data right away using your existing business intelligence tools.
by Darin Briskman, Technical Evangelist, AWS
DynamoDB queries enable consistent low latency at any workload, using the partition key, sort key, local secondary indexes, and global secondary indexes. Amazon Elasticsearch Service enables flexible search, including ranking and aggregation. Adding Elasticsearch to DynamoDB opens new capabilities to combine the power of query and search. Learn how Amazon.com uses this combination and how you can use it, too. Level: 200
In this session, storage experts will walk you through the object storage offering, Amazon S3, a bulk data repository that can deliver 99.999999999% durability and scale past trillions of objects worldwide. Learn about the different ways you can accelerate data transfer to S3 and get a close look at some of the new tools available for you to secure and manage your data more efficiently. Announced at re:Invent 2016, see how you can use Amazon Athena with S3 to run serverless analytics on your data and as a bonus, walk away with some code snippets to use with S3. Hear AWS customers talk about the solutions they have built with S3 to turn their data into a strategic asset, instead of just a cost center. And bring your toughest questions to our experts on hand and walk away that much smarter on how to use object storage from AWS.
Serverless Analytics with Amazon Redshift Spectrum, AWS Glue, and Amazon Quic...Amazon Web Services
Learning Objectives:
- Understand how to build a serverless big data solution quickly and easily
- Learn how to discover and prepare all your data for analytics
- Learn how to query and visualize analytics on all your data to create actionable insights
Similar to Adding Search to Relational Databases (20)
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...Amazon Web Services
The variety and quantity of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services allow us to break through these limits.
Let's see how it is possible to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation with the goal of increasing its pace of innovation. Over that period we learned how changing our approach to application development allowed us to greatly increase agility and release speed and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us the question – how to monetise Open APIs, simplify Fintech integrations and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to choose among the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automatizza la gestione e i deployment del...Amazon Web Services
With the traditional approach to IT, for many years it was difficult to implement DevOps techniques, which have often relied on manual activities, occasionally leading to application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, ensuring greater system reliability and delivering significant improvements in business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support Group Policy management, authentication, and authorization. In this session, we discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14, from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Many organizations take advantage of the cloud by migrating their Oracle workloads, gaining significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
Create your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions or to track the supply-chain flow of their products.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering an exceptional user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to the users of its web portal.
Oracle Databases and VMware Cloud™ on AWS: myths to debunk - Amazon Web Services
Many organizations take advantage of the cloud by migrating their Oracle workloads, gaining significant benefits in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud, take a closer look at the architecture, and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
2. AWS Data Services to Accelerate Your Move to the Cloud
Databases to Elevate your Apps (Relational, Non-Relational & In-Memory): RDS Open Source, RDS Commercial, Aurora, DynamoDB & DAX, ElastiCache
Analytics to Engage your Data (Inline, Data Warehousing, Reporting, Data Lake): EMR, Amazon Redshift, Redshift Spectrum, Athena, Elasticsearch Service, Glue, QuickSight
Amazon AI to Drive the Future: Lex, Polly, Rekognition, Machine Learning, Deep Learning (MXNet)
Migration for DB Freedom: Database Migration, Schema Conversion
3. AWS Data Services to Accelerate Your Move to the Cloud (the same service overview, repeated)
4. Amazon RDS: Cheaper, Easier, Better
• Multi-engine support – Open Source, Commercial, Amazon Aurora
• Automated provisioning, patching, scaling, backup/restore, failover
• Use with General Purpose SSD or Provisioned IOPS SSD storage
• High availability with RDS Multi-AZ
5. High Availability Multi-AZ Deployments
• Enterprise-grade fault tolerant solution for production databases
• Automatic failover
• Synchronous replication
• Inexpensive & enabled with one click
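As a rough illustration (the identifiers and values below are placeholders, not from the deck), Multi-AZ can be requested at instance creation from the AWS CLI with the --multi-az flag:
# Create a MySQL RDS instance with a synchronous standby in another AZ
aws rds create-db-instance \
  --db-instance-identifier mydb \
  --engine mysql \
  --db-instance-class db.m4.large \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password 'example-password' \
  --multi-az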
6. Amazon Aurora: Speed and Availability of Commercial, Cost-Effectiveness of Open Source
• Up to 5x the performance of high-end MySQL
• Highly available and durable
• MySQL and PostgreSQL compatible
• 1/10th the cost of commercial-grade databases
• Fastest growing AWS service, ever
7. Aurora - Faster Because it is Built for AWS
MySQL with Replica: every transaction produces several types of write (binlog, data, double-write log, FRM files); storage is synchronously mirrored across two data centers (DC 1, DC 2), and the primary instance replicates asynchronously to a replica instance.
Amazon Aurora: the primary and replica instances in AZ 1, AZ 2, and AZ 3 share a distributed storage layer that acknowledges writes on a 4/6 quorum, with continuous backup to Amazon S3.
MySQL I/O profile for a 30 min. Sysbench run: 780K transactions, 7,388K I/Os per million transactions (excludes mirroring, standby), an average of 7.4 I/Os per transaction.
Aurora I/O profile for a 30 min. Sysbench run: 27,378K transactions (35x MORE), 0.95 I/Os per transaction (7.7x LESS).
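Those figures line up: 27,378K / 780K ≈ 35x more transactions in the same 30 minutes, and 7.4 / 0.95 ≈ 7.7x fewer I/Os per transaction.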
10. Amazon Elasticsearch Service Data Flow
(Data flow: requests reach the Elasticsearch API through Amazon Route 53 and Elastic Load Balancing; access is controlled with AWS IAM, metrics go to Amazon CloudWatch, and API calls are audited with AWS CloudTrail)
11. Ways and means
• All data eventually enters at the domain endpoint
• Data can come in single documents (PUT) or batches (_bulk)
• Some services have direct integration
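As a minimal sketch (hostname stands in for your domain endpoint; the index, type, and field values are invented for illustration):
# Index one document with PUT
curl -XPUT 'http://hostname/logs/log/1' -H 'Content-Type: application/json' -d'
{ "host": "199.72.81.55", "verb": "GET", "status": 200 }'
# Index several documents in one request with _bulk (newline-delimited action/document pairs, ending with a newline)
curl -XPOST 'http://hostname/_bulk' -H 'Content-Type: application/json' -d'
{ "index": { "_index": "logs", "_type": "log" } }
{ "host": "199.72.81.55", "verb": "GET", "status": 200 }
{ "index": { "_index": "logs", "_type": "log" } }
{ "host": "d104.aa.net", "verb": "GET", "status": 404 }
'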
13. Kinesis Firehose delivery architecture with transformations
(Architecture: a data source sends source records to the Firehose delivery stream; a data transformation function produces transformed records that are delivered to Amazon Elasticsearch Service; source records, transformation failures, and delivery failures are written to an S3 bucket)
16. Elasticsearch works with structured JSON
{
  "name" : {
    "first" : "Jon",
    "last" : "Smith"
  },
  "age" : 26,
  "city" : "palo alto",
  "years_employed" : 4,
  "interests" : [
    "guitar",
    "sports"
  ]
}
• Documents contain fields – name/value pairs
• Fields can nest
• Value types include text, numerics, dates, and geo objects
• Field values can be single or array
• When you send documents to Elasticsearch they should arrive as JSON*
*ES 5 can work with unstructured documents
17. If your data is not already in structured JSON, you must transform it, creating structured JSON that Elasticsearch "understands"
18. The most basic way to transform data
• Run a script in Amazon EC2, Lambda, etc. that reads data from your data source, creates JSON documents, and ships them to Amazon Elasticsearch Service directly
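A minimal sketch of that approach in shell, assuming an Apache-style log line and placeholder endpoint and index names:
# Parse one Apache common-log line into JSON and ship it to the domain endpoint
line='199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245'
host=$(echo "$line" | awk '{print $1}')                           # client host
verb=$(echo "$line" | awk -F'"' '{print $2}' | awk '{print $1}')  # HTTP verb
status=$(echo "$line" | awk '{print $(NF-1)}')                    # response status
size=$(echo "$line" | awk '{print $NF}')                          # response size
curl -XPOST 'http://hostname/logs/log' -H 'Content-Type: application/json' \
  -d "{\"host\":\"$host\",\"verb\":\"$verb\",\"status\":$status,\"size\":$size}"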
19. Logstash simplifies transformation
• Logstash is open-source ETL over streams. Run it colocated with your application or read from your source
• Many input plugins and output plugins make it easy to connect to Logstash
• Grok pattern matching to pull out values and re-write
20. Elasticsearch 5 ingest processors
When you index documents, you can specify a pipeline. The pipeline can have a series of processors that pre-process the data before indexing.
Twenty processors are available; some are simple:
{ "append":
  { "field": "field1",
    "value": ["item2", "item3", "item4"] } }
Others are more complex, like the Grok processor for regex with aliased expressions.
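For example (the pipeline name, index, and field names here are assumptions, not from the deck), a grok processor can parse the raw log line at index time when the request names the pipeline:
# Create an ingest pipeline that groks an Apache common-log line out of the "message" field
curl -XPUT 'http://hostname/_ingest/pipeline/apache_logs' -H 'Content-Type: application/json' -d'
{
  "description": "parse Apache common log lines",
  "processors": [
    { "grok": { "field": "message", "patterns": ["%{COMMONAPACHELOG}"] } }
  ]
}'
# Index a document through the pipeline
curl -XPUT 'http://hostname/logs/log/1?pipeline=apache_logs' -H 'Content-Type: application/json' -d'
{ "message": "199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] \"GET /history/apollo/ HTTP/1.0\" 200 6245" }'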
21. Firehose transformations add robust delivery
• Inline calls to Lambda for free-form changes to the underlying data
• Failed transforms tracked and delivered to S3
(Architecture: a data source sends source records to the Firehose delivery stream; a data transformation function produces transformed records for Amazon Elasticsearch Service; source records, transformation failures, and delivery failures are written to an S3 bucket)
22. Firehose transformations add robust delivery
• Inline calls to Lambda for free-form changes to the underlying data
• Failed transforms tracked and delivered to S3
(Architecture: as above, with transformed records also staged in an intermediate Amazon S3 bucket and transformation and delivery failures written to a backup S3 bucket)
24. Cluster is a collection of nodes
An Amazon ES cluster consists of dedicated master nodes and data nodes; the data nodes serve queries and updates. (Diagram: shards 1, 2, and 3 and their replicas distributed across Instance 1, Instance 2, and Instance 3.)
25. Data pattern
One index per day (logs_01.21.2017 through logs_01.27.2017). Each index has multiple shards (Shard 1, Shard 2, Shard 3); each shard contains a set of documents; each document contains a set of fields and values (host, ident, auth, timestamp, etc.).
26. Indices and Mappings
Index: product
Type: cellphone – documentId; fields: make (keyword), inventory (int), location (geo point)
Type: reviews – documentId; fields: make (keyword), review (text), rating (float), date (date)
Documents are addressed as http://hostname/product/cellphone/1 and http://hostname/product/reviews/1
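A hedged example of putting one document into each type (the field values and coordinates are invented for illustration):
# Index a cellphone document and a review document under the product index
curl -XPUT 'http://hostname/product/cellphone/1' -H 'Content-Type: application/json' -d'
{ "make": "acme", "inventory": 13, "location": { "lat": 37.44, "lon": -122.16 } }'
curl -XPUT 'http://hostname/product/reviews/1' -H 'Content-Type: application/json' -d'
{ "make": "acme", "review": "Great battery life", "rating": 4.5, "date": "2017-01-21" }'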
28. Shards
- Indexes are split into multiple shards
- Primary shards are defined at index creation
- Defaults to 5 primary shards and 1 replica shard
- Shards allow:
  - Horizontal scale
  - Distributing and parallelizing operations to increase throughput
  - Creating replicas to provide high availability in case of failures
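These defaults can be overridden when the index is created; a minimal sketch (counts chosen for illustration):
# Create the index with explicit primary and replica shard counts
curl -XPUT 'http://hostname/product' -H 'Content-Type: application/json' -d'
{ "settings": { "number_of_shards": 6, "number_of_replicas": 2 } }'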
29. Shards … contd
- A shard is a Lucene index
- The number of replica shards can be changed on the fly, but not the number of primary shards
- To change the number of primary shards, the index needs to be re-created
- Shards are automatically balanced when the cluster is re-sized
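For instance, the replica count can be changed on a live index through the _settings API (a sketch; the index name is assumed from the earlier examples):
# Raise the replica count on an existing index without re-creating it
curl -XPUT 'http://hostname/product/_settings' -H 'Content-Type: application/json' -d'
{ "index": { "number_of_replicas": 2 } }'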
30. 199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
This log line becomes a document with the fields host, ident, auth, timestamp, verb, request, status, and size.
Elasticsearch creates an index for each field, containing the decomposed values of those fields. Each field index maps values (199.72.81.55, unicomp6.unicomp.net, 199.120.110.21, burger.letters.com, 205.212.115.106, d104.aa.net, ...) to postings, the lists of document IDs that contain them (1, 4, 8, 12, 30, 42, 58, 100...).
31. host:199.72.81.55 AND verb:GET
Look up the postings for 199.72.81.55 (1, 4, 8, 12, 30, 42, 58, 100, ...) and for GET (1, 4, 9, 50, 58, 75, 90, 103, ...), AND-merge them (1, 4, 58), score the matches (1.2, 3.7, 0.4), and sort by score (4, 1, 58). The index data structures support fast retrieval and merging. Scoring and sorting support best-match retrieval.
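A hedged Query DSL equivalent of that lookup (the index and field names are assumed from the earlier log example, not taken from the deck):
# Find documents where host is 199.72.81.55 AND verb matches GET
curl -XPOST 'http://hostname/logs/_search' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "term":  { "host": "199.72.81.55" } },
        { "match": { "verb": "GET" } }
      ]
    }
  }
}'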
32. Index and Document Command Examples
Create an index called product:
$ curl -XPUT 'http://hostname/product/'
Get the list of indices:
$ curl 'http://hostname/_cat/indices'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open product 95SQ4TS 5 1 0 0 260b 260b
34. What happens at Index Operation
HTTP PUT - http://hostname/product/cellphone/1
1. The indexing operation arrives at a node in the Elasticsearch cluster
2. The target shard is determined based on hashing the document ID
3. The current node forwards the document to the node holding the primary shard
4. The primary shard ensures all replica shards replay the same indexing operation
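Concretely, the target shard follows the routing formula shard_num = hash(_routing) % number_of_primary_shards, where _routing defaults to the document ID; this is also why the number of primary shards cannot change without re-creating the index.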
35. Mappings
1. Mappings are used to define types of documents
2. They define the various fields in a document
3. Mapping types:
   1. Core - text or keyword, numeric, date, boolean
   2. Arrays and multi-fields - arrays, e.g. "tags": ["blue", "red"]; multi-fields index the same data with different settings
   3. Pre-defined fields - _ttl, _size; _uid, _id, _type, _index; _all, _source
36. Mapping command examples
Create an index called product with a mapping, cellphone, and a field make of type text:
curl -XPUT 'http://hostname/product' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "cellphone": {
      "properties": {
        "make": {
          "type": "text"
        }
      }
    }
  }
}'
37. Mapping command examples
Add a new mapping, reviews, with fields review, as text, and rating, as integer, to the existing index product:
curl -XPUT 'http://hostname/product/_mapping/reviews' -H 'Content-Type: application/json' -d'
{
  "properties": {
    "review": {
      "type": "text"
    },
    "rating": {
      "type": "integer"
    }
  }
}'
38. Mapping command examples
Add a new field, inventory, as integer, to the existing mapping cellphone in index product:
curl -XPUT 'http://hostname/product/_mapping/cellphone' -H 'Content-Type: application/json' -d'
{
  "properties": {
    "inventory": {
      "type": "integer"
    }
  }
}'