This document discusses Cloudera Search, which integrates Apache Solr with Cloudera's distribution of Apache Hadoop (CDH) to provide interactive search capabilities. It describes the architecture of Cloudera Search, including components like Solr, SolrCloud, and Morphlines for extraction and transformation. Methods for indexing data in near real time using Flume and in batch using MapReduce are presented. The document also covers querying, security features like Kerberos authentication and collection-level authorization using Sentry, and concludes by describing how to obtain Cloudera Search.
This document describes LEGO's data-driven growth hacking powered by big data. It discusses how LEGO uses a data lake and log processing engine to process hundreds of billions of log lines from various sources at scale. The data is then profiled, cleansed, transformed and loaded into data warehouses to enable self-service analysis and predictive data apps for executives, analysts and product managers.
Big Data Day LA 2016 / Use Case Driven track - Hydrator: Open Source, Code-Fre... (Data Con LA)
This talk will present how to build data pipelines with no code using the open-source, Apache 2.0, Cask Hydrator. The talk will continue with a live demonstration of creating data pipelines for two use cases.
A Big Data Lake Based on Spark for BBVA Bank (Oscar Mendez, STRATIO) (Spark Summit)
This document describes BBVA's implementation of a Big Data Lake using Apache Spark for log collection, storage, and analytics. It discusses:
1) Using Syslog-ng for log collection from over 2,000 applications and devices, distributing logs to Kafka.
2) Storing normalized logs in HDFS and performing analytics using Spark, with outputs to analytics, compliance, and indexing systems.
3) Choosing Spark because it allows interactive, batch, and stream processing with one system using RDDs, SQL, streaming, and machine learning.
Flink in Zalando's world of Microservices (ZalandoHayley)
Apache Flink Meetup at Zalando Technology, May 2016
By Javier Lopez & Mihail Vieru, Zalando
In this talk we present Zalando's microservices architecture and introduce Saiki – our next generation data integration and distribution platform on AWS. We show why we chose Apache Flink to serve as our stream processing framework and describe how we employ it for our current use cases: business process monitoring and continuous ETL. We then have an outlook on future use cases.
Architecting a Next Generation Data Platform (hadooparchbook)
This document discusses a presentation on architecting Hadoop application architectures for a next generation data platform. It provides an overview of the presentation topics which include a case study on using Hadoop for an Internet of Things and entity 360 application. It introduces the key components of the proposed high level architecture including ingesting streaming and batch data using Kafka and Flume, stream processing with Kafka streams and storage in Hadoop.
Rocketfuel processes over 120 billion ad auctions per day and needs to detect fraud in real time to prevent losses. They developed Helios, which ingests event data from Kafka and HDFS into Storm in real time, joins the streams in HBase, then runs MapReduce jobs hourly to populate an OLAP cube for analyzing feature vectors and detecting fraud patterns. This architecture on Hadoop allows them to easily scale real-time processing and experiment with different configurations to quickly react to fraud.
This document discusses end-to-end processing of 3.7 million telemetry events per second using a lambda architecture at Symantec. It provides an overview of Symantec's security data lake infrastructure, the telemetry data processing architecture using Kafka, Storm and HBase, tuning targets for the infrastructure components, and performance benchmarks for Kafka, Storm and Hive.
Presto is an open source distributed SQL query engine that allows interactive analysis of data across multiple data stores. At Facebook, Presto is used for ad-hoc queries of their Hadoop data warehouse, which processes trillions of rows and scans petabytes of data daily. Presto's low latency also makes it suitable for powering analytics in user-facing products. New features of Presto include improved SQL support, performance optimizations, and connectors to additional data sources like Redis and MongoDB.
The Future of Hadoop by Arun Murthy, PMC Apache Hadoop & Cofounder Hortonworks (Data Con LA)
Arun Murthy will discuss the future of Hadoop and what the big data world will start to look like next. With the advent of tools like Spark and Flink and the containerization of apps using Docker, there is a lot of momentum currently in this space. Arun will share his thoughts and ideas on what the future holds for us.
Bio:
Arun C. Murthy
Arun is an Apache Hadoop PMC member and has been a full-time contributor to the project since its inception in 2006. He is also the lead of the MapReduce project and has focused on building NextGen MapReduce (YARN). Prior to co-founding Hortonworks, Arun was responsible for all MapReduce code and configuration deployed across the 42,000+ servers at Yahoo!. In essence, he was responsible for running Apache Hadoop's MapReduce as a service for Yahoo!. He also jointly holds the current world sorting record using Apache Hadoop. Follow Arun on Twitter: @acmurthy.
Embeddable data transformation for real time streams (Joey Echeverria)
This document summarizes Joey Echeverria's presentation on embeddable data transformation for real-time streams. Some key points include:
- Stream processing requires the ability to perform common data transformations like filtering, extracting, projecting, and aggregating on streaming data.
- Tools like Apache Storm, Spark, and Flink can be used to build stream processing topologies and jobs, but also have limitations for embedding transformations.
- Rocana Transform provides a library and DSL for defining reusable data transformation configurations that can be run within different stream processing systems or in batch jobs.
- The library supports common transformations as well as custom actions defined through Java. Configurations can extract metrics, parse logs, and more.
This document summarizes the work done by Yahoo engineers to optimize performance of queries on a mobile analytics data mart hosted on Apache Hive. They implemented several techniques like using Tez, vectorized query execution, map-side aggregations, and ORC file format, which provided significant performance boosts. For high cardinality partitioned tables, they leveraged sketching which reduced query times from over 100 seconds to under 25 seconds. They also implemented a data mart in a box solution for easier setup of custom data marts and funnels analysis using UDFs.
Hadoop application architectures - using Customer 360 as an example (hadooparchbook)
Hadoop application architectures - using Customer 360 (more generally, Entity 360) as an example. By Ted Malaska, Jonathan Seidman and Mark Grover at Strata + Hadoop World 2016 in NYC.
Today enterprises desire to move more and more of their data lakes to the cloud to help them execute faster, increase productivity, drive innovation while leveraging the scale and flexibility of the cloud. However, such gains come with risks and challenges in the areas of data security, privacy, and governance. In this talk we cover how enterprises can overcome governance and security obstacles to leverage these new advances that the cloud can provide to ease the management of their data lakes in the cloud. We will also show how the enterprise can have consistent governance and security controls in the cloud for their ephemeral analytic workloads in a multi-cluster cloud environment without sacrificing any of the data security and privacy/compliance needs that their business context demands. Additionally, we will outline some use cases and patterns as well as best practices to rationally manage such a multi-cluster data lake infrastructure in the cloud.
Speaker:
Jeff Sposetti, Product Management, Hortonworks
Large Scale Lakehouse Implementation Using Structured Streaming (Databricks)
Business leads, executives, analysts, and data scientists rely on up-to-date information to make business decisions, adjust to the market, meet the needs of their customers, and run effective supply chain operations.
Come hear how Asurion used Delta, Structured Streaming, AutoLoader, and SQL Analytics to improve production data latency from day-minus-one to near real time. Asurion's technical team will share battle-tested tips and tricks you only get at a certain scale. Asurion's data lake executes 4,000+ streaming jobs and hosts over 4,000 tables in its production data lake on AWS.
Storage Requirements and Options for Running Spark on Kubernetes (DataWorks Summit)
In a world of serverless computing, users tend to be frugal when it comes to expenditure on compute, storage, and other resources, and paying for resources that aren't in use becomes a significant factor. Offering Spark as a service in the cloud presents unique challenges, and running Spark on Kubernetes raises many of them, especially around storage and persistence. Spark workloads have distinctive storage requirements for intermediate data, long-term persistence, and shared file systems, and those requirements become even tighter when Spark must be offered as a service and enterprises must manage GDPR and other compliance regimes such as ISO 27001 and HIPAA certifications.
This talk covers the challenges involved in providing serverless Spark clusters and shares the specific issues one can encounter when running large Kubernetes clusters in production, especially scenarios related to persistence.
The talk will help people using Kubernetes or the Docker runtime in production understand the various storage options available, which of them are most suitable for running Spark workloads on Kubernetes, and what more can be done.
This document discusses combining machine learning frameworks like TensorFlow with Apache Spark. It describes how Spark can be used to schedule and distribute machine learning tasks across a cluster in order to speed up model training. Specific examples are provided of using TensorFlow for neural network training on image data and distributing those computations using Spark. The document also outlines Apache Spark MLlib and its DataFrame-based APIs for building machine learning pipelines that can be trained and deployed at scale.
Data Engineer's Lunch #55: Get Started in Data Engineering (Anant Corporation)
In Data Engineer's Lunch #55, CEO of Anant, Rahul Singh, will cover 10 resources every data engineer needs to get started or master their game.
Accompanying Blog: Coming Soon!
Accompanying YouTube: Coming Soon!
Sign Up For Our Newsletter: http://eepurl.com/grdMkn
Join Data Engineer’s Lunch Weekly at 12 PM EST Every Monday: https://www.meetup.com/Data-Wranglers-DC/events/
Cassandra.Link: https://cassandra.link/
Follow Us and Reach Us At:
Anant: https://www.anant.us/
Awesome Cassandra: https://github.com/Anant/awesome-cassandra
Email: solutions@anant.us
LinkedIn: https://www.linkedin.com/company/anant/
Twitter: https://twitter.com/anantcorp
Eventbrite: https://www.eventbrite.com/o/anant-1072927283
Facebook: https://www.facebook.com/AnantCorp/
Join The Anant Team: https://www.careers.anant.us
GCP Meetup #3 - Approaches to Cloud Native Architectures (nine)
Talk by Daniel Leahy and Nic Gibson, given at the Google Cloud Meetup on March 3, 2020, hosted by Nine Internet Solutions AG - Your Swiss Managed Cloud Service Provider.
This document discusses Dell's solutions for big data and analytics workloads. It describes Dell's portfolio for unstructured analytics including storage, servers, and reference architectures. It also outlines Dell's vision for a unified streaming and batch analytics platform called Project Nautilus that would integrate Isilon storage with real-time stream processing.
Druid is a high-performance, column-oriented distributed data store that is widely used at Oath for big data analysis. Druid has a JSON schema as its query language, making it difficult for new users unfamiliar with the schema to start querying Druid quickly. The JSON schema is designed to work with Druid's data ingestion methods, so it can provide high-performance features such as data aggregations in JSON, but many users are unable to utilize such features because they are not familiar with the specifics of how to optimize Druid queries. However, most new Druid users at Yahoo are already very familiar with SQL, and the queries they want to write for Druid can be converted to concise SQL.
We found that our data analysts wanted an easy way to issue ad-hoc Druid queries and view the results in a BI tool in a way that's presentable to nontechnical stakeholders. In order to achieve this, we had to bridge the gap between Druid, SQL, and our BI tools such as Apache Superset. In this talk, we will explore different ways to query a Druid datasource in SQL and discuss which methods were most appropriate for our use cases. We will also discuss our open source contributions so others can utilize our work. GURUGANESH KOTTA, Software Dev Eng, Oath and JUNXIAN WU, Software Engineer, Oath Inc.
Data Lake and the rise of the microservices (Bigstep)
By simply looking at structured and unstructured data, Data Lakes enable companies to understand correlations between existing and new external data - such as social media - in ways traditional Business Intelligence tools cannot.
For this you need to find out the most efficient way to store and access structured or unstructured petabyte-sized data across your entire infrastructure.
In this meetup we’ll answer the following questions:
1. Why would someone use a Data Lake?
2. Is it hard to build a Data Lake?
3. What are the main features that a Data Lake should bring in?
4. What’s the role of the microservices in the big data world?
Powering Interactive BI Analytics with Presto and Delta Lake (Databricks)
Presto, an open source distributed SQL engine, is widely recognized for its low-latency queries, high concurrency, and native ability to query multiple data sources.
Data Con LA 2020
Description
In this session, I introduce the Amazon Redshift lake house architecture which enables you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights. With a lake house architecture, you can store data in open file formats in your Amazon S3 data lake.
Speaker
Antje Barth, Amazon Web Services, Sr. Developer Advocate, AI and Machine Learning
How to Use Innovative Data Handling and Processing Techniques to Drive Alpha ... (DataWorks Summit)
For over 30 years, Parametric has been a leading provider of model-based portfolios to institutional and private investors, with unique implementation and customization expertise. Much like other cutting-edge financial services providers, Parametric operates with highly diverse, fast moving data from which they glean insights. Data sources range from benchmark providers to electronic trading participants to stock exchanges etc. The challenge is to not just onboard the data but also to figure out how to monetize it when the schemas are fast changing. This presents a problem to traditional architectures where large teams are needed to design the new ETL flow. Organizations that are able to quickly adapt to new schemas and data sources have a distinct competitive advantage.
In this presentation and demo, architects from Parametric, Chris Gambino & Vamsi Chemitiganti, will present the data architecture designed in response to this business challenge. We discuss the approach (and trade-offs) to pooling, managing, and processing the data using the latest techniques in data ingestion & pre-processing. The overall best practices in creating a central data pool are also discussed, with the goal of giving quantitative analysts the most accurate and up-to-date information for their models to work on. Attendees will be able to draw on their experiences, from both a business and a technology standpoint, on not just creating a centralized data platform but also being able to distribute it to different units.
Lambda-less Stream Processing @Scale in LinkedIn
The document discusses challenges with stream processing including data accuracy and reprocessing. It proposes a "lambda-less" approach using windowed computations and handling late and out-of-order events to produce eventually correct results. Samza is used in LinkedIn's implementation to store streaming data locally using RocksDB for processing within configurable windows. The approach avoids code duplication compared to traditional lambda architectures while still supporting reprocessing through resetting offsets. Challenges remain in merging online and reprocessed results at large scale.
Delivering Apache Hadoop for the Modern Data Architecture (Hortonworks)
Join Hortonworks and Cisco as we discuss trends and drivers for a modern data architecture. Our experts will walk you through some key design considerations when deploying a Hadoop cluster in production. We'll also share practical best practices around Cisco-based big data architectures and Hortonworks Data Platform to get you started on building your modern data architecture.
The document discusses the evolution of big data architectures from Hadoop and MapReduce to Lambda architecture and stream processing frameworks. It notes the limitations of early frameworks in terms of latency, scalability, and fault tolerance. Modern architectures aim to unify batch and stream processing for low latency queries over both historical and new data.
NoSQL (Not Only SQL) is believed to be a superset of, or sometimes an intersecting set with, relational SQL databases. The concept itself is still taking shape, but we can already say for sure: NoSQL addresses the task of storing and retrieving large volumes of data in systems under high load. There is another very important angle to the concept:
NoSQL systems can allow storing and efficiently searching unstructured or semi-structured data, like completely raw or preprocessed documents. Using one world-class document retrieval system, Apache Solr (a performant HTTP wrapper around Apache Lucene), as a reference, we will examine its use cases, horizontal and vertical scalability, faceted search, distribution and load balancing, crawling, extensibility, linguistic support, integration with relational databases, and much more.
Dmitry Kan will briefly touch upon the hot topic of cloud computing using the famous Apache Hadoop project and will help the audience see whether Solr shines through the cloud.
Use cases and examples using Apache Spark, presented at the Hadoop User Group (UK) November 2014 Hadoop Meetup
http://www.meetup.com/hadoop-users-group-uk/events/217791892/
This document provides an overview of integrating Solr with Hadoop for big data search capabilities. It discusses Lucene as the core search library that Solr is built on top of. It then covers ways Solr has been integrated with Hadoop, including putting the Solr index and transaction log directly in HDFS, running Solr on HDFS, and enabling Solr replication on HDFS. Other topics include using MapReduce for scalable index building, integrating Flume and HBase with Solr, and using Morphlines for extraction, transformation, and loading data into Solr.
A short introduction to Apache Solr:
- what the project is
- basic features (document indexing, querying)
- advanced features (faceting, highlighting...)
(Presentation in French)
Cloudera Search provides full-text search capabilities for Hadoop data by integrating Apache Solr. It allows for near real-time and batch indexing from data sources like HDFS, HBase, and Flume. Cloudera Search uses components like SolrCloud, Morphlines, and Sentry to provide distributed, scalable, and secure search across the Hadoop ecosystem.
The document discusses adding search capabilities to the Hadoop ecosystem through Cloudera Search. It provides an overview of Cloudera Search's architecture and components, which integrate Apache Solr with Cloudera Distribution of Hadoop to enable distributed, full-text search across data stored in HDFS. Key components described include HDFSDirectory, which allows Solr to read and write indexes and transaction logs to and from HDFS, and BlockDirectoryCache, which caches index file blocks in memory for performance.
1. Cloudera Search provides full-text search capabilities for Hadoop ecosystems by integrating Apache Solr. It allows batch, near real-time, and on-demand indexing of data in HDFS, HBase, and other data sources.
2. Indexing can be done through various methods like Flume for near real-time indexing, HBase indexer for indexing HBase data, and MapReduce jobs for scalable batch indexing. Extraction and mapping of data is done through the Cloudera Morphlines framework.
3. Queries can be done through the built-in Solr web UI, custom UIs like Hue, or the Solr APIs. Security features include Kerberos authentication and collection-level authorization using Sentry.
This document discusses Cloudera Search, which provides full-text search capabilities integrated with Apache Hadoop. It summarizes Cloudera Search's architecture, which uses Apache Lucene/Solr for indexing and search, Apache Flume and HBase for near real-time indexing, Apache MapReduce for batch indexing, and Apache Sentry for security. The document also discusses use cases for near real-time and batch search and concludes by encouraging questions.
Data Pipelines in Hadoop - SAP Meetup in Tel Aviv (larsgeorge)
This talk is about showing the complexity of building a data pipeline in Hadoop, starting with the technology aspect, and then correlating it to the skill sets of current Hadoop adopters.
This document provides an overview of SolrCloud on Hadoop. It discusses how SolrCloud allows for distributed, highly scalable search capabilities on Hadoop clusters. Key components that work with SolrCloud are also summarized, including HDFS for storage, MapReduce for processing, and ZooKeeper for coordination services. The document demonstrates how SolrCloud can index and query large datasets stored in Hadoop.
This document discusses storage requirements for running Spark workloads on Kubernetes. It recommends using a distributed file system like HDFS or DBFS for distributed storage and emptyDir or NFS for local temp scratch space. Logs can be stored in emptyDir or pushed to object storage. Features that would improve Spark on Kubernetes include image volumes, flexible PV to PVC mappings, encrypted volumes, and clean deletion for compliance. The document provides an overview of Spark, Kubernetes benefits, and typical Spark deployments.
This document discusses integrating Apache Solr with Apache Hadoop for big data search capabilities. It provides background on Mark Miller and the history of search on Hadoop. It outlines how Solr, Lucene, Hadoop, and related projects can be integrated to allow full-text search across large datasets in HDFS. Specific integration points discussed include allowing Solr to read and write directly to HDFS, custom directory support in Solr, replication support, and using Morphlines for extraction, transformation, and loading of data into Solr.
A talk given by Ted Dunning on February 2013 on Apache Drill, an open-source community-driven project to provide easy, dependable, fast and flexible ad hoc query capabilities.
Presented by Mark Miller, Software Developer, Cloudera
Apache Lucene/Solr committer Mark Miller talks about how Solr has been integrated into the Hadoop ecosystem to provide full text search at "Big Data" scale. This talk will give an overview of how Cloudera has tackled integrating Solr into the Hadoop ecosystem and highlights some of the design decisions and future plans. Learn how Solr is getting 'cozy' with Hadoop, which contributions are going to what project, and how you can take advantage of these integrations to use Solr efficiently at "Big Data" scale. Learn how you can run Solr directly on HDFS, build indexes with Map/Reduce, load Solr via Flume in 'Near Realtime' and much more.
SQL on Hadoop
Looking for the correct tool for your SQL-on-Hadoop use case?
There is a long list of alternatives to choose from; how do you select the correct tool?
The tool selection is always based on use case requirements.
Read more on alternatives and our recommendations.
Big Data Retrospective - STL Big Data IDEA Jan 2019 (Adam Doyle)
Slides from the STL Big Data IDEA meeting from January 2019. The presenters discussed technologies to continue using, stop using, and start using in 2019.
Here are the slides for my talk "An intro to Azure Data Lake" at Techorama NL 2018. The session was held on Tuesday October 2nd from 15:00 - 16:00 in room 7.
Practical Machine Learning for Smarter Search with Spark+Solr (Jake Mannix)
This document discusses using Apache Spark and Apache Solr together for practical machine learning and data engineering tasks. It provides an overview of Spark and Solr, why they are useful together, and then gives an example of using them together to explore a dataset of Apache mailing list archives through data visualization, clustering, classification, and recommender systems.
Practical Machine Learning for Smarter Search with Solr and Spark (Jake Mannix)
This document discusses using Apache Spark and Apache Solr together for practical machine learning and data engineering tasks. It provides an overview of Spark and Solr, why they are useful together, and then gives an example of exploring and analyzing mailing list archives by indexing the data into Solr with Spark and performing both unsupervised and supervised machine learning techniques.
Big Data Developers Moscow Meetup 1 - sql on hadoop (bddmoscow)
This document summarizes a meetup about Big Data and SQL on Hadoop. The meetup included discussions on what Hadoop is, why SQL on Hadoop is useful, what Hive is, and introduced IBM's BigInsights software for running SQL on Hadoop with improved performance over other solutions. Key topics included HDFS file storage, MapReduce processing, Hive tables and metadata storage, and how BigInsights provides a massively parallel SQL engine instead of relying on MapReduce.
Big Data Architecture Workshop - Vahid Amiri (datastack)
Big Data Architecture Workshop
This slide deck is about big data tools, technologies, and layers that can be used in enterprise solutions.
TopHPC Conference
2019
Introduction to Hive and HCatalog presentation by Mark Grover at NYC HUG. A video of this presentation is available at https://www.youtube.com/watch?v=JGwhfr4qw5s
This document summarizes how Solr and Lucidworks Fusion can be used for big data search and analytics. It discusses indexing strategies like using MapReduce, Spark, and Fusion connectors to index structured and unstructured data from HDFS. It also covers topics like Solr on HDFS, auto add replicas, security, cluster sizing, and using the lambda architecture with Spark streaming to enable real-time search over batch-processed historical data. The document promotes Lucidworks Fusion as a search platform that can handle massive scales of data, provide real-time search capabilities, and work with any data source securely.
Cortana Analytics Workshop: Azure Data Lake (MSAdvAnalytics)
Rajesh Dadhia. This session introduces the newest services in the Cortana Analytics family. Azure Data Lake is a hyper-scale data repository designed for big data analytics workloads. It provides a single place to store any type of data in its native format. In this session, we will show how the HDFS compatibility of Azure Data Lake as a Hadoop File System enables all Hadoop workloads including Azure HDInsight, Hortonworks and Cloudera. Further, we will focus on the key capabilities of the Azure Data Lake that make it an ideal choice for storing, accessing and sharing data for a wide range of analytics applications. Go to https://channel9.msdn.com/ to find the recording of this session.
Solr + Hadoop: Interactive Search for Hadoop
1. Solr + Hadoop: Interactive Search for Hadoop
Gregory Chanan (gchanan AT cloudera.com)
OC Big Data Meetup 07/16/14
2. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Security
• Conclusion
3. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Security
• Conclusion
4. Why Search?
• Hadoop for everyone
• Typical case:
• Ingest data to storage engine (HDFS, HBase, etc)
• Process data (MapReduce, Hive, Impala)
• Experts know MapReduce
• Savvy people know SQL
• Everyone knows Search!
5. Why Search?
An Integrated Part of the Hadoop System
• One pool of data
• One security framework
• One set of system resources
• One management interface
6. Benefits of Search
• Improved Big Data ROI
• An interactive experience without technical knowledge
• Faster time to insight
• Exploratory analysis, esp. unstructured data
• Broad range of indexing options to accommodate needs
• Cost efficiency
• Single scalable platform; no incremental investment
• No need for separate systems, storage
7. What is Cloudera Search?
• Full-text, interactive search with faceted navigation
• Apache Solr integrated with CDH
• Established, mature search with vibrant community
• In production environments for years
• Open Source
• 100% Apache, 100% Solr
• Standard Solr APIs
• Batch, near real-time, and on-demand indexing
• Available for CDH4 and CDH5
8. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Security
• Conclusion
9. Apache Hadoop
• Apache HDFS
• Distributed file system
• High reliability
• High throughput
• Apache MapReduce
• Parallel, distributed programming model
• Allows processing of large datasets
• Fault tolerant
10. Apache Lucene
• Full text search library
• Indexing
• Querying
• Traditional inverted index
• Batch and Incremental indexing
• We are using version 4.4 in the current release
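To ground the indexing and querying bullets, here is a minimal, self-contained sketch against the Lucene 4.4 API mentioned above. The in-memory RAMDirectory and the field name "body" are choices made for this example, not anything prescribed by the deck:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.store.RAMDirectory;
    import org.apache.lucene.util.Version;

    public class LuceneSketch {
      public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();  // in-memory index, just for the example
        IndexWriterConfig cfg =
            new IndexWriterConfig(Version.LUCENE_44, new StandardAnalyzer(Version.LUCENE_44));
        try (IndexWriter writer = new IndexWriter(dir, cfg)) {
          Document doc = new Document();
          doc.add(new TextField("body", "interactive search for hadoop", Field.Store.YES));
          writer.addDocument(doc);  // analyzed into the inverted index
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
          IndexSearcher searcher = new IndexSearcher(reader);
          ScoreDoc[] hits =
              searcher.search(new TermQuery(new Term("body", "hadoop")), 10).scoreDocs;
          System.out.println("hits: " + hits.length);  // prints "hits: 1"
        }
      }
    }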
11. Apache Solr
• Search service built using Lucene
• Ships with Lucene (same TLP at Apache)
• Provides XML/HTTP/JSON/Python/Ruby/… APIs
• Indexing
• Query
• Administrative interface
• Also rich web admin GUI via HTTP
12. Apache SolrCloud
• Provides distributed Search capability
• Part of Solr (not a separate library/codebase)
• Shards – provide scalability
• partition index for size
• replicate for query performance
• Uses ZooKeeper for coordination
• No split-brain issues
• Simplifies operations
13. SolrCloud Architecture
• Updates automatically sent to the correct shard
• Replicas handle queries, forward updates to the leader
• Leader indexes the document for the shard, and forwards the index notation to itself and any replicas.
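From the client's point of view, this routing is transparent. Here is a minimal sketch using SolrJ, Solr's standard Java client; the Solr 4.x-era CloudSolrServer class is ZooKeeper-aware, so it discovers cluster state and sends updates toward the correct shard leader as described above. The ZooKeeper quorum and collection name are placeholders:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrInputDocument;

    public class SolrCloudSketch {
      public static void main(String[] args) throws Exception {
        // Connect via the ZooKeeper quorum (placeholder), not to any single Solr node.
        CloudSolrServer solr = new CloudSolrServer("zk01:2181,zk02:2181,zk03:2181/solr");
        solr.setDefaultCollection("collection1");  // placeholder collection name

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("text", "interactive search for hadoop");
        solr.add(doc);   // client routes the update toward the correct shard leader
        solr.commit();   // make the document visible to searchers

        QueryResponse rsp = solr.query(new SolrQuery("text:hadoop"));
        System.out.println("found: " + rsp.getResults().getNumFound());
        solr.shutdown();
      }
    }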
15. Distributed Search on Hadoop
[Architecture diagram: Flume, the Hue UI, custom UIs, and custom apps send queries to a SolrCloud cluster of Solr servers; index updates flow in from MapReduce, HDFS, and HBase inside the Hadoop cluster, with ZooKeeper coordinating.]
16. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Indexing
• ETL - morphlines
• Querying
• Security
• Conclusion
17. Indexing
• Near Real Time (NRT)
• Flume
• HBase Indexer
• Batch
• MapReduceIndexerTool
• HBaseBatchIndexer
18. Near Real Time Indexing with Flume
Solr and Flume
• Data ingest at scale
• Flexible extraction and mapping
• Indexing at data ingest
[Diagram: log files and other sources feed Flume Agents, each with an embedded Indexer; the agents deliver events to HDFS and index them into Solr at ingest]
19. Apache Flume - MorphlineSolrSink
• A Flume Source…
• Receives/gathers events
• A Flume Channel…
• Carries the event – MemoryChannel or reliable FileChannel
• A Flume Sink…
• Sends the events on to the next location
• Flume MorphlineSolrSink
• Integrates Cloudera Morphlines library
• ETL, more on that in a bit
• Does batching
• Results sent to Solr for indexing
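A minimal flume.conf sketch of such a pipeline (agent, channel, and path names are invented; the sink class and its morphlineFile/batchSize settings follow the MorphlineSolrSink configuration):

agent.sources = logSource
agent.channels = fileChannel
agent.sinks = solrSink

# Source: gathers events (an exec/tail source shown for brevity)
agent.sources.logSource.type = exec
agent.sources.logSource.command = tail -F /var/log/app/app.log
agent.sources.logSource.channels = fileChannel

# Channel: reliable file-backed channel carrying the events
agent.channels.fileChannel.type = file

# Sink: runs the morphline for ETL, batches results, loads Solr
agent.sinks.solrSink.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
agent.sinks.solrSink.channel = fileChannel
agent.sinks.solrSink.morphlineFile = /etc/flume-ng/conf/morphline.conf
agent.sinks.solrSink.batchSize = 1000
agent.sinks.solrSink.batchDurationMillis = 1000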
20. Indexing
• Near Real Time (NRT)
• Flume
• HBase Indexer
• Batch
• MapReduceIndexerTool
• HBaseBatchIndexer
21. Near Real Time Indexing of Apache HBase
[Diagram: interactive load hits HBase, which persists to HDFS; HBase replication feeds the HBase Indexer(s), which index into a farm of Solr servers that serve Search]
HBase (planet-sized tabular data, immediate access & updates) + Search (fast & flexible information discovery) = big data management
22. Lily HBase Indexer
• Collaboration between NGData & Cloudera
• NGData are creators of the Lily data management platform
• Lily HBase Indexer
• Service which acts as an HBase replication listener
• HBase replication features, such as filtering, supported
• Replication updates trigger indexing of updates (rows)
• Integrates Cloudera Morphlines library for ETL of rows
• AL2 licensed on github https://github.com/ngdata
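The per-row ETL is itself a morphline; a sketch of the mapping step (the column and field names are invented; extractHBaseCells is the command the Lily HBase Indexer provides for turning HBase cells into record fields):

morphlines : [
  {
    id : hbaseMorphline
    importCommands : ["com.ngdata.**"]
    commands : [
      {
        extractHBaseCells {
          mappings : [
            {
              inputColumn : "info:title"   # HBase column family:qualifier to read
              outputField : "title"        # Solr field to populate
              type : string
              source : value
            }
          ]
        }
      }
    ]
  }
]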
23. Indexing
• Near Real Time (NRT)
• Flume
• HBase Indexer
• Batch
• MapReduceIndexerTool
• HBaseBatchIndexer
25. MapReduce Indexer
MapReduce Job with two parts
1) Scan HDFS for files to be indexed
• Much like Unix “find” – see HADOOP-8989
• Output is a file consumed via NLineInputFormat (one input path per line)
2) Mapper/Reducer indexing step
• Mapper extracts content via Cloudera Morphlines
• Reducer indexes documents via embedded Solr server
• Originally based on SOLR-1301
• Many modifications to enable linear scalability
26. MapReduce Indexer “golive”
• Cloudera created this to bridge the gap between NRT
(low latency, expensive) and Batch (high latency,
cheap at scale) indexing
• Results of MR indexing operation are immediately
merged into a live SolrCloud serving cluster
• No downtime for users
• No NRT expense
• Linear scale out to the size of your MR cluster
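A sketch of launching the tool through ToolRunner so the relevant flags are visible (paths, ZooKeeper quorum, and collection name are placeholders; the same flags apply to the usual "hadoop jar search-mr-*-job.jar" invocation):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.solr.hadoop.MapReduceIndexerTool;

public class BatchIndexJob {
  public static void main(String[] args) throws Exception {
    int rc = ToolRunner.run(new Configuration(), new MapReduceIndexerTool(), new String[] {
        "--morphline-file", "/etc/solr/morphline.conf", // ETL run in the mappers
        "--output-dir", "hdfs://nn:8020/tmp/outdir",    // where shard indexes land
        "--zk-host", "zk1:2181/solr",                   // locate the live SolrCloud cluster
        "--collection", "collection1",
        "--go-live",                                    // merge results into the live cluster
        "hdfs://nn:8020/user/foo/indir"                 // input files to scan and index
    });
    System.exit(rc);
  }
}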
27. Indexing
• Near Real Time (NRT)
• Flume
• HBase Indexer
• Batch
• MapReduceIndexerTool
• HBaseBatchIndexer
28. HBase + MapReduce
• Run MapReduce job over HBase tables
• Same architecture as running over HDFS
• Similar to HBase’s CopyTable
• Support for go-live
29. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Indexing
• ETL - morphlines
• Querying
• Security
• Conclusion
30. Cloudera Morphlines
• Open Source framework for simple ETL
• Simplify ETL
• Built-in commands and library support (Avro format, Hadoop
SequenceFiles, grok for syslog messages)
• Configuration over coding
• Standardize ETL
• Ships as part of Kite SDK, formerly Cloudera
Developer Kit (CDK)
• It’s a Java library
• AL2 licensed on github https://github.com/kite-sdk
31. Cloudera Morphlines Architecture
[Diagram: anything you want to index (logs, tweets, social media, HTML, images, PDF, text) passes through the Morphline Library, embedded in Flume, the MR Indexer, the HBase Indexer, or your own application, and lands in SolrCloud]
Morphlines can be embedded in any application…
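A sketch of that embedding through the Kite morphlines Java API (the file path and morphline id are placeholders; API names as published in the org.kitesdk.morphline packages):

import java.io.ByteArrayInputStream;
import java.io.File;
import org.kitesdk.morphline.api.Command;
import org.kitesdk.morphline.api.MorphlineContext;
import org.kitesdk.morphline.api.Record;
import org.kitesdk.morphline.base.Compiler;
import org.kitesdk.morphline.base.Fields;
import org.kitesdk.morphline.base.Notifications;

public class EmbeddedMorphline {
  public static void main(String[] args) {
    MorphlineContext context = new MorphlineContext.Builder().build();

    // Final child receives whatever records the last morphline command emits
    Command collector = new Command() {
      public void notify(Record notification) {}
      public boolean process(Record record) {
        System.out.println("emitted: " + record);
        return true;
      }
      public Command getParent() { return null; }
    };

    // Compile the config into an executable chain of commands
    Command morphline = new Compiler().compile(
        new File("/etc/flume-ng/conf/morphline.conf"), "morphline1", context, collector);

    // Feed one event through the pipeline (readLine consumes the attachment body stream)
    Record record = new Record();
    record.put(Fields.ATTACHMENT_BODY, new ByteArrayInputStream("hello world".getBytes()));
    Notifications.notifyStartSession(morphline);
    morphline.process(record);
  }
}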
32. Extraction and Mapping
• Modeled after Unix pipelines (records instead of lines)
• Simple and flexible data transformation
• Reusable across multiple index workloads
• Over time, extend and re-use across platform workloads
[Diagram: a syslog event enters a Flume Agent's Solr sink and flows through the morphline commands readLine, grok, and loadSolr, turning the event into records and finally a Solr document]
33. Morphline Example – syslog with grok
morphlines : [
  {
    id : morphline1
    importCommands : ["com.cloudera.**", "org.apache.solr.**"]
    commands : [
      { readLine {} }
      {
        grok {
          dictionaryFiles : [/tmp/grok-dictionaries]
          expressions : {
            message : """<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"""
          }
        }
      }
      { loadSolr {} }
    ]
  }
]
Example Input
<164>Feb 4 10:46:14 syslog sshd[607]: listening on 0.0.0.0 port 22
Output Record
syslog_pri:164
syslog_timestamp:Feb 4 10:46:14
syslog_hostname:syslog
syslog_program:sshd
syslog_pid:607
syslog_message:listening on 0.0.0.0 port 22
34. Current Command Library
• Integrate with and load into Apache Solr
• Flexible log file analysis
• Single-line record, multi-line records, CSV files
• Regex based pattern matching and extraction
• Integration with Avro
• Integration with Apache Hadoop Sequence Files
• Integration with SolrCell and all Apache Tika parsers
• Auto-detection of MIME types from binary data using
Apache Tika
35. Current Command Library (cont)
• Scripting support for dynamic java code
• Operations on fields for assignment and comparison
• Operations on fields with list and set semantics
• if-then-else conditionals
• A small rules engine (tryRules)
• String and timestamp conversions
• slf4j logging
• Yammer metrics and counters
• Decompression and unpacking of arbitrarily nested
container file formats
• Etc…
36. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Indexing
• ETL - morphlines
• Querying
• Security
• Conclusion
38. Simple, Customizable Search Interface
Hue
• Simple UI
• Navigated, faceted drill-down
• Customizable display
• Full-text search, standard Solr API and query language
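Behind such a UI, a faceted drill-down is just a Solr query. A SolrJ sketch (field names borrowed from the earlier syslog example; the URL is a placeholder):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("syslog_message:error"); // the full-text search box
    q.setFacet(true);
    q.addFacetField("syslog_program");        // counts that drive the drill-down panel
    q.addFilterQuery("syslog_hostname:web1"); // a facet value the user clicked
    QueryResponse rsp = solr.query(q);
    for (FacetField.Count c : rsp.getFacetField("syslog_program").getValues()) {
      System.out.println(c.getName() + ": " + c.getCount());
    }
  }
}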
39. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Security
• Conclusion
40. Security
• Upstream Solr doesn’t deal with security
• Cloudera Search supports Kerberos authentication
• Similar to Oozie / WebHDFS
• Collection-Level Authorization via Apache Sentry
• Document-Level Authorization via Apache Sentry
(new in CDH5.1)
41. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Indexing
• ETL - morphlines
• Querying
• Security
• Collection-Level Authorization
• Document-Level Authorization
• Conclusion
42. Collection-Level Authorization
• Sentry supports role-based granting of
privileges
• each role can be granted QUERY, UPDATE, and/or
administrative privileges on an index (collection)
• Privileges stored in a “policy file” on HDFS
43. Policy File
[groups]
# Assigns each Hadoop group to its set of roles
dev_ops = engineer_role, ops_role
[roles]
# Assigns each role to its set of privileges
engineer_role = collection=source_code->action=Query, collection=source_code->action=Update
ops_role = collection=hbase_logs->action=Query
44. Integrating Sentry and Solr
• Solr request handlers are specified per collection in solrconfig.xml (see the sketch below)
• A request to http://localhost:8983/solr/collection1/select is dispatched to an instance of solr.SearchHandler
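The mapping the slide refers to is a standard requestHandler entry in solrconfig.xml, roughly as follows (the defaults shown are illustrative):

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
  </lst>
</requestHandler>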
45. Sentry Request Handlers
• Sentry ships with its own version of solrconfig.xml with secure handlers, called solrconfig.xml.secure
• A SearchComponent implements the authorization check
• Update requests are handled in a similar way
46. Agenda
• Big Data and Search – setting the stage
• Cloudera Search Architecture
• Component Deep Dive
• Indexing
• ETL - morphlines
• Querying
• Security
• Collection-Level Authorization
• Document-Level Authorization
• Conclusion
47. Document-level authorization Motivation
• Index-level authorization useful when access control
requirements for documents are homogeneous
• Security requirements may require restricting access
to a subset of documents
48. Document-level authorization Motivation
• Consider “Confidential” and “Secret” documents. How can both be stored with only index-level authorization? Each sensitivity level needs its own collection, and every application must route queries accordingly
• This pushes complexity to the application. Doc-level authorization is designed to solve this problem
49. Document-level authorization model
• Instead of storing privileges in the HDFS policy file:
[groups]
# Assigns each Hadoop group to its set of roles
dev_ops = engineer_role, ops_role
[roles]
# Assigns each role to its set of privileges
engineer_role = collection=source_code->action=Query, collection=source_code->action=Update
ops_role = collection=hbase_logs->action=Query
• Store authorization tokens in each document
• There are many more documents than collections; storing document-level info in the policy file doesn't scale
• Solr's built-in filtering capabilities can then restrict access
50. Document-level authorization model
• A configurable token field stores the authorization tokens
• The authorization tokens are Sentry roles, e.g. “ops_role”:
[roles]
ops_role = collection=hbase_logs->action=Query
• Represents the roles that are allowed to view the
document. To view a document, the querying user must
belong to at least one role whose token is stored in the
token field
• Can modify document permissions without restarting
Solr
• Can modify role memberships without reindexing
51. Document-level authorization impl
• Intercepts the request via a SearchComponent
• The SearchComponent adds an “fq” (filter query) that filters out all documents lacking one of the user's roles (e.g. “role1” or “role2”) in the auth field (see the sketch below)
• Multiple “fq”s are intersected, so a malicious user can't evade the filter by injecting his own fq
• Filters are cached, so the construction cost is paid only once
• Note: does not supersede index-level authorization
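For a user whose Sentry roles are role1 and role2, the appended filter is conceptually equivalent to fq=sentry_auth:(role1 OR role2), with sentry_auth as the configured token field: only documents carrying at least one of the user's role tokens survive the filter.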
52. Document-level authorization config
• Configuration via solrconfig.xml.secure (per
collection):
<!-- Set to true to enable document-level authorization -->
<bool name="enabled">false</bool>
<!-- Field where the auth tokens are stored in the document -->
<str name="sentryAuthField">sentry_auth</str>
<!-- Auth token defined to allow any role to access the document.
Uncomment to enable. -->
<!--<str name="allRolesToken">*</str>-->
• For backwards compatibility, doc-level authorization is not enabled by default
• No tokens = no access. To allow all users to access a document, use the allRolesToken. Useful for getting started
53. Conclusion
• Cloudera Search
• Free Download
• Extensive documentation
• Send your questions and feedback to search-user@cloudera.org
• Take the Search online training
• Cloudera Manager Standard (i.e. the free version)
• Simple management of Search
• Free Download
• QuickStart VM also available!