Building Advanced Analytics Pipelines with Azure Databricks (Lace Lofranco)
Participants will get a deep dive into one of Azure’s newest offerings: Azure Databricks, a fast, easy and collaborative Apache® Spark™ based analytics platform optimized for Azure. In this session, we start with a technical overview of Spark and quickly jump into Azure Databricks’ key collaboration features, cluster management, and tight data integration with Azure data sources. Concepts are made concrete via a detailed walkthrough of an advanced analytics pipeline built using Spark and Azure Databricks.
Full video of the presentation: https://www.youtube.com/watch?v=14D9VzI152o
Presentation demo: https://github.com/devlace/azure-databricks-anomaly
Data Quality Patterns in the Cloud with Azure Data Factory (Mark Kromer)
This is my slide presentation from Pragmatic Works' Azure Data Week 2019: Data Quality Patterns in the Cloud with Azure Data Factory using Mapping Data Flows
Stream data processing is increasingly required to support business needs for faster actionable insight with growing volume of information from more sources. Apache Apex is a true stream processing framework for low-latency, high-throughput and reliable processing of complex analytics pipelines on clusters. Apex is designed for quick time-to-production, and is used in production by large companies for real-time and batch processing at scale.
This session will use an Apex production use case to walk through the incremental transition from a batch pipeline with hours of latency to an end-to-end streaming architecture processing billions of events per day to deliver real-time analytical reports. The example is representative of many similar extract-transform-load (ETL) use cases with other data sets that can use a common library of building blocks. The transform (or analytics) piece of such pipelines varies in complexity and often involves custom components specific to the business logic.
Topics include:
* Pipeline functionality from event source through queryable state for real-time insights.
* API for application development and development process.
* Library of building blocks including connectors for sources and sinks such as Kafka, JMS, Cassandra, HBase, JDBC and how they enable end-to-end exactly-once results.
* Stateful processing with event time windowing.
* Fault tolerance with exactly-once result semantics, checkpointing, and incremental recovery.
* Scalability and low-latency, high-throughput processing with advanced engine features for auto-scaling, dynamic changes, compute locality.
* Who is using Apex in production, and roadmap.
Following the session, attendees will have a high-level understanding of Apex and how it can be applied to use cases at their own organizations.
Azure Databricks is a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure. Designed in collaboration with the founders of Apache Spark, Azure Databricks combines the best of Databricks and Azure to help customers accelerate innovation with one-click set up, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts. As an Azure service, customers automatically benefit from the native integration with other Azure services such as Power BI, SQL Data Warehouse, and Cosmos DB, as well as from enterprise-grade Azure security, including Active Directory integration, compliance, and enterprise-grade SLAs.
ETL Made Easy with Azure Data Factory and Azure Databricks (Databricks)
Data Engineers are responsible for data cleansing, prepping, aggregating, and loading analytical data stores, which is often difficult and time-consuming. Azure Data Factory makes this work easy and expedites solution development. We’ll demonstrate how Azure Data Factory can enable a new UI-driven ETL design paradigm on top of Azure Databricks for building scaled-out data transformation pipelines.
Designing the Next Generation of Data Pipelines at Zillow with Apache Spark (Databricks)
The trade-off between development speed and pipeline maintainability is a constant for data engineers, especially for those in a rapidly evolving organization.
Open Source Big Data Ingestion - Without the Heartburn! (Pat Patterson)
Big Data tools such as Hadoop and Spark allow you to process data at unprecedented scale, but keeping your processing engine fed can be a challenge. Upstream data sources can 'drift' due to infrastructure, OS and application changes, causing ETL tools and hand-coded solutions to fail, inducing heartburn in even the most resilient data scientist. This session will survey the big data ingestion landscape, focusing on how open source tools such as Sqoop, Flume, Nifi and StreamSets can keep the data pipeline flowing.
Data Discovery at Databricks with Amundsen (Databricks)
Databricks used to rely on a static, manually maintained wiki page for internal data exploration. We will discuss how we leverage Amundsen, an open-source data discovery tool from Linux Foundation AI & Data, to improve productivity and trust by programmatically surfacing the most relevant datasets and SQL analytics dashboards, along with their important information, internally at Databricks.
We will also talk about how we integrate Amundsen with Databricks’ world-class infrastructure to surface metadata, including:
• The most popular tables used within Databricks
• Fuzzy search and facet search for datasets
• Rich metadata on datasets:
  • Lineage information (downstream tables, upstream tables, downstream jobs, downstream users)
  • Dataset owner
  • Dataset frequent users
  • Delta extended metadata (e.g. change history)
  • The ETL job that generates the dataset
  • Column stats on numeric-type columns
  • Dashboards that use the given dataset
• Sample data shown via the Databricks data tab
• Metadata on dashboards, including create time, last update time, tables used, etc.
Last but not least, we will discuss how we incorporate internal user feedback and provide the same discovery productivity improvements for Databricks customers in the future.
Hyperspace is a recently open-sourced (https://github.com/microsoft/hyperspace) indexing sub-system from Microsoft. The key idea behind Hyperspace is simple: Users specify the indexes they want to build. Hyperspace builds these indexes using Apache Spark, and maintains metadata in its write-ahead log that is stored in the data lake. At runtime, Hyperspace automatically selects the best index to use for a given query without requiring users to rewrite their queries. Since Hyperspace was introduced, one of the most popular asks from the Spark community was indexing support for Delta Lake. In this talk, we present our experiences in designing and implementing Hyperspace support for Delta Lake and how it can be used for accelerating queries over Delta tables. We will cover the necessary foundations behind Delta Lake’s transaction log design and how Hyperspace enables indexing support that seamlessly works with the former’s time travel queries.
Building Data Pipelines with Spark and StreamSets (Pat Patterson)
Big data tools such as Hadoop and Spark allow you to process data at unprecedented scale, but keeping your processing engine fed can be a challenge. Metadata in upstream sources can ‘drift’ due to infrastructure, OS and application changes, causing ETL tools and hand-coded solutions to fail. StreamSets Data Collector (SDC) is an Apache 2.0 licensed open source platform for building big data ingest pipelines that allows you to design, execute and monitor robust data flows. In this session we’ll look at how SDC’s “intent-driven” approach keeps the data flowing, with a particular focus on clustered deployment with Spark and other exciting Spark integrations in the works.
New Developments in the Open Source Ecosystem: Apache Spark 3.0, Delta Lake, ... (Databricks)
In this talk, we will highlight major efforts happening in the Spark ecosystem. In particular, we will dive into the details of adaptive and static query optimizations in Spark 3.0 to make Spark easier to use and faster to run. We will also demonstrate how new features in Koalas, an open source library that provides a Pandas-like API on top of Spark, help data scientists gain insights from their data more quickly.
Delta from a Data Engineer's Perspective (Databricks)
Take a walk through the daily struggles of a data engineer in this presentation as we cover what is truly needed to create robust end-to-end Big Data solutions.
Building Data Intensive Analytic Application on Top of Delta Lakes (Databricks)
Why build your own analytics application on top of Delta Lake:
• Every enterprise is building a data lake. However, these data lakes are plagued by low user adoption and poor data quality, and result in lower ROI.
• BI tools may not be enough for your use case, especially when you want to build a data-driven analytical web application such as Paysa.
• Delta’s ACID guarantees allow you to build a real-time reporting app that displays consistent and reliable data.
In this talk we will learn:
• How to build your own analytics app on top of Delta Lake
• How Delta Lake helps you build a pristine data lake, with several ways to expose data to end users
• How an analytics web application can be backed by a custom query layer that executes Spark SQL in a remote Databricks cluster
• Various options for building an analytics application using various backend technologies
• Architecture patterns/components/frameworks that can be used to build a custom analytics platform in no time
• How to leverage machine learning to build advanced analytics applications
Demo: an analytics application built on the Play Framework (back end) and React (front end), with Structured Streaming ingesting data from a Delta table; live query analytics on real-time data; ML predictions based on analytics data.
New Performance Benchmarks: Apache Impala (incubating) Leads Traditional Anal... (Cloudera, Inc.)
Recording Link: http://bit.ly/LSImpala
Author: Greg Rahn, Cloudera Director of Product Management
In this session, we'll review the recent set of benchmark tests the Apache Impala (incubating) performance team completed that compare Apache Impala to a traditional analytic database (Greenplum), as well as to other SQL-on-Hadoop engines (Hive LLAP, Spark SQL, and Presto). We'll go over the methodology and results, and we'll also discuss some of the performance features and best practices that make this performance possible in Impala. Lastly, we'll look at some recent advancements in Impala over the past few releases.
Performance Optimizations in Apache Impala (Cloudera, Inc.)
Apache Impala is a modern, open-source MPP SQL engine architected from the ground up for the Hadoop data processing environment. Impala provides low latency and high concurrency for BI/analytic read-mostly queries on Hadoop, not delivered by batch frameworks such as Hive or Spark. Impala is written from the ground up in C++ and Java. It maintains Hadoop’s flexibility by utilizing standard components (HDFS, HBase, Metastore, Sentry) and is able to read the majority of the widely used file formats (e.g. Parquet, Avro, RCFile).
To reduce latency, such as that incurred from utilizing MapReduce or by reading data remotely, Impala implements a distributed architecture based on daemon processes that are responsible for all aspects of query execution and that run on the same machines as the rest of the Hadoop infrastructure. Impala employs runtime code generation using LLVM in order to improve execution times and uses static and dynamic partition pruning to significantly reduce the amount of data accessed. The result is performance that is on par or exceeds that of commercial MPP analytic DBMSs, depending on the particular workload. Although initially designed for running on-premises against HDFS-stored data, Impala can also run on public clouds and access data stored in various storage engines such as object stores (e.g. AWS S3), Apache Kudu and HBase. In this talk, we present Impala's architecture in detail and discuss the integration with different storage engines and the cloud.
This is a 200-level run-through of the Microsoft Azure Big Data Analytics for the Cloud data platform, based on the Cortana Intelligence Suite offerings.
JethroData Meetup: Index-Based SQL on Hadoop - Oct 2014 (Eli Singer)
JethroData: an index-based SQL-on-Hadoop engine.
An architecture comparison of MPP/full-scan SQL engines such as Impala and Hive against index-based access such as Jethro.
SQL and NoSQL NYC meetup, Oct 20, 2014.
Boaz Raufman
ADV Slides: When and How Data Lakes Fit into a Modern Data Architecture (DATAVERSITY)
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Build the data lake, but avoid building the data swamp! The tool ecosystem is building up around the data lake, and soon many organizations will have a robust lake and data warehouse. We will discuss policy to keep them straight, send data to its best platform, and keep users’ confidence up in their data platforms.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
Using Perforce Data in Development at Tableau (Perforce)
Data plays a big role at Tableau—not just for our customers, but also throughout our company. Using our own products is not only one of our fundamental company values, but the analysis and discoveries we make are important to track as they shape our development processes and influence our day-to-day decisions. In this talk, we present and analyze a variety of data visualizations based on Perforce data from our development organization and share how it has influenced our infrastructure and development practices.
Not Your Father’s Data Warehouse: Breaking Tradition with Innovation (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Teradata
Live Webcast on May 20, 2014
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=f09e84f88e4ca6e0a9179c9a9e930b82
Traditional data warehouses have been the backbone of corporate decision making for over three decades. With the emergence of Big Data and popular technologies like open-source Apache™ Hadoop®, some analysts question the lifespan of the data warehouse and the future role it will play in enterprise information management. But it’s not practical to believe that emerging technologies provide a wholesale replacement of existing technologies and corporate investments in data management. Rather, a better approach is for new innovations and technologies to complement and build upon existing solutions.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor as he explains where tomorrow’s data warehouse fits in the information landscape. He’ll be briefed by Imad Birouty of Teradata, who will highlight the ways in which his company is evolving to meet the challenges presented by different types of data and applications. He will also tout Teradata’s recently-announced Teradata® Database 15 and Teradata® QueryGrid™, an analytics platform that enables data processing across the enterprise.
Visit InsideAnalysis.com for more information.
Building a Pluggable Analytics Stack with Cassandra (Jim Peregord, Element Co...) (DataStax)
Element Fleet has the largest benchmark database in our industry and we needed a robust and linearly scalable platform to turn this data into actionable insights for our customers. The platform needed to support advanced analytics, streaming data sets, and traditional business intelligence use cases.
In this presentation, we will discuss how we built a single, unified platform for both Advanced Analytics and traditional Business Intelligence using Cassandra on DSE. With Cassandra as our foundation, we are able to plug in the appropriate technology to meet varied use cases. The platform we’ve built supports real-time streaming (Spark Streaming/Kafka), batch and streaming analytics (PySpark, Spark Streaming), and traditional BI/data warehousing (C*/FiloDB). In this talk, we are going to explore the entire tech stack and the challenges we faced trying to support the above use cases. We will specifically discuss how we ingest and analyze IoT (vehicle telematics) data in real time and in batch, combine data from multiple data sources into a single data model, and support standardized and ad-hoc reporting requirements.
About the Speaker
Jim Peregord, Vice President - Analytics, Business Intelligence, Data Management, Element Corp.
Cassandra Summit 2014: Internet of Complex Things Analytics with Apache Cassa... (DataStax Academy)
Speaker: Mohammed Guller, Application Architect & Lead Developer at Glassbeam.
Learn how Cassandra can be used to build a multi-tenant solution for analyzing operational data from Internet of Complex Things (IoCT). IoCT includes complex systems such as computing, storage, networking and medical devices. In this session, we will discuss why Glassbeam migrated from a traditional RDBMS-based architecture to a Cassandra-based architecture. We will discuss the challenges with our first-generation architecture and how Cassandra helped us overcome those challenges. In addition, we will share our next-gen architecture and lessons learned.
Apache CarbonData+Spark to realize data convergence and Unified high performa... (Tech Triveni)
Challenges in Data Analytics:
Different application scenarios need different storage solutions: HBase is ideal for point-query scenarios but unsuitable for multi-dimensional queries. MPP is suitable for data warehouse scenarios, but engine and data are coupled together, which hampers scalability. OLAP stores used in BI applications perform best for aggregate queries, but full-scan queries perform sub-optimally; moreover, they are not suitable for real-time analysis. These distinct systems lead to low resource sharing and need different pipelines for data and application management.
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc... (Databricks)
Spark SQL is a highly scalable and efficient relational processing engine with easy-to-use APIs and mid-query fault tolerance. It is a core module of Apache Spark. Spark SQL can process, integrate and analyze data from diverse data sources (e.g., Hive, Cassandra, Kafka and Oracle) and file formats (e.g., Parquet, ORC, CSV, and JSON). This talk will dive into the technical details of Spark SQL, spanning the entire lifecycle of a query execution. The audience will get a deeper understanding of Spark SQL and understand how to tune Spark SQL performance.
DAMA & Denodo Webinar: Modernizing Data Architecture Using Data Virtualization (Denodo)
Watch here: https://bit.ly/2NGQD7R
In an era increasingly dominated by advancements in cloud computing, AI, and advanced analytics, it may come as a shock that many organizations still rely on data architectures built before the turn of the century. But that scenario is rapidly changing with the increasing adoption of real-time data virtualization, a paradigm shift in the approach that organizations take towards accessing, integrating, and provisioning data required to meet business goals.
As data analytics and data-driven intelligence take centre stage in today’s digital economy, logical data integration across the widest variety of data sources, with a proper security and governance structure in place, has become mission-critical.
Attend this session to learn:
- How you can meet cloud and data science challenges with data virtualization
- Why data virtualization is increasingly finding enterprise-wide adoption
- How customers are reducing costs and improving ROI with data virtualization
2. Machine Data
• Logs
• Diagnostic Bundles
• Utility Data
• Machine Monitoring Data
• User Activity
Machine/Streaming Data
• Machine data is a critical piece with the highest volume and is fast moving
• Systems are hardest to build and scale for it
• The rest of the data falls naturally into the design
Examples:
• In a hospital, various readings are taken: heart beat, blood pressure, breathing rate
• Water companies measure the acidity of the water in their reservoirs
• Racing cars: companies want to know every aspect of how their car is performing
• Utility meters
• Web servers
• Linux/firewall/router: syslogs
3. What do you want to do with Machine Data?
Store, Search, NRT Analytics, Stream, Parse, Mix with other data, Time Series, Alerts, Reports/ML, Build Cool Features
4. So what’s the CHALLENGE?
Huge and fast moving: 1 TB x 1,000 = 1 PB. The whole story changes with scale.
8. Storing/Indexing data into the right Stores
• Impala/HDFS/Hive (All Data): everything goes in here and is accessible with SQL queries; supports very high write throughputs and very fast scans
• SolrCloud (Last X Days): data needed for real-time complex search
• HBase (Result Sets, Configuration): data needed for real-time serving and for searching/scanning based on key; supports very high write/read throughputs and filtering; the row key is indexed and sorted, limiting scans over huge sets
• OpenTSDB (Metrics): time-series data for monitoring/alerting
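A minimal Python sketch of this routing idea; the store names, the seven-day Solr window, and the function signature are illustrative assumptions, not the system's actual code:

from datetime import datetime, timedelta

SOLR_WINDOW_DAYS = 7  # the "Last X Days" kept in SolrCloud

def pick_store(query_start, needs_full_text, key_lookup):
    # Choose a backing store for a request, following the table above.
    if key_lookup:
        return "hbase"    # point reads / result sets by row key
    recent = query_start >= datetime.utcnow() - timedelta(days=SOLR_WINDOW_DAYS)
    if needs_full_text and recent:
        return "solr"     # real-time complex search over the hot window
    return "impala"       # SQL scans over the full history

print(pick_store(datetime.utcnow() - timedelta(days=2), True, False))   # solr
print(pick_store(datetime.utcnow() - timedelta(days=90), True, False))  # impala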
9. Searching Solr
Searches are done via HTTP GET on the select URL with the query string in the q parameter.
• q=video&fl=name,id (return only name and id fields)
• q=video&fl=name,id,score (return relevancy score as well)
• q=video&fl=*,score (return all stored fields, as well as relevancy score)
• q=video&sort=price desc&fl=name,id,price (add sort specification: sort by price descending)
• q=video&wt=json (return response in JSON format)
Use the "sort" parameter to specify "field direction" pairs, separated by commas if there's more than one sort field:
• q=video&sort=price desc
• q=video&sort=price asc
• q=video&sort=inStock asc, price desc
• "score" can also be used as a field name when specifying a sort
10. Searching Solr – Faceted Search
Faceted search allows users who’re running searches to see a high-level breakdown of their search results based upon one or more aspects (facets) of their documents, allowing them to select filters to drill into those search results.
http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=tags
11. Impala
• Raises the bar for query performance. Does extensive query optimization.
• Uses the same metadata, SQL syntax (Hive SQL), ODBC driver and user interface (Hue Beeswax) as Apache Hive
• Impala circumvents MapReduce to directly access the data through a specialized distributed query engine
• Queries that require multiple MapReduce phases in Hive or require reduce-side joins will see a higher speedup than, say, simple single-table aggregation queries
12. Searching with Impala
• Perfect for large data scans
• Partitioning will reduce the amount of data scanned
• Impala caches the metadata
• Define SQL statements for searching from Impala/Hive; use regex for defining new fields at search time
Partitions
• Partition by day: 365
• Partition by hour: 8,760
• Partition by minute: 525,600
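As an illustration, a partition-pruned search could be issued from Python with the impyla client; the host, the logs table, and its day-partition column ds are hypothetical:

from impala.dbapi import connect

conn = connect(host="impalad-host", port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT ts, host, message
    FROM logs
    WHERE ds BETWEEN '2014-01-01' AND '2014-01-07'   -- prunes to 7 day-partitions
      AND message RLIKE 'ERROR|Exception'            -- regex field match at search time
    LIMIT 100
""")
for row in cur.fetchall():
    print(row)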
13. Impala
[Architecture diagram: three worker nodes, each co-locating an HDFS DataNode and HBase with an Impala daemon made up of a Query Planner, Query Coordinator and Query Exec Engine; fully MPP and distributed, with local direct reads; an ODBC SQL app connects through the common Hive SQL interface; unified metadata and scheduling via the HDFS NameNode, Hive Metastore, YARN and State Store]
14. Store/Search
[Diagram: Impala tables partitioned by hour and by day sit alongside per-day SolrCloud collections; a read alias spans the collections being queried, while an update alias points at the collection currently being written]
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=readalias&collection=C2,C3
More Optimizations = Faster Performance
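The alias rotation can be automated against the Collections API call shown above; in this Python sketch the daily collection naming scheme (logs_YYYYMMDD) and the seven-day read window are assumptions:

import requests
from datetime import datetime, timedelta

SOLR = "http://localhost:8983/solr/admin/collections"
today = datetime.utcnow().date()
daily = [f"logs_{(today - timedelta(days=d)):%Y%m%d}" for d in range(7)]

# Point the read alias at the last 7 daily collections...
requests.get(SOLR, params={"action": "CREATEALIAS", "name": "readalias",
                           "collection": ",".join(daily)})
# ...and the update alias at today's collection only.
requests.get(SOLR, params={"action": "CREATEALIAS", "name": "updatealias",
                           "collection": daily[0]})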
16. Unified Data Access
[Diagram: app servers call an Intelligent Search Server over REST/JSON/JDBC/Thrift; it fans out over Pipes/SQL to Impala, SolrCloud, HiveServer2, HBase and OpenTSDB, backed by a Metastore and a store of intermediate results]
The Intelligent Search Server knows:
• What data is residing where
• How to query the various stores
It also stores intermediate results and learns from queries.
17. Builder/Admin
Performs all the background management & admin tasks:
• Create new DataSets
• Manage schemas
• Manage collections: store last X days of data in Solr; use aliases to map to collections/day
• Regenerate the Solr index when needed or requested by the Admin
• Manage the Impala partitions: last X days vs. last Y months vs. last Z years
18. Intelligent Search Server
• Parse query requests
• Get the DataSet definition from the metadata store
• Generate the query plan
  • Should I fetch from SolrCloud/Impala?
  • Is there an intermediate result stored that I can use?
  • If not a power user, would this query be very long running?
• Execute the query
• Store results in HBase if applicable
• Support expressions, aggregate functions and normal functions
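The planning questions above could look roughly like this in Python; the stores, the cache, and the one-year cutoff for non-power users are stand-ins, not the server's actual rules:

from datetime import datetime, timedelta

SOLR_WINDOW = timedelta(days=7)
result_cache = {}  # (dataset, query, start, end) -> stored intermediate result

def plan_query(dataset, query, start, end, power_user=False):
    key = (dataset, query, start, end)
    if key in result_cache:                       # reuse intermediate results
        return ("cache", result_cache[key])
    if datetime.utcnow() - start <= SOLR_WINDOW:  # hot data lives in SolrCloud
        return ("solr", query)
    if not power_user and (end - start) > timedelta(days=365):
        raise ValueError("query would be very long running; power users only")
    return ("impala", query)                      # cold data scanned with SQL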
20. Searching/Querying
• SLA requirements
  • Be able to search last X days of logs within seconds
  • Be able to search last Y weeks of logs within minutes
  • Be able to search last Z months of logs within 15 minutes
• Searching consists of:
  • Specifying a dataset and time period
  • Searching for a regular expression in the dataset
  • Displaying the matching records
  • Displaying the count of keywords in various facets (hostname, city, IP)
  • Further filtering by various facet selections
  • Allowing users to define new fields
21. Sample Queries
How many times did someone view a page on the website?
dataset=logs1 method=GET | stats count AS Views
How many resulted in purchases?
dataset=logs1 method=GET | stats count AS Views, count(eval(action="purchase")) AS Purchases
What was purchased and how much was made?
dataset=logs1 * action=purchase | stats count AS "# Purchased", values(price) AS Price, sum(price) AS Total by product_name
Which items were purchased most?
dataset=logs1 action=purchase | top category_id
::You can also save your searches under some label::
22. Drill Downs, More Queries & Subsearch
Click on ‘Tablets’ from the Top Purchases. This kicks off a new search; the search is updated to include a filter for the field/value pair category_id=tablets.
How many different customers purchased tablets?
dataset=logs1 action=purchase category_id=tablets | stats uniquecount(clientip)
How many tablets did each customer buy?
dataset=logs1 action=purchase category_id=tablets | stats count BY clientip
The customer who bought the most items yesterday, and what he or she bought?
dataset=logs1 action=purchase [search dataset=logs1 action=purchase | top limit=1 clientip | table clientip] | stats count, values(product_id) by clientip
23. Querying with SQL & Pipes
Top 25: businesses with the most reviews
SELECT name, review_count FROM business
ORDER BY review_count DESC LIMIT 25 | chart ...
Top 25: coolest restaurants
SELECT r.business_id, name, SUM(cool) AS coolness
FROM review r JOIN business b
  ON (r.business_id = b.business_id)
WHERE categories LIKE '%Restaurants%'
GROUP BY r.business_id, name
ORDER BY coolness DESC
LIMIT 25
24. Index Fields
• Index time
  • Solr: will slow down with more indexes
  • Impala: relies on partitioning, bucketing and filtering
  • Define additional indexed fields through the Builder
• Search-time field extraction
  • Does not affect the index size
  • Size of data processed gets larger
  • Storing of results helps
25. Adding New Fields & Updating the Index
Adding a new field
• Update the Morphlines to parse the new field
• Update the Solr schema
• Update the Impala/Hive table definitions
Indexing options
• Re-index all data on HDFS
  • Also used when, say, an index is lost
  • Can also be run on the data in HBase
• Support new fields for new data only
26. Speeding Up Search…
• Save the search results (HBase)
• Search results can be shared
• Searches are sped up by saving the previous result and then running an incremental search
27. Dashboards & Charts
• Create new dashboards and populate them
• Add a search you have just run to a new or existing dashboard
Chart of purchases and views for each product:
dataset=ecommerce method=GET | chart count AS views, count(eval(action="purchase")) AS purchases by category_id
• Top Items Sold
• Total Number of Exceptions
• Total Number of Visits
• Map of Visitor Locations
• Pages/Visit
28. Define the Schema for Incoming Data
• Log data comes in different formats: Apache logs, syslog, log4j, etc.
• Define the fields to be extracted from each: Timestamp, IP, Host, Message…
• Define the Solr schema
  • Can create separate collections for different datasets and time ranges
• Define tables for Impala & Hive
  • Partition things by date; if needed, partition some stuff by hour
  • Impala performs great on partitioned data for NRT queries
• Define the schema for HBase tables (need to optimize for writes and for reads)
  • Composite key: DataSet, Application, Component, Some Prefix, Timestamp
  • Application, User ID, Timestamp
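A Python sketch of building the composite HBase row key described above; the field order follows the slide, while the reversed-timestamp trick (so the newest rows sort first) is an assumption about the design:

import struct, time

MAX_LONG = 0x7FFFFFFFFFFFFFFF

def row_key(dataset, app, component, prefix, ts_millis):
    reversed_ts = MAX_LONG - ts_millis  # newest-first within a key prefix
    parts = "\x00".join([dataset, app, component, prefix]).encode()
    return parts + b"\x00" + struct.pack(">q", reversed_ts)

key = row_key("logs1", "webapp", "nginx", "host42", int(time.time() * 1000))
print(key.hex())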
32. Streaming data into HDFS/HBase/SolrCloud
[Diagram: a Flume source feeds channels whose sinks write to HDFS (queried via Hive/Impala) and HBase, and a SolrSink indexes into SolrCloud’s Solr shards; Storm sits in the stream-processing path]
33. Parsing with Morphlines
A Morphline performs ETL from various processes (Flume, MapReduceIndexerTool, your own application, …) into various stores (SolrCloud, HBase, HDFS, …), and can be embedded into any application.
34. Parsing
morphlines : [
  {
    id : morphline1
    commands : [
      { readMultiLine { regex : "^\\s+" } }          # reassemble multi-line events
      # ... break up the message text into Solr fields ...
      { generateUUID { field : id } }                # generate a unique ID
      { convertTimestamp { field : timestamp, outputFormat : "yyyy-MM-dd'T'HH:mm:ss.SSSZ" } }
      { sanitizeUnknownSolrFields { solrLocator : ${SOLR_LOCATOR} } }  # drop fields unknown to Solr schema.xml
      { loadSolr { solrLocator : ${SOLR_LOCATOR} } } # load the record into a SolrServer
    ]
  }
]
http://cloudera.github.io/cdk/docs/current/cdk-morphlines/morphlinesReferenceGuide.html
https://github.com/cloudera/cdk/tree/master/cdk-morphlines/cdk-morphlines-core/src/test/resources/test-morphlines
37. Real Time Indexing into Solr
agent.sinks.solrSink.type=org.apache.flume.sink.solr.morphline.MorphlineSolrSink
agent.sinks.solrSink.channel=solrChannel
agent.sinks.solrSink.morphlineFile=/tmp/morphline.conf
The morphline file, which encodes the transformation logic, is exactly identical in both the Real Time and Batch Indexing examples.
38. Batch Indexing with MapReduceIndexerTool
[Diagram: mappers parse the input into indexable documents; reducers index the documents per shard, producing index shard 1 and index shard 2]
A scalable way to create the indexes on HDFS
39. Batch Indexing with MapReduceIndexerTool
MR Indexer
• Read the HDFS directory
• Pass the files through the Morphline
• Merge the indexes into live Solr servers
# hadoop jar /usr/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool --morphline-file <morphline file> --output-dir <hdfs URI for indexes> --go-live --zk-host clust2:2181/solr --collection logs_collection <HDFS URI with the files to index>
MapReduceIndexerTool is a MapReduce batch job driver that creates a set of Solr index shards from a set of input files and writes the indexes into HDFS
42. NRT Use Cases
• Analytics/Aggregations
  • Total number of page-views of a URL in a given time period
  • Reach: number of unique people exposed to a URL
  • Generate analytic metrics like Sum, Distinct, Count, Top K, etc.
• Alert when the number of HTTP Error 500s in the last 60 sec > 2 (see the sketch after this list)
• Get real-time state information about infrastructure and services
• Understand outages or how complex systems interact together
• Real-time intrusion detection
• Measure SLAs (availability, latency, etc.)
• Tune applications and databases for maximum performance
• Do capacity planning
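The HTTP 500 alert rule above reduces to a sliding-window count; a minimal Python sketch, where the event feed is a stand-in for whatever Flume delivers:

from collections import deque
import time

WINDOW_SECS, THRESHOLD = 60, 2
errors = deque()  # timestamps of recent HTTP 500 events

def on_event(status_code, now=None):
    # Record an event; return True when the alert should fire.
    now = now or time.time()
    if status_code == 500:
        errors.append(now)
    while errors and errors[0] < now - WINDOW_SECS:
        errors.popleft()  # drop events outside the 60-second window
    return len(errors) > THRESHOLD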
43. Monitoring/Alerting Use Cases
• Counting: real-time counting analytics such as how many requests per day, how many sign-ups, how many purchases, etc.
• Correlation: near-real-time analytics such as desktop vs. mobile users, which devices fail at the same time, etc.
• Research: more in-depth analytics that run in batch mode on the historical data, such as detecting sentiments, etc.
44. NRT Alerts & Aggregations Implementation
• Rule-based alerts in Flume
• Aggregations in Flume/HBase
• Time-series data in HBase/OpenTSDB
HBase
• Counters avoid the need to lock a row, read the value, increment it, write it back, and eventually unlock the row:
hbase(main):001:0> create 'counters', 'daily', 'weekly', 'monthly'
0 row(s) in 1.1930 seconds
hbase(main):002:0> incr 'counters', '20110101', 'daily:hits', 1
COUNTER VALUE = 1
hbase(main):003:0> incr 'counters', '20110101', 'daily:hits', 1
COUNTER VALUE = 2
hbase(main):004:0> get_counter 'counters', '20110101', 'daily:hits'
COUNTER VALUE = 2
Increment increment1 = new Increment(Bytes.toBytes("20110101"));
increment1.addColumn(Bytes.toBytes("daily"), Bytes.toBytes("clicks"), 1);
increment1.addColumn(Bytes.toBytes("daily"), Bytes.toBytes("hits"), 1);
increment1.addColumn(Bytes.toBytes("weekly"), Bytes.toBytes("clicks"), 10);
increment1.addColumn(Bytes.toBytes("weekly"), Bytes.toBytes("hits"), 10);
Result result1 = table.increment(increment1);
Flume Interceptors
• Allow the alerting logic to live in one place, in Flume
• You write your code against simple interfaces in Flume
• Backed up by HBase
• Easy to define rules here
45. (NRT + Batch) Analytics
• Batch workflow
  • Does incremental computes of the data and loads the result into, say, HBase
  • Is too slow for the needs in many cases; the views are also out of date
• Compensating for the last few hours of data is done in Flume
• Applications query both the real-time view and the batch view and merge the results (a minimal sketch follows)
[Diagram: web servers feed both an NRT path and a batch path]
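A minimal sketch of that merge step in Python, assuming both views expose per-metric counts; the dict-backed views are stand-ins for HBase-served tables:

batch_view = {"page_views": 1_000_000}  # computed incrementally; hours old
realtime_view = {"page_views": 4_321}   # compensates for the last few hours

def merged(metric):
    # Serve queries from batch view + real-time view and merge the results.
    return batch_view.get(metric, 0) + realtime_view.get(metric, 0)

print(merged("page_views"))  # 1004321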
46. Alerts
[Architecture diagram: a storage/processing layer with batch index, interactive SQL, interactive search, batch analytics, stream/NRT analytics, parse, time series/metrics, and a K/V store, tied together by the Intelligent Search layer, a MetaStore, and the Builder/Admin, with alerts emitted from the stream and NRT analytics paths]
47. Time Series with OpenTSDB
A time series is a series of data points of some particular metric over time.
• OpenTSDB is a time-series database
• It is also a data-plotting system
• Runs on HBase
• Each TSD can handle 2000+ new data points per sec per core
48. Interacting with OpenTSDB
put proc.loadavg.1m 1288946927 0.36 host=foo
put proc.loadavg.5m 1288946927 0.62 host=foo
put proc.loadavg.1m 1288946942 0.43 host=foo
put proc.loadavg.5m 1288946942 0.62 host=foo
You can communicate with the TSD via a simple telnet-style protocol, and via HTTP.
In OpenTSDB, a data point is made of:
• A metric name (http.hits)
• A UNIX timestamp
• A value (64-bit integer or double-precision floating point value)
• A set of tags (key-value pairs) that annotate this data point, e.g. hostname, customer (to distinguish all the places where a metric exists)
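The telnet-style put protocol shown above is easy to drive from Python; the TSD hostname is an assumption, and the port is OpenTSDB's default 4242:

import socket, time

def send_metric(metric, value, tags):
    tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
    line = f"put {metric} {int(time.time())} {value} {tag_str}\n"
    with socket.create_connection(("tsd-host", 4242)) as sock:
        sock.sendall(line.encode())

send_metric("proc.loadavg.1m", 0.36, {"host": "foo"})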
54. CSI - Cloudera Support Interface
• Components used: HBase, Solr, Impala, MR
• Features: enables searching & analytics for data from different sources in a single UI
• Data collected:
  • Customer diagnostics
  • Hadoop daemon logs
  • Hadoop daemon configurations
  • Host hardware info
  • Host OS settings and configurations
  • Support cases, public Apache JIRAs, public mailing lists, and Salesforce account data
55. CSI – Log Visualization within Customer Dashboard
58. Queries Supported
• What are the most commonly encountered errors?
• How many IOExceptions have we recorded from datanodes in a certain month?
• What is the distribution of workloads across Impala, Apache Hive, and HBase?
• Which OS versions are most commonly used?
• What are the mean and variance of hardware configurations?
• How many types of hardware configuration are there at a single customer site?
• Does anyone use a specific parameter that we want to deprecate?
59. Summary
The beauty of this system is that it solves a number of core and difficult use cases. It is also open to integrating with various other systems, making the overall solution much better.
• Scalability
• Flexibility in building new features & products
• Low cost of ownership
• Ease of managing Big Data