This document provides an overview of Hortonworks DataFlow, which is powered by Apache NiFi. It discusses how the growth of IoT data is outpacing our ability to consume it and how NiFi addresses the new requirements around collecting, securing and analyzing data in motion. Key features of NiFi are highlighted such as guaranteed delivery, data provenance, and its ability to securely manage bidirectional data flows in real-time. Common use cases like predictive analytics, compliance and IoT optimization are also summarized.
Big Data Day LA 2016 / Big Data Track - Building scalable enterprise data flow... (Data Con LA)
This document discusses Apache NiFi and stream processing. It provides an overview of NiFi's key concepts of managing data flow, data provenance, and securing data. NiFi allows users to visually build data flows with drag and drop processors. It offers features such as guaranteed delivery, data buffering, prioritized queuing, and data provenance. NiFi is based on Flow-Based Programming and is used to reliably transfer data between systems, enrich and prepare data, and deliver data to analytic platforms.
HDF Powered by Apache NiFi Introduction (Milind Pandit)
The document discusses Apache NiFi and its role in managing enterprise data flows, providing an overview of NiFi's key features and capabilities for reliable data transfer, preparation, and routing. It also demonstrates how NiFi is used in common use cases and provides examples of building simple data flows in NiFi to ingest, filter, and deliver data.
This document provides an overview of Hortonworks and Hadoop. It discusses Hortonworks' customer momentum, the Hortonworks Data Platform (HDP), and Hortonworks' role as a partner for customer success. It also summarizes challenges with traditional data systems, how Hadoop emerged as a foundation for a new data architecture, and how HDP delivers a comprehensive data management platform.
MiNiFi is a recently started sub-project of Apache NiFi: a complementary data collection approach that supplements the core tenets of NiFi in dataflow management, focusing on collecting data at the source of its creation. Simply put, MiNiFi agents take the guiding principles of NiFi and push them to the edge in a purpose-built design-and-deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
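The full working example lives in the gist above; for orientation, a MiNiFi config.yml has roughly the shape below. This is a heavily abridged sketch based on the MiNiFi 0.x YAML schema, and the exact keys, class names, and port names here are assumptions, not a verified configuration:

```yaml
MiNiFi Config Version: 3
Flow Controller:
  name: edge-log-collection        # hypothetical flow name
Processors:
  - name: TailLogFile
    class: org.apache.nifi.processors.standard.TailFile
    scheduling strategy: TIMER_DRIVEN
    scheduling period: 1 sec
    Properties:
      File to Tail: /var/log/app.log   # illustrative path
Remote Process Groups:
  - name: Core NiFi
    url: https://nifi.example.com:8443/nifi   # hypothetical core-NiFi URL
    Input Ports:
      - name: from-minifi
Connections:
  - name: TailLogFile-to-core
    source name: TailLogFile
    destination name: from-minifi
```

The general pattern is the point: a small set of processors runs at the edge, and a Remote Process Group ships the collected data back to a full NiFi instance.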
Apache NiFi - Flow Based Programming Meetup (Joseph Witt)
These are the slides from the July 11th Meetup in Toronto for the Flow Based Programming meetup group at Lighthouse covering Enterprise Dataflow with Apache NiFi.
Hortonworks Data in Motion Webinar Series - Part 1 (Hortonworks)
VIEW THE ON-DEMAND WEBINAR: http://hortonworks.com/webinar/introduction-hortonworks-dataflow/
Learn about Hortonworks DataFlow (HDF™) and how you can easily augment your existing data systems – Hadoop and otherwise. Learn what dataflow is all about and how Apache NiFi, MiNiFi, Kafka and Storm work together for streaming analytics.
Introduction to Apache NiFi - Seattle Scalability Meetup (Saptak Sen)
The document introduces Apache NiFi, an open source tool for data flow. It discusses how data from the Internet of Things is growing faster than can be consumed and highlights Apache NiFi's ability to securely collect, process and distribute this data in motion. The key concepts of Apache NiFi are described as managing the flow of information, ensuring data provenance, and securing the control and data planes. Example use cases are provided and the document demonstrates Apache NiFi's visual interface for creating data flows between processors to ingest, transform and output data in real-time.
Taking DataFlow Management to the Edge with Apache NiFi/MiNiFi (Bryan Bende)
This document provides an overview of a presentation about taking dataflow management to the edge with Apache NiFi and MiNiFi. The presentation discusses the problem of moving data between systems with different formats, protocols, and security requirements. It introduces Apache NiFi as a solution for dataflow management and Apache MiNiFi for managing dataflows at the edge, and includes a demo and time for Q&A.
Beyond Messaging: Enterprise Dataflow Powered by Apache NiFi (Isheeta Sanghi)
This document discusses Apache NiFi, an open source software project that provides a dataflow solution for gathering, processing, and delivering data between systems. NiFi addresses challenges with traditional messaging systems by allowing for data routing, transformation, prioritization, and provenance tracking. It uses a flow-based programming model where data moves through a directed graph of processes connected by queues. The project started at the National Security Agency in 2006 and became a top-level Apache project in 2015.
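The flow-based model described above (data moving through a directed graph of processes connected by queues) can be sketched in a few lines of plain Python. This is an illustrative toy, not NiFi's actual API; the processor names and the flowfile dict shape are invented for the example:

```python
# Toy sketch of flow-based programming: independent processors wired by queues.
from queue import Queue

def enrich(inbox: Queue, outbox: Queue) -> None:
    """Attach an attribute to each flowfile, as an enrichment processor might."""
    while not inbox.empty():
        flowfile = inbox.get()
        flowfile["attributes"]["enriched"] = True
        outbox.put(flowfile)

def deliver(inbox: Queue, sink: list) -> None:
    """Terminal processor: hand items off to the destination system."""
    while not inbox.empty():
        sink.append(inbox.get())

source_q, enriched_q, sink = Queue(), Queue(), []
for payload in ("event-1", "event-2"):
    source_q.put({"attributes": {}, "content": payload})

enrich(source_q, enriched_q)   # first edge of the directed graph
deliver(enriched_q, sink)      # second edge
print(len(sink), sink[0]["attributes"])  # 2 {'enriched': True}
```

Because each processor only touches its input and output queues, stages can be rearranged, buffered, or prioritized independently, which is the property NiFi builds on.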
Yifeng Jiang gives a presentation introducing Apache NiFi. He begins with an overview of himself and the agenda, then provides an introduction to NiFi, including terminology like FlowFile and Processor. Key aspects of NiFi are demonstrated, including the user interface, provenance tracking, queue prioritization, cluster architecture, and a demo of real-time data processing. Example use cases are discussed, like indexing JSON tweets and indexing data from a relational database. The presentation concludes that NiFi is an easy-to-use and powerful system for processing and distributing data, with 90 built-in processors.
Learn more: http://hortonworks.com/hdf/
Log data can be complex to capture; it is typically collected in limited amounts and is difficult to operationalize at scale. HDF expands log analytics integration options for easy and secure edge analytics of log files in the following ways:
• More efficient collection and movement of log data, by prioritizing, enriching and/or transforming data at the edge to dynamically separate critical data. The relevant data is then delivered into log analytics systems in a real-time, prioritized and secure manner.
• Cost-effective expansion of existing log analytics infrastructure, improving error detection and troubleshooting through more comprehensive data sets.
• Intelligent edge analytics to support real-time content-based routing, prioritization, and simultaneous delivery of data into Connected Data Platforms, log analytics and reporting systems, for comprehensive coverage and retention of Internet of Anything data.
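The prioritization described above (delivering critical log data ahead of routine data) can be sketched with an ordinary priority queue. A minimal illustration, not an HDF or NiFi API; the level names and their ordering are assumptions for the example:

```python
# Separate critical log lines for delivery first, using a heap keyed by severity.
import heapq

PRIORITY = {"FATAL": 0, "ERROR": 1, "WARN": 2, "INFO": 3}

def prioritize(lines):
    """Yield log lines in severity order; the sequence number keeps
    the ordering stable for lines of equal severity."""
    heap = []
    for seq, line in enumerate(lines):
        level = line.split(":", 1)[0]
        heapq.heappush(heap, (PRIORITY.get(level, 9), seq, line))
    while heap:
        yield heapq.heappop(heap)[2]

logs = ["INFO: heartbeat", "FATAL: disk failure", "WARN: slow response", "ERROR: timeout"]
ordered = list(prioritize(logs))
print(ordered[0])  # FATAL: disk failure
```

At the edge, the same idea lets an agent ship failure signals immediately while batching routine telemetry for later delivery.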
Data Ingestion and Distribution with Apache NiFi (Lev Brailovskiy)
In this session, we will cover our experience working with Apache NiFi, an easy-to-use, powerful, and reliable system for processing and distributing large volumes of data. The first part of the session will be an introduction to Apache NiFi, going over its main components, building blocks, and functionality.
In the second part of the session, we will show our use case for Apache NiFi and how it's being used inside our Data Processing infrastructure.
Using Spark Streaming and NiFi for the Next Generation of ETL in the Enterprise (DataWorks Summit)
In recent years, big data has moved from batch processing to stream-based processing, since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today, and the same trend that played out in batch-based big data processing has taken place in the streaming world: nearly every streaming framework now supports higher-level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming is a compelling architecture for building your next-generation ETL data pipeline in near real time. But what does it look like to deploy and operationalize this in an enterprise production environment?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Difference Between Apache Spark and Apache NiFi (GaneshJoshi47)
Apache Spark is an open source cluster computing framework that provides fault tolerance and data parallelism. Apache NiFi is a software project that automates data flow between systems using a flow-based programming model. While Apache NiFi focuses on data ingestion and distribution, Apache Spark is designed for rapid computation through interactive querying and memory management. Apache NiFi works in standalone mode, while Apache Spark can operate in standalone, YARN, and other cluster modes. Both have different use cases and advantages for data processing.
How is it that one system can query terabytes of data, yet still provide interactive query support? This talk will discuss two of the underlying technologies that allow Apache Hive to support fast query response, both on-premise in HDFS and in cloud object stores such as S3 and WASB.
LLAP was introduced in Hive 2.0. It provides standing processes that securely cache Hive's columnar data and can do query processing without ever needing to start tasks in Hadoop. We will cover LLAP's architecture, intended use cases, and performance numbers both on-premises and in the cloud.
The second technology is the integration of Hive with Apache Druid. Druid excels at low-latency, interactive queries over streaming data. Its method of storing data makes it very well suited for OLAP style queries. We will cover how Hive can be integrated with Druid to support real-time streaming of data from Kafka and OLAP queries.
This document provides an overview of Apache NiFi and data flow fundamentals. It begins with an introduction to Apache NiFi and outlines the agenda. It then discusses data flow and streaming fundamentals, including challenges in moving data effectively. The document introduces Apache NiFi's architecture and capabilities for addressing these challenges. It also previews a live demo of NiFi and discusses the NiFi community.
Webinar Series Part 5: New Features of HDF 5 (Hortonworks)
Overview of the newest features of Hortonworks DataFlow, highlighting the new processors, the new user interface, edge intelligence powered by Apache MiNiFi, new support for multi-tenancy, and the new zero-master clustering architecture.
Hortonworks Data In Motion Series Part 4 (Hortonworks)
How real-world enterprises leverage Hortonworks DataFlow/Apache NiFi to create real-time data flows in record time, enabling new business opportunities, improving customer retention, and accelerating big data projects from months to minutes through increased efficiency and reduced costs.
On-Demand webinar: http://hortonworks.com/webinar/paradigm-shift-business-usual-real-time-dataflows-record-time/
Running Apache NiFi with Apache Spark: Integration Options (Timothy Spann)
A walk-through of the various options for integrating Apache Spark and Apache NiFi in one smooth dataflow. There are now several options for interfacing between Apache NiFi and Apache Spark, using Apache Kafka and Apache Livy.
Harnessing Data-in-Motion with HDF 2.0: Introduction to Apache NiFi/MiNiFi (Haimo Liu)
Introducing the new Hortonworks DataFlow (HDF) release, HDF 2.0. Also provides an introduction to the flow management part of the platform, powered by Apache NiFi and MiNiFi.
Learn about HDF and how you can easily augment your existing data systems - Hadoop and otherwise. Learn what Dataflow is all about and how Apache NiFi, MiNiFi, Kafka and Storm work together for streaming analytics.
Apache NiFi Crash Course - San Jose Hadoop Summit (Aldrin Piri)
This document provides an overview of Apache NiFi and dataflow. It begins with defining what dataflow is and the challenges of moving data effectively. It then introduces Apache NiFi, describing its key features like guaranteed delivery, data buffering, prioritized queuing, and data provenance. The document discusses NiFi's architecture including its use of FlowFiles to move data agnostically through processors. It also covers NiFi's extension points and integration with other systems. Finally, it describes a live demo use case of using NiFi to integrate real-time traffic data for urban planning.
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work in improving it along with many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains, including significantly improved performance for ACID tables. The talk will also provide a glimpse of what is expected to come in the near future.
Speaker: Alan Gates, Co-Founder, Hortonworks
This document provides an agenda and overview of topics for a Hortonworks data movement and management meetup. The agenda includes networking, introductions, discussions on Falcon use cases and releases, Hive disaster recovery, server-side extensions, ADF/instance search, Hive-based ingestion/export, Spark integration, and Sqoop 2 features. An overview of Falcon describes its high-level abstraction of Hadoop data processing services. Usage scenarios focus on dataset replication, lifecycle management, and lineage/traceability. The document also discusses Falcon examples for replication, retention, and late data handling.
Apache Hive is a rapidly evolving project that is much loved across the big data ecosystem. Hive continues to expand its support for analytics, reporting, and interactive queries, and the community is striving to improve it along many dimensions and use cases. This talk introduces the latest and greatest features and optimizations to land in the project over the last year, including benchmarks covering LLAP, materialized views, Apache Druid integration, workload management, ACID improvements, using Hive in the cloud, and performance improvements. It also previews a little of what to expect in the future.
Achieving a 360-degree view of manufacturing via open source industrial data ... (DataWorks Summit)
Continuously improving factory operations is of critical importance to manufacturers. Consider the facts: the total cost of poor quality amounts to a staggering 20% of sales (American Society for Quality), and unplanned downtime costs plants approximately $50 billion per year (Deloitte).
The most pressing questions are: which process variables affect quality and yield, and which process variables predict equipment failure? Getting to those answers is giving forward-thinking manufacturers a leg up over competitors.
The speakers address the data management challenges facing today's manufacturers, including proprietary systems and siloed data sources, as well as an inability to make sensor-based data usable.
Integrating enterprise data from ERP, MES, maintenance systems, and other sources with real-time operations data from sensors, PLCs, SCADA systems, and historians represents a major first step. But how to get started? What is the value of a data lake? How are AI/ML being applied to enable real time action?
Join us for this educational session, which includes a view into a roadmap for an open source industrial IoT data management platform.
Key Takeaways:
• Understand key use cases commonly undertaken by manufacturing enterprises
• Understand the value of using multivariate manufacturing data sources, as opposed to a single sensor on a piece of equipment
• Understand advances in big data management and streaming analytics that are paving the way to next-generation factory performance
Speakers
Michael Ger, General Manager Manufacturing and Automotive, Hortonworks
Wade Salazar, Solutions Engineer, Hortonworks
MiNiFi is a recently started sub-project of Apache NiFi that is a complementary data collection approach which supplements the core tenets of NiFi in dataflow management, focusing on the collection of data at the source of its creation. Simply, MiNiFi agents take the guiding principles of NiFi and pushes them to the edge in a purpose built design and deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
Beyond Messaging Enterprise Dataflow powered by Apache NiFiIsheeta Sanghi
This document discusses Apache NiFi, an open source software project that provides a dataflow solution for gathering, processing, and delivering data between systems. NiFi addresses challenges with traditional messaging systems by allowing for data routing, transformation, prioritization, and provenance tracking. It uses a flow-based programming model where data moves through a directed graph of processes connected by queues. The project started at the National Security Agency in 2006 and became a top-level Apache project in 2015.
Yifeng Jiang gives a presentation introducing Apache Nifi. He begins with an overview of himself and the agenda. He then provides an introduction to Nifi including terminology like FlowFile and Processor. Key aspects of Nifi are demonstrated including the user interface, provenance tracking, queue prioritization, cluster architecture, and a demo of real-time data processing. Example use cases are discussed like indexing JSON tweets and indexing data from a relational database. The presentation concludes that Nifi is an easy to use and powerful system for processing and distributing data with 90 built-in processors.
Learn more: http://hortonworks.com/hdf/
Log data can be complex to capture, typically collected in limited amounts and difficult to operationalize at scale. HDF expands the capabilities of log analytics integration options for easy and secure edge analytics of log files in the following ways:
More efficient collection and movement of log data by prioritizing, enriching and/or transforming data at the edge to dynamically separate critical data. The relevant data is then delivered into log analytics systems in a real-time, prioritized and secure manner.
Cost-effective expansion of existing log analytics infrastructure by improving error detection and troubleshooting through more comprehensive data sets.
Intelligent edge analytics to support real-time content-based routing, prioritization, and simultaneous delivery of data into Connected Data Platforms, log analytics and reporting systems for comprehensive coverage and retention of Internet of Anything data.
Data ingestion and distribution with apache NiFiLev Brailovskiy
In this session, we will cover our experience working with Apache NiFi, an easy to use, powerful, and reliable system to process and distribute a large volume of data. The first part of the session will be an introduction to Apache NiFi. We will go over NiFi main components and building blocks and functionality.
In the second part of the session, we will show our use case for Apache NiFi and how it's being used inside our Data Processing infrastructure.
Using Spark Streaming and NiFi for the next generation of ETL in the enterpriseDataWorks Summit
In recent years, big data has moved from batch processing to stream-based processing since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today and the same trend that occurred in the batch-based big data processing realm has taken place in the streaming world so that nearly every streaming framework now supports higher level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architecture option for building your next generation ETL data pipeline in near real time. What does this look like in an enterprise production environment to deploy and operationalized?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Difference between apache spark and apache nifiGaneshJoshi47
Apache Spark is an open source cluster computing framework that provides fault tolerance and data parallelism. Apache Nifi is a software project that automates data flow between systems using a flow-based programming model. While Apache Nifi focuses on data ingestion and distribution, Apache Spark is designed for rapid computation through interactive querying and memory management. Apache Nifi works in standalone mode while Apache Spark can operate in standalone, YARN, and other cluster modes. Both have different use cases and advantages for data processing.
How is it that one system can query terabytes of data, yet still provide interactive query support? This talk will discuss two of the underlying technologies that allow Apache Hive to support fast query response, both on-premise in HDFS and in cloud object stores such as S3 and WASB.
LLAP was introduced in Hive 2.6. It provides standing processes that securely cache Hive’s columnar data and can do query processing without ever needing to start tasks in Hadoop. We will cover LLAP’s architecture, intended uses cases, and performance numbers for both on-premise and in the cloud.
The second technology is the integration of Hive with Apache Druid. Druid excels at low-latency, interactive queries over streaming data. Its method of storing data makes it very well suited for OLAP style queries. We will cover how Hive can be integrated with Druid to support real-time streaming of data from Kafka and OLAP queries.
This document provides an overview of Apache NiFi and data flow fundamentals. It begins with an introduction to Apache NiFi and outlines the agenda. It then discusses data flow and streaming fundamentals, including challenges in moving data effectively. The document introduces Apache NiFi's architecture and capabilities for addressing these challenges. It also previews a live demo of NiFi and discusses the NiFi community.
Webinar Series Part 5 New Features of HDF 5Hortonworks
Overview of the newest features of Hortonworks DataFlow highlighting the new processors, new user interface, edge intelligence powered by Apache MiNiFi and new support for multi-tenancy and new zero master clustering architecture
Hortonworks Data In Motion Series Part 4Hortonworks
How real-world enterprises leverage Hortonworks DataFlow/Apache NiFi to to create real-time data flows in record time to enable new business opportunities, improve customer retention, accelerate big data projects from months to minutes through increased efficiency and reduced costs.
On-Demand webinar: http://hortonworks.com/webinar/paradigm-shift-business-usual-real-time-dataflows-record-time/
Running Apache NiFi with Apache Spark : Integration OptionsTimothy Spann
A walk-through of various options in integration Apache Spark and Apache NiFi in one smooth dataflow. There are now several options in interfacing between Apache NiFi and Apache Spark with Apache Kafka and Apache Livy.
Harnessing Data-in-Motion with HDF 2.0, introduction to Apache NIFI/MINIFIHaimo Liu
Introducing the new Hortonworks DataFlow (HDF) release, HDF 2.0. Also provides introduction to the flow management part of the platform, powered by Apache NIFI and MINIFI.
Learn about HDF and how you can easily augment your existing data systems - Hadoop and otherwise. Learn what Dataflow is all about and how Apache NiFi, MiNiFi, Kafka and Storm work together for streaming analytics.
Apache NiFi Crash Course - San Jose Hadoop SummitAldrin Piri
This document provides an overview of Apache NiFi and dataflow. It begins with defining what dataflow is and the challenges of moving data effectively. It then introduces Apache NiFi, describing its key features like guaranteed delivery, data buffering, prioritized queuing, and data provenance. The document discusses NiFi's architecture including its use of FlowFiles to move data agnostically through processors. It also covers NiFi's extension points and integration with other systems. Finally, it describes a live demo use case of using NiFi to integrate real-time traffic data for urban planning.
Learn more: http://hortonworks.com/hdf/
Log data can be complex to capture, typically collected in limited amounts and difficult to operationalize at scale. HDF expands the capabilities of log analytics integration options for easy and secure edge analytics of log files in the following ways:
More efficient collection and movement of log data by prioritizing, enriching and/or transforming data at the edge to dynamically separate critical data. The relevant data is then delivered into log analytics systems in a real-time, prioritized and secure manner.
Cost-effective expansion of existing log analytics infrastructure by improving error detection and troubleshooting through more comprehensive data sets.
Intelligent edge analytics to support real-time content-based routing, prioritization, and simultaneous delivery of data into Connected Data Platforms, log analytics and reporting systems for comprehensive coverage and retention of Internet of Anything data.
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work in improving it along with many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains, including significantly improved performance for ACID tables. The talk will also provide a glimpse of what is expected to come in the near future.
Speaker: Alan Gates, Co-Founder, Hortonworks
This document provides an agenda and overview of topics for a Hortonworks data movement and management meetup. The agenda includes networking, introductions, discussions on Falcon use cases and releases, Hive disaster recovery, server-side extensions, ADF/instance search, Hive-based ingestion/export, Spark integration, and Sqoop 2 features. An overview of Falcon describes its high-level abstraction of Hadoop data processing services. Usage scenarios focus on dataset replication, lifecycle management, and lineage/traceability. The document also discusses Falcon examples for replication, retention, and late data handling.
Apache Hive is a rapidly evolving project that enjoys wide adoption across the big data ecosystem. As Hive continues to expand its support for analytics, reporting, and interactive queries, the community is hard at work improving it along many dimensions and for many use cases. This talk provides an overview of the latest and greatest features and optimizations that have landed in the project over the last year, including benchmarks covering LLAP, materialized views and Apache Druid integration, workload management, ACID improvements, running Hive in the cloud, and performance improvements. It also offers a glimpse of what to expect in the future.
Achieving a 360-degree view of manufacturing via open source industrial data ...DataWorks Summit
Continuously improving factory operations is of critical importance to manufacturers. Consider the facts: the total cost of poor quality amounts to a staggering 20% of sales (American Society of Quality), and unplanned downtime costs plants approximately $50 billion per year (Deloitte).
The most pressing questions are: which process variables affect quality and yield, and which process variables predict equipment failure? Getting to those answers is giving forward-thinking manufacturers a leg up over competitors.
The speakers address the data management challenges facing today's manufacturers, including proprietary systems and siloed data sources, as well as an inability to make sensor-based data usable.
Integrating enterprise data from ERP, MES, maintenance systems, and other sources with real-time operations data from sensors, PLCs, SCADA systems, and historians represents a major first step. But how to get started? What is the value of a data lake? How are AI/ML being applied to enable real time action?
Join us for this educational session, which includes a view into a roadmap for an open source industrial IoT data management platform.
Key Takeaways:
• Understand key use cases commonly undertaken by manufacturing enterprises
• Understand the value of using multivariate manufacturing data sources, as opposed to a single sensor on a piece of equipment
• Understand advances in big data management and streaming analytics that are paving the way to next-generation factory performance
Speakers
Michael Ger, General Manager Manufacturing and Automotive, Hortonworks
Wade Salazar, Solutions Engineer, Hortonworks
MiNiFi is a recently started sub-project of Apache NiFi: a complementary data collection approach that supplements NiFi's core tenets of dataflow management by focusing on collecting data at the source of its creation. Simply put, MiNiFi agents take the guiding principles of NiFi and push them to the edge in a purpose-built, design-and-deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
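For readers who have not opened the gist, a minimal MiNiFi config.yml looks roughly like the sketch below. This is a hypothetical example: every name, path, and URL is a placeholder, and the field names approximate the MiNiFi config schema — the linked gist is the authoritative, tested configuration.

```yaml
# Hypothetical sketch; values are placeholders, not the gist's config.
MiNiFi Config Version: 3
Flow Controller:
  name: edge-log-tailer
Processors:
  - name: TailAppLog
    class: org.apache.nifi.processors.standard.TailFile
    scheduling strategy: TIMER_DRIVEN
    scheduling period: 1 sec
    Properties:
      File to Tail: /var/log/app/app.log
Connections:
  - name: TailAppLog/success
    source name: TailAppLog
    source relationship names:
      - success
    destination name: FromEdge
Remote Process Groups:
  - name: CoreNiFi
    url: https://nifi.example.com:8443/nifi
    Input Ports:
      - name: FromEdge
```

The shape mirrors what a NiFi user would build on the canvas — processors, connections, and a remote process group pointing back at a core NiFi instance — but declared in a lightweight file suitable for edge deployment.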
Learn how Hortonworks DataFlow (HDF), powered by Apache NiFi, enables organizations to harness IoAT data streams to drive business and operational insights. We will use the session to provide an overview of HDF, including a detailed hands-on lab to build HDF pipelines for capture and analysis of streaming data.
Recording and labs available at:
http://hortonworks.com/partners/learn/#hdf
Hortonworks Data in Motion Webinar Series Part 7 Apache Kafka Nifi Better Tog...Hortonworks
Apache NiFi, Storm and Kafka augment each other in modern enterprise architectures. NiFi provides a coding-free solution to get many different formats and protocols in and out of Kafka, and complements Kafka with full audit trails and interactive command and control. Storm complements NiFi with the capability to handle complex event processing.
Join us to learn how Apache NiFi, Storm and Kafka can augment each other for creating a new dataplane connecting multiple systems within your enterprise with ease, speed and increased productivity.
https://www.brighttalk.com/webcast/9573/224063
The document introduces Hortonworks DataFlow (HDF) and Apache NiFi. It discusses how HDF addresses challenges with enterprise data flow, such as variability in data formats/schemas, size/speed of data, security, and scaling. HDF provides a visual user interface and secure end-to-end data routing. It also offers data provenance for governance and compliance. HDF and Apache NiFi can be used for real-time data ingestion and management, data as a service, regulatory compliance, security applications like asset/personnel protection and fraud prevention, and other big data use cases.
The document discusses Apache NiFi and its role in the Hadoop ecosystem. It provides an overview of NiFi, describes how it can be used to integrate with Hadoop components like HDFS, HBase, and Kafka. It also discusses how NiFi supports stream processing integrations and outlines some use cases. The document concludes by discussing future work, including improving NiFi's high availability, multi-tenancy, and expanding its ecosystem integrations.
This document discusses integrating Internet of Things (IOT) data, streaming analytics, and machine learning using Apache NiFi and SAS Event Stream Processing. It describes how SAS ESP can be used to build real-time analytics models using a drag-and-drop interface to detect patterns in streaming data. It also outlines how SAS ESP can integrate with Hortonworks Data Flow (NiFi) to enable rapid prototyping of machine learning models on streaming data within an open framework. Finally, it provides an overview of how SAS ESP connectors and adapters allow flexibility and integration with other data sources.
Design a Dataflow in 7 minutes with Apache NiFi/HDFHortonworks
The document describes how to create a live dataflow in Hortonworks DataFlow in 7 minutes. It involves dragging and dropping two processors onto a canvas - one for data intake and one for data output - configuring each processor, connecting them, and starting the flow. The dataflow can then be dynamically adjusted and tuned in real-time. Hortonworks DataFlow also allows viewing data provenance to trace the lineage and changes of data as it flows through the system.
Extending the Yahoo Streaming Benchmark + MapR BenchmarksJamie Grier
The document summarizes benchmark tests that were performed to compare the throughput of Apache Storm and Apache Flink for processing streaming data. The original Yahoo! benchmark showed Storm outperforming Flink. However, the author repeated the tests and was able to achieve much higher throughput with Flink by addressing bottlenecks. When deployed on a high-performance MapR cluster, Flink processed over 72 million messages per second, significantly outperforming the original Storm results. The document concludes by noting Flink's compatibility features that allow reuse of existing Storm applications and components.
Integrating Apache NiFi and Apache FlinkHortonworks
Hortonworks DataFlow delivers data to streaming analytics platforms, including Storm, Spark and Flink.
These are slides from an Apache Flink Meetup: Integration of Apache Flink and Apache NiFi, Feb 4 2016
Reference architecture for Internet of ThingsSujee Maniyam
What kind of data infrastructure is needed to support the Internet of Things?
This talk presents a reference architecture.
We are actually building this architecture as an open source project. See here: bit.ly/iotxyz
Apache Beam (formerly the Google Cloud Dataflow SDK) is a unified model and a set of language-specific SDKs for defining and executing data processing workflows. You design pipelines that simplify the mechanics of large-scale batch and streaming data processing and that can run on a number of runtimes, such as Apache Flink, Apache Spark, and Google Cloud Dataflow (a cloud service).
This presentation introduces the Beam programming model and how you can use it to design your pipelines, moving PCollections through a series of PTransforms. You will see how the same code is translated to a target runtime thanks to a runtime-specific runner. You will also get an overview of the current roadmap and its new, interesting features.
Apache Atlas provides centralized metadata services and cross-component dataset lineage tracking for Hadoop components. It aims to enable transparent, reproducible, auditable and consistent data governance across structured, unstructured, and traditional database systems. The near term roadmap includes dynamic access policy driven by metadata and enhanced Hive integration. Apache Atlas also pursues metadata exchange with non-Hadoop systems and third party vendors through REST APIs and custom reporters.
Google Cloud Dataflow is a next generation managed big data service based on the Apache Beam programming model. It provides a unified model for batch and streaming data processing, with an optimized execution engine that automatically scales based on workload. Customers report being able to build complex data pipelines more quickly using Cloud Dataflow compared to other technologies like Spark, and with improved performance and reduced operational overhead.
Apache Beam is a unified programming model for batch and streaming data processing. It defines concepts for describing what computations to perform (the transformations), where the data is located in time (windowing), when to emit results (triggering), and how to accumulate results over time (accumulation mode). Beam aims to provide portable pipelines across multiple execution engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow. The talk will cover the key concepts of the Beam model and how it provides unified, efficient, and portable data processing pipelines.
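Beam's "where in event time" dimension can be illustrated with a plain-Python toy — deliberately not the Beam API — that assigns timestamped events to fixed, tumbling windows and counts per window and key, a rough analogue of Beam's FixedWindows combined with Count.perKey():

```python
from collections import defaultdict

def fixed_windows(events, window_size):
    """Assign (timestamp, key) events to tumbling windows of
    `window_size` seconds and count occurrences per (window, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        # Each event belongs to exactly one fixed window,
        # identified by the window's start timestamp.
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1, "a"), (3, "a"), (3, "b"), (11, "a"), (12, "b"), (13, "b")]
result = fixed_windows(events, window_size=10)
# Events at t=1 and t=3 land in window [0,10); the rest in [10,20).
```

In real Beam, triggering and accumulation mode would additionally decide when each window's count is emitted and whether late data refines it; this sketch only shows the windowing ("where") dimension.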
This document discusses Apache NiFi and how it was used to create a new composable data flow system for Schlumberger in just 10 man hours. The previous system was very complex, took over 100 man years to create, and was difficult to change. NiFi allows for easy visualization of the data flow, debugging of issues, and rapid creation of new processors. It also enables quick testing of data flows using curated test data sets and live data in Docker containers. Next steps discussed include further exploring use cases for rig data ingestion with NiFi to provide data provenance and understand the chain of custody of data as it moves through the system.
Agenda:
1. Data Flow Challenges in an Enterprise
2. Introduction to Apache NiFi
3. Core Features
4. Architecture
5. Demo – Simple Lambda Architecture
6. Use Cases
7. Q & A
Using Hadoop as a platform for Master Data ManagementDataWorks Summit
This document discusses using Hadoop as a platform for master data management. It begins by explaining what master data management is and its key components. It then discusses how MDM relates to big data and some of the challenges of implementing MDM on Hadoop. The document provides a simplified example of traditional MDM and how it could work on Hadoop. It outlines some common approaches to matching and merging data on Hadoop. Finally, it discusses a sample MDM tool that could implement matching in Hadoop through MapReduce jobs and provide online MDM services through an accessible database.
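The matching step mentioned above can be sketched with a naive string-similarity comparison — a stand-in for the probabilistic and rules-based matching a production MDM tool would use. The threshold and the greedy clustering strategy are illustrative assumptions:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude name similarity in [0, 1] via difflib."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_records(records, threshold=0.85):
    """Greedy matching on customer name: each record joins the first
    cluster whose representative is similar enough, else starts a new
    cluster. Returns clusters of record indices presumed to be the
    same real-world entity."""
    clusters = []
    for i, rec in enumerate(records):
        for cluster in clusters:
            if similarity(records[cluster[0]]["name"], rec["name"]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

records = [
    {"name": "Jonathan Smith"},
    {"name": "Jonathon Smith"},   # likely the same person, misspelled
    {"name": "Maria Garcia"},
]
clusters = match_records(records)
```

At Hadoop scale this pairwise comparison would be distributed (e.g. as MapReduce jobs with blocking keys to avoid comparing every pair), which is the approach the document alludes to.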
BigData Techcon - Beyond Messaging with Apache NiFiAldrin Piri
This document discusses Apache NiFi, an open source software project that provides a dataflow solution for gathering, processing, and delivering data between systems. NiFi addresses challenges with traditional messaging systems by allowing for data routing, transformation, prioritization, and provenance tracking. The document outlines NiFi's architecture and capabilities, provides a brief history of the project, and invites the reader to learn more and get involved with the Apache NiFi community.
The document discusses Apache NiFi, an open source software project that provides a dataflow solution for managing enterprise data movement and integration. It describes challenges with traditional messaging systems for enterprise dataflow and introduces Apache NiFi as an alternative. NiFi is based on Flow-Based Programming and allows users to visually create dataflows that can transform, route, and process data in real-time. The document includes a demonstration of NiFi and discusses its architecture, features, and future proposals.
Data in Motion - Data at Rest - Hortonworks a Modern ArchitectureMats Johansson
Presentation at Data Innovation Summit 2016 in Stockholm
How to build a modern data architecture supporting data in motion and data at rest with Hortonworks Data Flow and Data Platform.
Hortonworks Oracle Big Data Integration Hortonworks
Slides from joint Hortonworks and Oracle webinar on November 11, 2014. Covers the Modern Data Architecture with Apache Hadoop and Oracle Data Integration products.
Hortonworks & Bilot Data Driven Transformations with HadoopMats Johansson
- Traditional systems are under pressure due to their inability to manage new data sources and costly scaling. A modern data architecture using Apache Hadoop emerges to provide a centralized platform for all enterprise data and applications.
- Hortonworks Data Platform is powered by Apache Hadoop and provides a flexible, scalable platform for storing and processing all data types from any source and supports a variety of applications. It offers governance, security, and operations controls for enterprise data management.
Curing the Kafka blindness—Streams Messaging ManagerDataWorks Summit
Companies who use Kafka today struggle with monitoring and managing Kafka clusters. Kafka is a key backbone of IoT streaming analytics applications. The challenge is understanding what is going on overall in the Kafka cluster including performance, issues and message flows. No open source tool caters to the needs of different users that work with Kafka: DevOps/developers, platform team, and security/governance teams. See how the new Hortonworks Streams Messaging Manager enables users to visualize their entire Kafka environment end-to-end and simplifies Kafka operations.
In this session learn how SMM visualizes the intricate details of how Apache Kafka functions in real time while simultaneously surfacing every nuance of tuning, optimizing, and measuring input and output. SMM will assist users to quickly understand and operate Kafka while providing the much-needed transparency that sophisticated and experienced users need to avoid all the pitfalls of running a Kafka cluster.
Hortonworks Data In Motion Webinar Series Pt. 2Hortonworks
This document discusses Hortonworks' HDF 2.0 platform for managing data in motion and at rest. The platform includes tools for data ingestion, streaming, and storage. It also allows partners to integrate their solutions and get certified. Use cases highlighted include log analytics, IoT, and connected vehicles. The ecosystem supports ingesting data from various sources and processing it using tools like NiFi, Kafka, and Storm.
Predicting Customer Experience through Hadoop and Customer Behavior GraphsHortonworks
Enhancing the customer experience has become essential for communication service providers looking to manage customer churn and build strong, long-lasting relationships with their customers. This has become increasingly challenging as customer interactions occur across multiple channels. Understanding customer behavior and how it applies across channels is the key to ensuring each customer achieves the best level of experience.
In this webinar, Hortonworks and Apigee discuss how service providers can capture and visualize customer behavior across interaction points such as call center events (IVR and chat) and combine it with network data to predict customer calls and patterns of digital channel abandonment, using Hadoop and predictive analysis and visualization tools.
We will identify ways to develop a 360 degree view across a customer’s household through an HDP Data Lake and visualize customer interaction patterns and predict expected behavior using Apigee Insights to identify and initiate the Next-Best-Action for a customer to ensure a superior level of customer experience.
Data proliferation from 7+ billion humans and 20+ billion devices from every walk of life has been the focus of the last decade. With the velocity, variety and volume of data, every data organization's goal has shifted to protecting and monetizing data from a rapidly growing network of IoT-embedded objects and sensors.
One of the tried-and-true business continuity methodologies for storing and retrieving vast amounts of data has been the replication of Hadoop systems across hybrid clouds and geographically distributed data centers. Replication is similar to blockchain in that autonomous smart contracts instantiated on the metadata and data ensure the replicated data follows a single source of truth.
Replicas maintained across geographically distributed data centers give the business continuity plan greater risk tolerance for its data sets. With intelligent predictive analytics based on usage patterns, dynamic tiering policies can be triggered on data sets to provide true added value. The temperature of the data is used to move it between hot/warm/cold/archival storage based on configurable policies, leading to a greater reduction in total cost of ownership.
Users in 2018 and beyond demand absolute availability of data as and when they desire it. Dynamic data access management is a fundamental concept in satisfying the business continuity plan. Seamless, enterprise-grade disaster recovery to support the business continuity use case has significant challenges around replicating security and governance on data sets. In this talk we will discuss how this challenge can be addressed to support seamless replication and disaster recovery for Hadoop-scale data. NIRU ANISETI, Product Manager, Hortonworks
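The temperature-driven tiering described in this abstract can be sketched as a simple policy function. The thresholds below are invented for illustration, since the talk describes the policies as configurable:

```python
import time

# Illustrative thresholds: maximum age (days since last access)
# for each tier; anything older falls through to archive.
TIERS = [("hot", 7), ("warm", 30), ("cold", 180)]

def tier_for(last_access_ts, now=None):
    """Pick a storage tier from a dataset's access recency."""
    now = now if now is not None else time.time()
    age_days = (now - last_access_ts) / 86400
    for tier, max_age in TIERS:
        if age_days <= max_age:
            return tier
    return "archive"

now = 1_000_000_000
t_hot = tier_for(now - 2 * 86400, now)      # accessed 2 days ago
t_warm = tier_for(now - 20 * 86400, now)    # accessed 20 days ago
t_cold = tier_for(now - 100 * 86400, now)   # accessed 100 days ago
t_arch = tier_for(now - 400 * 86400, now)   # accessed 400 days ago
```

A production system would evaluate such a policy periodically over dataset metadata and trigger the actual data movement between storage classes; the function only captures the decision logic.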
A Comprehensive Approach to Building your Big Data - with Cisco, Hortonworks ...Hortonworks
Companies in every industry look for ways to explore new data types and large data sets that were previously too big to capture, store and process. They need to unlock insights from data such as clickstream, geo-location, sensor, server log, social, text and video data. However, becoming a data-first enterprise comes with many challenges.
Join this webinar organized by three leaders in their respective fields and learn from our experts how you can accelerate the implementation of a scalable, cost-efficient and robust Big Data solution. Cisco, Hortonworks and Red Hat will explore how new data sets can enrich existing analytic applications with new perspectives and insights and how they can help you drive the creation of innovative new apps that provide new value to your business.
This document discusses how Hortonworks Data Platform (HDP) can enable enterprises to build a modern data architecture centered around Hadoop. It describes how HDP provides a centralized platform for managing all types of data at scale using technologies like YARN. Case studies are presented showing how companies have used HDP to optimize costs, develop new analytics applications, and work towards creating a unified "data lake". The document outlines the key components of HDP including its support for any application, any data, and deployment anywhere. It also highlights how partners extend HDP's capabilities and how Hortonworks provides enterprise-grade support.
Slides from the joint webinar. Learn how Pivotal HAWQ, one of the world’s most advanced enterprise SQL on Hadoop technology, coupled with the Hortonworks Data Platform, the only 100% open source Apache Hadoop data platform, can turbocharge your Data Science efforts.
Together, Pivotal HAWQ and the Hortonworks Data Platform provide businesses with a Modern Data Architecture for IT transformation.
Enterprise IIoT Edge Processing with Apache NiFiTimothy Spann
April 5, 2018 IoT Fusion 2018 Conference in Philadelphia, PA hosted by Chariot Solutions. This talk is about Apache NiFi, MiniFi, Python, Deep Learning, NVidia Jetson TX1, Raspberry Pi, Apache MXNet, TensorFlow and how to run things at the edge and process in your big data center. http://iotfusion.net/session/ https://github.com/tspannhw/IoTFusion2018Talk
Hortonworks - IBM Cognitive - The Future of Data ScienceThiago Santiago
The document discusses Hortonworks and IBM's partnership around data management and analytics. It highlights how their combined platforms can power the modern data architecture with solutions for data at rest and in motion. Examples are provided of how customers like Merck and JPMC have leveraged Hortonworks' technologies to gain insights from their data and drive business outcomes. Industries that are investing in data science are also listed.
Talk at DataconDC Oct 3 2017. Covers the background of the data in motion problem space, use cases for Data in Motion, and talks through flow management, stream processing, and enterprise services necessary in a data in motion platform.
Hortonworks provides an open source Apache Hadoop data platform to help organizations solve big data problems. It was founded in 2011 and was the first Hadoop company to go public. Hortonworks has over 800 employees across 17 countries and over 1,350 technology partners. Hortonworks' Hadoop Data Platform is a collection of Apache projects that provides data management, data access, governance and integration, operations, and security capabilities for enterprises. The platform supports batch, interactive, and streaming analytics on large volumes of structured and unstructured data across on-premise and cloud deployments.
Hortonworks provides an open source Apache Hadoop data platform for managing large volumes of data. It was founded in 2011 and went public in 2014. Hortonworks has over 800 employees across 17 countries and partners with over 1,350 technology companies. Hortonworks' Data Platform is a collection of Apache projects that provides data management, access, governance, integration, operations and security capabilities. It supports batch, interactive and real-time processing on a shared infrastructure using the YARN resource management system.
Using Spark Streaming and NiFi for the Next Generation of ETL in the EnterpriseDataWorks Summit
In recent years, big data has moved from batch processing to stream-based processing since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today and the same trend that occurred in the batch-based big data processing realm has taken place in the streaming world so that nearly every streaming framework now supports higher level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architecture option for building your next generation ETL data pipeline in near real time. What does this look like in an enterprise production environment to deploy and operationalized?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Speaker: Andrew Psaltis, Principal Solution Engineer, Hortonworks
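One common answer to the exactly-once question raised above is to pair at-least-once delivery with an idempotent sink, so that redeliveries are harmless. A minimal sketch follows — the class and its API are hypothetical illustrations, not Spark's or Kafka's actual interfaces:

```python
class IdempotentSink:
    """Deduplicates on a per-event ID so that events redelivered by an
    at-least-once transport are applied exactly once."""

    def __init__(self):
        self.seen = set()   # IDs already applied
        self.store = []     # the "committed" output

    def write(self, event_id, payload):
        if event_id in self.seen:
            return False    # duplicate redelivery, ignored
        self.seen.add(event_id)
        self.store.append(payload)
        return True

sink = IdempotentSink()
# Simulate a redelivery of event 1 after a retry upstream.
for eid, payload in [(1, "a"), (2, "b"), (1, "a")]:
    sink.write(eid, payload)
```

In practice the seen-ID set must itself be durable and bounded (e.g. via transactional offsets or keyed upserts), which is where much of the operational complexity discussed in the session lives.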
Human Information is made up of ideas, is diverse, and has context.
Ideas don’t match exactly the way data does; they have distance.
Human Information is not static – it’s dynamic and lives everywhere.
Details on applications
HAVEn is integrated into customers’ architectures through nApps.
HP has started modifying its existing application portfolio to use HAVEn.
And HP is building new applications that leverage the power of HAVEn.
Many customers are already building applications that use multiple HAVEn engines.