This introductory level talk is about Apache Flink: a multi-purpose Big Data analytics framework leading a movement towards the unification of batch and stream processing in open source.
With its many technical innovations and its unique vision and philosophy, it is considered the 4G (4th generation) of Big Data analytics frameworks, providing the only hybrid (real-time streaming + batch) open source distributed data processing engine and supporting many use cases: batch, streaming, relational queries, machine learning and graph processing.
In this talk, you will learn about:
1. What is the Apache Flink stack, and how does it fit into the Big Data ecosystem?
2. How does Apache Flink integrate with Hadoop and other open source tools for data input and output as well as deployment?
3. Why is Apache Flink an alternative to Apache Hadoop MapReduce, Apache Storm and Apache Spark?
4. Who is using Apache Flink?
5. Where can you learn more about Apache Flink?
Building End-to-End Delta Pipelines on GCP (Databricks)
Delta has been powering many production pipelines at scale in the Data and AI space since its introduction a few years ago.
Built on open standards, Delta provides data reliability and enhances storage and query performance to support big data use cases (both batch and streaming), fast interactive queries for BI, and machine learning. Delta has matured over the past couple of years on both AWS and Azure and has become the de-facto standard for organizations building their Data and AI pipelines.
In today's talk, we will explore building end-to-end pipelines on the Google Cloud Platform (GCP). Through presentation, code examples and notebooks, we will build a Delta pipeline from ingestion to consumption using the Delta Bronze-Silver-Gold architecture pattern and show examples of consuming the Delta files using the BigQuery connector.
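As a rough illustration of the Bronze-Silver-Gold pattern mentioned above, here is a minimal PySpark sketch; the bucket paths and column names are hypothetical, and it assumes the Delta Lake package is available to Spark:

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder.appName("delta-medallion-sketch")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Bronze: land raw JSON events as-is.
raw = spark.read.json("gs://my-bucket/raw/events/")    # hypothetical path
raw.write.format("delta").mode("append").save("gs://my-bucket/bronze/events")

# Silver: cleanse and deduplicate the bronze data.
bronze = spark.read.format("delta").load("gs://my-bucket/bronze/events")
silver = (bronze.filter(F.col("event_id").isNotNull())  # assumed column
                .dropDuplicates(["event_id"]))
silver.write.format("delta").mode("overwrite").save("gs://my-bucket/silver/events")

# Gold: a business-level aggregate, ready for BI or the BigQuery connector.
gold = silver.groupBy("event_type").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("gs://my-bucket/gold/event_counts")
```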
Overview of Apache Flink: Next-Gen Big Data Analytics Framework (Slim Baltagi)
These are the slides of my talk on June 30, 2015 at the first event of the Chicago Apache Flink meetup. Although most of the current buzz is about Apache Spark, the talk shows how Apache Flink offers the only hybrid open source (Real-Time Streaming + Batch) distributed data processing engine supporting many use cases: Real-Time stream processing, machine learning at scale, graph analytics and batch processing.
In these slides, you will find answers to the following questions: What is the Apache Flink stack, and how does it fit into the Big Data ecosystem? How does Apache Flink integrate with Apache Hadoop and other open source tools for data input and output as well as deployment? What is the architecture of Apache Flink? What are the different execution modes of Apache Flink? Why is Apache Flink an alternative to Apache Hadoop MapReduce, Apache Storm and Apache Spark? Who is using Apache Flink? Where can you learn more about Apache Flink?
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ... (Databricks)
Uber has real needs to provide faster, fresher data to data consumers and products, running hundreds of thousands of analytical queries every day. Uber engineers will share the design, architecture and use cases of the second generation of Hudi, a self-contained Apache Spark library for building large-scale analytical datasets designed to serve such needs and beyond. Hudi (formerly Hoodie) was created to effectively manage petabytes of analytical data on distributed storage while supporting fast ingestion and queries. In this talk, we will discuss how we leveraged Spark as a general-purpose distributed execution engine to build Hudi, detailing tradeoffs and operational experience. We will also show how to ingest data into Hudi using the Spark Datasource/Streaming APIs and build notebooks/dashboards on top using Spark SQL.
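As a hedged sketch of what ingestion through the Spark Datasource API can look like (not Uber's code; the record-key, precombine and partition fields below are assumed for illustration):

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("hudi-ingest-sketch")
         .config("spark.serializer",
                 "org.apache.spark.serializer.KryoSerializer")
         .getOrCreate())

trips = spark.read.json("/data/raw/trips/")   # hypothetical input

hudi_options = {
    "hoodie.table.name": "trips",
    "hoodie.datasource.write.recordkey.field": "trip_id",     # assumed schema
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.partitionpath.field": "city",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert the batch into the Hudi table on distributed storage.
(trips.write.format("hudi")
      .options(**hudi_options)
      .mode("append")
      .save("/data/lake/trips"))              # hypothetical table path
```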
Zipline is Airbnb’s data management platform specifically designed for ML use cases. Previously, ML practitioners at Airbnb spent roughly 60% of their time on collecting and writing transformations for machine learning tasks. Zipline reduces this task from months to days – by making the process declarative. It allows data scientists to easily define features in a simple configuration language. The framework then provides access to point-in-time correct features – for both – offline model training and online inference. In this talk we will describe the architecture of our system and the algorithm that makes the problem of efficient point-in-time correct feature generation, tractable.
Attendees will learn:
- The importance of point-in-time correct features for achieving better ML model performance
- The importance of using change data capture for generating feature views
- An algorithm to efficiently generate features over change data. We use interval trees to efficiently compress time series features, and the algorithm generates feature aggregates over this compressed representation (see the sketch after this list).
- A lambda architecture that enables using the above algorithm for online feature generation.
- A framework, based on category theory, for understanding how feature aggregations can be distributed and independently composed.
While the talk is fairly technical, we will introduce all the concepts from first principles with examples. A basic understanding of data-parallel distributed computation and machine learning might help, but is not required.
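As referenced in the list above, here is an illustrative Python toy (not Zipline's actual algorithm): a change stream is compressed into value breakpoints, and a training label looks up the feature value that was in effect at its timestamp.

```python
import bisect
from typing import List, Tuple

class IntervalFeature:
    """Compresses a change stream into (start_ts, value) breakpoints."""

    def __init__(self, changes: List[Tuple[int, float]]):
        # changes: (timestamp, new_value) events from change data capture,
        # assumed sorted by timestamp; consecutive equal values are merged.
        self.starts, self.values = [], []
        for ts, v in changes:
            if not self.values or self.values[-1] != v:
                self.starts.append(ts)
                self.values.append(v)

    def as_of(self, ts: int) -> float:
        """Return the feature value in effect at time ts."""
        i = bisect.bisect_right(self.starts, ts) - 1
        if i < 0:
            raise KeyError("no value recorded before this timestamp")
        return self.values[i]

# Usage: a hypothetical user's trip count changed at these times.
feature = IntervalFeature([(100, 1.0), (250, 2.0), (250, 2.0), (400, 3.0)])
assert feature.as_of(300) == 2.0   # a label at t=300 sees the t=250 value
```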
Putting the Ops in DataOps: Orchestrate the Flow of Data Across Data Pipelines (DATAVERSITY)
With the aid of any number of data management and processing tools, data flows through multiple on-prem and cloud storage locations before it’s delivered to business users. As a result, IT teams — including IT Ops, DataOps, and DevOps — are often overwhelmed by the complexity of creating a reliable data pipeline that includes the automation and observability they require.
The answer to this widespread problem is a centralized data pipeline orchestration solution.
Join Stonebranch's Scott Davis, Global Vice President, and Ravi Murugesan, Sr. Solution Engineer, to learn how DataOps teams orchestrate their end-to-end data pipelines with a platform approach to managing automation.
Key Learnings:
- Discover how to orchestrate data pipelines across a hybrid IT environment (on-prem and cloud)
- Find out how DataOps teams are empowered with event-based triggers for real-time data flow
- See examples of reports, dashboards, and proactive alerts designed to help you reliably keep data flowing through your business — with the observability you require
- Discover how to replace clunky legacy approaches to streaming data in a multi-cloud environment
- See what’s possible with the Stonebranch Universal Automation Center (UAC)
Apache Flink is an open source platform for distributed stream and batch data processing. It provides two APIs - a DataStream API for real-time streaming and a DataSet API for batch processing. The document introduces Flink's core concepts like sources, sinks, transformations, and windows. It also provides instructions on setting up a Flink project and describes some use cases like processing Twitter feeds. Additional resources like tutorials, documentation and mailing lists are referenced to help users get started with Flink.
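A minimal sketch of the source-transformation-sink wiring described above, assuming the apache-flink (PyFlink) package is installed; the collection source stands in for a real stream such as a Twitter feed:

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Source: a bounded collection stands in for a real stream (e.g. a Twitter feed).
lines = env.from_collection(
    ["flink is fast", "flink streams", "batch too"],
    type_info=Types.STRING())

# Transformation: split lines into (word, 1) pairs.
pairs = lines.flat_map(
    lambda line: [(word, 1) for word in line.split()],
    output_type=Types.TUPLE([Types.STRING(), Types.INT()]))

# Sink: print the pairs to stdout.
pairs.print()

env.execute("wordcount-sketch")
```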
Building a Large-Scale Transactional Data Lake Using Apache Hudi (Bill Liu)
Data is critical infrastructure for building machine learning systems. From ensuring accurate ETAs to predicting optimal traffic routes, providing safe, seamless transportation and delivery experiences on the Uber platform requires reliable, performant, large-scale data storage and analysis. In 2016, Uber developed Apache Hudi, an incremental processing framework that powers business-critical data pipelines at low latency and high efficiency, and helps distributed organizations build and manage petabyte-scale data lakes.
In this talk, I will describe what Apache Hudi is and its architectural design, and then take a deep dive into improving data operations with features such as data versioning and time travel.
We will also go over how Hudi brings the kappa architecture to big data systems and enables efficient incremental processing for near-real-time use cases.
Speaker: Satish Kotha (Uber)
Apache Hudi committer and Engineer at Uber. Previously, he worked on building real time distributed storage systems like Twitter MetricsDB and BlobStore.
website: https://www.aicamp.ai/event/eventdetails/W2021043010
Data Con LA 2020
Description
In this session, I introduce the Amazon Redshift lake house architecture which enables you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights. With a lake house architecture, you can store data in open file formats in your Amazon S3 data lake.
Speaker
Antje Barth, Amazon Web Services, Sr. Developer Advocate, AI and Machine Learning
Moving Beyond Lambda Architectures with Apache Kudu (Cloudera, Inc.)
The document discusses the Lambda architecture, its advantages and disadvantages, and how Kudu can serve as an alternative. The Lambda architecture marries batch and real-time processing by using separate batch, speed, and serving layers. While it provides scalability, maintaining two code bases is complex. Kudu can fill the gap by enabling fast analytics on frequently updated data through its ability to support updates, scans and lookups simultaneously. Examples of how Kudu has been used by Xiaomi to simplify their analytics pipeline and reduce latency are provided. The document cautions against premature optimization and advocates optimizing only as needed.
The document summarizes a technical seminar on Hadoop. It discusses Hadoop's history and origin, how it was developed from Google's distributed systems, and how it provides an open-source framework for distributed storage and processing of large datasets. It also summarizes key aspects of Hadoop including HDFS, MapReduce, HBase, Pig, Hive and YARN, and how they address challenges of big data analytics. The seminar provides an overview of Hadoop's architecture and ecosystem and how it can effectively process large datasets measured in petabytes.
Batch Processing at Scale with Flink & Iceberg (Flink Forward)
Flink Forward San Francisco 2022.
Goldman Sachs's Data Lake platform serves as the firm's centralized data platform, ingesting 140K (and growing!) batches per day of Datasets of varying shape and size. Powered by Flink and using metadata configured by platform users, ingestion applications are generated dynamically at runtime to extract, transform, and load data into centralized storage where it is then exported to warehousing solutions such as Sybase IQ, Snowflake, and Amazon Redshift. Data Latency is one of many key considerations as producers and consumers have their own commitments to satisfy. Consumers range from people/systems issuing queries, to applications using engines like Spark, Hive, and Presto to transform data into refined Datasets. Apache Iceberg allows our applications to not only benefit from consistency guarantees important when running on eventually consistent storage like S3, but also allows us the opportunity to improve our batch processing patterns with its scalability-focused features.
by Andreas Hailu
Google BigQuery is a cloud data warehouse and spreadsheet database that allows users to import, store, and query data in various formats like CSV, JSON, and Google Sheets. It provides a sandbox account with 10GB of free storage and 1TB of free queries per month. To use it, users create a BigQuery project, import data into datasets and tables, and then query the data using SQL syntax.
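As a small illustration of that load-then-query flow, here is a hedged sketch using the official google-cloud-bigquery Python client; the project, dataset and file names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # hypothetical project

# Load a CSV file into a table, letting BigQuery detect the schema.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,
    skip_leading_rows=1,
)
with open("sales.csv", "rb") as f:
    client.load_table_from_file(
        f, "my_dataset.sales", job_config=job_config).result()

# Query the data with standard SQL.
query = """
    SELECT region, SUM(amount) AS total
    FROM `my-project.my_dataset.sales`
    GROUP BY region
    ORDER BY total DESC
"""
for row in client.query(query).result():
    print(row.region, row.total)
```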
Slides for the talk at AI in Production meetup:
https://www.meetup.com/LearnDataScience/events/255723555/
Abstract: Demystifying Data Engineering
With recent progress in the fields of big data analytics and machine learning, Data Engineering is an emerging discipline which is not well-defined and often poorly understood.
In this talk, we aim to explain Data Engineering, its role in Data Science, the difference between a Data Scientist and a Data Engineer, the role of a Data Engineer and common concepts as well as commonly misunderstood ones found in Data Engineering. Toward the end of the talk, we will examine a typical Data Analytics system architecture.
Building a Feature Store around Dataframes and Apache Spark (Databricks)
A Feature Store enables machine learning (ML) features to be registered, discovered, and used as part of ML pipelines, thus making it easier to transform and validate the training data that is fed into machine learning systems. Feature stores can also enable consistent engineering of features between training and inference, but to do so, they need a common data processing platform.
Zeus: Uber's Highly Scalable and Distributed Shuffle as a Service (Databricks)
Zeus is an efficient, highly scalable, distributed shuffle-as-a-service that powers all data processing (Spark and Hive) at Uber. Uber runs some of the largest Spark and Hive clusters in the industry on top of YARN, which leads to many issues such as hardware failures (burnt-out disks) and reliability and scalability challenges.
Tame the small files problem and optimize data layout for streaming ingestion... (Flink Forward)
Flink Forward San Francisco 2022.
In modern data platform architectures, stream processing engines such as Apache Flink are used to ingest continuous streams of data into data lakes such as Apache Iceberg. Streaming ingestion into Iceberg tables can suffer from two problems: (1) a small-files problem that can hurt read performance, and (2) poor data clustering that can make file pruning less effective. To address these two problems, we propose adding a shuffling stage to the Flink Iceberg streaming writer. The shuffling stage can intelligently group data via bin packing or range partitioning (a toy sketch of the range-partitioning idea follows below). This can reduce the number of concurrent files that every task writes and can also improve data clustering. In this talk, we will explain the motivations in detail and dive into the design of the shuffling stage. We will also share evaluation results that demonstrate the effectiveness of smart shuffling.
by Gang Ye & Steven Wu
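To make the grouping idea concrete, here is a toy Python sketch (not the actual Flink-Iceberg shuffle) of range-partitioning records so that each key range is written by a single task rather than by all of them:

```python
from collections import defaultdict

def range_partition(records, boundaries):
    """Route each (key, payload) record to the writer that owns its key
    range, so one writer (not every writer) produces files for that range."""
    buckets = defaultdict(list)
    for key, payload in records:
        writer = sum(1 for b in boundaries if key >= b)  # index of owning range
        buckets[writer].append((key, payload))
    return buckets

events = [(3, "a"), (17, "b"), (9, "c"), (42, "d")]
print(range_partition(events, boundaries=[10, 30]))
# -> {0: [(3, 'a'), (9, 'c')], 1: [(17, 'b')], 2: [(42, 'd')]}
```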
The presentation covers the following topics: 1) Hadoop introduction 2) Hadoop nodes and daemons 3) Architecture 4) Hadoop's best features 5) Hadoop characteristics. For further knowledge of Hadoop, refer to the link: http://data-flair.training/blogs/hadoop-tutorial-for-beginners/
Enabling a Data Mesh Architecture with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/3rwWhyv
The Data Mesh architectural design was first proposed in 2019 by Zhamak Dehghani, principal technology consultant at Thoughtworks, a technology company that is closely associated with the development of distributed agile methodology. A data mesh is a distributed, de-centralized data infrastructure in which multiple autonomous domains manage and expose their own data, called “data products,” to the rest of the organization.
Organizations adopt data mesh architecture when they experience shortcomings in highly centralized architectures, such as the lack of domain-specific expertise in data teams, the inflexibility of centralized data repositories in meeting the specific needs of different departments within large organizations, and the slowness of centralized data infrastructures in provisioning data and responding to changes.
In this session, Pablo Alvarez, Global Director of Product Management at Denodo, explains how data virtualization is your best bet for implementing an effective data mesh architecture.
You will learn:
- How data mesh architecture not only enables better performance and agility, but also self-service data access
- The requirements for “data products” in the data mesh world, and how data virtualization supports them
- How data virtualization enables domains in a data mesh to be truly autonomous
- Why a data lake is not automatically a data mesh
- How to implement a simple, functional data mesh architecture using data virtualization
Model serving made easy using Kedro pipelines - Mariusz Strzelecki (GetInData)
If you want to stay up to date, subscribe to our newsletter here: https://bit.ly/3tiw1I8
Presentation from the talk given by Mariusz during the Data Science Summit ML Edition.
Author: Mariusz Strzelecki
Linkedin: https://www.linkedin.com/in/mariusz-strzelecki/
___
Company:
GetInData is a company founded in 2014 by ex-Spotify data engineers. From day one, our focus has been on Big Data projects. We bring together a group of the best and most experienced experts in Poland, working with cloud and open source Big Data technologies to help companies build scalable data architectures and implement advanced analytics over large data sets.
Our experts have vast production experience implementing Big Data projects for Polish as well as foreign companies, including Spotify, Play, Truecaller, Kcell, Acast, Allegro, ING, Agora, Synerise, StepStone, iZettle and many others from the pharmaceutical, media, finance and FMCG industries.
https://getindata.com
SF Big Analytics 20190612: Building highly efficient data lakes using Apache ... (Chester Chen)
Building highly efficient data lakes using Apache Hudi (Incubating)
Even with the exponential growth in data volumes, ingesting, storing and managing big data remains unstandardized and inefficient. Data lakes are a common architectural pattern for organizing big data and democratizing access across the organization. In this talk, we will discuss different aspects of building honest data lake architectures, pinpointing technical challenges and areas of inefficiency. We will then re-architect the data lake using Apache Hudi (Incubating), which provides streaming primitives right on top of big data. We will show how the upserts and incremental change streams provided by Hudi help optimize data ingestion and ETL processing (a sketch of an incremental read follows below). Further, Apache Hudi manages growth and file sizes of the resulting data lake using purely open source file formats, while also providing optimized query performance and file system listing. We will also provide hands-on tools and guides for trying this out on your own data lake.
Speaker: Vinoth Chandar (Uber)
Vinoth is Technical Lead at Uber Data Infrastructure Team
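As a hedged sketch of the incremental change streams mentioned in the abstract above (not the talk's own code; the instant time and table path are hypothetical), a Spark job can read only the records committed after a given instant instead of rescanning the whole table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-incremental-sketch").getOrCreate()

# Pull only the changes committed after the given Hudi instant time.
incremental = (spark.read.format("hudi")
               .option("hoodie.datasource.query.type", "incremental")
               .option("hoodie.datasource.read.begin.instanttime",
                       "20190612000000")          # hypothetical commit instant
               .load("/data/lake/trips"))          # hypothetical table path

# Downstream ETL now processes only what changed since that commit.
incremental.createOrReplaceTempView("trips_delta")
spark.sql("SELECT count(*) FROM trips_delta").show()
```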
Hadoop Training | Hadoop Training For Beginners | Hadoop Architecture | Hadoo... (Simplilearn)
The document provides information about Hadoop training. It discusses the need for Hadoop in today's data-heavy world. It then describes what Hadoop is, its ecosystem including HDFS for storage and MapReduce for processing. It also discusses YARN and provides a bank use case. It further explains the architecture and working of HDFS and MapReduce in processing large datasets in parallel across clusters.
Unified Batch and Real-Time Stream Processing Using Apache Flink (Slim Baltagi)
This talk was given at Capital One on September 15, 2015 at the launch of the Washington DC Area Apache Flink Meetup. Apache Flink is positioned at the forefront of 2 major trends in Big Data Analytics:
- Unification of Batch and Stream processing
- Multi-purpose Big Data Analytics frameworks
In these slides, you will find answers to the burning question: Why Apache Flink? You will also learn how Apache Flink compares to Hadoop MapReduce, Apache Spark and Apache Storm.
Near real-time statistical modeling and anomaly detection using Flink! (Flink Forward)
Flink Forward San Francisco 2022.
At ThousandEyes, we receive billions of events every day that allow us to monitor the internet; the most important aspect of our platform is detecting outages and anomalies that have the potential to cause serious impact to customer applications and user experience. Automatic detection of such events at the lowest latency and highest accuracy is extremely important for our customers and their business. After launching several resilient, low-latency data pipelines in production using Flink, we decided to take it up a notch: we leveraged Flink to build statistical models in near real time and apply them to the incoming stream of events to detect anomalies! In this session we will deep dive into the design and discuss pitfalls and learnings from developing our real-time platform, which leverages Debezium, Kafka, Flink, ElastiCache and DynamoDB to process events at scale! A toy sketch of the online-scoring idea follows below.
by Kunal Umrigar & Balint Kurnasz
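As a toy illustration of the general idea (not the ThousandEyes implementation), a streaming statistical model can be kept fresh online and each incoming event scored against it:

```python
import math

class StreamingZScore:
    """Online mean/variance (Welford's algorithm) with a z-score threshold."""

    def __init__(self, threshold=3.0, warmup=30):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold, self.warmup = threshold, warmup

    def update_and_score(self, x: float) -> bool:
        anomalous = False
        if self.n >= self.warmup:               # score only after warm-up
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        # Fold the new observation into the running statistics.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = StreamingZScore()
latencies = [50.0 + (i % 5) for i in range(100)] + [500.0]  # a sudden spike
flags = [detector.update_and_score(x) for x in latencies]
assert flags[-1] and not any(flags[:-1])        # only the spike is flagged
```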
This presentation contains the following slides:
Introduction To OLAP
Data Warehousing Architecture
The OLAP Cube
OLTP vs. OLAP
Types Of OLAP
ROLAP vs. MOLAP
Benefits Of OLAP
Introduction - Apache Kylin
Kylin - Architecture
Kylin - Advantages and Limitations
Introduction - Druid
Druid - Architecture
Druid vs. Apache Kylin
References
For any queries, contact us: argonauts007@gmail.com
Data product thinking: Will the Data Mesh save us from analytics history? (Rogier Werschkull)
Data Mesh: What is it? Who is it for, and who is it definitely not for?
What are its foundational principles, and how could we take some of them into our current data analytical architectures?
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap with those of others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It's a complicated story that I will try to simplify, giving blunt opinions on when to use which products and the pros/cons of each.
Flink vs. Spark: this is the slide deck of my talk at the 2015 Flink Forward conference in Berlin, Germany, on October 12, 2015. In this talk, we compared Apache Flink and Apache Spark with a focus on real-time stream processing. Your feedback and comments are much appreciated.
This document discusses Pinot, Uber's real-time analytics platform. It provides an overview of Pinot's architecture and data ingestion process, describes a case study on modeling trip data in Pinot, and benchmarks Pinot's performance on ingesting large volumes of data and answering queries in real-time.
Step-by-Step Introduction to Apache Flink (Slim Baltagi)
This is a talk that I gave at the 2nd Apache Flink meetup in the Washington DC area, hosted and sponsored by Capital One, on November 19, 2015. You will quickly learn, in a step-by-step way:
1. How to set up and configure your Apache Flink environment
2. How to use the Apache Flink tools
3. How to run the examples in the Apache Flink bundle
4. How to set up your IDE (IntelliJ IDEA or Eclipse) for Apache Flink
5. How to write your Apache Flink program in an IDE
Aljoscha Krettek - Portable stateful big data processing in Apache Beam (Ververica)
Apache Beam's new State API brings scalability and consistency to fine-grained stateful processing while remaining portable to any Beam runner. Aljoscha Krettek introduces the new state and timer features in Beam and shows how to use them to express common real-world use cases in a backend-agnostic manner.
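As a minimal sketch of what the state and timer APIs look like in Beam's Python SDK (the element schema and the 60-second flush interval are illustrative), a stateful DoFn can accumulate a per-key count and emit it when an event-time timer fires:

```python
import apache_beam as beam
from apache_beam.transforms.timeutil import TimeDomain
from apache_beam.transforms.userstate import (
    CombiningValueStateSpec, TimerSpec, on_timer)

class CountPerKey(beam.DoFn):
    COUNT = CombiningValueStateSpec('count', sum)       # per-key running count
    FLUSH = TimerSpec('flush', TimeDomain.WATERMARK)    # event-time timer

    def process(self, element,
                count=beam.DoFn.StateParam(COUNT),
                flush=beam.DoFn.TimerParam(FLUSH),
                ts=beam.DoFn.TimestampParam):
        _key, _value = element          # input must be (key, value) pairs
        count.add(1)
        flush.set(ts + 60)              # flush 60s (event time) after this element

    @on_timer(FLUSH)
    def flush_count(self,
                    key=beam.DoFn.KeyParam,
                    count=beam.DoFn.StateParam(COUNT)):
        yield key, count.read()         # emit the total, then reset the state
        count.clear()

# Usage: p | beam.Create([('user1', 'click'), ('user1', 'view')])
#          | beam.ParDo(CountPerKey())
```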
Hadoop or Spark: is it an either-or proposition? (Slim Baltagi)
Hadoop or Spark: is it an either-or proposition? An exodus away from Hadoop to Spark is picking up steam in the news headlines and talks! Away from marketing fluff and politics, this talk analyzes such news and claims from a technical perspective.
In practical ways, while referring to components and tools from both Hadoop and Spark ecosystems, this talk will show that the relationship between Hadoop and Spark is not of an either-or type but can take different forms such as: evolution, transition, integration, alternation and complementarity.
Fundamentals of Stream Processing with Apache Beam, Tyler Akidau, Frances Perry (Confluent)
Apache Beam (unified Batch and strEAM processing!) is a new Apache incubator project. Originally based on years of experience developing Big Data infrastructure within Google (such as MapReduce, FlumeJava, and MillWheel), it has now been donated to the OSS community at large.
Come learn about the fundamentals of out-of-order stream processing, and how Beam’s powerful tools for reasoning about time greatly simplify this complex task. Beam provides a model that allows developers to focus on the four important questions that must be answered by any stream processing pipeline:
What results are being calculated?
Where in event time are they calculated?
When in processing time are they materialized?
How do refinements of results relate?
Furthermore, by cleanly separating these questions from runtime characteristics, Beam programs become portable across multiple runtime environments, both proprietary (e.g., Google Cloud Dataflow) and open-source (e.g., Flink, Spark, et al).
Apache Beam is a unified programming model for batch and streaming data processing. It defines concepts for describing what computations to perform (the transformations), where the data is located in time (windowing), when to emit results (triggering), and how to accumulate results over time (accumulation mode). Beam aims to provide portable pipelines across multiple execution engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow. The talk will cover the key concepts of the Beam model and how it provides unified, efficient, and portable data processing pipelines.
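The four questions map directly onto Beam's Python primitives; a minimal sketch (the data and the window/trigger parameters are illustrative, and a real pipeline would get event timestamps from its source) could look like this:

```python
import apache_beam as beam
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterProcessingTime, AfterWatermark)
from apache_beam.transforms.window import FixedWindows

with beam.Pipeline() as p:
    (p
     | beam.Create([('team_a', 3), ('team_a', 5), ('team_b', 7)])
     | beam.WindowInto(                       # Where: 60s fixed event-time windows
         FixedWindows(60),
         trigger=AfterWatermark(              # When: at the watermark, plus
             early=AfterProcessingTime(10)),  #       early firings every 10s
         accumulation_mode=AccumulationMode.ACCUMULATING)  # How: panes accumulate
     | beam.CombinePerKey(sum)                # What: a per-key sum
     | beam.Map(print))
```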
This talk given at the Hadoop Summit in San Jose on June 28, 2016, analyzes a few major trends in Big Data analytics.
These are a few takeaways from this talk:
- Adopt Apache Beam for easier development and portability between Big Data Execution Engines.
- Adopt stream analytics for faster time to insight, competitive advantages and operational efficiency.
- Accelerate your Big Data applications with In-Memory open source tools.
- Adopt Rapid Application Development of Big Data applications: APIs, Notebooks, GUIs, Microservices…
- Make Machine Learning part of your strategy or passively watch your industry be completely transformed!
- Advance your strategy for hybrid integration between cloud and on-premise deployments.
Apache Kafka evolved from an enterprise messaging system into a fully distributed streaming data platform (Kafka Core + Kafka Connect + Kafka Streams) for building streaming data pipelines and streaming data applications.
This talk, which I gave at the Chicago Java Users Group (CJUG) on June 8, 2017, focuses mainly on Kafka Streams, a lightweight open source Java library for building stream processing applications on top of Kafka, using Kafka topics as input/output.
You will learn more about the following:
1. Apache Kafka: a Streaming Data Platform
2. Overview of Kafka Streams: Before Kafka Streams? What is Kafka Streams? Why Kafka Streams? What are Kafka Streams' key concepts? Kafka Streams APIs and code examples (a minimal sketch follows this outline)
3. Writing, deploying and running your first Kafka Streams application
4. Code and Demo of an end-to-end Kafka-based Streaming Data Application
5. Where to go from here?
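Kafka Streams itself is a Java library, so as a language-neutral illustration of the topic-in/topic-out pattern it abstracts, here is a hedged Python sketch of the consume-transform-produce loop using the kafka-python package (topic names are hypothetical):

```python
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "text-input",                        # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: b.decode("utf-8"),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda s: s.encode("utf-8"),
)

# Consume-transform-produce: uppercase every record and write it back out.
# Kafka Streams would express this as stream.mapValues(...).to("text-output").
for record in consumer:
    producer.send("text-output", record.value.upper())
```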
Building Streaming Data Applications Using Apache Kafka (Slim Baltagi)
Apache Kafka evolved from an enterprise messaging system to a fully distributed streaming data platform for building real-time streaming data pipelines and streaming data applications without the need for other tools/clusters for data ingestion, storage and stream processing.
In this talk you will learn more about:
1. A quick introduction to Kafka Core, Kafka Connect and Kafka Streams: What is and why?
2. Code and step-by-step instructions to build an end-to-end streaming data application using Apache Kafka
Apache Flink: Real-World Use Cases for Streaming Analytics (Slim Baltagi)
This face-to-face talk about Apache Flink in Sao Paulo, Brazil is the first event of its kind in Latin America! It explains how Apache Flink 1.0, announced on March 8th, 2016 by the Apache Software Foundation, marks a new era of Big Data analytics, and in particular real-time streaming analytics. The talk maps Flink's capabilities to real-world use cases that span multiple verticals such as Financial Services, Healthcare, Advertisement, Oil and Gas, Retail and Telecommunications.
In this talk, you will learn more about:
1. What is Apache Flink Stack?
2. Batch vs. Streaming Analytics
3. Key Differentiators of Apache Flink for Streaming Analytics
4. Real-World Use Cases with Flink for Streaming Analytics
5. Who is using Flink?
6. Where do you go from here?
Okkam is an Italian SME specializing in large-scale data integration using semantic technologies. It provides services for public administration and restaurants by building and managing very large entity-centric knowledge bases. Okkam uses Apache Flink as its data processing framework for tasks like domain reasoning, managing the RDF data lifecycle, detecting duplicate records, entity record linkage, and telemetry analysis by combining Flink with technologies like Parquet, Jena, Sesame, ELKiBi, HBase, Solr, MongoDB, and Weka. The presenters work at Okkam and will discuss their use of Flink in more detail in their session.
This document discusses streaming and parallel decision trees in Flink. It motivates the need for a classifier system that can learn from streaming data and classify both the streaming training data and new streaming data. It describes the architecture of keeping the classifier model fresh as new data streams in, allowing classification during the learning process in real-time. It also outlines decision tree algorithms and their implementation using Flink streaming.
Capital One is a large consumer and commercial bank that wanted to improve its real-time monitoring of customer activity data to detect and resolve issues quickly. Its legacy solution was expensive, proprietary, and lacked real-time and advanced analytics capabilities. Capital One implemented a new solution using Apache Flink for its real-time stream processing abilities. Flink provided cost-effective, real-time event processing and advanced analytics on data streams to help meet Capital One's goals. It also aligned with the company's technology strategy of using open source solutions.
Pinot is a realtime distributed OLAP datastore, which is used at LinkedIn to deliver scalable real time analytics with low latency. It can ingest data from offline data sources (such as Hadoop and flat files) as well as online sources (such as Kafka). Pinot is designed to scale horizontally.
Why Apache Flink is the 4G of Big Data Analytics Frameworks (Slim Baltagi)
This document provides an overview and agenda for a presentation on Apache Flink. It begins with an introduction to Apache Flink and how it fits into the big data ecosystem. It then explains why Flink is considered the "4th generation" of big data analytics frameworks. Finally, it outlines next steps for those interested in Flink, such as learning more or contributing to the project. The presentation covers topics such as Flink's APIs, libraries, architecture, programming model and integration with other tools.
Making Great User Experiences, Pittsburgh Scrum MeetUp, Oct 17, 2017 (Carol Smith)
Everything is designed, yet some interactions are much better than others. What does it take to make a great experience? What are the areas that UX specialists focus on? How do skills in cognitive psychology, computer science and design come together? Carol introduces basic concepts in user experience design that you can use to improve the user's experience and/or clearly communicate with designers.
Overview of Apache Flink: The 4G of Big Data Analytics Frameworks (Slim Baltagi)
Slides of my talk at the Hadoop Summit Europe in Dublin, Ireland on April 13th, 2016. The talk introduces Apache Flink as both a multi-purpose Big Data analytics framework and real-world streaming analytics framework. It is focusing on Flink's key differentiators and suitability for streaming analytics use cases. It also shows how Flink enables novel use cases such as distributed CEP (Complex Event Processing) and querying the state by behaving like a key value data store.
Overview of Apache Flink: The 4G of Big Data Analytics Frameworks (Slim Baltagi)
This document provides an overview of Apache Flink and discusses why it is suitable for real-world streaming analytics. The document contains an agenda that covers how Flink is a multi-purpose big data analytics framework, why streaming analytics are emerging, why Flink is suitable for real-world streaming analytics, novel use cases enabled by Flink, who is using Flink, and where to go from here. Key points include Flink innovations like custom memory management, its DataSet API, rich windowing semantics, and native iterative processing. Flink's streaming features that make it suitable for real-world use include its pipelined processing engine, stream abstraction, performance, windowing support, fault tolerance, and integration with Hadoop.
Big Data is a recent phenomenon. Everyone talks about it, but do you really know what Big Data is? Join our four-part series about Big Data and you will get answers to your questions!
We will cover an introduction to Big Data and the available platforms we can use to deal with it. In the end, we will give you an insight into the possible future of dealing with Big Data.
Spark, Flink, Presto and many others: this is just a sample of the frameworks used in real companies, and we will talk about some of them.
In the previous episode of this Big Data series, we talked about the basic information concerning Big Data. This presentation, however, will be much more technical as we will be covering the most popular platforms you can use to deal with Big Data 2.0 Systems and learn about the key differences between these platforms. Let’s go!
#CHEDTEB
www.chedteb.eu
Present and future of unified, portable, and efficient data processing with A... (DataWorks Summit)
The world of big data involves an ever-changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. In a way, Apache Beam is a glue that can connect the big data ecosystem together; it enables users to "run any data processing pipeline anywhere."
This talk will briefly cover the capabilities of the Beam model for data processing and discuss its architecture, including the portability model. We’ll focus on the present state of the community and the current status of the Beam ecosystem. We’ll cover the state of the art in data processing and discuss where Beam is going next, including completion of the portability framework and the Streaming SQL. Finally, we’ll discuss areas of improvement and how anybody can join us on the path of creating the glue that interconnects the big data ecosystem.
Speaker
Davor Bonaci, Apache Software Foundation; Simbly, V.P. of Apache Beam; Founder/CEO at Operiant
Slim Baltagi, director of Enterprise Architecture at Capital One, gave a presentation at Hadoop Summit on major trends in big data analytics. He discussed 1) increasing portability between execution engines using Apache Beam, 2) the emergence of stream analytics driven by data streams, technology advances, business needs and consumer demands, 3) the growth of in-memory analytics using tools like Alluxio and RocksDB, 4) rapid application development using APIs, notebooks, GUIs and microservices, 5) open sourcing of machine learning systems by tech giants, and 6) hybrid cloud computing models for deploying big data applications both on-premise and in the cloud.
Rise of Intermediate APIs - Beam and Alluxio at Alluxio Meetup 2016Alluxio, Inc.
This document discusses the rise of intermediary APIs like Apache Beam and Alluxio that allow users to write data processing jobs and express storage lifecycles independently of physical constraints. Intermediary APIs provide portability across frameworks and unified access to multiple storage systems. Alluxio in particular provides an in-memory filesystem that can cache data from various storage sources, while Beam allows processing jobs to run on different execution engines. These intermediary APIs create a path for easy technology adoption and focus on features over connectivity.
Aljoscha Krettek offers a very short introduction to stream processing before diving into writing code and demonstrating the features in Apache Flink that make truly robust stream processing possible, with a focus on correctness and robustness in stream processing.
All of this will be done in the context of a real-time analytics application that we’ll be modifying on the fly based on the topics we’re working though, as Aljoscha exercises Flink’s unique features, demonstrates fault recovery, clearly explains why event time is such an important concept in robust, stateful stream processing, and covers the features you need in a stream processor to do robust, stateful stream processing in production.
We’ll also use a real-time analytics dashboard to visualize the results we’re computing in real time, allowing us to easily see the effects of the code we’re developing as we go along.
Topics include:
* Apache Flink
* Stateful stream processing
* Event time versus processing time
* Fault tolerance
* State management in the face of faults
* Savepoints
* Data reprocessing
Build Deep Learning Applications for Big Data Platforms (CVPR 2018 tutorial)Jason Dai
This document outlines an agenda for a talk on building deep learning applications on big data platforms using Analytics Zoo. The agenda covers motivations around trends in big data, deep learning frameworks on Apache Spark like BigDL and TensorFlowOnSpark, an introduction to Analytics Zoo and its high-level pipeline APIs, built-in models, and reference use cases. It also covers distributed training in BigDL, advanced applications, and real-world use cases of deep learning on big data at companies like JD.com and World Bank. The talk concludes with a question and answer session.
Transitioning Compute Models: Hadoop MapReduce to SparkSlim Baltagi
This presentation is an analysis of the observed trends in the transition from the Hadoop ecosystem to the Spark ecosystem. The related talk took place at the Chicago Hadoop User Group (CHUG) meetup held on February 12, 2015.
The document discusses Hadoop and big data technologies. It begins with an introduction to big data concepts and the various Hadoop components like HDFS, MapReduce, YARN, Hive, Pig and Mahout. It then explains how big data is different from traditional data warehousing through the concept of schema-on-read. Finally, it provides recommendations on tools for working with big data technologies locally and in the cloud, as well as sources of inspiration like sandbox environments, Apache projects and GitHub.
Spark is an open-source cluster computing framework that can run analytics applications much faster than Hadoop by keeping data in memory rather than on disk. While Spark can access Hadoop's HDFS storage system and is often used as a replacement for Hadoop's MapReduce, Hadoop remains useful for batch processing and Spark is not expected to fully replace it. Spark provides speed, ease of use, and integration of SQL, streaming, and machine learning through its APIs in multiple languages.
In the past, emerging technologies took years to mature. In the case of big data, while effective tools are still emerging, the analytics requirements are changing rapidly, forcing businesses to either adapt or be left behind.
ApacheCon 2021 Apache Deep Learning 302Timothy Spann
ApacheCon 2021 Apache Deep Learning 302
Tuesday 18:00 UTC
Apache Deep Learning 302
Timothy Spann
This talk will discuss and show examples of using Apache Hadoop, Apache Kudu, Apache Flink, Apache Hive, Apache MXNet, Apache OpenNLP, Apache NiFi and Apache Spark for deep learning applications. This is the follow-up to previous talks on Apache Deep Learning 101, 201 and 301 at ApacheCon, Dataworks Summit, Strata and other events. As part of this talk, the presenter will walk through using Apache MXNet pre-built models, integrating new open source deep learning libraries with Python and Java, as well as running real-time AI streams from edge devices to servers utilizing Apache NiFi and Apache NiFi - MiNiFi. This talk is geared towards data engineers interested in the basics of architecting deep learning pipelines with open source Apache tools in a Big Data environment. The presenter will also walk through source code examples available on GitHub and run the code live on Apache NiFi and Apache Flink clusters.
Tim Spann is a Developer Advocate @ StreamNative where he works with Apache NiFi, Apache Pulsar, Apache Flink, Apache MXNet, TensorFlow, Apache Spark, big data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Principal Field Engineer at Cloudera, a senior solutions architect at AirisData and a senior field engineer at Pivotal. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton on big data, the IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as IoT Fusion, Strata, ApacheCon, Data Works Summit Berlin, DataWorks Summit Sydney, and Oracle Code NYC. He holds a BS and MS in computer science.
* https://github.com/tspannhw/ApacheDeepLearning302/
* https://github.com/tspannhw/nifi-djl-processor
* https://github.com/tspannhw/nifi-djlsentimentanalysis-processor
* https://github.com/tspannhw/nifi-djlqa-processor
* https://www.linkedin.com/pulse/2021-schedule-tim-spann/
The document discusses Hopsworks, an open source platform for self-service Apache Spark, Flink, Kafka, and Hadoop clusters that addresses issues with usability, security, and operations that have hindered Hadoop's adoption. Hopsworks provides a web-based interface and uses a distributed database to store metadata externally for improved scalability. It introduces new abstractions like projects and datasets to simplify cluster management and data sharing.
Tiny Batches, in the wine: Shiny New Bits in Spark StreamingPaco Nathan
London Spark Meetup 2014-11-11 @Skimlinks
http://www.meetup.com/Spark-London/events/217362972/
To paraphrase the immortal crooner Don Ho: "Tiny Batches, in the wine, make me happy, make me feel fine." http://youtu.be/mlCiDEXuxxA
Apache Spark provides support for streaming use cases, such as real-time analytics on log files, by leveraging a model called discretized streams (D-Streams). These "micro batch" computations operate on small time intervals, generally from 500 milliseconds up. One major innovation of Spark Streaming is that it leverages a unified engine. In other words, the same business logic can be used across multiple use cases: streaming, but also interactive, iterative, machine learning, etc.
This talk will compare case studies for production deployments of Spark Streaming, emerging design patterns for integration with popular complementary OSS frameworks, plus some of the more advanced features such as approximation algorithms, and take a look at what's ahead — including the new Python support for Spark Streaming that will be in the upcoming 1.2 release.
Also, let's chat a bit about the new Databricks + O'Reilly developer certification for Apache Spark…
Realizing the promise of portability with Apache BeamJ On The Beach
The world of big data involves an ever changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam (incubating) aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms.
In this talk, I will:
Cover briefly the capabilities of the Beam model for data processing and integration with IOs, as well as the current state of the Beam ecosystem.
Discuss the benefits Beam provides regarding portability and ease-of-use.
Demo the same Beam pipeline running on multiple runners in multiple deployment scenarios (e.g. Apache Flink on Google Cloud, Apache Spark on AWS, Apache Apex on-premise).
Give a glimpse at some of the challenges Beam aims to address in the future.
How to select a modern data warehouse and get the most out of it?Slim Baltagi
In the first part of this talk, we will give a setup and definition of modern cloud data warehouses as well as outline problems with legacy and on-premise data warehouses.
We will speak to selecting, technically justifying, and practically using modern data warehouses, including criteria for how to pick a cloud data warehouse and where to start, how to use it in an optimum way and use it cost effectively.
In the second part of this talk, we discuss the challenges and where people are not getting a return on their investment. In this business-focused track, we cover how to get business engagement, identifying the business cases/use cases, and how to leverage data as a service and consumption models.
In this presentation, we:
1. Look at the challenges and opportunities of the data era
2. Look at key challenges of legacy data warehouses such as data diversity, complexity, cost, scalability, performance, management, ...
3. Look at how modern data warehouses in the cloud not only overcome most of these challenges but also how some of them bring additional technical innovations and capabilities such as pay as you go cloud-based services, decoupling of storage and compute, scaling up or down, effortless management, native support of semi-structured data ...
4. Show how capabilities brought by modern data warehouses in the cloud, help businesses, either new or existing ones, during the phases of their lifecycle such as launch, growth, maturity and renewal/decline.
5. Share a Near-Real-Time Data Warehousing use case built on Snowflake and give a live demo to showcase ease of use, fast provisioning, continuous data ingestion, support of JSON data ...
Modern big data and machine learning in the era of cloud, docker and kubernetesSlim Baltagi
There is a major shift in web and mobile application architecture from the ‘old-school’ one to a modern ‘micro-services’ architecture based on containers. Kubernetes has been quite successful in managing those containers and running them in distributed computing environments.
Now enabling Big Data and Machine Learning on Kubernetes will allow IT organizations to standardize on the same Kubernetes infrastructure. This will propel adoption and reduce costs.
Kubeflow is an open source framework dedicated to making it easy to use the machine learning tool of your choice and deploy your ML applications at scale on Kubernetes. Kubeflow is becoming an industry standard as well!
Both Kubernetes and Kubeflow will enable IT organizations to focus more effort on applications rather than infrastructure.
Apache Kafka vs RabbitMQ: Fit For Purpose / Decision TreeSlim Baltagi
Kafka as a streaming data platform is becoming the successor to traditional messaging systems such as RabbitMQ. Nevertheless, there are still some use cases where the latter could be a good fit. This single slide tries to answer, in a concise and unbiased way, where to use Apache Kafka and where to use RabbitMQ. Your comments and feedback are much appreciated.
Apache Flink 1.0: A New Era for Real-World Streaming AnalyticsSlim Baltagi
These are the slides of my talk at the Chicago Apache Flink Meetup on April 19, 2016. This talk explains how Apache Flink 1.0 announced on March 8th, 2016 by the Apache Software Foundation, marks a new era of Real-Time and Real-World streaming analytics. The talk will map Flink's capabilities to streaming analytics use cases.
Apache Flink Crash Course by Slim Baltagi and Srini PalthepuSlim Baltagi
In this hands-on Apache Flink presentation, you will learn in a step-by-step tutorial style about:
• How to setup and configure your Apache Flink environment: Local/VM image (on a single machine), cluster (standalone), YARN, cloud (Google Compute Engine, Amazon EMR, ... )?
• How to get familiar with Flink tools (Command-Line Interface, Web Client, JobManager Web Interface, Interactive Scala Shell, Zeppelin notebook)?
• How to run some Apache Flink example programs?
• How to get familiar with Flink's APIs and libraries?
• How to write your Apache Flink code in the IDE (IntelliJ IDEA or Eclipse)?
• How to test and debug your Apache Flink code?
• How to deploy your Apache Flink code in local, in a cluster or in the cloud?
• How to tune your Apache Flink application (CPU, Memory, I/O)?
Big Data at CME Group: Challenges and Opportunities Slim Baltagi
Presentation given on September 18, 2012 at the 'Hadoop in Finance Day' conference held in Chicago and organized by Fountainhead Lab at Microsoft's offices.
A Big Data Journey: Bringing Open Source to FinanceSlim Baltagi
Slim Baltagi & Rick Fath. Closing Keynote: Big Data Executive Summit. Chicago 11/28/2012.
PART I – Hadoop at CME: Our Practical Experience
1. What’s CME Group Inc.?
2. Big Data & CME Group: a natural fit!
3. Drivers for Hadoop adoption at CME Group
4. Key Big Data projects at CME Group
5. Key Learning’s
PART II - Bringing Hadoop to the Enterprise:
Challenges & Opportunities
PART II - Bringing Hadoop to the Enterprise
1. What is Hadoop, what it isn’t and what it can help you do?
2. What are the operational concerns and risks?
3. What organizational changes to expect?
4. What are the observed Hadoop trends?
Predictably Improve Your B2B Tech Company's Performance by Leveraging DataKiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% of Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Build applications with generative AI on Google CloudMárton Kodok
We will explore Vertex AI Model Garden powered experiences and learn more about the integration of these generative AI APIs. We will see in action what the Gemini family of generative models offers developers for building and deploying AI-driven applications. Vertex AI includes a suite of foundation models, referred to as the PaLM and Gemini families of generative AI models, which come in different versions. We will cover how to use the API to: execute prompts in text and chat; cover multimodal use cases with image prompts; fine-tune and distill to improve knowledge domains; and run function calls with foundation models to optimize them for specific tasks. At the end of the session, developers will understand how to innovate with generative AI and develop apps following generative AI industry trends.
"Financial Odyssey: Navigating Past Performance Through Diverse Analytical Lens"sameer shah
Embark on a captivating financial journey with 'Financial Odyssey,' our hackathon project. Delve deep into the past performance of two companies as we employ an array of financial statement analysis techniques. From ratio analysis to trend analysis, uncover insights crucial for informed decision-making in the dynamic world of finance.
End-to-end pipeline agility - Berlin Buzzwords 2024Lars Albertsson
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
1. Apache Flink: What,
How, Why, Who, Where?
By @SlimBaltagi
Director of Big Data Engineering
Capital One
1
New York City (NYC) Apache Flink Meetup
Civic Hall, NYC
February 2nd, 2016
2. Agenda
I. What is Apache Flink stack and how it fits
into the Big Data ecosystem?
II. How Apache Flink integrates with Hadoop
and other open source tools?
III. Why Apache Flink is an alternative to
Apache Hadoop MapReduce, Apache Storm
and Apache Spark?
IV. Who is using Apache Flink?
V. Where to learn more about Apache Flink?
2
3. I. What is Apache Flink stack and how it
fits into the Big Data ecosystem?
1. What is Apache Flink?
2. What is Flink Execution Engine?
3. What are Flink APIs?
4. What are Flink Domain Specific Libraries?
5. What is Flink Architecture?
6. What is Flink Programming Model?
7. What are Flink tools?
3
4. 1. What is Apache Flink?
1.1 Apache project with a cool logo!
1.2 Project that evolved the concept of a multi-
purpose Big Data analytics framework
1.3 Project with a unique vision and philosophy
1.4 Only Hybrid ( Real-Time streaming + Batch)
engine supporting many use cases
1.5 Major contributor to the movement of
unification of streaming and batch
1.6 The 4G of Big Data Analytics frameworks
4
5. 1.1 Apache project with a cool logo!
Apache Flink, like Apache Hadoop and
Apache Spark, is a community-driven open source
framework for distributed Big Data Analytics.
Apache Flink has its origins in a research project
called Stratosphere, the idea for which was conceived in
late 2008 by Professor Volker Markl of the
Technische Universität Berlin in Germany.
Flink joined the Apache incubator in April 2014 and
graduated as an Apache Top Level Project (TLP) in
December 2014.
dataArtisans (data-artisans.com) is a German start-up
company based in Berlin and is leading the
development of Apache Flink. 5
6. 1.1 Apache project with a cool logo
The squirrel is an animal! This reflects the harmony with other animals in the Hadoop ecosystem (zoo): elephant, pig, python, camel, …
A squirrel is swift and agile: this reflects the meaning of the word 'Flink', German for "nimble, swift, speedy".
Red color of the body: this reflects the roots of the project at German universities, in harmony with the red squirrels in Germany.
Colorful tail: this reflects an open source project, as the colors match the ones of the feather symbolizing the Apache Software Foundation.
7. 1.2 Project that evolved the concept of a multi-
purpose Big Data analytics framework
7
What is a typical Big Data Analytics Stack: Hadoop, Spark, Flink, …?
8. 1.2 Project that evolved the concept of a multi-
purpose Big Data analytics framework
Apache Flink, written in Java and Scala, consists of:
1. Big data processing engine: distributed and
scalable streaming dataflow engine
2. Several APIs in Java/Scala/Python:
• DataSet API – Batch processing
• DataStream API – Real-Time streaming analytics
3. Domain-Specific Libraries:
• FlinkML: Machine Learning Library for Flink
• Gelly: Graph Library for Flink
• Table: Relational Queries
• FlinkCEP: Complex Event Processing for Flink
8
10. 1.3 Project with a unique vision and philosophy
Apache Flink’s original vision was getting the best from both worlds: MPP Database Technology and Hadoop MapReduce Technology.
Draws on concepts from MPP Database Technology:
• Declarativity
• Query optimization
• Efficient parallel in-memory and out-of-core algorithms
Draws on concepts from Hadoop MapReduce Technology:
• Massive scale-out
• User Defined Functions
• Complex data types
• Schema on read
Adds:
• Real-Time Streaming
• Iterations
• Memory Management
• Advanced Dataflows
• General APIs
11. 1.3 Project with a unique vision and philosophy
All streaming all the time: execute everything as
streams including batch!!
Write like a programming language, execute like a
database.
Alleviate the user from a lot of the pain of:
• manually tuning memory assignment to
intermediate operators
• dealing with physical execution concepts (e.g.,
choosing between broadcast and partitioned joins,
reusing partitions).
11
12. 1.3 Project with a unique vision and philosophy
Little configuration required
• Requires no memory thresholds to configure – Flink
manages its own memory
• Requires no complicated network configurations –
Pipelining engine requires much less memory for data
exchange
• Requires no serializers to be configured – Flink
handles its own type extraction and data
representation
Little tuning required: programs can be adjusted to data automatically, as Flink’s optimizer can choose execution strategies automatically
12
13. 1.3 Project with a unique vision and philosophy
Support for many file systems:
• Flink is File System agnostic. BYOS: Bring Your
Own Storage
Support for many deployment options:
• Flink is agnostic to the underlying cluster
infrastructure. BYOC: Bring Your Own Cluster
Be a good citizen of the Hadoop ecosystem
• Good integration with YARN
Preserve your investment in your legacy Big Data
applications: Run your legacy code on Flink’s
powerful engine using Hadoop and Storm
compatibility layers and Cascading adapter. 13
14. 1.3 Project with a unique vision and philosophy
Native Support of many use cases on top of the same
streaming engine
• Batch
• Real-Time streaming
• Machine learning
• Graph processing
• Relational queries
Support building complex data pipelines
leveraging native libraries without the need to
combine and manage external ones.
14
15. 1.4 The only hybrid (Real-Time Streaming + Batch) open source distributed data processing engine natively supporting many use cases:
• Real-Time stream processing
• Machine Learning at scale
• Graph Analysis
• Batch Processing
15
16. 1.5 Major contributor to the movement of unification of
streaming and batch
Dataflow proposal for incubation has been renamed to Apache Beam (a combination of Batch and Stream)
https://wiki.apache.org/incubator/BeamProposal
Apache Beam was accepted into the Apache Incubator on February 1st, 2016 http://incubator.apache.org/projects/beam.html
Dataflow/Beam & Spark: A Programming Model Comparison, February 3rd, 2016
https://cloud.google.com/dataflow/blog/dataflow-beam-and-spark-comparison
By Tyler Akidau & Frances Perry, Software Engineers, Apache Beam Committers
16
17. 1.5 Major contributor to the movement of unification of
streaming and batch
Apache Flink includes DataFlow on Flink
http://data-artisans.com/dataflow-proposed-as-apache-incubator-project/
Keynotes of the Flink Forward 2015 conference:
• Keynote on October 12th, 2015 by Kostas Tzoumas and Stephan Ewen of dataArtisans
http://www.slideshare.net/FlinkForward/k-tzoumas-s-ewen-flink-forward-keynote/
• Keynote on October 13th, 2015 by William Vambenepe of Google
http://www.slideshare.net/FlinkForward/william-vambenepe-google-cloud-dataflow-and-flink-stream-processing-by-default
17
18. 1.6 The 4G of Big Data Analytics frameworks
Apache Flink is not YABDAF (Yet Another Big Data
Analytics Framework)!
Flink brings many technical innovations and a unique
vision and philosophy that distinguish it from:
Other multi-purpose Big Data analytics frameworks
such as Apache Hadoop and Apache Spark
Single-purpose Big Data Analytics frameworks such
as Apache Storm
Apache Flink is the 4G (4th Generation) of Big Data Analytics frameworks, succeeding Apache Spark.
18
19. Apache Flink as the 4G of Big Data Analytics
1st Generation (1G): Batch processing. Execution model: MapReduce.
2nd Generation (2G): Batch and interactive processing. Execution model: Direct Acyclic Graph (DAG) Dataflows.
3rd Generation (3G): Batch, interactive, near-real-time streaming and iterative processing. Execution model: RDDs (Resilient Distributed Datasets).
4th Generation (4G): Hybrid (Streaming + Batch), interactive, Real-Time streaming and native iterative processing. Execution model: Cyclic Dataflows.
19
20. How Big Data Analytics engines evolved?
The evolution of Massive-Scale Data Processing. Tyler Akidau, Google. Strata + Hadoop World, Singapore, December 2, 2015. Slides:
https://docs.google.com/presentation/d/10vs2PnjynYMtDpwFsqmSePtMnfJirCkXcHZ1SkwDg-s/present?slide=id.g63ca2a7cd_0_527
The world beyond batch:
Streaming 101, Tyler Akidau, Google, August 5, 2015
http://radar.oreilly.com/2015/08/the-world-beyond-batch-streaming-101.html
Streaming 102, Tyler Akidau, Google, January 20, 2016
https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102
It covers topics like event-time vs. processing-time, windowing, watermarks, triggers, and accumulation.
20
21. 2. What is Flink Execution Engine?
The core of Flink is a distributed and scalable streaming
dataflow engine with some unique features:
1. True streaming capabilities: Execute everything as
streams
2. Versatile: Engine allows to run all existing MapReduce,
Cascading, Storm, Google DataFlow applications
3. Native iterative execution: Allow some cyclic dataflows
4. Handling of mutable state
5. Custom memory manager: Operate on managed
memory
6. Cost-Based Optimizer: for both batch and stream processing
21
22. 3. Flink APIs
3.1 DataSet API for static data - Java, Scala,
and Python
3.2 DataStream API for unbounded real-time
streams - Java and Scala
22
23. 3.1 DataSet API – Batch processing
case class Word (word: String, frequency: Int)
DataSet API (batch): WordCount
val env = ExecutionEnvironment.getExecutionEnvironment()
val lines: DataSet[String] = env.readTextFile(...)
lines.flatMap {line => line.split(" ")
  .map(word => Word(word,1))}
  .groupBy("word").sum("frequency")
  .print()
env.execute()
DataStream API (streaming): Window WordCount
val env = StreamExecutionEnvironment.getExecutionEnvironment()
val lines: DataStream[String] = env.fromSocketStream(...)
lines.flatMap {line => line.split(" ")
  .map(word => Word(word,1))}
  .window(Time.of(5,SECONDS)).every(Time.of(1,SECONDS))
  .keyBy("word").sum("frequency")
  .print()
env.execute()
23
24. 3.2 DataStream API – Real-Time Streaming
Analytics
Flink Streaming provides a high-throughput, low-latency, stateful stream processing system with rich windowing semantics.
Streaming fault tolerance provides exactly-once processing guarantees for Flink streaming programs that analyze streaming sources persisted by Apache Kafka.
Flink Streaming provides native support for iterative
stream processing.
Data streams can be transformed and modified using
high-level functions similar to the ones provided by the
batch processing API.
24
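To make these windowing semantics concrete, here is a minimal sketch in the Scala DataStream API of the Flink 1.0 era: a socket source, a keyed stream and a sliding time window. The host, port and window sizes are illustrative assumptions, not taken from the slides.
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object WindowSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Hypothetical source: one word per line from a local socket
    val words: DataStream[String] = env.socketTextStream("localhost", 9999)
    words
      .map(w => (w, 1))
      .keyBy(0)                                     // key by the word
      .timeWindow(Time.seconds(5), Time.seconds(1)) // sliding 5s window, 1s slide
      .sum(1)                                       // count per word and window
      .print()
    env.execute("Keyed sliding-window word count (sketch)")
  }
}
The same program structure works unchanged whether the source is a socket, Kafka, or any other connector.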
25. 3.2 DataStream API – Real-Time Streaming
Analytics
Flink is based on a pipelined (streaming) execution engine akin to parallel database systems, which allows it to:
• implement true streaming & batch
• integrate streaming operations with rich windowing semantics seamlessly
• process streaming operations in a pipelined way with lower latency than micro-batch architectures and without the complexity of lambda architectures.
It has built-in connectors to many data sources like Flume, Kafka, Twitter, RabbitMQ, etc.
25
26. 3.2 DataStream API – Real-Time Streaming
Analytics
Apache Flink: streaming done right. Till Rohrmann.
January 31, 2016
https://fosdem.org/2016/schedule/event/hpc_bigdata_flink_streaming/
Web resources about stream processing with Apache
Flink at the Flink Knowledge Base
http://sparkbigdata.com/component/tags/tag/49-flink-streaming
26
28. 4.1 FlinkML - Machine Learning Library
FlinkML is the Machine Learning (ML) library for Flink.
It is written in Scala and was added in March 2015.
FlinkML aims to provide:
• an intuitive API
• scalable ML algorithms
• tools that help minimize glue code in end-to-end ML
applications
FlinkML will allow data scientists to:
• test their models locally using subsets of data
• use the same code to run their algorithms at a much
larger scale in a cluster setting.
28
29. 4.1 FlinkML
FlinkML unique features are:
1. Exploiting the in-memory data streaming nature of
Flink.
2. Natively executing iterative processing algorithms
which are common in Machine Learning.
3. Streaming ML designed specifically for data
streams.
FlinkML: Large-scale machine learning with Apache
Flink, Theodore Vasiloudis. October 21, 2015
Slides: https://sics.app.box.com/s/044omad6200pchyh7ptbyxkwvcvaiowu
Video: https://www.youtube.com/watch?v=k29qoCm4c_k&feature=youtu.be
Check more FlinkML web resources at the Apache Flink Knowledge Base: http://sparkbigdata.com/component/tags/tag/51-
29
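To give a flavor of that API, here is a minimal FlinkML sketch, assuming toy training data and illustrative hyperparameters (none of this is from the original slides):
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.LabeledVector
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.regression.MultipleLinearRegression

object FlinkMLSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Toy training data: label plus feature vector (values are illustrative)
    val training = env.fromElements(
      LabeledVector(1.0, DenseVector(0.2, 3.4)),
      LabeledVector(2.5, DenseVector(1.1, 0.7)))

    val mlr = MultipleLinearRegression()
      .setIterations(10)   // illustrative hyperparameters
      .setStepsize(0.5)

    mlr.fit(training)      // same code trains on a local sample or on a cluster
  }
}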
30. 4.2 Table – Relational Queries
Table API, written in Scala, allows specifying operations using SQL-like expressions instead of manipulating DataSet or DataStream.
Table API can be used in both batch (on structured data sets) and streaming programs (on structured data streams).
http://ci.apache.org/projects/flink/flink-docs-master/libs/table.html
Flink Table web resources at the Apache Flink Knowledge Base: http://sparkbigdata.com/component/tags/tag/52-flink-table
30
31. 4.2 Table API – Relational Queries
Table API (queries)
val customers = env.readCsvFile(…).as('id, 'mktSegment)
  .filter("mktSegment = AUTOMOBILE")
val orders = env.readCsvFile(…)
  .filter( o => dateFormat.parse(o.orderDate).before(date) )
  .as("orderId, custId, orderDate, shipPrio")
val items = orders
  .join(customers).where("custId = id")
  .join(lineitems).where("orderId = id")
  .select("orderId, orderDate, shipPrio, extdPrice * (Literal(1.0f) - discount) as revenue")
val result = items
  .groupBy("orderId, orderDate, shipPrio")
  .select("orderId, revenue.sum, orderDate, shipPrio")
31
32. 4.3 Gelly – Graph Analytics for Flink
Gelly is Flink's large-scale graph processing API,
available in Java and Scala, which leverages Flink's
efficient delta iterations to map various graph
processing models (vertex-centric and gather-sum-
apply) to dataflows.
Gelly provides:
• A set of methods and utilities to create, transform
and modify graphs
• A library of graph algorithms which aims to simplify
the development of graph analysis applications
• Iterative graph algorithms are executed leveraging
mutable state
32
33. 4.3 Gelly – Graph Analytics for Flink
Gelly allows Flink users to perform end-to-end data
analysis, without having to build complex pipelines and
combine different systems.
It can be seamlessly combined with Flink's DataSet API,
which means that pre-processing, graph creation, graph
analysis and post-processing can be done in the same
application.
Gelly documentation https://ci.apache.org/projects/flink/flink-docs-
master/libs/gelly_guide.html
Introducing Gelly: Graph Processing with Apache Flink
http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html
Check out more Gelly web resources at the Apache Flink Knowledge Base: http://sparkbigdata.com/component/tags/tag/50-gelly
33
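As a small illustration of Gelly combined with the DataSet API, here is a sketch with a toy graph (the vertex ids, values and the degree computation are illustrative, not from the slides):
import org.apache.flink.api.scala._
import org.apache.flink.graph.{Edge, Vertex}
import org.apache.flink.graph.scala.Graph

object GellySketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Toy graph: three vertices and two edges with Long ids and Double values
    val vertices = env.fromElements(
      new Vertex[Long, Double](1L, 0.0),
      new Vertex[Long, Double](2L, 0.0),
      new Vertex[Long, Double](3L, 0.0))
    val edges = env.fromElements(
      new Edge[Long, Double](1L, 2L, 1.0),
      new Edge[Long, Double](2L, 3L, 1.0))

    val graph = Graph.fromDataSet(vertices, edges, env)
    graph.outDegrees().print() // degree per vertex, computed as a regular DataSet job
  }
}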
35. 4.4 FlinkCEP: Complex Event Processing for
Flink
FlinkCEP is the complex event processing library for
Flink. It allows you to easily detect complex event
patterns in a stream of endless data.
Complex events can then be constructed from
matching sequences. This gives you the opportunity to
quickly get hold of what’s really important in your data.
https://ci.apache.org/projects/flink/flink-docs-
master/apis/streaming/libs/cep.html
35
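As an illustration, here is a small temperature-warning pattern in FlinkCEP's Scala API (the Scala CEP API arrived in a later Flink release than the Java one; the event type and threshold are hypothetical):
import org.apache.flink.cep.scala.{CEP, PatternStream}
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class TempReading(rackId: Int, temperature: Double) // hypothetical event type

object CepSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val readings: DataStream[TempReading] = env.fromElements(
      TempReading(1, 99.0), TempReading(1, 101.5), TempReading(1, 103.0))

    // Two consecutive over-temperature readings within 10 seconds
    val warning = Pattern
      .begin[TempReading]("first").where(_.temperature > 100)
      .next("second").where(_.temperature > 100)
      .within(Time.seconds(10))

    val matches: PatternStream[TempReading] = CEP.pattern(readings.keyBy(_.rackId), warning)
    // matches.select(...) would then turn matched sequences into alert events
  }
}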
36. 5. What is Flink Architecture?
Flink implements the Kappa Architecture:
run batch programs on a streaming system.
References about the Kappa Architecture:
• Questioning the Lambda Architecture - Jay Kreps ,
July 2nd, 2014 http://radar.oreilly.com/2014/07/questioning-the-lambda-
architecture.html
• Turning the database inside out with Apache
Samza -Martin Kleppmann, March 4th, 2015
o http://www.youtube.com/watch?v=fU9hR3kiOK0 (VIDEO)
o http://martin.kleppmann.com/2015/03/04/turning-the-database-inside-
out.html(TRANSCRIPT)
o http://blog.confluent.io/2015/03/04/turning-the-database-inside-out-with-
apache-samza/ (BLOG)
36
37. 5. What is Flink Architecture?
5.1 Client
5.2 Master (Job Manager)
5.3 Worker (Task Manager)
37
38. 5.1 Client
Type extraction
Optimize: in all APIs, not just SQL queries as in Spark
Construct job Dataflow graph
Pass job Dataflow graph to the Job Manager
Retrieve job results
case class Path (from: Long, to: Long)
val tc = edges.iterate(10) {
  paths: DataSet[Path] =>
    val next = paths
      .join(edges)
      .where("to")
      .equalTo("from") {
        (path, edge) => Path(path.from, edge.to)
      }
      .union(paths)
      .distinct()
    next
}
[Figure: the client performs type extraction and optimization, turning the program into an optimized dataflow plan (data sources orders.tbl and lineitem.tbl, Filter, Map, hybrid hash Join, grouped Reduce) before passing it to the Job Manager]
38
39. 5.2 Job Manager (JM) with High Availability
Parallelization: Create Execution Graph
Scheduling: Assign tasks to task managers
State tracking: Supervise the execution
[Figure: the Job Manager expands the optimized dataflow (data sources orders.tbl and lineitem.tbl, Filter, Map, hybrid hash Join, grouped Reduce) into a parallel execution graph and schedules its tasks across four Task Managers]
39
40. 5.3 Task Manager (TM)
Operations are split up into tasks depending on the specified parallelism
Each parallel instance of an operation runs in a separate task slot
The scheduler may run several tasks from different operators in one task slot
[Figure: three Task Managers, each exposing task slots]
40
41. 6. What is Flink Programming Model?
DataSet and DataStream as programming
abstractions are the foundation for user programs
and higher layers.
Flink extends the MapReduce model with new
operators that represent many common data analysis
tasks more naturally and efficiently.
All operators will start working in memory and
gracefully go out of core under memory pressure.
41
42. 6.1 DataSet
DataSet: abstraction for distributed data and the
central notion of the batch programming API
Files and other data sources are read into DataSets
• DataSet<String> text = env.readTextFile(…)
Transformations on DataSets produce DataSets
• DataSet<String> first = text.map(…)
DataSets are printed to files or on stdout
• first.writeAsCsv(…)
Computation is specified as a sequence of lazily
evaluated transformations
Execution is triggered with env.execute()
42
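Putting those points together, here is a minimal Scala sketch of the lazy DataSet lifecycle (the file paths are hypothetical):
import org.apache.flink.api.scala._

object LazyDataSetSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text: DataSet[String] = env.readTextFile("hdfs:///input/lines.txt") // lazy source
    val upper: DataSet[String] = text.map(_.toUpperCase)                    // lazy transformation
    upper.writeAsText("hdfs:///output/upper")                               // lazy sink
    env.execute("DataSet sketch") // only now is the whole dataflow scheduled and run
  }
}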
43. 6.1 DataSet
Used for Batch Processing
Source → DataSet → Operation → DataSet → Sink
Example: Map and Reduce operation
[Figure: a Map operation transforms an input DataSet element-wise, and a Reduce operation then aggregates the result]
43
44. 6.2 DataStream
Real-time event streams
Source → DataStream → Operation → DataStream → Sink
Example: Stream from a live stock feed
[Figure: a stock price stream (Microsoft 124, Google 516, Apple 235, …) flows through operations such as "alert if Microsoft > 120", "sum every 10 seconds" and "alert if sum > 10000", with events written to a database]
44
45. 7. What are Apache Flink tools?
7.1 Command-Line Interface (CLI)
7.2 Web Submission Client
7.3 Job Manager Web Interface
7.4 Interactive Scala Shell
7.5 Zeppelin Notebook
45
46. 7.1 Command-Line Interface (CLI)
Flink provides a CLI to run programs that are packaged
as JAR files, and control their execution.
bin/flink has 4 major actions
• run #runs a program.
• info #displays information about a program.
• list #lists scheduled and running jobs
• cancel #cancels a running job.
Example: ./bin/flink info ./examples/KMeans.jar
See CLI usage and related examples:
https://ci.apache.org/projects/flink/flink-docs-master/apis/cli.html
46
48. 7.2 Web Submission Client
Flink provides a web interface to:
• Upload programs
• Execute programs
• Inspect their execution plans
• Showcase programs
• Debug execution plans
• Demonstrate the system as a whole
The web interface runs on port 8080 by default.
To specify a custom port set the webclient.port
property in the ./conf/flink.yaml configuration file.
48
49. 7.3 Job Manager Web Interface
Overall system status
Job execution details
Task Manager resource
utilization
49
50. 7.3 Job Manager Web Interface
The JobManager web frontend allows you to:
• Track the progress of a Flink program, as all status changes are also logged to the JobManager’s log file.
• Figure out why a program failed, as it displays the exceptions of failed tasks and lets you see which parallel task failed first, causing the other tasks to cancel execution.
50
52. 7.4 Interactive Scala Shell
Flink comes with an Interactive Scala Shell, a REPL (Read-Evaluate-Print Loop):
./bin/start-scala-shell.sh
Interactive queries
Lets you explore data quickly
It can be used in a local setup as well as in a cluster setup.
The Flink Shell comes with command history and auto-completion.
Complete Scala API available
So far only batch mode is supported; there is a plan to add streaming in the future:
https://ci.apache.org/projects/flink/flink-docs-master/scala_shell.html
52
54. 7.5 Zeppelin Notebook
Web-based interactive computation
environment
Collaborative data analytics and
visualization tool
Combines rich text, execution code, plots
and rich media
Exploratory data science
Saving and replaying of written code
Storytelling
54
55. Agenda
I. What is Apache Flink stack and how it fits
into the Big Data ecosystem?
II. How Apache Flink integrates with Hadoop
and other open source tools?
III. Why Apache Flink is an alternative to
Apache Hadoop MapReduce, Apache Storm
and Apache Spark?
IV. Who is using Apache Flink?
V. Where to learn more about Apache Flink?
55
56. II. How Apache Flink integrates with Hadoop and other open source tools?
[Table: the open source tools Flink integrates with, grouped by service: Storage/Serving Layer, Data Formats, Data Ingestion Services, Resource Management]
56
57. II. How Apache Flink integrates with Hadoop and
other open source tools?
Flink integrates well with other open source tools for
data input and output as well as deployment.
Flink can run legacy Big Data applications: MapReduce, Cascading and Storm applications
Flink integrates with other open source tools
1. Data Input / Output
2. Deployment
3. Legacy Big Data applications
4. Other tools
57
58. 1. Data Input / Output
HDFS to read and write. Secure HDFS support
Reuse data types (that implement Writables interface)
Amazon S3
Microsoft Azure Storage
MapR-FS
Flink + Tachyon
http://tachyon-project.org/
Running Apache Flink on Tachyon http://tachyon-project.org/Running-
Flink-on-Tachyon.html
Flink + XtreemFS http://www.xtreemfs.org/
58
59. 1. Data Input / Output
Crunching Parquet Files with Apache Flink
https://medium.com/@istanbul_techie/crunching-parquet-files-with-apache-flink-
200bec90d8a7
Here are some examples of how to read/write data
from/to HBase:
https://github.com/apache/flink/tree/master/flink-staging/flink-
hbase/src/test/java/org/apache/flink/addons/hbase/example
Using MongoDB with Flink:
http://flink.apache.org/news/2014/01/28/querying_mongodb.html
https://github.com/m4rcsch/flink-mongodb-example
59
60. 1. Data Input / Output
Apache Kafka, a system that provides durability and
pub/sub functionality for data streams.
Kafka + Flink: A practical, how-to guide. Robert
Metzger and Kostas Tzoumas, September 2,
2015 http://data-artisans.com/kafka-flink-a-practical-how-
to/ https://www.youtube.com/watch?v=7RPQUsy4qOM
Click-Through Example for Flink’s KafkaConsumer
Checkpointing. Robert Metzger, September 2nd , 2015.
http://www.slideshare.net/robertmetzger1/clickthrough-example-for-flinks-
kafkaconsumer-checkpointing
MapR Streams (a proprietary alternative to Kafka that is compatible with the Apache Kafka 0.9 API) provides out-of-the-box integration with Apache Flink.
60
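As a sketch of the Kafka integration above, here is a Flink 1.0-era Scala program reading a topic through the Kafka 0.9 consumer connector (the broker address, group id and topic name are illustrative):
import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

object KafkaSourceSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // illustrative broker
    props.setProperty("group.id", "flink-demo")

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream: DataStream[String] = env.addSource(
      new FlinkKafkaConsumer09[String]("events", new SimpleStringSchema(), props))
    stream.print()
    env.execute("Kafka source sketch")
  }
}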
61. 1. Data Input / Output
Using Apache Nifi with Flink:
• Flink and NiFi: Two Stars in the Apache Big Data
Constellation. Matthew Ring. January 19th , 2016
http://www.slideshare.net/mring33/flink-and-nifi-two-stars-in-the-apache-big-
data-constellation
• Integration of Apache Flink and Apache Nifi. Bryan Bende,
February 4th , 2016
http://www.slideshare.net/BryanBende/integrating-nifi-and-flink
Using Elasticsearch with Flink:
https://www.elastic.co/
Building real-time dashboard applications with Apache Flink, Elasticsearch, and Kibana. By Fabian Hueske, December 7, 2015.
https://www.elastic.co/blog/building-real-time-dashboard-applications-with-apache-flink-elasticsearch-and-kibana
61
62. 2. Deployment
Deploy inside of Hadoop via YARN
• YARN Setup http://ci.apache.org/projects/flink/flink-docs-
master/setup/yarn_setup.html
• YARN Configuration
http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#yarn
Apache Flink cluster deployment on Docker using
Docker-Compose by Simons Laws from IBM.
Talk at the Flink Forward in Berlin on October 12,
2015.
Slides: http://www.slideshare.net/FlinkForward/simon-laws-apache-flink-
cluster-deployment-on-docker-and-dockercompose
Video recording (40’:49): https://www.youtube.com/watch?v=CaObaAv9tLE
62
63. 3. Legacy Big Data applications
Flink’s MapReduce compatibility layer allows you to:
• run legacy Hadoop MapReduce jobs
• reuse Hadoop input and output formats
• reuse functions like Map and Reduce.
References:
• Documentation: https://ci.apache.org/projects/flink/flink-docs-release-
0.7/hadoop_compatibility.html
• Hadoop Compatibility in Flink by Fabian Hüeske - November
18, 2014 http://flink.apache.org/news/2014/11/18/hadoop-compatibility.html
• Apache Flink - Hadoop MapReduce Compatibility. Fabian
Hüeske, January 29, 2015 http://www.slideshare.net/fhueske/flink-
hadoopcompat20150128
63
64. 3. Legacy Big Data applications
Cascading on Flink allows porting existing Cascading-MapReduce applications to Apache Flink with virtually no code changes.
http://www.cascading.org/cascading-flink/
Expected advantages are a performance boost and lower resource consumption.
References:
• Cascading on Apache Flink, Fabian Hueske, data Artisans. Flink
Forward 2015. October 12, 2015
• http://www.slideshare.net/FlinkForward/fabian-hueske-training-cascading-on-
flink
• https://www.youtube.com/watch?v=G7JlpARrFkU
• Cascading connector for Apache Flink. Code on Github
https://github.com/dataArtisans/cascading-flink
• Running Scalding jobs on Apache Flink, Ian Hummel, December 20, 2015
http://themodernlife.github.io/scala/hadoop/hdfs/sclading/flink/streaming/realtime/2015/12/20/running-scalding-jobs-on-apache-flink/
64
65. 3. Legacy Big Data applications
Flink is compatible with Apache Storm interfaces and
therefore allows reusing code that was implemented for
Storm:
• Execute existing Storm topologies using Flink as the underlying
engine.
• Reuse legacy application code (bolts and spouts) inside Flink
programs. https://ci.apache.org/projects/flink/flink-docs-
master/apis/streaming/storm_compatibility.html
A Tale of Squirrels and Storms. Mathias J. Sax, October 13, 2015.
Flink Forward 2015
http://www.slideshare.net/FlinkForward/matthias-j-sax-a-tale-of-squirrels-and-storms
https://www.youtube.com/watch?v=aGQQkO83Ong
Storm Compatibility in Apache Flink: How to run existing Storm
topologies on Flink. Mathias J. Sax, December 11, 2015
http://flink.apache.org/news/2015/12/11/storm-compatibility.html 65
66. 4. Other tools
Ambari service for Apache Flink: install, configure, manage Apache Flink on HDP, November 17, 2015
https://community.hortonworks.com/repos/4122/ambari-service-for-apache-flink.html
Exploring Apache Flink with HDP
https://community.hortonworks.com/articles/2659/exploring-apache-flink-with-hdp.html
Apache Flink + Apache SAMOA for Machine Learning on streams http://samoa.incubator.apache.org/
Flink integrates with Zeppelin
http://zeppelin.incubator.apache.org/
http://www.slideshare.net/FlinkForward/moon-soo-lee-data-science-lifecycle-with-apache-flink-and-apache-zeppelin
Flink + Apache MRQL http://mrql.incubator.apache.org
66
67. 4. Other tools
Google Cloud Dataflow (GA on August 12, 2015) is a fully-managed cloud service and a unified programming model for batch and streaming big data processing. https://cloud.google.com/dataflow/ (Try it FREE)
Flink-Dataflow is a Google Cloud Dataflow SDK Runner for Apache Flink. It enables you to run Dataflow programs with Flink as an execution engine.
References:
Google Cloud Dataflow on top of Apache Flink, Maximilian Michels, data Artisans. Flink Forward conference, October 12, 2015
Slides: http://www.slideshare.net/FlinkForward/maximilian-michels-google-cloud-dataflow-on-top-of-apache-flink
Video recording: https://www.youtube.com/watch?v=K3ugWmHb7CE
67
68. Agenda
I. What is Apache Flink stack and how it fits
into the Big Data ecosystem?
II. How Apache Flink integrates with Hadoop
and other open source tools for data input
and output as well as deployment?
III. Why Apache Flink is an alternative to
Apache Hadoop MapReduce, Apache Storm
and Apache Spark?
IV. Who is using Apache Flink?
V. Where to learn more about Apache Flink?
68
69. III. Why Apache Flink is an alternative to
Apache Hadoop MapReduce, Apache Storm
and Apache Spark?
1. Why Flink is an alternative to Hadoop
MapReduce?
2. Why Flink is an alternative to Apache Storm?
3. Why Flink is an alternative to Apache Spark?
4. What are the benchmarking results against
Flink?
69
70. 2. Why Flink is an alternative to Hadoop
MapReduce?
1. Flink offers cyclic dataflows compared to the two-
stage, disk-based MapReduce paradigm.
2. The application programming interface (API) for
Flink is easier to use than programming for
Hadoop’s MapReduce.
3. Flink is easier to test compared to MapReduce.
4. Flink can leverage in-memory processing, data
streaming and iteration operators for faster data
processing speed.
5. Flink can work on file systems other than Hadoop.
70
71. 2. Why Flink is an alternative to Hadoop
MapReduce?
6. Flink lets users work in a unified framework, allowing them to build a single data workflow that leverages streaming, batch, SQL and machine learning, for example.
7. Flink can analyze real-time streaming data.
8. Flink can process graphs using its own Gelly library.
9. Flink can use Machine Learning algorithms from its
own FlinkML library.
10. Flink supports interactive queries and iterative
algorithms, not well served by Hadoop MapReduce.
71
72. 2. Why Flink is an alternative to Hadoop MapReduce?
11. Flink extends the MapReduce model with new operators: join, cross, union, iterate, iterate delta, cogroup, …
[Figure: a classic Input → Map → Reduce → Output pipeline contrasted with a Flink dataflow that mixes Map, Reduce and Join operators over DataSets]
72
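A small Scala sketch of two of those extra operators, join and union, on toy DataSets (all data is illustrative):
import org.apache.flink.api.scala._

object OperatorSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val orders    = env.fromElements((1, "order-A"), (2, "order-B"))
    val customers = env.fromElements((1, "Alice"),   (2, "Bob"))

    // join: match orders to customers on the first tuple field
    val joined = orders.join(customers).where(0).equalTo(0)
    // union: concatenate two DataSets of the same type
    val allOrders = orders.union(env.fromElements((3, "order-C")))

    joined.print()
    allOrders.print()
  }
}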
73. 3. Why Flink is an alternative to Storm?
1. Higher Level and easier to use API
2. Lower latency
• Thanks to pipelined engine
3. Exactly-once processing guarantees
• Variation of Chandy-Lamport
4. Higher throughput
• Controllable checkpointing overhead
5. Flink separates application logic from recovery
• Checkpointing interval is just a configuration parameter
73
74. 3. Why Flink is an alternative to Storm?
6. More light-weight fault tolerance strategy
7. Stateful operators
8. Native support for iterative stream
processing.
9. Flink also supports batch processing
10. Flink offers Storm compatibility
• Flink is compatible with Apache Storm interfaces and
therefore allows reusing code that was implemented for
Storm.
https://ci.apache.org/projects/flink/flink-docs-
master/apis/storm_compatibility.html
74
75. 3. Why Flink is an alternative to Storm?
Extending the Yahoo! Streaming Benchmark, by
Jamie Grier. February 2nd, 2016
http://data-artisans.com/extending-the-yahoo-streaming-benchmark/
Code at Github: https://github.com/dataArtisans/yahoo-streaming-benchmark
Results show that Flink has much better throughput compared to Storm and better fault-tolerance guarantees: exactly-once.
High-throughput, low-latency, and exactly-once
stream processing with Apache Flink. The evolution
of fault-tolerant streaming architectures and their
performance – Kostas Tzoumas, August 5th 2015
http://data-artisans.com/high-throughput-low-latency-and-exactly-once-stream-
processing-with-apache-flink/
75
76. 4. Why Flink is an alternative to Spark?
4.1 True Low latency streaming engine
• Spark’s micro-batches aren’t good enough!
• Unified batch and real-time streaming in a single
engine
• The streaming model of Flink is based on the
Dataflow model similar to Google Dataflow
4.2 Unique windowing features not available in Spark
• support for event time
• out of order streams
• a mechanism to define custom windows based on
window assigners and triggers.
76
77. 4. Why Flink is an alternative to Spark?
4.3 Native closed-loop iteration operators
• make graph and machine learning applications run
much faster
4.4 Custom memory manager
• no more frequent Out Of Memory errors!
• Flink’s own type extraction component
• Flink’s own serialization component
4.5 Automatic Cost Based Optimizer
• little re-configuration and little maintenance when
the cluster characteristics change and the data
evolves over time
77
78. 4. Why Flink is an alternative to Apache
Spark?
4.6 Little configuration required
4.7 Little tuning required
4.8 Flink has better performance
78
79. 4.1 True low latency streaming engine
Some claim that 95% of streaming use cases can be handled with micro-batches!? Really!!!
Spark’s micro-batching isn’t good enough for many time-critical applications that need to process large streams of live data and provide results in real-time.
Below are several use cases, taken from real industrial situations, where batch or micro-batch processing is not appropriate.
References:
• MapR Streams FAQ https://www.mapr.com/mapr-streams-faq#question12
• Apache Spark vs. Apache Flink, January 13, 2015. Whiteboard
walkthrough by Balaji Narasimhalu from MapR
https://www.youtube.com/watch?v=Dzx-iE6RN4w 79
80. 4.1 True low latency streaming engine
Financial Services
– Real-time fraud detection.
– Real-time mobile notifications.
Healthcare
– Smart hospitals - collect data and readings from hospital
devices (vitals, IVs, MRI, etc.) and analyze and alert in real time.
– Biometrics - collect and analyze data from patient devices that
collect vitals while outside of care facilities.
Ad Tech
– Real-time user targeting based on segment and preferences.
Oil & Gas
– Real-time monitoring of pumps/rigs.
80
81. 4.1 True low latency streaming engine
Retail
– Build an intelligent supply chain by placing sensors or RFID
tags on items to alert if items aren’t in the right place, or
proactively order more if supply is low.
– Smart logistics with real-time end-to-end tracking of delivery
trucks.
Telecommunications
– Real-time antenna optimization based on user location data.
– Real-time charging and billing based on customer usage, ability
to populate up-to-date usage dashboards for users.
– Mobile offers.
– Optimized advertising for video/audio content based on what
users are consuming.
81
82. 4.1 True low latency streaming engine
“I would consider stream data analysis to be a major
unique selling proposition for Flink. Due to its
pipelined architecture Flink is a perfect match for big
data stream processing in the Apache stack.” – Volker
Markl
Ref.: On Apache Flink. Interview with Volker Markl, June 24th 2015
http://www.odbms.org/blog/2015/06/on-apache-flink-interview-with-volker-markl/
Apache Flink uses streams for all workloads:
streaming, SQL, micro-batch and batch.
Batch is just treated as a finite set of streamed data.
This makes Flink the most sophisticated distributed
open source Big Data processing engine.
82
83. 4.2 Unique windowing features not
available in Spark Streaming
Besides arrival time, support for event time or a mixture
of both for out of order streams
Custom windows based on window assigners and
triggers.
How Apache Flink enables new streaming applications.
Part I: The power of event time and out of order stream processing.
December 9, 2015 by Stephan Ewen and Kostas Tzoumas http://data-
artisans.com/how-apache-flink-enables-new-streaming-applications-part-1/
How Apache Flink enables new streaming applications.
Part II: State and versioning. February 3, 2016 by Ufuk Celebi and
Kostas Tzoumas
http://data-artisans.com/how-apache-flink-enables-new-streaming-applications/
83
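Here is a sketch of event-time windowing in the Scala DataStream API (the event type, the socket source and its line format are hypothetical, and timestamps are assumed to arrive in ascending order to keep the watermark logic trivial):
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

case class Click(userId: String, timestamp: Long) // hypothetical event type

object EventTimeSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime) // windows driven by event time

    // Hypothetical source: lines of "user,timestampMillis" from a socket
    val clicks: DataStream[Click] = env
      .socketTextStream("localhost", 9999)
      .map { line => val Array(u, t) = line.split(","); Click(u, t.toLong) }

    clicks
      .assignAscendingTimestamps(_.timestamp) // simplest watermark strategy
      .keyBy(_.userId)
      .timeWindow(Time.minutes(1))            // one-minute event-time windows per user
      .apply { (user: String, window: TimeWindow, events: Iterable[Click],
                out: Collector[(String, Int)]) =>
        out.collect((user, events.size))      // clicks per user and window
      }
      .print()

    env.execute("Event-time window sketch")
  }
}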
84. 4.2 Unique windowing features not
available in Spark Streaming
Flink 0.10: A significant step forward in open source stream processing. November 17, 2015. By Fabian Hueske and Kostas Tzoumas
http://data-artisans.com/flink-0-10-a-significant-step-forward-in-open-source-stream-processing/
Dataflow/Beam & Spark: A Programming Model Comparison. February 3, 2016. By Tyler Akidau & Frances Perry, Software Engineers, Apache Beam Committers
https://cloud.google.com/dataflow/blog/dataflow-beam-and-spark-comparison
84
86. 4.3 Iteration Operators
Flink's API offers two dedicated iteration operations:
Iterate and Delta Iterate.
Flink executes programs with iterations as cyclic
data flows: a data flow program (and all its operators)
is scheduled just once.
In each iteration, the step function consumes the
entire input (the result of the previous iteration, or the
initial data set), and computes the next version of the
partial solution
86
87. 4.3 Iteration Operators
Delta iterations run only on the parts of the data that
are changing, which can significantly speed up many
machine learning and graph algorithms because the
work per iteration shrinks as the iterations proceed.
Documentation on iterations with Apache Flink: http://ci.apache.org/projects/flink/flink-docs-master/apis/iterations.html
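A minimal, self-contained sketch of the Delta Iterate operator (a toy example of mine, not from the slides): each pass decrements the values still in the workset, and the iteration terminates as soon as the workset is empty, well before the maximum of 100 passes:

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DeltaIteration;
import org.apache.flink.api.java.tuple.Tuple2;

public class DeltaIterationSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Solution set: (key, value); the workset holds elements still changing.
    DataSet<Tuple2<Long, Long>> initial = env.fromElements(
        Tuple2.of(1L, 5L), Tuple2.of(2L, 3L), Tuple2.of(3L, 0L));

    // Key field 0 identifies which solution-set entries a delta updates.
    DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
        initial.iterateDelta(initial, 100, 0);

    // Step: decrement each workset value (a stand-in for real work).
    DataSet<Tuple2<Long, Long>> delta = iteration.getWorkset()
        .map(new MapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>() {
          @Override
          public Tuple2<Long, Long> map(Tuple2<Long, Long> t) {
            return Tuple2.of(t.f0, Math.max(0L, t.f1 - 1));
          }
        });

    // Only still-positive entries stay in the workset, so the amount of
    // work per iteration shrinks until the iteration stops.
    DataSet<Tuple2<Long, Long>> nextWorkset = delta
        .filter(new FilterFunction<Tuple2<Long, Long>>() {
          @Override
          public boolean filter(Tuple2<Long, Long> t) {
            return t.f1 > 0;
          }
        });

    iteration.closeWith(delta, nextWorkset).print();
  }
}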
88. 4.3 Iteration Operators
[Diagram: the client driver runs a loop and submits a separate job (step) to the cluster for every iteration.]
for (int i = 0; i < maxIterations; i++) {
  // Execute MapReduce job: each pass is scheduled as a brand-new job
}
Non-native iterations in Hadoop and Spark are
implemented as regular for-loops outside the system.
89. 4.3 Iteration Operators
Although Spark caches data across iterations, it still
needs to schedule and execute a new set of tasks for
each iteration.
Spark uses driver-based looping:
• Loop outside the system, in the driver program
• An iterative program looks like many independent jobs
Flink has built-in iterations:
• Dataflow with feedback edges
• The system is iteration-aware and can optimize the job
Spinning Fast Iterative Data Flows, Ewen et al. 2012 (the Apache Flink model for incremental iterative dataflow processing): http://vldb.org/pvldb/vol5/p1268_stephanewen_vldb2012.pdf
90. 4.4 Custom Memory Manager
Features:
C++ style memory management inside the JVM
User data stored in serialized byte arrays in JVM
Memory is allocated, de-allocated, and used strictly
using an internal buffer pool implementation.
Advantages:
1. Flink will not throw an OutOfMemory exception at you.
2. Reduced garbage collection (GC) pressure
3. Very efficient disk spilling and network transfers
4. No need for runtime tuning
5. More reliable and stable performance
91. 4.4 Custom Memory Manager
public class WC {
public String word;
public int count;
}
[Diagram: the JVM heap is divided into Flink-managed memory, a pool of memory pages holding user data in serialized form and used for sorting, hashing, and caching, plus network buffers for shuffles and broadcasts, and unmanaged memory holding user-code objects.]
Flink contains its own memory management stack.
To do that, Flink contains its own type extraction
and serialization components.
92. 4.4 Custom Memory Manager
Flink provides an Off-Heap option for its memory
management component.
References:
• Peeking into Apache Flink's Engine Room, by Fabian Hüske, March 13, 2015. http://flink.apache.org/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
• Juggling with Bits and Bytes, by Fabian Hüske, May 11, 2015. https://flink.apache.org/news/2015/05/11/Juggling-with-Bits-and-Bytes.html
• Memory Management (Batch API), by Stephan Ewen, May 16, 2015. https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=53741525
93. 4.4 Custom Memory Manager
Compared to Flink, Spark is catching up with its
Project Tungsten for memory management and
binary processing: manage memory explicitly and
eliminate the overhead of the JVM object model and
garbage collection. April 28, 2015. https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html
It seems that Spark is adopting something similar to
Flink, and the initial Tungsten announcement read
almost like Flink documentation!
94. 4.5 Built-in Cost-Based Optimizer
Apache Flink comes with an optimizer that is
independent of the actual programming interface.
It chooses a fitting execution strategy depending
on the inputs and operations.
Example: the "Join" operator will choose between
partitioning and broadcasting the data, as well as
between running a sort-merge-join or a hybrid hash
join algorithm.
This helps you focus on your application logic
rather than parallel execution.
Quick introduction to the optimizer: section 6 of the paper ‘The Stratosphere platform for big data analytics’. http://stratosphere.eu/assets/papers/2014-VLDBJ_Stratosphere_Overview.pdf
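As a sketch of what this looks like in practice (mine, not from the deck): by default the optimizer chooses the join strategy, and the Flink 1.x DataSet API also accepts an explicit JoinHint for the cases where you know your inputs better than the optimizer does:

import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinStrategySketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<Tuple2<Long, String>> small = env.fromElements(
        Tuple2.of(1L, "a"), Tuple2.of(2L, "b"));
    DataSet<Tuple2<Long, Integer>> large = env.fromElements(
        Tuple2.of(1L, 10), Tuple2.of(2L, 20), Tuple2.of(2L, 30));

    // Default: the optimizer picks partitioning vs. broadcasting and
    // sort-merge vs. hybrid hash join from its input size estimates.
    small.join(large).where(0).equalTo(0).print();

    // Explicit hint: broadcast the (known-small) first input and hash it.
    small.join(large, JoinHint.BROADCAST_HASH_FIRST)
         .where(0).equalTo(0).print();
  }
}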
95. 4.5 Built-in Cost-Based Optimizer
What is automatic optimization? The system’s built-in
optimizer takes care of finding the best way to
execute the program in any environment.
[Diagram: the same program compiles into different execution plans depending on where it runs: Plan A when run locally on a data sample on the laptop, Plan B when run on large files on the cluster, Plan C when run a month later after the data evolved. The optimizer decides hash vs. sort, partitioning vs. broadcasting, caching, and reuse of existing partitioning/sort order.]
96. 4.5 Built-in Cost-Based Optimizer
In contrast to Flink’s built-in automatic optimization,
Spark jobs have to be manually optimized and
adapted to specific datasets because you need to
manually control partitioning and caching if you
want to get it right.
Spark SQL uses the Catalyst optimizer, which
supports both rule-based and cost-based optimization.
References:
• Spark SQL: Relational Data Processing in Spark. http://people.csail.mit.edu/matei/papers/2015/sigmod_spark_sql.pdf
• Deep Dive into Spark SQL’s Catalyst Optimizer. https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html
97. 4.6 Little configuration required
Flink requires no memory thresholds to
configure
• Flink manages its own memory
Flink requires no complicated network
configurations
• Pipelining engine requires much less
memory for data exchange
Flink requires no serializers to be configured
• Flink handles its own type extraction and
data representation
98. 4.7 Little tuning required
Flink programs can be adjusted to data automatically
• Flink’s optimizer can choose execution strategies
automatically
According to Mike Olson, Chief Strategy Officer of
Cloudera Inc.: “Spark is too knobby — it has too many
tuning parameters, and they need constant adjustment
as workloads, data volumes, user counts change.”
Reference: http://vision.cloudera.com/one-platform/
Tuning Spark Streaming for Throughput, by Gerard Maas from Virdata. December 22, 2014. http://www.virdata.com/tuning-spark/
Spark Tuning: http://spark.apache.org/docs/latest/tuning.html
99. 4.8 Flink has better performance
Why does Flink provide better performance?
• Custom memory manager
• Native closed-loop iteration operators make graph
and machine learning applications run much faster.
• The built-in automatic optimizer, for example,
enables more efficient join processing.
• Pipelining data to the next operator in Flink is more
efficient than in Spark.
Reference:
• A comparative performance evaluation of Flink, Dongwon Kim, POSTECH. October 12, 2015. http://www.slideshare.net/FlinkForward/dongwon-kim-a-comparative-performance-evaluation-of-flink
100. 5. What are the benchmarking results
against Flink?
I am maintaining a list of resources related to
benchmarks against Flink: http://sparkbigdata.com/102-spark-blog-slim-baltagi/14-results-of-a-benchmark-between-apache-flink-and-apache-spark
A couple of resources worth mentioning:
• A comparative performance evaluation of Flink, Dongwon Kim, POSTECH, Flink Forward, October 13, 2015. http://www.slideshare.net/FlinkForward/dongwon-kim-a-comparative-performance-evaluation-of-flink
• Benchmarking Streaming Computation Engines at Yahoo, December 16, 2015. http://yahooeng.tumblr.com/post/135321837876/benchmarking-streaming-computation-engines-at Code on GitHub: https://github.com/yahoo/streaming-benchmarks
101. Agenda
I. What is Apache Flink stack and how it fits
into the Big Data ecosystem?
II. How Apache Flink integrates with Hadoop
and other open source tools for data input
and output as well as deployment?
III. Why Apache Flink is an alternative to
Apache Hadoop MapReduce, Apache Storm
and Apache Spark.
IV. Who is using Apache Flink?
V. Where to learn more about Apache Flink?
102. IV. Who is using Apache Flink?
You might like what you have seen so far about
Apache Flink and still be reluctant to give it a try!
You might wonder: is there anybody using Flink in a
pre-production or production environment?
I asked this question to our friend ‘Google’ and
came up with a short list, shown in the next slide!
I also heard more about who is using Flink in
production at the Flink Forward conference on
October 12-13, 2015 in Berlin, Germany!
http://flink-forward.org/
103. IV. Who is using Apache Flink?
How companies are using Flink, as presented at Flink Forward 2015. Kostas Tzoumas and Stephan Ewen. http://www.slideshare.net/stephanewen1/flink-use-cases-bay-area-meetup-october-2015
Powered by Flink page: https://cwiki.apache.org/confluence/display/FLINK/Powered+by+Flink
104. IV. Who is using Apache Flink?
6 Apache Flink case studies from the 2015 Flink Forward conference: http://sparkbigdata.com/102-spark-blog-slim-baltagi/21-6-apache-flink-case-studies-from-the-2015-flinkforward-conference
Mine the Apache Flink user mailing list to discover more!
Gradoop: Scalable Graph Analytics with Apache Flink
• Gradoop project page: http://dbs.uni-leipzig.de/en/research/projects/gradoop
• Gradoop: Scalable Graph Analytics with Apache Flink @ FOSDEM 2016. January 31, 2016. http://www.slideshare.net/s1ck/gradoop-scalable-graph-analytics-with-apache-flink-fosdem-2016
105. IV. Who is using Apache Flink?
PROTEUS (http://www.proteus-bigdata.com/) is
a European Union funded research project to improve
Apache Flink and mainly to develop two libraries
(visualization and online machine learning) on top of
Flink core.
PROTEUS: Scalable Online Machine Learning by
Rubén Casado at Big Data Spain 2015
• Video: https://www.youtube.com/watch?v=EIH7HLyqhfE
• Slides: http://www.slideshare.net/Datadopter/proteus-h2020-big-data
106. IV. Who is using Apache Flink?
Twitter had its hack week, and the winner was
a Flink-based streaming project! December 18, 2015
• Extending the Yahoo! Streaming Benchmark and Winning Twitter Hack-Week with Apache Flink. Posted on February 2, 2016 by Jamie Grier. http://data-artisans.com/extending-the-yahoo-streaming-benchmark/
Yahoo! did some benchmarks to compare the
performance of their use case, implemented on
Apache Storm, against Spark Streaming and Flink.
Results posted on December 18, 2015. http://yahooeng.tumblr.com/post/135321837876/benchmarking-streaming-computation-engines-at
107. Agenda
I. What is Apache Flink stack and how it fits
into the Big Data ecosystem?
II. How Apache Flink integrates with Hadoop
and other open source tools for data input
and output as well as deployment?
III. Why Apache Flink is an alternative to
Apache Hadoop MapReduce, Apache Storm
and Apache Spark?
IV. Who is using Apache Flink?
V. Where to learn more about Apache Flink?
108. V. Where to learn more about Apache Flink?
1. What is Flink 2016 roadmap?
2. How to get started quickly with Apache
Flink?
3. Where to find more resources about
Apache Flink?
4. How to contribute to Apache Flink?
5. What are some Key Takeaways?
109. 1. What is Flink 2016 roadmap?
SQL/StreamSQL and Table API
CEP library: Complex Event Processing library for the
analysis of complex patterns such as correlations and
sequence detection from multiple sources.
https://github.com/apache/flink/pull/1557 January 28, 2016
Dynamic Scaling: Runtime scaling for DataStream
programs
Managed memory for streaming operators
Support for Apache Mesos
https://issues.apache.org/jira/browse/FLINK-1984
Security: Over-the-wire encryption of RPC (Akka) and
data transfers (Netty)
Additional streaming connectors: Cassandra, Kinesis
110. 1. What is Flink 2016 roadmap? (continued)
Expose more runtime metrics: Throughput / Latencies,
Backpressure monitoring, Spilling / Out of Core
Making YARN resource dynamic
DataStream API enhancements
DataSet API Enhancements
References:
• Apache Flink Roadmap Draft, December 2015. https://docs.google.com/document/d/1ExmtVpeVVT3TIhO1JoBpC5JKXm-778DAD7eqw5GANwE/edit
• What’s next? Roadmap 2016. Robert Metzger, January 26, 2016. Berlin Apache Flink Meetup. http://www.slideshare.net/robertmetzger1/january-2016-flink-community-update-roadmap-2016/9
111. 2. How to get started quickly with Apache
Flink?
Step-by-Step Introduction to Apache Flink: http://www.slideshare.net/sbaltagi/stepbystep-introduction-to-apache-flink
Implementing BigPetStore with Apache Flink: http://www.slideshare.net/MrtonBalassi/implementing-bigpetstore-with-apache-flink
Apache Flink Crash Course: http://www.slideshare.net/sbaltagi/apache-flinkcrashcoursebyslimbaltagiandsrinipalthepu
Free training from data Artisans: http://dataartisans.github.io/flink-training/
All talks at Flink Forward 2015: http://sparkbigdata.com/102-spark-blog-slim-baltagi/22-all-talks-of-the-2015-flink-forward-conference
112. 3. Where to find more resources about
Flink?
Flink at the Apache Software Foundation: flink.apache.org/
data-artisans.com
@ApacheFlink, #ApacheFlink, #Flink
apache-flink.meetup.com
github.com/apache/flink
user@flink.apache.org dev@flink.apache.org
Flink Knowledge Base
http://sparkbigdata.com/component/tags/tag/27-flink
113. 4. How to contribute to Apache Flink?
Contributions to the Flink project can be in the
form of:
• Code
• Tests
• Documentation
• Community participation: discussions, questions,
meetups, …
How to contribute guide (also contains a list of simple “starter issues”): http://flink.apache.org/how-to-contribute.html
114. 5. What are some key takeaways?
1. Although most of the current buzz is about Spark,
Flink offers the only hybrid (Real-Time Streaming +
Batch) open source distributed data processing
engine natively supporting many use cases.
2. With the upcoming release of Apache Flink 1.0, I
foresee more adoption especially in use cases with
Real-Time stream processing and also fast iterative
machine learning or graph processing.
3. I foresee Flink embedded in major Hadoop
distributions and supported!
4. Apache Spark and Apache Flink will both have their
sweet spots despite their “Me Too Syndrome”!
116. Thanks!
• To all of you for attending!
• To Bloomberg for sponsoring this event.
• To data Artisans for allowing me to use some of
their materials for my slide deck.
• To Capital One for giving me time to prepare and
give this talk.
• Yes, we are hiring for our New York City offices
and our other locations! http://jobs.capitalone.com
• Drop me a note at sbaltagi@gmail.com if you’re
interested.