Virtual Flink Forward 2020: Build your next-generation stream platform based ... - Flink Forward
As organizations get better at capturing streaming data, and data velocity and volume keep increasing, traditional messaging queues and log storage systems suffer from scalability, operational, and maintenance problems. Apache Pulsar is a multi-tenant, high-performance distributed pub-sub messaging system. Pulsar offers native support for multiple clusters in a single Pulsar instance, seamless geo-replication of messages across clusters, very low publishing and end-to-end latency, seamless scalability to over a million topics, and guaranteed message delivery with persistent message storage provided by Apache BookKeeper. In this talk, I will use one of the most popular stream processing engines, Apache Flink, as an example to share our experience in building a stream processing and storage stack. Topics include:
* How to ensure end-to-end exactly-once semantics based on Pulsar's durable, replayable storage and Pulsar transactions.
* How to expose Pulsar topics as infinite tables based on Pulsar's schema.
* How to efficiently store stream state in Pulsar based on Pulsar's layered storage API.
* A usage scenario chaining all of these functionalities in the streaming platform.
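The exactly-once point above hinges on transactional sinks: output is buffered in an uncommitted transaction and made visible only once a checkpoint completes, so replay after a failure never produces duplicates. A toy sketch of that two-phase protocol follows; class and method names are invented for illustration and are not the Pulsar or Flink API.

```python
class TxnSink:
    """Toy two-phase-commit sink: writes become visible only on commit."""
    def __init__(self):
        self.pending = []    # written inside the open transaction
        self.committed = []  # visible to downstream consumers

    def write(self, record):
        self.pending.append(record)          # phase 1: pre-commit buffer

    def on_checkpoint_complete(self):
        self.committed.extend(self.pending)  # phase 2: commit atomically
        self.pending.clear()

    def on_failure(self):
        self.pending.clear()                 # abort; upstream replays the data

sink = TxnSink()
sink.write("a"); sink.write("b")
sink.on_failure()                  # crash before the checkpoint: nothing visible
sink.write("a"); sink.write("b")   # replay from the last checkpoint
sink.on_checkpoint_complete()
print(sink.committed)              # -> ['a', 'b'], exactly once
```

The key design point is that failure and replay touch only the pending buffer, so downstream consumers never observe a record twice.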
Flink Connector Development Tips & Tricks - Eron Wright
A look at some of the challenges and techniques for developing a connector for Apache Flink, covering the different types of connectors, lifecycle, metrics, event-time support, and fault tolerance.
Presentation video: https://www.youtube.com/watch?v=ZkbYO5S4z18
Flink Forward SF 2017: Stephan Ewen - Convergence of real-time analytics and ... - Flink Forward
Witnessing the rise of stream processing from the driving seat, we see Apache Flink® and associated technologies used for a wide variety of business applications, from routing data through systems, serving as a backbone for real-time analytics on live data using SQL, detecting credit card fraud, to implementing complete end-to-end social networks. Such applications enable modern data-driven businesses where decisions and actions happen in real-time, and transform traditional businesses to become more data-driven. Observing the variety of these applications implemented using Flink, it becomes apparent that the traditional dividing line between analytics and operational applications is becoming more and more blurry. Historically, operational applications were built using transactional databases, and analytics were done offline. In contrast, Flink’s state, checkpoints, and time management are the core building blocks for both operational applications with strong data consistency needs, and for real-time analytics with correctness guarantees. With these shared building blocks, developers start building what is arguably a new class of data-driven applications: applications that are operational in that they serve live systems and at the same time analytical in that they perform complex data analysis. Following application architectures like CQRS and using new features like Flink’s queryable state, streaming analytics and online applications move even closer to each other. In this talk, guided by real-world use cases, we present how the unique core concepts behind Flink simplify the development, deployment, and management of data-driven applications, and we conclude with a vision for the future of Flink and stream processing.
What's new in 1.9.0 blink planner - Kurt Young, Alibaba - Flink Forward
Flink 1.9.0 added the ability to support multiple SQL planners under the same API. With this, we successfully merged many features from Alibaba's internal Flink version, called Blink. In this talk, I will give an introduction to the architecture of the Blink planner, and also share the functionality and performance enhancements we added.
In this talk, we describe the design and implementation of the Python Streaming API support that has been submitted for inclusion in mainline Flink. Python is one of the most popular programming languages for data analysis. Its readability emphasizes development productivity, and as a scripting language it requires neither compilation nor a complex development environment setup. Flink already supports Python APIs for batch programming, but unfortunately the mechanism used to support batch programs (i.e., the DataSet API) does not work for the Streaming API. We describe the limitations of the batch implementation and provide insights into how we solved this using Jython. We will walk through some example programs using the new Python API and compare programmability and performance with the Java and Scala streaming APIs.
Flink Forward Berlin 2017: Dominik Bruhn - Deploying Flink Jobs as Docker Con... - Flink Forward
This talk will focus on how to package, distribute, and deploy Flink jobs by leveraging existing Docker technology. Previously, deploying Flink jobs was a manual process that led to errors. In this talk, we present an approach that works well in a CI/CD environment by automating most steps: from the code of a Flink job in a repository to a running job on a YARN cluster.
Keystone Data Pipeline manages several thousand Flink pipelines with variable workloads. These pipelines are simple routers which consume from Kafka and write to one of three sinks. To alleviate our operational overhead, we’ve implemented autoscaling for our routers. Autoscaling has reduced our resource usage by 25% - 45% (varying by region and time), and has reduced our on-call burden. This talk will take an in-depth look at the mathematics, algorithms, and infrastructure details for implementing autoscaling of simple pipelines at scale. It will also discuss future work on autoscaling complex pipelines.
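At its core, the autoscaling mathematics mentioned above amounts to sizing parallelism from observed throughput. A minimal sketch under invented names and numbers (this is an illustration, not Netflix's actual algorithm):

```python
import math

def target_parallelism(incoming_rate, rate_per_task, headroom=0.25,
                       min_tasks=1, max_tasks=1000):
    """Parallelism needed to absorb incoming_rate with spare capacity."""
    needed = incoming_rate * (1 + headroom) / rate_per_task
    return max(min_tasks, min(max_tasks, math.ceil(needed)))

# 50k events/s, ~8k events/s per task, 25% headroom -> 8 tasks
print(target_parallelism(50_000, 8_000))  # -> 8
```

A production policy would additionally smooth the incoming rate over a window and rate-limit rescaling, since each rescale restarts the job from a checkpoint.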
Flink Forward Berlin 2017: Patrick Lucas - Flink in Containerland - Flink Forward
Apache Flink, a powerful distributed stateful stream processing framework, is an especially good fit for deployment on a containerization platform: its storage requirement is primarily external (e.g. HDFS or S3), clusters often share the lifetime of the jobs that run on them, and the flexibility of allocating resources on such a platform allows for scaling jobs up and down as necessary. In this talk I will give a brief introduction to Apache Flink, then describe the journey to making it a first-class citizen of the container world. I will cover my experience preparing to publish the “official repository” of Flink images on Docker Hub, the challenges of fitting a Flink deployment in a Kubernetes-shaped box, and the rough edges of Flink itself that were exposed by this process.
In Data Engineer’s Lunch #41: Pygrametl, we discussed pygrametl, a Python ETL tool, to close out our series on ETL tools.
Accompanying Blog: https://blog.anant.us/data-engineers-lunch-41-pygrametl
Accompanying YouTube: https://youtu.be/YiPuJyYLXxs
http://flink-forward.org/kb_sessions/declarative-stream-processing-with-streamsql-and-cep/
Complex event processing (CEP) and stream analytics are commonly treated as distinct classes of stream processing applications. While CEP workloads identify patterns from event streams in near real-time, stream analytics queries ingest and aggregate high-volume streams. Both types of use cases have very different requirements which resulted in diverging system designs. CEP systems excel at low-latency processing whereas engines for stream analytics achieve high throughput. Recent advances in open source stream processing yielded systems that can process several millions of events per second at sub-second latency. Systems like Apache Flink enable applications that include typical CEP features as well as heavy aggregations. In this talk we will show how Apache Flink unifies CEP and stream analytics workloads. Guided by examples, we introduce Flink’s CEP-enriched StreamSQL interface and discuss how queries are compiled, optimized, and executed on Flink.
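As a concrete picture of the CEP side described above, here is a toy matcher that flags three consecutive failed authentications per user. Flink's CEP library would express the same pattern declaratively via its Pattern API; the event names and fields here are invented for illustration.

```python
from collections import defaultdict

def detect_failures(events, threshold=3):
    """Alert when a user produces `threshold` consecutive failed auths."""
    fails = defaultdict(int)
    alerts = []
    for user, kind in events:
        if kind == "auth_fail":
            fails[user] += 1
            if fails[user] == threshold:
                alerts.append(user)
        else:
            fails[user] = 0  # any success resets the partial match
    return alerts

stream = [("bob", "auth_fail"), ("bob", "auth_fail"),
          ("alice", "auth_ok"), ("bob", "auth_fail")]
print(detect_failures(stream))  # -> ['bob']
```

Note the per-key counter is exactly the kind of fine-grained state that lets a CEP engine run at low latency: each event touches only one user's partial match.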
Flink Forward SF 2017: Kenneth Knowles - Back to Sessions overview - Flink Forward
Apache Beam lets you write data pipelines over unbounded, out-of-order, global-scale data that are portable across diverse backends including Apache Flink, Apache Apex, Apache Spark, and Google Cloud Dataflow. But not all use cases are pipelines of simple "map" and "combine" operations. Beam's new State API adds scalability and consistency to fine-grained stateful processing, all with Beam's usual portability. Examples of new use cases unlocked include:
* Microservice-like streaming applications
* Aggregations that aren't natural/efficient as an associative combiner
* Fine control over retrieval and storage of intermediate values during aggregation
* Output based on customized conditions, such as limiting to only "significant" changes in a learned model (resulting in potentially large cost savings in subsequent processing)
This talk will introduce the new state and timer features in Beam and show how to use them to express common real-world use cases in a backend-agnostic manner.
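The "significant changes only" use case above maps naturally to per-key state: remember the last emitted value and suppress small updates. A hedged pure-Python sketch of the idea (names and threshold are illustrative, not Beam's API, where this would live in a per-key ValueState read and written inside a DoFn):

```python
def significant_changes(updates, threshold=0.1):
    """Emit (key, value) only when it moved >= threshold since the last emit."""
    last = {}
    out = []
    for key, value in updates:
        prev = last.get(key)
        if prev is None or abs(value - prev) >= threshold:
            out.append((key, value))
            last[key] = value  # the per-key state the State API provides
    return out

print(significant_changes([("m", 1.0), ("m", 1.05), ("m", 1.2)]))
# -> [('m', 1.0), ('m', 1.2)]
```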
Flink Forward SF 2017: Stefan Richter - Improvements for large state and reco... - Flink Forward
Stateful stream processing with exactly-once guarantees is one of Apache Flink's distinctive features, and the scale of state managed by Flink in production grows constantly. This leads to a couple of interesting challenges for state handling in Flink. In this talk, we present current and future developments to improve the handling of large state and recovery in Apache Flink. We show how to keep snapshots of large state swift and how to minimize negative effects on job performance through incremental and asynchronous checkpointing. Furthermore, we discuss how to greatly accelerate recovery under failures and for rescaling. In this context, we go into details about improved execution graph recovery, caching state on task managers, and considering new features of modern storage architectures for our state backends.
Flink Forward San Francisco 2019: Massive Scale Data Processing at Netflix us... - Flink Forward
Over 137 million members worldwide enjoy TV series and feature films across a wide variety of genres and languages on Netflix, which leads to petabytes of user behavior data. At Netflix, our client logging platform collects and processes this data to power recommendations, personalization, and many other services that enhance the user experience. Built with Apache Flink, this platform processes hundreds of billions of events and a petabyte of data per day, at 2.5 million events/sec, with sub-millisecond latency. The processing involves a series of data transformations such as decryption and enrichment of customer, geo, and device information using microservice-based lookups.
The transformed and enriched data is further used by multiple data consumers for a variety of applications, such as improving user experience with A/B tests, tracking application performance metrics, and tuning algorithms. This causes redundant reads of the dataset by multiple batch jobs and incurs heavy processing costs. To avoid this, we have developed a config-driven, centralized, managed platform on top of Apache Flink that reads this data once and routes it to multiple streams based on dynamic configuration. This has resulted in improved computation efficiency, reduced costs, and reduced operational overhead.
Stream processing at scale while keeping the production systems scalable and cost-efficient brings interesting challenges. In this talk, we will share how we leverage Apache Flink to achieve this, the challenges we faced, and our learnings from running one of the largest Flink applications at Netflix.
Unify Enterprise Data Processing System Platform Level Integration of Flink a... - Flink Forward
In this talk, I will present how Flink enables enterprise customers to unify their data processing systems by using Flink to query Hive data.
Unification of streaming and batch is a main theme for Flink. Since 1.9.0, we have integrated Flink with Hive at the platform level. I will talk about:
- what features we have released so far, and what they enable our customers to do
- best practices to use Flink with Hive
- what is the latest development status of Flink-Hive integration at the time of Flink Forward Berlin (Oct 2019), and what to look for in the next release (probably 1.11)
Flink Forward Berlin 2017: Fabian Hueske - Using Stream and Batch Processing ... - Flink Forward
Apache Flink's DataStream API is very expressive and gives users precise control over time and state. However, many applications do not require this level of expressiveness and can be implemented more concisely and easily with a domain-specific API. SQL is undoubtedly the most widely used language for data processing but is usually applied in the domain of batch processing. Apache Flink features two relational APIs for unified stream and batch processing: the Table API, a language-integrated relational query API for Scala and Java, and SQL. A Table API or SQL query computes the same result regardless of whether it is evaluated on a static file or on a Kafka topic. While Flink evaluates queries on batch input like a conventional query engine, queries on streaming input are continuously processed and their results constantly updated and refined. In this talk we present Flink’s unified relational APIs, show how streaming SQL queries are processed, and discuss exciting new use-cases.
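The claim that a query computes the same result on a static file or a stream can be pictured with a tiny aggregation: a one-shot batch evaluation and an incrementally refined result table converge to the same answer. A pure-Python sketch of the semantics, not Flink's runtime:

```python
records = [("a", 3), ("b", 1), ("a", 2)]  # e.g. SELECT k, SUM(v) GROUP BY k

# Batch: evaluate once over the complete, bounded input.
batch = {}
for k, v in records:
    batch[k] = batch.get(k, 0) + v

# Streaming: refine a result table record by record; Flink would emit
# each refinement downstream as an update/retraction.
stream = {}
for k, v in records:
    stream[k] = stream.get(k, 0) + v

assert batch == stream == {"a": 5, "b": 1}
```

The difference is purely operational: the batch result is produced once at the end, while the streaming result is continuously updated and is correct at every prefix of the input.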
Francesco Versaci - Flink in genomics - efficient and scalable processing of ... - Flink Forward
http://flink-forward.org/kb_sessions/flink-in-genomics-efficient-and-scalable-processing-of-raw-illumina-bcl-data/
A single run in genome sequencing can easily produce several terabytes of data, which subsequently feed a complex pipeline of tools. Typically, the first step in this workflow is a rearrangement of data, roughly equivalent to a matrix transposition, to reconstruct the original DNA fragments from the raw BCL data, where the fragments are sliced and scattered over multiple files. This step is followed by the sorting of the fragments by a specific identifying tag sequence, which is attached during the preparation of the sample. In this talk we will present a parallel program which performs these essential operations. Our BCL converter is shown to have comparable performance to the shared-memory Illumina bcl2fastq tool, while also enabling easy and scalable distributed-memory parallelization. We will describe the techniques we have used to achieve high performance and discuss the features of Flink which we have particularly appreciated as well as the ones which we think are still missing.
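The rearrangement "roughly equivalent to a matrix transposition" can be pictured with a toy model (not Illumina's actual BCL layout): each input holds one sequencing cycle across all reads, and reconstruction regroups the bases by read.

```python
# cycles[i][r] = base read at cycle i for read r (toy data: 3 reads x 3 cycles)
cycles = ["AGT",   # cycle 1: first base of reads r0, r1, r2
          "CGA",   # cycle 2
          "TTC"]   # cycle 3

reads = ["".join(bases) for bases in zip(*cycles)]  # the transposition
print(reads)  # -> ['ACT', 'GGT', 'TAC']
```

At terabyte scale the same regrouping must happen out of core and in parallel, which is where a distributed dataflow engine like Flink comes in.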
Flink Forward Berlin 2017: Andreas Kunft - Efficiently executing R Dataframes... - Flink Forward
This document discusses providing an R dataframe abstraction for efficient distributed computation on Apache Flink. The goals are to provide a natural API for R and achieve performance comparable to Flink's native dataflow. The approach represents R dataframes as Flink data sets and compiles R functions into the native execution plan where possible. For user-defined R functions, they are evaluated within worker tasks using a just-in-time compiler. This allows executing R code within the same Java virtual machine as Flink for good performance, even on a single node. Results show it can achieve native Flink performance even for functions containing R code.
Measure your app internals with InfluxDB and Symfony2 - Corley S.r.l.
This document discusses using InfluxDB, a time-series database, to measure application internals in Symfony. It describes sending data from a Symfony app to InfluxDB using its PHP client library, and visualizing the data with Grafana dashboards. Key steps include setting up the InfluxDB client via dependency injection, dispatching events from controllers, listening for them to send data to InfluxDB, and building Grafana dashboards to view measurements over time.
Flink Forward Berlin 2017: Jörg Schad, Till Rohrmann - Apache Flink meets Apa... - Flink Forward
Apache Mesos allows operators to run distributed applications across an entire datacenter and is attracting ever increasing interest. As much as distributed applications see increased use enabled by Mesos, Mesos also sees increasing use due to a growing ecosystem of well integrated applications. One of the latest additions to the Mesos family is Apache Flink. Flink is one of the most popular open source systems for real-time high scale data processing and allows users to deal with low-latency streaming analytical workloads on Mesos. In this talk we explain the challenges solved while integrating Flink with Mesos, including how Flink’s distributed architecture can be modeled as a Mesos framework, and how Flink was integrated with Fenzo. Next, we describe how Flink was packaged to easily run on DC/OS.
Stream Loops on Flink - Reinventing the wheel for the streaming era - Paris Carbone
This document discusses adding iterative processing capabilities to stream processing systems like Apache Flink. It proposes programming model extensions that treat iterative computations as structured loops over windows. Progress would be tracked using progress timestamps rather than watermarks to allow for arbitrary loop structures. Challenges include managing state and cyclic flow control to avoid deadlocks while encouraging iteration completion.
This document provides a summary of upcoming features in Apache Flink, including stream SQL, queryable state, dynamic scaling of streaming programs, and consistent hashing. Stream SQL will allow running continuous SQL queries over infinite data streams and ingesting streams into a data warehouse. Queryable state will improve performance by allowing queries to Flink's internal state without external systems. Dynamic scaling will adjust a streaming program's parallelism without interrupting the application. Consistent hashing will improve state redistribution when changing a program's parallelism.
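The consistent-hashing idea for state redistribution can be sketched as follows: keys hash into a fixed number of key groups, and key groups (not individual keys) are assigned to operator instances, so rescaling only moves whole contiguous ranges of groups. This mirrors the spirit of Flink's key-group mechanism; the constant and names below are illustrative.

```python
MAX_PARALLELISM = 128  # fixed number of key groups, chosen up front

def key_group(key):
    """Stable bucket for a key; every key maps to exactly one group."""
    return hash(key) % MAX_PARALLELISM

def operator_index(group, parallelism):
    """Assign contiguous ranges of key groups to operator instances."""
    return group * parallelism // MAX_PARALLELISM

# Rescaling from 2 to 4 instances only splits each instance's range in half:
print([operator_index(g, 2) for g in (0, 63, 64, 127)])  # -> [0, 0, 1, 1]
print([operator_index(g, 4) for g in (0, 63, 64, 127)])  # -> [0, 1, 2, 3]
```

Because assignments are range-based, an instance restoring after a rescale reads its key groups as contiguous chunks of the snapshot rather than scanning all state for matching keys.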
Flink Forward San Francisco 2019: Scaling a real-time streaming warehouse wit... - Flink Forward
Scaling a real-time streaming warehouse with Apache Flink, Parquet and Kubernetes
At Branch, we process more than 12 billion events per day, and store and aggregate terabytes of data daily. We use Apache Flink for processing, transforming, and aggregating events, and Parquet as the data storage format. This talk covers our challenges with scaling our warehouse, namely:
- How did we scale our Flink-Parquet warehouse to handle a 3x increase in traffic?
- How do we ensure exactly-once, event-time based, fault-tolerant processing of events?
In this talk, we also provide an overview of deploying and scaling our streaming warehouse:
- How we scaled our Parquet warehouse by tuning memory
- Running on a Kubernetes cluster for resource management
- How we migrated our streaming jobs from Mesos to Kubernetes with no disruption
- Our challenges and learnings along the way
Flink Forward SF 2017: Timo Walther - Table & SQL API – unified APIs for bat... - Flink Forward
This document discusses Flink's Table and SQL APIs, which provide a unified way to write batch and streaming queries. It motivates the need for a relational API by explaining that while Flink's DataStream API is powerful, it requires more technical skills. The Table and SQL APIs allow users to focus on business logic by writing declarative queries. It describes how the APIs work, including translating queries to logical and execution plans and supporting batch, streaming and windowed queries. Finally, it outlines the current capabilities and opportunities for contributors to help expand Flink's relational features.
These are the slides that supported the presentation on Apache Flink at the ApacheCon Budapest.
Apache Flink is a platform for efficient, distributed, general-purpose data processing.
This document provides an overview of Apache Flink and stream processing. It discusses how stream processing has changed data infrastructure by enabling real-time analysis with low latency. Traditional batch processing had limitations like high latency of hours. Flink allows analyzing streaming data with sub-second latency using mechanisms like windows, state handling, and fault tolerance through distributed snapshots. The document benchmarks Flink performance against other frameworks on a Yahoo! production use case, finding Flink can achieve over 15 million messages/second throughput.
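The window mechanism mentioned above can be sketched as bucketing events by timestamp; the Yahoo! benchmark essentially counts ad events per campaign per window. A toy event-time tumbling window in pure Python (real Flink windows additionally handle watermarks, lateness, and state backends; field names here are invented):

```python
from collections import defaultdict

def tumbling_counts(events, window_ms):
    """events: (timestamp_ms, key) pairs -> {(window_start, key): count}."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - ts % window_ms  # bucket start for this event
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1000, "ad1"), (1500, "ad1"), (2200, "ad1"), (2300, "ad2")]
print(tumbling_counts(events, 1000))
# -> {(1000, 'ad1'): 2, (2000, 'ad1'): 1, (2000, 'ad2'): 1}
```

Because windows are keyed by event time rather than arrival time, out-of-order events still land in the correct bucket, which is what the distributed-snapshot machinery then makes fault tolerant.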
Flink Forward Berlin 2017: Dominik Bruhn - Deploying Flink Jobs as Docker Con...Flink Forward
This talk will focus on how to package, distribute and deploy Flink Jobs by leveraging existing docker technology: Previously deploying of Flink Jobs has been a manual job which leads into errors. In this talk, we present an approach which works well in an CI/CD environment by automating most steps: From the code of a Flink Job in a repository to a running Job on an YARN cluster.
Keystone Data Pipeline manages several thousand Flink pipelines, with variable workloads. These pipelines are simple routers which consume from Kafka and write to one of three sinks. In order to alleviate our operational overhead, we’ve implemented autoscaling for our routers. Autoscaling has reduced our resource usage by 25% - 45% (varying by region and time), and has reduced our on call burden. This talk will take an in depth look at the mathematics, algorithms, and infrastructure details for implementing autoscaling of simple pipelines at scale. It will also discuss future work for autoscaling complex pipelines.
Flink Forward Berlin 2017: Patrick Lucas - Flink in ContainerlandFlink Forward
Apache Flink, a powerful distributed stateful stream processing framework, is an especially good fit for deployment on a containerization platform: its storage requirement is primarily external (e.g. HDFS or S3), clusters often share the lifetime of the jobs that run on them, and the flexibility of allocating resources on such a platform allows for scaling jobs up and down as necessary. In this talk I will give a brief introduction to Apache Flink, then describe the journey to making it a first-class citizen of the container world. I will cover my experience preparing to publish the “official repository” of Flink images on Docker Hub, the challenges of fitting a Flink deployment in a Kubernetes-shaped box, and the rough edges of Flink itself that were exposed by this process.
In Data Engineer’s Lunch #41: Pygrametl , we discussed PygramETL, a python ETL tool in order to close out our series on them.
Accompanying Blog: https://blog.anant.us/data-engineers-lunch-41-pygrametl
Accompanying YouTube: https://youtu.be/YiPuJyYLXxs
Sign Up For Our Newsletter: http://eepurl.com/grdMkn
Join Data Engineer’s Lunch Weekly at 12 PM EST Every Monday:
https://www.meetup.com/Data-Wranglers-DC/events/
Cassandra.Link:
https://cassandra.link/
Follow Us and Reach Us At:
Anant:
https://www.anant.us/
Awesome Cassandra:
https://github.com/Anant/awesome-cassandra
Email:
solutions@anant.us
LinkedIn:
https://www.linkedin.com/company/anant/
Twitter:
https://twitter.com/anantcorp
Eventbrite:
https://www.eventbrite.com/o/anant-1072927283
Facebook:
https://www.facebook.com/AnantCorp/
Join The Anant Team:
https://www.careers.anant.us
http://flink-forward.org/kb_sessions/declarative-stream-processing-with-streamsql-and-cep/
Complex event processing (CEP) and stream analytics are commonly treated as distinct classes of stream processing applications. While CEP workloads identify patterns from event streams in near real-time, stream analytics queries ingest and aggregate high-volume streams. Both types of use cases have very different requirements which resulted in diverging system designs. CEP systems excel at low-latency processing whereas engines for stream analytics achieve high throughput. Recent advances in open source stream processing yielded systems that can process several millions of events per second at sub-second latency. Systems like Apache Flink enable applications that include typical CEP features as well as heavy aggregations. In this talk we will show how Apache Flink unifies CEP and stream analytics workloads. Guided by examples, we introduce Flink’s CEP-enriched StreamSQL interface and discuss how queries are compiled, optimized, and executed on Flink.
Flink Forward SF 2017: Kenneth Knowles - Back to Sessions overviewFlink Forward
Apache Beam lets you write data pipelines over unbounded, out-of-order, global-scale data that are portable across diverse backends including Apache Flink, Apache Apex, Apache Spark, and Google Cloud Dataflow. But not all use cases are pipelines of simple "map" and "combine" operations. Beam's new State API adds scalability and consistency to fine-grained stateful processing, all with Beam's usual portability. Examples of new use cases unlocked include: * Microservice-like streaming applications * Aggregations that aren't natural/efficient as an associative combiner * Fine control over retrieval and storage of intermediate values during aggregation * Output based on customized conditions, such as limiting to only "significant" changes in a learned model (resulting in potentially large cost savings in subsequent processing) This talk will introduce the new state and timer features in Beam and show how to use them to express common real-world use cases in a backend-agnostic manner.
Flink Forward SF 2017: Stefan Richter - Improvements for large state and reco...Flink Forward
Stateful stream processing with exactly-once guarantees is one of Apache Flink's distinctive features and we can observe that the scale of state that is managed by Flink in production constantly grows. This leads to a couple of interesting challenges for state handling in Flink. In this talk, we presents current and future developments to improve the handling of large state and recovery in Apache Flink. We show how to keep snapshots for large state swift and how to minimize negative effects on job performance through incremental and asynchronous checkpointing. Furthermore, we discuss how to greatly accelerate recovery under failures and for rescaling. In this context, we go into details about improved execution graph recovery, caching state on task managers, and considering new features of modern storage architectures for our state backends.
Flink Forward San Francisco 2019: Massive Scale Data Processing at Netflix us...Flink Forward
Over 137 million members worldwide are enjoying TV series, feature films across a wide variety of genres and languages on Netflix. It leads to petabyte scale of user behavior data. At Netflix, our client logging platform collects and processes this data to empower recommendations, personalization and many other services to enhance user experience. Built with Apache Flink, this platform processes 100s of billion events and a petabyte data per day, 2.5 million events/sec in sub milliseconds latency. The processing involves a series of data transformations such as decryption and data enrichment of customer, geo, device information using microservices based lookups.
The transformed and enriched data is further used by multiple data consumers for a variety of applications such as improving user-experience with A/B tests, tracking application performance metrics, tuning algorithms. This causes redundant reads of the dataset by multiple batch jobs and incurs heavy processing costs. To avoid this, we have developed a config driven, centralized, managed platform, on top of Apache Flink, that reads this data once and routes it to multiple streams based on dynamic configuration. This has resulted in improved computation efficiency, reduced costs and reduced operational overhead.
Stream processing at scale, while ensuring that the production systems are scalable and cost-efficient, brings interesting challenges. In this talk, we will share how we leverage Apache Flink to achieve this, the challenges we faced, and our learnings from running one of the largest Flink applications at Netflix.
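To make the config-driven routing concrete, here is a drastically simplified sketch: a router reads each event once and fans it out to whichever output streams a dynamic configuration names. All identifiers below are hypothetical illustrations, not Netflix's actual platform:

```java
import java.util.*;

// Minimal sketch of config-driven routing: read an event once,
// fan it out to whichever output streams the config names.
public class EventRouter {
    private final Map<String, List<String>> routes;

    public EventRouter(Map<String, List<String>> routes) {
        this.routes = routes;
    }

    // Output streams an event of this type should go to;
    // unknown types fall back to a catch-all stream.
    public List<String> route(String eventType) {
        return routes.getOrDefault(eventType, List.of("default-stream"));
    }

    public static void main(String[] args) {
        Map<String, List<String>> config = Map.of(
            "playback", List.of("recommendations", "ab-tests"),
            "error",    List.of("alerting"));
        EventRouter router = new EventRouter(config);
        System.out.println(router.route("playback")); // [recommendations, ab-tests]
    }
}
```

Because the mapping is data rather than code, it can be reloaded at runtime, which is what lets one job replace many redundant batch readers.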
Unify Enterprise Data Processing System Platform Level Integration of Flink a...Flink Forward
In this talk, I will present how Flink enables enterprise customers to unify their data processing systems by using Flink to query Hive data.
Unification of streaming and batch is a main theme for Flink. Since 1.9.0, we have integrated Flink with Hive at the platform level. I will talk about:
- what features we have released so far, and what they enable our customers to do
- best practices to use Flink with Hive
- what is the latest development status of Flink-Hive integration at the time of Flink Forward Berlin (Oct 2019), and what to look for in the next release (probably 1.11)
Flink Forward Berlin 2017: Fabian Hueske - Using Stream and Batch Processing ...Flink Forward
Apache Flink's DataStream API is very expressive and gives users precise control over time and state. However, many applications do not require this level of expressiveness and can be implemented more concisely and easily with a domain-specific API. SQL is undoubtedly the most widely used language for data processing but usually applied in the domain of batch processing. Apache Flink features two relational APIs for unified stream and batch processing: the Table API, a language-integrated relational query API for Scala and Java, and SQL. A Table API or SQL query computes the same result regardless of whether it is evaluated on a static file or on a Kafka topic. While Flink evaluates queries on batch input like a conventional query engine, queries on streaming input are continuously processed and their results constantly updated and refined. In this talk we present Flink's unified relational APIs, show how streaming SQL queries are processed, and discuss exciting new use-cases.
Francesco Versaci - Flink in genomics - efficient and scalable processing of ...Flink Forward
http://flink-forward.org/kb_sessions/flink-in-genomics-efficient-and-scalable-processing-of-raw-illumina-bcl-data/
A single run in genome sequencing can easily produce several terabytes of data, which subsequently feed a complex pipeline of tools. Typically, the first step in this workflow is a rearrangement of data, roughly equivalent to a matrix transposition, to reconstruct the original DNA fragments from the raw BCL data, where the fragments are sliced and scattered over multiple files. This step is followed by the sorting of the fragments by a specific identifying tag sequence, which is attached during the preparation of the sample. In this talk we will present a parallel program which performs these essential operations. Our BCL converter is shown to have comparable performance to the shared-memory Illumina bcl2fastq tool, while also enabling easy and scalable distributed-memory parallelization. We will describe the techniques we have used to achieve high performance and discuss the features of Flink which we have particularly appreciated as well as the ones which we think are still missing.
Flink Forward Berlin 2017: Andreas Kunft - Efficiently executing R Dataframes...Flink Forward
This document discusses providing an R dataframe abstraction for efficient distributed computation on Apache Flink. The goals are to provide a natural API for R and achieve performance comparable to Flink's native dataflow. The approach represents R dataframes as Flink data sets and compiles R functions into the native execution plan where possible. For user-defined R functions, they are evaluated within worker tasks using a just-in-time compiler. This allows executing R code within the same Java virtual machine as Flink for good performance, even on a single node. Results show it can achieve native Flink performance even for functions containing R code.
Measure your app internals with InfluxDB and Symfony2Corley S.r.l.
This document discusses using InfluxDB, a time-series database, to measure application internals in Symfony. It describes sending data from a Symfony app to InfluxDB using its PHP client library, and visualizing the data with Grafana dashboards. Key steps include setting up the InfluxDB client via dependency injection, dispatching events from controllers, listening for them to send data to InfluxDB, and building Grafana dashboards to view measurements over time.
Flink Forward Berlin 2017: Jörg Schad, Till Rohrmann - Apache Flink meets Apa...Flink Forward
Apache Mesos allows operators to run distributed applications across an entire datacenter and is attracting ever increasing interest. As much as distributed applications see increased use enabled by Mesos, Mesos also sees increasing use due to a growing ecosystem of well integrated applications. One of the latest additions to the Mesos family is Apache Flink. Flink is one of the most popular open source systems for real-time high scale data processing and allows users to deal with low-latency streaming analytical workloads on Mesos. In this talk we explain the challenges solved while integrating Flink with Mesos, including how Flink’s distributed architecture can be modeled as a Mesos framework, and how Flink was integrated with Fenzo. Next, we describe how Flink was packaged to easily run on DC/OS.
Stream Loops on Flink - Reinventing the wheel for the streaming eraParis Carbone
This document discusses adding iterative processing capabilities to stream processing systems like Apache Flink. It proposes programming model extensions that treat iterative computations as structured loops over windows. Progress would be tracked using progress timestamps rather than watermarks to allow for arbitrary loop structures. Challenges include managing state and cyclic flow control to avoid deadlocks while encouraging iteration completion.
This document provides a summary of upcoming features in Apache Flink, including stream SQL, queryable state, dynamic scaling of streaming programs, and consistent hashing. Stream SQL will allow running continuous SQL queries over infinite data streams and ingesting streams into a data warehouse. Queryable state will improve performance by allowing queries to Flink's internal state without external systems. Dynamic scaling will adjust a streaming program's parallelism without interrupting the application. Consistent hashing will improve state redistribution when changing a program's parallelism.
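One way to picture the consistent-hashing improvement: hash keys into a fixed number of "key groups" and give each operator instance a contiguous range of groups, so changing parallelism moves whole groups rather than rehashing every key. The sketch below is a simplified illustration of that idea, not Flink's actual implementation:

```java
// Sketch of key-group based state redistribution: a fixed maximum
// parallelism defines the number of key groups, and each operator
// instance owns a contiguous range of them. Simplified illustration.
public class KeyGroups {
    // Which key group a key's hash falls into (stable across rescaling).
    static int keyGroup(int keyHash, int maxParallelism) {
        return Math.floorMod(keyHash, maxParallelism);
    }

    // Which operator instance owns a key group at the current parallelism.
    static int operatorIndex(int keyGroup, int maxParallelism, int parallelism) {
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        int max = 128;
        int kg = keyGroup("user-42".hashCode(), max);
        // Rescaling from 2 to 4 instances moves whole groups, never single keys.
        System.out.println(operatorIndex(kg, max, 2) + " -> " + operatorIndex(kg, max, 4));
    }
}
```

Since a key's group never changes, rescaling only needs to reassign group ranges and ship each group's state to its new owner.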
Flink Forward San Francisco 2019: Scaling a real-time streaming warehouse wit...Flink Forward
Scaling a real-time streaming warehouse with Apache Flink, Parquet and Kubernetes
At Branch, we process more than 12 billion events per day, and store and aggregate terabytes of data daily. We use Apache Flink for processing, transforming and aggregating events, and Parquet as the data storage format. This talk covers our challenges with scaling our warehouse, namely:
How did we scale our Flink-Parquet warehouse to handle a 3x increase in traffic?
How do we ensure exactly once, event-time based, fault tolerant processing of events?
In this talk, we also provide an overview on deploying and scaling our streaming warehouse. We give an overview on:
How we scaled our Parquet warehouse by tuning memory
Running on Kubernetes cluster for resource management
How we migrated our streaming jobs with no disruption from Mesos to Kubernetes
Our challenges and learnings along the way
Flink Forward SF 2017: Timo Walther - Table & SQL API – unified APIs for bat...Flink Forward
This document discusses Flink's Table and SQL APIs, which provide a unified way to write batch and streaming queries. It motivates the need for a relational API by explaining that while Flink's DataStream API is powerful, it requires more technical skills. The Table and SQL APIs allow users to focus on business logic by writing declarative queries. It describes how the APIs work, including translating queries to logical and execution plans and supporting batch, streaming and windowed queries. Finally, it outlines the current capabilities and opportunities for contributors to help expand Flink's relational features.
These are the slides that supported the presentation on Apache Flink at the ApacheCon Budapest.
Apache Flink is a platform for efficient, distributed, general-purpose data processing.
This document provides an overview of Apache Flink and stream processing. It discusses how stream processing has changed data infrastructure by enabling real-time analysis with low latency. Traditional batch processing had limitations like high latency of hours. Flink allows analyzing streaming data with sub-second latency using mechanisms like windows, state handling, and fault tolerance through distributed snapshots. The document benchmarks Flink performance against other frameworks on a Yahoo! production use case, finding Flink can achieve over 15 million messages/second throughput.
This document discusses Storm, an open-source distributed real-time computation system. It provides an overview of the key Storm concepts and components including Nimbus, ZooKeeper, spouts, bolts, streams, and topologies. It also covers installing Storm using Java, Maven and Spring Tool Suite and developing a sample topology to demonstrate how Storm processes data streams in a distributed fashion.
Yarn Resource Management Using Machine Learningojavajava
HadoopCon 2016 in Taiwan - Maximizing the utilization of Hadoop computing power is the biggest challenge for Hadoop administrators. In this talk I will explain how we use machine learning to build a prediction model for computing power requirements and set the MapReduce scheduler parameters dynamically, to fully utilize our Hadoop cluster's computing power.
Hadoop con 2016_9_10_王經篤(Jing-Doo Wang)Jing-Doo Wang
This document summarizes a presentation on potential applications using the class frequency distribution of maximal repeats from tagged sequential data. It discusses using maximal repeat patterns and their frequency distributions over time to analyze trends in topic histories from literature, detect anomalies in manufacturing processes for quality control, and identify distinguishing patterns in genomic sequences. Potential applications discussed include text mining historical archives, individualized learning based on topic histories, detecting changes in language for elderly assessment, monitoring new word adoption, and integrating IoT sensor data with product traceability systems for industrial quality assurance.
How to plan a hadoop cluster for testing and production environmentAnna Yen
Athemaster wants to share our experience planning hardware specs, server initialization, and role deployment with new Hadoop users. There are 2 testing environments and 3 production environments for the case study.
Full Video: https://www.youtube.com/watch?v=cOShsisEsC0
An overview of the relation and combination of three data processing paradigms that is becoming more relevant today. It introduces the essentials of graph, distributed and stream computing and beyond. Furthermore, it questions the fundamental problems that we want to solve with data analysis and the potential of eventually saving humankind in the next millennium by improving the state of the art of computation technologies while being too busy answering first-world-problem questions. Crazy but possible.
This document discusses using Jupyter Notebook for machine learning projects with Spark. It describes running Python, Spark, and pandas code in Jupyter notebooks to work with data from various sources and build machine learning models. Key points include using notebooks for an ML pipeline, running Spark jobs, visualizing data, and building word embedding models with Spark. The document emphasizes how Jupyter notebooks allow integrating various tools for an ML workflow.
This document provides an overview of a business intelligence (BI) system architecture. It includes a product database using Attunity for change data capture fed into a Teradata data warehouse. An ETL system extracts and transforms the data from the warehouse for analysis in Tableau, a BI reporting tool. Centralized logging of the database, applications, and web console are stored in a separate logging database.
This document discusses Hivemall, a machine learning library for Apache Hive and Spark. It was developed by Makoto Yui as a personal research project to make machine learning easier for SQL developers. Hivemall implements various machine learning algorithms like logistic regression, random forests, and factorization machines as user-defined functions (UDFs) for Hive, allowing machine learning tasks to be performed using SQL queries. It aims to simplify machine learning by abstracting it through the SQL interface and enabling parallel and interactive execution on Hadoop.
Achieve big data analytic platform with lambda architecture on cloudScott Miao
This document discusses achieving a big data analytic platform using the Lambda architecture on cloud infrastructure. It begins by explaining why moving to the cloud provides benefits like elastic scaling, reduced operational overhead, and increased focus on innovation. Common cloud services at Trend Micro like an analytic engine and cloud storage are then described. The document introduces the Lambda architecture and proposes a serving layer as a service. Key lessons learned from building big data solutions on AWS include the pros of unlimited scalability and easy disaster recovery compared to on-premises infrastructure.
SparkR - Play Spark Using R (20160909 HadoopCon)wqchen
1. Introduction to SparkR
2. Demo
Starting to use SparkR
DataFrames: dplyr style, SQL style
RDD v.s. DataFrames
SparkR on MLlib: GLM, K-means
3. User Case
Median: approxQuantile()
ID Match: dplyr style, SQL style, SparkR function
SparkR + Shiny
4. The Future of SparkR
Joined by Rick Nelson, Technical Solutions Architect from NGINX, Server Density takes you through the do's and don'ts of monitoring NGINX: critical and non-critical metrics to monitor, important alerts to configure, and the best monitoring tools available.
This document discusses continuous counting on data streams using Apache Flink. It begins by introducing streaming data and how counting is an important but challenging problem. It then discusses issues with batch-oriented and lambda architectures for counting. The document presents Flink's streaming architecture and DataStream API as solutions. It discusses requirements for low-latency, high-efficiency counting on streams, as well as fault tolerance, accuracy, and queryability. Benchmark results show Flink achieving sub-second latencies and high throughput. The document closes by overviewing upcoming features in Flink like SQL and dynamic scaling.
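The core of continuous counting can be illustrated with a toy, single-process version: update a per-key count for the event's tumbling window the moment each event arrives, instead of recounting in periodic batch jobs. This sketch deliberately ignores everything that makes the real problem hard (distribution, fault tolerance, out-of-order events):

```java
import java.util.*;

// Toy continuous counter: one-at-a-time updates of per-key counts
// in tumbling event-time windows, instead of periodic batch jobs.
public class ContinuousCounter {
    private final long windowMillis;
    private final Map<String, Long> counts = new HashMap<>();

    public ContinuousCounter(long windowMillis) { this.windowMillis = windowMillis; }

    // Align the event timestamp to the start of its tumbling window.
    long windowStart(long eventTimeMillis) {
        return eventTimeMillis - (eventTimeMillis % windowMillis);
    }

    // Update the count immediately as each event arrives.
    public long add(String key, long eventTimeMillis) {
        String slot = key + "@" + windowStart(eventTimeMillis);
        return counts.merge(slot, 1L, Long::sum);
    }

    public static void main(String[] args) {
        ContinuousCounter c = new ContinuousCounter(5_000);
        c.add("clicks", 1_000);
        c.add("clicks", 4_999);   // same 5-second window
        long n = c.add("clicks", 4_000);
        System.out.println(n);    // 3
    }
}
```

Because counts are updated per event, results are available with sub-second latency, the property the benchmarks in the talk measure.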
Continuous Processing with Apache Flink - Strata London 2016Stephan Ewen
Talk from the Strata & Hadoop World conference in London, 2016: Apache Flink and Continuous Processing.
The talk discusses some of the shortcomings of building continuous applications via batch processing, and how a stream processing architecture naturally solves many of these issues.
Hadoop con2016 - Implement Real-time Centralized logging System by Elastic StackLen Chang
This document proposes implementing a real-time centralized logging system using the Elastic Stack. It introduces Elastic Stack components like Filebeat, Elasticsearch, and Kibana. It then provides a use case of converting log timestamps to a standard sort format using Logstash filters like grok and date. The presenter works at WeMo Scooter, an electric scooter rental startup aiming to reduce emissions. He is interested in technologies like Elastic Stack, PostgreSQL, and Spark.
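The timestamp-conversion step described above might look roughly like the following Logstash filter; the field names and log pattern here are invented for illustration:

```conf
filter {
  # Pull the raw timestamp out of the log line (pattern is hypothetical).
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{GREEDYDATA:log_message}" }
  }
  # Parse it and overwrite @timestamp so Elasticsearch sorts by event time.
  date {
    match  => ["log_time", "ISO8601"]
    target => "@timestamp"
  }
}
```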
This introductory level talk is about Apache Flink: a multi-purpose Big Data analytics framework leading a movement towards the unification of batch and stream processing in the open source.
With the many technical innovations it brings along with its unique vision and philosophy, it is considered the 4th generation (4G) of Big Data analytics frameworks, providing the only hybrid (real-time streaming + batch) open source distributed data processing engine supporting many use cases: batch, streaming, relational queries, machine learning and graph processing.
In this talk, you will learn about:
1. What is the Apache Flink stack, and how does it fit into the Big Data ecosystem?
2. How does Apache Flink integrate with Hadoop and other open source tools for data input and output, as well as deployment?
3. Why is Apache Flink an alternative to Apache Hadoop MapReduce, Apache Storm and Apache Spark?
4. Who is using Apache Flink?
5. Where to learn more about Apache Flink?
DBCC 2021 - FLiP Stack for Cloud Data LakesTimothy Spann
With Apache Pulsar, Apache NiFi, Apache Flink. The FLiP(N) Stack for Event processing and IoT. With StreamNative Cloud.
DBCC International – Friday 15.10.2021
Powered by Apache Pulsar, StreamNative provides a cloud-native, real-time messaging and streaming platform to support multi-cloud and hybrid cloud strategies.
OpenLineage for Stream Processing | Kafka Summit LondonHostedbyConfluent
"OpenLineage is an open platform for the collection and analysis of data lineage, which includes an open standard for lineage data collection, integration libraries for the most common tools, and a metadata repository/reference implementation (Marquez).
In recent months, stream processing, which is an important use case for Apache Kafka, has gained the particular focus of the OpenLineage community with many useful features completed or begun, including:
* A seamless OpenLineage & Apache Flink integration,
* Support for streaming jobs in Marquez,
* Progress on a built-in lineage API within the Flink codebase.
Cross-platform lineage allows for a holistic overview of data flow and its dependencies within organizations, including stream processing.
This talk will provide an overview of the most recent developments in the OpenLineage Flink integration and share what’s in store for this important collaboration.
This talk is a must-attend for those wishing to stay up-to-date on lineage developments in the stream processing world."
Near real-time anomaly detection at Lyftmarkgrover
Near real-time anomaly detection at Lyft, by Mark Grover and Thomas Weise at Strata NY 2018.
https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/69155
This document discusses building stream processing as a service (SPaaS) using Apache Flink. It introduces Flink's stream processing capabilities and describes how to build a SPaaS offering with different levels of complexity and ease of use. It also covers the Keystone router for simple stream routing, building custom Flink jobs, and techniques for recovering from failures using backfill from Hive or rewinding the Flink job.
At the beginning of 2021, Shopify Data Platform decided to adopt Apache Flink to enable modern stateful stream-processing. Shopify had a lot of experience with other streaming technologies, but Flink was a great fit due to its state management primitives.
After about six months, Shopify now has a flourishing ecosystem of tools, tens of prototypes from many teams across the company and a few large use-cases in production.
Yaroslav will share a story about not just building a single data pipeline but building a sustainable ecosystem. You can learn about how they planned their platform roadmap, the tools and libraries Shopify built, the decision to fork Flink, and how Shopify partnered with other teams and drove the adoption of streaming at the company.
Aljoscha Krettek offers a very short introduction to stream processing before diving into writing code and demonstrating the features in Apache Flink that make truly robust stream processing possible, with a focus on correctness and robustness in stream processing.
All of this will be done in the context of a real-time analytics application that we’ll be modifying on the fly based on the topics we’re working though, as Aljoscha exercises Flink’s unique features, demonstrates fault recovery, clearly explains why event time is such an important concept in robust, stateful stream processing, and covers the features you need in a stream processor to do robust, stateful stream processing in production.
We’ll also use a real-time analytics dashboard to visualize the results we’re computing in real time, allowing us to easily see the effects of the code we’re developing as we go along.
Topics include:
* Apache Flink
* Stateful stream processing
* Event time versus processing time
* Fault tolerance
* State management in the face of faults
* Savepoints
* Data reprocessing
This document provides an overview of Apache Flink, an open-source stream processing framework. It discusses Flink's capabilities in supporting streaming, batch, and iterative processing natively through a streaming dataflow model. It also describes Flink's architecture including the client, job manager, task managers, and various execution setups like local, remote, YARN, and embedded. Finally, it compares Flink to other stream and batch processing systems in terms of their APIs, fault tolerance guarantees, and strengths.
Towards Apache Flink 2.0 - Unified Data Processing and Beyond, Bowen LiBowen Li
This talk was presented at Scale by the bay conference on Nov 14, 2019.
As the most popular and widely adopted stream processing framework, Apache Flink powers some of the world's largest stream processing use cases at companies like Netflix, Alibaba, Uber, Lyft, Pinterest, Yelp, etc.
In this talk, we will first go over use cases and basic (yet hard to achieve!) requirements of stream processing, and how Flink stands out with some of its unique core building blocks, like pipelined execution, native event time support, state support, and fault tolerance.
We will then take a look at how Flink is going beyond stream processing into areas like unified streaming/batch data processing, enterprise integration with Hive, AI/machine learning, and serverless computation, how Flink fits in with its distinct value, and what development is going on in the Flink community to close the gap.
Advanced Stream Processing with Flink and Pulsar - Pulsar Summit NA 2021 KeynoteStreamNative
In this talk, Till Rohrmann and Addison Higham discuss how Flink allows for ambitious stream processing workflows and how Pulsar and Flink enable new capabilities that push forward the state-of-the-art in streaming. They will also share upcoming features and new capabilities in the integrations between Flink and Pulsar and how these two communities are working together to truly advance the power of stream processing.
LWA 2015: The Apache Flink Platform for Parallel Batch and Stream AnalysisJonas Traub
This is our presentation for the German paper "Die Apache Flink Plattform zur parallelen Analyse von Datenströmen und Stapeldaten" which was published in Proceedings of the
LWA 2015 Workshops: KDML, FGWM, IR, and FGDB. Trier, Germany, 7.-9. October 2015. Link: http://ceur-ws.org/Vol-1458/H02_CRC79_Traub.pdf
GOTO Night Amsterdam - Stream processing with Apache FlinkRobert Metzger
This document discusses Apache Flink, an open source stream processing framework. It provides an overview of Flink and how it enables low-latency stream processing compared to traditional batch processing systems. Key aspects covered include windowing, state handling, fault tolerance, and performance benchmarks showing Flink can achieve high throughput. The document demonstrates how Flink addresses challenges like out-of-order events, state management, and exactly-once processing through features like event-time processing, managed state, and distributed snapshots.
http://www.learntek.org/product/apache-flink/
Apache Flink is an open source stream processing framework developed by the Apache Software Foundation. The core of Apache Flink is a distributed streaming dataflow engine written in Java and Scala. Apache Flink’s dataflow programming model provides event-at-a-time processing on both finite and infinite datasets. At a basic level, Flink programs consist of streams and transformations. Conceptually, a stream is a (potentially never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and produces one or more output streams as a result. Programs can be written in Java, Scala, Python, and SQL and are automatically compiled and optimized into dataflow programs that are executed in a cluster or cloud environment.
http://www.learntek.org
Learntek is a global online training provider for Big Data Analytics, Hadoop, Machine Learning, Deep Learning, IoT, AI, Cloud Technology, DevOps, Digital Marketing and other IT and Management courses. We are dedicated to designing, developing and implementing training programs for students, corporate employees and business professionals.
https://www.learntek.org/apache-flink/
https://www.learntek.org/
Apache Flink Training
This document provides an introduction to Apache Flink and its streaming capabilities. It discusses stream processing as an abstraction and how Flink uses streams as a core abstraction. It compares Flink streaming to Spark streaming and outlines some of Flink's streaming features like windows, triggers, and how it handles event time.
The document outlines the topics that will be covered in an Apache Flink online training, including: what Apache Flink is; why use Apache Flink; its architecture, features, and deployment; its streaming, batch processing, and table APIs; complex event processing; graph processing; and integration with Hadoop. The training will cover Apache Flink's stream processing engine, fault tolerance, state management, and support for stream, batch, and iterative processing using its dataflow model.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs.This meetup was formerly Milvus Meetup, and is sponsored by Zilliz maintainers of Milvus.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
2. ● 戴資力(Gordon)
● Apache Flink Committer
● Co-organizer of Apache Flink Taiwan User Group
● Software Engineer @ VMFive
● Java, Scala
● Enjoy developing distributed computing systems
00 Who am I?
3. ● An understanding of Apache Flink
● Flink’s streaming-first philosophy
● What exactly is a “streaming dataflow engine”?
00 This session will be about ...
4.
Apache Flink
an open-source platform for distributed
stream and batch data processing
● Apache Top-Level Project since Jan. 2015
● Streaming Dataflow Engine at its core
○ Low latency
○ High Throughput
○ Stateful
○ Distributed
5.
Apache Flink
an open-source platform for distributed
stream and batch data processing
● ~100 active contributors for Flink 1.1.x release
● Used in production by:
○ Alibaba - realtime search ranking optimization
○ Uber - ride request fulfillment marketplace
○ Netflix - Stream Processing as a Service (SPaaS)
○ King (gaming) - realtime data science dashboard
○ ...
7.
02 Scala Collection-like API
DataSet API:

  case class Word(word: String, count: Int)

  val lines: DataSet[String] = env.readTextFile(...)
  lines.flatMap(_.split(" ")).map(word => Word(word, 1))
    .groupBy("word").sum("count")
    .print()

DataStream API:

  val lines: DataStream[String] = env.addSource(new KafkaSource(...))
  lines.flatMap(_.split(" ")).map(word => Word(word, 1))
    .keyBy("word").timeWindow(Time.seconds(5)).sum("count")
    .print()
8.
02 Scala Collection-like API
.filter(...).flatMap(...).map(...).groupBy(...).reduce(...)
● Becoming the de facto standard style for expressing data pipelines in new-generation APIs
● Apache Spark, Apache Flink, Apache Beam, ...
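To see why the API feels collection-like, here is the same word-count chain on a plain local Scala List. This is my sketch with a made-up input string; no Flink is involved:

```scala
case class Word(word: String, count: Int)

// The same flatMap/map/groupBy shape the DataSet and DataStream
// programs use, applied to an in-memory collection.
val lines = List("to be or not to be")
val counts = lines
  .flatMap(_.split(" "))
  .map(word => Word(word, 1))
  .groupBy(_.word)
  .map { case (w, ws) => Word(w, ws.map(_.count).sum) }
  .toList
// counts contains Word("to", 2), Word("be", 2), Word("or", 1), Word("not", 1)
```

The slide's DataSet and DataStream programs follow exactly this shape; only the source (file, Kafka) and the grouping (distributed, keyed) differ.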
20. ● True one-at-a-time streaming
● Tasks are scheduled and executed concurrently
● Data transmission is pushed across distributed queues
05 Streaming Dataflow Engine
(Diagram: a Flink job as a chain of tasks running from source to sink.)
21. ● Backpressure: receiving data at a higher rate than the system can
process during a temporary load spike
○ e.g. GC pauses at processing tasks
○ e.g. a natural load spike at the data source
06 Backpressure Handling
(Diagrams: the normal stable case, a temporary load spike, and ideal backpressure handling.)
22. 07 How Distributed Queues Work
1. Record “A” enters Task 1, and is processed
2. The record is serialized into an output buffer at Task 1
3. The buffer is shipped to Task 2’s input buffer
Observation: Buffers need to be available throughout the process
(think Java blocking queues used between threads)
23. 07 How Distributed Queues Work
(Same diagram, annotated: the output buffer is taken from an output buffer pool; the input buffer from an input buffer pool.)
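The blocking-queue analogy can be made concrete. In this sketch (my illustration, not Flink code), a bounded `java.util.concurrent.ArrayBlockingQueue` stands in for the buffers between two tasks running as threads:

```scala
import java.util.concurrent.ArrayBlockingQueue

// A bounded queue standing in for the buffers between Task 1 and Task 2.
val channel = new ArrayBlockingQueue[String](4)

// "Task 1": serializes record "A" into a buffer and ships it.
val task1 = new Thread(() => channel.put("record A"))

// "Task 2": takes the buffer from its input side and reads the record.
var received: String = null
val task2 = new Thread(() => received = channel.take())

task1.start(); task2.start()
task1.join(); task2.join()
// received is now "record A"
```

`put()` blocks when no buffer is free and `take()` blocks when none is filled, which is exactly the availability property the slide points out.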
24. 07 How Distributed Queues Work
● The size of the buffer pools defines how much buffering Flink can do
when sender and receiver run at different speeds
● More memory means Flink can buffer away transient, short-lived
backpressure
● Less memory means a more immediate backpressure response
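The trade-off can be demonstrated with a bounded queue standing in for a buffer pool (an illustration under that assumption, not Flink internals): with only two buffers, a third send cannot obtain a buffer until the receiver frees one.

```scala
import java.util.concurrent.ArrayBlockingQueue

val bufferPool = new ArrayBlockingQueue[String](2) // only 2 buffers

bufferPool.put("record A") // sender grabs buffer 1
bufferPool.put("record B") // sender grabs buffer 2
// Pool exhausted: a non-blocking send fails, i.e. the sender is backpressured.
val sent = bufferPool.offer("record C")
// sent == false

bufferPool.take()                          // receiver drains one buffer...
val retried = bufferPool.offer("record C") // ...and the sender proceeds
// retried == true
```

Growing the capacity beyond 2 delays the point at which `offer()` fails, which is the "more memory buffers away transient spikes" trade-off in queue form.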
25. ● Thanks to one-at-a-time processing, Flink has very powerful
built-in windowing (among the most flexible of current
streaming frameworks)
○ Time-driven: tumbling window, sliding window
○ Data-driven: count window, session window
08 Flexible Windows
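As a rough illustration of tumbling versus sliding windows on a plain Scala collection (my sketch; Flink's real windows are per-key, distributed, and driven by time or arriving data):

```scala
val stream = List(1, 2, 3, 4, 5, 6, 7)

// Data-driven tumbling count window of size 3, emulated with grouped():
val tumbling = stream.grouped(3).map(_.sum).toList
// tumbling == List(6, 15, 7)

// Sliding window of size 3 with slide 2, emulated with sliding():
val slid = stream.sliding(3, 2).map(_.sum).toList
// windows (1,2,3), (3,4,5), (5,6,7) -> overlapping by one element
```

Tumbling windows partition the stream into disjoint chunks, while sliding windows overlap; time-driven variants do the same over timestamps instead of element counts.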