Realizing the promise of portable data processing with Apache Beam
1. Abstract
The world of big data involves an ever-changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. In a way, Apache Beam is the glue that connects the Big Data ecosystem together; it enables users to "run-anything-anywhere".
This talk will briefly cover the capabilities of the Beam model for data processing, as well as the current state of the Beam ecosystem. We'll discuss Beam architecture and dive into the portability layer. We'll offer a technical analysis of Beam's powerful primitive operations that enable true and reliable portability across diverse environments. Finally, we'll demonstrate a complex pipeline running on multiple runners in multiple deployment scenarios (e.g. Apache Spark on Amazon Web Services, Apache Flink on Google Cloud, Apache Apex on-premises), and give a glimpse at some of the challenges Beam aims to address in the future.
This session is an intermediate-level talk in our IoT and Streaming track. It focuses on Apache Flink, Apache Kafka, Apache Spark, Cloud, and other technologies, and is geared towards Architect, Data Scientist, Data Analyst, Developer / Engineer, and Operations / IT audiences.
2. Realizing the promise of portable data processing with Apache Beam
Davor Bonaci
PMC Chair, Apache Beam
Senior Software Engineer, Google Inc.
3. Apache Beam: Open Source data processing APIs
● Expresses data-parallel batch and streaming algorithms using one unified API
● Cleanly separates data processing logic from runtime requirements
● Supports execution on multiple distributed processing runtime environments
4. Apache Beam is
a unified programming model
designed to provide
efficient and portable
data processing pipelines
5. Agenda
1. Road to the first stable release
2. Expressing data-parallel pipelines with the Beam model
3. The Beam vision for portability
a. Parallel and portable pipelines in practice
4. Extensibility to integrate the entire Big Data ecosystem
6. Apache Beam at DataWorks Summit
● Realizing the promise of portable data processing with Apache Beam
○ Speaker: Davor Bonaci, Google
○ Wednesday @ 11:30 am
● Stateful processing of massive out-of-order streams with Apache Beam
○ Speaker: Kenneth Knowles, Google
○ Wednesday @ 3:00 pm
● Birds-of-a-feather: IoT, Streaming and Data Flow
○ Panel: Yolanda Davis, Davor Bonaci, P. Taylor Goetz, Sriharsha Chintalapani,
and Joseph Nimiec
○ Thursday @ 5:00 pm
8. What have we accomplished so far?
● 02/01/2016: Enter Apache Incubator
● Early 2016: Design for use cases, begin refactoring
● 06/14/2016: 1st incubating release
● Late 2016: Community growth
● Early 2017: API stabilization
● 01/10/2017: Graduation as a top-level project
● 05/16/2017: First stable release
12. The Beam Model: asking the right questions
What results are calculated?
Where in event time are results calculated?
When in processing time are results materialized?
How do refinements of results relate?
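These four questions map directly onto windowing, triggering, allowed lateness, and accumulation mode in the Beam API. The fragment below is an illustrative sketch in the Beam Java SDK (it assumes the SDK is on the classpath; the specific durations and trigger choices are example values, not prescriptions):

```
PCollection<KV<String, Integer>> scores = input
    .apply(Window.<KV<String, Integer>>into(
            FixedWindows.of(Duration.standardMinutes(2)))   // Where in event time
        .triggering(AfterWatermark.pastEndOfWindow()        // When in processing time
            .withEarlyFirings(AfterProcessingTime
                .pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(1)))
            .withLateFirings(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(Duration.standardDays(1))
        .accumulatingFiredPanes())                          // How refinements relate
    .apply(Sum.integersPerKey());                           // What is calculated
```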
15. The Beam Model: Where in event time?
PCollection<KV<String, Integer>> scores = input
    .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2))))
    .apply(Sum.integersPerKey());
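To make concrete what the windowed sum above computes, here is a minimal plain-Java sketch of the semantics (no Beam dependency; the class and method names are invented for illustration): each keyed, timestamped value is assigned to a fixed two-minute event-time window, and values are summed per key and window.

```java
import java.util.*;

public class FixedWindowSum {
    // Width of each fixed event-time window, in milliseconds (2 minutes).
    static final long WINDOW_MS = 2 * 60 * 1000;

    // Sums values per (key, window start), mimicking
    // Window.into(FixedWindows) followed by Sum.integersPerKey().
    // Each event is {String key, Long timestampMs, Integer value}.
    static Map<String, Map<Long, Integer>> sumPerKeyAndWindow(List<Object[]> events) {
        Map<String, Map<Long, Integer>> result = new HashMap<>();
        for (Object[] e : events) {
            String key = (String) e[0];
            long ts = (Long) e[1];
            int value = (Integer) e[2];
            long windowStart = ts - (ts % WINDOW_MS); // fixed-window assignment
            result.computeIfAbsent(key, k -> new TreeMap<>())
                  .merge(windowStart, value, Integer::sum);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object[]> events = Arrays.asList(
            new Object[] {"user1", 10_000L, 3},   // window [0s, 120s)
            new Object[] {"user1", 110_000L, 4},  // window [0s, 120s)
            new Object[] {"user1", 130_000L, 5},  // window [120s, 240s)
            new Object[] {"user2", 50_000L, 7});  // window [0s, 120s)
        System.out.println(sumPerKeyAndWindow(events));
    }
}
```

Note that, unlike this toy, Beam assigns windows without assuming the data is finite or ordered; the grouping and summing can fire incrementally as the watermark advances.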
23. Beam Vision: mix and match SDKs and runtimes
● The Beam Model: the abstractions at the core of Apache Beam
● Choice of SDK: Users write their pipelines in a language that’s familiar and integrated with their other tooling
● Choice of Runners: Users choose the right runtime for their current needs -- on-prem / cloud, open source / not, fully managed / not
● Scalability for Developers: Clean APIs allow developers to contribute modules independently
[Diagram: language-specific SDKs (Language A / B / C) sit on top of the Beam Model, which maps onto multiple runners (Runner 1 / 2 / 3)]
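In practice, "choice of runners" means the pipeline code stays the same and the runner is selected through pipeline options at launch time. A sketch using the Beam Java SDK (assumes the SDK and the chosen runner's artifacts are on the classpath; illustrative, not exhaustive):

```
// Same pipeline code; the runner is chosen at launch, e.g. with a flag like:
//   --runner=SparkRunner | FlinkRunner | ApexRunner | DataflowRunner
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
Pipeline p = Pipeline.create(options);
// ... build the pipeline with p.apply(...) as usual ...
p.run();
```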
24. Beam Vision: as of June 2017
● Beam’s Java SDK runs on multiple runtime environments, including:
○ Apache Apex
○ Apache Spark
○ Apache Flink
○ Google Cloud Dataflow
○ [in development] Apache Gearpump
● Cross-language infrastructure is in progress.
○ Beam’s Python SDK currently runs on Google Cloud Dataflow
[Diagram: Java and Python SDKs construct pipelines against the Beam Model; Beam Model Fn runners execute them on Apache Apex, Apache Flink, Apache Gearpump, Apache Spark, and Cloud Dataflow]
25. Example Beam Runners
Apache Spark
● Open-source cluster-computing framework
● Large ecosystem of APIs and tools
● Runs on premises or in the cloud
Apache Flink
● Open-source distributed data processing engine
● High-throughput and low-latency stream processing
● Runs on premises or in the cloud
Google Cloud Dataflow
● Fully-managed service for batch and stream data processing
● Provides dynamic auto-scaling, monitoring tools, and tight integration with Google Cloud Platform
55. IO connectors
[Diagram: language-specific SDKs (Language A / B / C) on top of the Beam Model, connected to multiple IO connectors (IO connector 1 / 2 / 3)]
56. File systems
[Diagram: language-specific SDKs (Language A / B / C) on top of the Beam Model, connected to multiple file systems (File system 1 / 2 / 3)]
57. Ecosystem integration
● I have an engine
→ write a Beam runner
● I want to extend Beam to new languages
→ write an SDK
● I want to adapt an SDK to a target audience
→ write a DSL
● I want a component that can be part of a bigger data-processing pipeline
→ write a library of transformations
● I have a data storage or messaging system
→ write an IO connector or a file system connector
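As a sketch of the "library of transformations" path: a reusable Beam transform is typically packaged as a PTransform subclass so it composes with any pipeline, in any runner. An illustrative fragment in the Beam Java SDK (the class name and word-counting logic are invented for the example):

```
// A reusable composite transform: counts occurrences of each word.
public class CountWords
    extends PTransform<PCollection<String>, PCollection<KV<String, Long>>> {
  @Override
  public PCollection<KV<String, Long>> expand(PCollection<String> lines) {
    return lines
        .apply(FlatMapElements.into(TypeDescriptors.strings())
            .via(line -> Arrays.asList(line.split("\\s+"))))
        .apply(Count.perElement());
  }
}
```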
59. Learn more and get involved!
Apache Beam
https://beam.apache.org
Join the Beam mailing lists!
user-subscribe@beam.apache.org
dev-subscribe@beam.apache.org
Follow @ApacheBeam on Twitter
60. Apache Beam is
a unified programming model
designed to provide
efficient and portable
data processing pipelines
61. Still coming up...
● Stateful processing of massive out-of-order streams with Apache Beam
○ Speaker: Kenneth Knowles, Google
○ Wednesday @ 3:00 pm
● Birds-of-a-feather: IoT, Streaming and Data Flow
○ Panel: Yolanda Davis, Davor Bonaci, P. Taylor Goetz, Sriharsha Chintalapani,
and Joseph Nimiec
○ Thursday @ 5:00 pm