This document summarizes a presentation on event processing with Apache Storm, given by Nuno Caneco at Tuga IT 2017 in Lisbon, Portugal. It covers:
- Why stream processing is important.
- What Storm is and its core concepts: topologies, spouts, bolts, and tuples.
- Examples, plus features such as Trident and Storm SQL.
- A question-and-answer section and thanks to the sponsors.
6. Stream Processing - Why?
WHY
● Data is crucial for business
● New data is always being generated
● Companies want to extract value from data in “real-time”
USE CASES
● Fraud detection
● Sensor data aggregation
● Live monitoring
7. What is Storm?
Apache Storm is a free and open source distributed realtime computation system.
Storm makes it easy to reliably process unbounded streams of data, doing for realtime
processing what Hadoop did for batch processing. Storm is simple, can be used with
any programming language, and is a lot of fun to use!
Storm has many use cases: real time analytics, online machine learning, continuous
computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at
over a million tuples processed per second per node. It is scalable, fault-tolerant,
guarantees your data will be processed, and is easy to set up and operate.
http://storm.apache.org/
11. Topology
Topologies combine individual work units to be applied to input data
[Diagram: data enters through a Spout, which emits Tuples into a graph of Bolts (A–D); the terminating Bolt emits the processed data out of the Topology]
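As a sketch, assembling and running such a topology locally looks like this (Storm 2.x API, requires storm-client on the classpath; SentenceSpout and SplitSentenceBolt are illustrative class names, not from the talk):

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

// Wires a spout and a bolt into a topology and submits it to an
// in-process local cluster for testing.
class TopologyExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new SentenceSpout());
        builder.setBolt("bolt-a", new SplitSentenceBolt())
               .shuffleGrouping("spout"); // bolt-a consumes the spout's stream

        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("demo", new Config(), builder.createTopology());
        }
    }
}
```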
12. Spout
● First node of every topology
○ Collects data from the outside world
○ Injects the data on the topology in order to be processed
● Must implement ISpout interface
○ BaseRichSpout is a more convenient abstract class
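A minimal sketch of a spout extending BaseRichSpout (Storm 2.x signatures; the hard-coded sentences are illustrative, a real spout would read from an external source such as a message queue):

```java
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// Emits hard-coded sentences in a round-robin fashion.
class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final String[] sentences = {"the cow jumped over the moon", "an apple a day"};
    private int index = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector; // keep the collector to emit tuples later
    }

    @Override
    public void nextTuple() {
        // Called repeatedly by Storm; emits one tuple per call
        collector.emit(new Values(sentences[index]));
        index = (index + 1) % sentences.length;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence")); // names the emitted field
    }
}
```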
13. Bolt
● Middle or terminating nodes of a topology
● Implements a Unit of Data Processing
● Each Bolt has an output stream
● Can emit one or more Tuples to other Bolts subscribed to the output stream
● Must implement IBolt
○ BaseRichBolt is a more convenient abstract class
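A minimal sketch of a bolt extending BaseRichBolt (Storm 2.x signatures; the sentence-splitting logic is illustrative):

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Splits each incoming sentence into words and emits one tuple per word
// on its output stream.
class SplitSentenceBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        for (String word : input.getStringByField("sentence").split("\\s+")) {
            collector.emit(input, new Values(word)); // anchor to the input tuple
        }
        collector.ack(input); // mark the input tuple as fully processed
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```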
14. Tuple
Hash-like data structure containing the data that flows between Spouts and Bolts
Data can be accessed by:
● Field index: [{0, “foo”}, {1, “bar”}]
● Key name: [{“foo_key”, “foo”}, {“bar_key”, “bar”}]
Values can be:
● Java primitive types
● String
● byte[]
● Any Serializable object
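A sketch of both access styles using Storm's Tuple API inside a bolt (field names are the slide's illustrative ones):

```java
import org.apache.storm.tuple.Tuple;

// Demonstrates access by field index, by key name, and as a raw value.
class TupleAccess {
    void show(Tuple tuple) {
        String byIndex = tuple.getString(0);                // access by field index
        String byName  = tuple.getStringByField("foo_key"); // access by key name
        Object any     = tuple.getValue(1);                 // any Serializable value
        System.out.println(byIndex + " " + byName + " " + any);
    }
}
```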
15. Example: Alert on monitored words
[Diagram: a Message queue delivers [Message] items to a Collector Spout, which emits {Message} tuples to a Split Sentence Bolt; that Bolt emits {Word, MessageId} tuples to a Monitored Words Bolt, which emits {MonitoredWord, MessageId} tuples to both a Notify User Bolt and a Store event on DB Bolt]
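The word-splitting and monitored-word matching steps above can be sketched in plain Java (the class and the monitored word set are illustrative, not from the talk's code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Per-bolt logic from the example: split a message into words, then keep
// only the monitored ones for notification/storage.
class MonitoredWords {
    static final Set<String> MONITORED = Set.of("error", "outage");

    static List<String> splitSentence(String message) {
        return Arrays.asList(message.toLowerCase().split("\\s+"));
    }

    static boolean isMonitored(String word) {
        return MONITORED.contains(word);
    }
}
```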
18. Acknowledging Tuples
● ack(): Tuple was processed successfully
● fail(): Tuple failed to process
Tuples that receive neither ack() nor fail() are automatically replayed after a timeout
A fail() propagates up the Tuple's dependency tree
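A sketch of how a bolt typically drives these calls (the class and process() step are illustrative, assuming an OutputCollector field as provided to IBolt.prepare()):

```java
import org.apache.storm.task.OutputCollector;
import org.apache.storm.tuple.Tuple;

// The ack/fail pattern inside a bolt's execute() method.
class AckingBolt {
    private OutputCollector collector;

    public void execute(Tuple input) {
        try {
            process(input);
            collector.ack(input);  // success: the tuple will not be replayed
        } catch (Exception e) {
            collector.fail(input); // failure: fails up the tree, tuple is replayed
        }
    }

    private void process(Tuple input) { /* application-specific logic */ }
}
```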
22. Beware!
Storm is designed to scale to process millions of messages per second.
Its design deliberately assumes that some Tuples might be lost.
If your application needs exactly-once semantics, you should consider using
Trident (more on that in a while).
Storm does not ensure exactly-once processing
24. Parallelism
[Diagram: a Cluster node runs two Worker Processes, each with multiple Threads running one or more Tasks]
Cluster Node → 1+ JVM instances (Worker Processes)
JVM Instance → 1+ Threads (Executors)
Thread → 1+ Tasks
Each instance of a Bolt or Spout is a Task
25. Parallelism Example
Config conf = new Config();
conf.setNumWorkers(2); // use two worker processes

TopologyBuilder topologyBuilder = new TopologyBuilder();
topologyBuilder.setSpout("blue-spout", new BlueSpout(), 2); // parallelism hint: 2 executors
topologyBuilder.setBolt("green-bolt", new GreenBolt(), 2)
    .setNumTasks(4) // 4 tasks spread across 2 executors
    .shuffleGrouping("blue-spout");
topologyBuilder.setBolt("yellow-bolt", new YellowBolt(), 6)
    .shuffleGrouping("green-bolt");
26. Stream grouping
● Shuffle grouping: Tuples are randomly distributed across all downstream Bolt tasks
● Fields grouping: GROUP BY semantics - Tuples with the same values of the grouped fields are delivered to the same Bolt task
● All grouping: The stream is replicated across all the Bolt's tasks. Use this grouping with care
● Direct grouping: The producer of the Tuple must indicate which consumer task will receive the Tuple
● Custom grouping: Implement CustomStreamGrouping when you go NIH
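The idea behind fields grouping can be sketched in plain Java (a simplified model, not Storm's internal implementation): the target task is derived from a hash of the grouped field's value, so equal values always reach the same task.

```java
// Picks a downstream task for a tuple by hashing the grouped field's value.
class FieldsGroupingDemo {
    static int taskFor(Object fieldValue, int numTasks) {
        // floorMod keeps the result in [0, numTasks) even for negative hashes
        return Math.floorMod(fieldValue.hashCode(), numTasks);
    }
}
```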
31. Other features: Trident
Trident is an abstraction layer to manage state across the topology
The state can be kept:
● Internally in the topology - in memory or backed by HDFS
● Externally on a Database - such as Memcached or Cassandra
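A sketch of a Trident topology persisting word counts in in-memory state, adapted from the Trident tutorial in the Storm documentation (the spout is assumed to emit a "sentence" field; swapping the state factory would target an external store such as Memcached or Cassandra):

```java
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.operation.builtin.Count;
import org.apache.storm.trident.testing.MemoryMapState;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// Counts words per key and keeps the counts in Trident-managed state.
class TridentWordCount {
    // Splits a "sentence" field into individual words
    static class Split extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            for (String word : tuple.getString(0).split(" ")) {
                collector.emit(new Values(word));
            }
        }
    }

    static TridentTopology build(org.apache.storm.trident.spout.IBatchSpout spout) {
        TridentTopology topology = new TridentTopology();
        topology.newStream("sentences", spout)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .groupBy(new Fields("word"))
                .persistentAggregate(new MemoryMapState.Factory(), new Count(),
                                     new Fields("count"));
        return topology;
    }
}
```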
32. Other features: Storm SQL
The Storm SQL integration allows users to run SQL queries over streaming data
in Storm.
Cool feature, but still experimental
35. PLEASE FILL IN EVALUATION FORMS
FRIDAY, MAY 19th: https://survs.com/survey/cprwce7pi8
SATURDAY, MAY 20th: https://survs.com/survey/l9kksmlzd8
YOUR OPINION IS IMPORTANT!
38. Trident: How it works
1. Tuples are processed as small batches
2. Each batch of tuples is given a unique id called the "transaction id" (txid).
a. If the batch is replayed, it is given the exact same txid.
3. State updates are ordered among batches. That is, the state updates for
batch 3 won't be applied until the state updates for batch 2 have succeeded.
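The txid mechanism can be sketched in plain Java (a simplified model, not Trident's actual API): each state update carries the batch's txid, and a replayed batch with an already-applied txid is skipped, making updates idempotent.

```java
import java.util.HashMap;
import java.util.Map;

// State store that applies a batch's counts only when its txid is newer
// than the last applied one, so replayed batches (same txid) are no-ops.
class TxIdState {
    private final Map<String, Long> counts = new HashMap<>();
    private long lastTxId = 0; // txid of the last successfully applied batch

    public void applyBatch(long txid, Map<String, Long> batchCounts) {
        if (txid <= lastTxId) {
            return; // replayed or out-of-order batch: already applied, skip
        }
        batchCounts.forEach((k, v) -> counts.merge(k, v, Long::sum));
        lastTxId = txid;
    }

    public long get(String key) {
        return counts.getOrDefault(key, 0L);
    }
}
```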