Storm 0.8.2
Original slides updated for STORM 0.8.2

Storm 0.8.2: Presentation Transcript

  • Slides updated for STORM 0.8.2
    STORM: COMPARISON - INTRODUCTION - CONCEPTS
    Presentation by Kasper Madsen, November 2012
  • HADOOP VS STORM

    Hadoop                          Storm
    Batch processing                Real-time processing
    Jobs run to completion          Topologies run forever
    JobTracker is a SPOF*           No single point of failure
    Stateful nodes                  Stateless nodes
    Scalable                        Scalable
    Guarantees no data loss         Guarantees no data loss
    Open source                     Open source

    * Hadoop 0.21 added some checkpointing. SPOF: Single Point Of Failure
  • COMPONENTS
    Nimbus daemon is the master; it is comparable to the Hadoop JobTracker.
    Supervisor daemon spawns workers; it is comparable to the Hadoop TaskTracker.
    Worker is spawned by the supervisor, one per port defined in the storm.yaml configuration.
    Executor is spawned by a worker and runs as a thread.
    Task is spawned by executors and runs as a thread.
    Zookeeper* is a distributed system used to store metadata. Nimbus and Supervisor daemons
    are fail-fast and stateless; all state is kept in Zookeeper. Note that all communication
    between Nimbus and the Supervisors goes through Zookeeper. On a cluster with 2k+1
    Zookeeper nodes, the system can recover when at most k nodes fail.

    * Zookeeper is an Apache top-level project
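    How many of each component you get is partly configuration: the ports a supervisor
    offers (one worker per port) are set in storm.yaml, while the number of workers a
    topology asks for is set per topology. A minimal sketch in Java (the topology name
    "components-sketch" is made up; TestWordSpout ships with Storm):

      import backtype.storm.Config;
      import backtype.storm.StormSubmitter;
      import backtype.storm.testing.TestWordSpout;
      import backtype.storm.topology.TopologyBuilder;

      public class ComponentsSketch {
          public static void main(String[] args) throws Exception {
              TopologyBuilder builder = new TopologyBuilder();
              builder.setSpout("words", new TestWordSpout(), 10);

              Config conf = new Config();
              // Request 4 workers (JVM processes); each one occupies a port that a
              // supervisor advertises via supervisor.slots.ports in storm.yaml.
              conf.setNumWorkers(4);

              StormSubmitter.submitTopology("components-sketch", conf, builder.createTopology());
          }
      }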
  • EXECUTORS
    Executor is a new abstraction:
    - Decouples the tasks of a component from the number of threads running them
    - Allows dynamically changing the number of executors without changing the number of tasks
      (see the sketch below)
    - Makes elasticity much simpler, as semantics are kept valid (e.g. for a grouping)
    - Enables elasticity in a multi-core environment
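    A sketch of how this looks in the topology definition, given the TopologyBuilder
    "builder" and the ExclamationBolt from the example later in these slides (setNumTasks
    is the relevant call; the counts are illustrative):

      // 10 executors for the spout, and by default 10 tasks (one per executor).
      builder.setSpout("words", new TestWordSpout(), 10);

      // 3 executors but 6 tasks, so each executor initially runs 2 tasks. Because
      // the task count stays fixed, the executor count can later be rebalanced
      // while groupings remain valid.
      builder.setBolt("exclaim1", new ExclamationBolt(), 3)
             .setNumTasks(6)
             .shuffleGrouping("words");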
  • STREAMS
    A stream is an unbounded sequence of tuples.
    A topology is a graph where each node is a spout or a bolt, and the edges indicate
    which bolts are subscribing to which streams.
    - A spout is a source of a stream
    - A bolt consumes a stream (and possibly emits a new one)
    - An edge represents a grouping
    [Diagram: an example topology with two spouts (sources of streams A and B) and bolts
    subscribing to A, to A & B, and to C & D, where two of the bolts emit the new streams
    C and D]
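    As a sketch of how such a graph is expressed in code (class and component names are
    hypothetical, and the wiring is only one possible reading of the diagram): a bolt
    subscribes to one stream per grouping call, so subscribing to two streams means two calls.

      TopologyBuilder builder = new TopologyBuilder();
      builder.setSpout("spoutA", new SpoutA(), 1);   // source of stream A
      builder.setSpout("spoutB", new SpoutB(), 1);   // source of stream B
      builder.setBolt("boltC", new BoltC(), 1)       // subscribes to A, emits C
             .shuffleGrouping("spoutA");
      builder.setBolt("boltD", new BoltD(), 1)       // subscribes to A and B, emits D
             .shuffleGrouping("spoutA")
             .shuffleGrouping("spoutB");
      builder.setBolt("boltE", new BoltE(), 1)       // subscribes to C and D
             .shuffleGrouping("boltC")
             .shuffleGrouping("boltD");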
  • GROUPINGS
    Each spout or bolt runs X instances in parallel (called tasks).
    Groupings are used to decide which task in the subscribing bolt a tuple is sent to.
    - Shuffle grouping is a random grouping
    - Fields grouping is grouped by value, such that equal values result in the same task
    - All grouping replicates to all tasks
    - Global grouping makes all tuples go to one task
    - None grouping makes the bolt run in the same thread as the bolt/spout it subscribes to
    - Direct grouping lets the producer (the task that emits) control which consumer will
      receive the tuple
    [Diagram: components running 4, 3, 2 and 2 tasks]
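    A few of these in code (a sketch; WordCountBolt, MonitorBolt and TotalsBolt are
    hypothetical bolt classes, and "word" is the field declared by TestWordSpout):

      TopologyBuilder builder = new TopologyBuilder();
      builder.setSpout("words", new TestWordSpout(), 4);

      // Fields grouping: tuples with the same "word" value always reach the same task.
      builder.setBolt("counter", new WordCountBolt(), 3)
             .fieldsGrouping("words", new Fields("word"));

      // All grouping: every task of this bolt gets a copy of every tuple.
      builder.setBolt("monitor", new MonitorBolt(), 2)
             .allGrouping("words");

      // Global grouping: all tuples from "counter" go to a single task.
      builder.setBolt("totals", new TotalsBolt(), 2)
             .globalGrouping("counter");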
  • EXAMPLE (TestWordSpout -> ExclamationBolt -> ExclamationBolt)

      TopologyBuilder builder = new TopologyBuilder();

      // Create stream "words", run 10 tasks
      builder.setSpout("words", new TestWordSpout(), 10);

      // Create stream "exclaim1", run 3 tasks,
      // subscribe to stream "words" using shuffle grouping
      builder.setBolt("exclaim1", new ExclamationBolt(), 3)
             .shuffleGrouping("words");

      // Create stream "exclaim2", run 2 tasks,
      // subscribe to stream "exclaim1" using shuffle grouping
      builder.setBolt("exclaim2", new ExclamationBolt(), 2)
             .shuffleGrouping("exclaim1");

    A bolt can subscribe to an unlimited number of streams by chaining groupings.
    The source code for this example is part of the storm-starter project on GitHub.
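    To actually run this wiring, the storm-starter ExclamationTopology submits it to a
    LocalCluster roughly like this (a sketch; the topology name and timing are illustrative):

      Config conf = new Config();
      conf.setDebug(true);

      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology("exclamation", conf, builder.createTopology());
      Utils.sleep(10000);                   // let the topology run for 10 seconds
      cluster.killTopology("exclamation");
      cluster.shutdown();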
  • EXAMPLE – 1: TestWordSpout

      public void nextTuple() {
          Utils.sleep(100);
          final String[] words = new String[] {"nathan", "mike", "jackson", "golda", "bertels"};
          final Random rand = new Random();
          final String word = words[rand.nextInt(words.length)];
          _collector.emit(new Values(word));
      }

    The TestWordSpout emits a random string from the words array every 100 milliseconds.
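    For completeness, the parts of the spout not shown on the slide look roughly like this
    (a sketch; the real TestWordSpout ships with Storm in backtype.storm.testing):

      SpoutOutputCollector _collector;

      // open() is called when the spout is created and stores the collector
      // that nextTuple() uses for emitting.
      public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
          _collector = collector;
      }

      // declareOutputFields() names the single output field, "word".
      public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("word"));
      }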
  • EXAMPLE – 2: ExclamationBolt

      OutputCollector _collector;

      // prepare is called when the bolt is created
      public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
          _collector = collector;
      }

      // execute is called for each tuple
      public void execute(Tuple tuple) {
          _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
          _collector.ack(tuple);
      }

      // declareOutputFields is called when the bolt is created
      public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("word"));
      }

    declareOutputFields is used to declare streams and their schemas. It is possible to
    declare several streams and to specify the stream to use when outputting tuples in the
    emit function call.
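    A sketch of that last point (the extra stream name "errors" and the filtering condition
    are made up for illustration):

      public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("word"));                  // default stream
          declarer.declareStream("errors", new Fields("word"));  // an extra, named stream
      }

      public void execute(Tuple tuple) {
          String word = tuple.getString(0);
          if (word.isEmpty()) {
              // Emit to the named stream by passing its id as the first argument.
              _collector.emit("errors", tuple, new Values(word));
          } else {
              _collector.emit(tuple, new Values(word + "!!!"));
          }
          _collector.ack(tuple);
      }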
  • TRIDENT TOPOLOGY
    Trident topology is a new abstraction built on top of the STORM primitives.
    - Supports joins, aggregations, groupings, functions and filters
    - Easy to use; read the wiki
    - Guarantees exactly-once processing, if using an (opaque) transactional spout
    - Some basic ideas are the same as in the deprecated transactional topology*
      - Tuples are processed as small batches
      - Each batch gets a transaction id; if a batch is replayed, the same txid is given
      - State updates are strongly ordered among batches
      - State updates atomically store metadata with the data
    - Transactional topology is superseded by the Trident topology from 0.8.0
    * See my first slides on STORM (March 2012) for detailed information:
      www.slideshare.com/KasperMadsen
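    A minimal sketch of what a Trident topology looks like, loosely following the word-count
    example from the Trident wiki (FixedBatchSpout and MemoryMapState are test helpers that
    ship with Storm; Split is a small user-defined function; the sentences are illustrative):

      public static class Split extends BaseFunction {
          public void execute(TridentTuple tuple, TridentCollector collector) {
              for (String word : tuple.getString(0).split(" ")) {
                  collector.emit(new Values(word));
              }
          }
      }

      FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
              new Values("the cow jumped over the moon"),
              new Values("how many apples can you eat"));
      spout.setCycle(true);

      TridentTopology topology = new TridentTopology();
      topology.newStream("sentences", spout)
              .each(new Fields("sentence"), new Split(), new Fields("word"))
              .groupBy(new Fields("word"))
              // exactly-once word counts, kept here in an in-memory map state
              .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));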
  • EXACTLY-ONCE PROCESSING - 1
    Transactional spouts guarantee that the same data is replayed for every batch.
    Guaranteeing exactly-once processing for transactional spouts:
    - The txid is stored with the data, so the last txid that updated the data is known
    - That information is used to know what to update in case of a replay
    Example:
    1. Currently processing txid 2, with data ["man", "dog", "dog"]
    2. Current state is: "man" => [count=3, txid=1], "dog" => [count=2, txid=2]
    3. The batch with txid 2 fails and gets replayed
    4. Resulting state is: "man" => [count=4, txid=2], "dog" => [count=2, txid=2]
    5. Because the txid is stored with the data, it is known that the count for "dog" should
       not be increased again
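    The update rule behind the example, as a small sketch in Java (illustrative only, not
    the Storm/Trident state API):

      class CountState {
          long count;
          long txid;   // txid of the batch that last updated this key
      }

      void applyBatch(CountState state, long batchTxid, long delta) {
          if (state.txid == batchTxid) {
              // A replayed batch carries exactly the same data, so this key was
              // already counted for this txid: skip the update.
              return;
          }
          state.count += delta;
          state.txid = batchTxid;
      }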
  • EXACTLY-ONCE PROCESSING - 2
    An opaque transactional spout is not guaranteed to replay the same data for a failed
    batch as originally existed in the batch.
    - Guarantees every tuple is successfully processed in exactly one batch
    - Useful for having exactly-once processing while allowing some inputs to fail
    Guaranteeing exactly-once processing for opaque transactional spouts:
    - The same trick does not work, as a replayed batch might be different, meaning some
      state might now hold incorrect data. Consider the previous example!
    - The problem is solved by storing more metadata with the data (the previous value)
    Example:

    Step   Data            Count   prevValue   Txid
    1      2 dog, 1 cat    2,1     0,0         1,1
    2      1 dog, 2 cat    3,1     2,1         2,1    <- updates the dog count, then fails
    2.1    2 dog, 2 cat    4,3     2,1         2,2    <- batch contains new data, but the
                                                         update is ok as previous values are used

    Consider the potential problems if the new data for 2.1 doesn't contain any cat.
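    The corresponding update rule, again as an illustrative sketch rather than the
    Storm/Trident state API:

      class OpaqueCountState {
          long count;
          long prevValue;   // value of count before the last applied batch
          long txid;        // txid of the last applied batch
      }

      void applyOpaqueBatch(OpaqueCountState state, long batchTxid, long delta) {
          if (state.txid == batchTxid) {
              // Replay of the batch that last touched this key. Its data may have
              // changed, so recompute from the value that existed before that batch.
              state.count = state.prevValue + delta;
          } else {
              state.prevValue = state.count;
              state.count += delta;
              state.txid = batchTxid;
          }
      }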
  • ELASTICITY
    Rebalancing workers and executors (not tasks):
    - Pause spouts
    - Wait for the message timeout
    - Set the new assignment
    - All moved tasks will be killed and restarted in their new location
    Swapping (STORM 0.8.2):
    - Submit the new topology as inactive
    - Pause the spouts of the old topology
    - Wait for the message timeout of the old topology
    - Activate the new topology
    - Deactivate the old topology
    - Kill the old topology
    What about state on tasks which are killed and restarted? It is up to the user to solve!
  • LEARN MORE
    Website: http://storm-project.net/
    Wiki: https://github.com/nathanmarz/storm/wiki
    Storm-starter: https://github.com/nathanmarz/storm-starter
    Mailing list: http://groups.google.com/group/storm-user
    IRC: #storm-user room on freenode
    UTSL: https://github.com/nathanmarz/storm
    More slides: www.slideshare.net/KasperMadsen
    Image from: http://www.cupofjoe.tv/2010/11/learn-lesson.html