Ruby on Big Data (Cassandra + Hadoop)
Shows how to use Virgil to access the facilities of Hadoop and Cassandra from ruby using REST.



Upload Details

Uploaded via Apple Keynote

Usage Rights

CC Attribution-NonCommercial-ShareAlike License

Presentation Transcript

  • Ruby on Big Data. Brian O’Neill, Lead Architect, Health Market Science (HMS). The views expressed herein are my own and do not necessarily reflect the views of HMS or other organizations mentioned.
  • Agenda: Big Data Orientation; Cassandra, Hadoop, SOLR, Storm; DEMO; Java/Ruby Interoperability; Advanced Ideas: Rails Integration, Combining Real-time w/ Batch Processing (The Final Frontier)
  • “Big” Data: Size doesn’t always matter; it may be what you’re doing with it, e.g. natural-language processing. Flexibility was our major motivator: data sources with disparate schemas.
  • Decomposing the Problem: Data (Storage, Indexing, Querying) and Processing (Distributed, Batch, Real-time)
  • Relational Storage: ACID. Atomic: everything in a transaction succeeds or the entire transaction is rolled back. Consistent: a transaction cannot leave the database in an inconsistent state. Isolated: transactions cannot interfere with each other. Durable: completed transactions persist, even when servers restart, etc.
  • Relational Storage. Benefits: Data Integrity, Ubiquity. Limitations: Static Schemas, Scalability.
  • NoSQL Storage: BASE. Basic Availability, Soft-state, Eventual consistency. Simple API: REST + JSON.
  • Indexing: Real-time answers; full-text queries; fuzzy searching; nickname analysis; geospatial and temporal search.
  • Storage Options
  • Indexing Options
  • Why? Cassandra: consistency level per operation; temporal dimension of an operation; idempotent mentality. SOLR: community; integration (Solandra); NOT scalability and flexibility (sharding stinks).
  • Cassandra’s Data Model: Keyspaces > Column Families > Rows (sorted by KEY!) > Columns (name : value)
  • Example: BeerGuys (Keyspace) > Users (Column Family) > bonedog (Row): firstName : Brian, lastName : O’Neill; lisa (Row): firstName : Lisa, lastName : O’Neill, maidenName : Kelley
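The keyspace > column family > row > column hierarchy above maps naturally onto nested Ruby hashes. A minimal sketch (data taken from the slide; the hash layout is illustrative only, not Cassandra’s actual storage format):

```ruby
# Illustrative model of the BeerGuys keyspace as nested hashes:
# keyspace -> column family -> row key -> { column name => value }
keyspace = {
  "Users" => {
    "bonedog" => { "firstName" => "Brian", "lastName" => "O'Neill" },
    "lisa"    => { "firstName" => "Lisa",  "lastName" => "O'Neill",
                   "maidenName" => "Kelley" }
  }
}

# Rows in the same column family can carry different columns:
# lisa has a maidenName column, bonedog does not.
puts keyspace["Users"]["lisa"]["maidenName"]   # Kelley
```

This schema-per-row flexibility is exactly the “disparate schemas” motivation from the earlier slide.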
  • Cassandra Architecture: Ring Architecture. Hash(key) -> Node. Reliability. Scalability. (Diagram: a client talks to ring nodes A (N-Z), F (A-F), M (G-M).)
  • Why NoSQL for us? Flexibility. A new data processing paradigm: instead of bringing the Data to the Processing, bring the Processing to the Data.
  • Batch Processing: Distributable, Scalable, Data Locality. (Diagram: a JOB runs against DATA stored in HDFS across ring nodes A (T-A), S (B-G), H (I-R).)
  • Map / Reduce: tuple = (key, value); map(x) -> tuple[]; reduce(key, value[]) -> tuple[]
  • Word Count
    The Code:
      def map(doc)
        doc.split.each do |word|
          emit(word, 1)
        end
      end

      def reduce(key, values)
        sum = values.inject { |sum, x| sum + x }
        emit(key, sum)
      end
    The Run:
      doc1 = "boy meets girl"
      doc2 = "girl likes boy"
      map(doc1) -> (boy, 1), (meets, 1), (girl, 1)
      map(doc2) -> (girl, 1), (likes, 1), (boy, 1)
      reduce(boy, [1, 1]) -> (boy, 2)
      reduce(girl, [1, 1]) -> (girl, 2)
      reduce(likes, [1]) -> (likes, 1)
      reduce(meets, [1]) -> (meets, 1)
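The slide’s `emit` is a hook supplied by the map/reduce framework. A self-contained sketch of the same word count in plain Ruby (the helper names `map_doc` and `reduce_word` are ours, and the shuffle step that Hadoop performs between map and reduce is done by hand):

```ruby
# Plain-Ruby word count mimicking map/reduce: map returns (word, 1) pairs,
# which are then grouped by key and summed by reduce.
def map_doc(doc)
  doc.split.map { |word| [word, 1] }           # map: word -> (word, 1)
end

def reduce_word(key, values)
  [key, values.inject { |sum, x| sum + x }]    # reduce: sum per-word counts
end

docs = ["boy meets girl", "girl likes boy"]

# Shuffle phase: group the mapped pairs by key, as Hadoop does between
# the map and reduce stages.
grouped = docs.flat_map { |doc| map_doc(doc) }
              .group_by { |word, _| word }
              .transform_values { |pairs| pairs.map { |_, count| count } }

counts = grouped.map { |key, values| reduce_word(key, values) }.to_h
puts counts   # {"boy"=>2, "meets"=>1, "girl"=>2, "likes"=>1}
```

The intermediate `grouped` hash is exactly the `(key, value[])` shape the slide’s `reduce(key, value[])` signature expects.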
  • Queries / Flows: Hive, Pig, Cascading
  • Real-time Processing: deals with data streams. Storm. (Diagram: tuples flow from Spouts through chains of Bolts, each Bolt consuming and emitting tuples.)
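The spout-to-bolt flow above can be sketched with plain Ruby objects. These class names are hypothetical; a real Storm topology distributes spouts and bolts across workers and wires them together with stream groupings rather than direct method calls:

```ruby
# Toy model of a Storm topology: a spout emits tuples, bolts transform them.
class SentenceSpout
  def initialize(sentences)
    @sentences = sentences
  end

  # Emit each sentence as a tuple into the topology.
  def each_tuple(&block)
    @sentences.each(&block)
  end
end

class SplitBolt
  # One sentence tuple in, a list of word tuples out.
  def execute(tuple)
    tuple.split
  end
end

class CountBolt
  attr_reader :counts

  def initialize
    @counts = Hash.new(0)
  end

  # Update running counts as each batch of word tuples arrives.
  def execute(words)
    words.each { |w| @counts[w] += 1 }
  end
end

spout = SentenceSpout.new(["boy meets girl", "girl likes boy"])
split_bolt = SplitBolt.new
count_bolt = CountBolt.new
spout.each_tuple { |tuple| count_bolt.execute(split_bolt.execute(tuple)) }
puts count_bolt.counts   # counts update tuple by tuple, not in one batch
```

Contrast with the batch word count: here the counts are continuously current as the stream flows, which is the whole point of the real-time path.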
  • Putting it Together. (Diagram: Storm feeding the Cassandra ring nodes A (T-A), S (B-G), H (I-R).)
  • But... we love Ruby! And it’s all in Java. :( That’s okay, because we love REST!
  • REST Layer: CRUD via HTTP; Map/Reduce via HTTP. (Diagram: client -> REST layer -> Cassandra ring + Storm.)
  • DEMO
  • Java Interoperability. Conventional interoperability: I/O streams between processes. Hadoop Streaming. Storm Multilang.
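Hadoop Streaming hands each input record to the script on stdin and reads tab-separated key/value pairs back on stdout. A minimal mapper in that style (simulated with StringIO so it runs standalone; in production it would read `$stdin`, write `$stdout`, and be launched via the streaming jar):

```ruby
require "stringio"

# Hadoop Streaming mapper logic: one input line in, "word\t1" pairs out.
# Hadoop sorts the emitted pairs by key before handing them to the reducer.
def stream_map(input, output)
  input.each_line do |line|
    line.split.each { |word| output.puts "#{word}\t1" }
  end
end

# Simulated run over an in-memory "stdin":
out = StringIO.new
stream_map(StringIO.new("boy meets girl\n"), out)
print out.string   # one "word<TAB>1" pair per line
```

Because the contract is just lines on stdin/stdout, the same script works unchanged whether Hadoop invokes it or you pipe a file into it from the shell.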
  • CRUD via HTTP
    http://virgil/data/{keyspace}/{columnFamily}/{column}/{row}
      PUT    : replaces content of row/column
      GET    : retrieves value of a row/column
      DELETE : removes value of a row/column
    (Diagram: curl client against the Cassandra ring.)
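Following the URI template on this slide (note it lists {column} before {row}), a small helper can expand it into a concrete URL. The host, port, and data below are made up for illustration, and the curl lines in the comments assume a running Virgil server:

```ruby
# Expand the Virgil REST URI template for a keyspace/column family/column/row.
def virgil_url(host, keyspace, column_family, column, row)
  "http://#{host}/data/#{keyspace}/#{column_family}/#{column}/#{row}"
end

url = virgil_url("localhost:8080", "BeerGuys", "Users", "firstName", "bonedog")
puts url   # http://localhost:8080/data/BeerGuys/Users/firstName/bonedog

# Against a live server, the three verbs map onto CRUD (shell equivalents):
#   curl -X PUT    -d "Brian" "$URL"   # replace the column value
#   curl -X GET    "$URL"              # retrieve it
#   curl -X DELETE "$URL"              # remove it
```

Keeping URL construction in one helper means the rest of a Ruby client only deals in keyspace/column-family/row/column names, not string interpolation.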
  • Map/Reduce over HTTP: wordcount.rb
      def map(rowKey, columns)
        result = []
        columns.each do |column_name, value|
          words = value.split
          words.each do |word|
            result << [word, "1"]
          end
        end
        return result
      end

      def reduce(key, values)
        rows = {}
        columns = {}
        total = 0
        values.each do |value|
          total += value.to_i
        end
        columns["count"] = total.to_s
        rows[key] = columns
        return rows
      end
    (Diagram: curl submits the job against the ring; the job reads from CF in and writes to CF out.)
  • Better? Use JRuby: single process; parse once / eval many.
    JSR 223 ScriptEngine:
      ENGINE = new ScriptEngineManager().getEngineByName("jruby");
      ScriptContext context = new SimpleScriptContext();
      Bindings bindings = context.getBindings(ScriptContext.ENGINE_SCOPE);
      bindings.put("variable", "value");
      ENGINE.eval(script, context);
    Redbridge:
      this.rubyContainer = new ScriptingContainer(LocalContextScope.CONCURRENT);
      this.rubyReceiver = rubyContainer.runScriptlet(script);
      container.callMethod(rubyReceiver, "foo", "value");
  • Rails Integration: “REST is the new JDBC.” ActiveRecord backed by REST? Anything more than a proxy? (Diagram: Rails behind a load balancer, processing data against the ring.)
  • Ratch Processing (Combining Real-time and Batch): Data flows as Cascading Map/Reduce jobs; Storm topologies? Can’t we have one framework to rule them all?