MapReduce In The Cloud: Infinispan Distributed Task Execution Framework
 


Having a data grid without the ability to execute distributed computational tasks on that data is like having a Ferrari without a driver's license!

Do you have, or plan to use, a data grid that contains a massive amount of information? Introducing one of the most exciting and long-awaited Infinispan 5 features: a distributed data processing framework. Tightly integrated into each node, it turns the Infinispan data grid into a distributed, large-scale parallel data processing system. The two core contributors of the data processing framework show how to perform simple and complex processing, such as Map-Reduce, over an Infinispan data grid with minimal effort, using static and dynamic languages such as Scala and Ruby. This presentation includes a live demonstration.

Speaker Notes

  • Why Vladimir can’t make it.
  • Not into surfing, although I believe this picture captures a feeling about the data deluge quite well: big and scary, inevitable, kind of interesting and challenging, and full of opportunity.
  • We are in the era of big data: data beyond human scale, beyond the scale of most simple systems. Cloud helps, with cheap, on-demand storage and processing. And while social networks are a good example of this, it isn’t just tied to social networks; we see aspects of big data crop up everywhere.
  • Even prior to cloud and cheap, on-demand storage/processing, the cost of traditional storage has been dropping so fast that you may as well store everything you find out. Just in case. Like GMail. :)
  • Wikipedia sums this up well, on this exponential scale of how the price of storage has been dropping.
  • And this quote by Eric Schmidt gives you an idea of the scale and volume of data we’re dealing with today.
  • We have things that can deal with big data. We have the tools, but they are non-trivial. Hadoop, for example, is tough to set up and hard to maintain and use. Map/Reduce: is that enough? Do we need a higher-level language to express this? In my opinion, yes. But for now, this is what we have.
  • Infinispan and big data: wrapped around two separate but related APIs, Distributed Executors and Map/Reduce (Map/Reduce is implemented over DistExec).
  • Start with DistExec. This is a simplistic API, extending familiar JDK abstractions. It encapsulates the use case of wanting to distribute workloads on the grid, potentially with some data affinity. Let’s take a look at the API.
  • This allows you to parallelise tasks across the grid. Essentially it takes JDK 7’s Fork/Join concepts and distributes them across multiple nodes.
  • But this suffers from the same issues that Fork/Join suffers from: at some point it is just too expensive to break up tasks, distribute them, and so on. So care must always be taken when considering distributed task execution.
  • Example: π approximation. This is an old brute-force algorithm using random points on the circle. The more “tasks” that get kicked off, the more accurate our approximation. Look at the implementation; try with 1 node and try with 6 nodes.
  • That was DistExec. So that’s cool to some degree; let’s now look at Map/Reduce. Map/Reduce, based on Google’s paper, allows you to perform a task over a data set in parallel and pull back “reduced” results. In the case of Infinispan, data is DIST anyway, so your global data set is disjoint.
  • Google’s Map/Reduce model, from Google’s paper: input -> mapped -> reduced -> collated.
  • Three interfaces in Infinispan’s map/reduce API. Explain the word-count demo; demo on 1 node, then on 6 nodes.
  • Currently in an experimental phase. We need to do failover: behaviour as the cluster size changes. Test on very large volumes. OGM could make use of this for distributed queries. Integration with analysis tools. Data language? LINQ?

Presentation Transcript

  • MapReduce In The Cloud: Infinispan Distributed Task Execution Framework
    Manik Surtani
    May 3rd 2011, JUDCon - Boston
  • Welcome to the Data Deluge
  • Background
    • Emergence of data beyond human scale
    • Outgrows current platforms in scale, structure, processing time
    • Abundance of unstructured, machine-generated data
    • Does not fit into current software paradigms
    • Not confined to Twitter, Facebook and Google only
    • Need new platforms built for Big Data from the ground up
  • Big Data
    • How did Big Data happen?
    • Exponential changes in storage, bandwidth and data creation
    • Data exhaust
    • Drowning in data and not knowing what to do with it
    • Leverage, not discard, Big Data
    • We are not even aware of the data revolution around us
  • Hard disk cost & capacity history (chart: cost per GB in US$ and capacity in MB, 1980-2010)
    Source: http://en.wikipedia.org/wiki/History_of_hard_disk_drives
  • “There was 5 exabytes of information created between the dawn of civilization through 2003... that much information is now created every two days.”
    Eric Schmidt, August 4th, 2010
    Source: http://www.readwriteweb.com/archives/google_ceo_schmidt_people_arent_ready_for_the_tech.php
  • Big Data challenges
    • Powerful platforms available today, however...
    • Need to be a genius to build systems on top of them
    • Equivalent to programming client/server apps in assembly language
    • Is there a need for a new language?
    • The confusion is as big as Big Data
    • Simpler yet equally powerful solutions needed
  • “Infinispan without MapReduce is like owning a Ferrari without a Driving License” - Vladimir Blagojevic
  • Infinispan Big Data goals
    • Why waste a Ferrari?
    • Humble beginnings
    • Simplicity without sacrificing power
    • Capitalize on existing Infinispan infrastructure
    • Reuse of current programming abstractions
    • Two frameworks: distributed executors and MapReduce
  • Infinispan Distributed Execution Framework
    • Leverage familiar ExecutorService, Callable abstractions
    • Expand them to a distributed, parallel computing paradigm
    • Looks like a regular ExecutorService
    • Feels like a regular ExecutorService
    • The magic that goes on within Infinispan is completely transparent to users
    • Nevertheless, users can experience it :-)
  • So simple it fits on one slide

    public interface DistributedExecutorService extends ExecutorService {
       <T, K> Future<T> submit(Callable<T> task, K... input);
       <T> List<Future<T>> submitEverywhere(Callable<T> task);
       <T, K> List<Future<T>> submitEverywhere(Callable<T> task, K... input);
    }

    public interface DistributedCallable<K, V, T> extends Callable<T> {
       void setEnvironment(Cache<K, V> cache, Set<K> inputKeys);
    }
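    A minimal usage sketch of the two interfaces above (not part of the original deck). The package names, the DefaultExecutorService entry point and the Serializable marker on the task are assumptions about the Infinispan 5 distributed executor module; the submit calls mirror the slide.

    import java.io.Serializable;
    import java.util.Set;

    import org.infinispan.Cache;
    import org.infinispan.distexec.DistributedCallable;

    // Sums the lengths of the values stored under the keys handed to this node
    public class ValueLengthTask implements DistributedCallable<String, String, Integer>, Serializable {

       private Cache<String, String> cache;
       private Set<String> inputKeys;

       // Infinispan injects the node-local cache and the input keys owned by that node
       @Override
       public void setEnvironment(Cache<String, String> cache, Set<String> inputKeys) {
          this.cache = cache;
          this.inputKeys = inputKeys;
       }

       // Runs on the node(s) owning the input keys, so data is read locally
       @Override
       public Integer call() throws Exception {
          int total = 0;
          for (String key : inputKeys)
             total += cache.get(key).length();
          return total;
       }
    }

    // Hypothetical driver (class name and wiring assumed, not from the slide):
    //   DistributedExecutorService des = new DefaultExecutorService(cache);
    //   Future<Integer> local = des.submit(new ValueLengthTask(), "k1", "k2");   // data affinity
    //   List<Future<Integer>> everywhere = des.submitEverywhere(new ValueLengthTask());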
  • However, behind the scenes...
  • Do not forget Gene Amdahl
    Speedup = 1 / ((p / n) + (1 - p))
    where p = parallel fraction and n = number of processors
    However, problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time.
    Source: https://computing.llnl.gov/tutorials/parallel_comp/
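    As a worked illustration of the formula above (the numbers are chosen here purely as an example, not taken from the talk): with a parallel fraction p = 0.95 spread across n = 6 nodes, Speedup = 1 / (0.95/6 + 0.05) ≈ 4.8, and no matter how many nodes are added the speedup can never exceed 1 / (1 - p) = 20.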
  • π approximation
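    The speaker notes describe the π demo as a brute-force algorithm using random points. Below is a minimal Monte Carlo sketch of that idea in plain JDK terms, not the actual demo code: each task samples points in the unit square, counts how many land inside the quarter circle, and the combined counts give π ≈ 4 × inside / total. Each DartThrower could be submitted as a task via the DistributedExecutorService shown earlier; the more tasks, the better the approximation.

    import java.io.Serializable;
    import java.util.List;
    import java.util.Random;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;

    // One unit of work: throw a fixed number of random darts at the unit square
    public class DartThrower implements Callable<Integer>, Serializable {

       public static final int DARTS_PER_TASK = 1000000;

       @Override
       public Integer call() {
          Random random = new Random();
          int inside = 0;
          for (int i = 0; i < DARTS_PER_TASK; i++) {
             double x = random.nextDouble();
             double y = random.nextDouble();
             if (x * x + y * y <= 1.0)
                inside++;               // dart landed inside the quarter circle
          }
          return inside;
       }

       // Combine the per-task counts into the final estimate
       public static double estimatePi(List<Future<Integer>> results, int numberOfTasks) throws Exception {
          long inside = 0;
          for (Future<Integer> f : results)
             inside += f.get();
          return 4.0 * inside / ((double) numberOfTasks * DARTS_PER_TASK);
       }
    }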
  • Infinispan MapReduce
    • We already have a data grid!
    • Leverages Infinispan’s DIST mode
    • Cache data is the input for MapReduce tasks
    • Task components: Mapper, Reducer, Collator
    • MapReduceTask ties them together
  • MapReduce model
    Source: http://labs.google.com/papers/mapreduce.html
  • Mapper, Reducer, Collator

    public interface Mapper<KIn, VIn, KOut, VOut> extends Serializable {
       void map(KIn key, VIn value, Collector<KOut, VOut> collector);
    }

    public interface Reducer<KOut, VOut> extends Serializable {
       VOut reduce(KOut reducedKey, Iterator<VOut> iter);
    }

    public interface Collator<KOut, VOut, R> {
       R collate(Map<KOut, VOut> reducedResults);
    }
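    A minimal word-count sketch against the three interfaces above, matching the demo the talk refers to (not the actual demo code). Collector.emit(...), the package names and the fluent MapReduceTask driver in the trailing comment are assumptions about the Infinispan 5 MapReduce API rather than something shown on the slide.

    import java.util.Iterator;

    import org.infinispan.distexec.mapreduce.Collector;
    import org.infinispan.distexec.mapreduce.Mapper;
    import org.infinispan.distexec.mapreduce.Reducer;

    public class WordCount {

       // Map phase: runs against the cache entries local to each node
       public static class WordCountMapper implements Mapper<String, String, String, Integer> {
          @Override
          public void map(String key, String value, Collector<String, Integer> collector) {
             // Emit (word, 1) for every word in the cached text value
             for (String word : value.split("\\s+"))
                collector.emit(word, 1);
          }
       }

       // Reduce phase: sums the partial counts emitted for each word
       public static class WordCountReducer implements Reducer<String, Integer> {
          @Override
          public Integer reduce(String word, Iterator<Integer> partialCounts) {
             int total = 0;
             while (partialCounts.hasNext())
                total += partialCounts.next();
             return total;
          }
       }

       // Hypothetical driver (fluent API assumed):
       //   MapReduceTask<String, String, String, Integer> task =
       //         new MapReduceTask<String, String, String, Integer>(cache);
       //   Map<String, Integer> counts = task.mappedWith(new WordCountMapper())
       //                                     .reducedWith(new WordCountReducer())
       //                                     .execute();
    }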
  • Roadmap
    • Improve the task execution container
    • Failover, execution policies
    • Make sure it scales to terabytes and petabytes
    • Integration with Hibernate OGM
    • Analytics and BI tools
    • Do we need a data analysis language?
  • Parting thoughts
    • The data revolution is here, today!
    • Profound socio-economic impact
    • Do not sleep through it
    • Infinispan as a platform for Big Data
    • Join us in these exciting endeavours
  • Questions?
    http://www.infinispan.org
    http://blog.infinispan.org
    http://www.twitter.com/infinispan
    #infinispan on FreeNode
    Rate this talk! http://spkr8.com/t/7384