GPars (Groovy Parallel Systems)

GPars is an open-source library for concurrency. It provides new abstractions on top of Java's thread model.


GPars (Groovy Parallel Systems)

  1. GPars (Groovy Parallel Systems)
     Gagan Agrawal, Xebia
  2. Agenda
     • What is GPars?
     • Data Parallelism
     • Actors
     • Agents
     • Dataflow
  3. Trend
  4. Multi-Core Processors
  5. Problems with Java's Concurrency Model
     • Synchronization
     • Dead-locks
     • Live-locks
     • Race conditions
     • Starvation
  6. GPars Goal: to fully utilize all available processors
  7. What is GPars?
     • An open-source concurrency and parallelism library for Groovy and Java
     • Provides a number of high-level abstractions for writing concurrent and parallel code, such as:
       Map-Reduce, Fork-Join, Asynchronous Closures, Actors, Agents, Dataflow
  8. Data Parallelism
  9. Data Parallelism
     GPars offers two entry points for low-level data parallelism techniques:
     • GParsPool – relies on the JSR-166y Fork/Join framework and offers greater functionality and better performance.
     • GParsExecutorsPool – uses plain Java executors and so is easier to set up in a managed or restricted environment.
  10. Data Parallelism – Parallel Collections
      • Dealing with data frequently involves manipulating collections.
      • Lists, arrays, sets, maps, iterators, strings etc. can be viewed as collections of items.
      • The common pattern is to process such a collection sequentially, taking the elements one by one and performing an action on each item in a row.
      • E.g. the min() function iterates over the collection sequentially to find the minimum value.
  11. Parallel Collections with GParsPool
      The GParsPool class enables a ParallelArray-based (from JSR-166y) concurrency DSL for collections and objects.

      GParsPool.withPool { ... }
      GParsPool.withPool { ForkJoinPool pool -> ... }
      GParsPool.withPool(10) { ... }
      GParsPool.withExistingPool(pool) { ... }
  12. Parallel Collections with GParsPool
      Some of the supported methods (a usage sketch follows below):
      • eachParallel()
      • collectParallel()
      • findAllParallel()
      • findAnyParallel()
      • groupByParallel()
      • minParallel()
      • maxParallel()
      • sumParallel()
      • countParallel()
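      A minimal usage sketch of a few of these methods, assuming GPars is on the classpath:

          import groovyx.gpars.GParsPool

          GParsPool.withPool {
              def numbers = [1, 2, 3, 4, 5, 6, 7, 8]
              // each element is processed on a thread from the pool
              println numbers.collectParallel { it * 2 }       // [2, 4, 6, 8, 10, 12, 14, 16]
              println numbers.findAllParallel { it % 2 == 0 }  // [2, 4, 6, 8]
              println numbers.maxParallel()                    // 8
          }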
  13. Parallel Collections with the Meta-class Enhancer: ParallelEnhancer

      def list = [1, 2, 3, 4, 5, 6, 7, 8]
      ParallelEnhancer.enhanceInstance(list)
      println list.collectParallel { it * 2 }
  14. Parallel Collections with the Meta-class Enhancer: ParallelEnhancer

      def animals = ['dog', 'ant', 'cat', 'whale']
      ParallelEnhancer.enhanceInstance(animals)
      println(animals.anyParallel { it == 'ant' } ? 'Found an ant' : 'No ants found')
      println(animals.everyParallel { it.contains('a') } ? 'All animals contain a' : 'Some animals can live without a')
  15. Parallel Collections – Warning
      Don't do this (the shared list is mutated from many threads without any synchronization):

      def thumbnails = []
      images.eachParallel { thumbnails << it.thumbnail }
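      A safer sketch of the same idea (images and thumbnail are placeholders carried over from the slide): let the parallel method build the result collection itself instead of appending to a shared list.

          GParsPool.withPool {
              // collectParallel assembles the result collection internally,
              // so the worker threads never touch shared mutable state
              def thumbnails = images.collectParallel { it.thumbnail }
          }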
  16. Parallel Collections – Memoize
      • Enables caching of a function's return values.
      • Repeated calls to the memoized function retrieve the result from an internal, transparent cache.
      • Example below.
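      The example slide itself is not in the transcript; here is a small sketch, assuming the gmemoize() method GPars adds to closures inside withPool (so named to avoid clashing with Groovy's own Closure.memoize()):

          GParsPool.withPool {
              // gmemoize() wraps the closure with a transparent cache, so repeated
              // calls with the same argument reuse the previously computed value
              Closure slowSquare = { int n -> Thread.sleep(100); n * n }.gmemoize()

              def numbers = [1, 2, 3, 1, 2, 3]
              // the repeated arguments are served from the cache
              println numbers.collectParallel { slowSquare(it) }
          }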
  17. Data Parallelism – Map-Reduce
      • Can be used for the same purposes as the xxxParallel() family of methods and has very similar semantics.
      • Can perform considerably faster when you need to chain multiple methods to process a single collection in multiple steps.
      • Example below.
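      The slide's example is omitted from the transcript; a sketch of chaining map/filter/reduce steps over the parallel view of a collection (the data is illustrative):

          GParsPool.withPool {
              def words = ['groovy', 'gpars', 'java', 'actors', 'agents']
              // a single ParallelArray backs the whole chain of steps
              def totalLength = words.parallel
                      .filter { it.contains('a') }   // keep only words containing 'a'
                      .map { it.size() }             // map each word to its length
                      .sum()                         // reduce by summing the lengths
              println totalLength
          }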
  18. Data Parallelism – Map-Reduce: how it differs from Parallel Collections
      • The xxxParallel() methods must return a legal collection of items: internally they build a ParallelArray, perform the required operation concurrently, and destroy the ParallelArray before returning.
      • That same process is repeated for every xxxParallel() method call.
      • With Map-Reduce, the ParallelArray is created just once and reused across all chained method calls.
      • To get a plain collection back, retrieve the "collection" property.
  19. Data Parallelism – Asynchronous Invocation
      async() – creates an asynchronous variant of the supplied closure.

      GParsPool.withPool {
          Closure longLastingCalculation = { calculate() }
          Closure fastCalculation = longLastingCalculation.async()
          Future result = fastCalculation()
          // do stuff while the calculation runs...
          println result.get()
      }
  20. Data Parallelism – Asynchronous Invocation
      callAsync() – calls a closure in a separate thread, supplying the given arguments.

      GParsPool.withPool {
          println({ it * 2 }.call(3))
          println({ it * 2 }.callAsync(3).get())
      }
  21. Actors
  22. Actors
      • Originally inspired by the Actors library in Scala.
      • Allow for a message-passing-based concurrency model.
      • Programs are collections of independent active objects that exchange messages and have no mutable shared state.
      • GPars always guarantees that at most one thread processes an actor's body at a time.
  23. Actors
      • Help to avoid deadlock, live-lock and starvation.
      • A great number of actors can share a relatively small thread pool.
      • An actor with no work doesn't consume a thread.
      • No shared mutable state.
      • Run in daemon threads.
  24. Actors
  25. Actors – Types
      Stateless actors
      • DynamicDispatchActor and ReactiveActor
      • Keep no track of what messages have arrived previously
      Stateful actors
      • DefaultActor
      • Allows the user to handle implicit state directly
      • After receiving a message the actor moves into a new state, with different ways of handling future messages
      • E.g. holding encrypted messages for decryption only after the encryption keys have been received
  26. Stateful Actors
      Can be created in one of two ways (see the sketch below):
      • By extending the DefaultActor class
      • Via the static factory methods of the Actors class
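      A minimal sketch contrasting the two approaches (the Greeter and echo actors are illustrative, not from the slides):

          import groovyx.gpars.actor.Actors
          import groovyx.gpars.actor.DefaultActor

          // Way 1: subclass DefaultActor and implement act()
          class Greeter extends DefaultActor {
              void act() {
                  react { String name ->        // handles one message, then terminates
                      println "Hello, $name"
                  }
              }
          }

          // Way 2: the Actors factory method wraps the closure in an actor
          def echo = Actors.actor {
              react { msg ->
                  println "Echo: $msg"
              }
          }

          def greeter = new Greeter().start()
          greeter << 'GPars'
          echo << 'GPars'
          [greeter, echo]*.join()               // wait for both actors to finish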
  27. Actors – Usage
      An actor performs three specific operations:
      • Sends messages
      • Receives messages
      • Creates new actors
  28. Actors – Sending Messages
      Messages can be sent to actors using:
      • the send() method
      • the << operator
      • the implicit call() method
  29. Actors – Sending Messages

      def passiveActor = Actors.actor {
          loop {
              react { msg -> println "Received: $msg" }
          }
      }
      passiveActor.send 'Message 1'    // send() method
      passiveActor << 'Message 2'      // << operator
      passiveActor 'Message 3'         // implicit call()
  30. Actors – Sending Messages
      sendAndWait() – blocks the caller until a reply from the actor is available.
  31. Actors – Sending Messages

      def replyingActor = Actors.actor {
          loop {
              react { msg ->
                  println "Received: $msg"
                  reply "I've got $msg"
              }
          }
      }
      def reply1 = replyingActor.sendAndWait('Message 4')
      def reply2 = replyingActor.sendAndWait('Message 5', 10, TimeUnit.SECONDS)
  32. Actors – Sending Messages
      sendAndContinue()

      replyingActor.sendAndContinue('Message 6') { reply ->
          println "Got reply ${reply}"
      }
      println "I can continue while replyingActor is executing"
  33. Actors – Sending Messages
      sendAndPromise()

      Promise promise = replyingActor.sendAndPromise('Message 6')
      println "Got reply: ${promise.get()}"
  34. Actors – Receiving Messages
      • The react() method within the actor's code is responsible for consuming messages from the actor's inbox:

        react { message ->
            // consume message...
        }

      • react() waits if there is no message to be processed immediately.
      • The supplied closure is not invoked directly; it is scheduled for processing by any thread in the thread pool once a message is available.
  35. Actors – Receiving Messages

      def calculator = Actors.actor {
          loop {
              react { a ->
                  react { b ->
                      println(a + b)
                  }
              }
          }
      }
  36. Blocking Actors
      • Blocking actors hold a single pooled thread for their whole lifetime, including the time spent waiting for messages.
      • This avoids thread-management overhead.
      • The number of blocking actors running concurrently is limited by the number of threads available in the shared pool.
      • They provide better performance than continuation-style actors.
      • Good candidates for high-traffic positions in an actor network.
  37. Blocking Actors

      def decryptor = blockingActor {
          while (true) {
              receive { message ->
                  if (message instanceof String) reply message.reverse()
                  else stop()
              }
          }
      }
      def console = blockingActor {
          decryptor.send 'lellarap si yvoorG'
          println 'Decrypted message: ' + receive()
          decryptor.send false
      }
      [decryptor, console]*.join()
  38. Stateless Actors – DynamicDispatchActor
      • Repeatedly scans for messages
      • Dispatches arriving messages to one of its onMessage(message) methods
      • Performance is better than DefaultActor
  39. Stateless Actors – DynamicDispatchActor

      class MyActor extends DynamicDispatchActor {
          void onMessage(String message) {
              println "Received String: $message"
          }
          void onMessage(Integer message) {
              println "Received Integer: $message"
          }
      }
  40. Stateless Actors – DynamicDispatchActor

      Actor myActor = new DynamicDispatchActor().become {
          when { String msg -> println "Received String: $msg" }
          when { Integer msg -> println "Received Integer: $msg" }
      }
  41. Stateless Actors – StaticDispatchActor
      • Contains a single handler method
      • Performs better than DynamicDispatchActor
      • Makes Dataflow operators about four times faster compared to using DynamicDispatchActor

      class MyActor extends StaticDispatchActor<String> {
          void onMessage(String message) {
              println "Message is: $message"
          }
      }
  42. Agents
  43. Agents
      • Inspired by Agents in Clojure
      • Used when shared mutable state is required, e.g. a shopping cart
      • An Agent is a thread-safe, non-blocking, shared mutable state wrapper
      • Hides the data and protects it from direct access
      • Accepts messages and processes them asynchronously
  44. Agents
      • Messages are commands (functions) and are executed inside the Agent
      • The Agent guarantees execution of a single function at a time
      • After reception, the received function is run against the internal state of the Agent and the return value is the new internal state of the Agent
      • The mutable values are not directly accessible from outside
  45. Agents
      • Requests have to be sent to the Agent
      • The Agent guarantees to process the requests sequentially on behalf of the callers
      • Wraps a reference to the mutable state held inside a single field
      • Messages can be sent via:
        – the << operator
        – the send() method
        – the implicit call() method
  46. Agents – Basic Rules
      • Submitted commands obtain the agent's state as a parameter.
      • They can call any methods on the agent's state.
      • Replacing the state object with a new one is also possible, using the updateValue() method.
      • The val property waits until all preceding commands have been consumed.
  47. Agents – Basic Rules
      • valAsync() does not block the caller.
      • instantVal returns an immediate snapshot of the agent's state.
      • All Agent instances share a default daemon thread pool.
      • Setting the threadPool property of an Agent instance allows it to use a different thread pool.
  48. Agents Example
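      The example slide itself is not in the transcript; a minimal shopping-cart-style sketch that ties the rules above together (names and data are illustrative):

          import groovyx.gpars.agent.Agent

          // the Agent wraps a mutable list; all mutations go through commands
          def cart = new Agent<List<String>>([])

          cart << { it << 'The Mythical Man-Month' }       // << sends a command (a closure)
          cart.send { it << 'Groovy in Action' }           // send() does the same
          cart({ it.remove('Groovy in Action') })          // as does the implicit call()

          println cart.val                                 // waits for the commands above, then reads the state
          cart.valAsync { println "async view: $it" }      // non-blocking read via a callback
          println cart.instantVal                          // immediate snapshot, no waiting

          cart.send { updateValue([]) }                    // replace the whole state object with a new one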
  49. Agents – Listeners & Validators
      • Listeners – get notified each time the internal state changes
      • Validators – get a chance to reject an incoming change by throwing an exception
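      A short sketch of how these hook in, assuming the addListener() and addValidator() methods of the GPars Agent API:

          import groovyx.gpars.agent.Agent

          def counter = new Agent<Integer>(0)

          // listener: observes every accepted state change
          counter.addListener { oldValue, newValue ->
              println "counter changed: $oldValue -> $newValue"
          }

          // validator: throwing an exception rejects the change and keeps the old state
          counter.addValidator { oldValue, newValue ->
              if (newValue < 0) throw new IllegalArgumentException('counter must not go negative')
          }

          counter << { updateValue(it + 1) }   // accepted: 0 -> 1
          counter << { updateValue(it - 5) }   // rejected by the validator, state stays 1
          println counter.val                  // 1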
  50. Dataflow
  51. Dataflow
      • Operations in dataflow programs are "black boxes" whose inputs and outputs are always explicitly defined
      • They run as soon as all of their inputs become available
      • A dataflow program is more like a series of workers on an assembly line
      • Dataflow programs are inherently parallel
  52. Dataflow Channels
      • Variables
      • Queues
      • Broadcasts
      • Streams
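      For instance, a minimal sketch of a queue channel (DataflowQueue) shared between a producer task and the main thread:

          import groovyx.gpars.dataflow.DataflowQueue
          import static groovyx.gpars.dataflow.Dataflow.task

          // a DataflowQueue is a channel: producers push values, consumers take them in FIFO order
          def channel = new DataflowQueue<Integer>()

          task {
              (1..5).each { channel << it }    // the producer writes into the channel
          }

          5.times {
              println channel.val              // the main thread blocks until each value arrives
          }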
  53. Dataflow Variables
      • A channel to safely and reliably transfer data from producers to their consumers
      • The value is set using the << operator
      • A task blocks until the value has been set by another task
      • A dataflow variable can be set only once in its lifetime
      • You don't have to bother with ordering and synchronizing the tasks or threads
  54. Dataflow

      def x = new DataflowVariable()
      def y = new DataflowVariable()
      def z = new DataflowVariable()

      task { z << x.val + y.val }
      task { x << 10 }
      task { y << 5 }

      println "Result: ${z.val}"
  55. Dataflow Variables (diagram of main, task1, task2, task3 and the variables x, y, z)
  56. Benefits
      • No race conditions
      • No live-locks
      • Deterministic deadlocks
      • Completely deterministic programs
      • Beautiful code
  57. Thank You
