
# Clojure's take on concurrency


1. Clojure's take on Concurrency, by Yoav Rubin
2. About me
• Software engineer at IBM Research, Haifa
– Worked on everything from large-scale products to small-scale research projects
– Domains: software tools, development environments, simplified programming
– Technologies: frontend engineering, Java, Clojure
• Lecturer of the course "Functional Programming on the JVM" at Haifa University
{:name "Yoav Rubin", :email "yoavrubin@gmail.com", :blog "http://yoavrubin.blogspot.com", :twitter "@yoavrubin"}
3. Agenda
• The problem of concurrency
• Reference types
• Pendings
4. Why is concurrency a problem?
5. Mutability
6. What it is to mutate
• What x = x + 1 actually is:
– LOAD R10 X
– ADDI R10 1
– STORE R10 X
7. What it is to mutate
• Thread 1: x = x + 1
– LOAD R10 X
– ADDI R10 1
– STORE R10 X
• Thread 2: x = x + 5
– LOAD R10 X
– ADDI R10 5
– STORE R10 X
8. What will happen?
9. What it is to mutate
• Thread 1: x = x + 1 (LOAD R10 X; ADDI R10 1; STORE R10 X)
• Thread 2: x = x + 5 (LOAD R10 X; ADDI R10 5; STORE R10 X)
• Possible outcome: x is increased by 1!
10. What it is to mutate
• Thread 1: x = x + 1 (LOAD R10 X; ADDI R10 1; STORE R10 X)
• Thread 2: x = x + 5 (LOAD R10 X; ADDI R10 5; STORE R10 X)
• Possible outcome: x is increased by 5!
11. What it is to mutate
• Thread 1: x = x + 1 (LOAD R10 X; ADDI R10 1; STORE R10 X)
• Thread 2: x = x + 5 (LOAD R10 X; ADDI R10 5; STORE R10 X)
• Outcome: x is increased by 6 (the correct result)
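The lost-update interleaving above can be reproduced on the JVM. This is a sketch, and the `hammer` helper, the names, and the thread/iteration counts are mine, not from the deck: a `volatile!` box gives an unsynchronized read-modify-write (just like LOAD / ADDI / STORE), while an atom, covered later in the deck, never loses an update.

```clojure
(defn hammer
  "Start n-threads threads, each calling update! (a fn of the box) n times;
   return the box's final value."
  [box update! n-threads n]
  (let [ts (doall (repeatedly n-threads
                              #(Thread. (fn [] (dotimes [_ n] (update! box))))))]
    (doseq [^Thread t ts] (.start t))
    (doseq [^Thread t ts] (.join t))
    @box))

;; swap! retries on conflict, so no increment is ever lost:
(hammer (atom 0) #(swap! % inc) 4 100000)
;;=> 400000

;; a plain read-modify-write on a volatile! can interleave like the
;; LOAD / ADDI / STORE example, losing updates:
(hammer (volatile! 0) #(vreset! % (inc @%)) 4 100000)
;; usually less than 400000
```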
12. Getting to the right result
• The first two cases exhibit a race condition
– Threads racing to write to the same place in memory
• Can be prevented with a critical section
13. Critical section
• A marker that does not allow a thread to enter a code segment as long as another thread is there
14. Critical section
• It is up to the developer to define it
– Using locks
• Need to acquire the lock of the critical section before entering it
• Need to release the lock of the critical section after finishing with it
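On the JVM, Clojure exposes this lock-based style directly through the built-in `locking` macro, which acquires a monitor on entry and releases it on exit. A minimal sketch, with `lock-obj` and `balance` as illustrative names of my own:

```clojure
(def lock-obj (Object.))     ; any object can serve as the monitor
(def balance (volatile! 0))  ; deliberately unsynchronized storage

(defn deposit! [amount]
  ;; the body of `locking` is the critical section: only one thread
  ;; at a time runs it, and the monitor is released even on exceptions
  (locking lock-obj
    (vswap! balance + amount)))
```

This is exactly the approach the next slides argue against: correctness now depends on every writer remembering to take `lock-obj`.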
15. The trouble with locks
• Introduce a trade-off between improving performance and reducing complexity
• More complexity => more bugs
• Concurrency bugs are:
– Harder to find
– Harder to replicate
– Harder to debug
– Harder to solve
16. The trouble with locks
• To use locks properly we need a complete understanding of everything that happens in the program
– Rarely possible, and then only by top individuals
– Hardly scalable
17. The trouble with locks
• If the entire program is locked, there's no complexity related to lock management
– But we suffer from poorer performance due to no concurrency
• If nothing is locked => it is up to Murphy
18. Managing locks
• What to lock
• When to lock
– What's the right time for a specific lock
– What's the right order for a series of locks
• When to unlock
– The right time for a specific lock
– The right order for a series of locks
19. What to lock?
• Pessimistic approach – any accessed value, both read and write
• Optimistic approach – any value we try to write to
– What happens if a read value is used in future writes?
• We cannot trust writes that are based on an unlocked read
20. When to lock?
• Grab the lock as soon as possible
– Prevents others from taking it
• Postpone the locking as much as possible
– Less effect on the rest of the threads
21. When to unlock?
• The first release defines the end of the critical section
• Release lock(x) after writing to x
• Release lock(x, y, z) after the writes to x, y, z
22. Grabbing several locks
• In what order?
– Ordered vs. unordered
• What to do if we can't grab them all?
– Keep what we have and retry
– Release what we have and restart
23. Unordered + keeping the locks
Thread 1:
• Needs locks A and B
• (grabs A)
• Waits till B is unlocked
Thread 2:
• Needs locks A and B
• (grabs B)
• Waits till A is unlocked
Deadlock!
24. Unordered + releasing the locks
Thread 1:
• Needs locks A and B
• (grabs A)
• (can't get B)
• (frees A)
Thread 2:
• Needs locks A and B
• (grabs B)
• (can't get A)
• (frees B)
Livelock!
25. Ordered
• Need to decide on a strict order
• Need to enforce it throughout the software
• Need to enforce it on components that interact with the software
• Need to adapt to the order that was used in other components
• Need to update all of the places when there's a change that affects the order
– e.g., in case of refactoring
• Both code structure and element names
26. Who grabs the lock
• Need to prioritize the locking order
– Need to update the priority based on the application's state
• Otherwise we may cause starvation
– Thread A waits for a lock on X, while other threads keep grabbing that lock before thread A succeeds
27. Debugging concurrent software may introduce heisenbugs
28. Writing correct concurrent software is very complicated
Complexity causes bugs
Known unknowns
29. Writing correct concurrent software is always harder than you think
The delta between how hard it is and how hard you think it is transforms into bugs that are almost impossible to solve
Unknown unknowns
30. Why does it happen?
• Locks have the same abstraction level as types have in assembly
– They don't have one
• Types are used to allow correct interpretation of areas in memory
– A semantic aspect of the software
• Locks are used to allow correct access to areas in memory
– A syntactic aspect of the software
• Lower-level constructs mixed with a higher-level language
31. What's the solution?
• Types allow defining a semantic interpretation of memory areas
– Each access to a memory area has to pass through the type information
• Need to find a mechanism that would define concurrency semantics for areas in memory
– So each access to the memory area would pass through the concurrency semantics information
32. What's the solution?
• Add another level of indirection
• Manage changes based on concurrency semantics
• Reference types
33. Reference types: [diagram] symbol → concurrency semantics → (type info + memory)
34. (as opposed to) [diagram] symbol → type info → memory
35. What happens when changing? [diagram] symbol → type info → memory
36. What happens when changing? [diagram] the symbol's concurrency semantics now points at new type info + other memory; the old memory area may be reclaimed by the GC
37. Clojure's epochal time model: [diagram] a symbol with concurrency semantics; State 1 → (function) → State 2 → (function) → State 3
38. State: the value of an identity at a given time
State can be changed by applying a function to an identity
39. Reference types
• Providing concurrency semantics as part of the language
– The developer needs to decide what the right concurrency semantics of the element is
• Just like deciding what the type of the element is
• When combined with immutability, it almost eliminates the risk caused by concurrency
40. Declaring the semantics as opposed to implementing it (using locks)
41. Concurrency semantics
• The change is performed on:
– The current thread (synchronous)
– Another thread (asynchronous)
• A change in the element's state can be:
– Visible to other threads (shared)
– Not visible to other threads (isolated)
• A change in the element's state can be:
– Coordinated with changes to other elements
– Not coordinated with changes to other elements
42-46. Concurrency semantics

| Reference type | Visibility | Coordinated | Synchronous |
|----------------|------------|-------------|-------------|
| var            | isolated   | no          | yes         |
| ref            | shared     | yes         | yes         |
| atom           | shared     | no          | yes         |
| agent          | shared     | no          | no          |

(An isolated yet coordinated change has no meaning.)
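The four reference types in the table each take one line to create; a quick sketch (names are illustrative):

```clojure
(def v 1)           ; var: root value shared; rebinding is thread-local
(def r (ref 1))     ; ref: shared, coordinated, synchronous
(def a (atom 1))    ; atom: shared, uncoordinated, synchronous
(def ag (agent 1))  ; agent: shared, uncoordinated, asynchronous

;; all four are read uniformly with deref / @
[v @r @a @ag]
;;=> [1 1 1 1]
```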
47. Agent
• A value that can be shared between threads
• The change is not coordinated with other elements
• Execution is performed in an asynchronous manner
– By a different thread
48. Agent
• Creation:
– (agent <value>)
– (def a (agent <value>))
• Reading:
– (deref <the-agent>)
– @<the-agent>
49. Agent – activation
• Activation:
– (send <the-agent> func args)
• Executed on a predefined thread pool
– (send-off <the-agent> func args)
• For blocking / heavy functions – uses a new thread
• send and send-off return immediately
– The return value is the agent
50. Agent – activation
• Agents are aware of transactions
• An agent can be activated within a transaction
– send or send-off within dosync
– The agent waits for the transaction to succeed before activating
• To prevent multiple executions due to retries
51. Agent – waiting
• Agents run asynchronously
– We may reach a point in the program where we need their updated value
• We need to wait for them to complete
– (await <agents…>)
• May block forever
• Returns nil
– (await-for <millis> <agents…>)
• Waits up to the given number of milliseconds
• Returns nil if the return is due to the timeout
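The send/await flow can be sketched in a few lines (`counter` is an illustrative name):

```clojure
(def counter (agent 0))

;; send queues (func current-value args...) on a thread pool and
;; returns the agent immediately
(send counter + 5)
(send counter inc)

;; await blocks until the actions sent so far from this thread finish
(await counter)
@counter
;;=> 6
```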
52. Error handling
• Agents are executed on a different thread than the one that created them
• In case of error, they move to a FAILED state
• Any subsequent send would raise the same error
• Can be restarted with:
– (restart-agent <the-agent> <new-state>)
53. Error handling
• It is possible to set an error-handling function on an agent
• The function is invoked in case of an error
• (set-error-handler! <the-agent> <err-fn>)
– The error-handling function receives two arguments:
• The agent
• The exception
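A sketch of an error handler in action. Note that it also uses `set-error-mode!` (a core function not shown on the slide): with the `:continue` error mode the handler runs and the agent keeps its old value, instead of entering the FAILED state that would require `restart-agent`.

```clojure
(def worker (agent 10))

;; the handler receives the agent and the exception
(set-error-handler! worker
                    (fn [ag ex] (println "agent failed:" (.getMessage ex))))
(set-error-mode! worker :continue)

(send worker / 0)  ; ArithmeticException, thrown on the agent's thread
(await worker)
@worker
;;=> 10, the pre-failure value survives

(send worker inc)  ; the agent is still usable
(await worker)
@worker
;;=> 11
```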
54. Var
• A var's root value is visible in all threads
• We can rebind its value, but the change is visible only in the rebinding thread
• Use def to create a var
• (var <the-var-name>) returns the var itself
– Or use the reader macro #'<the-var-name>
• #'a ;=> #'theNS/a
55. Var
• (def ^:dynamic a 8) creates a var that is re-bindable
– Note that the ^:dynamic metadata goes on the symbol, not the value
• To rebind a var
– The common way:
• (binding [binding-pairs] <expression>)
– Use set! within binding to re-bind the var to a new value
56. Var
• The much less used way to rebind vars:
– (with-bindings* <binding-map> <fn> <args…>)
• The binding map pairs each var with its new value
• That's where the reader macro #' becomes handy
– (with-bindings {#'<the-var> <the-value>} <expression>)
57. Var
• It is also possible to change the root value of a var
– The root value is the value exposed to all the threads
• (alter-var-root <the-var> f <args…>)
– Note that the var's current value is passed as the first argument to f
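The thread-local vs. root-value distinction in one sketch (`*debug*` and `total` are illustrative names):

```clojure
(def ^:dynamic *debug* false)  ; ^:dynamic goes on the symbol

;; binding installs a thread-local value; the root is untouched
(binding [*debug* true]
  *debug*)
;;=> true
*debug*
;;=> false, the root value that every other thread still sees

;; alter-var-root swaps the root value itself, for all threads;
;; the old root value is passed as the first argument to the fn
(def total 0)
(alter-var-root #'total inc)
total
;;=> 1
```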
58. Atom
• An atom's value is shared between threads
• A change in an atom's value is visible to all threads
• The change is not coordinated with other atoms
• The change is atomic – it happens at a single point in time
• Execution is synchronous
59. Atom
• Creation:
– (atom <value>)
– (def a (atom <value>))
• Reading an atom's value:
– (deref <the-atom>)
– @<the-atom>
60. Atom
• (swap! <the-atom> func args)
– The first argument of func is the pre-change value of the atom
• The function's result becomes the new value
• (reset! <the-atom> val)
– Changes the atom's value to val
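A short sketch of both update forms (`hits` is an illustrative name):

```clojure
(def hits (atom {}))

;; swap! applies the fn to the current value atomically;
;; the extra arguments are appended after it
(swap! hits update :home (fnil inc 0))
(swap! hits update :home (fnil inc 0))
@hits
;;=> {:home 2}

;; reset! installs a value unconditionally
(reset! hits {})
@hits
;;=> {}
```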
61. Ref
• A ref's value can be shared between threads
• The change can be coordinated with other refs
– It is always performed within a transaction, which can span several refs
• Execution is synchronous
62. Ref
• Creation:
– (ref <value>)
– (def a (ref <value>))
• Reading:
– (deref <the-ref>)
– @<the-ref>
63. Ref
• Modification of a ref is done using:
– (alter <the-ref> func args)
• The ref's current value is passed as the first argument of func
– (ref-set <the-ref> v)
• Using only the above will not work!
64. Ref
• Need to execute these commands within a transaction
• Use (dosync <expr…>)
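The classic coordinated-change sketch is a transfer between two refs (`checking`, `savings`, and `transfer!` are illustrative names):

```clojure
(def checking (ref 100))
(def savings  (ref 500))

(defn transfer! [from to amount]
  ;; the two alters form one atomic, coordinated change;
  ;; calling alter outside dosync throws IllegalStateException
  (dosync
    (alter from - amount)
    (alter to   + amount)))

(transfer! savings checking 50)
[@checking @savings]
;;=> [150 450]
```

No interleaving of threads can ever observe the money in both accounts or in neither.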
65. Transaction
• Transactions maintain the ACID properties:
– Atomic
• The change happens at a single point in time, for all the participating values
– Or it fails entirely
– Consistent
• At any given point the consistency rules hold
– It is possible to add such rules
– Isolated
• A change made within a transaction is not visible to an outside viewer while the transaction executes
– No side effects
– Durable
• Once the transaction succeeds, its effects are not susceptible to system failures
66. Transactions and side effects
• Transactions may be retried
• Do not perform side effects in the body of alter / swap!
– Any I/O, DB call, …
67. Software Transactional Memory (STM)
• Clojure uses an STM to update refs
– (Atoms are updated with a lock-free compare-and-swap, not the STM)
• The STM maintains the ACI properties
– As it runs in memory, there is no durability (no writing to disk)
• Clojure's STM uses the MVCC algorithm
– Multiversion concurrency control
– Used in commercial DBs, such as Oracle's
68. How the update works
• No assignment in the developer's code
• The developer provides a function
– How to create the new value based on the old value
• The update is managed by the system
– There are locks behind the scenes
• The update function is just one of the things that can be provided by the developer
– More things can be added
69. Validation
• It is possible to provide a validator when creating a ref / atom / var / agent
– (<ctor> <initial-val> :validator <fn>)
– (set-validator! <elem> <fn>)
• The validation function accepts one argument, which is the new value
– Returns either true or false
• If the validation function fails, the change fails
– No retry
– Note that an atom's update is validated as well before it is applied
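A sketch of a validator rejecting a change (`stock` is an illustrative name):

```clojure
;; the validator must return truthy for every proposed new value
(def stock (atom 100 :validator (complement neg?)))

(swap! stock - 30)
;;=> 70

;; a change that fails validation throws and leaves the value untouched
(try (swap! stock - 1000)
     (catch IllegalStateException _ :rejected))
;;=> :rejected
@stock
;;=> 70
```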
70. Observing changes
• It is possible to add a function that will be invoked upon a change in an element
– Var / atom / ref / agent
• (add-watch <elem> <key> <watch-fn>)
– <elem>: the var / atom / ref / agent
– <key>: a unique identifier of the watch-fn
– <watch-fn>: a function that accepts 4 arguments:
• <key> – the key used when the fn was attached to the elem
• <elem> – the changed element
• <old-val> – the old value of the element
• <new-val> – the new value of the element
71. Observing changes
• Within the watch function:
– Do not deref the element to get its value
• It may already differ from both the old and the new value
– Ignore the key
• Use the key when removing the watch:
– (remove-watch <elem> <key>)
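A sketch of a watch that logs transitions (`temperature`, `changes`, and the `:logger` key are illustrative names). On atoms, watches run synchronously on the updating thread, so the log below is deterministic:

```clojure
(def temperature (atom 20))
(def changes (atom []))

(add-watch temperature :logger
           (fn [key elem old-val new-val]
             ;; use old-val / new-val; @elem may already be different
             (swap! changes conj [old-val new-val])))

(reset! temperature 25)
(reset! temperature 30)
@changes
;;=> [[20 25] [25 30]]

(remove-watch temperature :logger)
```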
72. Pendings
73. What are pendings?
• A result of a calculation, to be used later
• The questions are:
– Who provides the calculation?
– When does it start?
74. What are pendings?
• A box that contains the result of a computation
• Future
– The computation is defined upon initialization
– Starts when the future is defined
• Delay
– The computation is defined upon initialization
– Starts when somebody asks for the result of the computation
• Promise
– The computation is NOT defined upon initialization
– It is up to someone who can access the promise to provide it
75. Future / delay
• An asynchronous (future) or deferred (delay) computation
• Creation:
– (future <form>) / (delay <form>)
– (def ftr (future <form>))
• Reading:
– (deref ftr)
– @ftr
• Reading a future / delay blocks until the value is ready
76. Future / delay
• When to use:
– For starting long computations that will be needed later
• DB call
• Service over HTTP
• …
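A sketch of the difference in when the computation runs (the `Thread/sleep` stands in for a slow call; names are illustrative):

```clojure
;; a future starts computing immediately, on another thread
(def ftr (future (Thread/sleep 50) :fetched))

;; a delay runs its body only on the first deref, on the deref-ing thread
(def dly (delay (println "computing...") 42))

@ftr  ; blocks until the sleep finishes
;;=> :fetched
@dly  ; prints "computing..." on this first deref
;;=> 42
@dly  ; cached: the body does not run again
;;=> 42
```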
77. Promise
• A promise is a "box" that holds a data element
– Not a computation
• The "box" can be filled once, and then its value can be read
– Subsequent attempts to "fill" the box fail silently
78. Promise
• Creation:
– (promise)
– (def p (promise))
• Reading:
– (deref p)
– @p
• Setting the value:
– (deliver p <the-val>)
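A sketch of the deliver-once behavior, with a consumer thread blocked on the empty box (`p` and `consumer` are illustrative names):

```clojure
(def p (promise))

;; a consumer can block on the promise before it is filled
(def consumer (future (* 2 @p)))

(deliver p 21)  ; fills the box and unblocks the consumer
(deliver p 99)  ; ignored: a promise can be filled only once

@p
;;=> 21
@consumer
;;=> 42
```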
79. That's all for today