
- 1. Papers We Love, San Francisco Edition. July 24th, 2014. Henry Robinson, henry@cloudera.com / @henryr
- 2. • Software engineer at Cloudera since 2009 • My interests are in databases and distributed systems • I write about them - in particular, about papers in those areas - at http://the-paper-trail.org
- 4. Papers We Love San Francisco Edition July 24th, 2014 Henry Robinson henry@cloudera.com / @henryr Papers of which we are quite fond
- 5. • Impossibility of Distributed Consensus with One Faulty Process, by Fischer, Lynch and Paterson (1985) • Dijkstra Prize winner, 2001
- 6. • Walk through the proof (leaving rigour for the paper itself) • Show how this gives rise to a framework for thinking about distributed systems
- 7. Consensus, or: agreeing to agree
- 8. • Consensus is the problem of having a set of processes agree on a value proposed by one of those processes
- 10. • Validity: the value agreed upon must have been proposed by some process (safety) • Termination: at least one non-faulty process eventually decides (liveness) • Agreement: all deciding processes agree on the same value (safety)
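The three properties can be phrased as checks on a finished run. A minimal sketch (the dictionaries and function names here are my own illustration, not from the talk): `proposed` maps each process to the value it proposed, `decided` maps each non-faulty process that decided to its decision.

```python
def check_validity(proposed, decided):
    """Validity: every decided value was proposed by some process."""
    return all(v in proposed.values() for v in decided.values())

def check_agreement(decided):
    """Agreement: all deciding processes decided the same value."""
    return len(set(decided.values())) <= 1

def check_termination(decided):
    """Termination: at least one non-faulty process eventually decided."""
    return len(decided) >= 1

proposed = {"p0": 0, "p1": 1, "p2": 1}
decided = {"p0": 1, "p2": 1}  # p1 may have crashed before deciding
assert check_validity(proposed, decided)
assert check_agreement(decided)
assert check_termination(decided)
```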
- 11. Transactional Commit. “Should I commit this transaction?” → [Magic consensus protocol] → YES! / No :(
- 12. Replicated State Machines. 1: Client proposes that state N should be S. 2: Magic consensus protocol. 3: The new state is written to the log on each of Node 1, Node 2 and Node 3.
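The replicated state machine idea can be sketched in a few lines (my own toy, not from the talk): once the magic consensus protocol has fixed the order of commands in the log, every node that applies the same log reaches the same state.

```python
class Node:
    """A toy replica: applies an agreed-upon log of commands to local state."""
    def __init__(self):
        self.state = 0
        self.log = []

    def apply(self, command):
        self.log.append(command)
        self.state = command(self.state)

nodes = [Node() for _ in range(3)]

# Pretend consensus has already agreed on this command order.
agreed_log = [lambda s: s + 1, lambda s: s * 10, lambda s: s - 2]
for cmd in agreed_log:
    for n in nodes:
        n.apply(cmd)

# Same log, same deterministic commands => identical replicas.
assert len({n.state for n in nodes}) == 1
```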
- 15. Strong Leader Election. 1: “Who’s the leader?” (asked by a cast of millions). 2: Magic consensus protocol. 3: There can only be one.
- 16. What does FLP actually say?
- 20. Fischer, Lynch, Paterson. Choose at most two.
- 26. Distributed* consensus is impossible when at least one process might fail • “[a] surprising result” • *distributed, i.e. message passing • impossible: no algorithm solves consensus (Termination, Validity, Agreement) in every case • fail: crash failures only
- 29. Hierarchy of Failure Modes • Crash failures: fail by stopping • Omission failures: fail by dropping messages • Byzantine failures: fail by doing whatever the hell I like
- 30. More on the system model
- 31. • The system model is the abstraction we layer over messy computers and networks in order to actually reason about them.
- 32. • Message deliveries are the only way that nodes may communicate • Messages are delivered in any order • But messages are never lost (cf. the crash model vs. the omission model), and are always delivered exactly once
- 33. • Nodes do not have access to a shared clock • So they cannot mutually estimate the passage of time • Messages are the only way that nodes may coordinate with each other
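The system model above can be made concrete with a toy network (my own sketch, not from the paper): in-flight messages form a bag, and an adversarial scheduler delivers them one at a time in any order it likes, but never loses one and never delivers twice.

```python
import random

class Network:
    """Asynchronous, reliable, exactly-once message delivery in adversarial order."""
    def __init__(self, seed=0):
        self.in_flight = []           # the bag of sent-but-undelivered messages
        self.rng = random.Random(seed)

    def send(self, dst, msg):
        self.in_flight.append((dst, msg))

    def deliver_one(self):
        """Deliver some pending message; the order is up to the adversary."""
        i = self.rng.randrange(len(self.in_flight))
        return self.in_flight.pop(i)

net = Network(seed=42)
net.send("p1", "hello")
net.send("p2", "world")
while net.in_flight:
    dst, msg = net.deliver_one()  # eventually every message arrives, exactly once
assert net.in_flight == []
```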
- 34. The Proof itself
- 35. Some definitions • Configuration: the state of every node in the system, plus the set of sent but undelivered messages • Initial configuration: a configuration in which each node holds the value it would propose at time 0 • Univalent: a configuration from which only one decision is possible, no matter what messages are delivered (0-valent and 1-valent configurations can only decide 0 or 1 respectively) • Bivalent: a configuration from which either decision value is still possible
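Valency can be made operational for toy protocols (my own construction, not from the paper): the valency of a configuration is the set of decisions reachable over all possible delivery orders. Here the "protocol" simply decides whichever value arrives first, so its initial configuration is bivalent.

```python
from itertools import permutations

def run(order):
    """Run the toy protocol over one delivery order: decide the first value seen."""
    decided = None
    for msg in order:
        if decided is None:
            decided = msg
    return decided

pending = [0, 1]  # sent but undelivered messages in the initial configuration
reachable = {run(p) for p in permutations(pending)}
assert reachable == {0, 1}  # both decisions reachable: the configuration is bivalent
```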
- 40. Proof sketch: start from an initial, ‘undecided’ configuration (Lemma 2: one always exists). As messages are delivered, the system moves through further undecided states (Lemma 3: from an undecided state, more message deliveries can always reach another undecided state). So some execution never decides, and termination fails.
- 41. Lemma 2: Communication Matters
- 42. 2-node system. Suppose every initial configuration is univalent, for example: C:00 → V:1, C:01 → V:0, C:11 → V:0, C:10 → V:1. (C:XY means process 0 has initial value X, process 1 has initial value Y.)
- 44. 2-node system. Some two of these configurations, say C:00 (V:1) and C:01 (V:0), differ only in one node’s initial value, but their valencies are different.
- 48. 2-node system. Run the same execution of the protocol (i.e. the same set of delivered messages) from each of the two configurations: one run decides 1, the other decides 0. Now suppose the node whose initial value differs fails before taking any steps. Are the configurations any different? For the remaining process there is no difference in initial state, and the execution is the same, yet the outcome differs: contradiction. So not every initial configuration can be univalent.
- 49. Every protocol has an undecided (‘bivalent’) initial state
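Lemma 2's chain argument generalises beyond two nodes and can be sketched in code (my own illustration): walk from the all-0 initial configuration to the all-1 configuration, flipping one process's input at a time. Validity forces the two endpoints to different valencies, so some adjacent pair along the chain must differ, and those two configurations look identical to every process except the flipped one.

```python
def chain(n):
    """Initial configurations from all-0 to all-1, flipping one input at a time."""
    return [tuple([1] * i + [0] * (n - i)) for i in range(n + 1)]

def find_critical_pair(valency, n):
    """Find adjacent configurations in the chain with different valencies."""
    cfgs = chain(n)
    for a, b in zip(cfgs, cfgs[1:]):
        if valency[a] != valency[b]:
            return a, b
    return None

# Suppose a protocol assigned one valency to each initial configuration.
# Validity pins down the unanimous endpoints; the middle is arbitrary
# (here: majority, purely for illustration).
valency = {c: (1 if sum(c) >= 2 else 0) for c in chain(3)}
pair = find_critical_pair(valency, 3)
assert pair is not None                          # some adjacent pair must differ
assert sum(x != y for x, y in zip(*pair)) == 1   # and it differs at exactly one process
```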
- 50. Lemma 3: Indecisiveness is Sticky
- 52. Take a bivalent configuration C, and some message e that is sent in C. Let D be the set of configurations reached by delivering e last, after any sequence of configurations in which e has not yet been delivered. Claim: one of the configurations in D must be bivalent.
- 53. • Consider the possibilities: • If one of the configurations in D is bivalent, we’re done • Otherwise, show that the lack of a bivalent state leads to a contradiction • Do this by first showing that there must be both 0-valent and 1-valent configurations in D • and then that this leads to a contradiction
- 54. Because C is bivalent, some 0-valent configuration is reachable from it. Either the protocol reaches that 0-valent configuration before receiving e (1: C moves to a 0-valent configuration without receiving e; 2: e is then received), in which case delivering e there yields a configuration that is in D and must be 0-valent…
- 55. Or the protocol receives e first and arrives at the 0-valent configuration afterwards (1: e is received; 2: a 0-valent state is arrived at), in which case the configuration in which e arrived is in D and must itself be 0-valent, since we assumed D contains no bivalent configurations. The same argument gives a 1-valent configuration in D.
- 56. Now for the contradiction
- 57. • There must be two configurations C0 and C1, separated by a single message m (so C0 + m = C1), where receiving e in Ci yields a configuration Di of valency i • We will write that as Ci + e = Di • So C0 + m = C1 • and C0 + m + e = C1 + e = D1 • and C0 + e = D0
- 58. • Now consider the destinations of m and e. If they go to different processes, their receipt is commutative: • C0 + m + e = D1 • C0 + e + m = D0 + m = D1 • Contradiction: the 1-valent configuration D1 is reachable from the 0-valent configuration D0!
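The commutativity step can be checked concretely (my own sketch, not from the paper): if e and m are delivered to different processes, the resulting configuration is the same in either order, because each delivery only changes its recipient's state.

```python
def deliver(config, event):
    """Apply one message delivery: update only the recipient's state."""
    proc, update = event
    new = dict(config)
    new[proc] = update(new[proc])
    return new

config = {"p": 0, "q": 0}
e = ("p", lambda s: s + 1)   # e goes to process p
m = ("q", lambda s: s + 10)  # m goes to a different process q

# Deliveries to different processes commute.
assert deliver(deliver(config, e), m) == deliver(deliver(config, m), e)
```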
- 59. • Instead, e and m might go to the same process p • Consider a deciding computation R from the bivalent configuration C0 in which p takes no steps (i.e. p looks as if it has failed) • To get from C0 to D0 and D1, only e and m are received, so only p takes any steps to get there • So R can equally be applied from D0 and from D1
- 60. • Since D0 and D1 are 0-valent and 1-valent respectively, the configurations D0 + R and D1 + R must decide 0 and 1 respectively.
- 61. • Now remember: • A = C0 + R, and since R is a deciding run, A has decided on a single value • D1 = C0 + m + e • D0 = C0 + e • Since R contains no steps by p, it commutes with e and m: • C0 + R + m + e = A + m + e = D1 + R ⇒ decides 1 • C0 + R + e = A + e = D0 + R ⇒ decides 0 • So both decisions are reachable from A, even though A has already decided: contradiction
- 62. • To recap: let e be some message that might be sent in configuration C. Let D be the set of all configurations where e is received last, and let S be the set of configurations where e has not been received. • D either contains a bivalent configuration, or both 0- and 1-valent configurations. If it contains a bivalent configuration, we’re done, so assume it does not. • Then there must be some C0 and C1 in S where C0 + e = D0 is 0-valent, C1 + e = D1 is 1-valent, and C1 = C0 + e'. • Consider the destinations of e and e'. If they are not the same, then C0 + e + e' = C0 + e' + e = C1 + e = D1, which is 1-valent; but C0 + e = D0 is 0-valent and reaches D1: contradiction. • If they go to the same process p, let A be the configuration reached by a deciding run from C0 in which p takes no steps (it looks as if p has failed). Applying that same run from D0 and D1 gives E0 and E1. But we can also get from A to E0 (by applying e) or to E1 (by applying e' + e), so both decisions remain reachable from the decided configuration A: contradiction.
- 63. What are the consequences?
- 64. “These results do not show that such problems cannot be ‘solved’ in practice; rather, they point up the need for more refined models of distributed computing that better reflect realistic assumptions about processor and communication timings, and for less stringent requirements on the solution to such problems. (For example, termination might be required only with probability 1.)”
- 65. Paxos • Paxos cleverly defers to its leader election scheme • If leader election is perfect, so is Paxos! • But perfect leader election is solvable iff consensus is. • Impossibilities all the way down…
- 66. Randomized Consensus • A nice way to circumvent technical impossibilities: make their probability vanishingly small • Ben-Or gave an algorithm that terminates with probability 1 • (But the expected number of rounds before it converges can be very large)
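This is not Ben-Or's algorithm itself, just a toy of the probabilistic escape hatch it relies on (my own illustration): if undecided rounds end with independent coin flips, all n processes eventually flip the same side, so a run terminates with probability 1 even though no fixed round bound exists.

```python
import random

def rounds_until_unanimous(n, rng):
    """Flip one coin per process each round; stop when all flips agree."""
    rounds = 0
    while True:
        rounds += 1
        flips = {rng.choice((0, 1)) for _ in range(n)}
        if len(flips) == 1:  # everyone landed on the same value
            return rounds

rng = random.Random(1)
r = rounds_until_unanimous(5, rng)
assert r >= 1  # terminates with probability 1, though possibly slowly
```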
- 67. Failure Detectors • Deep connection between the ability to tell if a machine has failed, and consensus. • Lots of research into ‘weak’ failure detectors, and how weak they can be and still solve consensus
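A timeout-based failure detector can be sketched in a few lines (my own illustration, not from the talk): suspect a peer if no heartbeat has arrived within the timeout, and un-suspect it when a late one shows up. In an asynchronous system such a detector is unreliable, which is exactly why its connection to consensus is interesting.

```python
class FailureDetector:
    """Suspect peers whose last heartbeat is older than the timeout."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = {}

    def heartbeat(self, peer, now):
        self.last_heartbeat[peer] = now

    def suspects(self, now):
        return {p for p, t in self.last_heartbeat.items()
                if now - t > self.timeout}

fd = FailureDetector(timeout=3)
fd.heartbeat("p1", now=0)
fd.heartbeat("p2", now=0)
fd.heartbeat("p1", now=4)
assert fd.suspects(now=5) == {"p2"}  # p1 is fresh; p2 merely *looks* failed
```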
- 68. FLP vs CAP
- 69. • FLP and CAP are not the same thing (see http://the-paper-trail.org/blog/flp-and-cap-arent-the-same-thing/) • FLP is the stronger result: it holds even in a more benign system model than CAP’s, with crash-stop failures and reliable message delivery rather than omission failures
- 70. • Theorem: CAP is actually really boring
- 71. Further reading
- 72. • 100 Impossibility Proofs for Distributed Computing (Lynch, 1989) • The Weakest Failure Detector for Solving Consensus (Chandra and Toueg, 1996) • Sharing Memory Robustly in Message-Passing Systems (Attiya et al., 1995) • Wait-Free Synchronization (Herlihy, 1991) • Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols (Ben-Or, 1983)