Is there anybody out there?
Voxxed Days Vienna 2018
Manuel Bernhardt
manuel.bernhardt.io
• Helping companies to get started with reactive systems ...
• ... or to get them to perform
• Lightbend consulting and training partner (Akka, Akka Cluster, Akka Streams)
@elmanu - https://manuel.bernhardt.io
Motivational quote
Life is a single player game. You’re born alone. You’re going to die alone. All of your interpretations are alone. All your memories are alone. You’re gone in three generations and no one cares. Before you showed up nobody cared. It’s all single player.
— Naval Ravikant
Key issues for building clusters
• discovery: who's there?
• fault detection: who's in trouble?
• load balancing: who can take up work?
Group membership
Implementing group membership
1. Failure Detection
2. Dissemination
3. Consensus
Failure detection
Failure detector key properties
• Completeness: crash-failure of any group member is detected by all non-faulty members
• Accuracy: no non-faulty group member is declared as failed by any other non-faulty group member (no false positives)
Also relevant in practice:
• speed
• network message load
Impossibility result
It is impossible for a failure detector algorithm to deterministically achieve both completeness and accuracy over an asynchronous unreliable network 1
1 Chandra, Toueg: Unreliable failure detectors for reliable distributed systems (1996)
Trade-offs
• Strong - Weak completeness: all / some non-faulty members detect a crash
• Strong - Weak accuracy: there are no / some false positives
In practice most applications prefer strong completeness with a weaker form of accuracy
Failure Detector strategies
• heartbeat: no heartbeat = failure
• ping: no response = failure
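The heartbeat strategy can be sketched in a few lines of Scala (class and method names are illustrative, not Akka's API): a member that has stayed silent longer than a fixed timeout is declared failed.

```scala
// Minimal sketch of a heartbeat-style failure detector: a member that has
// not sent a heartbeat within `timeoutMillis` is considered failed.
// Hypothetical API for illustration only.
final class HeartbeatFailureDetector(timeoutMillis: Long) {
  private var lastHeartbeat = Map.empty[String, Long]

  // Record a heartbeat received from `member` at time `now` (millis).
  def heartbeat(member: String, now: Long): Unit =
    lastHeartbeat += (member -> now)

  // Boolean verdict: no heartbeat within the timeout window = failure.
  def isAvailable(member: String, now: Long): Boolean =
    lastHeartbeat.get(member).exists(last => now - last <= timeoutMillis)
}
```

The fixed timeout is exactly the weakness the accrual detectors below address: one hard-coded threshold cannot adapt to changing network conditions.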
Phi Adaptive Accrual Failure Detector 3
• has a cool name
• adaptive: adjusts to network conditions
• introduces the notion of accrual failure detection: suspicion value φ rather than a boolean (trusted or suspected)
3 N. Hayashibara et al: The ϕ Accrual Failure Detector (2004)
Phi Adaptive Accrual Failure Detector 3
Example: master and worker processes
• φ(w) > 8: stop sending new work to the node
• φ(w) > 10: start to rebalance current tasks of the worker to other nodes
• φ(w) > 12: remove the worker from the list of nodes
3 N. Hayashibara et al: The ϕ Accrual Failure Detector (2004)
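The suspicion value φ can be sketched as follows. It answers: given the observed mean and standard deviation of heartbeat inter-arrival times, how unlikely is it that the current heartbeat is merely late rather than lost? This sketch uses the logistic approximation of the normal distribution's tail that Akka's PhiAccrualFailureDetector also uses; the object and parameter names here are illustrative.

```scala
// Sketch of the accrual idea behind the ϕ failure detector (simplified from
// Hayashibara et al. 2004): a continuous suspicion level instead of a
// boolean verdict. Assumes normally distributed heartbeat inter-arrival
// times; illustrative names, not Akka's API.
object PhiAccrual {
  // ϕ = -log10(probability that a heartbeat arrives later than `elapsedMillis`)
  def phi(elapsedMillis: Double, meanMillis: Double, stdDevMillis: Double): Double = {
    val y = (elapsedMillis - meanMillis) / stdDevMillis
    // Logistic approximation of the normal CDF tail.
    val e = math.exp(-y * (1.5976 + 0.070566 * y * y))
    if (elapsedMillis > meanMillis) -math.log10(e / (1.0 + e))
    else -math.log10(1.0 - 1.0 / (1.0 + e))
  }
}
```

At elapsed == mean, φ ≈ 0.3; once the silence grows well past the usual inter-arrival time, φ rises steeply, which is what makes graduated threshold schemes like the 8 / 10 / 12 example above possible.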
New Adaptive Accrual Failure Detector 4
• much simpler to calculate the suspicion level than Phi
• performs slightly better and is more adaptive 5
4 B. Satzger et al: A new adaptive accrual failure detector for dependable distributed systems (2007)
5 https://manuel.bernhardt.io/2017/07/26/a-new-adaptive-accrual-failure-detector-for-akka/
SWIM Failure Detector
As you swim lazily through the milieu,
The secrets of the world will infect you 6
• has both a dissemination and a failure detection component
• scalable membership protocol
• members are first suspected and not immediately flagged as failed
6 A. Das et al: SWIM: scalable weakly-consistent infection-style process group membership protocol (2002)
Lifeguard Failure Detector
• based on SWIM, developed by Hashicorp 7
• memberlist implementation 8
• extensions to the SWIM protocol
• drastically reduces the number of false positives
7 A. Dadgar et al: Lifeguard: SWIM-ing with Situational Awareness (2017)
8 https://github.com/hashicorp/memberlist
Dissemination
How to communicate changes in the cluster?
• members joining
• members leaving
• members failing
Dissemination strategies
• hardware / IP / UDP multicast: not readily (or willingly) enabled in data centres
• gossip protocols
Gossip styles
• gossip to one node at random 10
• round-robin, binary round-robin, round-robin with sequence check 11
• piggy-back on another protocol (e.g. on a failure detector 6): also called infection-style gossip
10 van Renesse et al: A gossip-style failure detection service (1998)
11 S. Ranganathan et al: Gossip-Style Failure Detection and Distributed Consensus for Scalable Heterogeneous Clusters (2000)
6 A. Das et al: SWIM: scalable weakly-consistent infection-style process group membership protocol (2002)
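The first style, gossip to one node at random, can be sketched like this (names are hypothetical, and real protocols gossip richer state than a per-member version counter):

```scala
import scala.util.Random

// Sketch of "gossip to one node at random" (in the spirit of van Renesse
// et al. 1998): every round, a node picks a random peer and the two merge
// their membership views, keeping the freshest version per member.
// Illustrative names only.
object RandomGossip {
  type View = Map[String, Long] // member -> version (e.g. heartbeat counter)

  // Keep the highest version seen for every member.
  def merge(a: View, b: View): View =
    (a.keySet ++ b.keySet).map { m =>
      m -> math.max(a.getOrElse(m, 0L), b.getOrElse(m, 0L))
    }.toMap

  // One gossip round: node `self` exchanges views with one random peer.
  def round(views: Map[String, View], self: String, rnd: Random): Map[String, View] = {
    val peers = views.keys.filter(_ != self).toVector
    if (peers.isEmpty) views
    else {
      val peer   = peers(rnd.nextInt(peers.size))
      val merged = merge(views(self), views(peer))
      views + (self -> merged) + (peer -> merged)
    }
  }
}
```

Because each round infects one more peer with the freshest state, information spreads epidemically: convergence takes O(log n) rounds in expectation.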
What do you even gossip about
Example of gossip optimizations
• Akka Cluster: gossip with a higher probability to nodes that have not already seen a gossip
• Akka Cluster: speeds up gossip (3x) when less than half of the members have seen the latest gossip
• Lifeguard: anti-entropy mechanism based on nodes doing a full sync with a node at random (helps to speed up convergence after a network partition)
Consensus
Impossibility result - group membership
[...] the primary-partition group membership problem cannot be solved in asynchronous systems with crash failures, even if one allows the removal or killing of non-faulty processes that are erroneously suspected to have crashed 9
9 Chandra et al: On the Impossibility of Group Membership (1996)
It would be unwise to make membership-related decisions while there are processes suspected of having crashed
Reaching consensus: time
• Lamport Clocks 12: how do you order events in a distributed system
• Vector Clocks 13: how do you order events in a distributed system and flag concurrent events
• Version Vectors 14 15: how do you order replicas in a distributed system and detect conflicts
• Dotted Version Vectors 16: same as version vectors, but prevent false conflicts
12 L. Lamport: Time, Clocks, and the Ordering of Events in a Distributed System (1978)
13 F. Mattern: Virtual Time and Global States of Distributed Systems (1989)
14 D.S. Parker: Detection of mutual inconsistency in distributed systems (1983)
15 https://haslab.wordpress.com/2011/07/08/version-vectors-are-not-vector-clocks/
16 N. Preguiça: Dotted Version Vectors: Efficient Causality Tracking for Distributed Key-Value Stores (2012)
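A minimal vector clock sketch (illustrative, not Akka's VectorClock) shows the two capabilities listed above: ordering events and flagging the concurrent ones that a Lamport clock cannot distinguish.

```scala
// Sketch of a vector clock (Mattern 1989): one counter per node, merged by
// pointwise max. `before` is the happened-before relation; two clocks that
// are unordered either way are concurrent. Illustrative names only.
final case class VClock(entries: Map[String, Long] = Map.empty) {
  // Local event on `node`: advance that node's counter.
  def tick(node: String): VClock =
    VClock(entries + (node -> (entries.getOrElse(node, 0L) + 1)))

  // Merge on message receipt: pointwise maximum.
  def merge(other: VClock): VClock =
    VClock((entries.keySet ++ other.entries.keySet).map { n =>
      n -> math.max(entries.getOrElse(n, 0L), other.entries.getOrElse(n, 0L))
    }.toMap)

  // Happened-before: every entry <= the other's, and at least one strictly <.
  def before(other: VClock): Boolean = {
    val keys = entries.keySet ++ other.entries.keySet
    keys.forall(k => entries.getOrElse(k, 0L) <= other.entries.getOrElse(k, 0L)) &&
      keys.exists(k => entries.getOrElse(k, 0L) < other.entries.getOrElse(k, 0L))
  }

  def concurrentWith(other: VClock): Boolean =
    !before(other) && !other.before(this) && this != other
}
```

This is exactly the shape used in Akka's Gossip below: the gossiped state carries a vector clock so receivers can tell newer, older, and conflicting (concurrent) versions apart.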
Reaching consensus: time
private[cluster] final case class Gossip(
  members: immutable.SortedSet[Member],
  overview: GossipOverview = GossipOverview(),
  version: VectorClock = VectorClock(),
  tombstones: Map[UniqueAddress, Gossip.Timestamp] = Map.empty)
Reaching consensus: CRDTs
• Conflict-Free Replicated Data Types 17
• strong eventual consistency
• two families:
• CmRDTs (commutativity of operations)
• CvRDTs (convergence of state - merge function)
17 M. Shapiro: Conflict-free Replicated Data Types (2011)
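A minimal CvRDT sketch, assuming the classic grow-only counter (G-Counter): each node increments its own slot, and merge takes the pointwise maximum. Because merge is commutative, associative and idempotent, replicas converge to the same value regardless of delivery order or duplication, which is the essence of strong eventual consistency. Names are illustrative, not Akka Distributed Data's API.

```scala
// Sketch of a CvRDT: a grow-only counter (G-Counter). State is one slot per
// node; the merge function is pointwise max. Illustrative names only.
final case class GCounter(slots: Map[String, Long] = Map.empty) {
  // Each node only ever increments its own slot.
  def increment(node: String, by: Long = 1L): GCounter =
    GCounter(slots + (node -> (slots.getOrElse(node, 0L) + by)))

  // The counter's value is the sum of all slots.
  def value: Long = slots.values.sum

  // Commutative, associative, idempotent merge: pointwise maximum.
  def merge(other: GCounter): GCounter =
    GCounter((slots.keySet ++ other.slots.keySet).map { n =>
      n -> math.max(slots.getOrElse(n, 0L), other.slots.getOrElse(n, 0L))
    }.toMap)
}
```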
Reaching consensus: replicated state machines
Any sufficiently complicated model class contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half a state machine
— Pete Forde
Reaching consensus: replicated state machines
This method allows one to implement any desired form of multiprocess synchronization in a distributed system
— Leslie Lamport 12
12 L. Lamport: Time, Clocks, and the Ordering of Events in a Distributed System (1978)
Reaching consensus: protocols
• fault-tolerant distributed systems
• how do multiple servers agree on a value?
• Paxos 19 20, Raft 21, CASPaxos 22
19 L. Lamport: The Part-Time Parliament (1998)
20 L. Lamport: Paxos made simple (2001)
21 D. Ongaro and J. Ousterhout: In Search of an Understandable Consensus Algorithm (Extended Version) (2014)
22 D. Rystsov: CASPaxos: Replicated State Machines without logs (2018)
Reaching consensus: conventions
• reaching a decision by not transmitting any information
• example: Akka Cluster leader designation
/**
 * INTERNAL API
 * Orders the members by their address except that members with status
 * Joining, Exiting and Down are ordered last (in that order).
 */
private[cluster] val leaderStatusOrdering: Ordering[Member] = ...
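Leader-by-convention can be sketched as follows: every node applies the same deterministic ordering to its current membership view and takes the first member, so all nodes sharing that view agree on the leader without exchanging a single extra message. This is a simplified stand-in for Akka's leaderStatusOrdering; the Member type and status ranking here are hypothetical.

```scala
// Sketch of leader designation by convention: a deterministic ordering that
// every node applies to the same gossiped member set. Members with status
// Joining, Exiting or Down sort last, mirroring the rule in the comment
// above; everything else is ordered by address. Illustrative types only.
object LeaderByConvention {
  final case class Member(address: String, status: String)

  // Statuses that should not be eligible first; all others rank 0.
  private val statusRank: Map[String, Int] =
    Map("Joining" -> 1, "Exiting" -> 2, "Down" -> 3).withDefaultValue(0)

  val leaderOrdering: Ordering[Member] =
    Ordering.by((m: Member) => (statusRank(m.status), m.address))

  // The leader is simply the minimum under the shared ordering.
  def leader(members: Set[Member]): Option[Member] =
    if (members.isEmpty) None else Some(members.min(leaderOrdering))
}
```

Note that this only yields agreement once gossip has converged: nodes with divergent views may briefly disagree about who the leader is, which is why the leader only acts when there are no unreachable members.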
In practice: Akka Cluster
Akka Cluster
• failure detector: φ Adaptive Accrual FD with ping-pong
• dissemination: random biased gossip
• consensus:
• leader by convention
• membership decisions driven by leader (joining, leaving)
Akka Cluster: membership states
Akka Cluster: happy path
Akka Cluster: sad path (aka reality)
Akka Cluster: how do you even
https://manuel.bernhardt.io
Failure detectors
• Chandra, Toueg: Unreliable failure detectors for reliable distributed systems (1996)
• N. Hayashibara et al: The ϕ Accrual Failure Detector (2004)
• B. Satzger et al: A new adaptive accrual failure detector for dependable distributed systems (2007)
• https://manuel.bernhardt.io/2017/07/26/a-new-adaptive-accrual-failure-detector-for-akka/
• A. Das et al: SWIM: scalable weakly-consistent infection-style process group membership protocol (2002)
• A. Dadgar et al: Lifeguard: SWIM-ing with Situational Awareness (2017)
• https://github.com/hashicorp/memberlist
Impossibility results
• M.J. Fischer, N.A. Lynch, and M.S. Paterson: Impossibility of
distributed consensus with one faulty process (1985)
• Chandra, Toueg: Unreliable failure detectors for reliable
distributed systems (1996)
• Chandra et al: On the Impossibility of Group Membership (1996)
Dissemination
• van Renesse et al: A gossip-style failure detection service (1998)
• S. Ranganathan et al: Gossip-Style Failure Detection and Distributed Consensus for Scalable Heterogeneous Clusters (2000)
Consensus - time
• L. Lamport: Time, Clocks, and the Ordering of Events in a Distributed System (1978)
• F. Mattern: Virtual Time and Global States of Distributed Systems (1989)
• D.S. Parker: Detection of mutual inconsistency in distributed systems (1983)
• https://haslab.wordpress.com/2011/07/08/version-vectors-are-not-vector-clocks/
• N. Preguiça: Dotted Version Vectors: Efficient Causality Tracking for Distributed Key-Value Stores (2012)
Consensus - protocols
• F.B. Schneider: Implementing Fault Tolerant Services Using the State Machine Approach: A Tutorial (1990)
• L. Lamport: The Part-Time Parliament (1998)
• L. Lamport: Paxos made simple (2001)
• D. Ongaro and J. Ousterhout: In Search of an Understandable Consensus Algorithm (Extended Version) (2014)
• D. Rystsov: CASPaxos: Replicated State Machines without logs (2018)
Consensus - misc
• M. Shapiro: Conflict-free Replicated Data Types (2011)
Thank you!
• Questions?
• Website: https://manuel.bernhardt.io
• Twitter: @elmanu
