Silberschatz, Galvin, and Gagne ©1999, 16.1
Module 16: Distributed
Coordination
• Event Ordering
• Mutual Exclusion
• Atomicity
• Concurrency Control
• Deadlock Handling
• Election Algorithms
• Reaching Agreement
Event Ordering
• Happened-before relation (denoted by →).
– If A and B are events in the same process, and A was
executed before B, then A → B.
– If A is the event of sending a message by one process and
B is the event of receiving that message by another
process, then A → B.
– If A → B and B → C then A → C.
Implementation of →
• Associate a timestamp with each system event. Require that for
every pair of events A and B, if A → B, then the timestamp of A is
less than the timestamp of B.
• Within each process Pi a logical clock, LCi is associated. The
logical clock can be implemented as a simple counter that is
incremented between any two successive events executed within
a process.
• A process advances its logical clock when it receives a message
whose timestamp is greater than the current value of its logical
clock.
• If the timestamps of two events A and B are the same, then the
events are concurrent. We may use the process identity
numbers to break ties and to create a total ordering.
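The logical-clock rules above can be sketched as a small Python class (the class and method names are illustrative, not from the text). A send increments the local counter; a receive advances the clock past the message's timestamp, so A → B always implies TS(A) < TS(B):

```python
class LamportClock:
    """Logical clock: a counter incremented between successive local events."""
    def __init__(self):
        self.time = 0

    def send_event(self):
        # Sending a message is itself an event; its timestamp travels
        # with the message.
        self.time += 1
        return self.time

    def receive_event(self, msg_timestamp):
        # Advance past the sender's timestamp so the receive is ordered
        # after the send.
        self.time = max(self.time, msg_timestamp) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
ts = p1.send_event()   # P1's clock becomes 1; the message carries ts = 1
p2.receive_event(ts)   # P2's clock jumps to max(0, 1) + 1 = 2
```

Ties (equal timestamps at different processes) are what the process-identity tie-break above resolves.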
Distributed Mutual Exclusion (DME)
• Assumptions
– The system consists of n processes; each process Pi
resides at a different processor.
– Each process has a critical section that requires mutual
exclusion.
• Requirement
– If Pi is executing in its critical section, then no other process
Pj is executing in its critical section.
• We present two algorithms that ensure mutually exclusive
execution of processes in their critical sections.
DME: Centralized Approach
• One of the processes in the system is chosen to coordinate the
entry to the critical section.
• A process that wants to enter its critical section sends a request
message to the coordinator.
• The coordinator decides which process can enter the critical
section next, and it sends that process a reply message.
• When the process receives a reply message from the
coordinator, it enters its critical section.
• After exiting its critical section, the process sends a release
message to the coordinator and proceeds with its execution.
• This scheme requires three messages per critical-section entry:
– request
– reply
– release
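The request/reply/release exchange can be sketched in Python; the Coordinator class below is an illustrative stand-in, with message sends modeled as list appends rather than real network traffic:

```python
from collections import deque

class Coordinator:
    """Centralized DME coordinator: grants the critical section to one
    process at a time and queues later requests (request/reply/release)."""
    def __init__(self):
        self.holder = None        # process currently in its critical section
        self.waiting = deque()    # deferred requesters, FIFO
        self.replies = []         # stand-in for reply messages sent

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            self.replies.append(pid)   # reply: pid may enter now
        else:
            self.waiting.append(pid)   # defer until a release arrives

    def release(self, pid):
        assert pid == self.holder
        self.holder = self.waiting.popleft() if self.waiting else None
        if self.holder is not None:
            self.replies.append(self.holder)

c = Coordinator()
c.request("P1"); c.request("P2")   # P1 enters, P2 is queued
c.release("P1")                    # reply now goes to P2
```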
DME: Fully Distributed Approach
• When process Pi wants to enter its critical section, it generates a
new timestamp, TS, and sends the message request (Pi, TS) to
all other processes in the system.
• When process Pj receives a request message, it may reply
immediately or it may defer sending a reply back.
• When process Pi receives a reply message from all other
processes in the system, it can enter its critical section.
• After exiting its critical section, the process sends reply
messages to all its deferred requests.
DME: Fully Distributed Approach
(Cont.)
• The decision whether process Pj replies immediately to a
request(Pi, TS) message or defers its reply is based on three
factors:
– If Pj is in its critical section, then it defers its reply to Pi.
– If Pj does not want to enter its critical section, then it sends a
reply immediately to Pi.
– If Pj wants to enter its critical section but has not yet entered
it, then it compares its own request timestamp with the
timestamp TS.
If its own request timestamp is greater than TS, then it
sends a reply immediately to Pi (Pi asked first).
Otherwise, the reply is deferred.
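The three cases collapse into one comparison. This Python sketch assumes timestamp ties are broken by process id, as in the total ordering described earlier; the function and parameter names are illustrative:

```python
def should_reply_immediately(state, own_ts, own_pid, req_ts, req_pid):
    """Decide whether Pj answers a request(Pi, TS) at once or defers it.
    state is one of 'in_cs', 'idle', 'wanting'; (timestamp, pid) tuples
    give a total order, so ties on timestamps cannot occur."""
    if state == "in_cs":
        return False                    # defer until Pj leaves its CS
    if state == "idle":
        return True                     # Pj is not competing; reply now
    # state == "wanting": the earlier request wins (Pi asked first?)
    return (req_ts, req_pid) < (own_ts, own_pid)
```

For example, if Pj wants the critical section with timestamp 3 and receives a request stamped 5, it defers; had its own timestamp been 7, it would reply immediately.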
Desirable Behavior of Fully Distributed Approach
• Freedom from Deadlock is ensured.
• Freedom from starvation is ensured, since entry to the critical
section is scheduled according to the timestamp ordering. The
timestamp ordering ensures that processes are served in first-
come, first-served order.
• The number of messages per critical-section entry is
2 x (n – 1).
This is the minimum number of required messages per critical-
section entry when processes act independently and
concurrently.
Three Undesirable Consequences
• The processes need to know the identity of all other processes in
the system, which makes the dynamic addition and removal of
processes more complex.
• If one of the processes fails, then the entire scheme collapses.
This can be dealt with by continuously monitoring the state of all
the processes in the system.
• Processes that have not entered their critical section must pause
frequently to assure other processes that they intend to enter the
critical section. This protocol is therefore suited for small, stable
sets of cooperating processes.
Atomicity
• Either all the operations associated with a program unit are
executed to completion, or none are performed.
• Ensuring atomicity in a distributed system requires a transaction
coordinator, which is responsible for the following:
– Starting the execution of the transaction.
– Breaking the transaction into a number of subtransactions,
and distributing these subtransactions to the appropriate
sites for execution.
– Coordinating the termination of the transaction, which may
result in the transaction being committed at all sites or
aborted at all sites.
Two-Phase Commit Protocol (2PC)
• Assumes fail-stop model.
• Execution of the protocol is initiated by the coordinator after the
last step of the transaction has been reached.
• When the protocol is initiated, the transaction may still be
executing at some of the local sites.
• The protocol involves all the local sites at which the transaction
executed.
• Example: Let T be a transaction initiated at site Si and let the
transaction coordinator at Si be Ci.
Phase 1: Obtaining a Decision
• Ci adds <prepare T> record to the log.
• Ci sends <prepare T> message to all sites.
• When a site receives a <prepare T> message, the transaction
manager determines if it can commit the transaction.
– If no: add <no T> record to the log and respond to Ci with
<abort T>.
– If yes:
add <ready T> record to the log.
force all log records for T onto stable storage.
transaction manager sends <ready T> message to Ci.
Phase 1 (Cont.)
• Coordinator collects responses
– All respond “ready”,
decision is commit.
– At least one response is “abort”,
decision is abort.
– At least one participant fails to respond within time out
period,
decision is abort.
Phase 2: Recording Decision in the Database
• Coordinator adds a decision record
<abort T> or <commit T>
to its log and forces record onto stable storage.
• Once that record reaches stable storage it is irrevocable (even if
failures occur).
• Coordinator sends a message to each participant informing it of
the decision (commit or abort).
• Participants take appropriate action locally.
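The coordinator's side of both phases can be sketched in a few lines of Python. Participants are modeled as callables returning a vote, and a timed-out participant is treated as voting "abort"; the log is a plain list standing in for stable storage (all names are illustrative):

```python
def two_phase_commit(coordinator_log, participants):
    """Coordinator side of 2PC under the fail-stop model. Decision is
    commit only if every participant votes 'ready'."""
    coordinator_log.append("<prepare T>")            # phase 1 begins
    votes = [p() for p in participants]              # collect votes
    decision = "commit" if all(v == "ready" for v in votes) else "abort"
    coordinator_log.append(f"<{decision} T>")        # forced to stable storage;
    return decision                                  # phase 2: inform participants

log = []
two_phase_commit(log, [lambda: "ready", lambda: "ready"])   # -> "commit"
```

Once the decision record is in the log, the outcome is irrevocable, which is exactly why the log write precedes the phase-2 messages.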
Failure Handling in 2PC – Site
Failure
• The log contains a <commit T> record. In this case, the site
executes redo(T).
• The log contains an <abort T> record. In this case, the site
executes undo(T).
• The log contains a <ready T> record; consult Ci. If Ci is down, the
site sends a query-status T message to the other sites.
• The log contains no control records concerning T. In this case,
the site executes undo(T).
Failure Handling in 2PC – Coordinator Ci Failure
• If an active site contains a <commit T> record in its log, then T
must be committed.
• If an active site contains an <abort T> record in its log, then T
must be aborted.
• If some active site does not contain the record <ready T> in its
log then the failed coordinator Ci cannot have decided to
commit T. Rather than wait for Ci to recover, it is preferable to
abort T.
• All active sites have a <ready T> record in their logs, but no
additional control records. In this case we must wait for the
coordinator to recover.
– Blocking problem – T is blocked pending the recovery of
site Si.
Concurrency Control
• Modify the centralized concurrency schemes to accommodate
the distribution of transactions.
• Transaction manager coordinates execution of transactions (or
subtransactions) that access data at local sites.
• Local transaction only executes at that site.
• Global transaction executes at several sites.
Locking Protocols
• Can use the two-phase locking protocol in a distributed
environment by changing how the lock manager is implemented.
• Nonreplicated scheme – each site maintains a local lock manager
which administers lock and unlock requests for those data items
that are stored in that site.
– Simple implementation involves two message transfers for
handling lock requests, and one message transfer for
handling unlock requests.
– Deadlock handling is more complex.
Single-Coordinator Approach
• A single lock manager resides at a single chosen site; all lock and
unlock requests are made at that site.
• Simple implementation
• Simple deadlock handling
• Possibility of bottleneck
• Vulnerable to loss of concurrency controller if single site fails
• Multiple-coordinator approach distributes lock-manager function
over several sites.
Majority Protocol
• Avoids drawbacks of central control by dealing with replicated
data in a decentralized manner.
• More complicated to implement
• Deadlock-handling algorithms must be modified; deadlock can
occur even when only a single data item is being locked.
Biased Protocol
• Similar to majority protocol, but requests for shared locks
prioritized over requests for exclusive locks.
• Less overhead on read operations than in majority protocol; but
has additional overhead on writes.
• Like majority protocol, deadlock handling is complex.
Primary Copy
• One of the sites at which a replica resides is designated as the
primary site. Request to lock a data item is made at the primary
site of that data item.
• Concurrency control for replicated data handled in a manner
similar to that of unreplicated data.
• Simple implementation, but if primary site fails, the data item is
unavailable, even though other sites may have a replica.
Timestamping
• Generate unique timestamps in distributed scheme:
– Each site generates a unique local timestamp.
– The global unique timestamp is obtained by concatenating
the unique local timestamp with the unique site identifier.
– Use a logical clock defined within each site to ensure the fair
generation of timestamps.
• Timestamp-ordering scheme – combine the centralized
concurrency control timestamp scheme with the 2PC protocol to
obtain a protocol that ensures serializability with no cascading
rollbacks.
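The concatenation of local timestamp and site identifier can be sketched with bit shifting. The field width `site_bits` is an assumed parameter, not something the slides specify; the local timestamp occupies the high-order bits so that ordering is dominated by the logical-clock value and the site id only breaks ties:

```python
def global_timestamp(local_ts, site_id, site_bits=16):
    """Concatenate the local timestamp (high-order part) with the site
    identifier (low-order part), yielding a globally unique timestamp."""
    return (local_ts << site_bits) | site_id

# Two sites issue local timestamp 7; the site ids break the tie.
a = global_timestamp(7, site_id=1)
b = global_timestamp(7, site_id=2)
```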
Deadlock Prevention
• Resource-ordering deadlock-prevention – define a global ordering
among the system resources.
– Assign a unique number to all system resources.
– A process may request a resource with unique number i
only if it is not holding a resource with a unique number
greater than i.
– Simple to implement; requires little overhead.
• Banker’s algorithm – designate one of the processes in the
system as the process that maintains the information necessary
to carry out the Banker’s algorithm.
– Also implemented easily, but may require too much
overhead.
Timestamped Deadlock-Prevention
Scheme
• Each process Pi is assigned a unique priority number
• Priority numbers are used to decide whether a process Pi should
wait for a process Pj; otherwise Pi is rolled back.
• The scheme prevents deadlocks. For every edge Pi → Pj in the
wait-for graph, Pi has a higher priority than Pj. Thus a cycle
cannot exist.
Wait-Die Scheme
• Based on a nonpreemptive technique.
• If Pi requests a resource currently held by Pj, Pi is allowed to
wait only if it has a smaller timestamp than does Pj (Pi is older
than Pj). Otherwise, Pi is rolled back (dies).
• Example: Suppose that processes P1, P2, and P3 have
timestamps 5, 10, and 15 respectively.
– If P1 requests a resource held by P2, then P1 will wait.
– If P3 requests a resource held by P2, then P3 will be rolled
back.
Wound-Wait Scheme
• Based on a preemptive technique; counterpart to the wait-die
scheme.
• If Pi requests a resource currently held by Pj, Pi is allowed to wait
only if it has a larger timestamp than does Pj (Pi is younger than
Pj). Otherwise Pj is rolled back (Pj is wounded by Pi).
• Example: Suppose that processes P1, P2, and P3 have timestamps
5, 10, and 15 respectively.
– If P1 requests a resource held by P2, then the resource will
be preempted from P2 and P2 will be rolled back.
– If P3 requests a resource held by P2, then P3 will wait.
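The two schemes differ only in which way the timestamp comparison points. This Python sketch (the scheme names and return strings are illustrative) reproduces both slides' examples, with a smaller timestamp meaning an older process:

```python
def resolve(scheme, requester_ts, holder_ts):
    """Action taken when the requester asks for a resource the holder owns.
    scheme is 'wait-die' (nonpreemptive) or 'wound-wait' (preemptive)."""
    if scheme == "wait-die":
        # Only an older requester may wait; a younger one dies.
        return "wait" if requester_ts < holder_ts else "rollback requester"
    # wound-wait: an older requester wounds (preempts) the holder;
    # a younger requester waits.
    return "rollback holder" if requester_ts < holder_ts else "wait"

# P1 = 5, P2 = 10, P3 = 15, as in the examples above:
resolve("wait-die", 5, 10)      # P1 waits for P2
resolve("wait-die", 15, 10)     # P3 is rolled back (dies)
resolve("wound-wait", 5, 10)    # P2 is preempted and rolled back (wounded)
resolve("wound-wait", 15, 10)   # P3 waits
```

In both schemes older processes are never rolled back, which is why starvation cannot occur: a rolled-back process keeps its original timestamp and eventually becomes the oldest.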
Deadlock Detection – Centralized
Approach
• Each site keeps a local wait-for graph. The nodes of the graph
correspond to all the processes that are currently either holding
or requesting any of the resources local to that site.
• A global wait-for graph is maintained in a single coordination
process; this graph is the union of all local wait-for graphs.
• There are three different options (points in time) when the wait-for
graph may be constructed:
1. Whenever a new edge is inserted or removed in one of the
local wait-for graphs.
2. Periodically, when a number of changes have occurred in a
wait-for graph.
3. Whenever the coordinator needs to invoke the cycle-
detection algorithm.
• Unnecessary rollbacks may occur as a result of false cycles.
Detection Algorithm Based on
Option 3
• Append unique identifiers (timestamps) to requests from different
sites.
• When process Pi, at site A, requests a resource from process Pj,
at site B, a request message with timestamp TS is sent.
• The edge Pi → Pj with the label TS is inserted in the local wait-for
graph of A. The edge is inserted in the local wait-for graph of B
only if B has received the request message and cannot
immediately grant the requested resource.
The Algorithm
1. The controller sends an initiating message to each site in the
system.
2. On receiving this message, a site sends its local wait-for graph to
the coordinator.
3. When the controller has received a reply from each site, it
constructs a graph as follows:
(a) The constructed graph contains a vertex for every process
in the system.
(b) The graph has an edge Pi → Pj if and only if (1) there is an
edge Pi → Pj in one of the wait-for graphs, or (2) an edge
Pi → Pj with some label TS appears in more than one wait-
for graph.
If the constructed graph contains a cycle ⇒ deadlock.
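Once the controller has built the global graph, deadlock detection is an ordinary cycle search. A minimal depth-first sketch in Python, with the constructed graph given as an adjacency mapping (a representation assumed here, not taken from the slides):

```python
def has_cycle(edges):
    """Detect a cycle in the constructed global wait-for graph.
    edges maps a process to the set of processes it waits for;
    a cycle implies deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on stack / finished
    color = {}

    def visit(u):
        color[u] = GRAY
        for v in edges.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and visit(v)):
                return True            # back edge found: v is on the stack
        color[u] = BLACK
        return False

    return any(color.get(u, WHITE) == WHITE and visit(u) for u in edges)

has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}})   # deadlock
has_cycle({"P1": {"P2"}, "P2": {"P3"}})                 # no deadlock
```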
Fully Distributed Approach
• All controllers share equally the responsibility for detecting
deadlock.
• Every site constructs a wait-for graph that represents a part of
the total graph.
• We add one additional node Pex to each local wait-for graph.
• If a local wait-for graph contains a cycle that does not involve
node Pex, then the system is in a deadlock state.
• A cycle involving Pex implies the possibility of a deadlock. To
ascertain whether a deadlock does exist, a distributed deadlock-
detection algorithm must be invoked.
Election Algorithms
• Determine where a new copy of the coordinator should be
restarted.
• Assume that a unique priority number is associated with each
active process in the system, and assume that the priority
number of process Pi is i.
• Assume a one-to-one correspondence between processes and
sites.
• The coordinator is always the process with the largest priority
number. When a coordinator fails, the algorithm must elect that
active process with the largest priority number.
• Two algorithms, the bully algorithm and a ring algorithm, can be
used to elect a new coordinator in case of failures.
Bully Algorithm
• Applicable to systems where every process can send a message
to every other process in the system.
• If process Pi sends a request that is not answered by the
coordinator within a time interval T, assume that the coordinator
has failed; Pi tries to elect itself as the new coordinator.
• Pi sends an election message to every process with a higher
priority number; Pi then waits for any of these processes to
answer within T.
Bully Algorithm (Cont.)
• If no response within T, assume that all processes with numbers
greater than i have failed; Pi elects itself the new coordinator.
• If answer is received, Pi begins time interval T´, waiting to receive
a message that a process with a higher priority number has been
elected.
• If no such message is received within T´, assume the process with
a higher number has failed; Pi restarts the algorithm.
Bully Algorithm (Cont.)
• If Pi is not the coordinator, then, at any time during execution, Pi
may receive one of the following two messages from process Pj.
– Pj is the new coordinator (j > i). Pi, in turn, records this
information.
– Pj started an election (j > i). Pi, sends a response to Pj and
begins its own election algorithm, provided that Pi has not
already initiated such an election.
• After a failed process recovers, it immediately begins execution
of the same algorithm.
• If there are no active processes with higher numbers, the
recovered process forces all processes with lower numbers to let
it become the coordinator process, even if there is a currently
active coordinator with a lower number.
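The core of the bully algorithm can be compressed into a short sketch, modeling "answers within T" as membership in an `alive` set rather than real timeouts (function and parameter names are illustrative):

```python
def bully_election(initiator, alive):
    """One bully election over priority numbers. alive is the set of
    processes that would answer within T."""
    higher = [p for p in alive if p > initiator]
    if not higher:
        return initiator        # no higher process answers: elect self
    # Some higher process answered; it suppresses the initiator and runs
    # the same algorithm itself, until the highest live process wins.
    return bully_election(max(higher), alive)

bully_election(1, alive={1, 2, 3})   # 3 becomes coordinator
bully_election(2, alive={1, 2})      # 3 has failed, so 2 elects itself
```

The recursion mirrors the "restart the algorithm at a higher process" step; with real messages, each level would involve election and answer messages plus the timeout T.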
Ring Algorithm
• Applicable to systems organized as a ring (logically or
physically).
• Assumes that the links are unidirectional, and that processes
send their messages to their right neighbors.
• Each process maintains an active list, consisting of the priority
numbers of all active processes in the system when the
algorithm ends.
• If process Pi detects a coordinator failure, it creates a new active
list that is initially empty. It then sends a message elect(i) to its
right neighbor, and adds the number i to its active list.
Ring Algorithm (Cont.)
• If Pi receives a message elect(j) from the process on the left, it
must respond in one of three ways:
1. If this is the first elect message it has seen or sent, Pi
creates a new active list with the numbers i and j. It then
sends the message elect(i), followed by the message
elect(j).
2. If i ≠ j, then Pi adds j to its active list and forwards the
message elect(j) to its right neighbor.
3. If i = j, then the active list for Pi now contains the numbers
of all the active processes in the system. Pi can now
determine the largest number in the active list to identify the
new coordinator process.
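A compressed single-pass sketch of the ring election: the real protocol circulates separate elect(k) messages, but the net effect, assuming no failures during the election itself, is that the initiator learns every live number and picks the largest (the list-of-numbers representation is an assumption for illustration):

```python
def ring_election(ring, initiator):
    """Ring election sketch. ring lists the live priority numbers in
    ring order; messages travel to the right neighbor until the
    initiator's elect message has gone all the way around."""
    start = ring.index(initiator)
    active = [initiator]               # the initiator's new active list
    i = (start + 1) % len(ring)
    while ring[i] != initiator:        # forward until the message returns
        active.append(ring[i])         # each process adds its own number
        i = (i + 1) % len(ring)
    return max(active)                 # largest number = new coordinator

ring_election([3, 1, 4, 2], initiator=1)   # process 4 is elected
```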
Reaching Agreement
• There are applications where a set of processes wish to agree on
a common “value”.
• Such agreement may not take place due to:
– Faulty communication medium
– Faulty processes
Processes may send garbled or incorrect messages to
other processes.
A subset of the processes may collaborate with each
other in an attempt to defeat the scheme.
Faulty Communications
• Process Pi at site A, has sent a message to process Pj at site B;
to proceed, Pi needs to know if Pj has received the message.
• Detect failures using a time-out scheme.
– When Pi sends out a message, it also specifies a time
interval during which it is willing to wait for an
acknowledgment message from Pj.
– When Pj receives the message, it immediately sends an
acknowledgment to Pi.
– If Pi receives the acknowledgment message within the
specified time interval, it concludes that Pj has received its
message. If a time-out occurs, Pi retransmits its
message and waits for an acknowledgment.
– Continue until Pi either receives an acknowledgment, or is
notified by the system that B is down.
Faulty Communications (Cont.)
• Suppose that Pj also needs to know that Pi has received its
acknowledgment message, in order to decide on how to proceed.
– In the presence of failure, it is not possible to accomplish
this task.
– It is not possible in a distributed environment for processes
Pi and Pj to agree completely on their respective states.
Faulty Processes (Byzantine Generals
Problem)
• Communication medium is reliable, but processes can fail in
unpredictable ways.
• Consider a system of n processes, of which no more than m
are faulty. Suppose that each process Pi has some private
value of Vi.
• Devise an algorithm that allows each nonfaulty Pi to construct
a vector Xi = (Ai,1, Ai,2, …, Ai,n) such that:
– If Pj is a nonfaulty process, then Ai,j = Vj.
– If Pi and Pj are both nonfaulty processes, then Xi = Xj.
• Solutions share the following properties.
– A correct algorithm can be devised only if n ≥ 3 x m + 1.
– The worst-case delay for reaching agreement is
proportionate to m + 1 message-passing delays.
Faulty Processes (Cont.)
• An algorithm for the case where m = 1 and n = 4 requires two
rounds of information exchange:
– Each process sends its private value to the other 3
processes.
– Each process sends the information it has obtained in the
first round to all other processes.
• If a faulty process refuses to send messages, a nonfaulty
process can choose an arbitrary value and pretend that that
value was sent by that process.
• After the two rounds are completed, a nonfaulty process Pi can
construct its vector Xi = (Ai,1, Ai,2, Ai,3, Ai,4) as follows:
– Ai,i = Vi.
– For j ≠ i, if at least two of the three values reported for
process Pj agree, then the majority value is used to set the
value of Ai,j. Otherwise, a default value (nil) is used.
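The vector construction for m = 1, n = 4 can be sketched as follows. The `round1` and `round2` layouts are assumed data structures (direct values received in round one, and values each process relayed in round two); a missing or refused message appears as None, standing in for the arbitrary placeholder value:

```python
def construct_vector(i, round1, round2, n=4):
    """Build Xi for nonfaulty Pi after two exchange rounds (m = 1, n = 4).
    round1[j] is the value Pj sent Pi directly; round2[k][j] is the value
    Pk relayed in round two as 'Pj's value'."""
    X = [None] * n
    X[i] = round1[i]                 # Ai,i = Vi: Pi's own private value
    for j in range(n):
        if j == i:
            continue
        # Three reports about Pj's value: one direct, two relayed.
        reports = [round1[j]] + [round2[k][j] for k in range(n)
                                 if k not in (i, j)]
        majority = max(set(reports), key=reports.count)
        # Majority (at least 2 of 3) wins; otherwise the default nil.
        X[j] = majority if reports.count(majority) >= 2 else None
    return X

# P3 is faulty and lies to P0 about P1's value; the majority still wins.
r1 = [10, 20, 30, 40]
r2 = {1: [10, 20, 30, 40], 2: [10, 20, 30, 40], 3: [10, 99, 30, 40]}
construct_vector(0, r1, r2)
```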

Distributed datababase Transaction and concurrency controlDistributed datababase Transaction and concurrency control
Distributed datababase Transaction and concurrency control
 
Lecture#5
Lecture#5Lecture#5
Lecture#5
 
Ch4
Ch4Ch4
Ch4
 
Reactive by example - at Reversim Summit 2015
Reactive by example - at Reversim Summit 2015Reactive by example - at Reversim Summit 2015
Reactive by example - at Reversim Summit 2015
 
Concurrency Control in Distributed Systems.pptx
Concurrency Control in Distributed Systems.pptxConcurrency Control in Distributed Systems.pptx
Concurrency Control in Distributed Systems.pptx
 
concurrency control.ppt
concurrency control.pptconcurrency control.ppt
concurrency control.ppt
 
ch6_EN_BK_syn1.pdf
ch6_EN_BK_syn1.pdfch6_EN_BK_syn1.pdf
ch6_EN_BK_syn1.pdf
 
Operating Systems Chapter 6 silberschatz
Operating Systems Chapter 6 silberschatzOperating Systems Chapter 6 silberschatz
Operating Systems Chapter 6 silberschatz
 
7 concurrency controltwo
7 concurrency controltwo7 concurrency controltwo
7 concurrency controltwo
 
7 concurrency controltwo
7 concurrency controltwo7 concurrency controltwo
7 concurrency controltwo
 
Operating system 23 process synchronization
Operating system 23 process synchronizationOperating system 23 process synchronization
Operating system 23 process synchronization
 

More from Jongmyoung Kim

iTunes media server for ubuntu
iTunes media server for ubuntuiTunes media server for ubuntu
iTunes media server for ubuntuJongmyoung Kim
 
Open source for upload
Open source for uploadOpen source for upload
Open source for uploadJongmyoung Kim
 
취업캠프 특강 - 기업의 서비스 개발 프로젝트
취업캠프 특강 - 기업의 서비스 개발 프로젝트취업캠프 특강 - 기업의 서비스 개발 프로젝트
취업캠프 특강 - 기업의 서비스 개발 프로젝트Jongmyoung Kim
 
TTS System을 이용한 교육용 소프트웨어 개발
TTS System을 이용한 교육용 소프트웨어 개발TTS System을 이용한 교육용 소프트웨어 개발
TTS System을 이용한 교육용 소프트웨어 개발Jongmyoung Kim
 
운영체제론 - Ch09
운영체제론 - Ch09운영체제론 - Ch09
운영체제론 - Ch09Jongmyoung Kim
 
Web app service project
Web app service projectWeb app service project
Web app service projectJongmyoung Kim
 

More from Jongmyoung Kim (20)

REST
RESTREST
REST
 
iTunes media server for ubuntu
iTunes media server for ubuntuiTunes media server for ubuntu
iTunes media server for ubuntu
 
Open source for upload
Open source for uploadOpen source for upload
Open source for upload
 
취업캠프 특강 - 기업의 서비스 개발 프로젝트
취업캠프 특강 - 기업의 서비스 개발 프로젝트취업캠프 특강 - 기업의 서비스 개발 프로젝트
취업캠프 특강 - 기업의 서비스 개발 프로젝트
 
TTS System을 이용한 교육용 소프트웨어 개발
TTS System을 이용한 교육용 소프트웨어 개발TTS System을 이용한 교육용 소프트웨어 개발
TTS System을 이용한 교육용 소프트웨어 개발
 
운영체제론 Ch19
운영체제론 Ch19운영체제론 Ch19
운영체제론 Ch19
 
운영체제론 Ch21
운영체제론 Ch21운영체제론 Ch21
운영체제론 Ch21
 
운영체제론 Ch18
운영체제론 Ch18운영체제론 Ch18
운영체제론 Ch18
 
운영체제론 Ch20
운영체제론 Ch20운영체제론 Ch20
운영체제론 Ch20
 
운영체제론 Ch17
운영체제론 Ch17운영체제론 Ch17
운영체제론 Ch17
 
운영체제론 Ch14
운영체제론 Ch14운영체제론 Ch14
운영체제론 Ch14
 
운영체제론 Ch13
운영체제론 Ch13운영체제론 Ch13
운영체제론 Ch13
 
운영체제론 Ch12
운영체제론 Ch12운영체제론 Ch12
운영체제론 Ch12
 
운영체제론 Ch11
운영체제론 Ch11운영체제론 Ch11
운영체제론 Ch11
 
운영체제론 Ch10
운영체제론 Ch10운영체제론 Ch10
운영체제론 Ch10
 
운영체제론 Ch22
운영체제론 Ch22운영체제론 Ch22
운영체제론 Ch22
 
운영체제론 - Ch09
운영체제론 - Ch09운영체제론 - Ch09
운영체제론 - Ch09
 
Web app service project
Web app service projectWeb app service project
Web app service project
 
Bitcoin
BitcoinBitcoin
Bitcoin
 
Bigdata
BigdataBigdata
Bigdata
 

Recently uploaded

Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Hr365.us smith
 
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...OnePlan Solutions
 
Buds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in NoidaBuds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in Noidabntitsolutionsrishis
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odishasmiwainfosol
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmSujith Sukumaran
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesPhilip Schwarz
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityNeo4j
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesŁukasz Chruściel
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...soniya singh
 
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfGOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfAlina Yurenko
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Andreas Granig
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...stazi3110
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Velvetech LLC
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Matt Ray
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureDinusha Kumarasiri
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideChristina Lin
 
Introduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdfIntroduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdfFerryKemperman
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based projectAnoyGreter
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceBrainSell Technologies
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackVICTOR MAESTRE RAMIREZ
 

Recently uploaded (20)

Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)
 
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
 
Buds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in NoidaBuds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in Noida
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalm
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a series
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New Features
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
 
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfGOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with Azure
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
 
Introduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdfIntroduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdf
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. Salesforce
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStack
 

Operating Systems (운영체제론) Ch16: Distributed Coordination

1. Module 16: Distributed Coordination (Silberschatz, Galvin, and Gagne ©1999)
  • Event Ordering
  • Mutual Exclusion
  • Atomicity
  • Concurrency Control
  • Deadlock Handling
  • Election Algorithms
  • Reaching Agreement
2. Event Ordering
  • Happened-before relation (denoted by →).
    – If A and B are events in the same process, and A was executed before B, then A → B.
    – If A is the event of sending a message by one process and B is the event of receiving that message by another process, then A → B.
    – If A → B and B → C, then A → C.
3. Implementation of →
  • Associate a timestamp with each system event. Require that for every pair of events A and B, if A → B, then the timestamp of A is less than the timestamp of B.
  • A logical clock LCi is associated with each process Pi. The logical clock can be implemented as a simple counter that is incremented between any two successive events executed within a process.
  • A process advances its logical clock when it receives a message whose timestamp is greater than the current value of its logical clock.
  • If the timestamps of two events A and B are the same, then the events are concurrent. We may use the process identity numbers to break ties and to create a total ordering.
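The logical-clock rules above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the class and method names are made up for the example.

```python
# Sketch of a Lamport-style logical clock (names are illustrative).
class LogicalClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # Increment between any two successive local events.
        self.time += 1
        return self.time

    def send(self):
        # A send is itself an event: advance, then attach the timestamp.
        return self.tick()

    def receive(self, msg_timestamp):
        # Advance past the message's timestamp if it is ahead of us,
        # then count the receive event itself.
        self.time = max(self.time, msg_timestamp)
        return self.tick()

p1, p2 = LogicalClock(), LogicalClock()
ts = p1.send()            # p1's clock becomes 1
recv_ts = p2.receive(ts)  # p2's clock jumps to max(0, 1) + 1 = 2
```

To get the total ordering mentioned in the last bullet, events can be compared as (timestamp, process id) pairs, so equal timestamps are broken by the process identity number.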
4. Distributed Mutual Exclusion (DME)
  • Assumptions
    – The system consists of n processes; each process Pi resides at a different processor.
    – Each process has a critical section that requires mutual exclusion.
  • Requirement
    – If Pi is executing in its critical section, then no other process Pj is executing in its critical section.
  • We present two algorithms to ensure the mutually exclusive execution of processes in their critical sections.
5. DME: Centralized Approach
  • One of the processes in the system is chosen to coordinate the entry to the critical section.
  • A process that wants to enter its critical section sends a request message to the coordinator.
  • The coordinator decides which process can enter the critical section next, and it sends that process a reply message.
  • When the process receives a reply message from the coordinator, it enters its critical section.
  • After exiting its critical section, the process sends a release message to the coordinator and proceeds with its execution.
  • This scheme requires three messages per critical-section entry:
    – request
    – reply
    – release
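The coordinator's bookkeeping for the request/reply/release exchange can be sketched as a single-threaded simulation. This is an illustrative sketch only; a FIFO queue is assumed here for fairness, which the slides do not specify.

```python
from collections import deque

# Sketch of the centralized coordinator's state (names are illustrative).
class Coordinator:
    def __init__(self):
        self.holder = None      # process currently in its critical section
        self.waiting = deque()  # pending requests, served FIFO (assumption)

    def request(self, pid):
        # Returns True if a reply is sent immediately (pid may enter now).
        if self.holder is None:
            self.holder = pid
            return True
        self.waiting.append(pid)
        return False

    def release(self, pid):
        # On a release message, grant entry to the next waiter, if any.
        assert self.holder == pid
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder

c = Coordinator()
assert c.request("P1") is True    # P1 enters immediately
assert c.request("P2") is False   # P2 is queued
assert c.release("P1") == "P2"    # P1's release lets P2 in
```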
6. DME: Fully Distributed Approach
  • When process Pi wants to enter its critical section, it generates a new timestamp, TS, and sends the message request(Pi, TS) to all other processes in the system.
  • When process Pj receives a request message, it may reply immediately or it may defer sending a reply back.
  • When process Pi receives a reply message from all other processes in the system, it can enter its critical section.
  • After exiting its critical section, the process sends reply messages to all its deferred requests.
7. DME: Fully Distributed Approach (Cont.)
  • The decision whether process Pj replies immediately to a request(Pi, TS) message or defers its reply is based on three factors:
    – If Pj is in its critical section, then it defers its reply to Pi.
    – If Pj does not want to enter its critical section, then it sends a reply immediately to Pi.
    – If Pj wants to enter its critical section but has not yet entered it, then it compares its own request timestamp with the timestamp TS. If its own request timestamp is greater than TS, then it sends a reply immediately to Pi (Pi asked first). Otherwise, the reply is deferred.
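The three factors above reduce to one small decision function. The sketch below assumes requests are compared as (TS, process id) pairs so that equal timestamps break ties on the process id, as in the event-ordering slides; all names are illustrative.

```python
# Sketch of Pj's reply/defer rule in the fully distributed approach.
def should_reply_now(pj_state, pj_request, incoming_request):
    """pj_state: 'in_cs', 'idle', or 'wanting'; requests are (TS, pid) pairs."""
    if pj_state == "in_cs":
        return False          # factor 1: defer while in the critical section
    if pj_state == "idle":
        return True           # factor 2: Pj does not compete, reply at once
    # factor 3: both want the CS; the smaller request (older timestamp,
    # process id as tie-breaker) asked first and wins.
    return incoming_request < pj_request

assert should_reply_now("idle", None, (5, 1)) is True
assert should_reply_now("in_cs", (3, 2), (5, 1)) is False
assert should_reply_now("wanting", (7, 2), (5, 1)) is True    # Pi asked first
assert should_reply_now("wanting", (4, 2), (5, 1)) is False   # Pj asked first
```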
8. Desirable Behavior of Fully Distributed Approach
  • Freedom from deadlock is ensured.
  • Freedom from starvation is ensured, since entry to the critical section is scheduled according to the timestamp ordering. The timestamp ordering ensures that processes are served in a first-come, first-served order.
  • The number of messages per critical-section entry is 2 × (n – 1). This is the minimum number of required messages per critical-section entry when processes act independently and concurrently.
9. Three Undesirable Consequences
  • The processes need to know the identity of all other processes in the system, which makes the dynamic addition and removal of processes more complex.
  • If one of the processes fails, then the entire scheme collapses. This can be dealt with by continuously monitoring the state of all the processes in the system.
  • Processes that have not entered their critical section must pause frequently to assure other processes that they intend to enter the critical section. This protocol is therefore suited for small, stable sets of cooperating processes.
10. Atomicity
  • Either all the operations associated with a program unit are executed to completion, or none are performed.
  • Ensuring atomicity in a distributed system requires a transaction coordinator, which is responsible for the following:
    – Starting the execution of the transaction.
    – Breaking the transaction into a number of subtransactions, and distributing these subtransactions to the appropriate sites for execution.
    – Coordinating the termination of the transaction, which may result in the transaction being committed at all sites or aborted at all sites.
11. Two-Phase Commit Protocol (2PC)
  • Assumes a fail-stop model.
  • Execution of the protocol is initiated by the coordinator after the last step of the transaction has been reached.
  • When the protocol is initiated, the transaction may still be executing at some of the local sites.
  • The protocol involves all the local sites at which the transaction executed.
  • Example: Let T be a transaction initiated at site Si, and let the transaction coordinator at Si be Ci.
12. Phase 1: Obtaining a Decision
  • Ci adds a <prepare T> record to the log.
  • Ci sends a <prepare T> message to all sites.
  • When a site receives a <prepare T> message, the transaction manager determines if it can commit the transaction.
    – If no: add a <no T> record to the log and respond to Ci with <abort T>.
    – If yes: add a <ready T> record to the log, force all log records for T onto stable storage, and send a <ready T> message to Ci.
13. Phase 1 (Cont.)
  • Coordinator collects responses:
    – All respond "ready": decision is commit.
    – At least one response is "abort": decision is abort.
    – At least one participant fails to respond within the timeout period: decision is abort.
14. Phase 2: Recording Decision in the Database
  • Coordinator adds a decision record, <abort T> or <commit T>, to its log and forces the record onto stable storage.
  • Once that record reaches stable storage it is irrevocable (even if failures occur).
  • Coordinator sends a message to each participant informing it of the decision (commit or abort).
  • Participants take appropriate action locally.
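The coordinator's side of the two phases can be compressed into a short sketch, with the participants' votes supplied as plain values ('ready', 'abort', or None for a timeout). This is an illustration of the decision rule only; message transport and crash recovery are left out.

```python
# Sketch of the 2PC coordinator's decision logic (illustrative names).
def two_phase_commit(votes):
    log = ["<prepare T>"]             # phase 1: log, then solicit votes
    if votes and all(v == "ready" for v in votes):
        decision = "<commit T>"       # unanimous 'ready' => commit
    else:
        decision = "<abort T>"        # any 'abort' or timeout => abort
    log.append(decision)              # phase 2: force the decision record to
    return decision, log              # stable storage, then tell participants

assert two_phase_commit(["ready", "ready"])[0] == "<commit T>"
assert two_phase_commit(["ready", "abort"])[0] == "<abort T>"
assert two_phase_commit(["ready", None])[0] == "<abort T>"   # timeout case
```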
15. Failure Handling in 2PC – Site Failure
  • The log contains a <commit T> record. In this case, the site executes redo(T).
  • The log contains an <abort T> record. In this case, the site executes undo(T).
  • The log contains a <ready T> record: consult Ci. If Ci is down, the site sends a query-status T message to the other sites.
  • The log contains no control records concerning T. In this case, the site executes undo(T).
16. Failure Handling in 2PC – Coordinator Ci Failure
  • If an active site contains a <commit T> record in its log, then T must be committed.
  • If an active site contains an <abort T> record in its log, then T must be aborted.
  • If some active site does not contain the record <ready T> in its log, then the failed coordinator Ci cannot have decided to commit T. Rather than wait for Ci to recover, it is preferable to abort T.
  • All active sites have a <ready T> record in their logs, but no additional control records. In this case we must wait for the coordinator to recover.
    – Blocking problem: T is blocked pending the recovery of site Si.
17. Concurrency Control
  • Modify the centralized concurrency schemes to accommodate the distribution of transactions.
  • A transaction manager coordinates the execution of transactions (or subtransactions) that access data at local sites.
  • A local transaction executes at only that site.
  • A global transaction executes at several sites.
18. Locking Protocols
  • Can use the two-phase locking protocol in a distributed environment by changing how the lock manager is implemented.
  • Nonreplicated scheme: each site maintains a local lock manager which administers lock and unlock requests for those data items that are stored at that site.
    – A simple implementation involves two message transfers for handling lock requests, and one message transfer for handling unlock requests.
    – Deadlock handling is more complex.
19. Single-Coordinator Approach
  • A single lock manager resides at a single chosen site; all lock and unlock requests are made at that site.
  • Simple implementation.
  • Simple deadlock handling.
  • Possibility of a bottleneck.
  • Vulnerable to loss of the concurrency controller if the single site fails.
  • The multiple-coordinator approach distributes the lock-manager function over several sites.
20. Majority Protocol
  • Avoids the drawbacks of central control by dealing with replicated data in a decentralized manner.
  • More complicated to implement.
  • Deadlock-handling algorithms must be modified; it is possible for deadlock to occur even when locking only one data item.
21. Biased Protocol
  • Similar to the majority protocol, but requests for shared locks are prioritized over requests for exclusive locks.
  • Less overhead on read operations than in the majority protocol, but additional overhead on writes.
  • As in the majority protocol, deadlock handling is complex.
22. Primary Copy
  • One of the sites at which a replica resides is designated as the primary site. A request to lock a data item is made at the primary site of that data item.
  • Concurrency control for replicated data is handled in a manner similar to that for unreplicated data.
  • Simple implementation, but if the primary site fails, the data item is unavailable, even though other sites may have a replica.
23. Timestamping
  • Generate unique timestamps in a distributed scheme:
    – Each site generates a unique local timestamp.
    – The globally unique timestamp is obtained by concatenating the unique local timestamp with the unique site identifier.
    – Use a logical clock defined within each site to ensure the fair generation of timestamps.
  • Timestamp-ordering scheme: combine the centralized concurrency-control timestamp scheme with the 2PC protocol to obtain a protocol that ensures serializability with no cascading rollbacks.
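The "concatenation" of local timestamp and site identifier can be modeled directly as an ordered pair, since tuple comparison orders by local time first and falls back to the site identifier. A minimal sketch (illustrative names):

```python
# Sketch: a globally unique timestamp as (local logical time, site id).
def global_timestamp(local_clock, site_id):
    return (local_clock, site_id)

a = global_timestamp(10, 1)
b = global_timestamp(10, 2)
assert a < b                       # equal local times: site id breaks the tie
assert global_timestamp(9, 5) < a  # earlier local time dominates
```

Putting the local time in the most significant position is what keeps the global order close to the happened-before order; putting the site id first would instead let one site's events always precede another's.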
24. Deadlock Prevention
  • Resource-ordering deadlock prevention: define a global ordering among the system resources.
    – Assign a unique number to all system resources.
    – A process may request a resource with unique number i only if it is not holding a resource with a unique number greater than i.
    – Simple to implement; requires little overhead.
  • Banker's algorithm: designate one of the processes in the system as the process that maintains the information necessary to carry out the Banker's algorithm.
    – Also implemented easily, but may require too much overhead.
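The resource-ordering rule is a one-line predicate: a request for resource i is admissible only if every resource the process holds has a smaller number. A minimal sketch (illustrative names):

```python
# Sketch of the resource-ordering admission check.
def may_request(held_numbers, requested):
    # Allowed only if nothing held has a number >= the requested one.
    return all(h < requested for h in held_numbers)

assert may_request({1, 3}, 5) is True
assert may_request({1, 7}, 5) is False  # holds 7 > 5: must release 7 first
```

Because every process acquires resources in strictly increasing numeric order, no cycle of waiting processes can form, which is why the scheme prevents deadlock.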
25. Timestamped Deadlock-Prevention Scheme
  • Each process Pi is assigned a unique priority number.
  • Priority numbers are used to decide whether a process Pi should wait for a process Pj; otherwise Pi is rolled back.
  • The scheme prevents deadlocks: for every edge Pi → Pj in the wait-for graph, Pi has a higher priority than Pj. Thus a cycle cannot exist.
26. Wait-Die Scheme
  • Based on a nonpreemptive technique.
  • If Pi requests a resource currently held by Pj, Pi is allowed to wait only if it has a smaller timestamp than does Pj (Pi is older than Pj). Otherwise, Pi is rolled back (dies).
  • Example: Suppose that processes P1, P2, and P3 have timestamps 5, 10, and 15 respectively.
    – If P1 requests a resource held by P2, then P1 will wait.
    – If P3 requests a resource held by P2, then P3 will be rolled back.
27. Wound-Wait Scheme
  • Based on a preemptive technique; counterpart to the wait-die scheme.
  • If Pi requests a resource currently held by Pj, Pi is allowed to wait only if it has a larger timestamp than does Pj (Pi is younger than Pj). Otherwise Pj is rolled back (Pj is wounded by Pi).
  • Example: Suppose that processes P1, P2, and P3 have timestamps 5, 10, and 15 respectively.
    – If P1 requests a resource held by P2, then the resource will be preempted from P2 and P2 will be rolled back.
    – If P3 requests a resource held by P2, then P3 will wait.
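The two schemes differ only in which side of the timestamp comparison waits and which side rolls back, so both fit in a few lines. The sketch below uses the slides' convention that a smaller timestamp means an older process; outcome strings are illustrative.

```python
# Sketch of the wait-die and wound-wait decision rules.
def wait_die(ts_requester, ts_holder):
    # Older requester waits; younger requester dies (is rolled back).
    return "wait" if ts_requester < ts_holder else "rollback requester"

def wound_wait(ts_requester, ts_holder):
    # Older requester wounds (preempts) the holder; younger requester waits.
    return "rollback holder" if ts_requester < ts_holder else "wait"

# P1, P2, P3 have timestamps 5, 10, 15, as in the slides' examples.
assert wait_die(5, 10) == "wait"                  # P1 requests from P2
assert wait_die(15, 10) == "rollback requester"   # P3 requests from P2
assert wound_wait(5, 10) == "rollback holder"     # P1 preempts P2
assert wound_wait(15, 10) == "wait"               # P3 waits for P2
```

In both rules it is always the younger process that is rolled back, which is what guarantees freedom from deadlock and, with timestamps kept across restarts, from starvation.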
28. Deadlock Detection – Centralized Approach
  • Each site keeps a local wait-for graph. The nodes of the graph correspond to all the processes that are currently either holding or requesting any of the resources local to that site.
  • A global wait-for graph is maintained in a single coordinator process; this graph is the union of all local wait-for graphs.
  • There are three different options (points in time) when the wait-for graph may be constructed:
    1. Whenever a new edge is inserted or removed in one of the local wait-for graphs.
    2. Periodically, when a number of changes have occurred in a wait-for graph.
    3. Whenever the coordinator needs to invoke the cycle-detection algorithm.
  • Unnecessary rollbacks may occur as a result of false cycles.
29. Detection Algorithm Based on Option 3
  • Append unique identifiers (timestamps) to requests from different sites.
  • When process Pi, at site A, requests a resource from process Pj, at site B, a request message with timestamp TS is sent.
  • The edge Pi → Pj with the label TS is inserted in the local wait-for graph of A. The edge is inserted in the local wait-for graph of B only if B has received the request message and cannot immediately grant the requested resource.
30. The Algorithm
  1. The controller sends an initiating message to each site in the system.
  2. On receiving this message, a site sends its local wait-for graph to the coordinator.
  3. When the controller has received a reply from each site, it constructs a graph as follows:
    (a) The constructed graph contains a vertex for every process in the system.
    (b) The graph has an edge Pi → Pj if and only if (1) there is an edge Pi → Pj in one of the wait-for graphs, or (2) an edge Pi → Pj with some label TS appears in more than one wait-for graph.
  • If the constructed graph contains a cycle ⇒ deadlock.
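Step 3 amounts to unioning the local graphs and searching for a cycle. The sketch below drops the TS edge labels for brevity and represents each local graph as a dict mapping a process to the set of processes it waits for; this is an illustration of the construction, not the full labeled algorithm.

```python
# Sketch of the coordinator's graph construction and cycle check.
def union_graph(local_graphs):
    g = {}
    for lg in local_graphs:
        for p, waits in lg.items():
            g.setdefault(p, set()).update(waits)
    return g

def has_cycle(g):
    # Depth-first search with a 'currently on the stack' set.
    visiting, done = set(), set()
    def dfs(p):
        visiting.add(p)
        for q in g.get(p, ()):
            if q in visiting or (q not in done and dfs(q)):
                return True
        visiting.discard(p)
        done.add(p)
        return False
    return any(p not in done and dfs(p) for p in list(g))

site_a = {"P1": {"P2"}}
site_b = {"P2": {"P3"}, "P3": {"P1"}}   # closes a cycle across sites
assert has_cycle(union_graph([site_a, site_b])) is True
assert has_cycle(union_graph([site_a])) is False
```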
31. Fully Distributed Approach
  • All controllers share equally the responsibility for detecting deadlock.
  • Every site constructs a wait-for graph that represents a part of the total graph.
  • We add one additional node Pex to each local wait-for graph.
  • If a local wait-for graph contains a cycle that does not involve node Pex, then the system is in a deadlock state.
  • A cycle involving Pex implies the possibility of a deadlock. To ascertain whether a deadlock does exist, a distributed deadlock-detection algorithm must be invoked.
32. Election Algorithms
  • Determine where a new copy of the coordinator should be restarted.
  • Assume that a unique priority number is associated with each active process in the system, and assume that the priority number of process Pi is i.
  • Assume a one-to-one correspondence between processes and sites.
  • The coordinator is always the process with the largest priority number. When a coordinator fails, the algorithm must elect the active process with the largest priority number.
  • Two algorithms, the bully algorithm and a ring algorithm, can be used to elect a new coordinator in case of failures.
33. Bully Algorithm
  • Applicable to systems where every process can send a message to every other process in the system.
  • If process Pi sends a request that is not answered by the coordinator within a time interval T, assume that the coordinator has failed; Pi tries to elect itself as the new coordinator.
  • Pi sends an election message to every process with a higher priority number; Pi then waits for any of these processes to answer within T.
Silberschatz, Galvin, and Gagne ©199916.34
Bully Algorithm (Cont.)
• If no response arrives within T, assume that all processes with numbers greater than i have failed; Pi elects itself the new coordinator.
• If an answer is received, Pi begins time interval T´, waiting to receive a message that a process with a higher priority number has been elected.
• If no such message is received within T´, assume the process with a higher number has failed; Pi should restart the algorithm.
Silberschatz, Galvin, and Gagne ©199916.35
Bully Algorithm (Cont.)
• If Pi is not the coordinator, then, at any time during execution, Pi may receive one of the following two messages from process Pj:
  – Pj is the new coordinator (j > i). Pi, in turn, records this information.
  – Pj started an election (j > i). Pi sends a response to Pj and begins its own election algorithm, provided that Pi has not already initiated such an election.
• After a failed process recovers, it immediately begins execution of the same algorithm.
• If there are no active processes with higher numbers, the recovered process forces all processes with lower numbers to let it become the coordinator process, even if there is a currently active coordinator with a lower number.
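Message passing and timeouts aside, the outcome of a bully election can be modeled with a toy sketch: the initiator challenges the higher-numbered processes, and if any answers, the highest live process ends up as coordinator. This is a simulation of the outcome, not the real message protocol:

```python
def bully_election(initiator, alive):
    """Toy simulation of one bully election.

    `alive` is the set of live process ids; real election messages,
    answers, and the T / T' timeouts are replaced by direct
    membership tests on that set."""
    higher = [p for p in alive if p > initiator]
    if not higher:
        # No process with a higher number answers within T:
        # the initiator elects itself.
        return initiator
    # Some higher process answers and runs its own election;
    # recursively, the highest live process becomes coordinator.
    return max(higher)
```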
Silberschatz, Galvin, and Gagne ©199916.36
Ring Algorithm
• Applicable to systems organized as a ring (logically or physically).
• Assumes that the links are unidirectional, and that processes send their messages to their right neighbors.
• Each process maintains an active list, consisting of the priority numbers of all active processes in the system when the algorithm ends.
• If process Pi detects a coordinator failure, it creates a new active list that is initially empty. It then sends a message elect(i) to its right neighbor, and adds the number i to its active list.
Silberschatz, Galvin, and Gagne ©199916.37
Ring Algorithm (Cont.)
• If Pi receives a message elect(j) from the process on the left, it must respond in one of three ways:
  1. If this is the first elect message it has seen or sent, Pi creates a new active list with the numbers i and j. It then sends the message elect(i), followed by the message elect(j).
  2. If i ≠ j (and this is not the first elect message), Pi adds j to its active list and forwards elect(j) to its right neighbor.
  3. If i = j, then the active list for Pi now contains the numbers of all the active processes in the system. Pi can now determine the largest number in the active list to identify the new coordinator process.
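The circulation of elect messages can be simulated as below: once elect(i) has traveled all the way around and returns to its sender, the active list is complete and its maximum identifies the new coordinator. A toy sketch with illustrative names:

```python
def ring_election(ring, detector):
    """Toy simulation of the ring election.

    `ring` lists the live process ids in ring order (the failed
    coordinator already removed); `detector` is the process that
    detected the failure and starts the election."""
    start = ring.index(detector)
    active = []
    idx = start
    while True:
        active.append(ring[idx])        # each process adds its number
        idx = (idx + 1) % len(ring)     # elect message moves rightward
        if ring[idx] == detector:       # elect(detector) came full circle
            break
    return max(active)                  # largest number is coordinator
```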
Silberschatz, Galvin, and Gagne ©199916.38
Reaching Agreement
• There are applications where a set of processes wish to agree on a common “value”.
• Such agreement may not take place due to:
  – Faulty communication medium
  – Faulty processes: processes may send garbled or incorrect messages to other processes, and a subset of the processes may collaborate with each other in an attempt to defeat the scheme.
Silberschatz, Galvin, and Gagne ©199916.39
Faulty Communications
• Process Pi at site A has sent a message to process Pj at site B; to proceed, Pi needs to know whether Pj has received the message.
• Detect failures using a time-out scheme.
  – When Pi sends out a message, it also specifies a time interval during which it is willing to wait for an acknowledgment message from Pj.
  – When Pj receives the message, it immediately sends an acknowledgment to Pi.
  – If Pi receives the acknowledgment message within the specified time interval, it concludes that Pj has received its message. If a time-out occurs, Pi needs to retransmit its message and wait for an acknowledgment.
  – Continue until Pi either receives an acknowledgment, or is notified by the system that B is down.
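On Pi's side, the time-out scheme amounts to a retransmission loop. In this sketch, `send` and `wait_ack` are hypothetical callables standing in for the real message primitives, and `max_tries` stands in for the system eventually reporting that site B is down:

```python
def deliver_with_timeout(send, wait_ack, max_tries):
    """Pi's side of the time-out scheme (sketch).

    `send()` transmits the message; `wait_ack()` blocks for at most
    the specified interval and returns True iff an acknowledgment
    arrived in time."""
    for _ in range(max_tries):
        send()                       # (re)transmit the message
        if wait_ack():               # ack within the interval?
            return True              # Pi concludes Pj received it
    return False                     # site B is considered down
```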
Silberschatz, Galvin, and Gagne ©199916.40
Faulty Communications (Cont.)
• Suppose that Pj also needs to know that Pi has received its acknowledgment message, in order to decide how to proceed.
  – In the presence of failure, it is not possible to accomplish this task.
  – It is not possible in a distributed environment for processes Pi and Pj to agree completely on their respective states.
Silberschatz, Galvin, and Gagne ©199916.41
Faulty Processes (Byzantine Generals Problem)
• Communication medium is reliable, but processes can fail in unpredictable ways.
• Consider a system of n processes, of which no more than m are faulty. Suppose that each process Pi has some private value Vi.
• Devise an algorithm that allows each nonfaulty Pi to construct a vector Xi = (Ai,1, Ai,2, …, Ai,n) such that:
  – If Pj is a nonfaulty process, then Ai,j = Vj.
  – If Pi and Pj are both nonfaulty processes, then Xi = Xj.
• Solutions share the following properties:
  – A correct algorithm can be devised only if n ≥ 3m + 1.
  – The worst-case delay for reaching agreement is proportional to m + 1 message-passing delays.
Silberschatz, Galvin, and Gagne ©199916.42
Faulty Processes (Cont.)
• An algorithm for the case where m = 1 and n = 4 requires two rounds of information exchange:
  – Each process sends its private value to the other 3 processes.
  – Each process sends the information it has obtained in the first round to all other processes.
• If a faulty process refuses to send messages, a nonfaulty process can choose an arbitrary value and pretend that that value was sent by that process.
• After the two rounds are completed, a nonfaulty process Pi can construct its vector Xi = (Ai,1, Ai,2, Ai,3, Ai,4) as follows:
  – Ai,i = Vi.
  – For j ≠ i, if at least two of the three values reported for process Pj agree, then the majority value is used to set the value of Ai,j. Otherwise, a default value (nil) is used.
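The second-round majority step can be sketched directly. In this sketch, `reports[j]` collects the three values Pi heard for Pj (one direct in round one, two relayed in round two), with missing messages already replaced by arbitrary values; the names are illustrative:

```python
from collections import Counter

def construct_vector(i, own_value, reports):
    """Build X_i for the m = 1, n = 4 case (sketch).

    `reports[j]` holds the three values heard for process Pj.
    None plays the role of the default value nil."""
    x = {i: own_value}                        # A_{i,i} = V_i
    for j, vals in reports.items():
        value, count = Counter(vals).most_common(1)[0]
        x[j] = value if count >= 2 else None  # majority, else default nil
    return x
```

For instance, if the faulty process P4 sends three different values about itself, every nonfaulty process falls back to nil for A_{i,4}, so their vectors still agree.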