1. GOVERNMENT ARTS COLLEGE, MELUR, MADURAI.
ES1.3: DISTRIBUTED SYSTEMS
Mr. VEERANAN VEERANAN, M.Sc. Computer Science., Dip.in.Yoga., Page 1 of 16
Unit I
1. Explain different types of distribution transparency with examples.
In distributed systems, transparency is defined as the concealment from the user and the
application programmer of the separation of components, so that the whole system appears
as a single entity rather than a collection of individual components.
AIM OF TRANSPARENCY:
Transparency’s major goal is to hide certain aspects of distribution from
application programmers so they may focus on the design of their specific application.
Types of Transparency in Distributed Systems:
The following are the various kinds of transparency that exist in distributed systems:
Access Transparency
Location Transparency
Concurrency Transparency
Replication Transparency
Failure Transparency
Mobility Transparency
Performance Transparency
Scaling Transparency
Parallelism Transparency
1. Access Transparency:
Access Transparency allows the same operations to be used to access local and
remote resources. The distribution of files must be hidden from clients: data may be
stored on physically separate servers, yet a common set of operations should be
available for accessing both remote and local files. Applications written for local files
should be able to operate on remote files as well.
Examples – The file system in Network File System (NFS), SQL queries, and web
navigation all exhibit access transparency.
2. Location Transparency:
Location Transparency permits access to resources regardless of their physical or
network location. Clients should see a consistent file namespace, and files should be
movable without their pathnames being affected. A location-transparent name carries no
information about the physical location of the object. This is a vital and critical feature
for facilitating resource movement and service availability. Location and access
transparency together make up network transparency.
Examples – NFS file system and the pages of the web.
3. Concurrency Transparency:
Concurrency Transparency permits many processes to run in parallel using shared
resources without interfering with one another. Since distributed systems are inherently
concurrent environments, shared items may be accessed at the same time; such
concurrency control is hard to implement.
Example – Automatic Teller Machine (ATM) network.
4. Replication Transparency:
Replication Transparency ensures that multiple instances of a resource can exist
to improve reliability and performance, without users needing to be aware of the
replication. This type of transparency applies primarily to distributed file systems,
where data is replicated over two or more sites for increased reliability. The existence
of a mirrored copy of the data must be invisible to the client.
Example- Distributed DBMS (Database Management System).
5. Failure Transparency:
Failure Transparency hides faults, allowing users and application programs to
complete their tasks even when hardware or software components fail. The procedures
that provide access transparency also exhibit the fault-tolerance property. The main
concern is that distributed systems are more prone to failure, since any of the
components could fail, resulting in a degraded or unavailable service. Because the
complexities are hidden, it is quite difficult to tell the difference between a failed and
a slow-running operation.
Examples – Database Management Systems (DBMS).
6. Mobility (Migration) Transparency:
Mobility Transparency lets a system or its resources move around without disrupting
user or software processes. It also supports load balancing for any client that may be
overburdened.
Examples – Network File System (NFS) and Web pages.
7. Performance Transparency:
Performance Transparency enables system reconfiguration to increase or enhance
performance.
8. Scaling (Size) Transparency:
Scaling Transparency enables systems and applications to scale up without
requiring changes to the system structure or application algorithms. Resources that are
of no relevance to the user are hidden from users and application programmers. The
smooth growth and evolution that this transparency permits are critical for most
businesses. A system should also be able to scale down to small environments when
necessary, and be space- and/or time-efficient when required.
Example – World Wide Web (WWW)
9. Parallelism Transparency:
Parallelism Transparency enables parallel activities to run without users knowing
how, where, and when the system makes them happen.
Note: Access and location transparency are the two most significant types of
transparency; their presence or absence has the most impact on how distributed
resources are used.
Degree of Transparency:
A high degree of transparency is normally preferable, but it is not always the best option.
It is not a good idea to keep a physical resource like a printer hidden from its
users.
A trade-off between a high level of transparency and a system’s performance is
required.
2. Difference between synchronous and asynchronous communication?
Synchronous Transmission:
In synchronous transmission, data is sent in the form of blocks or frames. This
transmission is full-duplex, and synchronization between sender and receiver is
compulsory. There is no gap between data units, which makes it more efficient and
more reliable than asynchronous transmission for transferring large amounts of data.
Example:
Chat Rooms
Telephonic Conversations
Video Conferencing
Asynchronous Transmission:
In asynchronous transmission, data is sent in the form of bytes or characters. This
transmission is half-duplex; start and stop bits are added to the data, so no clock
synchronization is required.
Example:
Email
Forums
Letters
Synchronous Transmission vs. Asynchronous Transmission:
1. Synchronous: data is sent in the form of blocks or frames.
   Asynchronous: data is sent in the form of bytes or characters.
2. Synchronous transmission is fast; asynchronous transmission is slow.
3. Synchronous transmission is costly; asynchronous transmission is economical.
4. Synchronous: the time interval of transmission is constant.
   Asynchronous: the time interval of transmission is not constant; it is random.
5. Synchronous: users have to wait till the transmission is complete before getting a
   response back from the server.
   Asynchronous: users do not have to wait for the transmission to complete in order
   to get a response from the server.
6. Synchronous: there is no gap between data units.
   Asynchronous: there is a gap between data units.
7. Synchronous: the transmission lines are used efficiently.
   Asynchronous: the transmission line remains empty during the gaps between characters.
8. Synchronous: start and stop bits are not used in transmitting data.
   Asynchronous: start and stop bits are used, which imposes extra overhead.
9. Synchronous: needs precisely synchronized clocks to time the arrival of new bytes.
   Asynchronous: needs no synchronized clocks, as the start and stop bits mark the
   boundaries of each new byte.
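The start/stop-bit overhead in row 8 can be made concrete with a small sketch. This assumes the classic framing of one start bit (0) and one stop bit (1) per byte; the function names are invented for illustration:

```python
def frame_async(data: bytes) -> list[int]:
    """Asynchronous framing: each byte travels on its own,
    wrapped in a start bit (0) and a stop bit (1)."""
    bits = []
    for byte in data:
        bits.append(0)                                   # start bit
        bits.extend((byte >> i) & 1 for i in range(8))   # 8 data bits, LSB first
        bits.append(1)                                   # stop bit
    return bits

def frame_sync(data: bytes) -> list[int]:
    """Synchronous framing: one contiguous block of data bits,
    no per-byte delimiters (clock sync is assumed out of band)."""
    bits = []
    for byte in data:
        bits.extend((byte >> i) & 1 for i in range(8))
    return bits

payload = b"hello"
overhead = len(frame_async(payload)) - len(frame_sync(payload))
print(overhead)  # 10: two extra bits per byte for a 5-byte payload
```

The 25% overhead (2 extra bits per 8 data bits) is why asynchronous transmission is less efficient for bulk data, as row 7 notes.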
3. List out and discuss the goals that should be met to make building a distributed
system worthwhile.
The four important goals that should be met for an efficient distributed system are
as follows:
1. Connecting Users and Resources:
The main goal of a distributed system is to make it easy for users to access remote
resources and to share them with others in a controlled way.
It is cheaper to let a printer be shared by several users than to buy and maintain
a printer for each user.
Collaborating and exchanging information can be made easier by connecting users
and resources.
2. Transparency:
It is important for a distributed system to hide the location of its processes and
resources. A distributed system that can portray itself as a single system is said to be
transparent.
The various transparencies that need to be considered are access, location, migration,
relocation, replication, concurrency, failure and persistence.
Aiming for distributed transparency should be considered along with performance
issues.
3. Openness:
Openness is an important goal of a distributed system: it offers services
according to standard rules that describe the syntax and semantics of those
services.
An open distributed system must be flexible, making it easy to configure and add new
components without affecting existing components.
An open distributed system must also be extensible.
4. Scalable:
Scalability is one of the most important goals; it is measured along three
different dimensions.
First, a system can be scalable with respect to its size: more users and
resources can be added to the system.
Second, users and resources can be geographically far apart.
Third, the system should remain manageable even if it spans many administrative
organizations.
Unit II
1. Write short note about the uses of Threads.
What is a Thread?
Within a program, a thread is a separate execution path. It is a lightweight process
that the operating system can schedule and run concurrently with other threads.
The operating system creates and manages threads, and they share the same
memory and resources as the program that created them. This enables multiple threads
to collaborate and work efficiently within a single program. A thread is a single sequence
stream within a process.
Threads are also called lightweight processes, as they possess some of the
properties of processes. Each thread belongs to exactly one process.
In operating systems that support multithreading, a process can consist of many threads.
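Because all threads of a process share its memory, several threads can update the same variable. A minimal Python sketch (the names `worker` and `counter` are invented for illustration; the lock makes each increment atomic):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    """Each thread increments the shared counter n times.
    The lock makes the read-modify-write sequence atomic."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Four threads of one process, all sharing the same memory.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:   # join: wait for every thread to finish
    t.join()

print(counter)  # 40000: every thread updated the same shared variable
```

Without the lock the final count could be lower, since increments from different threads may interleave; this is exactly the concurrency-control problem that kernel or library thread primitives exist to solve.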
Types of Threads
There are two types of threads:
User Level Thread
Kernel Level Thread
User-Level Thread-
User-level threads are implemented by user-level software: they are created and
managed by a thread library, which the OS provides as an API for creating, managing,
and synchronizing threads. User-level threads are faster than kernel-level threads;
each is basically represented by a program counter, stack, registers, and a PCB.
Example – user thread libraries include POSIX threads and Mach C-threads.
Kernel-Level Thread –
In operating-system terms, a thread is the unit of execution within a process.
Kernel-level threads are handled directly by the kernel: the operating system itself
performs all thread management. Each kernel-level thread is self-contained, and the
kernel provides each thread with its own context, with information about the thread’s
status, such as its name, group, and priority.
Parameters – User Level Thread vs. Kernel Level Thread:
1. Implemented by: user threads are implemented by a user-level library; kernel
   threads are implemented by the operating system (OS).
2. Recognition: the OS does not recognize user-level threads; kernel threads are
   recognized by the OS.
3. Implementation: implementing user threads is easy; implementing kernel-level
   threads is complicated.
4. Context switch time: less for user-level threads; more for kernel-level threads.
5. Hardware support: a user-level context switch requires no hardware support;
   kernel-level switching needs hardware support.
6. Blocking operation: if one user-level thread performs a blocking operation, the
   entire process is blocked; if one kernel thread performs a blocking operation,
   another thread can continue execution.
7. Multithreading: multithreaded applications on user-level threads cannot take
   advantage of multiprocessing; kernels themselves can be multithreaded.
8. Creation and management: user-level threads can be created and managed more
   quickly; kernel-level threads take more time to create and manage.
9. Operating system: any operating system can support user-level threads;
   kernel-level threads are operating-system specific.
10. Thread management: for user-level threads, the thread library contains the code
    for thread creation, message passing, scheduling, data transfer, and destruction;
    for kernel-level threads, the application code contains no thread-management code,
    merely an API into kernel mode (the Windows operating system makes use of this
    feature).
11. Examples: user level: Java threads, POSIX threads; kernel level: Windows, Solaris.
12. Advantages:
    User level: simple and quick to create; can run on any operating system; perform
    better than kernel threads, since no system calls are needed to create them;
    switching between threads does not need kernel-mode privileges.
    Kernel level: threads belonging to the same process can be scheduled on different
    processors; kernel routines themselves can be multithreaded; when one kernel-level
    thread is blocked, the kernel can schedule another thread of the same process.
13. Disadvantages:
    User level: multithreaded applications cannot benefit from multiprocessing; if a
    single user-level thread performs a blocking operation, the entire process is halted.
    Kernel level: transferring control from one thread to another within a process
    requires a mode switch to kernel mode; kernel-level threads take more time to
    create and manage than user-level threads.
2. Explain the need for a name service.
Resources are accessed using an identifier or reference
An identifier can be stored in variables and retrieved from tables quickly
An identifier includes, or can be transformed to, an address for an object
E.g. NFS file handles, CORBA remote object references
A name is a human-readable value (usually a string) that can be resolved to an
identifier or address, e.g. an Internet domain name, a file pathname, or a process number
E.g. /etc/passwd, http://www.cdk3.net/
For many purposes, names are preferable to identifiers
Because the binding of the named resource to a physical location is deferred and
can be changed
Because they are more meaningful to users
Names and Addresses
Names: used to refer to shared resource
computers
services
remote objects
files
users
Names are needed to request that a computer system act on a specific, chosen resource
Processes need to be able to name resources to share them
Users need to be able to name each other to directly communicate
Sometimes descriptive attributes of a resource uniquely identify it
Human-readable names: e.g. /etc/passwd, http://www.registermachine.com
Identifier: not usually human-readable, e.g. remote object references, NFS file handles
more efficiently stored and processed by software
Attribute: value of a property associated with an object
Address: value identifying the location of the object, rather than the object itself
attribute of an object
efficient for accessing objects
cannot be used as a name, because the object may change its address
Pure name: contains no information about the object itself.
Must be looked up to obtain an address before the named resource can be accessed
e.g. names of people are pure names: uninterpreted bit patterns
An address is the opposite extreme
Non-pure name: contains some information about the object, typically location information
A name is resolved when it is translated to data about the named resource
Binding: association between a name and an object
names are bound to object attributes, rather than the objects themselves
DNS maps human-readable domain names to IP addresses and other attributes
X.500 directory service: can map a person’s name onto attributes, e.g. email address,
phone number
Local significance: many names have meaning only to the service that created them
when a service allocates a resource it generates a unique name, and a client of the
service must supply that name in order to access the resource
Sometimes the client can specify to the service the desired name for a new resource
e.g. email username they would like
service is responsible for ensuring username is locally unique
then, in combination with unique domain name, the email address is unique
Services may need to cooperate to have name consistency
e.g. NFS users need the same unique ID on both the client and server to ensure
ownership rights are preserved
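The binding of a name to attributes (rather than directly to the object's current address) can be sketched with a toy in-memory name service; all names and addresses below are invented for illustration:

```python
class NameService:
    """Maps names to attribute records, DNS-style: resolving a
    name yields attributes, one of which is the current address."""

    def __init__(self):
        self._bindings: dict[str, dict] = {}

    def bind(self, name: str, **attributes) -> None:
        """Associate a name with a set of attributes."""
        self._bindings[name] = dict(attributes)

    def resolve(self, name: str) -> dict:
        """Translate a name into the data bound to it."""
        return self._bindings[name]

ns = NameService()
ns.bind("fileserver.example.org", address="10.0.0.7", protocol="nfs")
print(ns.resolve("fileserver.example.org")["address"])   # 10.0.0.7

# Moving the resource only changes the binding, not the name
# that clients use -- the deferred binding mentioned above:
ns.bind("fileserver.example.org", address="10.0.0.42", protocol="nfs")
print(ns.resolve("fileserver.example.org")["address"])   # 10.0.0.42
```

Because clients hold the name rather than the address, the resource can be relocated without breaking any client, which is exactly why names are preferable to identifiers for long-lived references.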
3. Explain the design issues in a threads package. How is a threads package
implemented? What are the threading issues?
The fork() and exec() system calls
The fork() system call is used to create a duplicate process. The meaning of the fork() and
exec() system calls changes in a multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all threads, or
is the new process single-threaded? Some UNIX systems have chosen to have
two versions of fork(): one that duplicates all threads and another that duplicates only the
thread that invoked the fork() system call.
If a thread calls the exec() system call, the program specified in the parameter to exec() will
replace the entire process which includes all threads.
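A minimal sketch of fork() semantics on a POSIX system, using Python's `os.fork` wrapper (the exit code 7 is an arbitrary value chosen for illustration):

```python
import os

def fork_demo() -> int:
    """Duplicate the calling process with fork(). In a multithreaded
    program, only the calling thread would exist in the child on
    systems that use the single-thread version of fork()."""
    pid = os.fork()
    if pid == 0:
        # Child process. In practice this is where exec() would be
        # called, replacing the entire process image, all threads
        # included. Here the child simply exits with a known code.
        os._exit(7)
    # Parent: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(fork_demo())  # 7
```

The fork-then-exec pattern shown in the child branch is the usual reason the "duplicate only the calling thread" variant exists: duplicating every thread is wasted work if exec() will immediately discard them all.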
How to implement thread in user space?
A thread is a lightweight process. It is a basic unit of CPU utilization which
consists of a program counter, a stack, and a set of registers.
Thread in user space
Now, let us see how to implement thread in User Space.
Step 1 − The complete thread package is placed in the user space and the kernel has no
knowledge about it.
Step 2 − The kernel, in general, manages only ordinary single-threaded processes.
Step 3 − Threads are always run on top of a run-time system.
Step 4 − Run time system is a collection of procedures which manage threads.
For example –
pthread_create,
pthread_exit,
pthread_join, and
pthread_yield.
Step 5 − Each process requires its own private thread table to keep track of the threads in
that process.
Step 6 − The thread table keeps track of each thread’s properties.
Step 7 − The thread tables are managed by the runtime system.
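Steps 1−7 can be traced with a toy user-space run-time system: a thread table managed entirely in user space, with cooperative yielding. Python generators stand in for the saved program counter and stack; all names (`thread_create`, `run`, `task`) are invented for illustration and are not a real threads API:

```python
from collections import deque

thread_table = deque()   # per-process thread table (step 5)
finished = []            # threads that have exited
log = []                 # execution trace

def thread_create(func, *args):
    """Register a new user-level thread; the kernel never sees it."""
    thread_table.append(func(*args))   # a generator: saved PC + stack

def run():
    """The run-time system (step 4): round-robin scheduling.
    A thread runs until it voluntarily yields or finishes."""
    while thread_table:
        thread = thread_table.popleft()
        try:
            next(thread)               # resume until the next yield
            thread_table.append(thread)
        except StopIteration:
            finished.append(thread)    # the thread has exited

def task(name, steps):
    for i in range(steps):
        log.append(f"{name}{i}")       # do some work
        yield                          # thread_yield: give up the CPU

thread_create(task, "A", 2)
thread_create(task, "B", 2)
run()
print(log)  # ['A0', 'B0', 'A1', 'B1']
```

Note the disadvantage listed below falls straight out of this design: if a task never yields (or blocks in a system call), `run()` can never switch to another thread.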
Advantages
The advantages of implementing thread in user space are as follows −
Threads can be provided even on operating systems that do not support them,
because the threads are implemented by a library.
It doesn’t require any modification in the operating system.
It generally gives better performance, because no kernel involvement is needed
for context switching.
Each process can have its own thread-scheduling algorithm.
Disadvantages
The disadvantages of implementing thread in user space are as follows −
A blocking system call made by one thread stops all threads in the process.
If a thread starts running, no other thread in the process can run unless the
running thread voluntarily gives up the CPU.
Unit III
1. Explain Berkeley algorithm.
Berkeley’s Algorithm is a clock synchronization technique used in distributed
systems. The algorithm assumes that no machine node in the network has an accurate
time source or a UTC receiver.
Algorithm
1) An individual node is chosen as the master node from the pool of nodes in the network.
This node acts as the master, and the rest of the nodes act as slaves. The master node is
chosen using an election process / leader-election algorithm.
2) The master node periodically polls the slave nodes and fetches their clock times
using Cristian’s algorithm.
[Figures: the master polls each slave node for its time, and each slave replies with
the time given by its system clock.]
3) Master node calculates the average time difference between all the clock times received
and the clock time given by the master’s system clock itself. This average time difference
is added to the current time at the master’s system clock and broadcasted over the
network.
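Step 3's averaging can be sketched as follows (times in minutes past some epoch; the values 180, 205, and 170 are invented for illustration):

```python
def berkeley_adjustments(master_time: int, slave_times: list[int]) -> list[int]:
    """Return the adjustment each clock (master first) must apply
    so that all clocks agree on the averaged time."""
    diffs = [0] + [t - master_time for t in slave_times]   # offsets vs. master
    avg = sum(diffs) // len(diffs)                         # average time difference
    return [avg - d for d in diffs]                        # per-node correction

# Master reads 180; slaves report 205 and 170.
adjust = berkeley_adjustments(180, [205, 170])
print(adjust)  # [5, -20, 15]: every clock ends up at 185
```

Note that the master broadcasts each node's adjustment (a relative value) rather than the absolute time, so network delay between broadcast and application matters less.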
Scope of Improvement
Improving upon the inaccuracy inherited from Cristian’s algorithm.
Ignoring significant outliers when calculating the average time difference.
In case the master node fails or is corrupted, a secondary leader must be pre-chosen
to take the place of the master node, reducing the downtime caused by the
master’s unavailability.
Instead of sending the synchronized time, the master broadcasts each node’s relative
time difference, which reduces the latency induced by network traversal time and by
the adjustment calculation at the slave node.
Features of Berkeley’s Algorithm:
Centralized time coordinator:
Berkeley’s Algorithm uses a centralized time coordinator, which is responsible for
maintaining the global time and distributing it to all the client machines.
Clock adjustment:
The algorithm adjusts the clock of each client machine based on the difference
between its local time and the time received from the time coordinator.
Average calculation:
The algorithm calculates the average time difference between the client machines
and the time coordinator to reduce the effect of any clock drift.
Fault tolerance:
Berkeley’s Algorithm is fault-tolerant, as it can handle failures in the network or
the time coordinator by using backup time coordinators.
Accuracy:
The algorithm provides accurate time synchronization across all the client
machines, reducing the chances of errors due to time discrepancies.
Scalability:
The algorithm is scalable, as it can handle a large number of client machines, and
the time coordinator can be easily replicated to provide high availability.
Security:
Berkeley’s Algorithm can be combined with security mechanisms such as authentication
and encryption to protect the time information from unauthorized access or tampering.
2. Explain the Chandy–Lamport snapshot algorithm.
Each distributed system has a number of processes running on a number of different
physical servers. These processes communicate with each other through communication
channels using message passing. The processes have neither shared memory nor a
common physical clock, which makes determining the instantaneous global state difficult.
A process could record its own local state at a given time, but the messages that are in
transit (on the way to being delivered) would not be included in the recorded state, and
hence the recorded state of the system would be incorrect once an in-transit message is
delivered.
Chandy and Lamport were the first to propose an algorithm to capture a consistent
global state of a distributed system. The main idea behind the algorithm is that if
we know that all messages that have been sent by one process have been received by
another, then we can record the global state of the system.
Any process in the distributed system can initiate this global-state recording algorithm
using a special message called a MARKER. The marker traverses the distributed system
across all communication channels and causes each process to record its own state. In the
end, the state of the entire system (the global state) is recorded. The algorithm does not
interfere with the normal execution of the processes.
Assumptions of the algorithm:
There are a finite number of processes in the distributed system, and they do not
share memory or clocks.
There are a finite number of communication channels, and they are unidirectional
and FIFO-ordered.
There exists a communication path between any two processes in the system
On a channel, messages are received in the same order as they are sent.
Algorithm:
Marker sending rule for a process P:
    P records its own local state.
    For each outgoing channel C from P, P sends a marker along C before sending
    any other messages along C.
    (Note: process Q will receive this marker on its incoming channel C1.)
Marker receiving rule for a process Q (on receiving the marker along channel C1):
    If process Q has not yet recorded its own local state:
        Record the state of incoming channel C1 as an empty sequence (null).
        After recording the state of incoming channel C1, process Q follows the
        marker sending rule.
    If process Q has already recorded its state:
        Record the state of incoming channel C1 as the sequence of messages
        received along channel C1 after the state of Q was recorded and
        before Q received the marker along C1 from process P.
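The marker rules can be traced on a toy two-process system. This is a scripted run, not a general implementation: P initiates the snapshot, and the channel contents ("m1", "m2") and local states are invented for illustration:

```python
from collections import deque

state = {"P": 5, "Q": 10}                  # local process states
chan = {("P", "Q"): deque(["m1"]),         # FIFO channels: (src, dst) -> messages
        ("Q", "P"): deque(["m2"])}
recorded = {}                              # process -> recorded local state
chan_state = {}                            # channel -> recorded in-transit messages
recording = set()                          # incoming channels being recorded

MARKER = "MARKER"

def record(p):
    """Marker sending rule: record own state, send a marker on every
    outgoing channel, start recording every incoming channel."""
    recorded[p] = state[p]
    for c in chan:
        if c[0] == p:
            chan[c].append(MARKER)         # marker after all earlier messages
        else:
            recording.add(c)

def receive(c):
    """Deliver the next message on channel c, applying the rules."""
    msg = chan[c].popleft()
    dst = c[1]
    if msg == MARKER:
        if dst not in recorded:
            record(dst)                    # first marker seen: record now
        chan_state[c] = chan_state.get(c, [])   # close this channel's record
        recording.discard(c)
    elif c in recording:
        chan_state.setdefault(c, []).append(msg)  # caught in transit

record("P")          # P initiates the snapshot
receive(("P", "Q"))  # Q gets m1 before recording: not part of the snapshot
receive(("P", "Q"))  # Q gets MARKER: records its state, sends marker to P
receive(("Q", "P"))  # P gets m2 while recording: recorded as in-transit
receive(("Q", "P"))  # P gets MARKER: channel Q->P closed

print(recorded, chan_state)
```

The resulting snapshot is consistent: P=5, Q=10, channel P→Q empty, and m2 captured as an in-transit message on Q→P, exactly the message a naive per-process recording would have lost.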
Need for taking a snapshot (recording the global state) of the system:
Checkpointing:
It helps in creating checkpoints; if the application fails, the
checkpoint can be reused.
Garbage collection:
It can be used to remove objects that do not have any references.
It can be used in deadlock and termination detection.
It is also helpful in debugging.
3. Explain how mutual exclusion is handled in distributed systems?
Mutual exclusion is a concurrency-control property introduced to prevent race
conditions. It is the requirement that a process cannot enter its critical section while
another concurrent process is currently executing in its own critical section;
i.e., only one process is allowed to execute the critical section at any given instant
of time.
Mutual exclusion in single computer system Vs. distributed system:
In a single computer system, memory and other resources are shared between
different processes. The status of shared resources and of users is readily
available in shared memory, so the mutual exclusion problem can easily be solved
with shared variables (for example, semaphores).
In distributed systems, we have neither shared memory nor a common physical
clock, and therefore we cannot solve the mutual exclusion problem using shared
variables. To solve the mutual exclusion problem in distributed systems, an approach
based on message passing is used.
A site in a distributed system does not have complete information about the state of
the system, owing to the lack of shared memory and a common physical clock.
Requirements of Mutual exclusion Algorithm:
No Deadlock:
Two or more sites should not endlessly wait for a message that will never
arrive.
No Starvation:
Every site that wants to execute the critical section should get an opportunity
to do so in finite time. No site should wait indefinitely to execute the critical
section while other sites repeatedly execute it.
Fairness:
Each site should get a fair chance to execute the critical section. Requests to
execute the critical section must be served in the order they are made; i.e.,
critical-section execution requests should be executed in the order of their arrival
in the system.
Fault Tolerance:
In case of a failure, the algorithm should be able to detect the failure on its
own and continue functioning without disruption.
Solution to distributed mutual exclusion:
As noted above, shared variables or a local kernel cannot be used to implement mutual
exclusion in distributed systems; message passing is the way to implement it. Below are
the three message-passing approaches to implementing mutual exclusion in distributed
systems:
Token Based Algorithm:
A unique token is shared among all the sites.
If a site possesses the unique token, it is allowed to enter its critical section.
This approach uses sequence numbers to order requests for the critical section.
Each request for the critical section contains a sequence number, which is used
to distinguish old requests from current ones.
This approach ensures mutual exclusion, as the token is unique.
Example:
Suzuki-Kasami’s Broadcast Algorithm
Non-token based approach:
A site communicates with other sites to determine which site should execute the
critical section next. This requires two or more successive rounds of messages
among the sites.
This approach uses timestamps, instead of sequence numbers, to order requests
for the critical section.
Whenever a site makes a request for the critical section, it gets a timestamp. The
timestamp is also used to resolve conflicts between critical-section requests.
All algorithms that follow the non-token-based approach maintain a logical clock.
Logical clocks are updated according to Lamport’s scheme.
Example:
Lamport's algorithm, Ricart–Agrawala algorithm
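The timestamp ordering used by these algorithms can be sketched: each request carries a Lamport logical-clock value, and ties are broken by site id, giving a total order on requests. The request values below are invented for illustration:

```python
class LamportClock:
    """Logical clock updated according to Lamport's scheme."""
    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        """Local event or message send: advance the clock."""
        self.time += 1
        return self.time

    def receive(self, msg_time: int) -> int:
        """Message receipt: jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

def cs_order(requests):
    """requests: list of (timestamp, site_id) pairs. Entry to the
    critical section follows (timestamp, site_id) order, so even
    concurrent requests are totally ordered."""
    return sorted(requests)

s1 = LamportClock()
r1 = (s1.tick(), "S1")     # S1 requests the critical section at time 1
s2 = LamportClock()
s2.receive(r1[0])          # S2 learns of S1's request: clock jumps to 2
r2 = (s2.tick(), "S2")     # S2 requests at time 3
print(cs_order([r2, r1]))  # [(1, 'S1'), (3, 'S2')]: S1 enters first
```

Because S2's clock jumped past S1's timestamp on receipt, S2's later request necessarily carries a larger timestamp, which is what makes the ordering consistent with causality.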
Quorum based approach:
Instead of requesting permission to execute the critical section from all other sites,
each site requests permission only from a subset of sites, called a quorum.
Any two quorums contain at least one common site, and this common site is
responsible for ensuring mutual exclusion.
Example:
Maekawa’s Algorithm
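The quorum-intersection property can be checked with Maekawa's grid construction, sketched here for 9 sites arranged in a 3×3 grid: each site's quorum is its row plus its column, so any two quorums always share at least one site (the common arbiter):

```python
from itertools import combinations

N = 3  # 3x3 grid of 9 sites, numbered 0..8

def quorum(site: int) -> set[int]:
    """Maekawa-style quorum: the row plus the column of `site`."""
    r, c = divmod(site, N)
    row = {r * N + j for j in range(N)}
    col = {i * N + c for i in range(N)}
    return row | col

# Every pair of quorums shares at least one site: the site at the
# intersection of one request's row and the other request's column.
ok = all(quorum(a) & quorum(b) for a, b in combinations(range(N * N), 2))
print(ok)  # True
```

Each quorum has only 2N−1 = 5 members, so a site asks 5 arbiters for permission instead of all 8 other sites, which is the efficiency gain of the quorum-based approach.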