Distributed system Lamport's and vector algorithm (Pinki Soni)
Logical clocks are mechanisms for capturing chronological and causal relationships in distributed systems that lack a global clock. Some key logical clock algorithms are Lamport's timestamps and vector clocks. Lamport's timestamps assign monotonically increasing numbers to events, while vector clocks allow for partial ordering of events. The algorithms for Lamport's timestamps and vector clocks involve incrementing and propagating clock values to determine causal relationships between events in a distributed system.
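The Lamport timestamp rules summarized above (increment on each local event and each send, take the maximum on receipt) can be sketched in a few lines. This is an illustrative toy, not any particular library's API:

```python
class LamportClock:
    """Minimal sketch of a Lamport logical clock (names are illustrative)."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: increment the counter.
        self.time += 1
        return self.time

    def send(self):
        # Attach the incremented timestamp to an outgoing message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # On receipt, jump ahead of both clocks, then increment.
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes: P1 sends a message to P2.
p1, p2 = LamportClock(), LamportClock()
t_send = p1.send()           # P1's clock becomes 1
t_recv = p2.receive(t_send)  # P2's clock becomes max(0, 1) + 1 = 2
```

The receive rule is what guarantees that if event a happened before event b, then timestamp(a) < timestamp(b); the converse does not hold, which is the gap vector clocks close.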
This document discusses various topics related to synchronization in distributed systems, including distributed algorithms, logical clocks, global state, and leader election. It provides definitions and examples of key synchronization concepts such as coordination, synchronization, and determining global states. Examples of logical clock algorithms like Lamport clocks and vector clocks are provided. Challenges around clock synchronization and calculating global system states are also summarized.
Clock synchronization in distributed systems (Sunita Sahu)
This document discusses several techniques for clock synchronization in distributed systems:
1. Time stamping events and messages with logical clocks to determine partial ordering without a global clock. Logical clocks assign monotonically increasing sequence numbers.
2. Clock synchronization algorithms like NTP that regularly adjust system clocks across the network to synchronize with a time server. NTP uses averaging to account for network delays.
3. Lamport's logical clocks algorithm that defines "happened before" relations and increments clocks between events to synchronize logical clocks across processes.
Synchronization in distributed computing (SVijaylakshmi)
Synchronization in distributed systems is achieved via clocks. Physical clocks are used to adjust the time of nodes; each node in the system can share its local time with the other nodes. The time is set based on UTC (Coordinated Universal Time).
Physical clocks use quartz crystals or atomic vibrations to keep time, but they drift over time. Clock synchronization protocols like NTP and SNTP allow networked devices to regularly adjust their clocks to account for drift by requesting the time from authoritative time servers. They apply algorithms like Cristian's to compensate for network latency, setting the local clock to the reported server time plus half the measured round-trip delay to minimize errors from network variability.
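Cristian-style latency compensation can be sketched as follows; `request_fn` is a hypothetical stand-in for the actual network request to the time server:

```python
import time

def cristian_estimate(request_fn):
    """Sketch of Cristian's algorithm: ask a time server for the current
    time and compensate for network latency by adding half the measured
    round-trip delay to the server's reading."""
    t0 = time.monotonic()
    server_time = request_fn()   # network call to the time server
    t1 = time.monotonic()
    rtt = t1 - t0
    # Assume symmetric delays: the reply spent roughly rtt/2 in flight.
    return server_time + rtt / 2

# Fake server whose clock runs 5 seconds ahead of the local clock.
estimate = cristian_estimate(lambda: time.time() + 5.0)
```

The symmetric-delay assumption is the algorithm's main source of error; NTP refines this by sampling several servers and filtering outliers.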
Provides a simple and unambiguous taxonomy of three service models
- Software as a service (SaaS)
- Platform as a service (PaaS)
- Infrastructure as a service (IaaS)
(Private cloud, Community cloud, Public cloud, and Hybrid cloud)
Inter-Process Communication in distributed systems (Aya Mahmoud)
Inter-Process Communication is at the heart of all distributed systems, so we need to know the ways that processes can exchange information.
Communication in distributed systems is based on Low-level message passing as offered by the underlying network.
Deadlock in distributed systems (Saeed Siddik)
The document discusses deadlocks in distributed systems, outlining the four conditions required for a deadlock, strategies to handle deadlocks such as ignoring, detecting, preventing, and avoiding them, and algorithms for centralized deadlock detection and distributed deadlock detection and prevention. It provides examples of resource allocation graphs to illustrate deadlock conditions and explains how distributed deadlock detection and prevention algorithms work.
This document discusses interprocess communication (IPC) and message passing in distributed systems. It covers key topics such as:
- The two main approaches to IPC - shared memory and message passing
- Desirable features of message passing systems like simplicity, uniform semantics, efficiency, reliability, correctness, flexibility, security, and portability
- Issues in message passing IPC like message format, synchronization methods (blocking vs. non-blocking), and buffering strategies
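The blocking vs. non-blocking distinction and bounded buffering listed above can be illustrated with a small single-machine sketch using a thread-safe queue (real message passing would cross a network, but the semantics are the same):

```python
import queue
import threading

# A bounded mailbox models a finite-buffer strategy: put() blocks when
# the buffer is full, modelling a blocking (synchronous) send.
mailbox = queue.Queue(maxsize=4)

def sender():
    for i in range(3):
        mailbox.put({"seq": i, "body": f"msg-{i}"})  # blocking send

t = threading.Thread(target=sender)
t.start()
t.join()

received = []
while True:
    try:
        # Non-blocking receive: raises queue.Empty instead of waiting.
        received.append(mailbox.get_nowait())
    except queue.Empty:
        break
```

A blocking receive would instead call `mailbox.get(block=True, timeout=...)`, trading responsiveness for simpler control flow.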
INTRODUCTION TO OPERATING SYSTEM
What is an Operating System?
Mainframe Systems
Desktop Systems
Multiprocessor Systems
Distributed Systems
Clustered System
Real-Time Systems
Handheld Systems
Computing Environments
Client-Centric Consistency
Provide guarantees about the ordering of operations only for a single client, i.e.
The effects of an operation depend on the client performing it
The effects also depend on the history of the client's operations
Applied only when requested by the client
No guarantees concerning concurrent accesses by different clients
Assumption:
Clients can access different replicas, e.g. mobile users
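One client-centric guarantee, monotonic reads, can be sketched as follows; the class and method names are illustrative, not from any particular system:

```python
class Replica:
    """Toy replica holding a (version, value) pair."""
    def __init__(self, version, value):
        self._state = (version, value)

    def get(self):
        return self._state

class MonotonicReadsClient:
    """A client enforces monotonic reads by remembering the highest
    version it has seen and rejecting an older version from a
    different replica (e.g. after a mobile user switches servers)."""
    def __init__(self):
        self.last_seen_version = 0

    def read(self, replica):
        version, value = replica.get()
        if version < self.last_seen_version:
            raise RuntimeError("replica is stale; retry elsewhere")
        self.last_seen_version = version
        return value

client = MonotonicReadsClient()
value = client.read(Replica(3, "a"))  # ok: first read, version 3
# Reading a replica that only holds version 2 would now raise.
```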
Distributed Objects and Remote Invocation: Communication between distributed objects,
Remote procedure call, Events and notifications, operating system layer: Protection, Processes
and threads, Operating system architecture. Introduction to Distributed shared memory,
Design and implementation issues of DSM. Case Study: CORBA and Java RMI.
Message and Stream Oriented Communication (Dilum Bandara)
Message and Stream Oriented Communication in distributed systems. Persistent vs. Transient Communication. Event queues, Pub/sub networks, MPI, Stream-based communication, Multicast communication
Fault tolerance is important for distributed systems to continue functioning in the event of partial failures. There are several phases to achieving fault tolerance: fault detection, diagnosis, evidence generation, assessment, and recovery. Common techniques include replication, where multiple copies of data are stored at different sites to increase availability if one site fails, and checkpointing, where a system's state is periodically saved to stable storage so the system can be restored to a previous consistent state if a failure occurs. Both techniques have limitations: replication must manage consistency among the copies, and checkpointing adds communication and storage overhead.
This document discusses different ways to structure shared memory space in a distributed shared memory (DSM) system. It describes three common types: no structuring, where shared memory is a linear array of words; structuring by data type, where memory is organized as objects or variables; and structuring as a database, where memory is ordered like a tuple space database. The document provides details on each type, including advantages like flexibility of page size for no structuring and matching access granularity to object size for structuring by data type.
This document provides an overview of concepts related to time and clock synchronization in distributed systems. It discusses the need to synchronize clocks across different computers to accurately timestamp events. Physical clocks drift over time so various clock synchronization algorithms like Cristian's algorithm and Berkeley algorithm are presented to synchronize clocks within a known bound. The Network Time Protocol (NTP) used on the internet to synchronize client clocks to UTC sources through a hierarchy of time servers is also summarized. Logical clocks provide an alternative to physical clock synchronization by assigning timestamps to events based on their order of occurrence.
Distributed shared memory (DSM) provides processes with a shared address space across distributed memory systems. DSM exists only virtually through primitives like read and write operations. It gives the illusion of physically shared memory while allowing loosely coupled distributed systems to share memory. DSM refers to applying this shared memory paradigm using distributed memory systems connected by a communication network. Each node has CPUs, memory, and blocks of shared memory can be cached locally but migrated on demand between nodes to maintain consistency.
This document discusses data replication in distributed database management systems. It covers the purposes of replication like availability, performance, and scalability. It also discusses consistency models for replicated data, including mutual consistency and transaction consistency. For update management strategies, it describes eager and lazy propagation as well as centralized and distributed techniques. Specific replication protocols are also outlined, such as single master, primary copy, eager centralized/distributed, and lazy centralized/distributed protocols.
This document discusses interprocess communication and distributed systems. It covers several key topics:
- Application programming interfaces (APIs) for internet protocols like TCP and UDP, which provide building blocks for communication protocols.
- External data representation standards for transmitting objects between processes on different machines.
- Client-server communication models like request-reply that allow processes to invoke methods on remote objects.
- Group communication using multicast to allow a message from one client to be sent to multiple server processes simultaneously.
This document outlines 7 key challenges in designing distributed systems: heterogeneity, openness, security, scalability, failure handling, concurrency, and transparency. It discusses each challenge in detail, providing examples. Heterogeneity refers to differences in networks, hardware, operating systems, and programming languages that must be addressed. Openness means a system can be extended and implemented in various ways. Security concerns confidentiality, integrity, and availability of resources. Scalability means a system remains effective as resources and users increase significantly. Failure handling techniques include detecting, masking, tolerating, and recovering from failures. Concurrency ensures correct and high-performance sharing of resources. Transparency aims to make distributed components appear as a single system regardless of location or access method.
File Replication: High availability is a desirable feature of a good distributed file system, and file replication is the primary mechanism for improving file availability. Replication is a key strategy for improving reliability, fault tolerance, and availability. Therefore, duplicating files on multiple machines improves both availability and performance.
Replicated file: A replicated file is a file that has multiple copies, with each copy located on a separate file server. Each copy of the set of copies that comprises a replicated file is referred to as a replica of the replicated file.
Replication is often confused with caching, probably because they both deal with multiple copies of data. The two concepts have the following basic differences:
A replica is associated with a server, whereas a cached copy is associated with a client.
The existence of a cached copy primarily depends on locality in file access patterns, whereas the existence of a replica normally depends on availability and performance requirements.
Satyanarayanan [1992] distinguishes a replicated copy from a cached copy by calling them first-class replicas and second-class replicas, respectively.
Deadlocks occur when processes are waiting for resources held by other processes, resulting in a circular wait. Four conditions must be met: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlocks can be handled through avoidance, prevention, or detection and recovery. Avoidance algorithms allocate resources only if it ensures the system remains in a safe state where deadlocks cannot occur. Prevention methods make deadlocks impossible by ensuring at least one condition is never satisfied, such as through collective or ordered resource requests. Detection finds existing deadlocks by analyzing resource allocation graphs or wait-for graphs to detect cycles.
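The cycle-detection step of deadlock detection can be sketched over a wait-for graph; a cycle in that graph is exactly the circular-wait condition described above:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it waits on]} using DFS with three colors:
    a back edge to a GRAY (in-progress) node means a cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True          # back edge: cycle found
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 on P1: circular wait, hence deadlock.
cyclic = has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})
acyclic = has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []})
```

In a distributed setting the hard part is assembling this graph consistently across sites; the cycle check itself is the same.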
Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.
Overview of Network Programming, Remote Procedure Calls, Remote Method Invocation, Message Oriented Communication, and web services in distributed systems
This document compares three distributed operating systems: Amoeba, Mach, and Chorus. Amoeba was designed for distributed systems and uses a pool processor execution model, automatic load balancing, and automatic file replication. Mach was designed for single CPU/multiprocessors and provides extensive multiprocessor support. Chorus is a microkernel-based real-time operating system that is optimized for the local case and provides asynchronous communication. The document outlines key differences between the three operating systems in areas such as architecture, communication methods, memory management, and UNIX compatibility.
Distributed systems face challenges with time ordering and synchronization due to a lack of a global clock. Logical clocks provide a way to determine event ordering without precise time information. Lamport's algorithm uses logical clocks and the "happens before" relation to ensure proper synchronization. Physical clocks also require synchronization methods like Cristian's algorithm to limit clock drift between distributed nodes. Mutual exclusion in distributed systems can be achieved through centralized coordination of access to critical sections.
This document discusses different types of clocks used in distributed systems, including physical clocks and logical clocks. Physical clocks are tied to real time but may drift over time. Logical clocks like Lamport's logical clock and vector clocks are derived from potential causality between events and can order events without reference to real time. The document covers clock synchronization algorithms like Cristian's algorithm and Berkeley algorithm for internal synchronization. It also discusses external synchronization using protocols like NTP. Logical clocks like Lamport's clock and vector clocks can order events based on potential causality between them using logical timestamps.
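The partial order that vector clocks induce (happened-before vs. concurrent) can be sketched with an element-wise comparison; this assumes both vectors index the same processes in the same order:

```python
def vc_compare(a, b):
    """Compare two equal-length vector clocks. Returns 'before',
    'after', 'equal', or 'concurrent'."""
    le = all(x <= y for x, y in zip(a, b))  # a <= b component-wise
    ge = all(x >= y for x, y in zip(a, b))  # a >= b component-wise
    if le and ge:
        return "equal"
    if le:
        return "before"      # a happened-before b
    if ge:
        return "after"       # b happened-before a
    return "concurrent"      # causally unrelated events

# [1,0,0] happened before [2,1,0]; [1,2,0] and [2,1,0] are concurrent.
r1 = vc_compare([1, 0, 0], [2, 1, 0])
r2 = vc_compare([1, 2, 0], [2, 1, 0])
```

This ability to report "concurrent" is precisely what Lamport scalar timestamps cannot do.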
This document provides an overview of clock synchronization in distributed systems. It discusses how physical clocks can differ slightly in frequency and how precise atomic clocks are used to define International Atomic Time (TAI) and Universal Coordinated Time (UTC). It also describes several common clock synchronization algorithms, including Cristian's algorithm, the Berkeley algorithm, and averaging algorithms. Logical clocks are introduced as an alternative to synchronized physical clocks for maintaining consistency in distributed algorithms. Lamport timestamps are presented as a way to totally order events in a distributed system.
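The Berkeley-style averaging mentioned above can be sketched as follows (network-delay compensation is omitted for brevity; the numbers reproduce the classic textbook example of clocks reading 3:00, 3:25, and 2:50):

```python
def berkeley_adjustments(coordinator_time, slave_times):
    """Sketch of the Berkeley algorithm: the coordinator polls every
    clock (including its own), averages the readings, and sends each
    machine the correction needed to reach that average."""
    clocks = [coordinator_time] + list(slave_times)
    average = sum(clocks) / len(clocks)
    return [average - c for c in clocks]  # per-machine corrections

# Coordinator reads 180 s, slaves read 205 s and 170 s past the hour.
adj = berkeley_adjustments(180, [205, 170])
# average = (180 + 205 + 170) / 3 = 185, so corrections are +5, -20, +15
```

Sending a relative correction rather than an absolute time matters in practice: a clock that must go "backwards" is instead slowed down gradually to keep timestamps monotonic.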
Chapter 5 discusses synchronization in distributed systems. Synchronization mechanisms are needed to enforce correct interaction between processes that share resources and run concurrently. Clock synchronization and event ordering are important synchronization techniques. Clock synchronization aims to keep clocks across distributed nodes close together despite unpredictable delays. It can be achieved through centralized or distributed algorithms. Event ordering ensures a total order of all events in a distributed system through happened-before relations and logical clocks.
Clock synchronization in distributed systems (Harshita Ved)
The document discusses various techniques for synchronizing clocks in distributed real-time systems. It begins by explaining that real-time systems require results within a certain time frame and interactions with the physical world. The challenges of distributed systems are then presented, where individual node clocks may run at different speeds and it is difficult to determine which event occurred first. Several clock synchronization algorithms are outlined, including using a global clock, averaging individual clocks, having an external time source, and assigning timestamps to messages. The Cristian and Berkeley algorithms are then described in more detail as centralized synchronization approaches where one node coordinates keeping all clocks aligned.
This document summarizes key concepts around synchronization in distributed systems including clock synchronization, logical clocks, election algorithms, and mutual exclusion. It discusses how physical clocks become unsynchronized over time, introduces Lamport timestamps and logical clocks, describes the Bully and Ring election algorithms, and covers a centralized mutual exclusion algorithm that uses a coordinator to grant processes permission to enter a critical region.
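The outcome of a Bully election can be sketched by abstracting away the message exchanges: the initiator challenges all higher-numbered processes, and the highest-numbered process that is still alive wins and announces itself as coordinator:

```python
def bully_election(process_ids, alive, initiator):
    """Sketch of the Bully algorithm's outcome. `alive` is the set of
    process ids that respond to election messages; the messaging
    itself (ELECTION / OK / COORDINATOR) is abstracted away."""
    candidates = [p for p in process_ids if p in alive and p >= initiator]
    # The highest surviving id "bullies" everyone else into submission.
    return max(candidates)

# Processes 1..5; process 5 has crashed; process 2 starts the election.
coordinator = bully_election([1, 2, 3, 4, 5], alive={1, 2, 3, 4},
                             initiator=2)
```

In the Ring variant, by contrast, the election message circulates around a logical ring collecting ids, and the maximum id in the completed list becomes coordinator.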
The document discusses synchronization in distributed systems. It covers clock synchronization algorithms to ensure processes agree on time ordering of events. Logical clocks like Lamport timestamps are used when exact time is not needed, only causal ordering. Distributed mutual exclusion algorithms are discussed to allow only one process access to shared resources at a time, including token-based and permission-based approaches. The latter includes both a centralized and distributed algorithm.
Synchronization (Pradeep K. Sinha)
Introduction
Issues in Synchronization
Clock synchronization
Event Ordering
Mutual Exclusion
Deadlock
Election algorithms
Clock Synchronization
How Computer Clocks are Implemented
Drifting of Clocks
Types of Clock Synchronization and issues in them
Clock Synchronization Algorithms
Distributed and Centralized Algorithms
Case Study
Event Ordering
Happened Before Relation
Logical Clocks Concept and Implementation
Mutual Exclusion
Centralized Approach, Distributed Approach, Token Passing Approach
Deadlocks
Election algorithms
The document summarizes concepts related to synchronization in distributed systems including clock synchronization algorithms, logical clocks, mutual exclusion, and election algorithms. It describes Cristian's algorithm and the Berkeley algorithm for clock synchronization, Lamport's logical clocks and vector clocks, centralized and distributed algorithms for mutual exclusion, a token ring algorithm, and the bully and ring algorithms for elections.
The document discusses clock synchronization in distributed systems. It begins by explaining how computer clocks work using quartz crystals and counters. It then discusses the need for clock synchronization across nodes since their clocks will drift over time. Centralized and distributed clock synchronization algorithms are described. Centralized algorithms rely on a time server but have single point of failure issues. Distributed algorithms allow each node to independently synchronize or average with neighbors to reach consensus. Lamport's logical clocks are presented as a way to order events in a distributed system. Mutual exclusion is also discussed as a challenge for ensuring only one process accesses a shared resource like a file at a time.
This document discusses various synchronization issues in distributed systems including clock synchronization, event ordering, mutual exclusion, and deadlock. It describes how computer clocks are implemented and different clock synchronization algorithms like centralized and distributed algorithms. It explains logical clocks and happened-before relation for event ordering. Different approaches for mutual exclusion like centralized, distributed, and token-passing are outlined. The four conditions for deadlock and different strategies like avoidance, prevention, and detection and recovery are summarized. Resource allocation graphs and wait-for graphs are introduced for modeling deadlocks.
Clock synchronization: Clocks, events and process states, Synchronizing physical clocks,
Logical time and logical clocks, Lamport’s Logical Clock, Global states, Distributed mutual
exclusion algorithms: centralized, decentralized, distributed and token ring algorithms,
election algorithms, Multicast communication.
Lesson 05 - Time in Distrributed System.pptxLagamaPasala
This document discusses time and clocks in distributed systems. It explains that distributed systems rely on time for scheduling, timeouts, failure detection and other purposes. It discusses physical clocks like quartz clocks and atomic clocks, and how they are used to synchronize time across distributed nodes. It also discusses logical clocks and different clock implementations. It explains concepts like clock skew, clock drift, and protocols like NTP and PTP that are used for clock synchronization in distributed systems. It distinguishes between monotonic and time-of-day clocks and their appropriate usage.
1) The document discusses synchronization in distributed systems, including logical clocks, physical clock synchronization methods like NTP, and logical time.
2) NTP is described as the standard protocol for synchronizing clocks across the Internet, using a hierarchical structure of servers and statistical techniques.
3) Logical clocks allow events in a distributed system to be partially ordered even if physical clocks are not perfectly synchronized.
The document discusses various topics related to distributed systems including clock synchronization, mutual exclusion, election algorithms, and fault tolerance. It provides details on:
1. Centralized and distributed clock synchronization algorithms including passive and active time server approaches and global and localized averaging algorithms.
2. Lamport's logical clocks for ordering events in a distributed system.
3. Mutual exclusion algorithms including centralized, distributed, and token passing approaches.
4. Traditional election algorithms like the Bully algorithm and ring algorithm.
5. Fault tolerance techniques using redundancy like replicating servers and majority voting.
This document provides an introduction to distributed systems. It discusses why distributed systems are developed, defines what a distributed system is, and provides examples. It then compares different types of distributed systems and networked operating systems. The rest of the document outlines various concepts, issues, and algorithms related to distributed systems, including advantages and disadvantages over centralized systems, software concepts, mutual exclusion, synchronization, and clock synchronization.
This document summarizes key concepts related to time and clocks in distributed systems. It discusses how physical clocks work, including obtaining accurate time from sources like atomic clocks and synchronizing clocks across distributed systems. It also covers logical clocks and how they are used to order events in a way that preserves causality. Other distributed computing topics summarized include mutual exclusion algorithms, elections, and atomic transactions including concurrency control methods like two-phase locking and optimistic concurrency control.
2. CANONICAL PROBLEMS IN DISTRIBUTED SYSTEMS
Time ordering and clock synchronization
Leader election
Mutual exclusion
Distributed transactions
Deadlock detection
3. THE IMPORTANCE OF SYNCHRONIZATION
Because various components of a distributed
system must cooperate and exchange information,
synchronization is a necessity.
Various components of the system must agree on
the timing and ordering of events. Imagine a
banking system that did not track the timing and
ordering of financial transactions. Similar chaos
would ensue if distributed systems were not
synchronized.
Constraints, both implicit and explicit, are therefore
enforced to ensure synchronization of components.
4. CLOCK SYNCHRONIZATION
As in non-distributed systems, the knowledge
of “when events occur” is necessary.
However, clock synchronization is often more
difficult in distributed systems because there
is no ideal time source, and because
distributed algorithms must sometimes be
used.
Distributed algorithms must overcome:
Scattering of information
Local, rather than global, decision-making
5. CLOCK SYNCHRONIZATION
Time is unambiguous in centralized systems
System clock keeps time, all entities use this for
time
Distributed systems: each node has own
system clock
Crystal-based clocks are less accurate (drift on the
order of 1 part per million)
Problem: An event that occurred after another
may be assigned an earlier time
6. LACK OF GLOBAL TIME IN DS
It is impossible to guarantee that
physical clocks run at the same
frequency
Lack of global time can cause problems
Example: UNIX make
Edit output.c at a client
output.o is at a server (compile at server)
Client machine clock can be lagging behind
the server machine clock
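The make example above can be sketched numerically. This is a hedged illustration, not from the original slides: the variable names and timestamp values are hypothetical, chosen only to show how a lagging client clock makes a genuinely newer source file look older than its object file.

```python
# Hedged illustration of the UNIX make example: the client's clock lags
# the server's, so a later edit carries an earlier timestamp and make
# skips the needed recompile. All timestamp values are hypothetical.

server_clock = 2144   # server compiles output.o at this (server) time
client_clock = 2142   # client clock lags behind the server clock

output_o_mtime = server_clock       # output.o compiled on the server
output_c_mtime = client_clock + 1   # output.c edited on the client afterwards

# make's rule: recompile only if the source is newer than the object file
needs_recompile = output_c_mtime > output_o_mtime
print(needs_recompile)  # False, even though output.c is really newer
```

With synchronized clocks the edit would have received a timestamp later than 2144 and the recompile would trigger.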
7. LACK OF GLOBAL TIME – EXAMPLE
When each machine has its own clock, an
event that occurred after another event may
nevertheless be assigned an earlier time.
8. LOGICAL CLOCKS
For many problems, internal consistency of
clocks is important
Absolute time is less important
Use logical clocks
Key idea:
Clock synchronization need not be absolute
If two machines do not interact, no need to
synchronize them
More importantly, processes need to agree on
the order in which events occur rather than the
time at which they occurred
9. EVENT ORDERING
Problem: define a total ordering of all events that
occur in a system
Events in a single processor machine are totally
ordered
In a distributed system:
No global clock, local clocks may be unsynchronized
Cannot order events on different machines using local
times
Key idea [Lamport]
Processes exchange messages
Message must be sent before received
Send/receive used to order events (and to synchronize clocks)
10. LOGICAL CLOCKS
Often, it is not necessary for a computer to know the exact
time, only relative time. This is known as “logical time”.
Logical time is not based on timing but on the ordering of
events.
Logical clocks can only advance forward, not in reverse.
Non-interacting processes need not share a logical clock.
Computers generally obtain logical time using interrupts to
update a software clock. The more interrupts (the more
frequently time is updated), the higher the overhead.
11. LAMPORT’S LOGICAL CLOCK
SYNCHRONIZATION ALGORITHM
The most common logical clock synchronization algorithm
for distributed systems is Lamport's Algorithm. It is used in
situations where ordering is important but global time is not
required.
Based on the “happens-before” relation:
Event A “happens-before” Event B (A→B) when all
processes involved in a distributed system agree that
event A occurred first, and B subsequently occurred.
This DOES NOT mean that Event A actually occurred
before Event B in absolute clock time.
12. LAMPORT’S LOGICAL CLOCK SYNCHRONIZATION
ALGORITHM
A distributed system can use the “happens-before”
relation when:
Events A and B are observed by the same
process, or by multiple processes with the same
global clock
Event A is the sending of a message and
Event B is the receipt of that message, since a
message cannot be received before it is sent
If two events are not connected by any chain of
messages, they are considered concurrent – because
their order cannot be determined and it does not matter.
For ordering purposes, concurrent events can be ignored.
13. LAMPORT’S LOGICAL CLOCK SYNCHRONIZATION
ALGORITHM (CONT.)
In the previous examples, if a → b then C(a) < C(b)
If a and b are concurrent, C(a) and C(b) may compare either way
A send and its matching receive can never be assigned
the same clock value, because every message transfer
between two systems takes at least one clock tick.
In Lamport's Algorithm, logical clock values for events may
be changed, but always by moving the clock forward. Time
values can never be decreased.
An additional refinement in the algorithm is often used:
If Event A and Event B are concurrent and C(a) = C(b), some unique
property of the processes associated with these events can be used
to choose a winner. This establishes a total ordering of all events.
Process ID is often used as the tiebreaker.
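The (clock value, process ID) tiebreak above yields a total order because no two events on the same process share a clock value. A minimal sketch, assuming Python's built-in tuple comparison as the ordering (the event representation is illustrative, not from the slides):

```python
# Total ordering via the tiebreak described above: compare Lamport clock
# values first, then break ties with the process ID. Events are modeled
# here as (clock, pid) pairs, which is an assumption for illustration.

def total_order_key(event):
    """Sort key: Lamport clock value first, process ID as tiebreaker."""
    clock, pid = event
    return (clock, pid)

# Three events; the two with clock value 4 are concurrent.
events = [(4, 2), (3, 1), (4, 0)]
events.sort(key=total_order_key)
print(events)  # [(3, 1), (4, 0), (4, 2)]
```

The concurrent pair (4, 0) and (4, 2) is ordered arbitrarily but consistently by process ID, which is all a total order requires.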
14. LAMPORT’S LOGICAL CLOCK
SYNCHRONIZATION ALGORITHM (CONT.)
Lamport's algorithm can thus be used in distributed
systems to ensure synchronization:
A logical clock is implemented in each node in
the system.
Each node can determine the order in which
events have occurred in that system’s own point
of view.
The logical clock of one node does not need to
have any relation to real time or to any other
node in the system.
15. EVENT ORDERING USING HB
Goal: define the notion of the time of an event such
that
If A -> B then C(A) < C(B)
If A and B are concurrent, then C(A) may be <, =, or > C(B)
Solution:
Each processor i maintains a logical clock LCi
Whenever an event occurs locally at i: LCi = LCi + 1
When i sends a message to j, piggyback LCi
When j receives a message from i:
LCj = max(LCj, LCi) + 1
Claim: this algorithm meets the goals above
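The clock rules above can be written as a small Python class (an illustrative sketch; the class and method names are my own, not from any standard library):

```python
# Minimal sketch of a Lamport logical clock, one instance per process.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        # Increment the clock for every local event.
        self.time += 1
        return self.time

    def send(self):
        # Sending is an event; piggyback the timestamp on the message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # Fast-forward past the sender's timestamp, then count the
        # receive event itself: LC = max(LC, msg_time) + 1
        self.time = max(self.time, msg_time) + 1
        return self.time
```

Because `receive` applies max(LCj, LCi) + 1, the receive event is always timestamped strictly after the corresponding send.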
16. PROCESS EACH WITH ITS OWN CLOCK
•At time 6, Process 0 sends message A to Process 1
•It arrives at Process 1 at time 16 (the journey took 10 ticks)
•Message B from 1 to 2 takes 16 ticks
•Message C from 2 to 1 leaves at 60 and arrives at 56 - not possible
•Message D from 1 to 0 leaves at 64 and arrives at 54 - not possible
17. LAMPORT’S ALGORITHM CORRECTS THE CLOCKS
Use the 'happened-before' relation
Each message carries the sending time (as per the
sender's clock)
On arrival, the receiver fast-forwards its clock to be one
more than the sending time
(between every two events, the clock must tick at least
once)
18. PHYSICAL CLOCKS
The instantaneous time difference between two computers'
clocks is known as skew; the rate at which they diverge is
known as drift. Computer clock manufacturers specify a
maximum drift rate in their products.
Computer clocks are among the least accurate modern
timepieces.
Inside every computer is a chip containing a quartz
crystal oscillator to keep time. These crystals cost about 25
cents to produce.
Average loss of accuracy: 0.86 seconds per day
This skew is unacceptable for distributed systems. Several
methods are now in use to attempt the synchronization of
physical clocks in distributed systems:
19. PHYSICAL CLOCKS
Since the 17th century, time has been measured
astronomically
Solar day: interval between two consecutive
transits of the sun
Solar second: 1/86,400th of a solar day
20. PHYSICAL CLOCKS
1948: atomic clocks are invented
The most accurate clocks are atomic oscillators (one part in 10^13)
The BIH defined TAI (International Atomic Time)
86,400 TAI seconds is now about 3 msec less than a mean solar day
The BIH solves the problem by introducing a leap second
whenever the discrepancy between TAI and solar time grows to
800 msec
The resulting time scale is called Universal Coordinated Time (UTC)
When the BIH announces a leap second, power companies
raise their frequency to 61 or 51 Hz for 60 or 50 seconds, to
advance all the clocks in their distribution area.
21. PHYSICAL CLOCKS - UTC
Coordinated Universal Time
(UTC) is the international
time standard.
UTC is the current term for
what was commonly
referred to as Greenwich
Mean Time (GMT).
Zero hours UTC is midnight in
Greenwich, England, which
lies on the zero longitudinal
meridian.
UTC is based on a 24-hour
clock.
22. PHYSICAL CLOCKS
Most clocks are less accurate (e.g., mechanical watches)
Computers use crystal-based clocks (about one part in a million)
Results in clock drift
How do you tell time?
Use astronomical metrics (solar day)
Coordinated universal time (UTC) – international standard
based on atomic time
Add leap seconds to be consistent with astronomical time
UTC broadcast on radio (satellite and earth)
Receivers accurate to 0.1 – 10 ms
Need to synchronize machines with a master or with one
another
23. CLOCK SYNCHRONIZATION
Each clock has a maximum drift rate ρ:
1 - ρ <= dC/dt <= 1 + ρ
Two clocks may drift apart by 2ρt in time t
To limit the skew to δ, resynchronize every
δ/(2ρ) seconds
24. CRISTIAN’S ALGORITHM
Assuming there is one time server with UTC:
Each node in the distributed system periodically polls the time server.
The new time is estimated as t + (Treq + Treply)/2, where t is the
time reported by the server
This process is repeated several times and an average is taken.
The machine then adjusts its time accordingly.
Disadvantages:
Must account for the message delay between the client and the time
server
Single point of failure if the time server fails
25. CRISTIAN’S ALGORITHM
Synchronize machines to a
time server with a UTC
receiver
Machine P periodically requests
the time from the server
On receiving time t from the
server, P sets its clock to
t + treply, where treply is the
time to send the reply to P
Use (treq + treply)/2 as an
estimate of treply
Improve accuracy by
making a series of
measurements
26. PROBLEM WITH CRISTIAN’S ALGORITHM
Major Problem
Time must never run
backward
If the sender's clock is
fast, CUTC will be
smaller than the
sender's current value
of C
Minor Problem
It takes nonzero time
for the time server‟s
reply
This delay may be large
and vary with network
load
27. SOLUTION
Major Problem
Control the clock
Suppose the timer is set
to generate 100 interrupts/sec
Normally each interrupt
adds 10 msec to the time
To slow down, add only 9
msec
To advance, add 11 msec to
the time
Minor Problem
Measure it
Make a series of
measurements for accuracy
Discard the measurements
that exceed the threshold
value
The message that came
back fastest can be taken to
be the most accurate.
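The delay compensation described above can be sketched in a few lines of Python (an illustrative sketch; `cristian_estimate`, `t0`, and `t1` are my own names for the local clock readings around one poll of the server):

```python
# Sketch of Cristian-style delay compensation. t0 and t1 are the client's
# clock readings when the request was sent and when the reply arrived;
# server_time is the UTC value t returned by the server.
def cristian_estimate(t0, t1, server_time):
    rtt = t1 - t0
    # Assume the reply spent half the round trip in transit, so the
    # server's clock has advanced by about rtt/2 since it answered.
    return server_time + rtt / 2.0
```

For the series-of-measurements refinement, the client would keep the estimate from the poll with the smallest `rtt`, since that reply is likely the most accurate.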
28. BERKELEY ALGORITHM
Used in systems without UTC receiver
Keep clocks synchronized with one another
One computer is the master, the others are slaves
Master periodically polls slaves for their times
Average times and return differences to slaves
Communication delays compensated as in Cristian's
algorithm
Failure of master => election of a new master
29. BERKELEY ALGORITHM
a) The time daemon asks all the other machines for their clock values
b) The machines answer
c) The time daemon tells everyone how to adjust their clock
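The master's averaging step might look like the following sketch (`berkeley_adjustments` is an illustrative name; a real daemon would also compensate for message delay as in Cristian's algorithm):

```python
# Sketch of the Berkeley master's averaging step: poll every clock
# (including the master's own), average them, and hand each machine
# the delta it should apply to its clock.
def berkeley_adjustments(master_time, slave_times):
    clocks = [master_time] + slave_times
    avg = sum(clocks) / len(clocks)
    # First entry is the master's own adjustment, then one per slave.
    return [avg - c for c in clocks]
```

Sending deltas rather than absolute times matters: by the time a reply arrives, an absolute value would already be stale, while a delta remains valid.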
30. DECENTRALIZED AVERAGING ALGORITHM
Each machine on the distributed system has a daemon
without UTC.
Periodically, at an agreed-upon fixed time, each machine
broadcasts its local time.
Each machine calculates the correct time by averaging
all results.
31. NETWORK TIME PROTOCOL (NTP)
Enables clients across the Internet to be
synchronized accurately to UTC.
Overcomes large and variable message delays
Employs statistical techniques for filtering, based on past
quality of servers and several other measures
Can survive lengthy losses of connectivity:
Redundant servers
Redundant paths to servers
Provides protection against malicious interference
through authentication techniques
32. NETWORK TIME PROTOCOL (NTP) (CONT.)
Uses a hierarchy of servers located across the Internet.
Primary servers are directly connected to a UTC time
source.
33. NETWORK TIME PROTOCOL (NTP) (CONT.)
NTP has three modes:
Multicast Mode:
Suitable for user workstations on a LAN
One or more servers periodically multicasts the time to other
machines on the network.
Procedure Call Mode:
Similar to Cristian's algorithm
Provides higher accuracy than Multicast Mode because delays
are compensated.
Symmetric Mode:
Pairs of servers exchange pairs of timing messages that contain
time stamps of recent message events.
The most accurate, but also the most expensive mode
Although NTP is quite advanced, there is still a drift of 20-35 milliseconds!!!
34. MORE PROBLEMS
Causality
Vector timestamps
Global state and termination detection
Election algorithms
35. LOGICAL CLOCKS
For many DS algorithms, associating
an event with an absolute real time is
not essential; we only need an
unambiguous order of events
Lamport's timestamps
Vector timestamps
36. LOGICAL CLOCKS (CONT.)
Synchronization based on “relative time”.
“relative time” may not relate to the “real
time”.
Example: Unix make (is output.c modified more
recently than output.o was generated?)
What's important is that the processes in
the distributed system agree on the
ordering in which certain events occur.
Such “clocks” are referred to as Logical
Clocks.
37. EXAMPLE: WHY ORDER MATTERS?
Replicated accounts in Jaipur(JP) and Bikaner(BN)
Two updates occur at the same time
Current balance: $1,000
Update1: Add $100 at BN; Update2: Add interest of 1% at
JP
Whoops, inconsistent states!
38. LAMPORT ALGORITHM
Clock synchronization does not have to be
exact
Synchronization not needed if there is no
interaction between machines
Synchronization only needed when machines
communicate
i.e. must only agree on ordering of interacting
events
39. LAMPORT’S “HAPPENS-BEFORE” PARTIAL
ORDER
Given two events e and e', e < e' if:
1. Same process: e <i e' for some
process Pi
2. Same message: e = send(m) and
e' = receive(m) for some message m
3. Transitivity: there is an event e* such
that e < e* and e* < e'
40. CONCURRENT EVENTS
Given two events e and e':
If neither e < e' nor e' < e, then e || e'
[Figure: processes P1, P2, P3 on a real-time axis, with events a, b, c, d, e, f and messages m1, m2]
41. LAMPORT LOGICAL CLOCKS
Substitute synchronized clocks with a global
ordering of events
ei < ej => LC(ei) < LC(ej)
LCi is a local clock: contains increasing values
each process i has own LCi
Increment LCi on each event occurrence
within same process i, if ej occurs before ek
LCi(ej) < LCi(ek)
If es is a send event and er receives that send,
then
LCi(es) < LCj(er)
42. LAMPORT ALGORITHM
Each process increments local clock
between any two successive events
Message contains a timestamp
Upon receiving a message, if the received
timestamp is ahead, the receiver fast-forwards
its clock to be one more than the sending
time
43. LAMPORT ALGORITHM (CONT.)
Timestamp
Each event is given a timestamp t
If es is a send message m from pi, then
t=LCi(es)
When pj receives m, set LCj value as follows
If t < LCj, increment LCj by one
Message regarded as next event on j
If t ≥ LCj, set LCj to t+1
44. LAMPORT’S ALGORITHM ANALYSIS (1)
Claim: ei < ej => LC(ei) < LC(ej)
Proof: by induction on the length of the
sequence of events relating ei and ej
[Figure: the same three-process example with events a-g and messages m1, m2, annotated with the Lamport clock value of each event]
45. LAMPORT’S ALGORITHM ANALYSIS (2)
LC(ei) < LC(ej) => ei < ej ?
Claim: if LC(ei) < LC(ej), it is not
necessarily true that ei < ej
[Figure: the same example with clock values; some events with smaller clock values are merely concurrent with, not causes of, events with larger values]
46. TOTAL ORDERING OF EVENTS
Happens-before is only a partial order
Make the timestamp of an event e of
process Pi be: (LC(e),i)
(a,b) < (c,d) iff a < c, or a = c and b < d
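The tiebreak rule above maps directly onto Python's tuple comparison (an illustrative sketch; the event names and values below are made up):

```python
# Pairing each Lamport timestamp with the process ID turns the partial
# order into a total order; Python's tuple comparison implements exactly
# the rule (a,b) < (c,d) iff a < c, or a = c and b < d.
def total_order_key(lc, pid):
    return (lc, pid)

# Made-up (LC, pid, name) triples for illustration.
events = [(2, 3, "a"), (2, 1, "b"), (1, 2, "c")]
ordered = sorted(events, key=lambda e: total_order_key(e[0], e[1]))
# ordered: c with (1,2), then b with (2,1), then a with (2,3)
```

The two events with LC = 2 are ordered by process ID, so every process computes the same total order.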
47. APPLICATION: TOTALLY-ORDERED MULTICASTING
Message is timestamped with the sender's
logical time
Message is multicast (including sender itself)
When message is received
It is put into local queue
Ordered according to timestamp
Multicast acknowledgement
Message is delivered to applications only
when
It is at head of queue
It has been acknowledged by all involved
processes
48. APPLICATION: TOTALLY-ORDERED MULTICASTING
Update 1 is time-stamped and multicast. Added to local queues.
Update 2 is time-stamped and multicast. Added to local queues.
Acknowledgements for Update 2 sent/received. Update 2 can now be
processed.
Acknowledgements for Update 1 sent/received. Update 1 can now be
processed.
(Note: all queues are the same, as the timestamps have been used to
ensure the “happens-before” relation holds.)
49. LIMITATION OF LAMPORT’S ALGORITHM
ei < ej => LC(ei) < LC(ej)
However, LC(ei) < LC(ej) does not imply
ei < ej
For instance, (1,1) < (1,3), but events a and e are
concurrent
[Figure: the example annotated with (LC, process ID) pairs; a = (1,1) on P1 and e = (1,3) on P3 are concurrent even though (1,1) < (1,3)]
50. VECTOR TIMESTAMPS
Pi's clock is a vector VTi[]
VTi[i] = number of events Pi has
stamped
VTi[j] = what Pi thinks the number of
events Pj has stamped is (i ≠ j)
51. VECTOR TIMESTAMPS (CONT.)
Initialization
the vector timestamp for each process is
initialized to (0,0,…,0)
Local event
when an event occurs on process Pi, VTi[i] =
VTi[i] + 1
e.g., on processor 3, (1,2,1,3) becomes (1,2,2,3)
52. VECTOR TIMESTAMPS (CONT.)
Message passing
when Pi sends a message to Pj, the message
carries timestamp t[] = VTi[]
when Pj receives the message, it sets VTj[k] to
max(VTj[k], t[k]), for k = 1, 2, ..., N
e.g., P2 receives a message with timestamp (3,2,4)
and P2's timestamp is (3,4,3); P2 adjusts its
timestamp to (3,4,4)
53. COMPARING VECTORS
VT1 = VT2 iff VT1[i] = VT2[i] for all i
VT1 <= VT2 iff VT1[i] <= VT2[i] for all i
VT1 < VT2 iff VT1 <= VT2 and VT1 ≠ VT2
for instance, (1, 2, 2) < (1, 3, 2)
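The vector update and comparison rules can be sketched in Python (illustrative names; this variant also increments the receiver's own entry on receipt, treating the receive as an event, as slide 59 does):

```python
# Sketch of vector timestamps for n processes.
class VectorClock:
    def __init__(self, n, i):
        self.i = i            # this process's index
        self.v = [0] * n

    def local_event(self):
        self.v[self.i] += 1

    def send(self):
        # Sending is an event; piggyback a copy of the vector.
        self.local_event()
        return list(self.v)

    def receive(self, t):
        # Element-wise maximum with the piggybacked timestamp,
        # then count the receive event itself.
        self.v = [max(a, b) for a, b in zip(self.v, t)]
        self.v[self.i] += 1

def vt_leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def vt_lt(a, b):
    # VT1 < VT2 iff VT1 <= VT2 and VT1 != VT2
    return vt_leq(a, b) and a != b
```

Two timestamps where neither `vt_lt(a, b)` nor `vt_lt(b, a)` holds correspond to concurrent events, which is exactly what Lamport timestamps cannot express.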
54. VECTOR TIMESTAMP ANALYSIS
Claim: e < e' iff e.VT < e'.VT
[Figure: the same three-process example with events a-g and messages m1, m2, annotated with vector timestamps]
55. APPLICATION: CAUSALLY-ORDERED MULTICASTING
For ordered delivery, we also need…
Multicast msgs (reliable but may be out-of-order)
Vi[i] is only incremented when sending
When k gets a msg from j, with timestamp ts,
the msg is buffered until:
1: ts[j] = Vk[j] + 1
(this is the next timestamp that k is expecting from j)
2: ts[i] ≤ Vk[i] for all i ≠ j
(k has seen all msgs that were seen by j when j sent the
msg)
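The two buffering conditions can be written directly as a predicate (a sketch; `ts` is the message's vector timestamp, `vk` the receiver's vector, and `j` the sender's index):

```python
# Delivery test for causally-ordered multicast: receiver k (vector vk)
# buffers a message from sender j (timestamp ts) until both hold.
def can_deliver(ts, vk, j):
    # 1: this is the next message k expects from j
    next_from_j = ts[j] == vk[j] + 1
    # 2: k has already seen every message j had seen when j sent this one
    seen_rest = all(ts[i] <= vk[i]
                    for i in range(len(ts)) if i != j)
    return next_from_j and seen_rest
```

In the example on the next slide, the reply from P3 carries timestamp [1,0,1]; at P2, whose vector is still [0,0,0], condition 2 fails (1 > 0 in the first entry), so the reply is buffered until the original post arrives.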
57. CAUSALLY-ORDERED MULTICASTING (CONT.)
[Figure: P1 posts message a with timestamp [1,0,0]; P3 delivers a and multicasts reply r with timestamp [1,0,1]; at P2 the reply arrives first and is buffered, then delivered once a arrives]
The message a arrives at P2 after the reply from P3; the reply is
not delivered right away.
58. ORDERED COMMUNICATION
Totally ordered multicast
Use Lamport timestamps
Causally ordered multicast
Use vector timestamps
59. VECTOR CLOCKS
Each process i maintains a vector Vi
Vi[i]: number of events that have occurred at i
Vi[j]: number of events i knows have occurred at process
j
Update vector clocks as follows
Local event: increment Vi[i]
Send a message: piggyback the entire vector V
Receipt of a message at j: Vj[k] = max(Vj[k], Vi[k])
The receiver is told how many events the sender knows
occurred at every other process k
Also Vj[j] = Vj[j] + 1
60. GLOBAL STATE
Global state of a distributed system
Local state of each process
Messages sent but not received (state of the queues)
Many applications need to know the state of the
system
Failure recovery, distributed deadlock detection
Problem: how can you figure out the state of a
distributed system?
Each process is independent
No global clock or synchronization
Distributed snapshot: a consistent global state
62. DISTRIBUTED SNAPSHOT ALGORITHM
Assume each process communicates with
another process using unidirectional point-to-point
channels (e.g., TCP connections)
Any process can initiate the algorithm
Checkpoint local state
Send marker on every outgoing channel
On receiving a marker
If it is the first marker: checkpoint the local state
and send a marker on all outgoing channels;
save messages arriving on all other channels until
a marker arrives on each of them
Subsequent marker on a channel: stop saving
messages for that channel
63. DISTRIBUTED SNAPSHOT
A process finishes when
It receives a marker on each incoming channel
and processes them all
State: local state plus state of all channels
Send state to initiator
Any process can initiate snapshot
Multiple snapshots may be in progress
Each is separate, and each is distinguished by tagging
the marker with the initiator ID (and sequence
number)
[Figure: processes A, B, C connected by channels, with markers M in transit]
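One process's side of the marker rule can be sketched as follows (an illustrative class, assuming reliable FIFO channels; all names are my own):

```python
# Sketch of one process's role in a Chandy-Lamport-style snapshot.
class SnapshotProcess:
    def __init__(self, channels):
        self.channels = channels     # names of incoming channels
        self.recorded_state = None   # local checkpoint, once taken
        self.recording = {}          # channel -> in-flight messages saved
        self.done = set()            # channels whose marker has arrived

    def on_marker(self, channel, local_state):
        if self.recorded_state is None:
            # First marker: checkpoint local state; every channel's
            # recorded message list starts empty.
            self.recorded_state = local_state
            for c in self.channels:
                self.recording[c] = []
        self.done.add(channel)       # this channel's state is complete
        # The process finishes when a marker arrived on every channel.
        return self.done == set(self.channels)

    def on_message(self, channel, msg):
        # Between the first marker and this channel's own marker,
        # save in-flight messages as part of the channel state.
        if self.recorded_state is not None and channel not in self.done:
            self.recording[channel].append(msg)
```

The recorded local state plus the per-channel message lists is exactly the state the process would send back to the snapshot initiator.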
65. SNAPSHOT ALGORITHM EXAMPLE
b) Process Q receives a marker for the first time and records its
local state
c) Q records all incoming messages
d) Q receives a marker on its incoming channel and finishes
recording the state of that channel
66. TERMINATION DETECTION
Detecting the end of a distributed computation
Notation: let sender be predecessor, receiver be successor
Two types of markers: Done and Continue
After finishing its part of the snapshot, process Q sends a
Done or a Continue to its predecessor
Send a Done only when
All of Q's successors send a Done
Q has not received any message since it checkpointed its local state
and received a marker on all incoming channels
Else send a Continue
Computation has terminated if the initiator receives Done
messages from everyone
67. DISTRIBUTED SYNCHRONIZATION
Distributed system with multiple processes may
need to share data or access shared data
structures
Use critical sections with mutual exclusion
Single process with multiple threads
Semaphores, locks, monitors
How do you do this for multiple processes in a
distributed system?
Processes may be running on different machines
Solution: lock mechanism for a distributed
environment
Can be centralized or distributed
68. CENTRALIZED MUTUAL EXCLUSION
Assume processes are numbered
One process is elected coordinator (highest ID
process)
Every process needs to check with coordinator
before entering the critical section
To obtain exclusive access: send request, await
reply
To release: send release message
Coordinator:
Receive request: if available and queue empty, send
grant; if not, queue request
Receive release: remove next request from queue and
send grant
69. MUTUAL EXCLUSION:
A CENTRALIZED ALGORITHM
a) Process 1 asks the coordinator for permission to enter a
critical region. Permission is granted
b) Process 2 then asks permission to enter the same critical
region. The coordinator does not reply.
c) When process 1 exits the critical region, it tells the
coordinator, which then replies to 2
70. PROPERTIES
Simulates centralized lock using blocking calls
Fair: requests are granted the lock in the order they were
received
Simple: three messages per use of a critical section
(request, grant, release)
Shortcomings:
Single point of failure
How do you detect a dead coordinator?
A process cannot distinguish “lock in use” from a dead
coordinator
No response from the coordinator in either case
Performance bottleneck in large distributed systems
71. DISTRIBUTED ALGORITHM
[Ricart and Agrawala]: needs 2(n-1) messages
Based on event ordering and time stamps
Process k enters critical section as follows
Generate a new timestamp TSk = TSk + 1
Send request(k, TSk) to all other n-1 processes
Wait until reply(j) received from all other processes
Enter critical section
Upon receiving a request message, process j
Sends reply if no contention
If already in critical section, does not reply, queue request
If wants to enter, compare TSj with TSk and send reply if TSk<TSj,
else queue
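Process j's decision on an incoming request can be sketched as a predicate (illustrative; the states and the deferred-reply queue are modeled explicitly, and ties are broken with (timestamp, process ID) pairs):

```python
# Sketch of process j's handling of request(k, TSk) in Ricart-Agrawala.
# state is "RELEASED", "HELD", or "WANTED"; my_ts/my_id describe j's own
# pending request when state == "WANTED".
def on_request(state, my_ts, my_id, req_ts, req_id, deferred):
    if state == "HELD" or (state == "WANTED"
                           and (my_ts, my_id) < (req_ts, req_id)):
        # j has priority: queue the request, reply after leaving the CS.
        deferred.append((req_ts, req_id))
        return False
    # No contention, or the requester has the smaller timestamp: reply now.
    return True
```

Because every process applies the same total order (timestamp, then ID), exactly one of two contending processes defers, so deadlock is avoided.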
72. A DISTRIBUTED ALGORITHM
a) Two processes want to enter the same critical region at the same
moment.
b) Process 0 has the lowest timestamp, so it wins.
c) When process 0 is done, it sends an OK also, so 2 can now enter
the critical region.
73. PROPERTIES
Fully decentralized
N points of failure!
All processes are involved in all decisions
Any overloaded process can become a
bottleneck
74. ELECTION ALGORITHMS
Many distributed algorithms need one process
to act as coordinator
It doesn't matter which process does the job; we just
need to pick one
Election algorithms: technique to pick a unique
coordinator (aka leader election)
Examples: take over the role of a failed
process, pick a master in Berkeley clock
synchronization algorithm
Types of election algorithms: Bully and Ring
algorithms
75. BULLY ALGORITHM
Each process has a unique numerical ID
Processes know the IDs and addresses of every other
process
Communication is assumed reliable
Key Idea: select process with highest ID
Process initiates election if it just recovered from
failure or if coordinator failed
3 message types: election, OK, I won
Several processes can initiate an election
simultaneously
Need consistent result
O(n^2) messages required with n processes
76. BULLY ALGORITHM DETAILS
Any process P can initiate an election
P sends Election messages to all processes with
higher IDs and awaits OK messages
If no OK messages arrive, P becomes coordinator and
sends I won messages to all processes with lower IDs
If it receives an OK, it drops out and waits for an I
won
If a process receives an Election message, it returns an
OK and starts its own election
If a process receives an I won, it treats the sender as
coordinator
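The outcome of one election can be sketched as follows (a deliberate simplification: `alive(i)` stands in for the whole Election/OK message exchange with process i, and the IDs are made up):

```python
# Simplified sketch of the bully election outcome from the initiator's view.
def bully_election(initiator, ids, alive):
    # Send Election to all higher-ID processes; an OK comes back from
    # each one that is alive.
    higher = [i for i in ids if i > initiator and alive(i)]
    if not higher:
        # No OK replies: the initiator wins and announces "I won".
        return initiator
    # Otherwise the election propagates upward; eventually the highest
    # live ID receives no OK and becomes coordinator.
    return max(i for i in ids if alive(i))
```

The real protocol runs these rounds concurrently via messages; the sketch only captures the invariant that the highest-ID live process always ends up coordinator.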
77. BULLY ALGORITHM EXAMPLE
The bully election algorithm
Process 4 holds an election
Process 5 and 6 respond, telling 4 to stop
Now 5 and 6 each hold an election
80. TODAY: STILL MORE CANONICAL
PROBLEMS
Election algorithms
Bully algorithm
Ring algorithm
Distributed synchronization and mutual
exclusion
Distributed transactions
86. RING-BASED ELECTION
Processes have unique IDs and are arranged in a logical ring
Each process knows its neighbors
Select process with highest ID
Begin election if just recovered or coordinator has failed
Send Election to closest downstream node that is alive
Sequentially poll each successor until a live node is found
Each process tags its ID on the message
Initiator picks node with highest ID and sends a coordinator
message
Multiple elections can be in progress
Wastes network bandwidth but does no harm
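The circulating-message idea can be sketched like this (illustrative; a real implementation forwards the Election message node by node, whereas the sketch iterates over the ring centrally, with `alive` standing in for failure detection):

```python
# Sketch of a ring election: the Election message travels once around the
# logical ring starting at `start`, skipping dead nodes and collecting the
# IDs of live ones; the initiator then announces the maximum.
def ring_election(ring, start, alive):
    n = len(ring)
    idx = ring.index(start)
    collected = []
    for step in range(1, n + 1):
        node = ring[(idx + step) % n]   # next node downstream
        if alive(node):
            collected.append(node)      # tag its ID onto the message
    return max(collected)
```

One full circulation plus one coordinator announcement gives the 2(n-1) message count quoted in the comparison on the next slide.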
88. COMPARISON
Assume n processes and one election in
progress
Bully algorithm
Worst case: initiator is the node with the lowest ID
Triggers n-2 elections at higher-ranked nodes: O(n^2)
messages
Best case: immediate election: n-2 messages
Ring
2 (n-1) messages always
89. A TOKEN RING ALGORITHM
a) An unordered group of processes on a network.
b) A logical ring constructed in software.
• Use a token to arbitrate access to critical section
• Must wait for token before entering CS
• Pass the token to neighbor once done or if not interested
• Detecting token loss is non-trivial
90. COMPARISON
A comparison of three mutual exclusion
algorithms.
Algorithm     Messages per entry/exit   Delay before entry (msg times)   Problems
Centralized   3                         2                                Coordinator crash
Distributed   2(n - 1)                  2(n - 1)                         Crash of any process
Token ring    1 to infinity             0 to n - 1                       Lost token, process crash
91. TRANSACTIONS
Transactions provide a higher-level
mechanism for atomicity of processing
in distributed systems
Have their origins in databases
Banking example: three accounts
A: $100, B: $200, C: $300
Client 1: transfer $4 from A to B
Client 2: transfer $3 from C to B
The result can be inconsistent unless
certain properties are enforced:

Client 1           Client 2
Read A: $100
Write A: $96
                   Read C: $300
                   Write C: $297
Read B: $200
                   Read B: $200
                   Write B: $203
Write B: $204

Both clients read B as $200, so Client 2's $3 deposit is lost:
B ends at $204 instead of $207.
92. ACID PROPERTIES
Atomic: all or nothing
Consistent: a transaction takes the
system from one consistent state to another
Isolated: immediate effects are not visible
to other transactions (serializable)
Durable: changes are permanent once the
transaction completes (commits)

Client 1           Client 2
Read A: $100
Write A: $96
Read B: $200
Write B: $204
                   Read C: $300
                   Write C: $297
                   Read B: $204
                   Write B: $207