BITS Pilani
Pilani | Dubai | Goa | Hyderabad
Dr. M. Jagadeesh
jagadeesh@wilp.bits-pilani.ac.in
SSWT ZG526
Distributed Computing
IMP Note to Self
START RECORDING
BITS Pilani
Pilani | Dubai | Goa | Hyderabad
CS-2 : Logical Clocks
[T1: Chap - 3]
Text and References
T1 - Ajay D. Kshemkalyani, and Mukesh Singhal “Distributed
Computing: Principles, Algorithms, and Systems”, Cambridge University
Press, 2008 (Reprint 2013).
R1 - John F. Buford, Heather Yu, and Eng K. Lua, “P2P Networking and
Applications”, Morgan Kaufmann, 2009 Elsevier Inc.
R2 - Kai Hwang, Geoffrey C. Fox, and Jack J. Dongarra, “Distributed and
Cloud Computing: From Parallel processing to the Internet of Things”,
Morgan Kaufmann, 2012 Elsevier Inc.
R3 - A. Tanenbaum and M. V. Steen, “Distributed Systems”, 2nd
Edition, Pearson Prentice Hall
Objective of Module – 2
Logical Clocks & Vector Clocks in Distributed Systems
Problem: No Global Clock in Distributed Systems
• In a distributed system, different machines (or processes) have independent clocks
that may not be synchronized. So, using physical time to determine the order of
events can be unreliable.
Solution: Logical Time
• Logical time provides a way to order events based on causality, rather than actual
clock time.
– Causality: If Event A causes Event B, then A must come before B.
Contact Session – 2
M2: Logical Clocks & Vector clocks
• A framework for a system of logical clocks.
• Scalar time, Vector time.
• Implementation of Logical and Vector clocks, Efficient implementation of Vector
clocks.
• Physical Clock synchronization: NTP
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time (or Scalar Clocks)
• Lamport Timestamps
• Vector Time (or Vector Clocks)
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Logical time introduction
• In distributed systems, physical clocks (like wall clocks) on different machines are not
synchronized — so you can't rely on them to say which event happened first.
• The concept of causality, that is, figuring out which events happened before or caused
others, is very important.
• In a single computer system, it's easy to track when events happen using the computer’s
physical clock (like timestamps). But in a distributed system (where multiple computers
work together over a network), it’s much harder because:
– Each computer has its own clock, and
– These clocks are not perfectly synchronized.
• Understanding causality helps us:
– Analyze how programs behave
– Debug problems
– Ensure that the system works correctly, even when many parts are working in parallel or
remotely.
Analogy: Email Between Friends
Imagine 3 friends: Alice, Bob, and Carol. They write messages and reply to each other.
• Alice sends an email to Bob.
• Bob replies to Alice.
• Carol sends a new email to Alice, unaware of the previous exchange.
Now we want to order these messages. Instead of using unreliable wall clocks, we give each
event a logical number:
• Alice's message to Bob: 1
• Bob's reply: 2 (because it happened after 1)
• Carol’s message: 1 (independent of Alice–Bob chain)
So, Bob’s reply causally depends on Alice’s message, but Carol’s message is independent.
Logical time captures this causality.
Logical time introduction
• Causality is the relationship by which one event, process, or state contributes to the
production of another event, process, or state, where the cause is partly responsible
for the effect, and the effect is partly dependent on the cause.
Logical time introduction-causal relationships
1. Distributed algorithms design
• Understanding which events happened before others (called causal precedence)
is very useful in distributed systems. It helps us:
• Ensure fairness and progress in mutual exclusion algorithms — so every process
gets a turn and doesn't wait forever.
• Keep copies of databases consistent, even when stored on different machines.
• Detect deadlocks correctly — to avoid situations where:
– Some deadlocks are missed (undetected), or
– We falsely think there's a deadlock when there isn't (phantom deadlocks).
Logical time introduction
2. Tracking of dependent events
Distributed Debugging
• Causal knowledge helps us recreate a consistent state of the system. This is needed when we want
to pause and restart a program correctly during debugging.
Failure Recovery
• We can use causal information to take a reliable snapshot or checkpoint — a saved state that the
system can return to if something goes wrong.
Replicated Databases
• If the network splits (network partitioning), causal tracking helps detect whether different copies
of files have become inconsistent.
Knowledge about the progress
The knowledge of the causal dependency among events helps measure the progress of processes in the
distributed computation.
Logical time introduction
3. Concurrency measure
If two events are causally dependent:
• One must happen before the other.
• They cannot be done at the same time.
If two events are not causally related:
• They are independent.
• They can happen at the same time, on different machines or processors — this is
concurrency.
"If events don’t depend on each other, we can do them in parallel and the more such
independent events we have, the more concurrency we can achieve."
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Implementing logical clocks
• A logical clock is a software-based timestamping mechanism used to order events in
distributed systems based on causality, not real time.
• It helps answer: “Did Event A happen before Event B?”
• There are two major types:
– Scalar logical clocks (like Lamport timestamps)
– Vector clocks (more advanced)
Implementing logical clocks
In a distributed system, each process (say, process pi) keeps track of time using logical clocks, not
physical ones.
Each process maintains two key data structures:
1. Local Logical Clock (lci)
• This clock is used by process pi alone.
• It helps the process track its own progress — like how many steps or events it has performed.
• Think of it as a counter that increases with every event or action done by the process.
2. Global Logical Clock (gci)
• This is a more complete view of the system from process pi's perspective.
• It tells pi what it believes about the progress of all processes, including itself.
• So, it’s like a local snapshot of the whole system's logical time, as seen by pi.
Implementing logical clocks
The protocol consists of the following two rules:
R1 : This rule governs how the local logical clock is updated by a process when it
executes an event (send, receive, or internal).
R2 : This rule governs how a process updates its global logical clock to update its view
of the global time and global progress. It dictates what information about the logical
time is piggybacked (attaching extra information to something that is already being
sent, instead of sending it separately) in a message and how this information is used
by the receiving process to update its view of the global time.
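To make these two rules concrete, here is a minimal Python skeleton of the framework (an illustrative sketch of my own; the class and method names are hypothetical). The scalar and vector clocks that follow instantiate these two hooks differently.

    class LogicalClockProcess:
        """Schematic process: R1 updates the local clock on every event,
        R2 merges the clock information piggybacked on a received message."""

        def __init__(self, pid):
            self.pid = pid
            self.clock = 0                    # scalar here; a vector for vector time

        def r1_local_update(self):
            # R1: run before every send, receive, or internal event
            self.clock += 1

        def send_event(self):
            self.r1_local_update()
            return self.clock                 # piggybacked on the outgoing message

        def receive_event(self, piggybacked):
            # R2: merge the sender's view of global time, then apply R1, then deliver
            self.clock = max(self.clock, piggybacked)
            self.r1_local_update()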
Analogy: A Group of Students Taking Notes
• Imagine a group of students (say, Alice, Bob, and Charlie) are working on a group project but sitting
in different rooms. They can't see each other, but they communicate by sending notes through
messengers.
Each student maintains:
• A personal notebook (like lci) to record when they do something.
• A shared view of what others are doing (like gci) — this is a whiteboard in each room where they
try to write what they know about everyone’s progress.
R1: Local Clock Update = Writing in Your Own Notebook
“Update your own progress when you do something.”
R2: Global Clock Update = Updating the Whiteboard
“When you receive a message, update your view of everyone’s progress using what the sender
knows.”
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
What is Scalar Time in Distributed Systems?
• In scalar time, each process uses a single integer variable Ci (a counter) to track logical
time. The counter is updated before each event.
• Scalar time means using a single non-negative integer to represent time (0, 1, 2, …).
• This value represents both:
– The process's local clock, and
– Its local view of global time
R1: When a process pi performs any event (send, receive, or internal), it updates its logical clock using:
Ci = Ci + d (d > 0)
What does this mean?
• Ci is the logical clock of process pi
• Before any action, it is incremented by d
• Usually, d = 1, to:
– Uniquely identify each event's time
– Keep the clock growing slowly and predictably
Scalar time
R2: When pi receives a message with timestamp Cmsg, it:
1. Ci = max(Ci, Cmsg)
2. Ci = Ci + d (R1, typically d = 1)
3. Deliver the message
Let’s say:
• Sender p1 has clock C1 = 5 and sends a message.
• Receiver p2 has C2 = 3 before receiving it.
• What happens?
1. C2 = max(3, 5) = 5
2. C2 = 5 + 1 = 6 ← R1
3. Deliver the message
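A minimal Python sketch of these scalar-clock rules with d = 1 (my own illustration), replaying the p1/p2 example above:

    class ScalarClock:
        def __init__(self):
            self.c = 0

        def event(self):                      # R1: internal or send event
            self.c += 1
            return self.c

        def receive(self, c_msg):             # R2: max, then R1, then deliver
            self.c = max(self.c, c_msg)
            self.c += 1
            return self.c

    p1, p2 = ScalarClock(), ScalarClock()
    p1.c, p2.c = 4, 3
    msg_ts = p1.event()                       # C1 becomes 5; the message carries timestamp 5
    print(p2.receive(msg_ts))                 # max(3, 5) + 1 = 6, as in the example above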
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Lamport timestamps- a type of scalar logical clock
• They are named after their creator, Leslie Lamport.
• Lamport timestamps determine the order of events (causality) in a distributed computer system.
• Lamport timestamps provide a partial order, not full causality (they tell us some ordering of
events, but not the complete cause-and-effect relationship).
• So we use logical clocks, like Lamport timestamps, to assign numbers to events.
Why this is called a partial order
• Lamport timestamps correctly order causally related events.
• They cannot distinguish between:
– true cause-effect events
– events that happened independently
Simple example
• Two computers perform actions at the same time without communicating.
• Lamport timestamps still assign numbers like 3 and 4.
• But neither event caused the other.
Lamport timestamps -Scenario Setup
We have:
• Process 1 (P1)
• Process 2 (P2)
• Disk (D)
Key Actions:
• P1 sends a write request to the disk
• P1 sends a message to P2, asking it to read
• P2 receives the message, then sends its own read request to the disk
• The disk grants requests in the order they were received
Scenario-Event Sequence
Step 1: P1 does internal setup. C1 = 1 (increment before any event)
Step 2: P1 sends write request to Disk. C1 = 2 (send event; message timestamp = 2)
Step 3: P1 sends message to P2 saying "please read". C1 = 3 (message timestamp = 3)
Step 4: P2 receives the message. C2 = max(0, 3) + 1 = 4 (clock updated from the received message)
Step 5: P2 sends read request to Disk. C2 = 5 (increment before sending)
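The same five-step trace can be checked with a few lines of Python (an illustrative sketch using plain counters with d = 1):

    c1 = c2 = 0
    c1 += 1                                   # step 1: P1 internal setup          -> C1 = 1
    c1 += 1; write_ts = c1                    # step 2: P1 -> Disk (write), ts = 2 -> C1 = 2
    c1 += 1; msg_ts = c1                      # step 3: P1 -> P2 ("please read")   -> C1 = 3
    c2 = max(c2, msg_ts) + 1                  # step 4: P2 receives, max(0, 3) + 1 -> C2 = 4
    c2 += 1; read_ts = c2                     # step 5: P2 -> Disk (read)          -> C2 = 5
    print(write_ts, read_ts)                  # 2 5: the disk orders the write before the read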
Lamport timestamps-another Scenario
• The disk receives two requests: one from P1 (write), one from P2 (read).
• Due to network delay, both messages arrive at the same physical time.
• Disk must now decide which request happened-before the other — to maintain
causal correctness.
How Logical Clock Algorithm Helps
• Each message carries a Lamport timestamp (scalar clock value).
Let’s say:
• P1’s write request has timestamp = 2
• P2’s read request has timestamp = 5
• Even though they arrive at the same physical time, the logical timestamps tell us:
2 < 5 → P1’s write happened-before P2’s read
So, the disk can deterministically process:
• Write first
• Read after
• This guarantees causal correctness, even without physical clocks.
But What If Timestamps Are Equal?
Let’s say both timestamps = 5.
This might happen if:
– Events were independent (no causal link)
– Each process had the same local clock value when sending
• Then, we use a tie-breaking rule, often like this:
• If timestamps are equal, compare process IDs; the lower ID wins.
Example:
• P1 and P2 both send messages with timestamp = 5
• If P1 < P2 (say, P1 has ID "1", P2 has ID "2"), then P1's message is considered earlier
This ensures total ordering: every pair of events can be compared.
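A small sketch of this tie-breaking rule (my own illustration): compare (timestamp, process ID) pairs, so every pair of events gets a total order.

    def total_order_key(timestamp, pid):
        # Order by Lamport timestamp first, then break ties by process ID.
        return (timestamp, pid)

    events = [("P2 read request", 5, 2), ("P1 write request", 5, 1)]
    events.sort(key=lambda e: total_order_key(e[1], e[2]))
    print([name for name, ts, pid in events])
    # ['P1 write request', 'P2 read request']: P1 wins the tie because of its lower ID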
Lamport timestamps
• Lamport invented a simple mechanism by which the happened-before ordering can
be captured numerically.
• A Lamport logical clock is an incrementing software counter maintained in each
process. It follows some simple rules:
1. A process increments its counter before each event in that process;
2. When a process sends a message, it includes its counter value with the message;
3. On receiving a message, the receiver process sets its counter to the maximum of the
message counter and its own counter, and then increments it, before it considers the
message received (counter = max(own counter, T) + 1)
Lamport timestamps
• Conceptually, this logical clock can be thought of as a clock that only has meaning in
relation to messages moving between processes.
• When a process receives a message, it resynchronizes its logical clock with that
sender
• Lamport timestamps give a partial order — you can reason about causality (A → B),
but not always about exact or total ordering.
Example:
• P1: Event at timestamp 3
• P2: Event at timestamp 4
They may look ordered (3 < 4), but if no messages were exchanged the events are
independent (concurrent, with no causal relation), and the timestamps say nothing about their order.
• To reason fully about concurrency and causality, we need Vector Clocks.
Visual Analogy
Logical Clock = category
└── Scalar Time = method of using simple numbers
└───── Lamport Timestamp = specific algorithm using scalar time
Lamport timestamps are a specific, widely-used implementation of logical clocks that
use scalar time to maintain partial ordering of events in distributed systems.
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Vector time
• Vector Time is a way of tracking the exact causal relationship between events in a
distributed system.
• Unlike Lamport timestamps (which use one number), vector time uses a vector
(list) of numbers — one entry for each process in the system.
• vti = [vti(1), vti(2), ..., vti(n)] : this is the vector maintained by process pi in a
distributed system of n processes.
Example: 3 Processes – P1, P2, P3
• vt1 = [ , , ] ← at Process P1
• vt2 = [ , , ] ← at Process P2
• vt3 = [ , , ] ← at Process P3
vt1 = [4, 2, 1]
vt1(1) = 4 : P1 has performed 4 events
vt1(2) = 2 : P1 knows that P2 has performed 2 events
vt1(3) = 1 : P1 knows that P3 has performed 1 event
Example (3 Processes: P1, P2, P3)
• Suppose: Process P1's vector: vt1 = [4, 2, 0]
• Then:
– vt1(1) = 4 → P1 has performed 4 events
– vt1(2) = 2 → P1 believes P2 has done 2 events
– vt1(3) = 0 → P1 has no knowledge of P3’s events
• So: vt1(1) is P1’s own logical clock → shows that P1 has reached logical time 4
When Is vti(i) Updated?
Any time pi performs an event:
– Internal computation
– Sending a message
– Receiving a message
• vti(i) = vti(i) + 1. This keeps track of how far pi has progressed in the computation.
Step-by-Step Execution
Initial Setup
• Each process starts with a vector initialized to zeros:
P1: [0, 0, 0]
P2: [0, 0, 0]
P3: [0, 0, 0]
Step 1: P1 performs an internal event, P1: [1, 0, 0]
Step 2: P2 performs an internal event, P2: [0, 1, 0]
Step 3: P1 sends a message to P2, now P1 increments its own clock: [2, 0, 0]
Step-by-Step Execution
Step 4: P2 receives the message from P1,
• P2 updates each entry by taking the max of its own vector and the received vector:
– Before: [0, 1, 0]
– Message: [2, 0, 0]
– → max([0,1,0], [2,0,0]) = [2,1,0]
– Then P2 increments its own entry: P2: [2, 2, 0]
Step 5: P2 sends a message to P3, P2 increments its clock: P2: [2, 3, 0]
Step 6: P3 receives the message from P2
– P3’s current vector: [0, 0, 0]
– Message: [2, 3, 0]
– → max([0,0,0], [2,3,0]) = [2,3,0]
– Then increment P3’s own clock: P3: [2, 3, 1]
Step-by-Step Execution
• Step 7: P3 sends a message to P1, P3 increments its clock: P3:[2, 3, 2]
• Step 8: P1 receives message from P3
– P1’s vector: [2, 0, 0]
– Message: [2, 3, 2]
– → max([2,0,0], [2,3,2]) = [2,3,2]
– Then increment P1’s own clock: P1: [3, 3, 2]
Final Vector Clocks
Interpretation of P1’s Clock [3, 3, 2]
• P1 has done 3 events
• It knows P2 has done 3 events
• It knows P3 has done 2 events
P1: [3, 3, 2]
P2: [2, 3, 0]
P3: [2, 3, 2]
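The run above can be replayed with a short Python sketch (my own; the helper names are illustrative), which reproduces exactly these final clocks:

    def internal(vt, i):                      # R1 with d = 1
        vt[i] += 1

    def send(vt, i):                          # increment own entry, then piggyback a copy
        vt[i] += 1
        return list(vt)

    def receive(vt, i, piggyback):            # component-wise max, then R1
        for k in range(len(vt)):
            vt[k] = max(vt[k], piggyback[k])
        vt[i] += 1

    p1, p2, p3 = [0, 0, 0], [0, 0, 0], [0, 0, 0]
    internal(p1, 0)                           # step 1 -> P1: [1, 0, 0]
    internal(p2, 1)                           # step 2 -> P2: [0, 1, 0]
    m1 = send(p1, 0)                          # step 3 -> P1: [2, 0, 0]
    receive(p2, 1, m1)                        # step 4 -> P2: [2, 2, 0]
    m2 = send(p2, 1)                          # step 5 -> P2: [2, 3, 0]
    receive(p3, 2, m2)                        # step 6 -> P3: [2, 3, 1]
    m3 = send(p3, 2)                          # step 7 -> P3: [2, 3, 2]
    receive(p1, 0, m3)                        # step 8 -> P1: [3, 3, 2]
    print(p1, p2, p3)                         # [3, 3, 2] [2, 3, 0] [2, 3, 2]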
Vector time
Process pi uses the following two rules R1 and R2 to update its clock:
R1 : Before executing an event, process pi updates its local logical time as follows:
vti(i) = vti(i) + d (d > 0)
Vector time
R2 : Each message m is piggybacked with the vector clock vt of the sender process at
sending time. On receipt of such a message (m, vt), process pi executes the following
sequence of actions:
1. Update its global logical time: for 1 ≤ k ≤ n, vti(k) = max(vti(k), vt(k))
2. Execute R1.
3. Deliver the message m.
The timestamp associated with an event is the value of the vector clock of its process
when the event is executed.
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Rules for vector clocks updates
• Initially all clocks are zero.
• Each time a process experiences an internal event, it increments its own logical
clock in the vector by one.
• Each time a process prepares to send a message, it sends its entire vector along
with the message being sent.
• Each time a process receives a message, it increments its own logical clock in the
vector by one and updates each element in its vector by taking the maximum of the
value in its own vector clock and the value in the vector in the received message
(for every element).
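Given these rules, two vector timestamps can also be compared to decide causality versus concurrency (a standard property of vector clocks, sketched here as my own illustration rather than material from the slide): a happened before b exactly when every entry of a is less than or equal to the matching entry of b and the two vectors differ; otherwise the events are concurrent.

    def happened_before(a, b):
        # a -> b iff every entry of a is <= the matching entry of b, and a != b
        return all(x <= y for x, y in zip(a, b)) and a != b

    def concurrent(a, b):
        return not happened_before(a, b) and not happened_before(b, a)

    print(happened_before([2, 3, 0], [3, 3, 2]))   # True: that P2 state precedes P1's final state
    print(concurrent([1, 0, 0], [0, 1, 0]))        # True: the two internal events of steps 1 and 2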
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Singhal–Kshemkalyani’s differential technique
• Singhal–Kshemkalyani’s differential technique optimizes the vector clock
communication overhead in distributed systems.
Why Optimize Vector Clock Messages?
Problem:
• In a system with n processes, each message usually carries a vector of size n.
• This becomes inefficient when n is large.
• But often, only a few entries in the vector clock change between messages —
especially when only a few processes interact frequently.
Example
• 3 processes: P1, P2, P3
Vectors are of size 3
Initial State:
• P1 sends message to P2:
– P1’s clock: [5, 2, 1]
– Last vector sent to P2: [3, 2, 1]
Difference:
– Only P1's own entry has changed (from 3 → 5)
– So, P1 sends only { index: 1, value: 5 } instead of the full vector [5, 2, 1] (see the sketch below)
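A hedged Python sketch of the differential idea (an illustration of the technique, not the authors' exact bookkeeping): the sender remembers the last vector it sent to each destination and ships only the entries that changed; the receiver merges them with a component-wise max.

    def diff_to_send(current, last_sent):
        # {index: value} pairs that changed since the last message to this destination
        # (indices are 1-based, matching the slide).
        return {k + 1: v for k, (v, old) in enumerate(zip(current, last_sent)) if v != old}

    def apply_diff(vt, diff):
        for idx, val in diff.items():
            vt[idx - 1] = max(vt[idx - 1], val)

    current, last_sent = [5, 2, 1], [3, 2, 1]
    print(diff_to_send(current, last_sent))        # {1: 5} instead of the full [5, 2, 1]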
Singhal–Kshemkalyani’s differential technique
Benefits:
Thus this technique cuts down the
– message size,
– communication bandwidth and
– buffer (to store messages) requirements.
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Fowler–Zwaenepoel’s direct-dependency technique
• Fowler–Zwaenepoel's direct-dependency technique offers an alternative to vector
clocks by avoiding the transmission of full clock vectors at runtime.
What Problem Are They Solving?
• In systems with many processes, vector clocks (which grow with the number of
processes) become costly to transmit with every message. Fowler–Zwaenepoel
proposed a technique to:
• Track causality with only scalar values in messages, which is much more lightweight:
it reduces message size by transmitting only a scalar value in each message.
• No vector clocks are maintained on-the-fly.
• Instead, a process only maintains information regarding direct dependencies on
other processes.
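A rough Python sketch of the direct-dependency idea (my own simplification, not the authors' exact protocol): each process keeps only a scalar event counter, messages carry just that scalar, and the receiver logs which sender event it received directly; transitive (vector) causality can be rebuilt offline from these logs.

    class DirectDependencyProcess:
        def __init__(self, pid):
            self.pid = pid
            self.clock = 0                    # scalar event counter
            self.direct_deps = []             # (my_event, sender_pid, sender_event)

        def send(self):
            self.clock += 1
            return (self.pid, self.clock)     # only a scalar is piggybacked, no vector

        def receive(self, sender_pid, sender_clock):
            self.clock += 1
            # record only the direct dependency; full vector times are reconstructed offline
            self.direct_deps.append((self.clock, sender_pid, sender_clock))

    p1, p2 = DirectDependencyProcess(1), DirectDependencyProcess(2)
    p2.receive(*p1.send())
    print(p2.direct_deps)                     # [(1, 1, 1)]: P2's event 1 depends on P1's event 1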
Example
Let’s say:
• P1 sends msg1 to P2 → event a at P1 is a direct dependency of event b at P2
• P2 sends msg2 to P3 → event b is a direct dependency of event c at P3
So:
– c at P3 depends on b at P2
– b at P2 depends on a at P1
• You can recursively reconstruct:
– Vector time of c = combine vector time of a and b
– But none of these vectors were maintained at runtime — only direct
dependency links were logged.
Presentation Overview
• Logical Time Introduction
• Implementing Logical Clocks
• Scalar Time
• Lamport Timestamps
• Vector Time
• Rules For Vector Clocks Updates
• Singhal–Kshemkalyani’s Differential Technique
• Fowler–Zwaenepoel’s Direct-Dependency Technique
• Physical Clock Synchronization: NTP
Physical clock synchronization: NTP (Network Time
Protocol)
• In centralized systems, there is no need for clock synchronization because,
generally, there is only a single clock (e.g., the system time).
• A process gets the time by simply issuing a system call to the kernel.
• When another process subsequently asks for the time, it gets a later (higher) time value.
• Thus, in such systems, there is a clear ordering of events and there is no ambiguity
about the times at which these events occur.
Physical clock synchronization: NTP
• In distributed systems, there is no global clock or common memory.
• Each processor has its own internal clock and its own notion of time.
• In practice, these clocks can easily drift apart (run at slightly different speeds) by
several seconds per day, accumulating significant errors over time.
• This causes ambiguity in:
– Event ordering
– Log analysis
– Transaction validation
Physical clock synchronization: NTP
• NTP is a protocol used to synchronize the clocks of all computers in a distributed
system to a common reference time (typically UTC – Coordinated Universal Time).
• UTC is the global time standard used across the world to coordinate clocks and
timekeeping systems.
How NTP Works (Simplified)
– A client sends a request to an NTP server.
– The server responds with its current time.
– The client uses round-trip delay and offset estimation to correct its own clock.
• Round-trip delay: How long it takes to send a request and receive a reply.
• Clock offset: how far ahead or behind the client’s clock is compared to the server’s (see the sketch below).
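For reference, here is the standard NTP offset/delay estimation expressed in Python (the four timestamps t1..t4 are the usual client-send, server-receive, server-send, client-receive instants; the sample numbers are hypothetical):

    def ntp_offset_delay(t1, t2, t3, t4):
        # Standard NTP estimates, assuming roughly symmetric network delay.
        offset = ((t2 - t1) + (t3 - t4)) / 2  # how far the client's clock is behind (+) or ahead (-)
        delay = (t4 - t1) - (t3 - t2)         # round-trip time, excluding server processing
        return offset, delay

    print(ntp_offset_delay(t1=100.0, t2=100.6, t3=100.7, t4=100.3))
    # (0.5, 0.2): the client is about 0.5 s behind the server, with a 0.2 s round trip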
THANK YOU
IMP Note to Self
STOP RECORDING
