This document summarizes key concepts around synchronization in distributed systems including clock synchronization, logical clocks, election algorithms, and mutual exclusion. It discusses how physical clocks become unsynchronized over time, introduces Lamport timestamps and logical clocks, describes the Bully and Ring election algorithms, and covers a centralized mutual exclusion algorithm that uses a coordinator to grant processes permission to enter a critical region.
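The centralized mutual exclusion scheme described above can be sketched as a toy, single-threaded model. The `Coordinator` class and method names here are illustrative (not from the summarized document), and the REQUEST/GRANT/RELEASE message exchange is collapsed into method calls:

```python
from collections import deque

class Coordinator:
    """Toy centralized mutual-exclusion coordinator: one holder of the
    critical region at a time; later requesters wait in a FIFO queue."""

    def __init__(self):
        self.holder = None      # process currently in the critical region
        self.queue = deque()    # processes waiting for a grant

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            return "GRANT"
        self.queue.append(pid)  # no reply yet: requester blocks
        return "QUEUED"

    def release(self, pid):
        assert pid == self.holder, "only the holder may release"
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder      # next process granted, if any

c = Coordinator()
assert c.request("A") == "GRANT"   # region free: A enters immediately
assert c.request("B") == "QUEUED"  # B must wait in the queue
assert c.release("A") == "B"       # A leaves; coordinator grants B
```

The single coordinator makes the algorithm simple and fair, at the cost of being a single point of failure, which is what motivates the distributed alternatives.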
The document discusses synchronization in distributed systems. It covers clock synchronization algorithms to ensure processes agree on time ordering of events. Logical clocks like Lamport timestamps are used when exact time is not needed, only causal ordering. Distributed mutual exclusion algorithms are discussed to allow only one process access to shared resources at a time, including token-based and permission-based approaches. The latter includes both a centralized and distributed algorithm.
Distributed systems face challenges with time ordering and synchronization due to a lack of a global clock. Logical clocks provide a way to determine event ordering without precise time information. Lamport's algorithm uses logical clocks and the "happens before" relation to ensure proper synchronization. Physical clocks also require synchronization methods like Cristian's algorithm to limit clock drift between distributed nodes. Mutual exclusion in distributed systems can be achieved through centralized coordination of access to critical sections.
This document discusses various synchronization issues in distributed systems including clock synchronization, event ordering, mutual exclusion, and deadlock. It describes how computer clocks are implemented and different clock synchronization algorithms like centralized and distributed algorithms. It explains logical clocks and happened-before relation for event ordering. Different approaches for mutual exclusion like centralized, distributed, and token-passing are outlined. The four conditions for deadlock and different strategies like avoidance, prevention, and detection and recovery are summarized. Resource allocation graphs and wait-for graphs are introduced for modeling deadlocks.
Clock synchronization in Distributed System (Harshita Ved)
The document discusses various techniques for synchronizing clocks in distributed real-time systems. It begins by explaining that real-time systems require results within a certain time frame and interactions with the physical world. The challenges of distributed systems are then presented, where individual node clocks may run at different speeds and it is difficult to determine which event occurred first. Several clock synchronization algorithms are outlined, including using a global clock, averaging individual clocks, having an external time source, and assigning timestamps to messages. The Cristian and Berkeley algorithms are then described in more detail as centralized synchronization approaches where one node coordinates keeping all clocks aligned.
Clock synchronization in distributed system (Sunita Sahu)
This document discusses several techniques for clock synchronization in distributed systems:
1. Time stamping events and messages with logical clocks to determine partial ordering without a global clock. Logical clocks assign monotonically increasing sequence numbers.
2. Clock synchronization algorithms like NTP that regularly adjust system clocks across the network to synchronize with a time server. NTP uses averaging to account for network delays.
3. Lamport's logical clocks algorithm that defines "happened before" relations and increments clocks between events to synchronize logical clocks across processes.
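The Lamport clock rules in point 3 can be sketched in a few lines; the `LamportClock` class and its method names are illustrative:

```python
class LamportClock:
    """Minimal Lamport logical clock: increment on local/send events,
    take max(local, received) + 1 on receive."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local or send event: advance the clock by one.
        self.time += 1
        return self.time

    def receive(self, msg_timestamp):
        # Receive event: jump past the sender's timestamp if needed.
        self.time = max(self.time, msg_timestamp) + 1
        return self.time

# Two processes exchanging one message:
p1, p2 = LamportClock(), LamportClock()
t_send = p1.tick()           # p1 sends at logical time 1
p2.tick(); p2.tick()         # p2 performs two local events (time 2)
t_recv = p2.receive(t_send)  # p2 receives: max(2, 1) + 1 = 3
```

The receive rule is what enforces the happened-before relation: the receipt of a message always gets a larger timestamp than its send.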
This document provides an overview of clock synchronization in distributed systems. It discusses how physical clocks can differ slightly in frequency and how precise atomic clocks are used to define International Atomic Time (TAI) and Universal Coordinated Time (UTC). It also describes several common clock synchronization algorithms, including Cristian's algorithm, the Berkeley algorithm, and averaging algorithms. Logical clocks are introduced as an alternative to synchronized physical clocks for maintaining consistency in distributed algorithms. Lamport timestamps are presented as a way to totally order events in a distributed system.
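A minimal sketch of Cristian's algorithm as described above, assuming symmetric network delay; `request_server_time` is a placeholder for the real round trip to the time server:

```python
import time

def cristian_sync(request_server_time):
    """Cristian's algorithm sketch: estimate the server's current clock
    by adding half the measured round-trip time to the timestamp the
    server returned."""
    t0 = time.monotonic()
    server_time = request_server_time()  # server replies with its clock
    t1 = time.monotonic()
    rtt = t1 - t0
    # Assume symmetric delay: the reply took roughly rtt / 2 to arrive.
    return server_time + rtt / 2

# Fake server whose clock reads 100.0 at the moment it replies:
estimated = cristian_sync(lambda: 100.0)
```

The accuracy is bounded by the asymmetry of the two legs of the round trip, which is why repeated probes and minimum-RTT filtering are used in practice.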
This document discusses various topics related to synchronization in distributed systems, including distributed algorithms, logical clocks, global state, and leader election. It provides definitions and examples of key synchronization concepts such as coordination, synchronization, and determining global states. Examples of logical clock algorithms like Lamport clocks and vector clocks are provided. Challenges around clock synchronization and calculating global system states are also summarized.
This document discusses several key concepts in distributed systems including event ordering, mutual exclusion, concurrency control, deadlock handling, and election algorithms. It provides details on implementing happened-before relations to ensure event ordering. It describes centralized and distributed approaches for mutual exclusion and discusses two-phase commit and locking protocols for concurrency control. It also covers deadlock prevention techniques like wait-die and wound-wait schemes and algorithms for distributed deadlock detection and coordinator election.
This document discusses several key concepts in distributed systems including event ordering, mutual exclusion, concurrency control, deadlock handling, and election algorithms. It provides details on implementing happened-before relations to ensure event ordering. It describes centralized and distributed approaches for mutual exclusion and discusses two-phase commit and locking protocols for concurrency control. It also covers deadlock prevention techniques like timestamp ordering and various distributed deadlock detection algorithms. Finally, it summarizes bully and ring algorithms for electing a new coordinator when failures occur.
Chapter 5 discusses synchronization in distributed systems. Synchronization mechanisms are needed to enforce correct interaction between processes that share resources and run concurrently. Clock synchronization and event ordering are important synchronization techniques. Clock synchronization aims to keep clocks across distributed nodes close together despite unpredictable delays. It can be achieved through centralized or distributed algorithms. Event ordering ensures a total order of all events in a distributed system through happened-before relations and logical clocks.
Synchronization in distributed computing (SVijaylakshmi)
Synchronization in distributed systems is achieved via clocks. Physical clocks are used to adjust the time of nodes, and each node in the system can share its local time with the other nodes. The time is set based on UTC (Coordinated Universal Time).
Communication And Synchronization In Distributed Systems (guest61205606)
This document discusses various topics related to communication and synchronization in distributed systems. It covers concepts like communication protocols, remote procedure calls, client-server and peer-to-peer models, blocking vs non-blocking communication, reliability, group communication, message ordering, and synchronization techniques including clock synchronization algorithms, mutual exclusion algorithms, and atomic transactions.
Synchronization in distributed systems ensures events are coordinated to operate systems uniformly. This document outlines key synchronization techniques:
1. Clock synchronization algorithms ensure process clocks are aligned for tasks like resource accounting.
2. Mutual exclusion algorithms prevent simultaneous access to shared resources like files or printers. Approaches include centralized, distributed (permission-based), and token-based algorithms.
3. Election algorithms choose a coordinator; in the bully algorithm, the highest-priority process wins if no higher-priority process responds, restoring coordination after a failure.
This document discusses different types of clocks used in distributed systems, including physical clocks and logical clocks. Physical clocks are tied to real time but may drift over time. Logical clocks like Lamport's logical clock and vector clocks are derived from potential causality between events and can order events without reference to real time. The document covers clock synchronization algorithms like Cristian's algorithm and Berkeley algorithm for internal synchronization. It also discusses external synchronization using protocols like NTP. Logical clocks like Lamport's clock and vector clocks can order events based on potential causality between them using logical timestamps.
This document compares and analyzes three clock synchronization algorithms: Cristian's algorithm, Berkeley algorithm, and Network Time Protocol (NTP). Cristian's algorithm uses a centralized time server model where processes synchronize with a single server. The Berkeley algorithm does not use an accurate time source; it obtains an average time from participating computers. NTP enables clients across the internet to synchronize to UTC despite message delays using statistical techniques. The document provides details on how each algorithm works, including message exchanges, calculating offsets, and handling delays. It discusses using Java RMI for implementing and testing the algorithms on distributed processes.
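The offset and delay calculations mentioned for NTP follow the standard formulas (RFC 5905 notation); a sketch with a worked example, where the numeric scenario below is illustrative:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP arithmetic: t1/t4 are the client's send/receive
    times, t2/t3 are the server's receive/send times.
    Returns (clock offset, round-trip delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client clock 5 s behind the server, 0.1 s one-way delay each way:
# client sends at t1=10.0 (client clock); server receives at t2=15.1
# and replies at t3=15.2 (server clock); client receives at t4=10.3.
offset, delay = ntp_offset_delay(10.0, 15.1, 15.2, 10.3)
# offset ≈ 5.0 s, delay ≈ 0.2 s
```

Averaging the two one-way estimates is what lets NTP cancel symmetric network delay; asymmetric delay shows up as error in the offset.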
Distributed system TimeNState-Tanenbaum.ppt (TantraNathjha1)
This document discusses various topics related to distributed systems including clock synchronization, global state, mutual exclusion algorithms, transactions, and concurrency control. It provides examples and explanations of concepts like Cristian's algorithm for clock synchronization, the bully algorithm for electing a coordinator, using timestamps or locking for concurrency control, and maintaining consistency across distributed transactions.
This document discusses various techniques for clock synchronization and maintaining consistency in distributed systems. It covers Cristian's algorithm, the Berkeley algorithm, Lamport timestamps, algorithms for mutual exclusion including a centralized, distributed, and token ring approach, and techniques for concurrency control including two-phase locking, pessimistic timestamp ordering, and ensuring serializability of transactions.
This document summarizes key concepts related to time and clocks in distributed systems. It discusses how physical clocks work, including obtaining accurate time from sources like atomic clocks and synchronizing clocks across distributed systems. It also covers logical clocks and how they are used to order events in a way that preserves causality. Other distributed computing topics summarized include mutual exclusion algorithms, elections, and atomic transactions including concurrency control methods like two-phase locking and optimistic concurrency control.
The document summarizes several distributed systems algorithms:
1. The Bully algorithm is described for electing a coordinator when the current one fails. It works by having processes with higher IDs take precedence in an election.
2. The Ring algorithm also elects a coordinator when one fails, but organizes processes in a virtual ring topology. An election message circulates the ring collecting process IDs, and the highest ID in the completed circuit becomes the new coordinator.
3. A centralized mutual exclusion algorithm uses a coordinator to grant exclusive access to critical sections to processes one at a time. Distributed algorithms avoid a single point of failure by having processes coordinate directly.
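The Bully election in point 1 can be reduced, for illustration, to a single function over a known set of live process IDs. This is a deliberate simplification: a real implementation exchanges ELECTION/OK/COORDINATOR messages and handles timeouts, and the function name here is my own.

```python
def bully_election(alive_ids, initiator):
    """Toy Bully election outcome: the initiator challenges all
    higher-ID processes; if any are alive, the highest live ID wins,
    otherwise the initiator declares itself coordinator."""
    higher = [pid for pid in alive_ids if pid > initiator]
    return max(higher) if higher else initiator

# Process 2 starts an election after the old coordinator (5) fails;
# processes 1-4 are still alive, so 4 becomes the new coordinator:
assert bully_election({1, 2, 3, 4}, initiator=2) == 4
# If 2 is itself the highest survivor, it wins:
assert bully_election({1, 2}, initiator=2) == 2
```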
1) Logical clocks are used to provide a total ordering of events in an asynchronous distributed system where there is no global clock. Each process maintains a local logical clock that is incremented for compute and send events. The clock is updated based on timestamp values piggybacked in received messages.
2) The happened-before relation defines a partial ordering of events based on causality. Logical clocks extend this to a total ordering, but the ordering is arbitrary and may not match the order perceived by users.
3) While logical clocks provide a total ordering, Lamport timestamps cannot distinguish concurrent events from causally related ones: C(a) < C(b) does not imply that a happened before b. Vector clocks were developed to address this limitation of Lamport clocks.
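The vector-clock fix in point 3 can be sketched with dictionaries mapping process IDs to counters; the function names are illustrative:

```python
def vc_increment(vc, pid):
    # Local or send event at process `pid`: bump its own entry.
    out = dict(vc)
    out[pid] = out.get(pid, 0) + 1
    return out

def vc_receive(local, received, pid):
    # Receive rule: element-wise maximum, then bump the receiver's entry.
    merged = {p: max(local.get(p, 0), received.get(p, 0))
              for p in set(local) | set(received)}
    return vc_increment(merged, pid)

def concurrent(a, b):
    # Two events are concurrent iff neither vector dominates the other.
    keys = set(a) | set(b)
    a_le_b = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    b_le_a = all(b.get(k, 0) <= a.get(k, 0) for k in keys)
    return not a_le_b and not b_le_a

a = vc_increment({}, "p1")   # independent event at p1
b = vc_increment({}, "p2")   # independent event at p2
c = vc_receive(b, a, "p2")   # p2 receives p1's event
```

Unlike Lamport timestamps, comparing two vectors reveals whether the events are ordered or concurrent: `a` and `b` above are concurrent, while `a` causally precedes `c`.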
A Deterministic Model Of Time For Distributed Systems (Jim Webb)
This paper proposes a new deterministic, logical time model for distributed systems. The time model defines a total order of all messages in the system based on two partial order relations: messages sent before other messages ("=>") and strong causality ("~"). This ensures messages are delivered in a way that respects real-world causality. The time model can be implemented using message timestamps and delayed delivery to ensure the total order is maintained. This provides application programmers with a simple, linear view of time across a distributed system.
The document discusses various techniques for synchronization in shared memory and distributed systems, including monitors, message passing, remote procedure calls (RPC), and logical clocks. It describes monitors as a synchronization primitive for shared memory that uses condition variables and mutual exclusion. For distributed synchronization it discusses message passing with channels/ports/mailboxes and RPC/rendezvous. Classical synchronization problems like the readers-writers problem and dining philosophers problem are presented along with their solutions. Logical clocks are introduced as a way to order events in a distributed system when physical clocks may be skewed.
A wireless local area network (WLAN) uses radio waves to transmit and receive data over the air, minimizing the need for wired connections. A basic wireless network can be set up between two PCs with wireless cards in a peer-to-peer configuration. Larger wireless networks use access points connected to a wired network to connect multiple clients and extend the range of the network. Factors like access point placement, use of extension points or directional antennas can help ensure clients stay connected as they roam across coverage areas. While wireless networks provide mobility and flexibility, their setup and maintenance require additional hardware, have security limitations, and may have lower bandwidth compared to wired networks.
WiMAX is a wireless technology standard that provides broadband connections over long distances, with mobility. It uses IEEE 802.16 standards for wireless metropolitan area networks (MAN). The WiMAX architecture includes mobile subscriber stations, an access service network that provides radio access, and a connectivity service network that provides IP connectivity. It allows for long range connectivity up to 30 miles, mobility, and accessibility connecting multiple base stations. Compared to WiFi, WiMAX provides wireless broadband for transferring data, voice, and video at higher rates between both fixed and mobile devices. Security is an important consideration for WiMAX implementations.
This document is from a Cisco Systems presentation that provides an introduction to computer networks. It discusses how networks are used in everyday life and small businesses. The objectives are to explain network topologies, devices, characteristics that support business communication, and trends affecting business networks. The document covers topics such as the global community, how networks impact daily life, network sizes and components, LANs and WANs, the Internet, trends like BYOD and cloud computing, and network security.
This document discusses consistency models in distributed systems with replication. It describes reasons for replication including reliability and performance. Various consistency models are covered, including: strict consistency where reads always return the most recent write; sequential consistency where operations appear in a consistent order across processes; weak consistency which enforces consistency on groups of operations; and release consistency which separates acquiring and releasing locks to selectively guard shared data. Client-centric models like eventual consistency are also discussed, where updates gradually propagate to all replicas.
This document provides an overview of naming in distributed systems. It discusses how names are used to identify and refer to entities and resources. A naming system implements a name space that organizes names in a hierarchical structure. Name resolution involves mapping names to addresses or identifiers. The implementation of naming services is often distributed across multiple name servers to improve scalability and availability. Examples of naming systems like the Domain Name System (DNS) are also discussed.
The document discusses processes and threads in distributed systems. It explains that threads allow multiple executions within the same process by sharing resources like memory. Distributed systems can use multithreaded clients and servers to improve performance. Code migration is also discussed, where a program's code and execution state can be moved between machines for better load balancing or to reduce communication costs. The challenges of migrating local resources that may be fixed to a particular machine are also covered.
This document discusses communication in distributed systems and introduces several communication models. It begins by explaining that communication in distributed systems is based on message passing rather than shared memory. It then reviews four common communication models: Remote Procedure Call (RPC), Remote Method Invocation (RMI), Message-Oriented Middleware (MOM), and Streams. The document also discusses layered protocols like OSI and TCP/IP, and delves into specifics of RPC including parameter passing and extended RPC models like asynchronous RPC and doors.
This document discusses different types of distributed system architectures. It covers layered architectures where components are organized in layers and requests flow from top to bottom. Object-based architectures connect components through remote procedure calls. Data-centered architectures involve all components communicating through a shared data repository. Event-based architectures use publish-subscribe systems where components communicate through event propagation. The document also describes centralized client-server architectures and decentralized peer-to-peer architectures. Client-server involves clients requesting from servers, while peer-to-peer has no central control and nodes can act as clients or servers.
The document discusses the history and concepts of distributed systems. It defines a distributed system as a collection of independent computers that appears as a single system to users. Distributed systems provide benefits like resource sharing, availability, scalability, and performance. However, they also introduce challenges around concurrency, security, partial failures, and heterogeneity. The document outlines common goals for distributed systems like transparency, openness, and scalability. It describes different approaches to scaling distributed systems through techniques like hiding latencies, distribution, and replication. Finally, it discusses key hardware concepts like multiprocessors and multicomputers as well as software approaches like distributed operating systems, network operating systems, and middleware.
Instagram has become one of the most popular social media platforms, allowing people to share photos, videos, and stories with their followers. Sometimes, though, you might want to view someone's story without them knowing.
Understanding User Behavior with Google Analytics.pdfSEO Article Boost
Unlocking the full potential of Google Analytics is crucial for understanding and optimizing your website’s performance. This guide dives deep into the essential aspects of Google Analytics, from analyzing traffic sources to understanding user demographics and tracking user engagement.
Traffic Sources Analysis:
Discover where your website traffic originates. By examining the Acquisition section, you can identify whether visitors come from organic search, paid campaigns, direct visits, social media, or referral links. This knowledge helps in refining marketing strategies and optimizing resource allocation.
User Demographics Insights:
Gain a comprehensive view of your audience by exploring demographic data in the Audience section. Understand age, gender, and interests to tailor your marketing strategies effectively. Leverage this information to create personalized content and improve user engagement and conversion rates.
Tracking User Engagement:
Learn how to measure user interaction with your site through key metrics like bounce rate, average session duration, and pages per session. Enhance user experience by analyzing engagement metrics and implementing strategies to keep visitors engaged.
Conversion Rate Optimization:
Understand the importance of conversion rates and how to track them using Google Analytics. Set up Goals, analyze conversion funnels, segment your audience, and employ A/B testing to optimize your website for higher conversions. Utilize ecommerce tracking and multi-channel funnels for a detailed view of your sales performance and marketing channel contributions.
Custom Reports and Dashboards:
Create custom reports and dashboards to visualize and interpret data relevant to your business goals. Use advanced filters, segments, and visualization options to gain deeper insights. Incorporate custom dimensions and metrics for tailored data analysis. Integrate external data sources to enrich your analytics and make well-informed decisions.
This guide is designed to help you harness the power of Google Analytics for making data-driven decisions that enhance website performance and achieve your digital marketing objectives. Whether you are looking to improve SEO, refine your social media strategy, or boost conversion rates, understanding and utilizing Google Analytics is essential for your success.
Gen Z and the marketplaces - let's translate their needsLaura Szabó
The product workshop focused on exploring the requirements of Generation Z in relation to marketplace dynamics. We delved into their specific needs, examined the specifics in their shopping preferences, and analyzed their preferred methods for accessing information and making purchases within a marketplace. Through the study of real-life cases , we tried to gain valuable insights into enhancing the marketplace experience for Generation Z.
The workshop was held on the DMA Conference in Vienna June 2024.
Ready to Unlock the Power of Blockchain!Toptal Tech
Imagine a world where data flows freely, yet remains secure. A world where trust is built into the fabric of every transaction. This is the promise of blockchain, a revolutionary technology poised to reshape our digital landscape.
Toptal Tech is at the forefront of this innovation, connecting you with the brightest minds in blockchain development. Together, we can unlock the potential of this transformative technology, building a future of transparency, security, and endless possibilities.
Discover the benefits of outsourcing SEO to Indiadavidjhones387
"Discover the benefits of outsourcing SEO to India! From cost-effective services and expert professionals to round-the-clock work advantages, learn how your business can achieve digital success with Indian SEO solutions.
Meet up Milano 14 _ Axpo Italia_ Migration from Mule3 (On-prem) to.pdfFlorence Consulting
Quattordicesimo Meetup di Milano, tenutosi a Milano il 23 Maggio 2024 dalle ore 17:00 alle ore 18:30 in presenza e da remoto.
Abbiamo parlato di come Axpo Italia S.p.A. ha ridotto il technical debt migrando le proprie APIs da Mule 3.9 a Mule 4.4 passando anche da on-premises a CloudHub 1.0.
Objectives of the Chapter

we discuss
the issue of synchronization based on time (actual time and relative ordering)
how a group of processes can appoint one process as a coordinator; this can be done by means of election algorithms
distributed mutual exclusion to protect shared resources from simultaneous access by multiple processes
distributed transactions, which also do the same thing, but optimize access through advanced concurrency control mechanisms
5.1 Clock Synchronization

in centralized systems, time can be decided unambiguously by a system call
e.g., process A gets the time tA at time t1, and process B gets the time tB at time t2, where t1 < t2; then tA is always less than tB
achieving agreement on time in distributed systems is difficult

Physical Clocks

is it possible to synchronize all clocks in a distributed system?
no; even if all computers initially start at the same time, they will get out of synch after some time, because the crystals in different computers run at slightly different frequencies, a phenomenon called clock skew
how is time actually measured?
earlier astronomically, based on the earth's rotation: the interval between two consecutive transits of the sun is a solar day; 1 solar second = 1/86400 of a solar day (24*3600 = 86400)
it was later discovered that the period of the earth's rotation is not constant; the earth is slowing down due to tidal friction and atmospheric drag; geologists believe that 300 million years ago there were about 400 days per year; the length of the year is not affected, only the days have become longer
astronomical timekeeping was later replaced by counting the transitions of the cesium 133 atom, leading to what is known as TAI (International Atomic Time)
UTC (Universal Coordinated Time) was introduced by inserting leap seconds whenever the discrepancy between TAI and solar time grows to 800 msec

Clock Synchronization Algorithms

two situations:
one machine has a receiver of UTC time; how do we synchronize all other machines to it?
no machine has a receiver and each machine keeps track of its own time; how do we synchronize them?
many algorithms have been proposed
Cristian's Algorithm

suitable when one machine has a UTC receiver; let us call this machine the time server
periodically, each machine sends a message to the time server asking for the current time
getting the current time from a time server

first approximation: the client sets its clock to the server's reply, CUTC
problems:
time must never run backward
solution: introduce the change gradually; if a timer is set to generate 100 interrupts per second, 10 msec would normally be added to the software clock at each interrupt; to adjust, add say 9 msec per interrupt (to slow the clock down) or 11 msec (to advance it gradually)
message propagation time is nonzero
solution: estimate it as (T1 - T0)/2, where T0 is the time the request was sent and T1 the time the reply arrived, and add this to CUTC
if we know how long the time server takes to handle the request, the estimate can be improved; if the interrupt-handling time is I, the new estimate is (T1 - T0 - I)/2
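The client-side correction above can be sketched in a few lines (a minimal sketch; the function name cristian_estimate and the sample timestamps are mine, not from the slides):

```python
def cristian_estimate(t0, t1, c_utc, interrupt_time=0.0):
    """Estimate the current server time on the client (Cristian's algorithm).

    t0: client clock value when the request was sent (T0)
    t1: client clock value when the reply arrived (T1)
    c_utc: the time reported by the time server (CUTC)
    interrupt_time: the server's known request-handling time I, if any
    """
    # one-way propagation delay estimated as half the round trip,
    # minus the time the server spent handling the request
    propagation = (t1 - t0 - interrupt_time) / 2
    return c_utc + propagation

# example: request sent at 10.000 s, reply back at 10.010 s, server says 25.000 s;
# the reply travelled ~5 msec, so the server's clock should now read ~25.005 s
est = cristian_estimate(10.000, 10.010, 25.000)
```

The returned value is what the client gradually steers its clock toward, rather than setting it in one jump, so that time never runs backward.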
The Berkeley Algorithm

suitable when no machine has a UTC receiver
in Cristian's algorithm the time server is passive: the other machines ask it for the time periodically
in the Berkeley algorithm the time server (the time daemon) is active: it polls every machine periodically for its clock value, calculates the average, and sends messages to all machines so that they adjust their clocks accordingly
the time daemon's own time must still be set manually from time to time

a) the time daemon asks all the other machines for their clock values
b) the machines answer how far ahead or behind the time daemon they are
c) the time daemon tells everyone how to adjust their clock
5.2 Logical Clocks

for some applications, it is sufficient if all machines agree on the same time; it need not be the real time; we need internal consistency of the clocks rather than closeness to real time
hence the concept of logical clocks
what matters is the order in which events occur

Lamport Timestamps

Lamport defined a relation called happens before
a → b, read as "a happens before b", means all processes agree that first event a occurs, then event b occurs
this relation can be observed directly in two situations:
if a and b are events in the same process, and a occurs before b, then a → b is true
if a is the event of a message being sent by one process, and b is the event of that message being received by another process, then a → b is also true
for every event a, we can assign a time value C(a) on which all processes agree, such that if a → b, then C(a) < C(b)
Lamport's algorithm for assigning times to events
consider three processes, each running on a different machine with its own clock
the solution follows the happens-before relation: each message carries the sending time; if the receiver's clock shows a value earlier than the time the message was sent, the receiver fast-forwards its clock to one more than the sending time

a) three processes, each with its own clock; the clocks run at different rates
b) Lamport's algorithm corrects the clocks
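The correction rule can be sketched as a tiny class (my own sketch, using the common formulation in which every event, including a receive, advances the clock, so the receiver's clock always ends up at least one more than the message timestamp):

```python
class LamportClock:
    """Minimal Lamport logical clock."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # a local event advances the clock
        self.time += 1
        return self.time

    def send(self):
        # a message carries the sender's clock value as its timestamp
        return self.tick()

    def receive(self, msg_timestamp):
        # fast-forward: the receive event must be later than the send event
        self.time = max(self.time, msg_timestamp) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
ts = p1.send()    # p1's clock becomes 1; the message carries timestamp 1
p2.receive(ts)    # p2's clock jumps to max(0, 1) + 1 = 2
```

With this rule, if a → b then C(a) < C(b), which is exactly the property the slide requires.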
example: Totally-Ordered Multicasting
assume a bank database replicated across several cities for performance; a query is always forwarded to the nearest copy
a customer with $1000 in her account is in city A and wants to add $100 to her account (update 1)
at the same time, a bank employee in city B initiates an update increasing all accounts by 1 percent interest (update 2)
both updates must be carried out at both copies of the database
due to communication delays, if the updates are performed in different orders at the two copies, the database becomes inconsistent (city A = $1111, city B = $1110)

updating a replicated database and leaving it in an inconsistent state
situations like these require a totally-ordered multicast, i.e., all messages are delivered in the same order to every receiver
Lamport timestamps can be used to implement totally-ordered multicast:
timestamp each message with the sender's logical clock
during multicasting, send each message also to the sender itself
assume messages from the same sender are received in the order they are sent and no message is lost
each receiver puts every incoming message in a local queue ordered by timestamp
each receiver multicasts an acknowledgement for every message it receives
then all processes eventually have the same copy of the local queue
a process may deliver a queued message to the application it is running only when that message is at the head of the queue and has been acknowledged by every other process
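The receiver-side queue can be sketched as below. This is a much-simplified sketch: the class name is mine, ties on the timestamp are broken by sender id so all processes order equal timestamps identically, and for brevity a message is delivered once acks from all n processes (including the sender) are in, a slight simplification of "each other process":

```python
import heapq

class TOMulticastQueue:
    """Receiver-side queue for a totally-ordered multicast (sketch)."""

    def __init__(self, n_processes):
        self.n = n_processes
        self.queue = []    # heap of (timestamp, sender, payload)
        self.acks = {}     # (timestamp, sender) -> set of processes that acked

    def on_message(self, ts, sender, payload):
        # every incoming message is queued, ordered by (timestamp, sender)
        heapq.heappush(self.queue, (ts, sender, payload))
        self.acks.setdefault((ts, sender), set())

    def on_ack(self, ts, sender, acker):
        self.acks.setdefault((ts, sender), set()).add(acker)

    def deliverable(self):
        """Pop and return the messages that may now go to the application:
        only a fully-acknowledged message at the head of the queue."""
        out = []
        while self.queue:
            ts, sender, payload = self.queue[0]
            if len(self.acks[(ts, sender)]) == self.n:
                heapq.heappop(self.queue)
                out.append(payload)
            else:
                break
        return out
```

Because every process runs the same rule on the same (timestamp, sender) order, the two bank updates are applied in the same order at both replicas.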
5.3 Election Algorithms

there are situations where one process must act as a coordinator, initiator, or perform some other special task
assume that
each process has a unique number
every process knows the process number of every other process, but not the state of each process (which ones are currently running and which ones are down)
election algorithms attempt to locate the running process with the highest process number
two algorithms: the Bully algorithm and the Ring algorithm
The Bully Algorithm (the biggest person wins)

when a process (say P4) notices that the coordinator is no longer responding to requests, it initiates an election as follows:
1. P4 sends an ELECTION message to all processes with higher numbers (P5, P6, P7)
if a process gets an ELECTION message from one of its lower-numbered colleagues, it sends an OK message back to the sender and holds an election of its own
2. if no one responds, P4 wins the election and becomes the coordinator
3. if one of the higher-ups answers, that process takes over and P4's job is done
a) Process 4 holds an election
b) Processes 5 and 6 respond, telling 4 to stop
c) Now 5 and 6 each hold an election
d) Process 6 tells 5 to stop
e) Process 6 wins and tells everyone

the winner sends a message to all processes announcing that it is the new coordinator
if a process that was previously down comes back up, it holds an election
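The outcome of the message exchange above can be sketched as a simple loop (my own sketch; it models only who ends up winning, not the actual message passing or timeouts):

```python
def bully_election(initiator, alive):
    """Sketch of the Bully algorithm outcome among numbered processes.

    initiator: the process that noticed the coordinator is down
    alive: set of process numbers currently running
    ELECTION messages go to higher-numbered processes; any live higher
    process answers OK and takes over, so the highest live number wins.
    """
    candidate = initiator
    while True:
        higher = [p for p in alive if p > candidate]
        if not higher:
            return candidate   # no higher process responds: candidate wins
        # a higher-up answered OK and holds its own election; repeating the
        # argument, the election effectively moves to the highest responder
        candidate = max(higher)

# processes 0..7, where 7 (the old coordinator) has crashed; 4 starts
winner = bully_election(4, alive={0, 1, 2, 3, 4, 5, 6})
# 5 and 6 respond; 6 gets no answer from 7, wins, and announces itself
```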
The Ring Algorithm

based on the use of a ring
assume the processes are physically or logically ordered, so that each process knows who its successor is
when a process notices that the coordinator is not functioning:
it builds an ELECTION message containing its own process number and sends it to its successor
if the successor is down, the sender skips to the next member along the ring, and so on, until a running process is found
at each step, the receiving process adds its own process number to the list in the message, making itself a candidate
eventually the message gets back to the originating process, which recognizes its own number in the list; the process with the highest number in the list is elected coordinator, the message type is changed to COORDINATOR, and it is circulated once more to announce who the coordinator is and who the members of the ring are

election algorithm using a ring
e.g., processes 2 and 5 simultaneously discover that the previous coordinator has crashed
each builds an ELECTION message, and both circulations elect the same coordinator
although the election is run twice, this causes no problem; only a little extra bandwidth is consumed
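One full circulation of the ELECTION message can be sketched as follows (a sketch with my own naming; it models a single circulation and skips crashed successors, as described above):

```python
def ring_election(initiator, alive, ring_size):
    """Sketch of one circulation of the Ring election algorithm.

    Processes 0 .. ring_size-1 form a logical ring; crashed ones (not in
    `alive`) are skipped. The ELECTION message accumulates the numbers of
    the live processes it visits until it returns to the initiator; the
    highest number in the list becomes coordinator.
    """
    candidates = [initiator]
    p = (initiator + 1) % ring_size
    while p != initiator:
        if p in alive:              # a crashed successor is simply skipped
            candidates.append(p)
        p = (p + 1) % ring_size
    return max(candidates), candidates

# ring of 8 processes, coordinator 7 has crashed, process 5 discovers it
coordinator, members = ring_election(5, alive={0, 1, 2, 3, 4, 5, 6}, ring_size=8)
# the message collects 5, 6, 0, 1, 2, 3, 4; process 6 is elected
```

The COORDINATOR message that follows would carry the same `members` list, so every process also learns the current ring membership.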
5.5 Mutual Exclusion

when a process has to read or update shared data (quite often to use a shared resource such as a printer, a file, etc.), it first enters a critical region to achieve mutual exclusion
in single-processor systems, critical regions are protected using semaphores, monitors, and similar constructs
how are critical regions and mutual exclusion implemented in distributed systems?
three algorithms: centralized, distributed, and token ring

A Centralized Algorithm

a coordinator is appointed and is in charge of granting permissions
three kinds of messages are required: request, grant, release
a) process 1 asks the coordinator for permission to enter a critical region; permission is granted
b) process 2 then asks permission to enter the same critical region; the coordinator does not reply (it could also send a "no" message; this is implementation dependent) but queues process 2; process 2 blocks
c) when process 1 exits the critical region, it tells the coordinator, which then replies to 2
the algorithm
guarantees mutual exclusion
is fair: requests are granted first come, first served, so there is no starvation
is easy to implement; each entry requires only three messages: request, grant, release
shortcoming: a failure of the coordinator crashes the whole system (especially if processes block after sending a request); the coordinator also becomes a performance bottleneck
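The coordinator's behavior can be sketched in a few lines (my own sketch; the `granted` list stands in for the grant messages the coordinator would actually send):

```python
from collections import deque

class Coordinator:
    """Sketch of the centralized mutual-exclusion coordinator.

    Three message types: request, grant, release. If the critical region
    is free the coordinator grants at once; otherwise the requester is
    queued and granted when the current holder releases (first come,
    first served).
    """

    def __init__(self):
        self.holder = None
        self.waiting = deque()
        self.granted = []   # log of grant messages, for illustration

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            self.granted.append(pid)   # send grant immediately
        else:
            self.waiting.append(pid)   # no reply: the requester blocks

    def release(self, pid):
        assert pid == self.holder
        self.holder = self.waiting.popleft() if self.waiting else None
        if self.holder is not None:
            self.granted.append(self.holder)   # grant to the next in line

c = Coordinator()
c.request(1)   # granted immediately
c.request(2)   # queued; process 2 blocks
c.release(1)   # the coordinator now grants the region to process 2
```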
A Distributed Algorithm

assume that there is a total ordering of all events in the system, for instance by using Lamport timestamps
when a process wants to enter a critical region, it builds a message (containing the name of the critical region, its process number, and the current time) and sends it to all processes, including itself
message passing is assumed to be reliable; i.e., every message is acknowledged
when a process receives a request message:
1. if the receiver is not in the critical region and does not want to enter it, it sends back an OK message to the sender
2. if the receiver is already in the critical region, it does not reply; instead, it queues the request
3. if the receiver wants to enter the critical region but has not yet done so, it compares the timestamp of its own request message with that of the incoming one; the lowest timestamp wins: if the incoming message has the lower timestamp, the receiver sends an OK message; otherwise it queues the incoming request and sends nothing
when the sender has received an OK from all other processes, it may enter the critical region
when it finishes, it sends an OK message to all processes in its queue
is it possible for two processes to enter the critical region at the same time if they initiate their requests simultaneously? NO
a) two processes (0 and 2) want to enter the same critical region at the same moment
b) process 0 has the lowest timestamp, so it wins
c) when process 0 is done, it sends an OK message, so process 2 can now enter its critical region
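The three-case decision rule of the distributed algorithm can be sketched as a single function (a sketch with my own naming; note the (timestamp, pid) tie-break, which the timestamps from a Lamport clock need in order to be a total order):

```python
def on_request(my_state, my_timestamp, my_pid, req_timestamp, req_pid):
    """Receiver's decision rule in the distributed mutual-exclusion
    algorithm. Returns 'OK' (reply at once) or 'QUEUE' (defer the reply).

    my_state: 'RELEASED' (not interested), 'HELD' (inside the region),
              or 'WANTED' (has a pending request of its own)
    my_timestamp/my_pid: timestamp and id of our own pending request
    """
    if my_state == 'RELEASED':
        return 'OK'       # case 1: not interested, let the sender in
    if my_state == 'HELD':
        return 'QUEUE'    # case 2: using the region, defer the reply
    # case 3 (WANTED): the lowest (timestamp, pid) wins
    if (req_timestamp, req_pid) < (my_timestamp, my_pid):
        return 'OK'
    return 'QUEUE'

# processes 0 and 2 both want in; 0's request carries the lower timestamp
assert on_request('WANTED', 8, 2, 6, 0) == 'OK'      # 2 lets 0 go first
assert on_request('WANTED', 6, 0, 8, 2) == 'QUEUE'   # 0 defers 2's request
```

A process enters the region once every other process has answered 'OK', and releases its queued requests when it leaves.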
mutual exclusion is guaranteed
the total number of messages per entry into a critical region rises to 2(n - 1), where n is the number of processes
there is no single point of failure; unfortunately, there are now n points of failure, and every process is also a potential bottleneck
hence, this algorithm is slower, more complicated, more expensive, and less robust than the centralized one; but it shows that a fully distributed algorithm is possible

A Token Ring Algorithm

assume a bus network (e.g., Ethernet) with no inherent ordering of processes; a logical ring is constructed in software
a) an unordered group of processes on a network
b) a logical ring constructed in software

when the ring is initialized, process 0 is given a token
the token circulates around the ring
a process may enter a critical region only when it holds the token
it releases the token when it leaves the critical region
it must not enter a second critical region using the same token
mutual exclusion is guaranteed and there is no starvation
problems:
if the token is lost, it has to be regenerated, but detecting that it is lost is difficult, since the time between successive appearances of the token is unbounded
a process may crash while holding the token, or while the token is being passed to it
the algorithm can be modified so that receipt of the token is acknowledged, allowing the token to bypass a crashed process; in this case every process has to maintain the current ring configuration
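The token discipline can be sketched minimally (my own sketch; real implementations pass the token in a network message rather than sharing a variable):

```python
class TokenRing:
    """Sketch of token-based mutual exclusion on a logical ring.

    The token circulates around the ring; a process may enter its
    critical region only while it holds the token, and passes the token
    to its successor when done (or when it does not need it).
    """

    def __init__(self, n):
        self.n = n
        self.token_at = 0   # process 0 is given the token at initialization

    def pass_token(self):
        self.token_at = (self.token_at + 1) % self.n

    def try_enter(self, pid):
        # only the current token holder may enter the critical region
        return pid == self.token_at

ring = TokenRing(4)
assert ring.try_enter(0) and not ring.try_enter(2)
ring.pass_token()
ring.pass_token()
assert ring.try_enter(2)   # the token has reached process 2
```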
A comparison of the three algorithms (assuming only point-to-point communication channels are used):

Algorithm   | Messages per entry/exit | Delay before entry (in message times) | Problems
Centralized | 3 (most efficient)      | 2                                     | coordinator crash
Distributed | 2(n - 1)                | 2(n - 1)                              | crash of any process
Token ring  | 1 to ∞                  | 0 to n - 1                            | lost token, process crash
5.6 Distributed Transactions and Concurrency Control

transactions are also used to protect shared resources, usually data (similar to mutual exclusion)
but transactions do much more: they allow a process to access and modify multiple data items as a single atomic operation
if a process backs out halfway through a transaction, everything must be restored to the state just before the transaction started (all or nothing)

The Transaction Model

the model for transactions comes from the world of business
a supplier and a retailer negotiate on
price
delivery date
quality
etc.
until the deal is concluded, they can continue negotiating, or either of them can terminate the negotiation
but once they have reached an agreement, they are bound by law to carry out their part of the deal
transactions between processes are similar to this scenario
e.g., consider the following banking operation:
withdraw an amount x from account 1
deposit the amount x into account 2
what happens if there is a failure after the first operation has been carried out?
group the two operations into one transaction: either both are carried out or neither is
we need a way to roll back when a transaction does not complete
special primitives are required to program transactions, supplied either by the underlying distributed system or by the language runtime system
the exact list of primitives depends on the type of application

Primitive         | Description
BEGIN_TRANSACTION | Mark the start of a transaction
END_TRANSACTION   | Terminate the transaction and try to commit
ABORT_TRANSACTION | Kill the transaction and restore the old values
READ              | Read data from a file, a table, or otherwise
WRITE             | Write data to a file, a table, or otherwise
e.g., reserving a seat from Bahir Dar (BD) to Dire Dawa (DD) via Addis Ababa:

a) transaction to reserve two flights commits

BEGIN_TRANSACTION
    reserve BD -> Addis;
    reserve Addis -> DD;
END_TRANSACTION

b) transaction aborts when the second flight is full

BEGIN_TRANSACTION
    reserve BD -> Addis;
    reserve Addis -> DD;    full =>
ABORT_TRANSACTION
properties of transactions, often referred to as ACID:
1. Atomic: to the outside world, the transaction happens indivisibly; a transaction either happens completely or not at all; intermediate states are not seen by other processes
2. Consistent: the transaction does not violate system invariants; e.g., in an internal transfer within a bank, the total amount of money in the bank must be the same after the transfer as it was before (the law of conservation of money); an invariant may be violated for a brief period during the transaction, but the violation is not visible to other processes
3. Isolated (or Serializable): concurrent transactions do not interfere with each other; if two or more transactions run at the same time, the final result must look as though all transactions had run sequentially in some order
4. Durable: once a transaction commits, the changes are permanent
Classification of Transactions

a transaction can be flat, nested, or distributed

Flat Transaction
consists of a series of operations that together satisfy the ACID properties
simple and widely used, but with some limitations:
it does not allow partial results to be committed or aborted, i.e., atomicity is also partly a weakness
in our airline reservation example, we may want to keep the first reservation and find an alternative for the last leg
some transactions may take too much time

Nested Transaction
constructed from a number of subtransactions; the transaction is logically decomposed into a hierarchy of subtransactions
the top-level transaction forks off children that run in parallel, possibly on different machines, to gain performance or for programming simplicity
each child may in turn execute one or more subtransactions
permanence (durability) applies only to the top-level transaction; a commit by a child is only provisional, and if the parent aborts, the child's work must be undone

Distributed Transaction
a flat transaction that operates on data distributed across multiple machines
problem: separate algorithms are needed to handle the locking of data and the committing of the entire transaction
Implementation

for simplicity, consider transactions on a file system
if each process executing a transaction simply updates the files it uses in place, transactions will not be atomic, and changes will not vanish if the transaction aborts
so some other implementation is needed
two methods: private workspace and write-ahead log

Private Workspace
a process is given a private workspace at the instant it begins a transaction
the workspace contains (copies of) all the files to which it has access
all its reads and writes go to the private workspace
the actual files are not affected until the process commits
Write-ahead Log
files are actually modified in place, but before any block is changed, a record is written to a log telling
which transaction is making the change
which file and block are being changed, and
what the old and new values are
only after the log record has been written successfully is the change made to the file
if the transaction aborts, the log is used to restore the original state: the log is processed from the end backwards, undoing each change; this is called rollback
example
a) a transaction
b) - d) the log before each statement is executed

(a) the transaction:
x = 0;
y = 0;
BEGIN_TRANSACTION;
x = x + 1;
y = y + 2;
x = y * y;
END_TRANSACTION;

(b) log after x = x + 1:   [x = 0/1]
(c) log after y = y + 2:   [x = 0/1] [y = 0/2]
(d) log after x = y * y:   [x = 0/1] [y = 0/2] [x = 1/4]
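The example can be run mechanically: log the old/new pair first, then update in place, and roll back by replaying the log backwards (a sketch with my own function names, reproducing exactly the transaction above):

```python
def run_with_log(state):
    """Run the slide's transaction against `state` with a write-ahead log.
    Each log entry records (variable, old value, new value) BEFORE the
    in-place update is made."""
    log = []

    def write(var, new_value):
        log.append((var, state[var], new_value))   # write the log record...
        state[var] = new_value                     # ...then modify in place

    write('x', state['x'] + 1)   # [x = 0/1]
    write('y', state['y'] + 2)   # [y = 0/2]
    write('x', state['y'] * state['y'])   # [x = 1/4]
    return log

def rollback(state, log):
    """Undo an aborted transaction: replay the log from the end backwards,
    restoring each old value."""
    for var, old, _new in reversed(log):
        state[var] = old

state = {'x': 0, 'y': 0}
log = run_with_log(state)   # log: [('x', 0, 1), ('y', 0, 2), ('x', 1, 4)]
rollback(state, log)        # state is back to {'x': 0, 'y': 0}
```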
Concurrency Control

concurrency control addresses how to properly control the execution of concurrent transactions, i.e., transactions that execute at the same time on shared data
the goal is to allow several transactions to execute simultaneously while leaving the collection of data items (such as database records or files) in a consistent state
this is achieved by giving transactions access to data items in a specific order, such that the final result is the same as if all transactions had run sequentially
for concurrency control, three managers can be organized in a layered fashion (non-distributed case):
Transaction manager
responsible for guaranteeing atomicity of transactions
processes transaction primitives by transforming them into scheduling requests for the scheduler

Scheduler
responsible for properly controlling concurrency
determines which transaction is allowed to pass a read or write operation to the data manager, and at which time
the scheduling may be based on locks or on timestamps

Data manager
performs the actual read and write operations
is not concerned with which transaction is doing the read or write

general organization of the managers for handling distributed transactions
the above can be adapted to the distributed case: each site has its own scheduler and data manager, with a common transaction manager coordinating them
the aim of concurrency control, then, is to properly schedule conflicting operations
two operations conflict if they operate on the same data item and at least one of them is a write operation (a read-write conflict or a write-write conflict)
how are read and write operations synchronized? equivalently, how are concurrency control algorithms classified?
synchronization can take place either through mutual exclusion mechanisms on shared data (locking) or by explicitly ordering operations using timestamps
two approaches to concurrency control:
pessimistic approach: if something can go wrong, it will; conflicts are resolved before they happen; examples are two-phase locking (2PL, and strict 2PL) and pessimistic timestamp ordering
optimistic approach: assume nothing will go wrong; synchronization takes place at the end of a transaction, and one or more transactions are forced to abort if conflicts have occurred; an example is optimistic timestamp ordering
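The pessimistic locking discipline (2PL) mentioned above can be sketched as follows. This is my own simplified sketch: a single lock type (no shared/exclusive distinction), a plain dict as the lock table, and the strict-2PL rule that all locks are released together at commit or abort time:

```python
class TwoPhaseLockError(Exception):
    pass

class Transaction2PL:
    """Sketch of (strict) two-phase locking for one transaction:
    locks may only be acquired in the growing phase; once any lock is
    released, no new lock may be acquired."""

    def __init__(self, lock_table, tid):
        self.lock_table = lock_table   # shared: item -> owning transaction id
        self.tid = tid
        self.locks = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise TwoPhaseLockError('growing phase is over: no new locks (2PL)')
        owner = self.lock_table.get(item)
        if owner is not None and owner != self.tid:
            # pessimistic: the conflict is resolved BEFORE it can do harm
            raise TwoPhaseLockError(f'{item} is locked by transaction {owner}')
        self.lock_table[item] = self.tid
        self.locks.add(item)

    def unlock_all(self):
        # strict 2PL: release everything at commit/abort time
        self.shrinking = True
        for item in self.locks:
            del self.lock_table[item]
        self.locks.clear()

table = {}
t1 = Transaction2PL(table, 'T1')
t2 = Transaction2PL(table, 'T2')
t1.lock('account-1')
try:
    t2.lock('account-1')          # write-write conflict, refused up front
except TwoPhaseLockError:
    conflict_detected = True
t1.unlock_all()                   # T1 commits and enters its shrinking phase
t2.lock('account-1')              # now T2 may proceed
```

In a real scheduler T2 would block and wait rather than raise, but the essential 2PL invariant, no lock acquired after the first release, is the same.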