The document summarizes two algorithms for solving the Byzantine agreement problem:
1) The Lamport-Shostak-Pease algorithm uses oral messages to recursively achieve agreement among processors in the presence of faulty processors, provided the number of faulty processors does not exceed one-third of the total.
2) The algorithm of Dolev et al. does not depend on the behavior of faulty processors, requires 2m+3 rounds, and achieves agreement by having processors broadcast messages to confirm values until agreement is reached.
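The recursive structure of the oral-messages algorithm OM(m) can be sketched as a simulation. This is a minimal sketch with 0/1 values; the particular traitor behaviour below is this example's own assumption, since a real traitor may act arbitrarily.

```python
from collections import Counter

def send(src, v, dst, traitors):
    """Deliver v from src to dst; a traitorous src lies to odd-numbered
    destinations (one arbitrary malicious behaviour, for the simulation)."""
    return (1 - v) if (src in traitors and dst % 2) else v

def om(commander, value, lieutenants, m, traitors):
    """Lamport-Shostak-Pease OM(m) over oral messages; values are 0/1.
    Returns the value each lieutenant decides on. Tolerates m traitors
    when the total number of processors is at least 3m + 1."""
    # Step 1: the commander sends its value to every lieutenant.
    received = {lt: send(commander, value, lt, traitors) for lt in lieutenants}
    if m == 0:
        return received
    # Step 2: each lieutenant acts as commander in OM(m-1), relaying
    # the value it received to the other lieutenants.
    relayed = {lt: om(lt, received[lt],
                      [o for o in lieutenants if o != lt], m - 1, traitors)
               for lt in lieutenants}
    # Step 3: each lieutenant takes the majority of its own value and
    # the values the others claim to have received.
    return {lt: Counter([received[lt]] +
                        [relayed[o][lt] for o in lieutenants if o != lt]
                        ).most_common(1)[0][0]
            for lt in lieutenants}
```

With four processors and one traitor (n = 3m + 1 for m = 1), the loyal lieutenants agree on the loyal commander's value; with a traitorous commander they still agree with each other.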
Agreement Protocols, Distributed Resource Management: Issues in distributed File Systems, Mechanism for building distributed file systems, Design issues in Distributed Shared Memory, Algorithm for Implementation of Distributed Shared Memory.
Monitors provide mutual exclusion and condition variables to synchronize processes. A monitor consists of private variables and procedures, public procedures that act as system calls, and initialization procedures. Condition variables allow processes to wait for events within a monitor. When signaling a condition variable, either the signaling process waits or the released process waits, depending on whether it uses the Hoare type or Mesa type.
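Python's `threading.Condition` has Mesa semantics: a signalled waiter merely becomes runnable, so the predicate must be re-checked in a loop rather than a bare `if`. A minimal one-slot mailbox monitor (class and method names are illustrative):

```python
import threading

class Mailbox:
    """One-slot mailbox built as a monitor: one lock, two condition variables."""
    def __init__(self):
        self._lock = threading.Lock()
        self._nonempty = threading.Condition(self._lock)
        self._nonfull = threading.Condition(self._lock)
        self._item = None
        self._full = False

    def put(self, item):
        with self._lock:
            while self._full:            # Mesa semantics: loop, never `if`
                self._nonfull.wait()
            self._item, self._full = item, True
            self._nonempty.notify()      # wake one waiting consumer

    def get(self):
        with self._lock:
            while not self._full:
                self._nonempty.wait()
            item, self._item, self._full = self._item, None, False
            self._nonfull.notify()       # wake one waiting producer
            return item
```

Under Hoare semantics the `while` loops could be plain `if`s, because the signalled process would run immediately with the condition guaranteed to hold.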
Logical clocks assign sequence numbers to distributed system events to determine causality without a global clock. Lamport's algorithm uses logical clocks to impose a partial ordering on events. Vector clocks extend this to also detect concurrent events that are not causally related, providing a full happened-before relation between all events. Each process maintains a vector clock that is incremented after local events and updated when receiving messages from other processes.
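A minimal vector-clock sketch in Python (class and function names are this example's own):

```python
class VectorClock:
    """Vector clock for process `pid` among `n` processes."""
    def __init__(self, pid, n):
        self.pid, self.v = pid, [0] * n

    def local_event(self):
        self.v[self.pid] += 1            # tick own component on each event

    def send(self):
        self.local_event()
        return list(self.v)              # timestamp attached to the message

    def receive(self, ts):
        # Component-wise max with the incoming timestamp, then tick own entry.
        self.v = [max(a, b) for a, b in zip(self.v, ts)]
        self.v[self.pid] += 1

def happened_before(a, b):
    """True iff the event stamped a causally precedes the event stamped b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    return not happened_before(a, b) and not happened_before(b, a)
```

Unlike Lamport clocks, comparing two vector timestamps fully decides the relation: one happened before the other, or they are concurrent.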
This document discusses distributed systems applications in real life, including three key areas: distributed rendering in computer graphics, peer-to-peer networks, and massively multiplayer online gaming. It describes how distributed rendering parallelizes graphics processing across multiple computers. Peer-to-peer networks are defined as decentralized networks where nodes act as both suppliers and consumers of resources. Examples of peer-to-peer applications include file sharing and content delivery networks. The document also outlines the challenges of designing multiplayer online games using a distributed architecture rather than a traditional client-server model.
The document discusses various algorithms for achieving distributed mutual exclusion and process synchronization in distributed systems. It covers centralized, token ring, Ricart-Agrawala, Lamport, and decentralized algorithms. It also discusses election algorithms for selecting a coordinator process, including the Bully algorithm. The key techniques discussed are using logical clocks, message passing, and quorums to achieve mutual exclusion without a single point of failure.
Operating System 18: Process Creation and Termination, by Vaibhav Khanna
Information associated with each process (also called the task control block):
- Process state – running, waiting, etc.
- Program counter – location of the next instruction to execute
- CPU registers – contents of all process-centric registers
- CPU-scheduling information – priorities, scheduling queue pointers
- Memory-management information – memory allocated to the process
- Accounting information – CPU time used, clock time elapsed since start, time limits
- I/O status information – I/O devices allocated to the process, list of open files
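The fields above can be modelled as a simple data structure; a context switch then amounts to saving one process's context into its PCB and restoring another's. A sketch (field and function names are illustrative, not taken from any real kernel):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Process Control Block: one illustrative field per category above."""
    pid: int
    state: str = "new"               # new / ready / running / waiting / terminated
    program_counter: int = 0         # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                # CPU-scheduling information
    memory_limits: tuple = (0, 0)    # base/limit of allocated memory
    cpu_time_used: float = 0.0       # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

def context_switch(old: PCB, new: PCB, current_pc: int, current_regs: dict) -> int:
    """Save the running process's context into its PCB, restore the next one's,
    and return the program counter to resume from."""
    old.program_counter, old.registers = current_pc, dict(current_regs)
    old.state, new.state = "ready", "running"
    return new.program_counter
```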
The network layer is responsible for routing packets from the source to the destination. The routing algorithm is the piece of software that decides where a packet goes next (e.g., which output line, or which node on a broadcast channel). For connectionless networks, the routing decision is made for each datagram. For connection-oriented networks, the decision is made once, at circuit setup time.
Routing Issues
The routing algorithm must deal with the following issues:
Correctness and simplicity: networks are never taken down; individual parts (e.g., links, routers) may fail, but the whole network should not.
Stability: if a link or router fails, how much time elapses before the remaining routers recognize the topology change? (Some never do.)
Fairness and optimality: an inherently intractable problem. Definition of optimality usually doesn't consider fairness. Do we want to maximize channel usage? Minimize average delay?
When we look at routing in detail, we'll consider both adaptive algorithms--those that take current traffic and topology into consideration--and nonadaptive algorithms.
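A nonadaptive shortest-path computation of the kind a link-state router performs can be sketched with Dijkstra's algorithm, which also yields the first hop toward each destination (the router's forwarding table). This sketch assumes non-negative link costs; the graph encoding is this example's own.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra over {node: {neighbour: cost}} with non-negative costs.
    Returns (dist, next_hop) so the router at `src` can forward packets."""
    dist = {src: 0}
    next_hop = {}
    pq = [(0, src, None)]            # (cost so far, node, first hop from src)
    visited = set()
    while pq:
        d, u, hop = heapq.heappop(pq)
        if u in visited:
            continue                 # stale queue entry
        visited.add(u)
        if hop is not None:
            next_hop[u] = hop
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                # The first hop is v itself when leaving src, else inherited.
                heapq.heappush(pq, (nd, v, v if u == src else hop))
    return dist, next_hop
```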
This document discusses deadlocks, including the four conditions required for a deadlock, methods to avoid deadlocks like using safe states and Banker's Algorithm, ways to detect deadlocks using wait-for graphs and detection algorithms, and approaches to recover from deadlocks such as terminating processes or preempting resources.
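The safety check at the core of Banker's Algorithm can be sketched as follows (a minimal version; the matrices hold per-resource counts, and names are illustrative):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: True iff some ordering lets every process
    finish with the currently available resources."""
    n = len(allocation)
    # Need = Maximum - Allocation, per process and resource type.
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # A process can finish if its remaining need fits in `work`;
            # when it finishes, it returns its allocation.
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```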
Distributed shared memory (DSM) provides processes with a shared address space across distributed memory systems. DSM exists only virtually, exposed through primitives like read and write operations. It gives the illusion of physically shared memory while allowing loosely coupled distributed systems to share memory. DSM refers to applying this shared memory paradigm to distributed memory systems connected by a communication network. Each node has its own CPUs and memory; blocks of shared memory can be cached locally and migrated on demand between nodes to maintain consistency.
1. There are two main approaches to distributed mutual exclusion - token-based and non-token based. Token based approaches use a shared token to allow only one process access at a time, while non-token approaches use message passing to determine access order.
2. A common token based algorithm uses a centralized coordinator process that grants access to the requesting process. Ring-based algorithms pass a token around a logical ring, allowing the process holding it to enter the critical section.
3. Lamport's non-token algorithm uses message passing of requests and timestamps to build identical request queues at each process, allowing the process at the head of the queue to enter the critical section. The Ricart-Agrawala algorithm refines this by merging the release and reply messages, so that each critical-section entry requires only 2(N-1) messages.
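The reply rule at the heart of the Ricart-Agrawala algorithm can be sketched as a single decision function. Requests are totally ordered by (Lamport timestamp, process id) pairs, with the process id breaking ties; the state names below are this sketch's own.

```python
def should_defer(my_state, my_request, incoming):
    """Ricart-Agrawala reply rule for a process receiving REQUEST `incoming`.
    `my_request` and `incoming` are (lamport_timestamp, process_id) pairs;
    tuple comparison gives the required total order. Return True to defer
    the reply until we leave the critical section, False to reply at once."""
    if my_state == "IN_CS":
        return True                  # never reply while inside the CS
    if my_state == "REQUESTING" and my_request < incoming:
        return True                  # our request has priority; defer
    return False                     # idle, or the other request wins
```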
This document discusses resource management techniques in distributed systems. It covers three main scheduling techniques: task assignment approach, load balancing approach, and load sharing approach. It also outlines desirable features of good global scheduling algorithms such as having no a priori knowledge about processes, being dynamic in nature, having quick decision-making capability, balancing system performance and scheduling overhead, stability, scalability, fault tolerance, and fairness of service. Finally, it discusses policies for load estimation, process transfer, state information exchange, location, priority assignment, and migration limiting that distributed load balancing algorithms employ.
Deadlocks: an unconditional waiting situation in an operating system. This concept should be understood well before going deeper into operating systems. This PPT explains how deadlocks occur and how we can detect, avoid, and prevent them in operating systems.
This document discusses multiprogramming and time sharing in operating systems. It defines multiprogramming as allowing multiple programs to execute concurrently by assigning pending work to idle processors and I/O devices. Time sharing extends multiprogramming by rapidly switching between programs so that each program executes for a fixed time quantum, giving users the impression that the entire system is dedicated to their use. The key aspects covered are the concepts of processes, CPU scheduling, and how multiprogramming and time sharing improve resource utilization.
The transport layer provides efficient, reliable, and cost-effective process-to-process delivery by making use of network layer services. The transport layer works through transport entities to achieve its goal of reliable delivery between application processes. It provides an interface for applications to access its services.
A comprehensive lecture on join ordering of fragment queries, a topic in DDBMS; the content is drawn from multiple sources, including Google, a textbook, and class lectures.
Prepared by Ifzal Hussain, a CS student at Shaheed Benazir Bhutto University Sheringal, Dir Upper, KPK, Pakistan.
CPU scheduling allows processes to share the CPU by pausing execution of some processes to allow others to run. The scheduler selects which process in memory runs on the CPU. There are four types of scheduling decisions: when a process pauses for I/O, switches from running to ready, finishes I/O, or terminates. Scheduling can be preemptive, where a higher priority process interrupts a running one, or non-preemptive. Common algorithms are first come first serve, shortest job first, priority, and round robin. Real-time scheduling aims to process data without delays and ensures the highest priority tasks run first.
Operating Systems: Process Scheduling Algorithms, by Sathish Sak
The document discusses various CPU scheduling algorithms used in operating systems including first-come, first-served (FCFS), round robin (RR), shortest job first (SJF), and shortest remaining time first (SRTF). It explains the assumptions, goals, and tradeoffs of each algorithm such as minimizing response time, maximizing throughput, and ensuring fairness. Examples are provided to illustrate how each algorithm works and its performance compared to others under different conditions involving job lengths. Predicting future job lengths is also discussed as it can impact the performance of algorithms like SRTF.
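FCFS and round robin can be compared with a small simulation over CPU bursts. This sketch assumes all processes arrive at time 0; the function names are illustrative.

```python
from collections import deque

def fcfs(bursts):
    """First-come, first-served: completion time of each process in order."""
    t, completion = 0, []
    for b in bursts:
        t += b
        completion.append(t)
    return completion

def round_robin(bursts, quantum):
    """Round robin with a fixed time quantum; returns completion times."""
    ready = deque(enumerate(bursts))     # FIFO ready queue of (pid, burst)
    remaining = list(bursts)
    t = 0
    completion = [0] * len(bursts)
    while ready:
        i, _ = ready.popleft()
        run = min(quantum, remaining[i]) # run for one quantum or to completion
        t += run
        remaining[i] -= run
        if remaining[i]:
            ready.append((i, remaining[i]))  # preempted: back of the queue
        else:
            completion[i] = t
    return completion
```

With bursts [3, 5, 2] and quantum 2, round robin finishes the short job earlier than FCFS would finish the long one, at the cost of extra context switches.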
The document discusses various design issues related to interprocess communication using message passing. It covers topics like synchronization methods, buffering strategies, process addressing schemes, reliability in message passing, and group communication. The key synchronization methods are blocking and non-blocking sends/receives. Issues addressed include blocking forever if the receiving process crashes, buffering strategies like null, single-message and finite buffers, and naming schemes like explicit and implicit addressing. Reliability is achieved using protocols like four-message, three-message and two-message. Group communication supports one-to-many, many-to-one and many-to-many communication with primitives for multicast, membership and different ordering semantics.
Message and Stream Oriented Communication, by Dilum Bandara
Message and Stream Oriented Communication in distributed systems. Persistent vs. Transient Communication. Event queues, Pub/sub networks, MPI, Stream-based communication, Multicast communication
Directory services allow entities to be described through attribute-value pairs, known as attribute-based naming. Attributes can be used to search for entities like email messages that have attributes for sender, recipient, subject, etc. Discovery services register and lookup services in distributed systems through attributes. Jini discovery service uses multicast to locate lookup services and services register attributes with lookup services. The Global Name Service (GNS) provided a distributed directory system with a tree structure of directories and references to support resource location and email addressing across changing organizational structures.
2. Distributed Systems Hardware & Software Concepts, by Prajakta Rane
This document discusses distributed system software and middleware. It describes three types of operating systems used in distributed systems - distributed operating systems, network operating systems, and middleware operating systems. Middleware operating systems provide a common set of services for local applications and independent services for remote applications. Common middleware models include remote procedure call, remote method invocation, CORBA, and message-oriented middleware. Middleware offers services like naming, persistence, messaging, querying, concurrency control, and security.
The document discusses three classical synchronization problems: the dining philosophers problem, the readers-writers problem, and the bounded buffer problem. For each problem, it provides an overview of the problem structure, potential issues like deadlock, and example semaphore-based solutions to coordinate access to shared resources in a way that avoids those issues. It also notes some applications where each type of problem could arise, like processes sharing a limited number of resources.
Distributed deadlock occurs when processes are blocked while waiting for resources held by other processes in a distributed system without a central coordinator. There are four conditions for deadlock: mutual exclusion, hold and wait, non-preemption, and circular wait. Deadlock can be addressed by ignoring it, detecting and resolving occurrences, preventing conditions through constraints, or avoiding it through careful resource allocation. Detection methods include centralized coordination of resource graphs or distributed probe messages to identify resource waiting cycles. Prevention strategies impose timestamp or age-based priority to resource requests to eliminate cycles.
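The detection step reduces to finding a cycle in the wait-for graph. A centralized coordinator that has collected the graph could run a depth-first search like this sketch (names and the graph encoding are illustrative):

```python
def has_deadlock(wait_for):
    """Cycle detection over a wait-for graph given as
    {process: set of processes it is waiting on}."""
    nodes = set(wait_for) | {q for qs in wait_for.values() for q in qs}
    color = {p: "white" for p in nodes}   # white = unseen, grey = on stack

    def visit(p):
        color[p] = "grey"
        for q in wait_for.get(p, ()):
            # A grey successor means we closed a waiting cycle: deadlock.
            if color[q] == "grey" or (color[q] == "white" and visit(q)):
                return True
        color[p] = "black"                # fully explored, no cycle through p
        return False

    return any(color[p] == "white" and visit(p) for p in nodes)
```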
The Dempster-Shafer Theory was developed by Arthur Dempster in 1967 and Glenn Shafer in 1976 as an alternative to Bayesian probability. It allows one to combine evidence from different sources and obtain a degree of belief (or probability) for some event. The theory uses belief functions and plausibility functions to represent degrees of belief for various hypotheses given certain evidence. It was developed to describe ignorance and consider all possible outcomes, unlike Bayesian probability which only considers single evidence. An example is given of using the theory to determine the murderer in a room with 4 people where the lights went out.
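Dempster's rule of combination, together with belief and plausibility, can be sketched over mass functions whose focal elements are `frozenset`s. A minimal sketch; the example masses in the test are illustrative, not from the murder-mystery example.

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions {frozenset: mass}.
    Conflicting mass (empty intersections) is discarded and the rest
    renormalized by 1 - K, where K is the total conflict."""
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def belief(m, hypothesis):
    """Bel(A): total mass committed to subsets of A."""
    return sum(v for k, v in m.items() if k <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(A): total mass not contradicting A (intersecting it)."""
    return sum(v for k, v in m.items() if k & hypothesis)
```

Belief and plausibility bracket the probability of a hypothesis, which is how the theory represents ignorance that a single Bayesian number cannot.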
This document discusses interprocess communication (IPC) and message passing in distributed systems. It covers key topics such as:
- The two main approaches to IPC - shared memory and message passing
- Desirable features of message passing systems like simplicity, uniform semantics, efficiency, reliability, correctness, flexibility, security, and portability
- Issues in message passing IPC like message format, synchronization methods (blocking vs. non-blocking), and buffering strategies
This document discusses various inter-process communication (IPC) mechanisms in Linux, including pipes, FIFOs, and message queues. Pipes allow one-way communication between related processes, while FIFOs (named pipes) allow communication between unrelated processes through named pipes that persist unlike anonymous pipes. Message queues provide more robust messaging between unrelated processes by allowing messages to be queued until received and optionally retrieved out-of-order or by message type. The document covers the key functions and system calls for creating and using each IPC mechanism in both shell and C programming.
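A one-way pipe between related processes can be demonstrated with `os.pipe` and `os.fork`. This sketch is POSIX-only (it assumes a Unix-like system where `fork` is available).

```python
import os

# Parent creates the pipe before forking, so both processes inherit it.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: close the unused read end, write, and exit without cleanup
    # handlers (os._exit is the idiom after fork).
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:
    # Parent: close the unused write end so read() sees EOF when the
    # child is done, then reap the child.
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
```

A FIFO differs only in setup: `os.mkfifo(path)` creates a named pipe that unrelated processes can open by path.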
Termination and Detection (all details), by DeeshaKhamar1
The document discusses termination detection in distributed systems. It describes three termination detection algorithms: (1) using distributed snapshots, where processes take snapshots when becoming idle and termination is detected when a snapshot includes all processes; (2) weight throwing, where processes send weights with messages and termination is detected when the controlling process receives all weights; and (3) spanning tree-based, where processes report termination up a spanning tree and the root detects termination. The document provides details of each algorithm's approach, data structures, and correctness.
This document discusses various synchronization issues in distributed systems including clock synchronization, event ordering, mutual exclusion, and deadlock. It describes how computer clocks are implemented and different clock synchronization algorithms like centralized and distributed algorithms. It explains logical clocks and happened-before relation for event ordering. Different approaches for mutual exclusion like centralized, distributed, and token-passing are outlined. The four conditions for deadlock and different strategies like avoidance, prevention, and detection and recovery are summarized. Resource allocation graphs and wait-for graphs are introduced for modeling deadlocks.
This document discusses agreement protocols in distributed systems. It defines three main agreement problems: Byzantine agreement, consensus, and interactive consistency. Byzantine agreement requires all non-faulty processors to agree on a single value initialized by a source processor. Consensus requires agreement on a single value when each processor begins with a different initial value. Interactive consistency requires agreement on a set of values when initial values differ across processors. The document outlines solutions for these problems under synchronous and asynchronous models with crash, omission, and Byzantine faults.
The document discusses various techniques for synchronization in shared memory and distributed systems, including monitors, message passing, remote procedure calls (RPC), and logical clocks. It describes monitors as a synchronization primitive for shared memory that uses condition variables and mutual exclusion. For distributed synchronization it discusses message passing with channels/ports/mailboxes and RPC/rendezvous. Classical synchronization problems like the readers-writers problem and dining philosophers problem are presented along with their solutions. Logical clocks are introduced as a way to order events in a distributed system when physical clocks may be skewed.
The document discusses process synchronization and classical synchronization problems. It describes the critical section problem, where multiple processes need exclusive access to a shared resource. The solution uses semaphores to control access. Classical synchronization problems like the bounded buffer problem, reader-writer problem, and dining philosophers problem are then presented to illustrate synchronization challenges.
This document discusses snapshots in distributed systems. It begins by defining a snapshot as recording the simultaneous local states of all processes and communication channels. Snapshots can be used for deadlock detection, monitoring systems, and checkpointing distributed databases. Determining a global state is difficult due to the distributed nature of systems with no shared memory or clocks. Consistent cuts that do not cross message orderings can accurately capture a global state. The document then discusses several snapshot algorithms, including Chandy-Lamport for FIFO systems using markers, and Lai-Yang for non-FIFO systems using message coloring.
The document describes a two phase LBM code with generalized boundary conditions. It discusses declaring global variables to store user inputs, reading input data from a text file, and the overall flow of the program. Key steps include the user choosing boundary conditions, initializing functions, and at each time step executing functions like density calculation, collision, and boundary conditions using the stored global variables. It then verifies the generalized code for several flows including Poiseuille flow, perturbation, Couette flow, and wall contact.
The document discusses process synchronization and related concepts. It begins with an introduction to the critical section problem and solutions using both software and hardware approaches. Classical problems of synchronization are presented, such as the bounded-buffer, readers-writers, and dining philosophers problems. The concepts of semaphores, mutexes, and monitors are explained as synchronization mechanisms. Producer-consumer problems are used as examples to demonstrate solutions using counting and binary semaphores.
- A distributed system is a collection of autonomous computers linked by a network that appear as a single computer. Inter-process communication allows processes running on different computers to exchange data. Common IPC methods include message passing, shared memory, and remote procedure calls.
- Marshalling is the process of reformatting data to allow exchange between modules that use different data representations. Remote procedure calls allow a program to execute subroutines in another address space, such as on another computer. The client-server model partitions tasks between service providers (servers) and requesters (clients).
- Election algorithms are used in distributed systems to choose a coordinator process from among a group of processes. Examples include the bully algorithm and ring
SCP is a computationally scalable Byzantine consensus protocol that uses committees to achieve scalability. It partitions users into multiple committees, with each committee running a traditional BFT consensus protocol in parallel. This allows throughput to scale with increased computational power. SCP combines proof-of-work for identity generation and committee assignment with message passing-based BFT within committees. Evaluation shows SCP scales computationally with increased cores and has lower bandwidth consumption than Bitcoin.
This document provides an overview of the Python programming language. It begins with an introduction to running Python code and output. It then covers Python's basic data types like integers, floats, strings, lists, tuples and dictionaries. The document explains input and file I/O in Python as well as common control structures like if/else statements, while loops and for loops. It also discusses functions as first-class objects in Python that can be defined and passed as parameters. The document provides examples of higher-order functions like map, filter and reduce. Finally, it notes that functions can be defined inside other functions in Python.
This document discusses computer network error detection and correction. It begins by defining single-bit errors and burst errors. It then explains three common error detection techniques: parity check, cyclic redundancy check (CRC), and checksum. Parity check uses a redundant bit to make the total number of 1s even or odd. CRC performs binary division to generate redundant bits. Checksum adds data bits and compares the sum. For error correction, it describes Hamming codes, which add redundant bits in specific positions to detect and correct single-bit errors.
This project aims to prevent ship accidents using an Arduino, ultrasonic sensor, motors, LEDs, and buzzer. The sensor measures distance to obstacles and the system responds accordingly: at a safe distance the green LED is on and motors run normally, at closer distances the red LED and buzzer activate and motors may stop or reverse, depending on proximity. The system divides distance readings into four levels that determine the response based on danger level. It was developed iteratively, first testing components separately and then combining them through algorithm development.
This document discusses big-O, Ω, and Θ notation for analyzing algorithms and describes how to determine the time complexity of various algorithms. It provides examples of algorithms with different complexities, such as O(n), O(n^2), and O(n^3). It explains that both big-O and big-Ω describe the worst case time, and how to prove the lower and upper bounds for different algorithms.
The document discusses various data link layer protocols for flow and error control. It begins by explaining the basic functions of flow control and error control. It then describes some simple protocols that could be used over noiseless channels, including the simplest protocol and stop-and-wait protocol. The document goes on to introduce protocols that add error control functionality to handle noisy channels, such as stop-and-wait automatic repeat request (ARQ). It provides examples of how these protocols work using sequencing and acknowledgments.
In this presentation we introduce a family of gossiping algorithms whose members share the same structure though they vary their performance in function of a combinatorial parameter. We show that such parameter may be considered as a “knob” controlling the amount of communication parallelism characterizing the algorithms. After this we introduce procedures to operate the knob and choose parameters matching the amount of communication channels currently provided by the available communication system(s). In so doing we provide a robust mechanism to tune the production of requests for communication after the current operational conditions of the consumers of such requests. This can be used to achieve high performance and programmatic avoidance of undesirable events such as message collisions.
Paper available at https://dl.dropboxusercontent.com/u/67040428/Articles/pdp12.pdf
Distributed Computing Set 3 - Topics of non-Byzantine ConsensusOsama Askoura
This document summarizes key points about consensus agreement algorithms. It discusses how consensus can be solved with halting (crash) failures in synchronous systems using a simple algorithm that requires f+1 rounds, where f is the maximum number of failures. It also proves that no consensus algorithm can solve the problem in less than f+1 rounds. For asynchronous systems, the document notes that even one halting failure makes consensus insolvable. Byzantine agreement is also impossible to solve in asynchronous systems.
The document discusses the Byzantine Generals Problem which aims to allow computer systems to coordinate actions reliably even if some components fail or are compromised. It presents two algorithms, the Oral Messages algorithm and the Signed Messages algorithm, to solve the problem under different assumptions about message delivery and the ability to authenticate messages. The Signed Messages algorithm can tolerate more failures as it assumes messages can be authenticated through digital signatures.
This document discusses the randomized Byzantine generals problem and its solution proposed by Michael Rabin in 1983. The key points are:
- The Byzantine generals problem models reaching consensus in a distributed system where some processes may be unreliable or malicious.
- Rabin proposed a randomized algorithm where processes agree on a common value through multiple rounds of exchanging signed messages. This algorithm ensures agreement with high probability within a bounded number of rounds.
- The algorithm uses authentication techniques like digital signatures to ensure traitors can only lie about other traitors, not impersonate others. It also uses a "lottery" procedure for processes to randomly select a coordinator in each round.
- Rabin's randomized algorithm guarantees consensus
This document provides an overview of queuing systems and their analysis. It discusses key concepts like arrival and service processes, performance measures, steady-state analysis using Little's Law, and birth-death processes. An example M/M/1 queue is analyzed to find the steady-state probabilities and performance metrics like expected number in the system and average wait times. The methodology of setting up balance equations, solving for the steady-state distribution, and applying it to derive performance measures is demonstrated.
Similar to Solutions to byzantine agreement problem (20)
Uncovering Bugs in P4 Programs with Assertion-based VerificationAJAY KHARAT
P4 programs allows Network Administrators to deploy network functionalities.
P4 Programming language allows Network Administrators to specify conditions in few instructions as compared to other programming languages.
Earlier, some tools were developed to detect bugs in P4 programs.
But the proposed models are either not able to model P4 programs or cannot reason about program specifications.
SDPROBER: A SOFTWARE DEFINED PROBER FOR SDNAJAY KHARAT
Showing how the central control in SDN can be used for reducing the costs that are involved in proactive delay measurement and how SDN can facilitate adaptability of the measurements to varying conditions.
Instrumenting Open vSwitch with Monitoring Capabilities: Designs and ChallengesAJAY KHARAT
With the advancement of SDN and NFV techniques a series of work was proposed:
OpenSketch, DREAM, FlowRadar, Trumpet
Hybrid solution that balances the tradeoff between FCAP (higher accuracy) and SMON (less memory)
Alternative data structure to Ring Buffer that would consume less memory
Achieve a design of integration that has the minimal forwarding-monitoring function interference, optimal code sharing and efficient CPU/Memory resource usage
Memory and Performance Isolation for a Multi-tenant Function-based Data-planeAJAY KHARAT
3 approaches to memory protection are as follows:
Memory safe language: Writing modules in memory safe language like Rust automatically manages memory but already there are NF’s written in C/C++ , rewriting them from scratch with Rust will takes lots of effort.
Hardware-based memory protection(fine-grained approach): overhead of MPX comes from loading/storing the individual bounds for every pointers in a program
Coarse-grained hardware protection: Divides memory space into modules based on tenancy , having 2 advantages:
Tenants cant access each other modules and reduces the size of bound table thus reducing lookup overhead.
NS4: Enabling Programmable Data Plane SimulationAJAY KHARAT
Programmable data plane with multiple devices can now be simulated
Simulation setup much easier compared to ns-3
Direct migration of simulated behaviour to real-world devices possible
Less error prone code writing.
Performance improved significantly.
Relevance of YATES
Representing real life scenarios
Capturing the actual impact of factors in the simulation
Runtime Parameters
Number of rules
Failure Model
Predicting new Traffic Matrix
Life in the Fast Lane: A Line-Rate Linear RoadAJAY KHARAT
Network hardware was simple and fixed.
Cannot change the underlying code.
As new generation of programmable switches which match the performance of fixed function devices has become commercially available
like consensus protocols, in-network caching etc..
One common feature of all these applications is that they depend on stateful computations.
If this trend continues—as appears likely—then it is worth identifying which abstractions are needed to support a more general form of stateful processing. How?
How to implement complex policies on existing network infrastructure AJAY KHARAT
Network has grown complex today and requires several features like VPN, firewall, intrusion detection etc
Network wide policy cannot be defined on a single switch(approximately around 750 entries per table), requires too much memory and computation
Need to split policy into several switches
Network-Wide Heavy-Hitter Detection with Commodity SwitchesAJAY KHARAT
Network operators often need to identify outliers in network traffic, to detect attacks or diagnose performance problems.
In order to detect such problems network operators perform heavy hitter detection for flows.
In the traditional system, the heavy hitter detection was done using analysing packets or examining the packet flows.
Prior work was focus on detecting heavy hitters on a single switch but we often need to track network-wide heavy hitters.
While detecting heavy hitters on network wide basis we will try to reduce the communication overhead while maintaining the accuracy.
p4pktgen: Automated Test Case Generation for P4 ProgramsAJAY KHARAT
Traditional network devices - fixed set of capabilities
Rise of programmable network devices in recent years
Offers great flexibility / capability than traditional network devices
Flexibility introduces new bugs:
Hardware
Toolchains
Programs
These bugs were previously covered by traditional network devices due to fixed set of capabilities
Use test cases to check whether program is behaving as intended on the device
virtual memory management in multi processor mach osAJAY KHARAT
Virtual memory management in multi-processor Mach OS allows processes to access more memory than is physically installed by using virtual addresses. The Mach kernel provides basic services like tasks, threads, messages, and ports to enable parallel and distributed applications. Tasks have their own virtual address spaces that are divided into pages which are allocated to physical frames. The virtual memory system provides protection at the page level by using protection codes in page table entries to control read, write, and execute permissions.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Accident detection system project report.pdfKamal Acharya
The Rapid growth of technology and infrastructure has made our lives easier. The
advent of technology has also increased the traffic hazards and the road accidents take place
frequently which causes huge loss of life and property because of the poor emergency facilities.
Many lives could have been saved if emergency service could get accident information and
reach in time. Our project will provide an optimum solution to this draw back. A piezo electric
sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With
signals from a piezo electric sensor, a severe accident can be recognized. According to this
project when a vehicle meets with an accident immediately piezo electric sensor will detect the
signal or if a car rolls over. Then with the help of GSM module and GPS module, the location
will be sent to the emergency contact. Then after conforming the location necessary action will
be taken. If the person meets with a small accident or if there is no serious threat to anyone’s
life, then the alert message can be terminated by the driver by a switch provided in order to
avoid wasting the valuable time of the medical rescue team.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency time app where a user can search for the blood banks as
well as the registered blood donors around Mumbai. This application also provide an
opportunity for the user of this application to become a registered donor for this user have
to enroll for the donor request from the application itself. If the admin wish to make user
a registered donor, with some of the formalities with the organization it can be done.
Specialization of this application is that the user will not have to register on sign-in for
searching the blood banks and blood donors it can be just done by installing the
application to the mobile.
The purpose of making this application is to save the user’s time for searching blood of
needed blood group during the time of the emergency.
This is an android application developed in Java and XML with the connectivity of
SQLite database. This application will provide most of basic functionality required for an
emergency time application. All the details of Blood banks and Blood donors are stored
in the database i.e. SQLite.
This application allowed the user to get all the information regarding blood banks and
blood donors such as Name, Number, Address, Blood Group, rather than searching it on
the different websites and wasting the precious time. This application is effective and
user friendly.
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
Mechatronics is a multidisciplinary field that refers to the skill sets needed in the contemporary, advanced automated manufacturing industry. At the intersection of mechanics, electronics, and computing, mechatronics specialists create simpler, smarter systems. Mechatronics is an essential foundation for the expected growth in automation and manufacturing.
Mechatronics deals with robotics, control systems, and electro-mechanical systems.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
2. We studied in the last class:
-> Faulty processor problem
-> Limitations on the number of faulty processors
-> An impossibility result
3. We will be studying:
-> Lamport-Shostak-Pease Algorithm
-> Dolev et al.'s Algorithm
4. Lamport-Shostak-Pease Algorithm
Solves Byzantine agreement for 3m+1 or more processors in the presence of at most m faulty processors,
i.e. with n processors, the number of faulty processors must not exceed (n-1)/3.
The algorithm is defined recursively in terms of:
• OM(0)
• OM(m), m > 0
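As a quick sanity check of this bound, the maximum number of tolerable faulty processors for a given n can be computed directly (a trivial illustrative helper, not part of the original algorithm):

```python
def max_faulty(n):
    # Largest m satisfying n >= 3m + 1, i.e. floor((n - 1) / 3).
    return (n - 1) // 3

# 4 processors tolerate 1 traitor; 6 processors still tolerate only 1;
# it takes 7 processors to tolerate 2.
assert max_faulty(4) == 1
assert max_faulty(6) == 1
assert max_faulty(7) == 2
```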
5. Oral Messaging (OM)
There must be at least 3m+1 processors to cope with m faulty processors using Oral Messaging.
What is Oral Messaging? Each processor sends its messages directly to the other processors.
Our assumptions:
1. Every message that is sent is delivered correctly
2. The receiver of a message knows who sent it
3. The absence of a message can be detected
6. ALGORITHM OM(0)
When there are no faulty processors, achieving agreement is easy:
1. The source processor sends its value to every other processor
2. Each processor uses the value it receives from the source; if no value is received, it uses the default value 0
7. ALGORITHM OM(m), m > 0
1. The source processor sends its value to every other processor
2. For each i, let vi be the value processor i receives from the source. Processor i acts as the new source and initiates Algorithm OM(m-1), in which it sends the value vi to each of the n-2 other processors
3. For each i and each j, let vj be the value processor i received from processor j in step 2 using Algorithm OM(m-1). Processor i uses the value majority(v1, v2, ..., vn-1)
8. What is Majority
majority(v1, v2, v3, ..., vn-1) is the majority value among the vi if one exists, and otherwise the value 0,
i.e. a value on which a majority of the processors agree.
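Putting OM(0), OM(m), and majority together, the recursion can be sketched in Python. This is an illustrative simulation under the stated assumptions, not code from the presentation: the hypothetical hook behave(p, dest, v) models what processor p actually sends to dest (a non-faulty processor forwards v unchanged; a faulty one may send anything).

```python
from collections import Counter

def majority(values, default=0):
    # Strict majority value among `values`, or the default 0 if none exists.
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) / 2 else default

def om(m, source, receivers, value, behave):
    # behave(p, dest, v): the value processor p actually sends to dest
    # (a non-faulty p returns v unchanged; a faulty p may return anything).
    sent = {r: behave(source, r, value) for r in receivers}
    if m == 0:
        # OM(0): each receiver simply uses the value it got from the source.
        return sent
    decided = {}
    for r in receivers:
        # Each other receiver q acts as a new source and runs OM(m-1) among
        # the remaining n-2 processors; r keeps the value it gets via q.
        relayed = [om(m - 1, q, [p for p in receivers if p != q],
                      sent[q], behave)[r]
                   for q in receivers if q != r]
        decided[r] = majority([sent[r]] + relayed)
    return decided
```

For instance, with four processors and a faulty source p0 that sends 1, 0, and 1 to its receivers (as in Example 2 below), every receiver ends up computing majority(1, 0, 1) = 1.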
9. Example 1
Processors p0, p1, p2, p3; available values 0, 1.
p0 initiates agreement: it executes algorithm OM(1), sending the value 1 to all other processors.
(Diagram: p0 sends 1 to each of p1, p2, p3.)
This example shows what happens when one receiving processor is faulty.
10. Example 1 (contd.)
Processors p0, p1, p2, p3; available values 0, 1.
p0 initiates agreement: it executes algorithm OM(1), sending the value 1 to all other processors.
p1, p2, p3 execute OM(0), relaying the value each received from p0; the faulty receiver relays 0 to one of its peers.
After receiving all messages, the processors choose the majority value:
p1: (1, 1, 1)
p2: (1, 1, 1)
p3: (1, 0, 1)
They agree on the majority value 1.
11. Example 2
Processors p0, p1, p2, p3; available values 0, 1.
Here p0 is the faulty processor, and it initiates agreement.
p0 executes algorithm OM(1), but being faulty it sends inconsistent values: 1, 0, and 1 to its three receivers.
(Diagram: p0 sends the values 1, 0, 1 to p1, p2, p3.)
12. Example 2 (contd.)
Processors p0, p1, p2, p3; available values 0, 1.
p0 initiates agreement, executing algorithm OM(1); being faulty, it sends the values 1, 0, and 1 to its three receivers.
p1, p2, p3 execute OM(0).
(Diagram: first relay step of OM(0).)
13. Example 2 (contd.)
Processors p0, p1, p2, p3; available values 0, 1.
p0 initiates agreement, executing algorithm OM(1); being faulty, it sends the values 1, 0, and 1 to its three receivers.
p1, p2, p3 execute OM(0).
(Diagram: remaining relay steps of OM(0), each receiver forwarding the value it got from p0.)
14. Example 2 (contd.)
Processors p0, p1, p2, p3; available values 0, 1.
p0 initiates agreement, executing algorithm OM(1); being faulty, it sends the values 1, 0, and 1 to its three receivers.
p1, p2, p3 execute OM(0). After receiving all messages, the processors choose the majority value:
p1: (1, 0, 1)
p2: (1, 0, 1)
p3: (1, 0, 1)
They agree on the majority value 1. Since all non-faulty processors decide on the same value, Byzantine agreement is satisfied.
15. Dolev et al.’s Algorithm
Motivation behind Dolev et al.’s algorithm:
• It does not depend on the behavior of the faulty processors
• It requires 2m+3 rounds
• No authentication is required
• It is a polynomial-time algorithm for reaching Byzantine agreement
16. Dolev et al.’s Algorithm
Our assumptions:
• The topology of the network is known
• The algorithm is synchronous, i.e. the duration of each round is known
• Even without authentication, the immediate sender of a message can be identified
• There is no solution if the number of faulty processors exceeds the upper bound, i.e. 1/3 of the processors
17. Dolev et al.’s Algorithm
• m is the number of faulty processors
• Two thresholds: LOW = m + 1 and HIGH = 2m + 1
• Any set of LOW processors contains at least one non-faulty processor (used to support an assertion)
• Any set of HIGH processors contains at least m + 1 non-faulty processors (used to confirm an assertion)
• Two types of message: a “*” message, and a message containing the name of a processor
18. Dolev et al.’s Algorithm
• What is a “*” message? “*” denotes that the sender is sending the value 1.
• What is a named message? It denotes that the sender of the message received a “*” from the named processor.
Every processor keeps a record of all messages it has received.
Processor j is a direct supporter of k if j directly receives “*” from k.
When a non-faulty processor j directly receives “*” from processor k, it sends the message “k” to all other processors.
When processor i receives the message “k” from processor j, it adds j to Wi^k, because j is a witness to the message “k”.
In general, Wi^x is the set of processors (witnesses) that have sent the message x to processor i.
19. Dolev et al.’s Algorithm
(Diagram: processor k sends “*” to j; j then sends the message “k” to the other processors; processor i adds the witness j to Wi^k.)
Processor j is a direct supporter of k if j directly receives “*” from k.
When a non-faulty processor j directly receives “*” from processor k, it sends the message “k” to all other processors.
When processor i receives the message “k” from processor j, it adds j to Wi^k, because j is a witness to the message “k”.
Wi^k is the set of processors (witnesses) that have sent the message k to processor i.
Processor i is an indirect supporter of processor k if |Wi^k| >= LOW.
Processor i confirms processor k if |Wi^k| >= HIGH.
20. Dolev et al.’s Algorithm
Four rules of operation:
1. In the first round, the source broadcasts its value to all other processors
2. In each round k > 1, a processor broadcasts the names of all processors for which it is either a direct or an indirect supporter; if it has not yet initiated in a previous round, it also broadcasts its “*” message
3. If a processor confirms at least HIGH processors, it commits to the value 1
4. After round 2m+3, if the value 1 has been committed, the processors agree on 1; otherwise they agree on 0
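The witness-set bookkeeping behind rules 2 and 3 can be sketched as follows. This is a minimal illustration of the Wi^x sets and the LOW/HIGH thresholds only, not a full implementation of the algorithm; the class and method names are invented for this sketch.

```python
class WitnessTracker:
    # Tracks, for one processor i, the witness sets Wi^k used in
    # Dolev et al.'s algorithm. m is the assumed number of faulty processors.
    def __init__(self, m):
        self.low = m + 1        # any LOW-sized set has >= 1 non-faulty member
        self.high = 2 * m + 1   # any HIGH-sized set has >= m+1 non-faulty members
        self.witnesses = {}     # Wi^k: senders of the named message "k"

    def receive_name(self, sender, k):
        # `sender` relays that it saw "*" from processor k, so it
        # becomes a witness for k's assertion.
        self.witnesses.setdefault(k, set()).add(sender)

    def is_indirect_supporter(self, k):
        # Supported once at least one witness is guaranteed non-faulty.
        return len(self.witnesses.get(k, ())) >= self.low

    def confirms(self, k):
        # Confirmed once at least m+1 witnesses are guaranteed non-faulty.
        return len(self.witnesses.get(k, ())) >= self.high

    def commits(self):
        # Rule 3: commit to 1 after confirming at least HIGH processors.
        return sum(self.confirms(k) for k in self.witnesses) >= self.high
```

With m = 1 (LOW = 2, HIGH = 3), a single witness for “k” is not enough to support k, two witnesses support it, and a third confirms it.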
31. Papers Referred
An Efficient Algorithm for Byzantine Agreement without Authentication
https://www.sciencedirect.com/science/article/pii/S0019995882907768
The Byzantine Generals Problem
https://people.eecs.berkeley.edu/~luca/cs174/byzantine.pdf
Editor's Notes
Processor i is an indirect supporter of processor k if |Wi^k| >= LOW, i.e. processor i has received the message “k” from at least LOW processors.
Processor i confirms processor k if |Wi^k| >= HIGH, i.e. at least HIGH processors told processor i that they received the value 1 from processor k.
Processor i is indirect supporter of processor j
if |Wix|>= LOW
---------------
i.e. process i has received message “k” from at least LOW no of processors
===============
Processor i confirms processor j
if |Wix|>= HIGH
---------------
i.e. at least HIGH no of processors told processor i that they received the value of 1 from processor k.
Processor i is indirect supporter of processor j
if |Wix|>= LOW
---------------
i.e. process i has received message “k” from at least LOW no of processors
===============
Processor i confirms processor j
if |Wix|>= HIGH
---------------
i.e. at least HIGH no of processors told processor i that they received the value of 1 from processor k.
Processor i is indirect supporter of processor j
if |Wix|>= LOW
---------------
i.e. process i has received message “k” from at least LOW no of processors
===============
Processor i confirms processor j
if |Wix|>= HIGH
---------------
i.e. at least HIGH no of processors told processor i that they received the value of 1 from processor k.
Processor i is indirect supporter of processor j
if |Wix|>= LOW
---------------
i.e. process i has received message “k” from at least LOW no of processors
===============
Processor i confirms processor j
if |Wix|>= HIGH
---------------
i.e. at least HIGH no of processors told processor i that they received the value of 1 from processor k.
Processor i is indirect supporter of processor j
if |Wix|>= LOW
---------------
i.e. process i has received message “k” from at least LOW no of processors
===============
Processor i confirms processor j
if |Wix|>= HIGH
---------------
i.e. at least HIGH no of processors told processor i that they received the value of 1 from processor k.