The document discusses process synchronization and why it is needed to prevent race conditions. It describes critical sections, where a process accesses shared resources and must be mutually exclusive. Several algorithms for coordinating processes to enter critical sections are presented, including using semaphores, disabling interrupts, and specialized machine instructions like test-and-set. However, all have limitations like busy waiting that wastes CPU resources. An optimal synchronization solution is still needed.
PowerPoint presentation on Distributed Operating Systems, covering the reasons for opting for distributed systems over centralized systems, the types of distributed systems, and process migration and its advantages.
A distributed system consists of multiple connected CPUs that appear as a single system to users. Distributed systems provide advantages like communication, resource sharing, reliability and scalability. However, they require distribution-aware software and uninterrupted network connectivity. Distributed operating systems manage resources across connected computers transparently. They provide various forms of transparency and handle issues like failure, concurrency and replication. Remote procedure calls allow calling remote services like local procedures to achieve transparency.
This document discusses different distributed computing system (DCS) models:
1. The minicomputer model consists of a few minicomputers with remote access allowing resource sharing.
2. The workstation model consists of independent workstations scattered throughout a building where users log onto their home workstation.
3. The workstation-server model includes minicomputers, diskless and diskful workstations, and centralized services like databases and printing.
It provides an overview of the key characteristics and advantages of different DCS models.
Inter-Process Communication in distributed systems (Aya Mahmoud)
Inter-Process Communication is at the heart of all distributed systems, so we need to know the ways that processes can exchange information.
Communication in distributed systems is based on Low-level message passing as offered by the underlying network.
Critical section problem in operating system (MOHIT DADU)
The critical section problem refers to ensuring that at most one process can execute its critical section, a code segment that accesses shared resources, at any given time. There are three requirements for a correct solution: mutual exclusion, meaning no two processes can be in their critical sections simultaneously; progress, meaning that when no process is in its critical section, the choice of which process enters next cannot be postponed indefinitely; and bounded waiting, placing a limit on how many times other processes may enter their critical sections after a process has requested entry. Early attempts to solve this using flags or a turn variable were incorrect because they did not guarantee all three requirements.
This document discusses multiprogramming and time sharing in operating systems. It defines multiprogramming as allowing multiple programs to execute concurrently by assigning pending work to idle processors and I/O devices. Time sharing extends multiprogramming by rapidly switching between programs so that each program executes for a fixed time quantum, giving users the impression that the entire system is dedicated to their use. The key aspects covered are the concepts of processes, CPU scheduling, and how multiprogramming and time sharing improve resource utilization.
This document discusses various applications of parallel processing. It describes how parallel processing is used in numeric weather prediction to forecast weather by processing large amounts of observational data. It is also used in oceanography and astrophysics to study oceans and conduct particle simulations. Other applications mentioned include socioeconomic modeling, finite element analysis, artificial intelligence, seismic exploration, genetic engineering, weapon research, medical imaging, remote sensing, energy exploration, and more. The document also discusses loosely coupled and tightly coupled multiprocessors and the differences between the two approaches.
Overview of Network Programming, Remote Procedure Calls, Remote Method Invocation, Message Oriented Communication, and web services in distributed systems
This document summarizes key aspects of protocol architecture, TCP/IP, and internet-based applications. It discusses the need for a protocol architecture to break communication tasks into layers. It then describes the layered TCP/IP protocol architecture and its components, including the physical, network access, internet, transport, and application layers. It also summarizes TCP and IP addressing requirements and operation, as well as standard TCP/IP applications like SMTP, FTP, and Telnet. Finally, it contrasts traditional data-based applications with newer multimedia applications involving large amounts of real-time audio and video data.
Remote Procedure Call in Distributed System (PoojaBele1)
Presentation to give description about the remote procedure call in distributed systems
Presentation covers some points on remote procedure call in distributed systems
This document discusses different file models and methods for accessing files. It describes unstructured and structured file models, as well as mutable and immutable files. It also covers remote file access using remote service and data caching models. Finally, it discusses different units of data transfer for file access, including file-level, block-level, byte-level, and record-level transfer models.
Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations, so it is not unusual for a single program to use all of the IPC methods.
IPC methods include pipes and named pipes; message queuing; semaphores; shared memory; and sockets.
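The simplest of these mechanisms, the anonymous pipe, can be sketched in a few lines of Python. This is an illustrative sketch, not taken from any of the summarized documents; in a real program the two ends would typically be split between a parent and a child process, but here both ends live in one process so the byte flow is easy to see.

```python
import os

# Create an anonymous pipe: bytes written to write_fd come out of read_fd.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello via pipe")
os.close(write_fd)          # closing the write end signals end-of-stream

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())        # hello via pipe
```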
The document discusses various design issues related to interprocess communication using message passing. It covers topics like synchronization methods, buffering strategies, process addressing schemes, reliability in message passing, and group communication. The key synchronization methods are blocking and non-blocking sends/receives. Issues addressed include blocking forever if the receiving process crashes, buffering strategies like null, single-message and finite buffers, and naming schemes like explicit and implicit addressing. Reliability is achieved using protocols like four-message, three-message and two-message. Group communication supports one-to-many, many-to-one and many-to-many communication with primitives for multicast, membership and different ordering semantics.
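The blocking versus non-blocking receive distinction mentioned above can be illustrated with a thread-safe queue standing in for a single-message channel. This sketch uses Python's `queue` module as an assumed stand-in for a message-passing primitive; it is not from the summarized document.

```python
import queue

# A bounded queue stands in for a single-message buffer between processes.
channel = queue.Queue(maxsize=1)

channel.put("ping")          # succeeds immediately: one free slot exists

# Blocking receive: waits until a message is available (one already is).
msg = channel.get()

# Non-blocking receive: fails immediately instead of waiting forever.
try:
    channel.get_nowait()
    got_second = True
except queue.Empty:
    got_second = False       # channel is empty, so no second message

print(msg, got_second)       # ping False
```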
Clock synchronization in distributed system (Sunita Sahu)
This document discusses several techniques for clock synchronization in distributed systems:
1. Time stamping events and messages with logical clocks to determine partial ordering without a global clock. Logical clocks assign monotonically increasing sequence numbers.
2. Clock synchronization algorithms like NTP that regularly adjust system clocks across the network to synchronize with a time server. NTP uses averaging to account for network delays.
3. Lamport's logical clocks algorithm that defines "happened before" relations and increments clocks between events to synchronize logical clocks across processes.
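The logical clock rules described in points 1 and 3 can be condensed into a small sketch: increment on every local event, stamp outgoing messages, and on receipt jump past both the local clock and the message timestamp. This is a minimal illustration of Lamport's rules, not code from the summarized deck.

```python
class LamportClock:
    """Logical clock: a counter that only moves forward, per process."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule 1: increment before each local event.
        self.time += 1
        return self.time

    def send(self):
        # A send is an event; the message carries the resulting timestamp.
        return self.tick()

    def receive(self, msg_time):
        # Rule 2: jump past both clocks so receive is ordered after send.
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes exchange one message.
p, q = LamportClock(), LamportClock()
p.tick()          # p: local event, time becomes 1
t = p.send()      # p: send event,  time becomes 2, message stamped 2
r = q.receive(t)  # q: receive,     time becomes max(0, 2) + 1 = 3
```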
This document presents an overview of process synchronization techniques. It discusses the critical section problem and provides solutions like Peterson's algorithm and semaphores. Classical synchronization problems like the bounded buffer, readers-writers, and dining philosophers problems are explained. The document also covers concepts like deadlock, starvation, and monitors. The objectives are to introduce software and hardware solutions for shared data consistency and describe mechanisms for ensuring atomic transactions.
The document discusses solutions to the critical section problem where multiple processes need exclusive access to shared resources. It defines the critical section problem and requirements for a solution. It then presents three algorithms using shared variables to coordinate access between two processes in a way that satisfies mutual exclusion, progress, and bounded waiting.
The document discusses Point-to-Point Protocol (PPP), which provides a standard method for transporting multi-protocol datagrams over point-to-point links. PPP consists of encapsulating packets into frames, a Link Control Protocol (LCP) for establishing and configuring the connection, and Network Control Protocols (NCPs) for network layer configuration. It describes PPP frame formats, byte stuffing for transparency, and authentication protocols like PAP and CHAP. The presentation includes a Wireshark demo and addresses questions about PPP design requirements and non-requirements.
Synchronization in distributed computing (SVijaylakshmi)
Synchronization in distributed systems is achieved via clocks. Physical clocks are used to adjust the time of nodes. Each node in the system can share its local time with other nodes in the system. The time is set based on UTC (Coordinated Universal Time).
The document discusses various concepts related to process management in operating systems including process scheduling, CPU scheduling, and process synchronization. It defines a process as a program in execution and describes the different states a process can be in during its lifecycle. It also discusses process control blocks which maintain information about each process, and various scheduling algorithms like first come first serve, shortest job first, priority and round robin scheduling.
A natural extension of the Random Access Machine (RAM) serial architecture is the Parallel Random Access Machine, or PRAM.
PRAMs consist of p processors and a global memory of unbounded size that is uniformly accessible to all processors.
Processors share a common clock but may execute different instructions in each cycle.
Distributed systems allow independent computers to appear as a single coherent system by connecting them through a middleware layer. They provide advantages like increased reliability, scalability, and sharing of resources. Key goals of distributed systems include resource sharing, openness, transparency, and concurrency. Common types are distributed computing systems, distributed information systems, and distributed pervasive systems.
A Distributed Shared Memory (DSM) system provides a logical abstraction of shared memory built using interconnected nodes with distributed physical memories. There are hardware, software, and hybrid DSM approaches. DSM offers simple abstraction, improved portability, potential performance gains, large unified memory space, and better performance than message passing in some applications. Consistency protocols ensure shared data coherency across distributed memories according to the memory consistency model.
IPC allows processes to communicate and share resources. There are several common IPC mechanisms, including message passing, shared memory, semaphores, files, signals, sockets, message queues, and pipes. Message passing involves establishing a communication link and exchanging fixed or variable sized messages using send and receive operations. Shared memory allows processes to access the same memory area. Semaphores are used to synchronize processes. Files provide durable storage that outlives individual processes. Signals asynchronously notify processes of events. Sockets enable two-way point-to-point communication between processes. Message queues allow asynchronous communication where senders and receivers do not need to interact simultaneously. Pipes create a pipeline between processes by connecting standard streams.
Independent processes operate concurrently without affecting each other, while cooperating processes can impact one another. Inter-process communication (IPC) allows processes to share information, improve computation speed, and share resources. The two main types of IPC are shared memory and message passing. Shared memory uses a common memory region for fast communication, while message passing involves establishing communication links and exchanging messages without shared variables. Key considerations for message passing include direct vs indirect communication and synchronous vs asynchronous messaging.
This document summarizes Peterson's algorithm for solving the critical section problem with two processes. Peterson's algorithm uses shared variables - an integer "turn" that indicates whose turn it is to enter the critical section, and a boolean flag array to indicate if a process is ready. Each process sets its flag to true, sets "turn" to the other process, and enters the critical section only if the other's flag is false or it owns the turn variable. This ensures mutual exclusion and progress while bounding waiting times.
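The protocol described above can be sketched with two Python threads. Note the hedges: Peterson's algorithm assumes sequentially consistent memory, which CPython's global interpreter lock happens to provide for this sketch; on real hardware, compiler and CPU reordering mean the algorithm additionally needs memory fences. The shortened switch interval is only there to keep the busy-wait loops from hogging the interpreter.

```python
import sys
import threading

# Shorten the interpreter's thread switch interval so the busy-wait
# loops below yield the GIL quickly.
sys.setswitchinterval(0.0005)

flag = [False, False]   # flag[i] is True when thread i wants to enter
turn = 0                # which thread must yield when both want in
counter = 0             # the shared resource
N = 2000

def worker(i):
    global counter, turn
    other = 1 - i
    for _ in range(N):
        # Entry section
        flag[i] = True
        turn = other                         # give the other thread priority
        while flag[other] and turn == other:
            pass                             # busy waiting: the classic drawback
        # Critical section
        counter += 1
        # Exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: no increments were lost
```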
The document discusses solutions to the critical section problem in multiprocessing systems. It defines the critical section problem as ensuring that when one process is executing shared data or resources, no other process can simultaneously execute that critical section. It then presents three algorithms to solve this problem, satisfying the requirements of mutual exclusion, progress, and bounded waiting.
This document discusses solutions to the critical section problem in operating systems where multiple processes need synchronized access to shared resources. It describes several proposed solutions that do not fully satisfy the requirements of mutual exclusion, progress, and bounded waiting. These include using a shared flag, turn variable, and ready flags. The document also covers hardware synchronization using atomic test-and-set instructions and introduces semaphores as a better synchronization method that avoids busy waiting.
The document discusses various techniques for process synchronization and solving the critical section problem where multiple processes need exclusive access to shared resources. It describes the critical section problem and requirements that must be met (mutual exclusion, progress, and bounded waiting). It then summarizes several algorithms to solve the problem for two processes and multiple processes, including using semaphores which are basic synchronization tools using wait and signal operations.
The document summarizes key concepts related to process synchronization. It introduces the critical section problem where processes need coordinated access to shared resources. Several classic solutions are described, including Peterson's algorithm and using semaphores. Mutex locks and their implementation using atomic hardware instructions like test-and-set are also covered. The concepts of deadlock and starvation that can occur without proper synchronization are briefly mentioned at the end.
This document discusses inter-process communication and synchronization in operating systems. It covers topics like mutual exclusion, solutions to the mutual exclusion problem using software approaches like Dekker's and Peterson's algorithms, hardware support using test-and-set operations, and operating system solutions using semaphores. It also discusses principles of concurrency and interactions between processes like competing processes and cooperating processes.
Process synchronization is required when multiple processes access shared data concurrently. Peterson's solution solves the critical section problem for two processes using shared variables - an integer "turn" and a boolean flag array. Synchronization hardware uses atomic instructions like TestAndSet() and Swap() to implement locks for mutual exclusion. Semaphores generalize locks to control access to multiple resources.
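A spinlock built on TestAndSet() can be sketched as follows. Real hardware provides test-and-set as a single atomic instruction; since Python has no such primitive, a small internal `threading.Lock` stands in for the hardware atomicity, so this is an illustration of the protocol rather than a usable lock implementation.

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # keep the spin loops from hogging the GIL

class TestAndSetLock:
    """Spinlock built on a simulated atomic test-and-set."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically: return the old value and set the flag to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():       # spin until we observed False
            pass                          # busy waiting, as on real hardware

    def release(self):
        self._flag = False

lock = TestAndSetLock()
counter = 0

def worker():
    global counter
    for _ in range(2000):
        lock.acquire()
        counter += 1                      # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```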
The document discusses process synchronization and coordination to prevent interference between concurrent processes accessing shared resources. It describes the producer-consumer problem, where a producer adds data to a buffer and a consumer removes it, requiring synchronization to handle a full or empty buffer. Race conditions can occur when processes manipulate shared data concurrently, so processes must be synchronized using techniques like mutual exclusion and critical sections to ensure only one process accesses shared variables at a time. Algorithms like Peterson's solution are presented to synchronize processes and satisfy the requirements of mutual exclusion and progress.
Mutual Exclusion using Peterson's Algorithm (Souvik Roy)
This document discusses mutual exclusion algorithms for ensuring processes can access shared resources safely. It introduces the critical section problem and requirements for a solution. Centralized and decentralized mutual exclusion approaches are described. Popular algorithms are compared, including Dekker's, Lamport's bakery, and Peterson's algorithms. The document outlines how these algorithms work and their limitations. It proposes improvements using time-stamped and lock-based approaches and discusses future work applying these algorithms in distributed systems.
This document covers important concepts of process synchronization like: Background, critical section problem, critical region, synchronization hardware, semaphores. It is beneficial for engineering students of aryabhatta knowledge university of bihar (A.K.U. Bihar).
The document discusses various techniques for process synchronization and solving the critical section problem in concurrent systems. It describes producer-consumer problems and solutions using shared memory. It covers issues like race conditions that can occur. Different algorithms for solving the critical section problem are presented, including Peterson's algorithm and the Bakery algorithm. The document also discusses synchronization hardware support and low-level synchronization tools like locks, test-and-set instructions, and semaphores.
The document discusses solutions to the producer-consumer problem in concurrent programming. It describes how a shared counter variable can be incorrectly updated if the producer and consumer concurrently increment and decrement it. To prevent this race condition, only one process should manipulate the shared variable at a time. The document then introduces the concept of a critical section and Peterson's solution for allowing two processes to alternate access to a critical section while preserving mutual exclusion, progress, and bounded waiting.
The document discusses the critical section problem in operating systems and algorithms to solve it. The critical section problem occurs when multiple processes need exclusive access to a shared resource. Three key properties must be satisfied by any solution: mutual exclusion, progress, and bounded waiting. The document presents and analyzes several algorithms to solve the problem for two processes, including using flags and turn-taking variables. It also introduces semaphores as a synchronization primitive and discusses their implementation and usage to solve the critical section problem.
1) The document discusses various techniques for process synchronization and communication, including critical sections, mutual exclusion, semaphores, and monitors.
2) It presents examples like the print spooler and producer-consumer problem to illustrate issues that can occur without proper synchronization.
3) Semaphores provide a way to synchronize processes through operations like down and up that allow blocking on conditions and signaling other processes. However, they must be used carefully to avoid errors like deadlocks.
The document discusses process synchronization and solutions to the critical section problem in concurrent processes. It introduces Peterson's solution which uses shared variables - a "turn" variable and a "flag" array to indicate which process can enter the critical section. It also discusses synchronization hardware support using atomic instructions like Test-And-Set and Swap that can be used to implement mutual exclusion solutions to the critical section problem.
Monitors provide mutual exclusion and condition variables to synchronize processes. A monitor consists of private variables and procedures, public procedures that act as system calls, and initialization procedures. Condition variables allow processes to wait for events within a monitor. When signaling a condition variable, either the signaling process waits or the released process waits, depending on whether it uses the Hoare type or Mesa type.
This document discusses processes and their interaction in operating systems. It defines a process as a program in execution that includes code, data, threads of execution, and resources. It describes how operating systems virtualize CPUs and memory to allow multiple processes to run concurrently. It discusses different types of process interactions including competition over shared resources, which can cause race conditions, and cooperation through synchronization and message passing. It presents several algorithms to solve the critical section problem of mutual exclusion when processes access shared resources.
2. Process Synchronization
Why is synchronization needed?
Race Conditions
Critical Sections
Pure Software Solutions
Hardware Support
Semaphores
Monitors
Message Passing
3. Why is Synchronization Needed? 1/4

int Count = 10;

Process 1          Process 2
Count++;           Count--;

Count = ?  9, 10, or 11?
4. Why is Synchronization Needed? 2/4

int Count = 10;

Process 1               Process 2
LOAD  Reg, Count        LOAD  Reg, Count
ADD   #1                SUB   #1
STORE Reg, Count        STORE Reg, Count

The problem is that the execution flow may be switched in the middle!
5. Why is Synchronization Needed? 3/4

        Process 1                    Process 2
Inst     Reg  Memory          Inst     Reg  Memory
LOAD      10    10
                              LOAD      10    10
                              SUB        9    10
ADD       11    10
STORE     11    11
                              STORE      9     9   (erases the previous value 11)
6. Why is Synchronization Needed? 4/4

        Process 1                    Process 2
Inst     Reg  Memory          Inst     Reg  Memory
LOAD      10    10
ADD       11    10
                              LOAD      10    10
                              SUB        9    10
                              STORE      9     9
STORE     11    11   (erases the previous value 9)
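The two interleavings above can be replayed as straight-line code, with each process's register modeled as a local variable. This is an illustrative sketch only; the function names are mine, not the slides'.

```cpp
// Replay the interleavings of slides 5 and 6. Each "reg" models one
// process's private register; "count" plays the role of shared memory.

// Slide 5's order: P1 stores first, so P2's STORE erases the 11.
int interleaving_a(int count) {
    int reg1 = count;   // P1: LOAD
    int reg2 = count;   // P2: LOAD
    reg2 = reg2 - 1;    // P2: SUB #1
    reg1 = reg1 + 1;    // P1: ADD #1
    count = reg1;       // P1: STORE  (memory is now 11)
    count = reg2;       // P2: STORE  (erases 11; memory is now 9)
    return count;
}

// Slide 6's order: P2 stores first, so P1's STORE erases the 9.
int interleaving_b(int count) {
    int reg1 = count;   // P1: LOAD
    reg1 = reg1 + 1;    // P1: ADD #1
    int reg2 = count;   // P2: LOAD
    reg2 = reg2 - 1;    // P2: SUB #1
    count = reg2;       // P2: STORE  (memory is now 9)
    count = reg1;       // P1: STORE  (erases 9; memory is now 11)
    return count;
}
```

Starting from Count = 10, the first interleaving yields 9 and the second yields 11; neither LOAD/ADD/STORE sequence was interrupted mid-instruction, yet the result depends entirely on the ordering.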
7. Race Conditions

A race condition occurs if two or more processes/threads access and
manipulate the same data concurrently, and the outcome of the execution
depends on the particular order in which the accesses take place.

Synchronization is needed to prevent race conditions from happening.

Synchronization is a difficult topic. Don't miss a class; otherwise,
you will miss a lot of things.
8. Critical Section and Mutual Exclusion

A critical section is a section of code in which a process accesses
shared resources.

Thus, the execution of critical sections must be mutually exclusive
(i.e., at most one process can be in its critical section at any time).

The critical-section problem is to design a protocol that processes can
use to cooperate.

int count;    // shared
count++;      count--;      cout << count;    // three critical sections
9. The Critical Section Protocol

do {
    entry section
    critical section
    exit section
} while (1);

A critical section protocol consists of two parts: an entry section and
an exit section. Between them is the critical section that must run in
a mutually exclusive way.
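The skeleton above maps directly onto a lock: acquiring it is the entry section and releasing it is the exit section. As a preview of tools the deck introduces later, here is a minimal sketch using C++'s std::mutex; all names are illustrative, not from the slides.

```cpp
#include <mutex>
#include <thread>

std::mutex m;        // guards the critical sections below
int counter = 10;    // shared variable (slide's Count)

void incrementer(int times) {
    for (int i = 0; i < times; ++i) {
        m.lock();      // entry section: blocks if another thread holds m
        counter++;     // critical section
        m.unlock();    // exit section: lets one waiting thread proceed
    }
}

void decrementer(int times) {
    for (int i = 0; i < times; ++i) {
        m.lock();      // entry section
        counter--;     // critical section
        m.unlock();    // exit section
    }
}

int run_demo() {
    std::thread t1(incrementer, 100000);
    std::thread t2(decrementer, 100000);
    t1.join();
    t2.join();
    return counter;    // always 10: each ++ and -- ran mutually exclusively
}
```

Unlike the unsynchronized version of slides 3 to 6, every increment and decrement here is paired with an entry and exit, so the final value is deterministic.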
10. Solutions to the Critical Section Problem

Any solution to the critical section problem must satisfy the following
three conditions:
    Mutual Exclusion
    Progress
    Bounded Waiting

Moreover, the solution cannot depend on the CPU's relative speed and
scheduling policy.
11. Mutual Exclusion

If a process P is executing in its critical section, then no other
process can be executing in its critical section.

The entry protocol should be capable of blocking processes that wish to
enter but cannot. Moreover, when the process that is executing in its
critical section exits, the entry protocol must be able to know this
fact and allow a waiting process to enter.
12. Progress

If no process is executing in its critical section and some processes
wish to enter their critical sections, then:
    Only those processes that are waiting to enter can participate in
    the competition (to enter their critical sections).
    No other process can influence this decision.
    This decision cannot be postponed indefinitely.
13. Bounded Waiting

After a process has made a request to enter its critical section and
before it is granted the permission to enter, there exists a bound on
the number of times that other processes are allowed to enter.

Hence, even though a process may be blocked by other waiting processes,
it will not wait forever.
14. Software Solutions for Two Processes

Suppose we have two processes, P0 and P1. Let one process be Pi and the
other be Pj, where j = 1 - i. Thus, if i = 0 (resp., i = 1), then j = 1
(resp., j = 0).

We want to design the enter-exit protocol for a critical section so
that mutual exclusion is guaranteed.
15. Algorithm I: 1/2

process Pi:
do {
    while (turn != i);    // enter: if it is not my turn, I wait
    critical section
    turn = j;             // exit: I am done, it is your turn now
} while (1);

The global variable turn controls who can enter the critical section.
Since turn is either 0 or 1, only one process can enter. However,
processes are forced to run in an alternating way.
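Algorithm I can be run as-is with two C++ threads. The slides implicitly assume that loads and stores of turn are visible across processes; on real hardware that requires an atomic variable, so std::atomic is used in this sketch. Function and variable names are mine.

```cpp
#include <atomic>
#include <thread>
#include <vector>

std::atomic<int> turn{0};    // whose turn it is to enter (0 or 1)
std::vector<int> order;      // who entered, in sequence; only touched
                             // inside the critical section

void worker(int i, int rounds) {
    int j = 1 - i;
    for (int r = 0; r < rounds; ++r) {
        while (turn.load() != i) {}   // enter: busy-wait until my turn
        order.push_back(i);           // critical section
        turn.store(j);                // exit: your turn now
    }
}

// Returns true iff the entries came out strictly alternating: 0,1,0,1,...
bool strictly_alternates(int rounds) {
    order.clear();
    turn.store(0);
    std::thread t0(worker, 0, rounds);
    std::thread t1(worker, 1, rounds);
    t0.join();
    t1.join();
    if (order.size() != static_cast<size_t>(2 * rounds)) return false;
    for (size_t k = 0; k < order.size(); ++k)
        if (order[k] != static_cast<int>(k % 2)) return false;
    return true;
}
```

The recorded order is always 0,1,0,1,... which demonstrates both the mutual exclusion and the weakness of this algorithm: neither thread can ever enter twice in a row, regardless of scheduling.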
16. Algorithm I: 2/2

process Pi:
do {
    while (turn != i);    // enter: if it is not my turn, I wait
    critical section
    turn = j;             // exit: I am done, it is your turn now
} while (1);

This solution does not fulfill the progress condition. If Pj exits by
setting turn to i and terminates, Pi can enter but cannot enter again.
Thus, an irrelevant process can block other processes from entering a
critical section.
17. Algorithm II: 1/2

bool flag[2];

do {
    flag[i] = TRUE;      // enter: I am interested
    while (flag[j]);     // wait for you
    critical section
    flag[i] = FALSE;     // exit: I am not interested
}

Variable flag[i] is the "state" of process Pi: interested or
not-interested. Pi expresses its intention to enter, waits for Pj to
exit, enters its section, and finally changes to "I am out" upon exit.
18. Algorithm II: 2/2

bool flag[2];

do {
    flag[i] = TRUE;      // enter: I am interested
    while (flag[j]);     // wait for you
    critical section
    flag[i] = FALSE;     // exit: I am not interested
}

The correctness of this algorithm is timing dependent! If both
processes set flag[i] and flag[j] to TRUE at the same time, then both
will be looping at the while forever and no one can enter. Bounded
waiting does not hold.
19. Algorithm III: a Combination 1/4

bool flag[2];
int turn;

process Pi:
do {
    flag[i] = TRUE;                  // enter: I am interested
    turn = j;                        // yield to you first
    while (flag[j] && turn == j);    // wait while you are interested
                                     // and it is your turn
    critical section
    flag[i] = FALSE;                 // exit: I am done
} while (1);
20. Algorithm III: Mutual Exclusion 2/4

process Pi:
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);

Suppose both processes are in their critical sections. Then
the loop conditions flag[j] && turn == j (tested by Pi)
and flag[i] && turn == i (tested by Pj) were both FALSE.
Since flag[i] and flag[j] are both TRUE, it follows that
turn == j and turn == i are both FALSE.
But turn holds exactly one value, so only one of turn == i
and turn == j can be FALSE, not both.
We have a contradiction; hence Pi and Pj cannot be in their
critical sections at the same time.
21. Algorithm III: Progress 3/4

process Pi:
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);

If Pi is waiting to enter, it must be executing its
while loop.
Suppose Pj is not in its critical section:
If Pj is not interested in entering, flag[j] was
set to FALSE when Pj last exited. Thus, Pi may enter.
If Pj wishes to enter and sets flag[j] to TRUE,
it then sets turn to i, and Pi may enter.
In both cases, processes that are not waiting do not
block the waiting processes from entering.
22. Algorithm III: Bounded Waiting 4/4

process Pi:
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);

When Pi wishes to enter:
If Pj is outside of its critical section, then flag[j] is
FALSE and Pi may enter.
If Pj is in its critical section, it will eventually set
flag[j] to FALSE and Pi may enter.
If Pj is in its entry section, Pi may enter if it reaches
the while first. Otherwise, Pj enters, and Pi may enter
after Pj sets flag[j] to FALSE on exit.
Thus, Pi waits at most one round!
23. Hardware Support
There are two types of hardware support for
synchronization:
Disabling/enabling interrupts: this is slow
and difficult to implement on multiprocessor
systems.
Special privileged machine instructions:
Test-and-Set (TS)
Swap
Compare-and-Swap (CS)
24. Interrupt Disabling

do {
    disable interrupts   // entry
    critical section
    enable interrupts    // exit
} while (1);

Because interrupts are disabled, no context switch will
occur inside a critical section. This approach is
infeasible on a multiprocessor system, because all CPUs
would have to be informed. Moreover, some features that
depend on interrupts (e.g., the clock) may not work
properly.
25. Special Machine Instructions
Atomic: These instructions execute as one
uninterruptible unit. More precisely, when such
an instruction runs, all other instructions being
executed in various stages by the CPUs will be
stopped (and perhaps re-issued later) until this
instruction finishes.
Thus, if two such instructions are issued at the
same time, even though on different CPUs, they
will be executed sequentially.
Privileged: These instructions are, in general,
privileged, meaning they can only be executed in
supervisor or kernel mode.
26. The Test-and-Set Instruction

bool TS(bool *key)
{
    bool save = *key;
    *key = TRUE;
    return save;
}

bool lock = FALSE;

do {
    while (TS(&lock));   // entry
    critical section
    lock = FALSE;        // exit
} while (1);

Mutual exclusion is obvious, as only one TS instruction
can run at a time. However, progress and bounded waiting
may not be satisfied.
27. Problems with Software and
Hardware Solutions
All of these solutions use busy waiting.
Busy waiting means a process waits by executing a tight
loop to check the status/value of a variable.
Busy waiting may be needed on a multiprocessor system;
however, it wastes CPU cycles that other processes
could use productively.
Even though some personal/lower-end systems may allow
users to use some atomic instructions, unless the system
is lightly loaded, CPU and system performance can be
low, even though a programmer may "think" the program
looks more efficient.
So, we need a better solution.