There are three main methods for dealing with deadlocks in an operating system: prevention, avoidance, and detection with recovery. Prevention ensures that the necessary conditions for deadlock cannot occur through restrictions on resource allocation. Avoidance uses additional information about future resource needs and requests to determine if allocating resources will lead to an unsafe state. Detection identifies when a deadlock has occurred, then recovery techniques like process termination or resource preemption are used to resolve it. No single approach is suitable for all resource types, so systems often combine methods by applying the optimal one to each resource class.
This document discusses deadlocks and techniques for handling them. It begins by defining the four necessary conditions for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. It then describes three approaches to handling deadlocks: prevention, avoidance, and detection and recovery. Prevention aims to ensure one of the four conditions never holds. Avoidance uses more information to determine if a request could lead to a deadlock. Detection and recovery allows deadlocks but detects and recovers from them after the fact. The document provides examples of different prevention techniques like limiting resource types that can be held, ordering resource types, and preemption. It also explains the banker's algorithm for deadlock avoidance.
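The resource-ordering prevention technique mentioned above can be sketched as follows. This is a minimal illustration, not code from the document: the two locks, the `LOCK_ORDER` table, and the helper names are assumptions chosen for the example.

```python
import threading

# Hypothetical resources; the global ordering table is an assumption for illustration.
lock_a = threading.Lock()
lock_b = threading.Lock()
LOCK_ORDER = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    """Acquire locks in one fixed global order, breaking the circular-wait condition."""
    for lock in sorted(locks, key=lambda l: LOCK_ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def worker():
    # Even though the caller names lock_b first, both threads actually take
    # lock_a then lock_b, so neither can hold one while waiting on the other.
    acquire_in_order(lock_b, lock_a)
    try:
        pass  # critical section
    finally:
        release_all(lock_a, lock_b)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("no deadlock")
```

Because every thread requests resources in the same total order, a cycle in the wait-for relation is impossible.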
Deadlock occurs when a set of blocked processes form a circular chain where each process waits for a resource held by the next process. There are four necessary conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. The banker's algorithm avoids deadlock by tracking available resources and process resource needs, and only allocating resources to a process if it will not cause the system to enter an unsafe state where deadlock could occur. It uses matrices to represent allocation states and evaluates requests to ensure allocating resources does not lead to deadlock.
This document discusses storage management and disk structure. It covers mass storage structure including magnetic disks, disk platters, tracks, cylinders, sectors, and read/write heads. It then discusses disk structure in operating systems and concepts like surfaces, tracks, cylinders, and read/write heads. Finally, it covers scheduling and management techniques like long term, short term, and medium term schedulers as well as RAID structures and their benefits like increased reliability and performance.
This document discusses threads and threading models. It defines a thread as the basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers. Threads allow concurrent execution of tasks within the same process by rapidly switching between threads. There are three main threading models: many-to-one maps many user threads to one kernel thread; one-to-one maps each user thread to its own kernel thread; many-to-many maps user threads to kernel threads in a variable manner. Popular thread libraries include POSIX pthreads and Win32 threads.
The document discusses deadlocks in computer systems. It defines deadlock, presents examples, and describes four conditions required for deadlock to occur. Several methods for handling deadlocks are discussed, including prevention, avoidance, detection, and recovery. Prevention methods aim to ensure deadlocks never occur, while avoidance allows the system to dynamically prevent unsafe states. Detection identifies when the system is in a deadlocked state.
This document describes the Banker's algorithm, a deadlock avoidance algorithm used in operating systems and named after the way a banker might extend credit without exceeding available cash. It describes the key data structures used in the algorithm, including Available (resources available), Max (maximum requested by each process), Allocation (resources allocated to each process), and Need (remaining needs). The resource request algorithm checks that the request is within the process's declared needs and that sufficient resources are available, then updates the data structures if the request is granted. The safety algorithm checks that the system is in a safe state, meaning there is some order in which all processes could acquire their remaining resources and complete.
This document discusses deadlocks in operating systems. It defines a deadlock as a set of blocked processes that are each holding a resource and waiting for a resource held by another process. Four conditions must be met for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlocks can be modeled using directed resource allocation graphs. Methods for handling deadlocks include prevention, avoidance, detection, and recovery.
This document summarizes the Banker's Algorithm, which is used to determine whether a set of pending resource requests can safely be granted or whether the requesting processes should wait. It outlines the key data structures, the Available vector and the Max, Allocation, and Need matrices, used to track the current resource state. The Safety Algorithm checks whether the system is in a safe state by repeatedly finding a process whose remaining needs can be met, letting it finish and release its resources. The Resource-Request Algorithm simulates allocating resources to a process and checks that the result is a safe state before performing the actual allocation.
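The Safety and Resource-Request algorithms described above can be sketched in a few lines. This is a minimal sketch; the example matrices are the classic five-process, three-resource-type textbook state, chosen here as an assumption for illustration.

```python
def is_safe(available, allocation, need):
    """Safety Algorithm: try to find an order in which every process can finish."""
    work = available[:]
    finish = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)

def request_resources(pid, request, available, allocation, need):
    """Resource-Request Algorithm: grant only if the resulting state is safe."""
    if any(r > n for r, n in zip(request, need[pid])):
        raise ValueError("request exceeds declared maximum need")
    if any(r > a for r, a in zip(request, available)):
        return False  # insufficient resources; the process must wait
    # Tentatively allocate, then run the safety check on the new state.
    new_avail = [a - r for a, r in zip(available, request)]
    new_alloc = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    new_alloc[pid] = [a + r for a, r in zip(new_alloc[pid], request)]
    new_need[pid] = [n - r for n, r in zip(new_need[pid], request)]
    return is_safe(new_avail, new_alloc, new_need)

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]
print(is_safe(available, allocation, need))                          # True
print(request_resources(1, [1, 0, 2], available, allocation, need))  # True
```

Note that `request_resources` only simulates the grant; a real allocator would commit the tentative state when the check succeeds.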
This document discusses various techniques for process synchronization. It begins by defining process synchronization as coordinating access to shared resources between processes to maintain data consistency. It then discusses critical sections, where shared data is accessed, and solutions like Peterson's algorithm and semaphores to ensure only one process accesses the critical section at a time. Semaphores use wait and signal operations on a shared integer variable to synchronize processes. The document covers binary and counting semaphores and provides an example of their use.
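The counting-semaphore technique described above is commonly shown as a bounded buffer. The sketch below is illustrative, not from the document; the semaphore names (`empty`, `full`, `mutex`) and the buffer capacity are assumptions.

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counting semaphore: free slots
full = threading.Semaphore(0)          # counting semaphore: filled slots
mutex = threading.Semaphore(1)         # binary semaphore guarding the buffer

def produce(item):
    empty.acquire()   # wait(empty): block if the buffer is full
    mutex.acquire()   # wait(mutex): enter the critical section
    buffer.append(item)
    mutex.release()   # signal(mutex)
    full.release()    # signal(full): wake a waiting consumer

def consume():
    full.acquire()    # wait(full): block if the buffer is empty
    mutex.acquire()
    item = buffer.popleft()
    mutex.release()
    empty.release()
    return item

results = []
producer = threading.Thread(target=lambda: [produce(i) for i in range(5)])
consumer = threading.Thread(target=lambda: [results.append(consume()) for _ in range(5)])
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # items arrive in FIFO order: [0, 1, 2, 3, 4]
```

With a single producer and consumer the FIFO order is deterministic; the semaphores ensure the consumer never reads an empty buffer and the producer never overfills it.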
This document discusses different memory management techniques used in operating systems. It begins by describing the basic components and functions of memory. It then explains various memory management algorithms like overlays, swapping, paging and segmentation. Overlays divide a program into instruction sets that are loaded and unloaded as needed. Swapping loads entire processes into memory for execution then writes them back to disk. Paging and segmentation are used to map logical addresses to physical addresses through page tables and segment tables respectively. The document compares advantages and limitations of these approaches.
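The paging translation described above (logical address split into page number and offset, mapped through a page table) can be sketched as follows. The 4 KiB page size and the page-table contents are assumptions chosen for the example.

```python
PAGE_SIZE = 4096  # 4 KiB pages, so the low 12 bits are the offset

page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (illustrative)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE   # high bits select the page-table entry
    offset = logical_addr % PAGE_SIZE  # low bits are copied through unchanged
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

# Logical address 4100 is page 1, offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 4096 + 4 = 8196.
print(translate(4100))  # 8196
```

Segmentation works analogously, except the table entry supplies a base address and a limit to check the offset against, rather than a frame number.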
Deadlock occurs when two or more processes are waiting for resources held by each other in a circular chain, resulting in none of the processes making progress. There are four conditions required for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlock can be addressed through prevention, avoidance, detection, or recovery methods. Prevention aims to eliminate one of the four conditions, while avoidance techniques like the safe state model and Banker's Algorithm guarantee a safe allocation order to avoid circular waits.
The document discusses various algorithms for achieving distributed mutual exclusion and process synchronization in distributed systems. It covers centralized, token ring, Ricart-Agrawala, Lamport, and decentralized algorithms. It also discusses election algorithms for selecting a coordinator process, including the Bully algorithm. The key techniques discussed are using logical clocks, message passing, and quorums to achieve mutual exclusion without a single point of failure.
Deadlock detection and recovery by saad symbian
1) Deadlock occurs when there is a cycle of processes where each process is waiting for a resource held by the next process in the cycle.
2) Solutions to deadlock include prevention, avoidance, detection and recovery. Prevention ensures deadlock is impossible through restrictions. Avoidance uses scheduling to steer around deadlock. Detection checks for cycles periodically and recovery kills processes or rolls them back to release resources.
3) Deadlock recovery options include killing all deadlocked processes, killing one at a time to release resources, or rolling processes back to a prior safe state instead of killing them. The process to kill or roll back is chosen based on factors like priority, resources used, and amount of work.
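The victim-selection step described in point 3 can be sketched as a cost function over the deadlocked processes. The weighting below (priority, CPU time consumed, resources held) is an assumed heuristic for illustration, not a standard formula.

```python
# Hypothetical deadlocked processes; fields and values are assumptions.
deadlocked = [
    {"pid": 10, "priority": 2, "cpu_time": 120, "resources_held": 3},
    {"pid": 11, "priority": 5, "cpu_time": 10,  "resources_held": 1},
    {"pid": 12, "priority": 1, "cpu_time": 300, "resources_held": 4},
]

def rollback_cost(p):
    # Cheaper to victimize: low priority, little work done, few resources held.
    # The weights are arbitrary illustrative choices.
    return p["priority"] * 100 + p["cpu_time"] + p["resources_held"] * 10

def pick_victim(processes):
    """Choose the minimum-cost process to terminate or roll back."""
    return min(processes, key=rollback_cost)

print(pick_victim(deadlocked)["pid"])
```

A real system would also track how often each process has been victimized, to avoid starving the same process by repeatedly rolling it back.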
A distributed system consists of multiple connected CPUs that appear as a single system to users. Distributed systems provide advantages like communication, resource sharing, reliability and scalability. However, they require distribution-aware software and uninterrupted network connectivity. Distributed operating systems manage resources across connected computers transparently. They provide various forms of transparency and handle issues like failure, concurrency and replication. Remote procedure calls allow calling remote services like local procedures to achieve transparency.
This document discusses deadlocks in operating systems. A deadlock occurs when a set of processes are blocked waiting for resources held by each other in a cyclic manner. Four conditions must be met for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. The document outlines strategies for detecting and avoiding deadlocks such as deadlock detection algorithms, safe state models like the Banker's Algorithm, and techniques for preventing the four deadlock conditions.
1. There are three methods to handle deadlocks: prevention, avoidance, and detection with recovery.
2. Deadlock prevention ensures that at least one of the necessary conditions for deadlock cannot occur. Deadlock avoidance requires processes to declare maximum resource needs upfront.
3. The Banker's algorithm is a deadlock avoidance technique that dynamically checks the resource allocation state to ensure it remains safe and no circular wait can occur.
Operating System Process Synchronization by Haziq Naeem
This document discusses synchronization between processes. It defines synchronization as the mutual understanding between two or more processes when sharing system resources. It describes critical section problems, solutions like locks, Peterson's solution, and semaphores. It also covers major synchronization problems like bounded buffer, reader-writer, and dining philosophers. Windows uses interrupts to protect shared resources while Linux uses semaphores. Synchronization is important for preventing deadlocks and data inconsistencies to improve efficiency.
There are three main approaches to handling deadlocks: prevention, avoidance, and detection with recovery. Prevention methods constrain how processes request resources so that at least one necessary condition for deadlock cannot occur. Avoidance requires advance knowledge of processes' resource needs to decide whether requests can be immediately satisfied. Detection identifies when a deadlocked state has occurred; the system then recovers by terminating processes or preempting resources.
The Deadlock Problem
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Deadlocks occur when processes are waiting for resources held by other processes, resulting in a circular wait. Four conditions must be met: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlocks can be handled through avoidance, prevention, or detection and recovery. Avoidance algorithms allocate resources only if it ensures the system remains in a safe state where deadlocks cannot occur. Prevention methods make deadlocks impossible by ensuring at least one condition is never satisfied, such as through collective or ordered resource requests. Detection finds existing deadlocks by analyzing resource allocation graphs or wait-for graphs to detect cycles.
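The cycle detection on a wait-for graph mentioned above is a depth-first search for a back edge. This is a minimal sketch; the example edges are assumptions, with an edge P → Q meaning P is waiting for a resource held by Q.

```python
def has_cycle(graph):
    """Detect a cycle in a wait-for graph via DFS three-coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited, on the current DFS path, finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:
                return True  # back edge to the current path: circular wait
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

waiting = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}  # circular wait
print(has_cycle(waiting))                   # True: deadlock detected
print(has_cycle({"P1": ["P2"], "P2": []}))  # False: P2 can finish
```

With single-instance resources a cycle is both necessary and sufficient for deadlock; with multiple instances a cycle only indicates a possible deadlock and the matrix-based detection algorithm is needed instead.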
IPC allows processes to communicate and share resources. There are several common IPC mechanisms, including message passing, shared memory, semaphores, files, signals, sockets, message queues, and pipes. Message passing involves establishing a communication link and exchanging fixed or variable sized messages using send and receive operations. Shared memory allows processes to access the same memory area. Semaphores are used to synchronize processes. Files provide durable storage that outlives individual processes. Signals asynchronously notify processes of events. Sockets enable two-way point-to-point communication between processes. Message queues allow asynchronous communication where senders and receivers do not need to interact simultaneously. Pipes create a pipeline between processes by connecting standard streams.
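The pipe mechanism listed above can be shown in a few lines with `os.pipe`. For brevity this sketch keeps both ends in one process; in real use the read and write descriptors are typically split across a parent and child process.

```python
import os

# Create a unidirectional pipe: data written to write_fd appears at read_fd.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from the writer")
os.close(write_fd)  # closing the write end signals EOF to the reader

message = os.read(read_fd, 1024)
os.close(read_fd)
print(message.decode())
```

The same byte-stream model underlies shell pipelines, where each `|` connects one process's standard output descriptor to the next process's standard input.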
The document discusses CPU scheduling in operating systems. It describes how the CPU scheduler selects processes that are ready to execute and allocates the CPU to one of them. The goals of CPU scheduling are to maximize CPU utilization, minimize waiting times and turnaround times. Common CPU scheduling algorithms discussed are first come first serve (FCFS), shortest job first (SJF), priority scheduling, and round robin scheduling. Multilevel queue scheduling is also mentioned. Examples are provided to illustrate how each algorithm works.
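The FCFS versus SJF comparison above can be made concrete with average waiting time. The burst times below are the classic convoy-effect example (one long job arriving before two short ones), assumed here for illustration; all jobs are taken to arrive at time zero.

```python
def avg_waiting_time(bursts):
    """Average waiting time when bursts run to completion in the given order."""
    waited, elapsed = 0, 0
    for burst in bursts:
        waited += elapsed   # this process waited for everything scheduled before it
        elapsed += burst
    return waited / len(bursts)

bursts = [24, 3, 3]                      # arrival order: long job first
print(avg_waiting_time(bursts))          # FCFS: (0 + 24 + 27) / 3 = 17.0
print(avg_waiting_time(sorted(bursts)))  # SJF runs shortest first: (0 + 3 + 6) / 3 = 3.0
```

Sorting by burst length is exactly the SJF policy for simultaneous arrivals, and it is provably optimal for average waiting time; FCFS pays for whatever order jobs happen to arrive in.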
Process scheduling involves assigning system resources like CPU time to processes. There are three levels of scheduling - long, medium, and short term. The goals of scheduling are to minimize turnaround time, waiting time, and response time for users while maximizing throughput, CPU utilization, and fairness for the system. Common scheduling algorithms include first come first served, priority scheduling, shortest job first, round robin, and multilevel queue scheduling. Newer algorithms like fair share scheduling and lottery scheduling aim to prevent starvation.
OS Process Synchronization, semaphore and Monitors by sgpraju
The document summarizes key concepts in process synchronization and concurrency control, including:
1) Process synchronization techniques like semaphores, monitors, and atomic transactions that ensure orderly access to shared resources. Semaphores use wait() and signal() operations while monitors provide mutual exclusion through condition variables.
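The monitor pattern in point 1, mutual exclusion plus condition variables, can be sketched with `threading.Condition`. The `Mailbox` class below is illustrative, not from any particular library: a one-slot buffer where consumers wait on a condition until a producer signals.

```python
import threading

class Mailbox:
    """Monitor-style one-slot mailbox: a lock plus one condition variable."""

    def __init__(self):
        self._lock = threading.Lock()
        self._nonempty = threading.Condition(self._lock)
        self._item = None

    def put(self, item):
        with self._lock:               # mutual exclusion, as inside a monitor
            self._item = item
            self._nonempty.notify()    # signal() one waiting consumer

    def take(self):
        with self._lock:
            while self._item is None:  # re-check the predicate after every wakeup
                self._nonempty.wait()  # wait() releases the lock while blocked
            item, self._item = self._item, None
            return item

box = Mailbox()
threading.Thread(target=lambda: box.put("ping")).start()
print(box.take())  # blocks until the producer thread has put the item
```

The `while` loop around `wait()` guards against spurious wakeups, which is why monitor code conventionally re-tests its condition rather than using a plain `if`.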
2) Concurrency control algorithms like locking and two-phase locking that ensure serializability of concurrent transactions accessing a database. Locking associates locks with data items to control concurrent access.
3) Challenges in concurrency control like deadlocks, priority inversion, and starvation that synchronization mechanisms aim to prevent. Log-based recovery with write-ahead logging and checkpoints is used to ensure atomicity of transactions.
The document discusses deadlocks in database systems. It defines deadlock as a waiting state where transactions are unable to progress because each is holding a resource needed by another, forming a cyclic dependency. It outlines the four conditions for deadlock - mutual exclusion, hold and wait, no preemption, and circular wait. Methods for handling deadlocks include avoidance, prevention, detection and recovery. Prevention techniques involve locking protocols or transaction rollback with timestamps. Detection uses a wait-for graph to identify cycles indicating deadlocks, and recovery requires selecting victims to rollback to break cycles.
A document about deadlocks in operating systems is summarized as follows:
1. A deadlock occurs when a set of processes form a circular chain where each process is waiting for a resource held by the next process in the chain. The four conditions for deadlock are mutual exclusion, hold and wait, no preemption, and circular wait.
2. Deadlocks can be modeled using a resource allocation graph where processes and resources are vertices and edges represent resource requests. A cycle in the graph indicates a potential deadlock.
3. Methods for handling deadlocks include prevention, avoidance, and detection/recovery. Prevention ensures deadlock conditions cannot occur, while avoidance allows the system to dynamically verify that new allocations will not lead to an unsafe state.
This document discusses deadlocks, including the four conditions required for a deadlock, methods to avoid deadlocks like using safe states and Banker's Algorithm, ways to detect deadlocks using wait-for graphs and detection algorithms, and approaches to recover from deadlocks such as terminating processes or preempting resources.
Deadlock is an important topic in operating systems. This presentation relates deadlock to real-life scenarios and works through solutions using two main algorithms: the Safety Algorithm and the Banker's Algorithm.
The document discusses various techniques for handling deadlocks in computer systems, including deadlock prevention, avoidance, detection, and recovery. It defines the four necessary conditions for deadlock, and describes resource-allocation graphs and wait-for graphs that can be used to model deadlock states. Detection algorithms periodically check the resource-allocation graph for cycles that indicate a deadlock. Upon detection, various recovery techniques can be used like terminating processes, preempting resources, or rolling back to a previous safe state. An optimal approach combines prevention, avoidance and detection tailored for each resource class.
This document discusses various approaches to handling deadlocks in operating systems, including deadlock prevention, avoidance, detection, and recovery. It describes the four necessary conditions for deadlock, and models the problem using resource allocation graphs. Prevention methods aim to enforce constraints to ensure deadlock cannot occur, while avoidance algorithms dynamically monitor the system state to guarantee safety. Detection algorithms periodically search allocation graphs or wait-for graphs to find cycles indicating deadlock. Recovery requires rolling back processes to free locked resources.
This document discusses various approaches to handling deadlocks in operating systems, including deadlock prevention, avoidance, detection, and recovery. It describes the four necessary conditions for deadlock, and models the problem using resource allocation graphs. Prevention methods aim to enforce constraints to ensure deadlock cannot occur, while avoidance algorithms dynamically monitor the system state to guarantee safety. Detection algorithms periodically search allocation graphs or wait-for graphs to find cycles indicating deadlock. Recovery requires rolling back processes to free locked resources.
This chapter discusses deadlocks in computer systems. It defines deadlock as a situation where a set of blocked processes are each holding resources and waiting for additional resources held by other processes in the set, resulting in a circular wait. The chapter presents methods for handling deadlocks including deadlock prevention, avoidance, detection, and recovery. It describes the basic conditions for deadlock and models like the resource allocation graph used to analyze deadlock states. Banker's algorithm is provided as an example of a deadlock avoidance strategy.
The document summarizes different approaches to handling deadlocks in operating systems, including prevention, avoidance, detection, and recovery. It describes the four conditions required for deadlock, and models for representing resource allocation and processes waiting for resources, such as resource allocation graphs and wait-for graphs. Detection algorithms allow the system to enter a deadlocked state and then identify cycles in wait-for graphs to detect deadlocks.
14th November - Deadlock Prevention, Avoidance.pptUnknown664473
This document discusses various techniques for handling deadlocks in operating systems. It begins by defining the deadlock problem and characterizing the conditions required for a deadlock to occur. It then presents different methods for preventing, avoiding, detecting, and recovering from deadlocks, including deadlock prevention, avoidance using safe states and the banker's algorithm, detection using wait-for graphs and the resource allocation graph, and recovery through process termination or resource preemption.
This document discusses deadlocks in computer systems. It defines a deadlock as a set of blocked processes where each process is holding a resource and waiting for a resource held by another process in the set, resulting in circular waiting. It presents examples of deadlock situations and describes the conditions required for deadlock, including mutual exclusion, hold and wait, no preemption, and circular wait. Methods for handling deadlocks include prevention, avoidance, and detection and recovery. Prevention ensures deadlocks never occur through restrictions, while avoidance uses online algorithms to ensure the system remains in a safe state where deadlocks cannot arise.
This document discusses various techniques for handling deadlocks in operating systems, including prevention, avoidance, detection, and recovery. Deadlock prevention methods ensure deadlock conditions cannot occur by restricting resource usage. Deadlock avoidance algorithms dynamically examine the resource allocation state to guarantee the system remains in a safe state. Detection algorithms search for resource allocation cycles to identify deadlocks. Recovery methods terminate or roll back processes involved in deadlocks. The document provides examples to illustrate these deadlock handling techniques.
This document discusses deadlocks in operating systems. It defines deadlock as when a set of processes are all waiting for resources held by each other in a cyclic manner, preventing any progress. Four conditions must hold for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Detection methods include using a resource allocation graph to model processes and resources. Prevention techniques enforce restrictions like requiring processes request resources in a predefined order to avoid circular waits. The banker's algorithm is also described, which uses matrices to model available resources and ensure the system remains in a safe state to avoid deadlocks.
This chapter discusses deadlocks in computer systems. It defines deadlock as when a set of blocked processes wait indefinitely for resources held by each other. Four conditions must be met for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Methods to handle deadlocks include prevention, avoidance, and recovery. Prevention ensures deadlocks cannot occur through techniques like not allowing certain resource requests. Avoidance uses algorithms like the banker's algorithm to dynamically ensure the system remains in a safe state where deadlocks cannot form. Recovery methods terminate processes or preempt resources to break deadlock cycles when they do occur.
The document discusses different methods for handling deadlocks in computer systems, including deadlock prevention, avoidance, detection, and recovery. It describes the four necessary conditions for deadlock, and models like the resource allocation graph and banker's algorithm that can be used to prevent or avoid deadlocks by ensuring the system remains in a safe state where deadlocks cannot occur. Detection methods allow the system to enter a deadlocked state before taking action to recover through rollback or preemption.
Deadlock occurs in a system when multiple processes are waiting indefinitely for resources held by other waiting processes, resulting in no progress. The four conditions required for deadlock are mutual exclusion, hold and wait, no preemption, and circular wait. Deadlock can be avoided by ensuring that at least one of these conditions does not occur through methods like deadlock prevention, deadlock avoidance using safe sequences, and the banker's algorithm.
This document discusses different methods for handling deadlocks in computer systems, including deadlock prevention, avoidance, and detection. Deadlock prevention methods aim to ensure a system will never enter a deadlocked state by enforcing rules like mutual exclusion of resources and requiring processes to request all resources before starting. Deadlock avoidance uses algorithms like the banker's algorithm to dynamically examine the system state and ensure it remains in a "safe" state where deadlocks cannot occur. Deadlock detection allows the system to enter a deadlocked state but periodically checks a wait-for graph representing resource dependencies between processes to detect any cycles that indicate a deadlock.
The document discusses deadlocks in operating systems. It defines deadlocks as a situation where a set of concurrent processes are prevented from completing tasks due to circular resource dependencies. The document outlines four conditions required for deadlock and presents methods for handling deadlocks, including prevention, avoidance, detection, and recovery. It provides examples using resource allocation graphs and the banker's algorithm to model processes and resources to ensure the system remains in a safe state to avoid deadlocks.
Deadlocks occur when a set of blocked processes each hold resources and wait for resources held by other processes in the set, resulting in a circular wait. The four necessary conditions for deadlock are: mutual exclusion, hold and wait, no preemption, and circular wait. The banker's algorithm is a deadlock avoidance technique that requires processes to declare maximum resource needs upfront. It ensures the system is always in a safe state by delaying resource requests that could lead to an unsafe state where deadlock is possible.
This document summarizes a chapter on deadlocks from an operating systems textbook. It defines deadlock as when a set of blocked processes wait for resources held by each other. Four conditions must be met for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. Methods to handle deadlocks include prevention, avoidance, detection, and recovery. Prevention ensures deadlocks cannot occur by restricting resource usage. Avoidance dynamically checks the system state remains safe to prevent deadlocks. Detection allows deadlocks but recovers the system. Recovery options are terminating processes or preempting resources.
This document discusses different methods for handling deadlocks in computer systems, including deadlock prevention, avoidance, detection, and recovery. It presents the four conditions required for deadlock, describes resource allocation graphs and how they can indicate deadlock situations. Prevention methods restrain how processes request resources to ensure deadlock cannot occur. Avoidance algorithms like the banker's algorithm dynamically assess the resource allocation state to guarantee a safe state where deadlock is not possible. Detection algorithms search allocation and wait graphs periodically to find any cycles indicating deadlock, and recovery aims to resolve deadlocks when they are detected.
Get a piece of basic knowledge about deadlock and what it is. What are the measures., conditions, and more. Also, know about Banker's algorithm and how does the calculation work.
The document discusses deadlocks in computer systems. It defines deadlocks as when a set of blocked processes each hold a resource and wait for a resource held by another process in the set, creating a circular wait. It presents methods for handling deadlocks including prevention, avoidance, detection, and recovery. Prevention methods restrict how processes can request resources to avoid deadlock conditions. Avoidance methods use additional information to ensure the system never enters an unsafe state allowing deadlocks. Detection finds cycles in a wait-for graph to detect existing deadlocks while recovery schemes deal with detected deadlocks.
Deadlock in Real Time operating Systempptxssuserca5764
This document discusses deadlocks in operating systems. It begins by defining a deadlock as a set of blocked processes each holding a resource and waiting for a resource held by another process. It then covers various methods for handling deadlocks including prevention, avoidance, detection, and recovery. Prevention methods restrain how processes can request resources to ensure deadlocks cannot occur. Avoidance methods allow requests but ensure the system never enters an unsafe state. Detection finds deadlocks after they occur, while recovery rolls back processes or preempts resources to break deadlock cycles.
The objectives of Deadlocks in Operating Systems are:
- To develop a description of deadlocks, which prevent sets of concurrent processes from completing their tasks
- To present a number of different methods for preventing or avoiding deadlocks in a computer system
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
When it is all about ERP solutions, companies typically meet their needs with common ERP solutions like SAP, Oracle, and Microsoft Dynamics. These big players have demonstrated that ERP systems can be either simple or highly comprehensive. This remains true today, but there are new factors to consider, including a promising new contender in the market that’s Odoo. This blog compares Odoo ERP with traditional ERP systems and explains why many companies now see Odoo ERP as the best choice.
What are ERP Systems?
An ERP, or Enterprise Resource Planning, system provides your company with valuable information to help you make better decisions and boost your ROI. You should choose an ERP system based on your company’s specific needs. For instance, if you run a manufacturing or retail business, you will need an ERP system that efficiently manages inventory. A consulting firm, on the other hand, would benefit from an ERP system that enhances daily operations. Similarly, eCommerce stores would select an ERP system tailored to their needs.
Because different businesses have different requirements, ERP system functionalities can vary. Among the various ERP systems available, Odoo ERP is considered one of the best in the ERp market with more than 12 million global users today.
Odoo is an open-source ERP system initially designed for small to medium-sized businesses but now suitable for a wide range of companies. Odoo offers a scalable and configurable point-of-sale management solution and allows you to create customised modules for specific industries. Odoo is gaining more popularity because it is built in a way that allows easy customisation, has a user-friendly interface, and is affordable. Here, you will cover the main differences and get to know why Odoo is gaining attention despite the many other ERP systems available in the market.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Drona Infotech is a premier mobile app development company in Noida, providing cutting-edge solutions for businesses.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdfVALiNTRY360
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
UI5con 2024 - Bring Your Own Design SystemPeter Muessig
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed own Web Components of your design system which are created with Stencil. The integration embeds the Web Components in a way that they can be used naturally in XMLViews, like with standard UI5 controls, and can be bound with data binding. Learn how you can also make use of the Web Components base class in OpenUI5/SAPUI5 to also integrate your Web Components and get inspired by the solution to generate a custom UI5 library providing the Web Components control wrappers for the native ones.
DEADLOCK AVOIDANCE
Deadlock-prevention algorithms work by restraining how requests can be made; the restraints ensure that at least one of the necessary conditions for deadlock cannot hold.
Possible side effects of preventing deadlocks are low device utilization and reduced system throughput.
Deadlock Avoidance - An alternative method for avoiding deadlocks is to require additional information about how resources are to be requested.
Each request requires that the system consider
• the resources currently available
• the resources currently allocated to each process
• the future requests and releases of each process
to decide whether the current request can be satisfied or must wait to avoid a possible future deadlock.
Resource Allocation State
The resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
Safe State
A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
Safe sequence
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
A safe state is not a deadlock state. Conversely, a deadlock state is an unsafe
state. Not all unsafe states are deadlocks. An unsafe state may lead to a
deadlock.
Example
Consider a system with 12 magnetic tape drives and 3 processes: P0, P1,
and P2.
At time t0, the system is in a safe state: the sequence <P1, P0, P2> satisfies the safety condition.
It is possible to go from a safe state to an unsafe state. At time t1, process P2 requests and is allocated 1 more tape drive, and the system is no longer in a safe state.
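The safe-to-unsafe transition in this example can be checked with a short sketch. The per-process holdings and maximum needs below are the standard textbook figures for this scenario (the slide does not list them), so they should be read as assumptions:

```python
# Tape-drive example: 12 drives total. Assumed (textbook) figures:
# P0 holds 5 (max 10), P1 holds 2 (max 4), P2 holds 2 (max 9).
total = 12
holds = {"P0": 5, "P1": 2, "P2": 2}
maxes = {"P0": 10, "P1": 4, "P2": 9}

def is_safe(total, holds, maxes):
    """Greedily try to build a safe sequence; None if no sequence exists."""
    free = total - sum(holds.values())
    holds = dict(holds)            # work on a copy
    order = []
    while holds:
        # find a process whose remaining need fits in the free drives
        runnable = [p for p in holds if maxes[p] - holds[p] <= free]
        if not runnable:
            return None            # no safe sequence exists: unsafe state
        p = min(runnable, key=lambda p: maxes[p] - holds[p])
        free += holds.pop(p)       # p runs to completion and releases its drives
        order.append(p)
    return order

print(is_safe(total, holds, maxes))   # ['P1', 'P0', 'P2']: safe at t0
holds["P2"] = 3                       # at t1, P2 is granted one more drive
print(is_safe(total, holds, maxes))   # None: the state is no longer safe
```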
Resource-Allocation Graph Algorithm
If we have a resource-allocation system with only one instance of each
resource type, a variant of the resource-allocation graph can be used for
deadlock avoidance.
In addition to the request and assignment edges, we introduce a new type of
edge, called a claim edge.
Claim Edge - A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. This edge resembles a request edge in direction, but is represented by a dashed line.
When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj.
Conditions
Before process Pi starts executing, all its claim edges must already appear in the resource-allocation graph.
We can relax this condition by allowing a claim edge Pi → Rj to be added to the graph only if all the edges associated with process Pi are claim edges.
Suppose process Pi requests resource Rj. The request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the resource-allocation graph.
If no cycle exists, then the allocation of the resource will leave the system in a safe state. If a cycle is found, then the allocation will put the system in an unsafe state, and process Pi will have to wait for its request to be satisfied.
Example (resource-allocation graph with a claim edge; shown as a figure on the original slide)
BANKER'S ALGORITHM
The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type.
When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system.
When a user requests a set of resources, the system must determine whether the allocation of the resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the process must wait until some other process releases enough resources.
Let n be the number of processes in the system and m be the number of
resource types
Data structures
Available: A vector of length m indicates the number of available resources
of each type. If Available[j] = k, there are k instances of resource type Rj
available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.
Need: An n x m matrix indicates the remaining resource need of each
process. If Need[i,j] = k, then Pi may need k more instances of resource type
Rj to complete its task.
Note that Need[i,j] = Max[i,j] - Allocation[i,j].
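The Need relation above can be computed directly; a minimal sketch (the values are illustrative only):

```python
# Need[i][j] = Max[i][j] - Allocation[i][j], computed row by row.
def need(max_m, alloc):
    return [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_m, alloc)]

# Two processes, three resource types (illustrative values):
Max        = [[7, 5, 3], [3, 2, 2]]
Allocation = [[0, 1, 0], [2, 0, 0]]
print(need(Max, Allocation))   # [[7, 4, 3], [1, 2, 2]]
```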
SAFETY ALGORITHM
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
Steps of Algorithm:
1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available and Finish[i] = false for i = 1 to n.
2. Find an index i such that both
a) Finish[i] == false
b) Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
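The four steps above translate almost line for line into code; a minimal, unoptimized sketch with an illustrative two-process state:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices, or None."""
    n, m = len(allocation), len(available)
    work = list(available)                  # step 1
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):                  # step 2: find Pi with Needi <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # step 3: Pi finishes and releases
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    # step 4: the state is safe iff every process could finish
    return sequence if all(finish) else None

# Illustrative state: two processes, three resource types.
print(is_safe([3, 3, 2],
              [[0, 1, 0], [2, 0, 0]],
              [[3, 2, 2], [1, 2, 2]]))   # [0, 1]: a safe sequence exists
```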
RESOURCE-REQUEST ALGORITHM
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti and the old resource-allocation state is restored.
Example
Consider a system with five processes P0 through P4 and three resource types: A [10 instances], B [5 instances], and C [7 instances]. At time T0, the following snapshot of the system has been taken (shown as a figure on the original slide):
Need = Max - Allocation
The system is in a safe state: <P1, P3, P4, P2, P0> is a safe sequence.
Suppose process P1 requests one additional instance of resource type A and two instances of resource type C, so Request1 = (1,0,2).
Can we grant the request?
First check that Request1 <= Available (that is, (1,0,2) <= (3,3,2)), which is true.
We then pretend that this request has been fulfilled, and we arrive at a new state.
We must determine whether this new state is safe. To do so, we execute our safety algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. Hence, we can immediately grant the request of process P1.
A request for (3,3,0) by P4 cannot be granted, since the resources are not
available.
A request for (0,2,0) by P0 cannot be granted, even though the resources are
available, since the resulting state is unsafe.
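The request sequence in this example can be replayed in code. The snapshot matrices appear only as figures on the original slides; the values below are the standard textbook snapshot consistent with Available = (3,3,2) and the safe sequence <P1, P3, P4, P2, P0>, so they should be read as assumptions:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices, or None."""
    work, n, m = list(available), len(allocation), len(available)
    finish, seq = [False] * n, []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i], progressed = True, True
                seq.append(i)
    return seq if all(finish) else None

def try_request(i, request, available, allocation, need):
    """Resource-request algorithm: grant only if the result is safe."""
    if any(r > x for r, x in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")     # step 1
    if any(r > x for r, x in zip(request, available)):
        return False                                               # step 2: wait
    # step 3: pretend to allocate, then run the safety algorithm
    av = [x - r for x, r in zip(available, request)]
    al = [row[:] for row in allocation]
    al[i] = [x + r for x, r in zip(al[i], request)]
    nd = [row[:] for row in need]
    nd[i] = [x - r for x, r in zip(nd[i], request)]
    if is_safe(av, al, nd) is None:
        return False            # unsafe: keep the old state, Pi must wait
    available[:], allocation[i][:], need[i][:] = av, al[i], nd[i]
    return True

# Standard textbook snapshot (assumed; the slide's matrices are figures):
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[mx - a for mx, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]

print(try_request(1, [1, 0, 2], available, allocation, need))  # True: granted
print(try_request(4, [3, 3, 0], available, allocation, need))  # False: not available
print(try_request(0, [0, 2, 0], available, allocation, need))  # False: unsafe
```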
DEADLOCK DETECTION
If a system does not employ a deadlock-prevention or deadlock-avoidance algorithm, it must provide:
• An algorithm that examines the state of the system to determine whether a deadlock has occurred
• An algorithm to recover from the deadlock
Single Instance of Each Resource Type
Uses a variant of the resource-allocation graph, called the wait-for graph.
Wait-for graph
We obtain this graph from the resource-allocation graph by removing the nodes of type resource and collapsing the appropriate edges.
An edge Pi → Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi needs.
An edge Pi → Pj exists in a wait-for graph if and only if the corresponding resource-allocation graph contains two edges Pi → Rq and Rq → Pj for some resource Rq.
A deadlock exists in the system if and only if the wait-for graph contains a cycle.
Resource allocation graph and corresponding wait-for graph
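Since a deadlock corresponds to a cycle in the wait-for graph, the check itself is a plain depth-first search; a minimal sketch (process names and edges are illustrative):

```python
def has_cycle(wait_for):
    """DFS cycle detection on a wait-for graph {proc: [procs it waits on]}."""
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GREY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:      # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE:
                color[q] = WHITE                 # ensure q has an entry
                if dfs(q):
                    return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a deadlock cycle.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```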
Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system
with multiple instances of each resource type
DEADLOCK DETECTION ALGORITHM
Data structures used by deadlock detection algorithm
Available: A vector of length m indicates the number of available resources
of each type.
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process
Request: An n x m matrix indicates the current request of each process. If
Request[i,j] = k, then process Pi is requesting k more instances of resource
type Rj.
ALGORITHM
1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available. For i = 0, 1, ..., n-1, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both
a. Finish[i] == false
b. Requesti <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then process Pi is deadlocked.
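The steps above can be sketched directly. This version initializes Finish from Allocation (a process holding no resources cannot be part of a deadlock, per the standard formulation), and the snapshot values mirror the example that the original shows only as figures, so they should be read as assumptions:

```python
def detect_deadlock(available, allocation, request):
    """Return the indices of deadlocked processes (empty list if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # step 1: a process holding no resources cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):          # step 2: Pi's outstanding requests fit
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):  # step 3: assume Pi finishes and releases
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    # step 4: every process that could not finish is deadlocked
    return [i for i in range(n) if not finish[i]]

# Assumed snapshot: resource types A (7 instances), B (2), C (6).
available  = [0, 0, 0]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock(available, allocation, request))   # []: no deadlock

request[2] = [0, 0, 1]   # P2 now requests one more instance of type C
print(detect_deadlock(available, allocation, request))   # [1, 2, 3, 4]
```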
Example
Five processes P0 through P4 and three resource types A, B, C.
Resource type A has 7 instances, resource type B has 2 instances, and resource type C has 6 instances.
At time T0 (the snapshot matrices appear as a figure on the original slide):
The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Suppose now that process P2 makes one additional request for an instance of type C.
Modified Request Matrix
The system is now in a deadlocked state, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
When to invoke the detection algorithm depends on:
1. How often is a deadlock likely to occur?
2. How many processes will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be
invoked frequently.
RECOVERY FROM DEADLOCK
Methods
1. Process Termination
2. Resource Pre-emption
Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods.
In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes
2. Abort one process at a time until the deadlock cycle is eliminated
How to choose the process?
1. What the priority of the process is
2. How long the process has computed, and how much longer the process
will compute before completing its designated task
3. How many and what type of resources the process has used (for example,
whether the resources are simple to preempt)
4. How many more resources the process needs in order to complete
5. How many processes will need to be terminated
6. Whether the process is interactive or batch
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other processes
until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to
be addressed:
1. Selecting a victim
2. Rollback
3. Starvation
COMBINED APPROACH TO DEADLOCK HANDLING
None of the basic approaches for handling deadlocks (prevention,
avoidance, and detection) alone is appropriate for the entire spectrum of
resource-allocation problems encountered in operating systems.
One possibility is to combine the three basic approaches, allowing the use
of the optimal approach for each class of resources in the system. The
proposed method is based on the notion that resources can be partitioned
into classes that are hierarchically ordered.
A resource-ordering technique is applied to the classes. Within each class,
the most appropriate technique for handling deadlocks can be used.
Example
Consider a system that consists of the following four classes of
resources:
• Internal resources: Resources used by the system, such as a process
control block
• Central memory: Memory used by a user job
• Job resources: Assignable devices (such as a tape drive) and files
• Swappable space: Space for each user job on the backing store
One mixed deadlock solution for this system orders the classes as shown,
and uses the following approaches to each class:
• Internal resources: Prevention through resource ordering can be used,
since run-time choices between pending requests are unnecessary.
• Central memory: Prevention through preemption can be used, since a job
can always be swapped out, and the central memory can be preempted.
• Job resources: Avoidance can be used, since the information needed about
resource requirements can be obtained from the job-control cards.
• Swappable space: Preallocation can be used, since the maximum storage
requirements are usually known.
SUMMARY
A deadlock state occurs when two or more processes are waiting indefinitely
for an event that can be caused only by one of the waiting processes.
There are principally three methods for dealing with deadlocks:
• Use some protocol to ensure that the system will never enter a deadlock
state.
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem altogether, and pretend that deadlocks never occur
in the system.
A deadlock situation may occur if and only if four necessary conditions
hold simultaneously in the system: mutual exclusion, hold and wait, no
preemption, and circular wait. To prevent deadlocks, we ensure that at least
one of the necessary conditions never holds.
Another method for avoiding deadlocks that is less stringent than the
prevention algorithms is to have a priori information on how each process
will be utilizing the resources. The banker's algorithm needs to know the
maximum number of each resource class that may be requested by each
process. Using this information, we can define a deadlock-avoidance
algorithm.
If a system does not employ a protocol to ensure that deadlocks will never
occur, then a detection and recovery scheme must be employed. A deadlock
detection algorithm must be invoked to determine whether a deadlock has
occurred. If a deadlock is detected, the system must recover either by
terminating some of the deadlocked processes, or by preempting resources
from some of the deadlocked processes.
In a system that selects victims for rollback primarily on the basis of cost
factors, starvation may occur. As a result, the selected process never
completes its designated task.
Finally, researchers have argued that none of these basic approaches alone
is appropriate for the entire spectrum of resource-allocation problems
in operating systems. The basic approaches can be combined, allowing the
separate selection of an optimal one for each class of resources in a system.
PROCESS SYNCHRONIZATION
Cooperating Process
A cooperating process is one that can affect or be affected by the other
processes executing in the system. Cooperating processes may either
directly share a logical address space (that is, both code and data), or be
allowed to share data only through files.
THE CRITICAL-SECTION PROBLEM
Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
• Each process has a segment of code, called a critical section, in which
the process may be changing common variables, updating a table,
writing a file, and so on.
• The important feature of the system is that, when one process is
executing in its critical section, no other process is to be allowed to
execute in its critical section.
• The execution of critical sections by the processes is mutually
exclusive in time.
• The critical-section problem is to design a protocol that the processes
can use to cooperate.
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.
A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and there exist
some processes that wish to enter their critical sections, then only those
processes that are not executing in their remainder section can participate
in the decision of which will enter its critical section next, and this selection
cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
The general structure of a typical process Pi, in the standard textbook
presentation, is:
do {
    entry section
        critical section
    exit section
        remainder section
} while (1);
SYNCHRONIZATION HARDWARE
In a uniprocessor environment, the critical-section problem can be solved
by not allowing interrupts to occur while a shared variable is being
modified.
Disabling interrupts on a multiprocessor can be time-consuming, as the
message is passed to all the processors. This message passing delays entry
into each critical section, and system efficiency decreases. Also, consider
the effect on a system's clock, if the clock is kept updated by interrupts.
Many machines therefore provide special hardware instructions that allow
us either to test and modify the content of a word, or to swap the contents
of two words, atomically.
The important characteristic is that this instruction is executed atomically,
that is, as one uninterruptible unit. Thus, if two Test-and-Set instructions are
executed simultaneously (each on a different CPU), they will be executed
sequentially in some arbitrary order. If the machine supports the Test-and-
Set instruction, then we can implement mutual exclusion by declaring a
Boolean variable lock, initialized to false.
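A sketch of that idea in Python, with the hardware's atomicity emulated by a small helper lock (the helper class is an assumption of this sketch; real hardware provides the atomicity directly):

```python
import threading

class Word:
    """A Boolean memory word with an emulated atomic Test-and-Set."""
    def __init__(self):
        self.value = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        with self._atomic:
            old = self.value
            self.value = True             # set the word to true...
            return old                    # ...and return its former contents

lock = Word()                             # the Boolean variable "lock", false
count = 0

def worker():
    global count
    for _ in range(500):
        while lock.test_and_set():        # spin while the old value was true
            pass                          # entry section (busy wait)
        count += 1                        # critical section
        lock.value = False                # exit section: release the lock

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 1500: mutual exclusion preserved every increment
```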
SEMAPHORES
Semaphores are a synchronization tool used for more complex problems.
A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait and signal.
The classical definitions of wait and signal are

wait(S):   while S <= 0 do no-op;
           S := S - 1;

signal(S): S := S + 1;

Modifications to the integer value of the semaphore in the wait and signal
operations must be executed indivisibly.
Conditions
No two processes can simultaneously modify the same semaphore value. In
addition, in the case of wait(S), the testing of the integer value of S
(S <= 0) and its possible modification (S := S - 1) must also be executed
without interruption.
Usage
We can use semaphores to deal with the n-process critical-section problem.
The n processes share a semaphore, mutex (standing for mutual exclusion),
initialized to 1.
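A runnable sketch of this usage: five "processes" (threads here) share a semaphore mutex initialized to 1 and touch a shared variable only between wait and signal:

```python
import threading

mutex = threading.Semaphore(1)   # shared by all n processes, initialized to 1
shared = 0

def process(iterations):
    global shared
    for _ in range(iterations):
        mutex.acquire()          # wait(mutex): entry section
        shared += 1              # critical section
        mutex.release()          # signal(mutex): exit section
                                 # remainder section

threads = [threading.Thread(target=process, args=(1000,)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)   # 5000: no updates were lost
```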
Implementation
The semaphore definitions given so far all require busy waiting.
While a process is in its critical section, any other process that tries to enter
its critical section must loop continuously in the entry code. This continual
looping is clearly a problem in a real multiprogramming system, where a
single CPU is shared among many processes. Busy waiting wastes CPU
cycles that some other process might be able to use productively. This type
of semaphore is also called a spinlock.
Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for
process synchronization.
The definitions of wait and signal are as follows:
• Wait
The wait operation repeatedly tests S and, once S is positive, decrements
it. While S is zero or negative, the process busy-waits and no decrement
is performed.

wait(S)
{
    while (S <= 0)
        ;          /* busy wait */
    S--;
}

• Signal
The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary
semaphores. Details about these are given as follows.
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain.
They are used to coordinate resource access, where the semaphore count is
the number of available resources. If resources are added, the semaphore
count is automatically incremented, and if resources are removed, the count
is decremented.
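A sketch of this use of a counting semaphore: a hypothetical pool of three identical resources, where the semaphore's count tracks availability, so at most three users are ever inside at once:

```python
import threading

pool = threading.Semaphore(3)    # count = number of available resources
guard = threading.Lock()         # protects the bookkeeping counters only
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    pool.acquire()               # wait: take a resource (blocks at count 0)
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    with guard:
        in_use -= 1
    pool.release()               # signal: return the resource

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
assert peak <= 3                 # never more users than resources
```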
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation succeeds only when the semaphore
is 1, and the signal operation succeeds only when the semaphore is 0. Binary
semaphores are sometimes easier to implement than counting semaphores.
Advantages of Semaphores
Some of the advantages of semaphores are as follows:
Semaphores allow only one process into the critical section. They follow
the mutual exclusion principle strictly and are much more efficient than
some other methods of synchronization.
With a blocking (queue-based) implementation, there is no resource wastage
from busy waiting, as processor time is not wasted repeatedly checking
whether a condition is fulfilled before a process may access the critical
section.
Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows:
Semaphores are complicated, so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use, as their use leads to loss of
modularity. This happens because the wait and signal operations prevent the
creation of a structured layout for the system.
Semaphores may lead to priority inversion, where low-priority processes
may access the critical section first and high-priority processes later.
Deadlocks and Starvation
The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes. The event in
question is the execution of a signal operation. When such a state is reached,
these processes are said to be deadlocked.
Example
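A classic illustration of such a deadlock: P0 executes wait(S) then wait(Q), while P1 executes wait(Q) then wait(S). In this Python sketch the second acquire uses a timeout so the program terminates; real wait operations would block forever:

```python
import threading
import time

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
results = {}

def p0():
    S.acquire()                                # wait(S)
    time.sleep(0.3)                            # let P1 grab Q meanwhile
    results["p0"] = Q.acquire(timeout=0.5)     # wait(Q): can never succeed

def p1():
    Q.acquire()                                # wait(Q)
    time.sleep(0.3)
    results["p1"] = S.acquire(timeout=0.5)     # wait(S): can never succeed

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start(); t0.join(); t1.join()
# Each process holds one semaphore while waiting for the semaphore held by
# the other, so neither acquire can ever succeed: both time out here.
print(results)
```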
Another problem related to deadlocks is indefinite blocking or starvation, a
situation where processes wait indefinitely within the semaphore. Indefinite
blocking may occur if we add and remove processes from the list associated
with a semaphore in LIFO order.
CLASSICAL PROBLEMS OF SYNCHRONIZATION
The Bounded-Buffer Problem
The bounded-buffer problem (also known as the producer-consumer
problem) is a classic example of concurrent access to a shared resource. A
bounded buffer lets multiple producers and multiple consumers share a
single buffer. Producers must block if the buffer is full; consumers must
block if the buffer is empty.
About the Producer-Consumer Problem
The producer-consumer problem is a classic problem used for
synchronization between more than one process.
In the producer-consumer problem, there is a producer that is producing
something and a consumer that is consuming the products produced by the
producer.
The producers and consumers share the same fixed-size memory buffer.
The job of the Producer is to generate the data, put it into the buffer, and
again start generating data.
While the job of the Consumer is to consume the data from the buffer.
Problems that might occur in the Producer-Consumer
• The producer should produce data only when the buffer is not full. If
the buffer is full, then the producer shouldn't be allowed to put any
data into the buffer.
• The consumer should consume data only when the buffer is not empty.
If the buffer is empty, then the consumer shouldn't be allowed to take
any data from the buffer.
• The producer and consumer should not access the buffer at the same
time.
The structure of the producer process uses these semaphores. The empty and
full semaphores count the number of empty and full buffers, respectively.
The semaphore empty is initialized to the value n; the semaphore full is
initialized to the value 0.
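A runnable sketch of the bounded buffer with these semaphores (the buffer size n = 5 is an arbitrary choice for this sketch):

```python
import threading
from collections import deque

n = 5                                   # buffer capacity
buffer = deque()
empty = threading.Semaphore(n)          # counts empty slots, initialized to n
full = threading.Semaphore(0)           # counts full slots, initialized to 0
mutex = threading.Semaphore(1)          # guards the buffer itself
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty): block if buffer is full
        with mutex:
            buffer.append(item)         # add the item to the buffer
        full.release()                  # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                  # wait(full): block if buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))   # True: all items arrive, in order
```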
The Readers and Writers Problem
The readers-writers problem relates to an object such as a file that is shared
between multiple processes. Some of these processes are readers i.e. they
only want to read the data from the object and some of the processes are
writers i.e. they want to write into the object.
The readers-writers problem is used to manage synchronization so that there
are no problems with the object data. For example, if two readers access the
object at the same time, there is no problem. However, if two writers or a
reader and a writer access the object at the same time, there may be problems.
To solve this situation, a writer should get exclusive access to the object:
when a writer is accessing the object, no reader or writer may access it.
However, multiple readers can access the object at the same time.
This can be implemented using semaphores. The code for the reader and
writer processes in the readers-writers problem is given as follows.
Reader Process

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);
.
.  READ THE OBJECT
.
wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);
In the above code, mutex and wrt are semaphores that are initialized to 1.
Also, rc is a variable that is initialized to 0. The mutex semaphore ensures
mutual exclusion, and wrt handles the writing mechanism; it is common to
the reader and writer process code.
The variable rc denotes the number of readers accessing the object. As soon
as rc becomes 1, the wait operation is used on wrt. This means that a writer
cannot access the object anymore. After the read operation is done, rc is
decremented. When rc becomes 0, the signal operation is used on wrt, so a
writer can access the object again.
Writer Process
wait(wrt);
.
. WRITE INTO THE OBJECT
.
signal(wrt);
If a writer wants to access the object, the wait operation is performed on
wrt. After that, no other writer can access the object. When a writer is done
writing into the object, the signal operation is performed on wrt.
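The same protocol in runnable form: one reader holds the object briefly while a writer arrives, and the writer's wait(wrt) keeps it out until the last reader leaves. The sleep durations are arbitrary choices for this sketch:

```python
import threading
import time

mutex = threading.Semaphore(1)     # protects the reader count rc
wrt = threading.Semaphore(1)       # writers' exclusive access
rc = 0
log = []                           # event order (append is atomic under the GIL)

def reader(hold):
    global rc
    mutex.acquire()
    rc += 1
    if rc == 1:
        wrt.acquire()              # first reader in: lock writers out
    mutex.release()
    log.append("read-start")       # READ THE OBJECT
    time.sleep(hold)
    log.append("read-end")
    mutex.acquire()
    rc -= 1
    if rc == 0:
        wrt.release()              # last reader out: writers may proceed
    mutex.release()

def writer():
    wrt.acquire()                  # wait(wrt)
    log.append("write")            # WRITE INTO THE OBJECT
    wrt.release()                  # signal(wrt)

r = threading.Thread(target=reader, args=(0.3,))
w = threading.Thread(target=writer)
r.start(); time.sleep(0.1); w.start()
r.join(); w.join()
print(log)   # the write appears only after the read has ended
```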
The Dining-Philosophers Problem
The dining-philosophers problem states that there are five philosophers
sharing a circular table, and they eat and think alternately. There is a bowl
of rice for each of the philosophers and five chopsticks. A philosopher needs
both their right and left chopsticks to eat. A hungry philosopher may eat
only if both chopsticks are available; otherwise, the philosopher puts down
the chopsticks and begins thinking again.
The dining-philosophers problem is a classic synchronization problem, as it
demonstrates a large class of concurrency-control problems.
Solution of the Dining-Philosophers Problem
One solution to the dining-philosophers problem is to use a semaphore to
represent each chopstick. A chopstick can be picked up by executing a wait
operation on the semaphore and released by executing a signal operation.
The representation of the chopsticks is shown below:
semaphore chopstick[5];
Initially, all elements of chopstick are initialized to 1, as the chopsticks are
on the table and not yet picked up by any philosopher.
The structure of an arbitrary philosopher i is given as follows:
do {
    wait( chopstick[i] );
    wait( chopstick[(i+1) % 5] );
    .
    .  EATING THE RICE
    .
    signal( chopstick[i] );
    signal( chopstick[(i+1) % 5] );
    .
    .  THINKING
    .
} while(1);
In the above structure, the wait operation is first performed on chopstick[i]
and chopstick[(i+1) % 5]. This means that philosopher i has picked up the
chopsticks on his sides. Then the eating function is performed.
After that, the signal operation is performed on chopstick[i] and
chopstick[(i+1) % 5]. This means that philosopher i has eaten and put down
the chopsticks on his sides. Then the philosopher goes back to thinking.
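A runnable sketch of this solution. One subtlety: with the plain structure above, if all five philosophers grab their left chopstick at the same instant, every wait on the right chopstick blocks forever. The sketch below avoids that by acquiring the lower-numbered chopstick first, a resource-ordering variant that goes beyond the text:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # all initialized to 1
meals = [0] * N

def philosopher(i, rounds):
    # Order the two chopsticks so no circular wait can form.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # EATING THE RICE
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # THINKING

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [50, 50, 50, 50, 50]: everyone eats, no deadlock
```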
CRITICAL REGIONS
The critical-region construct can be effectively used to solve certain general
synchronization problems.
MONITORS
Monitors are a synchronization construct that were created to overcome the
problems caused by semaphores such as timing errors.
Monitors are abstract data types and contain shared data variables and
procedures. The shared data variables cannot be directly accessed by a
process and procedures are required to allow a single process to access the
shared data variables at a time.
This is demonstrated as follows:
monitor monitorName
{
    data variables;

    Procedure P1(....)
    {
    }

    Procedure P2(....)
    {
    }

    Procedure Pn(....)
    {
    }

    Initialization Code(....)
    {
    }
}
Only one process can be active in a monitor at a time. Other processes that
need to access the shared variables in a monitor have to line up in a queue
and are only given access when the previous process releases the shared
variables.
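Python has no monitor construct, but the idea can be emulated with a class whose every procedure first acquires one internal lock, so at most one process is active inside at a time. A minimal sketch (without condition variables):

```python
import threading

class CounterMonitor:
    """Monitor-style type: shared data reachable only through procedures."""
    def __init__(self):
        self._lock = threading.Lock()   # the monitor's implicit mutual exclusion
        self._count = 0                 # shared data variable

    def increment(self):                # procedure P1
        with self._lock:                # only one process active at a time
            self._count += 1

    def value(self):                    # procedure P2
        with self._lock:
            return self._count

m = CounterMonitor()
threads = [threading.Thread(
               target=lambda: [m.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(m.value())   # 4000: no lost updates
```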
MONITOR VS SEMAPHORE
Both semaphores and monitors are used to solve the critical section problem
(as they allow processes to access the shared resources in mutual exclusion)
and to achieve process synchronization in the multiprocessing environment.
MONITOR
A monitor is a high-level synchronization construct and an abstract data
type. The monitor type contains shared variables and the set of procedures
that operate on the shared variables.
When any process wishes to access the shared variables in the monitor, it
needs to access them through the procedures. These processes line up in a
queue and are only given access when the previous process releases the
shared variables. Only one process can be active in a monitor at a time.
Monitors have condition variables.
SEMAPHORE
A semaphore is a lower-level object. A semaphore is a non-negative integer
variable. The value of the semaphore indicates the number of shared
resources available in the system. The value of a semaphore can be modified
only by two operations, wait() and signal() (apart from initialization).
When any process accesses a shared resource, it performs the wait()
operation on the semaphore, and when the process releases the shared
resource, it performs the signal() operation on the semaphore. Semaphores
do not have condition variables. When a process is modifying the value of
a semaphore, no other process can simultaneously modify it.