RACE CONDITION, DEADLOCKS
AND SEMAPHORE
GROUP FIVE MEMBERS
INTRODUCTION
MEANING OF RACE CONDITION
A Race Condition is an undesirable situation that
occurs when a device or system attempts to perform
two or more operations at the same time, but because
of the nature of the device or system, the operations
must be done in the proper sequence to be done
correctly.
A Race Condition is a situation in which the
outcome of a computation depends on the sequence in
which two or more concurrent sub-computations are
executed. Race conditions can occur in any system that
allows multiple processes or threads to access shared
resources.
They occur when two computer program
processes, or threads, attempt to access the same
resource at the same time and cause problems in the
system.
EXAMPLE OF RACE
CONDITION
A simple example of a race condition is a light switch. In some homes, there
are multiple light switches connected to a common ceiling light. When these
types of circuits are used, the switch position becomes irrelevant. If the light
is on, moving either switch from its current position turns the light
off. Similarly, if the light is off, then moving either switch from its current
position turns the light on.
With that in mind, imagine what might happen if two people tried to turn on
the light using two different switches at the same time. One instruction might
cancel the other or the two actions might trip the circuit breaker.
In computer memory or storage, a race condition may occur if commands to
read and write a large amount of data are received at almost the same
instant, and the machine attempts to overwrite some or all of the old data
while that old data is still being read. The result may be one or more of the
following:
 The computer crashes or identifies an illegal operation of the program
 Errors reading the old data
 Errors writing the new data
TYPES OF RACE
CONDITION
There are a few types of race conditions. Two categories that
define the impact of the race condition on a system are:
 A Critical Race Condition: This is a type of race condition that
will cause the end state of the device, system or program to
change. For example, if flipping two light switches connected to
a common light at the same time blows the circuit, it is
considered a critical race condition. In software, a critical race
condition is when a situation results in a bug with unpredictable
or undefined behaviour.
 A Non-critical Race Condition: This is a type of race condition
that does not directly affect the end state of the system, device
or program. In the light example, if the light is off and flipping
both switches simultaneously turns the light on and has the
same effect as flipping one switch, then it is a non-critical race
condition. In software, a non-critical race condition does not
result in a bug.
In programming, two main types of race conditions occur in a critical section of
code, that is, a section of code that accesses shared data and may be executed by
multiple threads at the same time. When multiple threads try to read a variable
and then each acts on it, one of the following situations can occur:
 Read-modify-write. This kind of race condition happens when two processes
read a value in a program and write back a new value. It often causes a
software bug. As in the example above, the expectation is that the two
processes will happen sequentially: the first process produces its value, and
then the second process reads that value and writes back a new one. (A short
code sketch after this list illustrates this pattern.)
For example, if checks against a checking account are processed sequentially,
the system will make sure there are enough funds in the account to process
check A first and then look again to see if there are enough funds to process
check B after processing check A. However, if the two checks are processed at
the same time, the system may read the same account balance value for both
processes, give an incorrect balance and cause the account to be overdrawn.
 Check-Then-Act. This race condition happens when two processes check a
value on which they will each take an external action. Both processes check
the value, but only one of them can act on it; the later-occurring process then
reads the value as null. As a result, a potentially out-of-date or unavailable
observation is used to determine what the program will do next. For example,
if a map application runs two processes simultaneously that require the same
location data, one takes the value first so the other can't use it; the later
process reads the data as null.
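The following minimal sketch (not part of the original slides; the variable name
and loop counts are illustrative) shows the read-modify-write race in C with
POSIX threads: two threads repeatedly read, modify and write a shared balance
without synchronization, so updates are frequently lost and the final value is
usually less than the expected 2000000.

/* Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long balance = 0;              /* shared resource */

static void *deposit(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        long tmp = balance;           /* read                              */
        tmp = tmp + 1;                /* modify                            */
        balance = tmp;                /* write: another thread may have    */
    }                                 /* written a new value in between    */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("final balance = %ld\n", balance);   /* expected 2000000, usually less */
    return 0;
}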

WHAT SECURITY VULNERABILITIES DO
RACE CONDITIONS CAUSE?
A program that is designed to handle tasks in a specific
sequence can experience security issues if it is asked to
perform two or more operations simultaneously. A threat
actor can take advantage of the time lapse between when
the service is initiated and when a security control takes
effect in order to create a deadlock or thread block
situation.
A deadlock vulnerability is a severe form of a denial-of-
service vulnerability. It can be made to occur when two or
more threads must wait for one another to acquire or
release a lock in a circular chain. This situation results in
deadlock, where the entire software system comes to a
halt because such locks can never be acquired or
released if the chain is circular.
Thread block can also dramatically impact application
performance. In this type of concurrency defect, one
thread calls a long-running operation while holding a lock
and preventing the progress of other threads.
How To Identify Race Conditions
Detecting and identifying race conditions is considered difficult. They
are a semantic problem that can arise from many possible flaws in
code. It's best to design code in a way that prevents these problems
from the start.
 Programmers use dynamic and static analysis tools to identify race
conditions. Static analysis tools scan a program without running it.
However, they produce many false reports. Dynamic analysis tools
have fewer false reports, but they may not catch race conditions that
aren't executed directly within the program.
 Race conditions are sometimes produced by data races, which occur
when two threads concurrently target the same memory location and
at least one is a write operation. Data races are easier to detect than
race conditions because specific conditions are required for them to
occur. Tools, such as the Go Project's Data Race Detector, monitor
data race situations. Race conditions are more closely tied to
application semantics and pose broader problems.
How Do You Prevent Race
Conditions?
Two ways programmers can prevent race
conditions in operating systems and other
software include:
 Avoid shared states. This means reviewing
code to ensure that, when shared resources are
part of a system or process, atomic operations
are in place that run independently of other
processes, and that locking is used to enforce
atomic execution of critical sections of code.
Immutable objects, which cannot be altered
once created, can also be used.
 Use thread synchronization. Here, a given part
of the program can be executed by only one
thread at a time (a mutex-based sketch follows
below).
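As a hedged illustration of the thread-synchronization approach (the names are
ours, not from the slides), the racy deposit loop from the earlier sketch can be
fixed with a POSIX mutex so that only one thread at a time executes the critical
section:

#include <pthread.h>
#include <stdio.h>

static long balance = 0;
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

static void *deposit(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&balance_lock);   /* enter critical section            */
        balance = balance + 1;               /* read-modify-write is now atomic   */
        pthread_mutex_unlock(&balance_lock); /* with respect to the other thread  */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("final balance = %ld\n", balance);   /* now reliably 2000000 */
    return 0;
}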
Preventing race conditions with other types of technology is also
possible. Such technologies include:
 Storage and memory
The serialization of memory or storage access will also prevent race
conditions. This means if read and write commands are received
close together, the read command is executed and completed first by
default.
 Networking
In a network, a race condition may occur if two users try to access
a channel at the same instant and neither computer receives
notification the channel is occupied before the system grants access.
Statistically, this kind of situation occurs mostly in networks with long
lag times, such as those that use geostationary satellites.
To prevent such a race condition, a priority scheme must be devised
to give one user exclusive access. For example, the subscriber whose
username or number begins with the earlier letter of the alphabet or
the lower numeral may get priority when two subscribers attempt to
access the system within a prescribed increment of time.
DEADLOCKS
A deadlock is a situation in which a set of
processes are blocked indefinitely because each
process is waiting for a resource that is being held
by another process in the set. Deadlocks can occur
in any system that allows multiple processes to
share resources.
A deadlock in an OS is a situation in which more
than one process is blocked because each is
holding a resource while also requiring a resource
that has been acquired by some other process.
Necessary Conditions for
Deadlock
The four necessary conditions for a deadlock to arise are as follows.
 Mutual Exclusion: Only one process can use a resource at any
given time, i.e. the resources are non-sharable.
 Hold and wait: A process is holding at least one resource and is
waiting to acquire further resources that are held by other
processes.
 No preemption: A resource can only be released voluntarily by the
process holding it, i.e. after that process has finished with it.
 Circular Wait: A set of processes are waiting for each other in a
circular fashion. For example, suppose there is a set of processes
{P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2,
P2 depends on P3 and P3 depends on P0. This creates a circular
relation between all these processes and they have to wait forever
to be executed. (A code sketch of such a cycle follows below.)
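As an illustrative sketch (not from the original slides), the circular-wait
condition can be reproduced with two POSIX threads and two mutexes: each thread
holds one mutex and then requests the one held by the other, so neither request
can ever be granted and the program hangs.

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
    pthread_mutex_lock(&m1);   /* hold m1 ...                               */
    sleep(1);                  /* ... long enough for the other thread to   */
    pthread_mutex_lock(&m2);   /* grab m2, then wait for m2: circular wait  */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

static void *thread_b(void *arg)
{
    pthread_mutex_lock(&m2);   /* hold m2                                   */
    sleep(1);
    pthread_mutex_lock(&m1);   /* wait for m1, which thread_a holds         */
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);     /* never returns: all four deadlock conditions hold */
    pthread_join(b, NULL);
    puts("finished (printed only if the deadlock did not occur)");
    return 0;
}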
Methods of Handling Deadlocks
in Operating System
The first two methods are used to ensure the
system never enters a deadlock.
 Deadlock Prevention
This is done by restraining the ways a request
can be made. Since deadlock occurs when all
the above four conditions are met, we try to
prevent any one of them, thus preventing a
deadlock.
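One hedged example of prevention: the deadlock sketch shown earlier can be
prevented by breaking the circular-wait condition with a fixed global lock order.
If thread_b is replaced with the version below, both threads acquire m1 before
m2, so a cycle of waiting processes can never form.

/* Drop-in replacement for thread_b in the earlier sketch. */
static void *thread_b(void *arg)
{
    pthread_mutex_lock(&m1);   /* same global order as thread_a: m1 first */
    pthread_mutex_lock(&m2);   /* ... then m2                             */
    /* ... use both resources ... */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}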
Methods of Handling Deadlocks in
Operating System Contd.
 Deadlock Avoidance
When a process requests a resource, the deadlock
avoidance algorithm examines the resource-
allocation state. If allocating that resource sends
the system into an unsafe state, the request is not
granted.
Therefore, it requires additional information such
as how many resources of each type is required by
a process. If the system enters into an unsafe
state, it has to take a step back to avoid deadlock.
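The "additional information" mentioned above is what the classical Banker's
algorithm relies on. The sketch below is a minimal, illustrative version of its
safe-state check in C (the process and resource counts and the sample matrices
are assumptions for the example, not values from the slides): a request is
granted only if the resulting state is still safe.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 3
#define NRES  2

/* Returns true if every process can still finish in some order ("safe state"). */
static bool is_safe(int available[NRES],
                    int allocation[NPROC][NRES],
                    int need[NPROC][NRES])
{
    int work[NRES];
    bool finished[NPROC] = { false };
    for (int r = 0; r < NRES; r++) work[r] = available[r];

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {
                /* Assume p runs to completion and returns its resources. */
                for (int r = 0; r < NRES; r++) work[r] += allocation[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;   /* no process can finish: unsafe state */
    }
    return true;
}

int main(void)
{
    int available[NRES]         = { 3, 2 };
    int allocation[NPROC][NRES] = { {0, 1}, {2, 0}, {1, 1} };
    int need[NPROC][NRES]       = { {2, 2}, {1, 2}, {1, 1} };
    printf("state is %s\n", is_safe(available, allocation, need) ? "safe" : "unsafe");
    return 0;
}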
Methods of Handling Deadlocks in
Operating System Contd.
 Deadlock Detection and Recovery
We let the system fall into a deadlock and if it
happens, we detect it using a detection
algorithm and try to recover.
 Some ways of recovery are as follows.
 Abort all the deadlocked processes.
 Abort one process at a time until the system
recovers from the deadlock.
 Resource Preemption: Resources are taken
one by one from a process and assigned to
higher priority processes until the deadlock is
resolved.
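As a hedged sketch of what a detection algorithm can look like for resources
with a single instance, the system can maintain a wait-for graph (an edge from
P to Q means P is waiting for a resource held by Q) and search it for a cycle;
the example below encodes the circular P0 -> P1 -> P2 -> P3 -> P0 chain from
the conditions slide.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 4

/* wait_for[p][q] == true means process p waits for a resource held by q. */
static bool wait_for[NPROC][NPROC];

/* Depth-first search: an edge back to a process on the current path is a cycle. */
static bool dfs(int p, bool visited[], bool on_path[])
{
    visited[p] = true;
    on_path[p] = true;
    for (int q = 0; q < NPROC; q++) {
        if (!wait_for[p][q]) continue;
        if (on_path[q]) return true;                       /* cycle = deadlock */
        if (!visited[q] && dfs(q, visited, on_path)) return true;
    }
    on_path[p] = false;
    return false;
}

static bool deadlock_detected(void)
{
    bool visited[NPROC] = { false }, on_path[NPROC] = { false };
    for (int p = 0; p < NPROC; p++)
        if (!visited[p] && dfs(p, visited, on_path)) return true;
    return false;
}

int main(void)
{
    /* P0 waits for P1, P1 for P2, P2 for P3, P3 for P0. */
    wait_for[0][1] = wait_for[1][2] = wait_for[2][3] = wait_for[3][0] = true;
    printf("deadlock %s\n", deadlock_detected() ? "detected" : "not detected");
    return 0;
}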
Methods of Handling Deadlocks in
Operating System Contd.
 Deadlock Ignorance
In this method, the system assumes that
deadlock never occurs. Since deadlock
situations are not frequent, some systems
simply ignore them. Operating systems
such as UNIX and Windows follow this
approach. However, if a deadlock occurs, we
can reboot the system and the deadlock is
resolved.
 Note: The above approach is an example of
the Ostrich Algorithm. It is a strategy of
ignoring potential problems on the basis that
they are extremely rare.
Advantages of Deadlock
Handling Methods
 No preemption is needed for
deadlocks.
 It is a good method if the state of the
resource can be saved and restored
easily.
 It is good for activities that perform a
single burst of activity.
 It does not need run-time
computations because the problem is
solved in system design.
Disadvantages of Deadlock
Handling Methods
 The processes must know the
maximum number of resources of each
type required to execute them.
 Preemptions are frequently
encountered.
 It delays the process initiation.
 There are inherent pre-emption
losses.
 It does not support incremental
request of resources.
Conclusion
 A deadlock in an OS is a situation in which more than one process is
blocked because each is holding a resource while also requiring a
resource that has been acquired by some other process.
 The four necessary conditions for a deadlock situation are mutual
exclusion, no preemption, hold and wait, and circular wait.
 There are four methods of handling deadlocks - deadlock prevention,
deadlock avoidance, deadlock detection and recovery, and deadlock
ignorance.
 We can prevent a deadlock by preventing any one of the four
necessary conditions for a deadlock.
 There are different ways of detecting and recovering from a deadlock
in a system.
 Starvation is a situation in which lower priority processes are
postponed indefinitely while higher priority processes are executed.
 The advantages of deadlock handling methods are that no
preemption is needed and that they are good for activities that
perform a single burst of activity.
 The disadvantages of deadlock handling methods are that they delay
process initiation and that preemptions are frequently encountered.
SEMAPHORE
 Semaphores are integer variables that are
used to solve the critical section problem by
means of two atomic operations, wait and
signal, which are used for process
synchronization.
 The definitions of wait and signal are as
follows −
 Wait: The wait operation decrements the
value of its argument S if it is positive. If S is
zero or negative, the process keeps waiting
(busy-waiting, in the form shown here) until S
becomes positive and can be decremented.
For instance: wait(S) { while (S <= 0); S--; }
 Signal: The signal operation increments the
value of its argument S, potentially allowing a
waiting process to proceed.
signal(S) { S++; }
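As an illustrative sketch (not part of the slides), the same wait/signal pattern
is available in C through POSIX semaphores, where sem_wait corresponds to
wait(S) and sem_post to signal(S); here a semaphore initialised to 1 is used
for mutual exclusion on a shared counter.

/* Compile with: gcc sem.c -pthread */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;                   /* semaphore used for mutual exclusion */
static long shared_counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);             /* wait(S): blocks while the value is 0     */
        shared_counter++;             /* critical section                         */
        sem_post(&mutex);             /* signal(S): increments S, waking a waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);           /* initial value 1: at most one thread inside */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %ld\n", shared_counter);   /* reliably 200000 */
    sem_destroy(&mutex);
    return 0;
}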
Types of Semaphores
 There are two main types of semaphores i.e. counting
semaphores and binary semaphores. Details about these are
given as follows −
 Counting Semaphores: These are integer-valued semaphores with an
unrestricted value domain. They are used to coordinate access to resources,
where the semaphore count is the number of available resources. If resources
are added, the semaphore count is incremented, and if resources are removed,
the count is decremented.
 Binary Semaphores: Binary semaphores are like counting semaphores, but
their value is restricted to 0 and 1. The wait operation succeeds only when the
semaphore is 1 (setting it to 0), and the signal operation sets it back to 1. It is
sometimes easier to implement binary semaphores than counting semaphores.
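A hedged sketch of the counting case: a semaphore initialised to the number of
available resources (three identical slots in this made-up example) lets at
most that many threads use the pool at the same time; the fourth and later
threads block in sem_wait until a slot is released.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 3
#define NTHREADS  6

static sem_t pool_slots;              /* counting semaphore: free resources left */

static void *use_resource(void *arg)
{
    long id = (long)arg;
    sem_wait(&pool_slots);            /* take one resource (blocks if all 3 in use) */
    printf("thread %ld acquired a resource\n", id);
    sleep(1);                         /* simulate work with the resource */
    printf("thread %ld released a resource\n", id);
    sem_post(&pool_slots);            /* return the resource to the pool */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    sem_init(&pool_slots, 0, POOL_SIZE);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, use_resource, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool_slots);
    return 0;
}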
Advantages of Semaphores
 Some of the advantages of semaphores are as
follows −
 Semaphores allow only one process into the
critical section. They follow the mutual exclusion
principle strictly and are much more efficient than
some other methods of synchronization.
 There is no wastage of processor time on busy waiting when
semaphores are implemented with blocking (rather than the busy-
waiting form shown earlier), because a waiting process is suspended
instead of repeatedly checking whether it may enter the critical
section.
 Semaphores are implemented in the machine
independent code of the microkernel. So they are
machine independent.
Disadvantages of Semaphores
 Some of the disadvantages of semaphores are
as follows −
 Semaphores are complicated, so the wait and
signal operations must be used in the correct
order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as
their use leads to a loss of modularity. This
happens because the wait and signal operations
prevent the creation of a structured layout for the
system.
 Semaphores may lead to priority inversion, where
low priority processes may access the critical
section first and high priority processes only
later.
