Assignment 02: Process State Simulation
CSci 430: Introduction to Operating Systems Fall 2020
In this assignment we will simulate a three-state process model (ready, running and blocked) and a simple process control block structure as introduced in Chapter 3 of our textbook. This simulation will utilize a ready queue and a list of blocked processes. We will simulate processes being created, deleted, timing out because they exceed their time quantum, and becoming blocked and unblocked because of (simulated) I/O events.
Questions
How does round robin scheduling work?
How does an operating system manage processes, move them between ready, running, and blocked states, and determine which process is scheduled next?
What is the purpose of the process control block (PCB)? How does the PCB help an operating system manage and keep track of processes?
Objectives
• Explore the process state models from an implementation point of view.
• Practice using basic queue data types and implementing them in C.
• Use C/C++ data structures to implement a process control block and round robin scheduling queues.
• Learn about process switching and multiprogramming concepts.
• Practice using STL queue and list data structures.
Introduction
In this assignment you will simulate a three-state process model (ready, running, and blocked) and a simple list of processes, like the process control block structure as discussed in Chapter 3. Your program will read input and directives from a file. The input describes events that occur to the processes running in the simulation. The full set of events that can happen to and about processes in this simulation, as seen in the example file below, is: new, cpu, block, unblock, and done.
In addition to these events, there are 2 other implicit events that need to occur before and after every simulated event listed above.

dispatch: Before processing each event, if the CPU is currently idle, try to dispatch a process from the ready queue. If the ready queue is not empty, remove the process from the head of the ready queue and allocate it the CPU to run for 1 system time slice quantum.

timeout: After processing each event, test whether the running process has exceeded its time slice quantum. If a process is currently allocated to the CPU and running, check how long it has run since its current dispatch. If it has exceeded its time slice quantum, the process should be timed out: it is put back into the ready state and pushed onto the end of the system ready queue.
The input file used for system tests and simulations will be a list of events that occur in the system, in the order they are to occur. For example, the first system test file looks like this:
----- process-events-01.sim --------
new
cpu
cpu
cpu
new
cpu
cpu
cpu
cpu
block 83
cpu
cpu
unblock 83
cpu
cpu
done
cpu
cpu
cpu
cpu
----------------------------------
The simulation you are developing is a model of process management and scheduling as described in Chapter 3 of our textbook.
1. B.Tech – CS 2nd Year Operating System (KCS- 401) Dr. Pankaj Kumar
Operating System
KCS – 401
Semaphores
Dr. Pankaj Kumar
Associate Professor – CSE
SRMGPC Lucknow
2. Outline of the Lecture
Semaphore – Introduction
Characteristic of Semaphore
Type of Semaphore
3. Semaphore
A semaphore is a variable or abstract data type used to control access to a common resource by multiple processes and to avoid critical-section problems in a concurrent system such as a multitasking operating system. A trivial semaphore is a plain variable that is changed (for example, incremented, decremented, or toggled) depending on programmer-defined conditions.
A semaphore indicates the number of resources available in the system at a particular time, and the semaphore variable is generally used to achieve process synchronization. It is conventionally denoted by "S".
4. Semaphore
A semaphore uses two functions i.e. wait() and signal(). Both these functions are used to change
the value of the semaphore but the value can be changed by only one process at a particular time
and no other process should change the value simultaneously.
The wait() function decrements the semaphore variable "S" by one when its value is positive. If the value of the semaphore variable is 0, the calling process busy-waits until it becomes positive before decrementing.
wait(S) {
    while (S == 0)
        ;   // busy-wait; note the ";" on this line
    S--;
}
The signal() function is used to increment the value of the semaphore variable by one.
signal(S) {
    S++;
}
5. Semaphore
Characteristic of Semaphore
• It is a mechanism that can be used to provide synchronization of tasks.
• It is a low-level synchronization mechanism.
• A semaphore always holds a non-negative integer value.
• Semaphores can be implemented using atomic test operations or by disabling interrupts; the wait() and signal() operations must execute atomically.
6. Semaphore
Types of Semaphore
The two common kinds
• Counting semaphores
• Binary semaphores
Counting Semaphores:
In Counting semaphores, firstly, the semaphore variable is initialized
with the number of resources available. After that, whenever a
process needs some resource, then the wait() function is called and
the value of the semaphore variable is decreased by one. The process
then uses the resource and after using the resource,
the signal() function is called and the value of the semaphore variable
is increased by one.
So, when the value of the semaphore variable reaches 0, i.e. all the resources are taken, any other process that wants a resource has to wait for its turn. In this way, we achieve process synchronization.
7. Semaphore
Binary Semaphores: In binary semaphores, the value of the semaphore variable is either 0 or 1. Initially, the value of the semaphore variable is set to 1. If some process wants to use a resource, the wait() function is called and the value of the semaphore changes from 1 to 0. The process then uses the resource, and when it releases the resource, the signal() function is called and the value of the semaphore variable is set back to 1.
If at a particular instant of time, the value of the semaphore variable
is 0 and some other process wants to use the same resource then it has
to wait for the release of the resource by the previous process. In this
way, process synchronization can be achieved.
8. Test and Set Lock
Test and Set Lock (TSL) is a synchronization mechanism.
It uses a test and set instruction to provide the synchronization among the processes executing
concurrently.
It is an instruction that returns the old value of a memory location and sets the memory location's value to 1 as a single atomic operation.
If one process is currently executing a test-and-set, no other process may begin another test-and-set until the first process's test-and-set has finished.
9. Test and Set Lock
Initially, lock value is set to 0.
● Lock value = 0 means the critical section
is currently vacant and no process is
present inside it.
● Lock value = 1 means the critical section
is currently occupied and a process is
present inside it.
10. Semaphore Usage
Consider P1 and P2 that require statement S1 to happen before statement S2.
Create a semaphore "synch" initialized to 0.
P1:
    S1;
    signal(synch);
P2:
    wait(synch);
    S2;
A semaphore "mutex" initialized to 1 likewise provides mutual exclusion:
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (1);
11. Dining-Philosophers Problem
Philosophers spend their lives alternating thinking and eating. They don't interact with their neighbors, but occasionally try to pick up 2 chopsticks (one at a time) to eat from the bowl. A philosopher needs both chopsticks to eat, and releases both when done.
In the case of 5 philosophers, the shared data is:
Bowl of rice (data set)
Semaphore chopstick[5], each initialized to 1
12. Dining-Philosophers Problem
The structure of Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
Note that this structure can deadlock: if all five philosophers pick up their left chopstick at the same time, each waits forever for the right one.
13. Bounded buffer Problem
The structure of the producer process
do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);     // wait for a free buffer slot
    wait(mutex);     // lock the buffer
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);   // unlock the buffer
    signal(full);    // one more full slot
} while (true);
14. Sleeping Barber problem
Problem: The analogy is based upon a hypothetical barber shop with one barber. The shop has one barber, one barber chair, and n waiting-room chairs for customers to sit on.
•If there is no customer, then the barber sleeps in his
own chair.
•When a customer arrives, he has to wake up the
barber.
•If there are many customers and the barber is cutting
a customer’s hair, then the remaining customers
either wait if there are empty chairs in the waiting
room or they leave if no chairs are empty.
15. Sleeping Barber problem
Variables: shared data
Semaphore Customers = 0;
Semaphore Barber = 0; // barber starts asleep
Semaphore accessSeats = 1; // mutex protecting FreeSeats
int FreeSeats = N;
Customer {
while(true) {
sem_wait(accessSeats); /* protect the seat count */
if(FreeSeats > 0) {
FreeSeats--; /* sitting down */
sem_post(Customers); /* notify the barber */
sem_post(accessSeats); /* release the lock */
sem_wait(Barber); /* wait in the waiting room if barber is busy */
// customer is having a haircut
} else {
sem_post(accessSeats); /* release the lock */
// customer leaves
}
}
}
16. Sleeping Barber problem
Barber {
while(true) {
sem_wait(Customers); /* waits for a customer (sleeps if none) */
sem_wait(accessSeats); /* mutex to protect the number of available seats */
FreeSeats++; /* a chair gets free */
sem_post(Barber); /* bring a customer in for a haircut */
sem_post(accessSeats); /* release the mutex on the seats */
/* barber is cutting hair */
}
}
17. Problem in Semaphore
1. One of the biggest limitations of semaphores is priority inversion.
2. Deadlock: if wait and signal calls are mismatched (for example, a process tries to wake up
another process that is not asleep, or waits for a signal that never arrives), processes may
block indefinitely.
3. The operating system has to keep track of all calls to wait and signal on the semaphore.
If the order of the operations is reversed, several processes may be active in the critical
section simultaneously:
signal(mutex);
……
critical section
……
wait(mutex);
If signal is replaced by a second wait, deadlock may occur:
wait(mutex);
……
critical section
……
wait(mutex);
18. Critical Region
One of the main problems with semaphores is that they are not syntactically related to the
shared resources that they guard. In particular, there is no declaration that could alert the
compiler that a specific data structure is shared and that access to it needs to be controlled.
A critical region protects a shared data structure by making it known to the compiler,
which can then generate code that maintains mutually exclusive access to the related data. The
declaration of a shared variable has the following format:
v: shared T;
where the keyword shared informs the compiler that the variable v, of user-defined type T, is
shared by several processes. A process may access a guarded variable by means of the region
construct, as follows:
region v when B do S;
Statement S is executed as a critical section, and only when the Boolean condition B is true.
19. Monitor
The monitor is one of the ways to achieve process
synchronization. Monitors are supported by
programming languages to achieve mutual exclusion
between processes; for example, Java’s synchronized
methods. Java also provides the wait() and notify() constructs.
1. It is a collection of condition variables and
procedures combined together in a special kind of
module or package.
2. Processes running outside the monitor cannot access
the internal variables of the monitor, but can call the
procedures of the monitor.
3. Only one process at a time can execute code inside
the monitor.
20. Monitor
Key Differences Between Semaphore and Monitor
1. The basic difference between a semaphore and a monitor is that a semaphore is an integer
variable S which indicates the number of resources available in the system, whereas
a monitor is an abstract data type which allows only one process to execute in its critical
section at a time.
2. The value of a semaphore can be modified only by the wait() and signal() operations. On the
other hand, a monitor has shared variables and procedures, and the shared variables can be
accessed by processes only through those procedures.
3. With a semaphore, when a process wants to access a shared resource it performs the wait()
operation to acquire it, and when it releases the resource it performs the signal()
operation. With monitors, when a process needs to access shared resources, it has to access
them through the monitor’s procedures.
4. A monitor has condition variables, which a semaphore does not have.