Process Control Block (PCB) print 4.pdf
1. Process Control Block (PCB)
To implement the process model, the operating system maintains a table (an array of structures) called the process table, with one entry per process; these entries are known as Process Control Blocks.
The process table contains the information the operating system must know to manage and control process switching, including the process location and process attributes.
2. Process Control Block (PCB)
The various fields and pieces of information stored in the PCB are given below.
Process ID: Each process is given an ID number at the time of creation.
Process state: The state may be ready, running, or blocked.
Program counter: The counter indicates the address of the next instruction to be executed for this process.
3. Process Control Block (PCB)
CPU registers: Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to this process, a list of open files, and so on.
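To make the fields concrete, here is a minimal sketch of a PCB and process table in Python (field names and types are illustrative assumptions; a real kernel keeps these in C structures):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Per-process record in the process table (fields follow the slides)."""
    pid: int                                         # process ID
    ppid: int                                        # parent process ID
    state: str = "ready"                             # ready / running / blocked
    program_counter: int = 0                         # next instruction address
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    page_table: dict = field(default_factory=dict)   # memory-management information
    cpu_time_used: float = 0.0                       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

# The process table: one entry per process, keyed by PID.
process_table = {1: PCB(pid=1, ppid=0)}
print(process_table[1].state)
```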
4. Thread
• A process has one or more loci of execution; each is called a thread of execution.
• In traditional operating systems, each process has an address space and a single thread of execution.
• A thread is the smallest unit of processing that can be scheduled by an operating system.
• A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes.
5. Thread Structure
• The thread has a program counter that keeps track of which instruction to execute next.
• It has registers, which hold its current working variables.
• It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from.
• What threads add to the process model is the ability for multiple executions to take place in the same process environment, to a large degree independent of one another.
6. Thread Structure
• Having multiple threads running in parallel in one process is similar to having multiple processes running in parallel in one computer.
(a) Three processes each with one thread. (b) One process with three threads.
7. Thread Structure
In the former case, the threads share an address space, open files, and other resources.
In the latter case, the processes share physical memory, disks, printers, and other resources.
In Fig. (a) we see three traditional processes. Each process has its own address space and a single thread of control.
In contrast, in Fig. (b) we see a single process with three threads of control.
8. Thread Structure
• Although in both cases we have three threads, in Fig. (a) each of them operates in a different address space, whereas in Fig. (b) all three of them share the same address space.
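A small sketch of the Fig. (b) case using Python's threading module (the counter and thread count are arbitrary; the point is that all three threads see the same variable because they share one address space):

```python
import threading

counter = 0                  # one variable in the single shared address space
lock = threading.Lock()

def worker() -> None:
    global counter
    for _ in range(1000):
        with lock:           # shared data, so updates are coordinated
            counter += 1

# Three threads of control inside one process, as in Fig. (b).
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # 3000: every thread updated the same variable
```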
9. Multithreading and Multitasking
Multithreading
The ability of an operating system to execute different parts of a program, called threads, simultaneously is called multithreading.
The programmer must carefully design the program in such a way that all the threads can run at the same time without interfering with each other.
10. Multithreading
On a single processor, multithreading generally occurs by time-division multiplexing: the processor switches between different threads.
This context switching generally happens so quickly that the user perceives the threads or tasks as running at the same time.
11. Multitasking
• The ability to execute more than one task at the same time is called multitasking.
• In multitasking, only one CPU is involved, but it switches from one program to another so quickly that it gives the appearance of executing all of the programs at the same time. There are two basic types of multitasking.
Preemptive: In preemptive multitasking, the operating system assigns CPU time slices to each program.
12. Multitasking
Cooperative: In cooperative multitasking, each program can control the CPU for as long as it needs it. If a program is not using the CPU, however, it can allow another program to use it.
13. Similarities and dissimilarities between process and thread.
Similarities
Like processes, threads share the CPU, and only one thread is active (running) at a time.
Like processes, threads within a process execute sequentially.
Like processes, threads can create children.
Like a traditional process, a thread can be in any one of several states: running, blocked, ready, or terminated.
Like processes, threads have a program counter, stack, registers, and state.
14. Similarities and dissimilarities between process and thread.
Dissimilarities
Unlike processes, threads are not independent of one another; threads within the same process share an address space.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another.
Note that processes might or might not assist one another, because processes may originate from different users.
15. Thread Usage-Why do we need threads?
• E.g., a word processor has different parts; parts for
– Interacting with the user
– Formatting the page as soon as the changes are made
– Timed saving (for auto recovery)
– Spelling and grammar checking
16. Thread Usage-Why do we need threads?
1. Simplifying the programming model, since many activities are going on at once.
2. Threads are easier to create and destroy than processes, since they don't have any resources attached to them.
3. Performance improves by overlapping activities when there is a lot of I/O.
17. Thread Usage-Why do we need threads?
4. Real parallelism is possible if there are multiple CPUs.
Note: implementation details are beyond the scope of the course (distributed systems).
18. Advantages of Thread
Threads minimize the context-switching time.
Use of threads provides concurrency within a process.
They enable efficient communication.
It is more economical to create and context-switch threads.
Threads allow utilization of multiprocessor architectures on a greater scale and with greater efficiency.
19. Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
20. Context Switch
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block.
After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc.
At that point, the second process can start executing.
22. Context switching
Switching the CPU to another process requires saving the environment of the old process and loading the saved environment of the new process. This task is called context switching.
Context-switch time, also called dispatch latency, is pure overhead (time wasted in the transition by the OS), and it depends on the hardware (1 to 100 ms).
It is sometimes a performance bottleneck.
24. Interprocess communication
Processes frequently need to communicate with other processes.
Processes may share a memory area or a file for communication.
There are three issues related to IPC:
1. How can one process pass information to another?
2. How can we make sure two or more processes do not interfere with each other when engaged in critical activities, e.g., grabbing the last 1 MB of memory?
29. Interprocess communication
3. How are events sequenced when dependencies exist, e.g., one process produces data and another process consumes it?
These issues also apply to threads; the first is easy for threads, since they share a common address space.
30. Why interprocess communication?
As we said, processes must cooperate to carry out a task, and to do so they must communicate with each other. Why? Because the output of one process may be the input of another process.
Why is interprocess communication important? So that processes can learn about each other's state and transfer data among themselves.
31. Why interprocess communication
Dependencies among processes also arise. For example, if process A produces data and process B prints it, B must wait until A has produced the data before it starts printing.
32. Interprocess communication
To carry out a task, processes must communicate at certain points.
Example: P1-----------@-----------P2
If processes P1 and P2 both work with the shared variables @, they must use them sequentially: while P1 is using the variables @, P2 must wait until P1 has finished with them.
33. Interprocess communication
Why do P1 and P2 wait for each other?
Because they have a common shared resource and depend on each other, and both processes use global as well as local variables.
When one process is using the resource, the other process must wait.
P1----------R----------P2
34. Interprocess communication
• From the above, when P1 and P2 use shared memory, files, and other shared resources, they must take turns; and to use such a resource, they must communicate with each other about it.
35. Interprocess communication
• Example: suppose two processes try to print. Process P1 reaches the printer first and printing begins, but process P2 arrives while P1 is still printing. If the two do not wait for each other, P2 overrides P1, their outputs merge and destroy each other, and P2 starts to print.
36. Race Conditions
When several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, this is called a race condition.
In the OS, processes that are working together may share some common resource, e.g., one needs to read it while the other needs to write it.
37. Race Conditions
Race conditions
– Arise as a result of sharing some resource
– E.g., the printer spooler
– When a process wants to print a file, it enters the file name in a special spooler directory
38. Race Conditions
Another process, the printer daemon, periodically checks to see if there are any files to be printed; if there are, it prints them and removes them from the directory.
Recall that a daemon is a process running in the background, started automatically when the system is booted.
Assume that the spooler directory has a large number of slots, numbered 0, 1, 2, 3, …, n, each capable of holding a file name.
40. Race Conditions
Then the following may happen:
– Process A reads the shared next-free-slot variable in and stores the value 7 in a local variable.
– A clock interrupt occurs and the CPU is switched to B.
– B also reads in and stores the value 7 in a local variable.
– B stores the name of its file in slot 7 and updates in to be 8.
– A runs again; it stores its file name in slot 7, erasing the file name that B wrote.
– It updates in to be 8.
41. Race Conditions
The printer daemon will not notice anything wrong, but B will never receive any output.
Situations like this, where two or more processes are reading and writing some shared data and the final result depends on who runs precisely when, are called race conditions.
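The spooler race above can be reproduced in miniature with two threads and an unprotected shared slot variable (a sketch; the file names and the artificial delay are made up to force the bad interleaving):

```python
import threading, time

slots = {}       # spooler directory: slot number -> file name
next_slot = 0    # the shared variable "in": next free slot

def submit(filename: str) -> None:
    global next_slot
    local = next_slot         # read "in" into a local variable
    time.sleep(0.001)         # window where a clock interrupt switches processes
    slots[local] = filename   # store the file name in the slot just read
    next_slot = local + 1     # update "in" -- clobbers the other writer's update

a = threading.Thread(target=submit, args=("a.txt",))
b = threading.Thread(target=submit, args=("b.txt",))
a.start(); b.start(); a.join(); b.join()
print(slots, next_slot)      # typically {0: 'b.txt'} 1 -- one file was lost
```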
42. Critical Regions
A critical region is the part of a program where shared resources are accessed; it is also called a critical section.
Race conditions are most likely to occur in the critical section. But how do we prevent these race conditions?
The main way to prevent them is to ensure that no more than one process uses a shared resource, such as shared memory or shared files, at the same time.
43. Avoiding Race Conditions
• To avoid race conditions we need Mutual Exclusion.
• Mutual Exclusion is some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing.
• The difficulty above in the printer spooler occurred because process B started using one of the shared variables before process A was finished with it.
44. Critical Region
What is a critical region?
It is the part of a program where a shared resource is found; it is also called a critical section, and it is where race conditions are most likely to occur. But how do we prevent those race conditions?
The main way is to ensure that no more than one process uses the shared memory or shared files at the same time; that is, while P1 is using a resource, P2 must wait or be kept out by the OS design.
45. Avoiding Race Conditions
That part of the program where the shared memory is accessed is called the critical region or critical section.
If we could arrange matters such that no two processes were ever in their critical regions at the same time, we could avoid race conditions.
Although this requirement avoids race conditions, it is not sufficient for having parallel processes cooperate correctly and efficiently using shared data.
48. Mutual exclusion using critical Regions
As seen above, process A enters its critical region at time T1. A little later, at time T2, process B attempts to enter its critical region; since we allow only one process at a time, B is temporarily suspended until time T3, when A leaves its critical region, allowing B to enter immediately.
In other words, what we need is mutual exclusion: making sure that while one process is using a resource, the other processes are kept out.
49. Mutual exclusion using critical Regions
Eventually B leaves at T4, and we are back to the original situation with no process in the critical region.
50. Mutual exclusion with busy waiting
What is busy waiting?
It means a process waits, doing nothing useful, until another process leaves the critical region.
There are two ways of achieving mutual exclusion.
1. Mutual exclusion with busy waiting
Disabling interrupts
On a single-processor system, the simplest solution is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it.
51. Disabling Interrupt
• Example: if P1 enters the critical region with interrupts disabled, P2 cannot be scheduled and so cannot enter the critical region; once P1 leaves the critical region, interrupts are re-enabled and P2 can proceed.
• This approach is generally unattractive, because it is unwise to give user processes the power to turn interrupts off: if one of them forgets to turn them back on, that could be the end of the system.
52. Mutual exclusion with busy waiting
Lock variable
Consider a software solution with a single shared lock variable, initially zero (0). When a process wants to enter its critical region, it tests the lock: if the lock is 0, the process sets it to 1 and enters the critical region; if the lock is already 1, the process just waits until it becomes 0. Thus lock = 0 means no process is in its critical region, and lock = 1 means some process is in its critical region.
53. Lock Variable
The problem with this algorithm is that it contains the same fatal flaw: one process may read the lock and see that it is 0, but before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1; then two processes are in their critical regions at the same time.
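A sketch of the flaw: the test and the set are two separate steps, so the scheduler can interleave another process between them (thread names and delays are illustrative):

```python
import threading, time

lock_var = 0      # naive shared lock variable: 0 = region free, 1 = in use
inside = []       # names of threads currently in the critical region

def enter_naive(name: str) -> None:
    global lock_var
    while lock_var != 0:       # test the lock ...
        pass
    time.sleep(0.001)          # ... a context switch here lets both pass the test
    lock_var = 1               # ... then set it: too late
    inside.append(name)
    time.sleep(0.001)          # dwell in the region so the overlap is visible
    print("in critical region together:", inside)
    inside.remove(name)
    lock_var = 0

threads = [threading.Thread(target=enter_naive, args=(n,)) for n in ("P1", "P2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Typically prints ['P1', 'P2']: both processes are in the region at once.
# The fix is to make the test and the set a single atomic step, e.g. with
# threading.Lock, whose acquire() cannot be interleaved like the lines above.
```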
54. Mutual exclusion without busy waiting
Processes race in the critical region because they share common resources such as RAM, the CPU, and files and folders.
Solving this race problem with mutual exclusion based on busy waiting has a drawback: it keeps the CPU busy and can cause unexpected effects. To solve this problem, other algorithms were developed.
55. Mutual exclusion without busy waiting
Sleep and Wakeup
Instead of busy waiting, a process that cannot enter its critical region sleeps (blocks), doing no work until the critical region is free.
Then, when the process that held the critical region leaves it, it calls wakeup: the operating system sends a signal to the sleeping process telling it the resource is now free.
56. Sleep and Wake Up
Notice: what happens when the sleeping process does not hear the wakeup signal?
Producer and consumer problem
Two processes share a common, bounded buffer, a part of memory used to hold information when the amount of memory is limited.
The producer-consumer problem involves two processes: one, called the producer, puts information into the buffer; the other, called the consumer, takes the stored information out of the buffer, freeing buffer space.
57. Producer and consumer problem
• The producer is the process that puts data into the buffer, and the consumer is the process that frees the buffer by taking data out of it.
• But what happens if the producer fills the buffer and goes to sleep until space is free, yet the wakeup signal sent when the consumer empties the buffer is lost? The producer never hears that the buffer is empty, and both processes can end up asleep forever.
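A sketch of a producer-consumer pair that avoids the lost-wakeup problem by doing the test, the sleep, and the wakeup under one lock, here with Python's threading.Condition (buffer size and item count are arbitrary):

```python
import threading
from collections import deque

N = 4                         # buffer capacity
buffer = deque()
cond = threading.Condition()  # one lock plus sleep/wakeup in a single object

def producer() -> None:
    for item in range(10):
        with cond:
            while len(buffer) == N:   # buffer full: sleep until a wakeup
                cond.wait()
            buffer.append(item)
            cond.notify()             # wake the sleeping consumer

def consumer() -> None:
    for _ in range(10):
        with cond:
            while not buffer:         # buffer empty: sleep until a wakeup
                cond.wait()
            print("consumed", buffer.popleft())
            cond.notify()             # wake the sleeping producer

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```

Because wait() releases the lock while sleeping and the buffer test is re-checked in a loop after every wakeup, a signal cannot be lost the way the slide describes.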
58. Process Scheduling
What is a scheduler?
Many processes compete for a single resource, for example the CPU; with potentially millions of processes, deciding which one gets the resource is a job the operating system must do.
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
59. Process Scheduling
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
60. Process Scheduling Queue
The OS maintains all PCBs in process scheduling queues.
The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.
61. Process Scheduling Queue
Ready queue - This queue keeps the set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue (the process is ready to move from RAM to the CPU).
Device queues - The processes that are blocked due to the unavailability of an I/O device constitute this queue.
62. Process Scheduling Queues
The operating system maintains the following important process scheduling queues:
Job queue - This queue keeps all the processes in the system (processes waiting to enter the CPU, moving from hard disk to RAM).
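A sketch of these queues and the unlink-and-move operation described above (queue names follow the slides; representing PCBs by bare PIDs is a simplification):

```python
from collections import deque

# One queue per process state, as the slides describe.
queues = {"job": deque(), "ready": deque(), "device": deque()}

def admit(pid: int) -> None:
    queues["job"].append(pid)       # new process enters the system

def move(pid: int, src: str, dst: str) -> None:
    queues[src].remove(pid)         # unlink the PCB from its current queue...
    queues[dst].append(pid)         # ...and link it into its new state queue

admit(7)
move(7, "job", "ready")      # loaded from disk into RAM, ready to run
move(7, "ready", "device")   # blocked waiting for an I/O device
print(queues)
```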
63. Process scheduling
When a computer is multiprogrammed, it frequently has multiple processes competing for the CPU at the same time.
If only one CPU is available, a choice has to be made as to which process to run next.
Multiprogramming - aims to increase throughput.
Time sharing - aims to allow all users to share the CPU equally.
64. Scheduling Queues
• As a process enters the system, or when a running process is interrupted, it is put into a ready queue.
• There are also device queues (waiting queues); each device has its own device queue.
• All are generally stored in a queue (linked list), not necessarily a FIFO queue.
65. Scheduling levels
• Short-term (CPU scheduler)—selects, from the jobs in memory that are ready to execute, which process executes next and allocates the CPU to it. Once it has decided, it calls the dispatcher, which does the remaining work, including the context switch.
• Medium-term—used especially with time-sharing systems as an intermediate scheduling level.
– A swapping scheme is implemented to remove partially run programs from memory and reinstate them later to continue where they left off. When RAM is full, a process is swapped out to backing store; when space becomes available, it is swapped back into main memory.
• Long-term (job scheduler)—determines which jobs are brought into memory for processing.
66. Scheduling Algorithms
What are the most common algorithms?
1. FCFS
2. Round Robin
3. Shortest Job First
4. Shortest Remaining Job First
5. Priority Scheduling
67. Scheduling Algorithms
FCFS (First Come First Serve)
Selection criteria:
The process that requests first is served first; processes are served in the exact order of their arrival.
Decision mode:
Non-preemptive: Once a process is selected, it runs until it blocks for I/O or some event, or it terminates.
68. Scheduling Algorithms
FCFS (First Come First Serve)
Implementation:
• This strategy can be easily implemented using a FIFO queue (FIFO means First In, First Out). When the CPU becomes free, the process at the first position in the queue is selected to run.
Example:
Consider the following set of four processes. Their arrival times and the times required to complete execution are given in the following table. All time values are in milliseconds.
69. FCFS (First Come First Serve)
Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present. So the next process, P1, is selected from the ready queue and allowed to run until it completes. This procedure is repeated until all processes have completed their execution.
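A minimal FCFS simulation follows. The slide's actual table is not reproduced here, so the arrival and burst values below are made up for illustration; the scheduling logic is plain FCFS:

```python
# FCFS simulation (illustrative values; the slide's own table is an image).
# Each process: (name, arrival_time, burst_time), times in milliseconds.
processes = [("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)]

time_now = 0
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    time_now = max(time_now, arrival)   # CPU may sit idle until arrival
    start = time_now
    time_now += burst                   # non-preemptive: run to completion
    print(f"{name}: start={start} finish={time_now} waiting={start - arrival}")
```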
71. FCFS (First Come First Serve)
Advantages:
Simple, fair, no starvation.
Easy to understand, easy to implement.
Disadvantages:
Not efficient; the average waiting time is too high.
The convoy effect is possible: all small I/O-bound processes wait for one big CPU-bound process to release the CPU.
CPU utilization may be less efficient, especially when a CPU-bound process is running with many I/O-bound processes.
72. Scheduling Algorithms
Shortest Job First (SJF):
Selection criteria:
The process that requires the shortest time to complete execution is served first.
Decision mode:
Non-preemptive: Once a process is selected, it runs until either it blocks for I/O or some event, or it terminates.
Implementation:
73. Shortest Job First (SJF):
• This strategy can be implemented using a sorted FIFO queue.
• All processes in the queue are sorted in ascending order of their required CPU bursts. When the CPU becomes free, the process at the first position in the queue is selected to run.
Example:
• Consider the following set of four processes. Their arrival times and the times required to complete execution are given in the following table. All time values are in milliseconds.
75. Shortest Job First (SJF):
• Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present.
• So the process with the shortest CPU burst, P2, is selected and allowed to run until it completes.
• Whenever more than one process is available, this type of decision is made.
• This procedure is repeated until all processes complete their execution.
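The same made-up workload run under non-preemptive SJF (a sketch; as in the slide, P0 runs first and the shortest available burst is chosen whenever the CPU becomes free):

```python
# Non-preemptive SJF simulation (illustrative values, as in the FCFS sketch).
processes = [("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)]

time_now = 0
pending = sorted(processes, key=lambda p: p[1])   # ordered by arrival time
while pending:
    ready = [p for p in pending if p[1] <= time_now]
    if not ready:                      # CPU idle until the next arrival
        time_now = pending[0][1]
        continue
    job = min(ready, key=lambda p: p[2])   # shortest CPU burst wins
    pending.remove(job)
    name, arrival, burst = job
    time_now += burst                  # non-preemptive: run to completion
    print(f"{name}: finish={time_now} waiting={time_now - burst - arrival}")
```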
77. Shortest Job First (SJF):
Advantages:
Less waiting time.
Good response for short processes.
Disadvantages:
It is difficult to estimate the time required to complete execution.
Starvation is possible for a long process: a long process may wait forever.
78. Shortest Remaining Time Next (SRTN):
Selection criteria:
• The process whose remaining run time is shortest is served first. This is a preemptive version of SJF scheduling.
Decision mode:
• Preemptive: When a new process arrives, its total time is compared to the current process's remaining run time.
• If the new job needs less time to finish than the current process, the current process is suspended and the new job is started.
79. Shortest Remaining Time Next (SRTN):
Implementation:
• This strategy can also be implemented using a sorted FIFO queue. All processes in the queue are sorted in ascending order of their remaining run times.
• When the CPU becomes free, the process at the first position in the queue is selected to run.
80. Shortest Remaining Time Next (SRTN):
Example:
• Consider the following set of four processes. Their arrival times and the times required to complete execution are given in the following table. All time values are in milliseconds.
81. Shortest Remaining Time Next (SRTN):
Initially only process P0 is present, and it is allowed to run. But when P1 arrives, it has the shortest remaining run time, so P0 is preempted and P1 is allowed to run. Whenever a new process arrives or the current process blocks, this type of decision is made. This procedure is repeated until all processes complete their execution.
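A sketch of SRTN simulated one millisecond at a time, with the same made-up workload (at every tick the ready process with the shortest remaining time runs, so a newly arrived short job preempts the current one):

```python
# Preemptive SRTN simulation, one 1 ms tick at a time (illustrative values).
procs = {"P0": [0, 10], "P1": [1, 6], "P2": [3, 2], "P3": [5, 4]}  # arrival, remaining

time_now = 0
while any(rem > 0 for _, rem in procs.values()):
    ready = {n: v for n, v in procs.items() if v[0] <= time_now and v[1] > 0}
    if not ready:                     # nothing has arrived yet
        time_now += 1
        continue
    name = min(ready, key=lambda n: ready[n][1])  # shortest remaining time
    procs[name][1] -= 1                           # run it for one tick
    time_now += 1
    if procs[name][1] == 0:
        print(f"{name} finishes at t={time_now}")
```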
83. Shortest Remaining Time Next (SRTN):
Advantages:
Less waiting time.
Quite good response for short processes.
Disadvantages:
Again, it is difficult to estimate the remaining time necessary to complete execution.
Starvation is possible for a long process: a long process may wait forever.
There is context-switch overhead.
84. Round Robin:
Selection criteria:
Each selected process is assigned a time interval, called a time quantum or time slice.
A process is allowed to run only for this time interval. Here, two things are possible:
First, the process is either blocked or terminated before the quantum has elapsed. In this case the CPU switches to another process, which is scheduled to run.
85. Round Robin:
• Second, the process needs a CPU burst longer than the time quantum. In this case, the process is still running at the end of the time quantum.
• Now it will be preempted and moved to the end of the queue, and the CPU will be allocated to another process. Here, the length of the time quantum is critical to determine.
86. Round Robin:
Decision mode:
• Preemptive.
Implementation:
This strategy can be implemented using a circular FIFO queue. Whenever a process arrives, releases the CPU, or is preempted, it is moved to the end of the queue.
When the CPU becomes free, the process at the first position in the queue is selected to run.
87. Round Robin:
Example:
Consider the following set of four processes. Their arrival times and the times required to complete execution are given in the following table.
All time values are in milliseconds. Consider that the time quantum is 4 ms, and the context-switch overhead is 1 ms.
88. Round Robin:
At 4 ms, process P0 completes its time quantum, so it is preempted and another process, P1, is allowed to run. At 12 ms, process P2 voluntarily releases the CPU, and another process is selected to run. 1 ms is wasted on each context switch as overhead. This procedure is repeated until all processes complete their execution.
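A round-robin sketch using the slide's quantum (4 ms) and context-switch overhead (1 ms); the burst values are made up, and for simplicity all processes are assumed to arrive at time 0:

```python
from collections import deque

# Round-robin simulation: quantum = 4 ms, context switch = 1 ms (per slide).
QUANTUM, SWITCH = 4, 1
procs = {"P0": 10, "P1": 6, "P2": 2, "P3": 4}   # name -> remaining burst (ms)
queue = deque(procs)                            # all assumed arrived at t=0

time_now = 0
while queue:
    name = queue.popleft()
    run = min(QUANTUM, procs[name])             # run at most one quantum
    time_now += run
    procs[name] -= run
    if procs[name] > 0:
        queue.append(name)                      # preempted: back of the queue
        print(f"{name} preempted at t={time_now}")
    else:
        print(f"{name} finishes at t={time_now}")
    if queue:
        time_now += SWITCH                      # context-switch overhead
```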
90. Round Robin:
Advantages:
One of the oldest, simplest, fairest, and most widely used algorithms.
Disadvantages:
There is context-switch overhead.
Determining the time quantum is critical. If it is too short, it causes frequent context switches and lowers CPU efficiency. If it is too long, it causes poor response for short interactive processes.
91. Non Preemptive Priority Scheduling:
Selection criteria:
The process that has the highest priority is served first.
Decision mode:
Non-preemptive: Once a process is selected, it runs until it blocks for I/O or some event, or it terminates.
92. Non Preemptive Priority Scheduling:
Implementation:
This strategy can be implemented using a sorted FIFO queue. All processes in the queue are sorted based on their priorities, with the highest-priority process at the front.
When the CPU becomes free, the process at the first position in the queue is selected to run.
93. Non Preemptive Priority Scheduling:
Example:
Consider the following set of four processes. Their arrival times, the total times required to complete execution, and their priorities are given in the following table. All time values are in milliseconds, and a smaller priority value means a higher priority.
94. Non Preemptive Priority Scheduling:
Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present. So the process with the highest priority, P3, is selected and allowed to run until it completes. This procedure is repeated until all processes complete their execution.
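A sketch of non-preemptive priority scheduling with made-up values (smaller number = higher priority, as the slide states; the priorities are chosen so that P3 is the highest, matching the slide's narrative):

```python
# Non-preemptive priority simulation (illustrative values).
# Each process: (name, arrival, burst, priority); smaller priority = higher.
processes = [("P0", 0, 10, 3), ("P1", 1, 6, 2), ("P2", 3, 2, 4), ("P3", 5, 4, 1)]

time_now = 0
pending = list(processes)
while pending:
    ready = [p for p in pending if p[1] <= time_now]
    if not ready:                          # CPU idle until the next arrival
        time_now = min(p[1] for p in pending)
        continue
    job = min(ready, key=lambda p: p[3])   # highest priority = smallest value
    pending.remove(job)
    time_now += job[2]                     # non-preemptive: run to completion
    print(f"{job[0]} (priority {job[3]}) finishes at t={time_now}")
```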
96. Non Preemptive Priority Scheduling:
Advantages:
Priority is considered, so critical processes can get even better response times.
Disadvantages:
Starvation is possible for low-priority processes. It can be overcome by using a technique called aging.
Aging: gradually increase the priority of processes that wait in the system for a long time.
97. Preemptive Priority Scheduling:
Selection criteria:
The process that has the highest priority is served first.
Decision mode:
Preemptive: When a new process arrives, its priority is compared with the current process's priority. If the new job has a higher priority than the current one, the current process is suspended and the new job is started.
98. Preemptive Priority Scheduling:
Implementation:
• This strategy can be implemented using a sorted FIFO queue. All processes in the queue are sorted based on priority, with the highest-priority process at the front.
• When the CPU becomes free, the process at the first position in the queue is selected to run.
99. Preemptive Priority Scheduling:
Example:
Consider the following set of four processes. Their arrival times, the times required to complete execution, and their priorities are given in the following table.
All time values are in milliseconds, and a smaller priority value means a higher priority.
102. Preemptive Priority Scheduling:
Advantages:
Priority is considered, so critical processes can get even better response times.
Disadvantages:
Starvation is possible for low-priority processes. It can be overcome by using a technique called aging.
Aging: gradually increase the priority of processes that wait in the system for a long time.
There is context-switch overhead.