This document contains information about processes in an operating system. It discusses what a process is, the different states a process can be in (new, ready, running, waiting, terminated), and how processes are created and terminated. It also describes process scheduling queues, interprocess communication, and advantages of using processes like information sharing, faster computation through parallel processing, modularity, and convenience.
Unix Process Management
Process management is an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes, and enable synchronisation among processes.
What is a Process?
A process is a program in execution. A process is not the same as its program code; it is much more than that. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include its hardware state, memory, CPU, and so on.
( Program & Process )
Program
A computer program is a collection of instructions that performs a specific task when executed by a computer.
Passive entity
( Process )
- Active entity
- Program code + PC (program counter) + associated resources + status of the process's execution.
Processes
Process Concept
Process Scheduling
Operations on Processes
Cooperating Processes
Interprocess Communication
Communication in Client-Server Systems
( Process Concept )
An operating system executes a variety of programs:
- Batch system – jobs
- Time-shared systems – user programs or tasks
Process – a program in execution; process execution must progress in sequential fashion.
A process includes:
Text section
Data section
Stack section
program counter
( Process Concept )
Program is passive entity stored on disk (executable file), process is active
Program becomes process when executable file loaded into memory
Execution of program started via GUI mouse clicks, command line entry of its name, etc
One program can be several processes
Consider multiple users executing the same program
What the OS is going to do for the Process?
Creating and removing (destroying) processes.
Controlling the progress of processes.
Acting on interrupts and arithmetic errors.
Resource allocation among processes.
Interprocess communication.
( Process Memory )
Process memory is divided into four sections for efficient working:
The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
The Heap is used for the dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared.
( Process Memory )
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int total;           // global variable: lives in the data section

int Square(int x)
{
    return x * x;
}

int SquareOfSum(int x, int y)
{
    int z = Square(x + y);   // z is a local: lives on the stack
    return z;
}

int main()
{
    int a = 4, b = 8;        // locals: stack
    total = SquareOfSum(a, b);
    cout << "Total = " << total << endl;   // prints Total = 144
    system("pause");
    return 0;
}
( Stack & Heap )
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int main()
{
    int x = 10;
    int *ptr = &x;           // ptr holds the address of x
    cout << *ptr << " " << x << " " << ptr << " " << &x << endl;

    *ptr = 30;               // writing through ptr changes x
    cout << endl << endl;
    cout << *ptr << " " << x << " " << ptr << " " << &x << endl;
    cout << endl << endl;
    system("pause");
    return 0;
}
Os task
1. 0 | P a g e
UNIVERSITI TEKNIKAL MALAYSIA MELAKA
SEMESTER 1 YEAR 2
BITS 1213 - OPERATING SYSTEM
LIST OF MEMBERS :
NO NAME MATRIC NO.
1 NUR ATIQAH BT MOHD ROSLI B031210097
2 SU’AIDAH BT MOKHTAR B031210193
3 SITI NADIRAH BT MINHAT B0131210037
4 NOR HADHIRAH BT SHERIFF B031210041
5 NURUL NAJEEHA BT ANNUAR B031210270
2. 1 | P a g e
CONTENT
PROCESS................................................................................................... 2-14
What is process............................................................................................ 2-3
Process state................................................................................................. 4-8
Process creation........................................................................................... 9-10
Process termination..................................................................................... 11-12
THREAD..................................................................................................... 12-23
Introduction................................................................................................. 12
How it works................................................................................................ 12-13
Advantages and disadvantages.................................................................... 14-15
Threading issues.......................................................................................... 16-18
Types of thread............................................................................................ 19-23
SYMMETRIC MULTIPROCESSING..................................................... 24-25
Description and process................................................................................ 24
How it works................................................................................................ 24
Diagram of multiprocessing......................................................................... 25
MICROKERNEL....................................................................................... 26-30
Introduction.................................................................................................. 26
Descriptions.................................................................................................. 26
Features........................................................................................................ 27
Advantages and disadvantages.................................................................... 28-29
Diagram of microkernel............................................................................... 30
3. 2 | P a g e
PROCESS IN OPERATING SYSTEM
I. WHAT IS PROCESS?
Process – a program in execution; process execution must progress in sequential fashion
Also called a task: the execution of an individual program.
A process includes:
– program counter – specifying next instruction to be executed.
– Stack – containing temporary data such as return address.
– data section – containing global variables.
Figure 3.1
Process memory is divided into four sections as shown in Figure 3.1.
Text - Comprises the compiled program code, read in from non-volatile storage
when the program is launched.
4. 3 | P a g e
Data -Stores global and static variables, allocated and initialized prior to
executing main.
Heap - Dynamic memory allocation, and is managed via calls to new, delete,
malloc, free, etc.
Stack - Local variables. Space on the stack is reserved for local variables when
they are declared ( at function entrance or elsewhere, depending on the
language ), and the space is freed up when the variables go out of scope. Note
that the stack is also used for function return values, and the exact mechanisms
of stack management may be language specific.
Note that the stack and the heap start at opposite ends of the process's free
space and grow towards each other. If they should ever meet, then either a
stack overflow error will occur, or else a call to new or malloc will fail due to
insufficient memory available.
When processes are swapped out of memory and later restored, additional
information must also be stored and restored. Key among them are the program
counter and the value of all program registers.
5. 4 | P a g e
II. PROCESS STATE
There are five states in the process life cycle, as shown in Figure 3.2 below (a system may have other states besides the ones listed):
New - The process is in the stage of being created.
Ready - The process has all the resources available that it needs to run, but the CPU is
not currently working on this process's instructions.
Running - The CPU is working on this process's instructions.
Waiting - The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur. For example the process may
be waiting for keyboard input, disk access request, inter-process messages, a timer to
go off, or a child process to finish.
Terminated - The process has completed.
Figure 3.2
Process Control Block (PCB)
6. 5 | P a g e
Figure 3.3
A PCB contains the following Information:
– Process state: new, ready, …
– Program counter: indicates the address of the next instruction
to be executed for this program.
– CPU registers: includes accumulators, stack pointers, …
– CPU scheduling information: includes process priority,
pointers to scheduling queues.
– Memory-management information: includes the value of base
and limit registers (protection) …
– Accounting information: includes amount of CPU and real
time used, account numbers, process numbers, …
– I/O status information: includes list of I/O devices allocated to this process, a list
of open files, …
CPU Switch From Process to Process
7. 6 | P a g e
Figure 3.4
Process Scheduling Queue
The two main objectives of the process scheduling system are to keep the CPU busy
at all times and to deliver "acceptable" response times for all programs, particularly
for interactive ones.
The process scheduler must meet these objectives by implementing suitable policies
for swapping processes in and out of the CPU.
( Note that these objectives can be conflicting. In particular, every time the system
steps in to swap processes it takes up time on the CPU to do so, which is thereby "lost"
from doing any useful productive work. ).
8. 7 | P a g e
Figure 3.5 - Ready Queue And Various I/O Device Queues
All processes are stored in the job queue.
Processes in the Ready state are placed in the ready queue.
Processes waiting for a device to become available or to deliver data are placed in
device queues. There is generally a separate device queue for each device.
Schedulers
A long-term scheduler is typical of a batch system or a very heavily loaded system. It runs infrequently (such as when one process ends, selecting another to be loaded in from disk in its place), and can afford to take the time to implement intelligent and advanced scheduling algorithms.
9. 8 | P a g e
The short-term scheduler, or CPU scheduler, runs very frequently, on the order of once every 100 milliseconds, and must very quickly swap one process out of the CPU and swap in another one.
Some systems also employ a medium-term scheduler. When system loads get high,
this scheduler will swap one or more processes out of the ready queue system for a
few seconds, in order to allow smaller faster jobs to finish up quickly and clear the
system. See the differences in Figures 3.7 and 3.8 below.
An efficient scheduling system will select a good process mix of CPU-bound
processes and I/O bound processes.
Figure 3.6 - Queueing-diagram representation of process scheduling
10. 9 | P a g e
III. PROCESS CREATION
• A process may create several new processes, via a create-process system call, during
execution.
• Parent process creates children processes, which, in turn create other processes,
forming a tree of processes.
• Resource sharing, such as CPU time, memory, files, I/O devices …
– Parent and children share all resources.
– Children share subset of parent’s resources.
– Parent and child share no resources.
• When a process creates a new process, two possibilities exist in terms of execution:
– Parent and children execute concurrently.
– Parent waits until children terminate.
• There are also two possibilities in terms of the address space of the new process:
– Child duplicate of parent.
– Child has a program loaded into it.
• UNIX examples:
– fork system call creates a new process.
– execve system call is used after a fork to replace the process's memory space with a new program.
11. 10 | P a g e
IV. PROCESS TERMINATION
• A process terminates when it executes its last statement and asks the operating
system to delete it using the exit system call.
– The child may return output data to its parent, which collects it via the wait system
call.
– The process's resources are then deallocated by the operating system.
• Parent may terminate execution of children processes via abort system call for a
variety of reasons, such as:
– Child has exceeded allocated resources.
– Task assigned to child is no longer required.
– Parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.
Interprocess Communications (IPC)
Mechanism for processes to communicate and to synchronize their actions.
• IPC is best provided by message-passing systems.
• IPC facility provides two operations:
– send (message) – message size fixed or variable
– receive (message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Processes can communicate in two ways:
– Direct communication
– Indirect communication.
Advantages of processes
Information sharing – such as shared files.
Computation speed-up – to run a task faster, we can break it into subtasks, each of
which executes in parallel. This speed-up can be achieved only if the computer has
multiple processing elements (such as CPUs or I/O channels).
Modularity – construct the system in a modular fashion (i.e., dividing the system
functions into separate processes).
Convenience – one user may have many tasks to work on at one time. For example, a
user may be editing, printing, and compiling in parallel.
Disadvantages of processes
Using a separate process for every task is not always convenient for the user and can
give poor performance.
Complexity in OS.
Processes can misbehave
– By avoiding all traps and performing no I/O, a process can take over the entire
machine.
– Only solution: reboot!
It is difficult to set up a process correctly and to express all possible options
– Process permissions, where to direct I/O, environment variables.
– Example: Windows NT has a process-creation call with 10 arguments.
THREADS
INTRODUCTION
What is a thread in an operating system?
A thread is a flow of execution through the process code, with its own program
counter, system registers and stack. A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism, and
represent a software approach to improving operating-system performance by
reducing overhead; in the limiting case, a process with a single thread is equivalent to
a classical process.
A thread is a single sequential stream of execution within a process. Because threads
have some of the properties of processes, they are sometimes called lightweight
processes. Threads allow multiple streams of execution within one process, and in
many respects they are a popular way to improve applications through parallelism.
The CPU switches rapidly back and forth among the threads, giving the illusion that
the threads are running in parallel. Like a traditional process (i.e., a process with
one thread), a thread can be in any of several states (Running, Blocked, Ready or
Terminated), and each thread has its own stack.
DESCRIPTION OF TOPIC
Each thread belongs to exactly one process, and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been used
successfully to implement network servers and web servers. They also provide a
suitable foundation for parallel execution of applications on shared-memory
multiprocessors. The following figure shows the working of single-threaded and
multithreaded processes.
HOW IT WORKS
Each process has its own memory space. When Process 1 accesses some given
memory location, say 0x8000, that address is mapped to some physical memory
address. But from Process 2, location 0x8000 will generally refer to a completely
different portion of physical memory. A thread is a subdivision that shares the
memory space of its parent process. So when either Thread 1 or Thread 2 of
Process 1 accesses "memory address 0x8000", they are referring to the same
physical address. Threads belonging to a process usually share a few other key
resources as well, such as their working directory, environment variables, file
handles, etc.
On the other hand, each thread has its own private stack and registers, including
program counter. These are essentially the things that threads need in order to be
independent. Depending on the OS, threads may have some other private
resources too, such as thread-local storage (effectively, a way of referring to
"variable number X", where each thread has its own private value of X). The OS
will generally attach a bit of "housekeeping" information to each thread, such as
its priority and state (running, waiting for I/O, etc.).
ADVANTAGES AND DISADVANTAGES
Advantages
1. Responsiveness – one thread may provide rapid response while other threads are
blocked or slowed down doing intensive calculations.
2. Resource sharing – by default, threads share common code, data, and other
resources, which allows multiple tasks to be performed simultaneously in a single
address space.
3. Economy – creating and managing threads is much faster than performing the
same tasks for processes.
4. Scalability (utilization of multiprocessor architectures) – a single-threaded
process can only run on one CPU, no matter how many are available, whereas the
execution of a multithreaded application may be split amongst the available
processors.
5. User-level threads are fast to create and manage.
6. User-level threads can run on any operating system.
Disadvantages
1. Global variables are shared between threads; inadvertent modification of shared
variables can be disastrous.
2. Many library functions are not thread safe.
3. If one thread crashes, the whole application crashes.
4. A memory crash in one thread kills the other threads sharing the same memory,
unlike with processes.
5. In a typical operating system, most system calls are blocking.
6. Transfer of control from one thread to another within the same process requires a
mode switch to the kernel.
THREADING ISSUES
1- The Semantics of fork() and exec() system calls
It is system dependent. If the new process calls exec right away, there is no
need to copy all the other threads; if it does not, then the entire process
should be copied. Many versions of UNIX provide multiple versions of
the fork call for this purpose.
2- Signal Handling
i- When a multithreaded process receives a signal, there are four
major options for where the signal should be delivered:
Deliver the signal to the thread to which the signal applies.
Deliver the signal to every thread in the process.
Deliver the signal to certain threads in the process.
Assign a specific thread to receive all signals for the process.
ii- The best choice may depend on which specific signal is involved.
iii- Windows does not support signals, but they can be emulated using
Asynchronous Procedure Calls (APCs). APCs are delivered to
specific threads, not processes.
iv- Signals may be synchronous (delivered to the same thread that
caused them) or asynchronous (deliverable to any thread).
3- Thread Cancellation of target thread
Threads that are no longer needed may be cancelled by another thread in
two ways:
Asynchronous Cancellation cancels the thread immediately.
Deferred Cancellation sets a flag indicating the thread should
cancel itself when it is convenient. It is then up to the cancelled
thread to check this flag periodically and exit nicely when it sees
the flag set.
4- Thread-Local Storage
i- Most data is shared among threads, and this is one of the major
benefits of using threads in the first place.
ii- Sometimes threads need thread-specific data also.
iii- Most major thread libraries ( pThreads, Win32, Java ) provide
support for thread-specific data, known as thread-local storage or
TLS.
iv- Note that this is more like static data than local variables, because it
does not cease to exist when the function ends.
5- Scheduler Activations
i- Many implementations of threads provide a virtual processor as an
interface between the user thread and the kernel thread, particularly
for the many-to-many or two-tier models.
ii- This virtual processor is known as a "Lightweight Process", or LWP.
iii- There is a one-to-one correspondence between LWPs and kernel
threads.
iv- The number of kernel threads available, and hence the number of
LWPs, may change dynamically.
v- The application (user level thread library) maps user threads onto
available LWPs.
vi- Kernel threads are scheduled onto the real processors by the OS.
vii- The kernel communicates to the user-level thread library when
certain events occur (such as a thread about to block) via an upcall,
which is handled in the thread library by an upcall handler. The
upcall also provides a new LWP for the upcall handler to run on,
which it can then use to reschedule the user thread that is about to
become blocked. The OS will also issue upcalls when a thread
becomes unblocked, so the thread library can make appropriate
adjustments.
viii- If the kernel thread blocks, then the LWP blocks, which blocks the
user thread.
ix- Ideally there should be at least as many LWPs available as there
could be concurrently blocked kernel threads. Otherwise if all
LWPs are blocked, then user threads will have to wait for one to
become available.
TYPES OF THREAD
Threads are implemented in the following two ways:
User Level Threads
In this case, the application manages the threads; the kernel is not aware
of their existence. The thread library contains code for creating and
destroying threads, for passing messages and data between threads, for
scheduling thread execution and for saving and restoring thread contexts.
The application begins with a single thread and begins running in that
thread.
Kernel Level Threads
In this case, thread management is done by the kernel; there is no
thread-management code in the application area. Kernel threads are
supported directly by the operating system. Any application can be
programmed to be multithreaded, and all of the threads within an
application are supported within a single process.
The kernel maintains context information for the process as a whole and
for the individual threads within the process. Scheduling by the kernel is
done on a thread basis: the kernel performs thread creation, scheduling
and management in kernel space. Kernel threads are generally slower to
create and manage than user threads.
Advantages
The kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
If one thread in a process is blocked, the kernel can schedule
another thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than
user threads.
Transfer of control from one thread to another within the same
process requires a mode switch to the kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level
thread facility; Solaris is a good example of this combined approach. In a
combined system, multiple threads within the same application can run in
parallel on multiple processors, and a blocking system call need not block the
entire process. There are three multithreading models:
Many to Many Model
This model multiplexes many user-level threads onto a smaller or equal number
of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine.
The following diagram shows the many-to-many model. In this model, developers
can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space. When a thread makes a blocking
system call, the entire process blocks. Only one thread can access the
kernel at a time, so multiple threads are unable to run in parallel on
multiprocessors.
If the user-level thread libraries are implemented in the operating system in
such a way that the system does not support them, then the kernel threads use
the many-to-one relationship mode.
One to One Model
There is a one-to-one relationship between each user-level thread and a
kernel-level thread. This model provides more concurrency than the many-to-one
model; it also allows another thread to run when a thread makes a blocking
system call, and it supports multiple threads executing in parallel on
multiprocessors.
The disadvantage of this model is that creating a user thread requires creating
the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the
one-to-one relationship model.
Difference between User Level & Kernel Level Threads
1. User-level threads are faster to create and manage; kernel-level threads are
slower to create and manage.
2. User-level threads are implemented by a thread library at the user level;
kernel-level threads are created with the support of the operating system.
3. A user-level thread is generic and can run on any operating system; a
kernel-level thread is specific to the operating system.
4. A multithreaded application using only user-level threads cannot take
advantage of multiprocessing; kernel routines themselves can be multithreaded.
SYMMETRIC MULTIPROCESSING
SMP systems allow any processor to work on any task no matter where
the data for that task are located in memory, provided that each task in the
system is not in execution on two or more processors at the same time.
With proper operating system support, SMP systems can easily move
tasks between processors to balance the workload efficiently.
SMP systems are tightly coupled multiprocessor systems with a pool of
homogeneous processors running independently, each processor
executing different programs and working on different data and with
capability of sharing common resources (memory, I/O device, interrupt
system and so on) and connected using a system bus or a crossbar.
Uniprocessor and SMP systems require different programming methods to
achieve maximum performance. Programs running on SMP systems may
experience a performance increase even when they have been written for
uniprocessor systems. This is because hardware interrupts that usually
suspend program execution while the kernel handles them can execute on
an idle processor instead.
Process of Symmetric Multiprocessor
Microkernel
Introduction of Microkernel
Early operating system kernels were rather small, partly because computer memory was
limited. As the capability of computers grew, the number of devices the kernel had to control
also grew. Through the early history of Unix, kernels were generally small, even though those
kernels contained device drivers and file system managers. When address spaces increased
from 16 to 32 bits, kernel design was no longer cramped by the hardware architecture, and
kernels began to grow.
The Berkeley Software Distribution (BSD) of Unix began the era of big kernels. In addition to
operating a basic system consisting of the CPU, disks and printers, BSD started adding
additional file systems, a complete TCP/IP networking system, and a number of "virtual"
devices that allowed the existing programs to work invisibly over the network. This growth
continued for many years, resulting in kernels with millions of lines of source code. As a
result of this growth, kernels were more prone to bugs and became increasingly difficult to
maintain.
The microkernel was designed to address the increasing growth of kernels and the difficulties
that came with them. In theory, the microkernel design allows for easier management of code
due to its division into user space services. This also allows for increased security and
stability resulting from the reduced amount of code running in kernel mode. For example, if a
networking service crashed due to buffer overflow, only the networking service's memory
would be corrupted, leaving the rest of the system still functional.
Descriptions of Microkernel
A Microkernel is a highly Spartan modular subsystem composed of OS-neutral abstractions,
providing only essential services such as process abstractions, threads, IPC, and memory
management primitives. All device drivers, etc., which are normally part of an OS kernel, run
on the microkernel as just another user process.
• Multiple operating systems can then be layered on top of these abstractions, and are thus
viewed as simply another application.
• This focus on modularity allows for scalability, extensibility and portability not found in
monolithic operating systems (Unix, Linux, DOS, etc.).
Features
• The microkernel provides only rudimentary core facilities; different OS personalities (such as
BSD Unix, Linux, NT, etc.) can be hosted on the microkernel.
• Because of its highly modular nature, many of the services commonly found in "kernel space"
are found in "user space" on a microkernel.
• Flexibility (can restart modules without rebooting the OS)
• Lower fixed memory demand: a minimal microkernel such as L4 takes up only about
32 kilobytes of memory.
• However, a microkernel + regular OS will probably take up more memory than a simple OS
would take up, because of the additional memory required by the microkernel itself
• SMP delivery is easier
The advantages and disadvantages of the Microkernel
Advantages
Extensible: add a new server to add new OS functionality.
Kernel does not determine operating system environment.
• Allows support for multiple OS personalities
• Need an emulation server for each system (e.g. Mac,Windows, Unix)
• All applications run on same microkernel
• Applications can use customized OS (e.g. for databases)
Most hardware agnostic
Threads, IPC, and user-level servers don't need to worry about the underlying hardware.
Strong protection
Even of the OS against itself (i.e., the parts of the OS that are implemented as
servers).
Easy extension to multiprocessor and distributed systems.
Simplicity of the kernel (it is small).
Flexibility
We can have, for example, both a file server and a database server.
Disadvantages
Performance
A system call can require a lot of protection-mode changes.
Expensive to reimplement everything with a new model.
OS personalities are easier to port to new hardware after porting to a microkernel,
but porting to the microkernel may be harder than porting to new hardware.
More overhead
Cost of the extra system calls and context switches.
Examples: Mach, L4, AmigaOS, MINIX, K42.