Operating system
An operating system (OS) is a collection of system programs that
together control the operation of a computer system. Put another way, an
operating system is system software that acts as an interface
between the computer user and the computer hardware. It is the master
controller of the computer system because it is responsible for
managing all hardware and software.
Fig: layered view — user / application programs / OS / hardware
Operating system goals:
–Execute user programs and make solving user problems easier.
–Make the computer system convenient to use.
–Use the computer hardware in an efficient manner.
Two Functions of OS
•OS as an Extended Machine
•OS as a Resource Manager
OS as an Extended Machine
The OS creates higher-level abstractions for the programmer.
• Example: (floppy disk I/O operation)
-a disk contains a collection of named files
-each file must be opened for READ/WRITE
-after the READ/WRITE completes, the file is closed
-no hardware-level details to deal with
The OS shields the programmer from the disk hardware and presents a
simple, file-oriented interface.
OS’s function is to present the user with the equivalent of an
extended machine or virtual machine that is easier to program
than the underlying hardware.
OS as a Resource Manager
A modern computer consists of many resources such as processors,
memories, printers and other devices. The job of the OS is to provide an
orderly and controlled allocation of these resources.
 What would happen if three programs running on a computer all
tried to print their files at the same time?
 What happens if two network users try to update a shared
document at the same time?
In short, this view holds that the OS's primary task is to keep track of
who is using which resource, and for how long.
Resource management includes multiplexing (sharing) resources in
two ways: in time and in space.
When a resource is time-multiplexed, different programs or users take
turns using it: the OS allocates the resource to one program, then to
another, and so on.
For example, the CPU: the OS must decide how long each program may
keep the CPU.
The other kind of multiplexing is space multiplexing. Instead of taking
turns using a resource, each program is allocated part of it. For
example, RAM: the OS must deal with fairness, protection, and so on.
 Another space-multiplexed resource is the hard disk. In many
systems a single disk can hold files from multiple users.
 The OS must take care of allocating disk space and
keeping track of who is using which disk block.
 In short OS’s primary function is to manage all pieces of a
complex system.
The first generation (1945 -55) vacuum tubes and
plug boards
 A single group of people designed, built, programmed,
operated, and maintained each machine.
 All programming was done in absolute machine language.
 Programming languages were unknown.
 Operating systems were unheard of.
 Plug-boards were used to control the computer's functions.
The second generation (1955-65) Transistor and
Batch system
Input was on punched cards.
Output was printed.
Jobs on cards were first recorded onto an input tape;
the input tape was then mounted on the main system,
the output was recorded onto an output tape,
and finally printed off-line.
Batch operating system (one task at a time)
Third generation(1965-1980) ICs and multiprogramming
 multiprogramming (when one program waits for input/output, the
CPU switches to another one)
 spooling: jobs are read onto disk as they arrive; whenever a running
job finishes, the OS can load a new job from the disk into the freed
memory partition.
Fourth Generation(1980-present) Personal
computer
Multiprocessor systems
Network operating system: needs a network interface
to work.
Distributed OS: one that appears to its
users as a traditional uniprocessor system, even
though it is actually composed of multiple
processors. The users are not aware of where
their programs are being run or where their
files are located; all of that is handled by the OS.
Service provided by OS
An Operating System provides services to both the users and to the
programs.
• It provides programs an environment to execute.
• It provides users the services to execute the programs in a
convenient manner.
Following are a few common services provided by an operating
system :
• Program execution
• I/O operations
• File System manipulation
• Communication
• Error Detection
• Resource Allocation
• Protection
System calls
The system call provides an interface to the operating system services.
There are 5 different categories of system calls: process control, file
manipulation, device manipulation, information maintenance and
communication.
1. Process Control
A running program needs to be able to halt its execution either normally
or abnormally. There are several process-management calls:
create, delete, wait, etc.
2. File Management
Some common system calls are create, delete, read, write, reposition,
or close. Also, there is a need to determine the file attributes –
get and set file attribute. Many times the OS provides an API to
make these system calls.
3. Device Management
• Processes usually require several resources to execute. If these
resources are available, they are granted and control is
returned to the user process. These resources can also be thought
of as devices. Some are physical, such as a video card, and
others are abstract, such as a file.
• User programs request the device, and when finished
they release the device. Similar to files, we can read, write,
and reposition the device.
4. Information Management
• Some system calls exist purely for transferring information
between the user program and the operating system. An
example of this is time, or date.
• The OS also keeps information about all its processes and
provides system calls to report this information.
5. Communication
There are two models of interprocess communication, the
message-passing model and the shared memory model.
• Message-passing uses a common mailbox to pass messages
between processes.
• Shared memory uses certain system calls to create and gain
access to regions of memory owned by other processes. The
two processes then exchange information by reading and writing
the shared data.
Unit 2 : Introduction to Process
Process model:
• We are assuming a multiprogramming OS that can switch
from one process to another.
• Sometimes this is called pseudoparallelism since one has the
illusion of a parallel processor.
• The other possibility is real parallelism in which two or more
processes are actually running at once because the computer
system is a parallel processor, i.e., has more than one
processor.
• We do not study real parallelism (parallel processing,
distributed systems, multiprocessors, etc) in this course.
Note: rapid switching of the CPU among processes is what multiprogramming means here.
Process
• A process is basically a program in execution. The
execution of a process must progress in a sequential
fashion.
• A process is defined as an entity which represents the basic
unit of work to be implemented in the system.
• To put it in simple terms, we write our computer programs
in a text file and when we execute this program, it becomes
a process which performs all the tasks mentioned in the
program.
• When a program is loaded into memory and becomes
a process, its memory can be divided into four sections ─ stack, heap,
text and data. A simplified layout of a process inside main
memory has the following components:
Stack
• The process Stack contains the temporary data
such as method/function parameters, return
address and local variables.
Heap
• This is dynamically allocated memory to a process
during its run time.
Text
• This contains the compiled program code (the
instructions). The current activity is represented by
the value of the program counter and the contents of
the processor's registers.
Data
• This section contains the global and static
variables.
Operations on process
Process Creation
From the users or external viewpoint there are
several mechanisms for creating a process.
1. System initialization, including daemon
processes.
2. Execution of a process creation system call by
a running process.
3. A user request to create a new process.
4. Initiation of a batch job.
Process termination
Again, from the outside there appear to be several
termination mechanisms:
1. Normal exit (voluntary).
2. Error exit (voluntary), e.g. if the user types the command
cc foo.c to compile the program foo.c and no such
file exists, the compiler simply exits.
3. Fatal error (involuntary), i.e. due to a program bug.
Examples include executing an illegal instruction,
referencing nonexistent memory or dividing by
zero.
4. Killed by another process (involuntary).
Process Hierarchies
Modern general purpose operating systems permit a user to create
and destroy processes.
 A parent creates a child process; child processes can create their
own children, forming a hierarchy.
• In UNIX this is done by the fork system call, which creates
a child process, and the exit system call, which terminates the
current process.
• After a fork, both parent and child keep running (indeed they have
the same program text) and each can fork off other processes.
• A process tree results. The root of the tree is a special process
created by the OS during startup.
• A process can choose to wait for its children to terminate: a
wait() system call blocks the caller until a child finishes.
Process states
• When a process executes, it passes through different states.
These stages may differ in different operating systems, and the
names of these states are also not standardized.
• In general, a process can have one of the following five states at
a time.
 New/start: the initial state when a process is first
created.
 Ready: the process is waiting to be assigned to a processor.
A process may come into this state after the start state, or while
running if it is interrupted by the scheduler so the CPU can be
assigned to some other process.
 Running: instructions are being executed.
 Waiting: the process moves into the waiting state if it needs to wait
for a resource, such as user input, or for a
file to become available.
 Terminated: the process has finished execution.
Process control block
The state of a process must be saved when it is switched from one
state to another, so that it can later be restarted as if it had never been
stopped.
• The PCB is the data structure containing this important
information about the process — also called the process table entry or
process descriptor. It contains:
• Process state: running, ready, blocked.
• Program counter: Address of next instruction for the process.
• Registers: Stack pointer, accumulator, PSW etc.
• Scheduling information: Process priority, pointer to scheduling
queue etc. Memory-allocation: value of base and limit register,
page table, segment table etc.
• Accounting information: time limit, process numbers etc.
• Status information: list of I/O devices, list of open files etc.
Threads
What is thread?
 Threads, like processes, are a mechanism that allows a program to do
more than one task at a time.
 Conceptually, a thread (also called a lightweight process) exists
within a process (the heavyweight process) and is the basic unit of CPU
utilization.
 A traditional process has its own address space and a single
thread of control. Many modern OSes, however, support the
multithreaded concept.
 Multithreading describes the situation of
allowing multiple threads in the same process.
 All threads share the address space, global variables, set of
open files, alarms, signals, etc. of the process to which they
belong.
 For example, a web browser might have one thread displaying
images or text while other threads receive data from the network.
 As another example, a word processor may have one thread for
displaying graphics and another thread for spelling and grammar
checking in the background.
Fig (a) process with single thread of control (b) process with multiple thread of control.
Benefits
Responsiveness: when one thread is blocked,
another can keep providing responses to the user.
Resource sharing: threads share the resources of
the process to which they belong.
Economy: allocating memory and resources for a new process is
costly; threads share the memory and resources of their
process, so creating them is cheap.
Utilization of multiprocessor architectures: more
than one thread can run at once, on more than one
processor.
Users and Kernel Threads
• User Threads:
Thread management is done by a user-level threads library.
 The thread library is implemented entirely at user level.
 The library provides support for thread creation, scheduling and management
with no support from the kernel.
–Fast to create.
–If the kernel is single-threaded, a blocking system call will cause the entire
process to block.
–Examples: POSIX Pthreads, Mach C-threads.
• Kernel Threads:
 Supported by the kernel.
–The kernel performs thread creation, scheduling and management in kernel space.
–Slower to create and manage.
–Blocking system calls are no problem.
–Most OSes support these threads.
–Examples: Windows, Linux.
Inter-Process Communication
Processes frequently need to communicate with other
processes. In a shell pipeline, the output of the first process must
be passed to the second process, and so on down the line.
Thus there is a need for communication between processes
in a well-structured manner, preferably without using interrupts.
Issues related to Inter-Process communication:
–How can one process pass information to another?
–How do we make sure two or more processes do not get into
each other's way when engaging in critical activities?
–How do we maintain the proper sequencing when dependencies
are present?
• Thus, IPC provide the mechanism to allow the processes to
communicate and to synchronize their actions.
Some scenarios related to IPC and their problems and
their solutions
Print spooler: when a process wants to print a file, it enters the file
name in a special spooler directory; another process, the printer
daemon, periodically checks whether there are any files to be
printed, prints them, and removes their names from the directory.
Situations where two or more processes are reading or writing
some shared data and the final result depends on who runs
precisely when are called race conditions.
Critical sections.
 How do we avoid race conditions? Mutual exclusion: some
way of making sure that if one process is using a shared
variable or file, the other processes are excluded from
doing the same thing.
 The part of the program where shared memory is accessed is
called the critical region or critical section. If we could
arrange matters such that no two processes were ever in
their critical regions at the same time, we could avoid race
conditions.
Four conditions must hold to have a good
solution:
1. No two processes may be simultaneously inside
their critical regions.
2. No assumptions may be made about speeds
or the number of CPUs.
3. No process running outside its critical region may
block other processes.
4. No process should have to wait forever to enter its
critical region.
Fig: Mutual exclusion using critical regions
Mutual exclusion with busy waiting
Ways to achieve the ME:
Interrupt Disabling:
• Each process disables all interrupts just after entering its CR and
re-enables them just before leaving it.
• No clock interrupt, no other interrupt, and no CPU switch to another
process can occur until the process turns interrupts back on.
• DisableInterrupt()
// perform CR task
• EnableInterrupt()
• Advantages:
• Mutual exclusion can be achieved by implementing OS primitives
to disable and enable interrupts.
• Problems:
 It gives the power of interrupt handling to the user; the chance
that interrupts are never turned back on is a disaster.
 It only works in a single-processor environment.
Lock Variables
A single, shared (lock) variable, initially 0. When a process
wants to enter its CR, it first tests the lock. If the lock is 0, the
process sets it to 1 and enters the CR. If the lock is already 1, the
process just waits until it becomes 0.
Problems:
• The same flaw as the spooler directory: suppose one process reads
the lock and sees that it is 0; before it can set the lock to 1, another
process is scheduled, enters the CR and sets the lock to 1. We can then
have two processes in their CRs at once (violating mutual exclusion).
Strict Alternation
Processes share a common integer variable turn. If turn == 0 then
process P0 is allowed to execute in its CR; if turn == 1 then
process P1 is allowed to execute.
Initially turn = 0; it keeps track of whose turn it is.
Code structure (process 0 on the left, process 1 on the right):

while (turn != 0) ;        while (turn != 1) ;
critical_region();         critical_region();
turn = 1;                  turn = 0;
noncritical_region();      noncritical_region();
Advantages: Ensures that only one process at a time can be in its
CR.
Problems: when one process is much slower than the other, a process can
be blocked by a process not in its critical region (violating condition 3).
Peterson's Algorithm
• Before entering its CR, each process calls enter_region with
its own process number, 0 or 1, as parameter.
• The call will cause it to wait, if need be, until it is safe to enter.
• When leaving the CR, the process calls leave_region to
indicate that it is done and to allow the other process to
enter its CR.
• Advantages: preserves all four conditions.
• Problems: difficult to program for an n-process system and
less efficient.
Hardware Solution -TSL (Test and set lock)
• The Test and Set Lock (TSL) instruction reads the contents of the
memory word lock (a shared variable) into a register and
then stores a nonzero value at the memory address of lock.
• The CPU executing TSL locks the memory bus to prohibit
other CPUs from accessing memory until it is done.
• When lock is 0, any process may set it to 1 using the TSL
instruction.
enter_region:
    TSL REGISTER, LOCK    | copy lock to register and set lock to 1
    CMP REGISTER, #0      | was lock 0?
    JNE enter_region      | if it was nonzero, lock was set, so loop
    RET                   | return to caller; critical region entered

leave_region:
    MOVE LOCK, #0         | store a 0 in lock
    RET                   | return to caller; noncritical section follows
• Advantages: preserves all conditions, an easier
programming task, and improves system efficiency.
• Problems: relies on special hardware support.
Busy Waiting Alternate?
Busy waiting:
• When a process wants to enter its CR, it checks whether entry
is allowed; if it is not, the process just sits in a tight loop waiting
until it is.
• This wastes CPU time for NOTHING!
• An alternative is the Sleep and Wakeup pair of primitives instead of waiting.
• Sleep causes the caller to block until another process wakes it up.
The producer-consumer problem:
 Two processes share a common, fixed-size buffer.
• One process, the producer, generates information that the second process, the
consumer, uses.
• Their speeds may be mismatched: if the producer inserts items rapidly, the buffer becomes
full and the producer goes to sleep until the consumer consumes some items; if the consumer
consumes rapidly, the buffer becomes empty and the consumer goes to sleep until the
producer puts something in the buffer.
• Sleep and Wakeup version:

#define N 100          /* number of slots in the buffer */
int count = 0;         /* number of items in the buffer */

void producer(void)
{
    int item;
    while (TRUE) {                       /* repeat forever */
        item = produce_item();
        if (count == N) sleep();         /* buffer full: go to sleep */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0) sleep();         /* buffer empty: go to sleep */
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer);
        consume_item(item);
    }
}
The race condition will occur:
 The buffer is empty and the consumer has just read count and seen
that it is 0. At that instant, the scheduler decides to stop running the
consumer temporarily and start running the producer. The producer
enters an item in the buffer, increments count, and notices that it is
now 1. Reasoning that count was just 0, and thus the consumer
must be sleeping, the producer calls wakeup to wake the
consumer up.
 But the consumer is not yet asleep, so the wakeup signal is lost. When
the consumer next runs, it still has the count value 0 from its last read,
so it goes to sleep. The producer keeps producing, eventually fills the
buffer, and goes to sleep too; both sleep forever.
 Think: If we were able to save the wakeup signal that was
lost........
Semaphores
E. W. Dijkstra (1965) suggested using an integer variable to count
the number of wakeups, called a semaphore.
• It could have the value 0, indicating no wakeups were
saved, or some positive value if one or more wakeups were
pending.
• Operations: Down and Up.
• Down: checks whether the value is greater than 0; if yes, it decrements
the value (i.e. uses up one stored wakeup) and continues.
• If no, the process is put to sleep without completing the down.
• Checking the value, changing it, and possibly going to sleep are
all done as a single atomic action.
• Up: increments the value; if one or more processes were
sleeping, unable to complete an earlier down operation, one of
them is chosen and allowed to complete its down.
If each process does a down just before entering its critical region
and up just after leaving it, mutual exclusion is achieved.
Message Passing:
With the trend toward distributed operating systems, many OSes
communicate across the Internet, intranets, remote data
processing, etc.
Inter-process communication based on two primitives, send and
receive:
send(destination, &message);
receive(source, &message);
is known as message passing.
The send and receive calls are normally implemented as operating
system calls accessible from many programming-language
environments.
Producer-Consumer with Message Passing
#define N 100

void producer(void)
{
    int item;
    message m;                     /* message buffer */
    while (TRUE) {
        item = produce_item();     /* generate something */
        receive(consumer, &m);     /* wait for an empty to arrive */
        build_message(&m, item);   /* construct a message to send */
        send(consumer, &m);        /* send item to consumer */
    }
}

void consumer(void)
{
    int item, i;
    message m;
    for (i = 0; i < N; i++) send(producer, &m);  /* send N empties */
    while (TRUE) {
        receive(producer, &m);     /* get message containing item */
        item = extract_item(&m);   /* extract item from message */
        send(producer, &m);        /* send back empty reply */
        consume_item(item);        /* do something with item */
    }
}
Classical IPC Problems
The Dining Philosophers Problem
Scenario: five philosophers are seated around a common
round table for lunch. Each philosopher has a plate of
spaghetti, and there is one fork between each pair of plates; a philosopher
needs two forks to eat. Each philosopher alternates between thinking and eating.
What is the solution (program)
for each philosopher that does what
it is supposed to do and never
gets stuck?
Attempt 1: when a philosopher is hungry, it picks up one fork and
waits for the other fork; when it gets it, it eats for a while and puts both
forks back on the table. Problem: what happens if all five
philosophers take their left forks simultaneously? (Deadlock: each
waits forever for a right fork.)
Attempt 2: after taking the left fork, check whether the right fork is
available. If it is not, the philosopher puts down the left one, waits for some
time, and then repeats the whole process.
Problem: what happens if all five philosophers again take their left forks
simultaneously? (They may all pick up, put down and retry in lockstep
forever: starvation.)
Scheduling
 Which process is given control of the CPU and how long?
 By switching the processor among the processes, the OS can
make the computer more productive.
CPU-bound: processes that use the CPU until their quantum expires.
I/O-bound: processes that use the CPU briefly and then generate an I/O request.
CPU-bound processes have long CPU bursts, while I/O-bound
processes have short CPU bursts.
When to Schedule
1. When a new process is created.
2. When a process terminates.
3. When an I/O interrupt occurs.
4. When a clock interrupt occurs.
DIFFERENCES BETWEEN PREEMPTIVE AND NON-
PREEMPTIVE
Non-Preemptive: non-preemptive algorithms are designed so that
once a process enters the running state (is allocated the processor), it is
not removed from the processor until it has completed its service
time (or it explicitly yields the processor). context_switch() is
called only when the process terminates or blocks.
Preemptive: preemptive algorithms are driven by the notion of
prioritized computation. The process with the highest priority
should always be the one currently using the processor. If a
process is currently using the processor and a new process with a
higher priority enters the ready list, the process on the processor
should be removed and returned to the ready list until it is once
again the highest-priority process in the system.
Categories of scheduling algorithm
1. Batch
2. Interactive
3. Real time
In order to design a good scheduling algorithm, it is necessary to have
some idea of what a good algorithm should do.
Scheduling criteria:
All systems
Fairness: giving each process a fair share of the CPU.
Policy enforcement: seeing that the stated policy is carried out.
Balance: keeping all parts of the system busy.
Batch systems
Throughput: maximize jobs per hour.
Turnaround time: minimize the time between submission and
termination.
CPU utilization: keep the CPU busy all the time.
Interactive systems
Response time: respond to requests quickly.
Real-time systems
Meeting deadlines: avoid losing data.
Predictability: avoid quality degradation in multimedia systems.
Scheduling Algorithms:
First-Come First-Serve (FCFS)
 Processes are scheduled in the order they are received.
 Once a process has the CPU, it runs to completion
–non-preemptive.
 Easily implemented by managing a simple queue or by
storing the time each process was received.
Some formulas
Arrival time: time at which a process arrives in the
ready queue.
Completion time: time at which a process completes
its execution.
Burst time: time required by a process for CPU
execution.
Turnaround time: completion time - arrival time.
Waiting time: turnaround time - burst time.
• Problems:
• No guarantee of good response time.
• Large average waiting time.
Shortest Job First (SJF):
• The processing times are known in advance.
• SJF selects the process with the shortest expected processing time. In
case of a tie, FCFS scheduling is used.
• Advantages:
• Reduces the average waiting time over FCFS.
• Favors short jobs at the cost of long jobs.
• Problems:
• Requires estimating each job's run time to completion.
• Not applicable in timesharing systems.
Shortest-Remaining-Time-First (SRTF)
Preemptive version of SJF.
• Any time a new process enters the pool of processes to be
scheduled, the scheduler compares the expected value for its
remaining processing time with that of the process currently
scheduled. If the new process’s time is less, the currently
scheduled process is preempted.
Merits:
• Lower average waiting time than SJF.
• Useful in timesharing.
Demerits:
• Higher overhead than SJF.
• Requires additional computation.
• Favors short jobs; long jobs can be victims of starvation.
• Scenario: consider the following four processes,
with the length of the CPU-burst time given in
milliseconds:

Process  Arrival time  Burst time
A        0.0           7
B        2.0           4
C        4.0           1
D        5.0           4

SRTF: average waiting time = (9 + 1 + 0 + 2)/4 = 3
Round Robin scheduling
Each process is assigned a time interval (quantum),
after the specified quantum, the running process is preempted and a new
process is allowed to run.
Preempted process is placed at the back of the ready list.
Advantages:
Fair allocation of the CPU across processes.
Used in timesharing systems.
Low average waiting time when process lengths vary widely.
Performance depends on quantum size. Quantum size:
If the quantum is very large, each process is given as much time as it needs for
completion; RR degenerates to the FCFS policy.
If the quantum is very small, the system is busy just switching from one process to
another; the overhead of context switching degrades system
efficiency.
Example of Round Robin scheduling: consider a quantum size of 4
milliseconds.
Priority scheduling
• Each process is assigned a priority value, and the runnable process
with the highest priority is allowed to run.
• FCFS or RR can be used in case of a tie.
• To prevent a high-priority process from running indefinitely, the
scheduler may decrease the priority of the currently running
process at each clock tick.
Assigning Priority
Static:
• Some processes have higher priority than others.
• Problem: Starvation.
• Dynamic:
• Priority chosen by system.
• Decrease priority of CPU-bound processes.
• Increase priority of I/O-bound processes.
• Many different policies possible…….
• E. g.: priority = (time waiting + processing time)/processing time.
Some questions of unit 2
1. Differentiate the terms process and program with suitable examples.
2. What are the possible operations on a process? In how many ways can processes be
created and terminated?
3. Explain the function of the process table.
4. How does a thread differ from a process? What are the advantages of the multithreaded concept?
5. Define race condition with an example.
6. Define critical section. How can race conditions be avoided by mutual exclusion with
busy waiting?
7. What is inter-process communication? How are the sleep and wakeup primitives used in the
producer-consumer problem?
8. What do you mean by process scheduling? Schedule the following jobs according to
FCFS, SJF and SRTF and calculate the average waiting time for each algorithm.
Jobs  Arrival time  Burst time
J1    1             3
J2    2             4
J3    4             5
Lab assignment:
1. WAP that gives the concept of system calls in the Windows
operating system.
2. WAP that implements the lock variable.
3. WAP that implements strict alternation.
4. WAP that implements the Peterson solution.
5. WAP that implements the sleep and wakeup pair in the
producer-consumer problem.
Note: each student should have an individual lab sheet.
Due date: Magh 10 gate
Thank you. This is the end of unit 2.
Unit 3 memory management
 Memory is an important resource that must be carefully managed.
 Modern computer systems have a memory hierarchy: a small amount of
fast cache memory, hundreds of megabytes of medium-speed primary
memory (RAM), and tens or hundreds of gigabytes of slow, cheap, non-volatile
disk storage. It is the job of the operating system to coordinate how these
memories are used.
 The part of the operating system that manages the memory hierarchy
is usually called the memory manager.
 Its job is to keep track of which parts of memory are in use
and which are not.
 It allocates memory to processes when they need it and de-allocates
memory when they are done.
 It also manages the swapping between main memory and disk when
RAM is too small to hold all the processes.
Basic memory management
Memory management systems can be divided into two basic classes:
 those that move processes back and forth between memory and
disk during execution (swapping and paging), and those that do not.
1. Mono programming without swapping or paging
 The simplest management technique.
 Runs one job at a time, sharing memory between the user program
and the OS. The placement of the OS may be in one of the following arrangements.
Fig: three ways of organizing memory
 When the system is organized in this way, only one job runs at a
time.
 As soon as the user types a command, the OS copies the requested program
from disk to memory and executes it.
 When the process finishes, the OS displays a prompt
and waits for another command.
Multiprogramming with fixed partitions
 The easiest way to achieve multiprogramming is simply to
divide memory up into n (possibly unequal) partitions.
 This is done at system start-up.
 When a job arrives, it can be put into the input queue for the smallest
partition large enough to hold it.
 Since partitions are fixed, any space in a partition not used by a job is wasted.
Disadvantage: small jobs may have to wait to
get into memory, even though plenty of
memory is free.
Alternative:
Maintain a single queue, as
shown in the figure. Whenever a partition
becomes free, the job closest to the
front of the queue that fits in it
is loaded into the partition
and run.
Memory management with swapping
Swapping: consists of bringing in each process in its entirety, running it for a while,
then putting it back on the disk. The process is illustrated in the figure
below.
Fig: Memory allocation changes as processes come into memory and leave it. The shaded regions are unused
memory
Different between fixed partitions and swapping
 The number, locations and size may vary dynamically in this
approach and fixed in previous .
 Complexity may arise in allocating and de allocating and
keeping track of it.
Swapping crates multiple holes in memory.
 It may be possible to combine them to make larger one by
moving all down word. Is known as compactions. But is to
much time consuming.
Main issue: how much memory to allocate.
 If processes are created with a fixed size that never changes, it is
simple: the OS allocates exactly what is needed.
 If process sizes vary dynamically at run time, a problem
occurs when a process tries to grow:
 if a hole is adjacent, the process is allowed to grow into it;
 if no adjacent hole exists, then either the process has to be
moved to a hole big enough for it, or one or more processes
have to be swapped out.
#Solution:
 Allocate a little extra memory whenever a process is
swapped in or moved.
 If a program has two growing segments, the other approach,
shown below, can be used.
Fig: (a) Allocating space for a growing data segment.
(b) Allocating space for a growing stack and a growing data
segment.
Memory management with bit maps
When memory is assigned dynamically, the OS must manage it.
Two ways:
bit maps and free lists.
Bit maps: memory is divided into a number of allocation units;
corresponding to each allocation unit there is a bit in the bit map, which
is 0 if the unit is free and 1 if it is in use.
Problem: when it has been decided to bring a k-unit process into memory, the memory
manager must search the bit map to find a run of k consecutive 0 bits in the map.
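A minimal sketch of the bit-map search just described, assuming 0 marks a free allocation unit and 1 a used one:

```python
# Find k consecutive free allocation units in a bit map (0 = free, 1 = in use).
def find_free_run(bitmap, k):
    """Return the index of the first run of k zero bits, or -1 if none exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i          # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0                # a used unit breaks the run
    return -1

bitmap = [1, 1, 0, 1, 0, 0, 0, 1]
print(find_free_run(bitmap, 3))  # -> 4: units 4, 5, 6 are free
```

This linear scan is exactly why the slide calls the search a problem: for a large memory the bit map is long, and finding a run of k zeros is slow.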
Memory management with linked lists
 Another way of managing memory.
 Maintain a linked list of free and allocated memory segments.
 A segment is either a process or a hole between two processes.
Each entry in the list specifies whether it is a process (P) or a hole (H), its starting address, its length,
and a pointer to the next entry.
 A terminating process normally has two neighbors (except when it is at the very top or bottom of memory).
 Updating the list requires replacing a P entry by an H entry.
Possible combinations:
Figure 4-6. Four neighbor combinations for
the terminating process, X.
Several algorithms can be used to allocate memory:
First fit: the manager scans the list and uses the first hole big enough
for the process,
then breaks the hole into two segments: one for the process and
another for the unused memory.
Next fit: works like first fit, except that instead of searching the whole
list from the beginning, it starts searching from the place where it left
off the previous time.
Best fit: finds the smallest hole that is adequate for the process.
Worst fit: always takes the largest hole, so that the hole broken off is
big enough to be useful.
Virtual memory
 Virtual memory is a concept associated with the ability to
address a memory space much larger than the available
physical memory.
 The basic idea behind virtual memory is that the combined
size of the program, data, and stack may exceed the amount
of physical memory available. The OS keeps the parts of
the program currently in use in main memory, and the rest on
disk.
 All modern microprocessors support virtual memory. Virtual memory
can be implemented by the two most commonly used methods,
paging and segmentation, or a mix of both.
 “VM is a memory which works more than its capacity.”
Paging
It is a technique for implementing virtual memory.
• Virtual address space vs. physical address space
• The set of all virtual (logical) addresses generated by a program
is a virtual address space; the set of all physical addresses
corresponding to these virtual addresses is a physical address
space.
• The run-time mapping from virtual addresses to physical addresses
is done by a hardware device called the memory-management unit
(MMU).
The virtual address space of a process is divided up into fixed-sized
blocks called pages, and the corresponding same-sized blocks in main
memory are called frames. When a process is to be executed, its
pages are loaded into any available memory frames from the
backing store (HDD).
 Paging permits the physical address space of a process to be
noncontiguous.
 Traditionally, support for paging has been handled by hardware,
but recent designs implement it by closely integrating the
hardware and the OS.
This example shows how a
64 KB program can run
in 32 KB of physical
memory. The
complete copy is
stored on the disk, and
pieces can be
brought into memory
as needed.
Fig: Mapping between logical and
physical addresses
 With 64 KB of virtual address space, 32 KB of physical
memory and a 4 KB page size, we get 16 virtual pages and 8
frames.
 When a referenced page is not mapped to a frame, a page fault occurs.
Then one of the little-used pages must be evicted from RAM to
make room for the new page.
Address mapping done by the page table
The 16-bit incoming virtual address is divided into:
• a 4-bit page number (p), used as an index into the page table, which
contains the base address of each page in physical memory;
• a 12-bit page offset (d), combined with the base address to define the
physical memory address that is sent to the memory unit.
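A toy illustration of this split, assuming the 16-bit address and 4 KB pages above; the page table contents are invented:

```python
# Sketch of the MMU mapping: 4-bit page number (high bits) + 12-bit offset.
PAGE_SIZE = 4096                      # 2**12 bytes per page

page_table = {0: 2, 1: 5, 2: 1}       # virtual page -> page frame (invented)

def translate(virtual_address):
    page = virtual_address >> 12      # top 4 bits: page number
    offset = virtual_address & 0xFFF  # low 12 bits: offset within page
    if page not in page_table:
        raise LookupError("page fault")   # page not present in RAM
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1 maps to frame 5 -> 0x5234
```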
Page table
 The purpose of the page table is to map virtual pages onto page
frames. This function can be represented in
mathematical notation as:
 page_frame = page_table(page_number)
 The virtual page number is used as an index into the page table to
find the corresponding page frame.
 The exact layout of a page table entry is highly machine dependent,
but a structure common to most machines is:
• Page frame number: the goal of the lookup is to locate this.
• Present/absent bit: if this bit is set, the page is in memory and the
virtual address is mapped to the corresponding physical address; if
it is clear, a trap called a page fault occurs.
• Protection bit: tells what kinds of access are permitted: read/write
or read only.
• Modified bit (dirty bit): records whether the page has been changed
since it was loaded; if it has been modified, it must be written back
to the disk before eviction.
• Referenced bit: set whenever the page is referenced; used in page
replacement.
Page replacement algorithms
 When a page fault occurs, the OS has to choose a page to remove
from memory to make room for the page that has to be
brought in.
 The algorithm used to choose which page to evict from main
memory is known as a page replacement algorithm.
Principle of optimality: to obtain optimal performance, the page to
replace is the one that will not be used for the longest time in the
future.
Optimal Page Replacement (OPR)
Advantages:
• An optimal page-replacement algorithm; it guarantees the lowest
possible page fault rate.
Problems:
• Unrealizable: at the time of the page fault, the OS has no way of
knowing when each of the pages will be referenced next.
• It is therefore not used in practical systems.
FIFO
Advantages:
• Easy to understand and program.
• Gives every page a fair chance.
Problems:
• FIFO is likely to replace heavily (or constantly) used pages that
are still needed for further processing.
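A small simulation of FIFO replacement, counting page faults on an invented reference string:

```python
# Count page faults under FIFO replacement with n_frames page frames.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # memory full: evict the oldest
                frames.discard(order.popleft())
            frames.add(page)
            order.append(page)                 # newest page joins the back
    return faults

print(fifo_faults([0, 1, 2, 0, 3, 0], 2))  # -> 5 faults with 2 frames
```

Note how page 0, although used constantly, is still evicted at one point: that is exactly the weakness listed above.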
Second chance algorithm:
 A modification of FIFO.
 When a page is selected, we inspect its referenced bit: if it is 0,
we proceed to replace the page.
 If it is 1, we give that page a second chance and move on to
inspect the next page.
• When a page gets a second chance, its referenced bit is cleared
and its arrival time is reset to the current time.
Problems:
• If all the pages have been referenced, second chance
degenerates into pure FIFO.
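The victim-selection step can be sketched as below, assuming pages are kept in a FIFO queue of (page, referenced-bit) pairs; moving a page to the back of the queue models resetting its arrival time:

```python
# Second-chance victim selection over a FIFO queue of [page, r_bit] pairs.
from collections import deque

def choose_victim(pages):
    """pages: non-empty deque of [page, r_bit]; returns the evicted page."""
    while True:
        page, r_bit = pages[0]
        if r_bit == 0:
            pages.popleft()
            return page            # oldest unreferenced page: evict it
        pages[0][1] = 0            # clear R and give the page a second chance
        pages.rotate(-1)           # move it to the back (arrival time reset)

q = deque([["A", 1], ["B", 0], ["C", 1]])
print(choose_victim(q))  # -> 'B': A gets a second chance first
```

If every page has its bit set, each in turn is cleared and re-queued, so the loop eventually evicts the original front page: pure FIFO, as the slide notes.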
Segmentation
 This virtual memory is one-dimensional because virtual addresses
go from 0 to some maximum, one address after another.
 For many problems, having two or more separate address spaces
is often better.
 For example, a compiler has many tables that are built up as
compilation proceeds:
 the source text;
 the symbol table, containing the names and attributes of variables;
 the table containing all the integer and floating-point constants;
 the parse tree, containing the syntactic analysis of the program;
 the stack used for procedure calls within the compiler.
In a one-dimensional virtual address space, each of the first four tables grows continuously as compilation
proceeds, while the last one grows and shrinks in unpredictable ways. Each table would have to be
allocated a contiguous chunk of virtual address space.
Consider what happens if a program has an exceptionally large number of variables but a normal amount of
everything else: the chunk of address space allocated for the symbol table may fill up while there is plenty of
room in the other tables.
Fig: A one-dimensional address space holding the symbol table,
source text, constant table, parse tree and stack.
Solution:
The general solution to these problems is to provide the machine with
many completely independent address spaces, called segments.
• Each segment has its own name and size.
• Different segments can grow or shrink independently, without
affecting the others; so the size of a segment may change during
execution.
• The technique of providing independent address spaces to a
program is known as segmentation.
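A segmented address can be pictured as a (segment, offset) pair checked against a per-segment limit; the segment table below is invented for illustration:

```python
# Sketch of segmented addressing: each segment has its own base and limit.
segments = {0: {"base": 0x1000, "limit": 0x400},   # e.g. source text
            1: {"base": 0x8000, "limit": 0x200}}   # e.g. symbol table

def seg_translate(seg, offset):
    s = segments[seg]
    if offset >= s["limit"]:
        # growing past the limit raises a fault instead of corrupting a neighbor
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return s["base"] + offset

print(hex(seg_translate(1, 0x10)))  # -> 0x8010
```

Because each segment is checked against its own limit, one table can grow (by enlarging its limit) without disturbing the addresses of the others.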
Deadlock
A process in a multiprogramming system is said to be deadlocked if
it is waiting for a particular event that will never occur.
Deadlock example:
•All automobiles are trying to cross.
•Traffic is completely stopped.
•Progress is not possible without backing some of them up.
Traffic deadlock
Resource deadlock
System model:
A process requests a resource before using it, and releases it after
using it:
• 1. Request the resource.
• 2. Use the resource.
• 3. Release the resource.
• If a resource is not available when it is requested, the requesting
process is forced into a wait state. When waiting processes can
never run again because the resources they have requested are
held by other waiting processes, the situation is called a deadlock.
Conditions for deadlock
1. Mutual exclusion: each resource is either currently assigned to
exactly one process or is available.
2. Hold and wait: processes hold resources already allocated to
them while waiting for additional resources.
3. No preemption: resources previously granted cannot be forcibly
taken away from a process.
4. Circular wait: there must be a circular chain of two or more
processes, each of which is waiting for a resource held by the
next member of the chain.
Deadlock modeling
These four conditions can be modeled using directed graphs. These
graphs have two kinds of nodes: processes, shown as circles, and
resources, shown as squares.
 An arc from a resource node (square) to a process node (circle)
means that the resource has previously been requested by, granted
to, and is currently held by that process.
 An arc from a process to a resource means that the process is
currently blocked waiting for that resource.
Fig: (a) Holding a resource. (b) Requesting a resource. (c) Deadlock.
In fig (c): process P1 holds resource R1 and needs resource R2 to
continue; process P2 holds resource R2 and needs resource R1 to
continue: deadlock.
A deadlock situation can arise if all four conditions hold
simultaneously in the system.
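The circular-wait condition amounts to a cycle in this directed graph, which a simple depth-first search can detect; node names follow the P1/R1 example above:

```python
# Detect deadlock as a cycle in the resource-allocation graph.
# Edges: resource -> process (held by), process -> resource (waiting for).
def has_cycle(graph):
    """graph: dict mapping each node to its successor list; DFS cycle check."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True                    # back edge: a cycle exists
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph))

# Fig (c): P1 holds R1 and wants R2; P2 holds R2 and wants R1.
graph = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(graph))  # -> True: deadlock
```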
Deadlock handling strategies:
•We can use a protocol to prevent or avoid deadlocks, ensuring
that the system never enters a deadlock state.
•We can allow the system to enter a deadlock state, detect it, and
recover.
•We can ignore the problem altogether and pretend that
deadlocks never occur in the system.
File system
Overview
 File concept
 File attributes
 File operations
 File types
 File structure
 Access methods
 Directory structure
File concept
• How do we store large amounts of data in a computer?
• What happens to data in use when a process terminates or is killed?
• How do we make the same data available to multiple processes?
The solution to all these problems is to store information on
disks or on other external media, in named units called files.
A file is a named collection of related information that normally
resides on a secondary storage device such as a disk or tape.
• Commonly, files represent programs (both source
and object forms) and data; data files may be
numeric, alphanumeric, or binary.
• Information stored in files must be persistent, i.e. not
affected by power failures and system reboots.
• Files are managed by the OS.
• The part of the OS that is responsible for managing files
is known as the file system.
File system issues
• How are files created?
• How are they named?
• How are they structured?
• What operations are allowed on files?
• How are they protected?
• How are they accessed and used?
• How is all this implemented?
File naming
 When a process creates a file, it gives the file a name; when the
process terminates, the file continues to exist and can be accessed
by other processes.
 A file is named for the convenience of its human users, and it is
referred to by its name. A name is a string of characters,
which may include digits and special characters (e.g. 2, !, % etc.).
 Many OSs support two-part file names, separated by a period; the
part following the period is called the file extension.
 The extension usually indicates something about the file (e.g., file.c is a
C source file). In some systems a file may have two or more extensions,
as in UNIX: proc.c.Z is a C source file compressed using the Ziv-Lempel
algorithm.
File structure
• Files must have a structure that is understood by the
OS. Files can be structured in several ways. The
most common structures are:
• Unstructured
• Record structured
• Tree structured
Unstructured:
• Consists of an unstructured sequence of bytes or words. The OS does
not know or care what is in the file. Any meaning must be imposed
by user-level programs.
• Provides maximum flexibility; users can put anything they want in
their files and name them any way that is convenient.
• Both UNIX and Windows use this approach.
Record structured:
• A file is a sequence of fixed-length records, each with some
internal structure.
• Each read operation returns one record, and each write operation
overwrites or appends one record.
• Many old mainframe systems use this structure.
Tree structured:
• A file consists of a tree of records, not necessarily all the same
length.
• Each record contains a key field in a fixed position, and the tree is
sorted on the key to allow rapid searching.
• The basic operation is to get the record with a specific key.
• Used on large mainframes for commercial data processing.
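A record-structured file can be sketched with Python's struct module; the 14-byte record layout is invented, and an in-memory buffer stands in for a disk file:

```python
import io
import struct

# Each record: 10-byte name field + 4-byte unsigned int, no padding ("=").
RECORD = struct.Struct("=10sI")

def write_record(f, name, value):
    f.write(RECORD.pack(name.encode(), value))   # '10s' null-pads the name

def read_record(f):
    """Each read returns exactly one fixed-length record."""
    name, value = RECORD.unpack(f.read(RECORD.size))
    return name.rstrip(b"\0").decode(), value

f = io.BytesIO()                  # stands in for a disk file
write_record(f, "alice", 42)
write_record(f, "bob", 7)
f.seek(0)
print(read_record(f))  # -> ('alice', 42)
print(read_record(f))  # -> ('bob', 7)
```

Because every record has the same length, record n lives at byte offset n × RECORD.size, which is what makes overwrite-in-place and direct access cheap.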
File types
Many OSs support several types of files:
• Regular files: contain user information; are generally ASCII or
binary.
• Directories: system files for maintaining the structure of the file
system.
• Character special files: related to I/O and used to model serial
I/O devices such as terminals, printers, and networks.
• Block special files: used to model disks.
ASCII files:
• Consist of lines of text.
• They can be displayed and printed as is, and can be edited with an
ordinary text editor.
Binary files:
• Consist of a sequence of bytes only.
• They have some internal structure known to the programs that use
them (e.g., executable or archive files).
File access methods
Direct access:
• Files whose bytes or records can be read in any order. Based on the
disk model of a file, since disks allow random access to any block.
• Used for immediate access to large amounts of information.
File attributes
• In addition to its name and data, all other information about a file
is termed its file attributes.
• File attributes may vary from system to system.
• Some common attributes are listed here.
File operations
• The OS provides system calls to perform operations on
files. Some common calls are:
• Create: if disk space is available, creates a new file
without data.
• Delete: deletes a file to free up disk space.
• Open: before using a file, a process must open it.
• Close: when all accesses are finished, the file should
be closed to free up internal table space.
• Read: reads data from a file.
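The calls listed above correspond to real system calls; here is a hedged demo using Python's os module, which wraps the POSIX file-descriptor calls directly (the file name is invented):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # create + open for writing
os.write(fd, b"hello")                          # write
os.close(fd)                                    # close frees the descriptor

fd = os.open(path, os.O_RDONLY)                 # open again for reading
data = os.read(fd, 5)                           # read
os.close(fd)
os.unlink(path)                                 # delete frees the disk space
print(data)  # -> b'hello'
```

Each os.* call maps onto one entry in the slide's list, which is why the descriptor must be closed before the table slot can be reused.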
Directory Structure
Single-level directory:
• All files are contained in the same directory.
• Easy to support and understand, but difficult to manage with a large
number of files or with multiple users.
• Files are hard to manage if two or more files have the same name,
i.e. overwrite problems.
Two-level directory:
• A separate directory for each user.
• Used on multiuser computers and on simple network computers.
• It causes problems when users want to cooperate on some task and
to access one another's files. It also causes problems when a single
user has a large number of files.
Hierarchical directory:
• A generalization of the two-level structure to a tree of arbitrary height.
• This allows users to create their own subdirectories and to
organize their files accordingly.
• Nearly all modern file systems are organized in this manner.
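A hierarchical directory tree can be modeled as nested dictionaries, with path lookup proceeding one component at a time; the tree below is invented for illustration:

```python
# Toy hierarchical directory: subdirectories are dicts, files are leaf values.
def lookup(root, path):
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]             # KeyError models "file not found"
    return node

fs = {"home": {"ann": {"notes.txt": "data"}, "bob": {}}, "etc": {}}
print(lookup(fs, "/home/ann/notes.txt"))  # -> 'data'
```

Walking the path component by component mirrors how a real file system resolves /home/ann/notes.txt one directory at a time.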

More Related Content

Similar to operatinndnd jdj jjrg-system-1(1) (1).pptx

introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptxSHUJEHASSAN
 
Introduction to Operating System
Introduction to Operating SystemIntroduction to Operating System
Introduction to Operating SystemDivya S
 
Advanced computer architecture lesson 1 and 2
Advanced computer architecture lesson 1 and 2Advanced computer architecture lesson 1 and 2
Advanced computer architecture lesson 1 and 2Ismail Mukiibi
 
JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...
JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...
JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...naikayushkumar32
 
Operating system 2
Operating system 2Operating system 2
Operating system 2matsapit
 
installing and optimizing operating system software
installing and optimizing operating system software   installing and optimizing operating system software
installing and optimizing operating system software Jaleto Sunkemo
 
Reformat PPT.pptx
Reformat PPT.pptxReformat PPT.pptx
Reformat PPT.pptxLINDYLGERAL
 
Operating Systems & Applications
Operating Systems & ApplicationsOperating Systems & Applications
Operating Systems & ApplicationsMaulen Bale
 
Bedtime Stories on Operating Systems.pdf
Bedtime Stories on Operating Systems.pdfBedtime Stories on Operating Systems.pdf
Bedtime Stories on Operating Systems.pdfAyushBaiswar1
 

Similar to operatinndnd jdj jjrg-system-1(1) (1).pptx (20)

introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptx
 
Ch1
Ch1Ch1
Ch1
 
Introduction of operating system
Introduction of operating systemIntroduction of operating system
Introduction of operating system
 
Introduction to Operating System
Introduction to Operating SystemIntroduction to Operating System
Introduction to Operating System
 
unit1 part1.ppt
unit1 part1.pptunit1 part1.ppt
unit1 part1.ppt
 
Advanced computer architecture lesson 1 and 2
Advanced computer architecture lesson 1 and 2Advanced computer architecture lesson 1 and 2
Advanced computer architecture lesson 1 and 2
 
Ch1 - OS.pdf
Ch1 - OS.pdfCh1 - OS.pdf
Ch1 - OS.pdf
 
operating system structure
operating system structureoperating system structure
operating system structure
 
OS chapter 1.pptx
OS chapter 1.pptxOS chapter 1.pptx
OS chapter 1.pptx
 
OS chapter 1.pptx
OS chapter 1.pptxOS chapter 1.pptx
OS chapter 1.pptx
 
Os
OsOs
Os
 
Os
OsOs
Os
 
JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...
JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...
JULY-DEC_2023_BSCBT_3_SEM_V9_BSCBT301_BSCBT301_Fundamentals_of_IT_Unit_2__Ppt...
 
Operating system 2
Operating system 2Operating system 2
Operating system 2
 
Chapter 5
Chapter 5Chapter 5
Chapter 5
 
installing and optimizing operating system software
installing and optimizing operating system software   installing and optimizing operating system software
installing and optimizing operating system software
 
Reformat PPT.pptx
Reformat PPT.pptxReformat PPT.pptx
Reformat PPT.pptx
 
LEC 1.pptx
LEC 1.pptxLEC 1.pptx
LEC 1.pptx
 
Operating Systems & Applications
Operating Systems & ApplicationsOperating Systems & Applications
Operating Systems & Applications
 
Bedtime Stories on Operating Systems.pdf
Bedtime Stories on Operating Systems.pdfBedtime Stories on Operating Systems.pdf
Bedtime Stories on Operating Systems.pdf
 

Recently uploaded

DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentInMediaRes1
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...jaredbarbolino94
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 

Recently uploaded (20)

DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media Component
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 

operatinndnd jdj jjrg-system-1(1) (1).pptx

  • 1. Operating system An operating system (OS) is a collection of system programs that together control the operation of a computer system. Or Operating system is a system software which act as a interface between computer user and computer hardware. It is master controller of the computer system because it is responsible for managing all hardware and software's. user Application programs OS hardware Operating system goals: –Execute user programs and make solving user problems easier. –Make the computer system convenient to use. –Use the computer hardware in an efficient manner.
  • 2. Two Functions of OS •OS as an Extended Machine •OS as a Resource Manager OS as an Extended Machine OS creates higher-level abstraction for programmer • Example: (Floppy disk I/O operation) -disks contains a collection of named files -each file must be open for READ/WRITE -after READ/WRITE complete close that file -no any detail to deal OS shields the programmer from the disk hardware and presents a simple file oriented interface. OS’s function is to present the user with the equivalent of an extended machine or virtual machine that is easier to program than the underlying hardware.
  • 3. OS as a Resource Manager Modern computer consist of many resources such as processor, memories, printer and other devices. The job of os is to provide an orderly and controlled allocation of these resources.  What would happen if three programs running on computer all trying to print the file?  What happens if two network users try to update a shared document at the same time? In short this view of OS is holds that it primary task is to keep track of who is using which resources for what time. Resource management includes multiplexing(sharing) resources in two ways. Time and space When resource is time multiplexed different programs or users takes turns using it.
  • 4. The OS allocates resource to one program then other and son on. For eg CPU. Os must decide how long the program in CPU. The other kind of multiplexing is space . Instead of taking turn to use resources, each program allocates part of resources. For eg RAM. OS must deal about fairness, protection etc.  Another space multiplexed resource is HDD. In many system single disk can hold multiple files form multiple users.  Os must concern about the allocating disk space and keeping track of which is using which disk block.  In short OS’s primary function is to manage all pieces of a complex system.
  • 5. The first generation (1945 -55) vacuum tubes and plug boards  Single group of people designed, built, programmed, operated, and maintained each machine.  All programming was on absolute machine language  Programming language were unknown  Operating system were un-heard  Plug-board was for controlling the computer function
  • 6. The second generation (1955-65) Transistor and Batch system Input on punch card Out put on printed formatted Input on card are first recorded in input tape and the input tape is inserted to main system and finally output is recorded into output tape and printed in off line . Batch operating system ( one task at time)
  • 7. Third generation(1965-1980) ICs and multiprogramming  multiprogramming ( when one program waits for input out put CPU switches to another one)  spooling: Whenever running job finished, the OS could load a new job from the disk into memory partition.
  • 8. Fourth generation (1980-present): personal computers  Multiprocessors  Network operating system: needs a network interface to work  Distributed OS: one that appears to its users as a traditional uniprocessor system, even though it is actually composed of multiple processors. The users are not aware of where their programs are being run or where their files are located; all of that is handled by the OS.
  • 9. Services provided by the OS An operating system provides services to both the users and the programs. • It provides programs an environment in which to execute. • It provides users the services to execute programs in a convenient manner. Following are a few common services provided by an operating system: • Program execution • I/O operations • File system manipulation • Communication • Error detection • Resource allocation • Protection
  • 10. System calls System calls provide an interface to the operating system's services. There are 5 categories of system calls: process control, file manipulation, device manipulation, information maintenance and communication. 1. Process Control A running program needs to be able to stop execution either normally or abnormally. There are several process-management calls: create, delete, wait, etc. 2. File Management Some common system calls are create, delete, read, write, reposition, and close. There is also a need to determine file attributes – get and set file attribute. Often the OS provides an API to make these system calls.
  • 11. 3. Device Management • Processes usually require several resources to execute; if these resources are available, they are granted and control is returned to the user process. These resources can also be thought of as devices. Some are physical, such as a video card, and others are abstract, such as a file. • User programs request a device, and when finished they release it. As with files, we can read, write, and reposition the device.
  • 12. 4. Information Maintenance • Some system calls exist purely for transferring information between the user program and the operating system, for example the time or date. • The OS also keeps information about all its processes and provides system calls to report this information. 5. Communication There are two models of interprocess communication: the message-passing model and the shared-memory model. • Message passing uses a common mailbox to pass messages between processes. • Shared memory uses certain system calls to create and gain access to regions of memory owned by other processes. The two processes exchange information by reading and writing the shared data.
  • 13. Unit 2 : Introduction to Process Process model: • We are assuming a multiprogramming OS that can switch from one process to another. • Sometimes this is called pseudoparallelism since one has the illusion of a parallel processor. • The other possibility is real parallelism in which two or more processes are actually running at once because the computer system is a parallel processor, i.e., has more than one processor. • We do not study real parallelism (parallel processing, distributed systems, multiprocessors, etc) in this course.
  • 14.
  • 15. Note: this switching of the CPU between processes is called multiprogramming. Process • A process is basically a program in execution. The execution of a process must progress in a sequential fashion. • A process is defined as an entity which represents the basic unit of work to be implemented in the system. • To put it in simple terms, we write our computer programs in a text file, and when we execute a program it becomes a process which performs all the tasks mentioned in the program. • When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. The following image shows a simplified layout of a process inside main memory −
  • 16.
  • 17. • Component & Description Stack • The process stack contains temporary data such as method/function parameters, return addresses and local variables. Heap • This is memory dynamically allocated to the process during its run time. Text • This contains the compiled program code; the current activity is represented by the value of the program counter and the contents of the processor's registers. Data • This section contains the global and static variables.
  • 18. Operations on processes Process Creation From the user's or external viewpoint there are several mechanisms for creating a process: 1. System initialization, including daemon processes. 2. Execution of a process-creation system call by a running process. 3. A user request to create a new process. 4. Initiation of a batch job.
  • 19. Process termination Again, from the outside there appear to be several termination mechanisms: 1. Normal exit (voluntary). 2. Error exit (voluntary), i.e. the process discovers an error and exits; for example, if the user types the command cc foo.c to compile the program foo.c and no such file exists, the compiler simply exits. 3. Fatal error (involuntary), i.e. due to a program bug. Examples include executing an illegal instruction, referencing nonexistent memory or dividing by zero. 4. Killed by another process (involuntary).
  • 20. Process Hierarchies Modern general-purpose operating systems permit a user to create and destroy processes.  A parent creates a child process, and child processes can create their own processes, forming a hierarchy. • In UNIX this is done by the fork system call, which creates a child process, and the exit system call, which terminates the current process. • After a fork both parent and child keep running (indeed they have the same program text) and each can fork off other processes. • A process tree results. The root of the tree is a special process created by the OS during startup. • A process can choose to wait for its children to terminate. For example, if C issued a wait() system call it would block until G finished.
  • 21.
  • 22. Process states • When a process executes, it passes through different states. These states may differ between operating systems, and their names are also not standardized. • In general, a process is in one of the following five states at a time.  New/start: the initial state when a process is first created.  Ready: the process is waiting to be assigned to a processor. A process may come into this state after the start state, or while running if it is interrupted by the scheduler to assign the CPU to some other process.  Running: instructions are being executed.
  • 23. Waiting: a process moves into the waiting state if it needs to wait for a resource, such as user input or a file to become available. Terminated: the process has finished execution.
  • 24. Process control block The state of a process must be saved when it is switched out so that it can later be restarted as if it had never been stopped. • The PCB is the data structure containing this information about the process – also called the process table entry or process descriptor. It contains: • Process state: running, ready, blocked. • Program counter: address of the next instruction for the process. • Registers: stack pointer, accumulator, PSW, etc. • Scheduling information: process priority, pointer to scheduling queue, etc. • Memory-allocation information: values of the base and limit registers, page table, segment table, etc. • Accounting information: time limits, process numbers, etc. • Status information: list of I/O devices, list of open files, etc.
  • 25.
  • 26. Threads What is a thread?  Threads, like processes, are a mechanism to allow a program to do more than one task at a time.  Conceptually, a thread (also called a lightweight process) exists within a process (the heavyweight process) and is a basic unit of CPU utilization.  A traditional process has its own address space and a single thread of control; however, many modern OSes support a multithreaded model.  Multithreading describes the situation of allowing multiple threads in the same process.
  • 27.  All threads share the same address space, global variables, set of open files, alarms, signals, etc. of the process to which they belong.  For example, a web browser might have one thread display images or text while other threads receive data from the network.  As another example, a word processor may have one thread for displaying graphics and another thread for spelling and grammar checking in the background.
  • 28. Fig: (a) a process with a single thread of control; (b) a process with multiple threads of control.
  • 29. Benefits Responsiveness: when one thread is blocked, another can keep providing responses to the user. Resource sharing: threads share the resources of the process to which they belong. Economy: allocating memory and resources for a new process is costly, whereas threads share the memory of the process to which they belong. Utilization of multiprocessor architectures: multiple threads can run in parallel on multiple processors.
  • 30. User and Kernel Threads • User threads: thread management is done by a user-level threads library.  The thread library is implemented entirely at user level.  The library provides support for thread creation, scheduling and management with no support from the kernel. –Fast to create –If the kernel is single-threaded, a blocking system call will cause the entire process to block. –Examples: POSIX Pthreads, Mach C-threads. • Kernel threads:  Supported by the kernel –The kernel performs thread creation, scheduling and management in kernel space. –Slower to create and manage 1. Blocking system calls are no problem 2. Most OSes support these threads 3. Examples: Windows, Linux
  • 31. Inter-Process Communication Processes frequently need to communicate with other processes. In a shell pipeline, the output of the first process must be passed to the second process, and so on down the line. Thus there is a need for communication between processes in a well-structured manner, without using interrupts. Issues related to inter-process communication: –How can one process pass information to another? –How do we make sure two or more processes do not get in each other's way when engaging in critical activities? –How do we maintain the proper sequence when dependencies are present? • Thus, IPC provides mechanisms that allow processes to communicate and to synchronize their actions.
  • 32. A scenario related to IPC, its problem, and its solution Print spooler: when a process wants to print a file, it enters the file name in a special spooler directory. Another process, the printer daemon, periodically checks whether there are any files to be printed, prints them, and removes their names from the directory.
  • 33. Situations where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when are called race conditions. Critical sections  How do we avoid race conditions? Mutual exclusion – some way of making sure that if one process is using a shared variable or file, the other processes are excluded from doing the same thing.  The part of the program where the shared memory is accessed is called the critical region or critical section. If we could arrange matters such that no two processes were ever in their critical regions at the same time, we would avoid race conditions.
  • 34. Four conditions are necessary for a good solution: 1. No two processes may be simultaneously inside their critical regions. 2. No assumptions may be made about speeds or the number of CPUs. 3. No process running outside its critical region may block other processes. 4. No process should have to wait forever to enter its critical region.
  • 35. Fig: Mutual exclusion using critical regions
  • 36. Mutual exclusion with busy waiting Ways to achieve mutual exclusion: Interrupt disabling: • Each process disables all interrupts just after entering its critical region and re-enables them just before leaving it. • With no clock interrupts and no other interrupts, there is no CPU switch to another process until the process turns interrupts back on. • DisableInterrupt() // perform CR task • EnableInterrupt() • Advantages: • Mutual exclusion can be achieved by implementing OS primitives to disable and enable interrupts. • Problems: • It hands the power of interrupt handling to the user.
  • 37.  If interrupts are never turned back on, it is a disaster.  It only works in a single-processor environment. Lock variables A single, shared (lock) variable, initially 0. When a process wants to enter its critical region, it first tests the lock. If the lock is 0, the process sets it to 1 and enters; if the lock is already 1, the process just waits until it becomes 0. Problems: • The same problem as the spooler directory: suppose one process reads the lock and sees that it is 0; before it can set the lock to 1, another process is scheduled, enters the CR, and sets the lock to 1. We can then have two processes in their critical regions at once (violating mutual exclusion).
  • 38. Strict Alternation Processes share a common integer variable turn. If turn == 0 then process P0 is allowed to execute in its critical region; if turn == 1 then process P1 is allowed. Initially turn = 0; turn keeps track of whose turn it is. Code structure:

Process 0:
  while (turn != 0) ;   /* busy-wait */
  critical_region();
  turn = 1;
  noncritical_region();

Process 1:
  while (turn != 1) ;   /* busy-wait */
  critical_region();
  turn = 0;
  noncritical_region();

Advantages: ensures that only one process at a time can be in its critical region. Problems: trouble occurs when one process is much slower than the other, since a process can be blocked by a process that is not in its critical region.
  • 39. Peterson's Algorithm • Before entering its CR, each process calls enter_region with its own process number, 0 or 1, as parameter. • The call will cause it to wait, if need be, until it is safe to enter. • When leaving the CR, the process calls leave_region to indicate that it is done and to allow the other process to enter its CR. • Advantages: preserves all four conditions. • Problems: difficult to program for an n-process system and less efficient.
  • 40. Hardware Solution - TSL (Test and Set Lock) • The Test and Set Lock (TSL) instruction reads the contents of the memory word lock (a shared variable) into a register and then stores a nonzero value at the memory address lock. • The CPU executing TSL locks the memory bus to prohibit other CPUs from accessing memory until it is done. • When lock is 0, any process may set it to 1 using the TSL instruction.

enter_region:
  TSL REGISTER, LOCK   | copy lock to register and set lock to 1
  CMP REGISTER, #0     | was lock 0?
  JNE enter_region     | if it was nonzero, the lock was set, so loop
  RET                  | return to caller; critical region may be entered
  • 41.
leave_region:
  MOVE LOCK, #0        | store a 0 in lock
  RET                  | return to caller

• Advantages: preserves all conditions, an easier programming task, and improved system efficiency. • Problems: difficulty in hardware design.
  • 42. An alternative to busy waiting? Busy waiting: • When a process wants to enter its CR, it checks to see if entry is allowed; if it is not, the process just sits in a tight loop waiting until it is. • A waste of CPU time for NOTHING! • An alternative is the sleep and wakeup pair of calls instead of waiting. • Sleep causes the caller to block until another process wakes it up.
  • 43. The producer-consumer problem:  Two processes share a common, fixed-size buffer. • One process, the producer, generates information that the second process, the consumer, uses. • Their speeds may be mismatched: if the producer inserts items rapidly, the buffer becomes full and the producer goes to sleep until the consumer consumes some item; if the consumer consumes rapidly, the buffer becomes empty and the consumer goes to sleep until the producer puts something in the buffer. • Sleep and wakeup:

#define N 100              /* number of slots in the buffer */
int count = 0;             /* number of items in the buffer */

void producer(void)
{
    int item;
    while (TRUE) {                       /* repeat forever */
        item = produce_item();
        if (count == N) sleep();         /* buffer full: go to sleep */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);  /* buffer was empty */
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0) sleep();         /* buffer empty: go to sleep */
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer);  /* buffer was full */
        consume_item(item);
    }
}
  • 44. A race condition can still occur:  The buffer is empty and the consumer has just read count and seen that it is 0. At that instant, the scheduler decides to stop running the consumer temporarily and start running the producer. The producer inserts an item in the buffer, increments count, and notices that it is now 1. Reasoning that count was just 0, and thus the consumer must be sleeping, the producer calls wakeup to wake the consumer up.  But the consumer is not yet asleep, so the wakeup signal is lost. When the consumer next runs, it still has the count value 0 from its last read, so it goes to sleep. The producer keeps producing, eventually fills the buffer, and goes to sleep too; both sleep forever.  Think: if only we were able to save the wakeup signal that was lost...
  • 45. Semaphores E. W. Dijkstra (1965) suggested using an integer variable to count the number of saved wakeups, called a semaphore. • It could have the value 0, indicating no wakeups were saved, or some positive value if one or more wakeups were pending. • Operations: down and up. • Down: checks if the value is greater than 0; if yes, it decrements the value (i.e. uses one stored wakeup) and continues. • If no, the process is put to sleep without completing the down. • Checking the value, changing it, and possibly going to sleep are all done as a single atomic action. • Up: increments the value; if one or more processes were sleeping, unable to complete an earlier down operation, one of them is chosen and allowed to complete its down.
  • 46. If each process does a down just before entering its critical region and an up just after leaving it, mutual exclusion is achieved. Message Passing: With the trend toward distributed operating systems, many OSes communicate through the Internet, intranets, remote data processing, etc. Inter-process communication based on the two primitives send and receive – send(destination, &message); receive(source, &message) – is known as message passing. The send and receive calls are normally implemented as operating system calls accessible from many programming-language environments.
  • 47. Producer-Consumer with Message Passing

#define N 100                 /* number of message slots */

void producer(void)
{
    int item;
    message m;                        /* message buffer */
    while (TRUE) {
        item = produce_item();        /* generate something */
        receive(consumer, &m);        /* wait for an empty to arrive */
        build_message(&m, item);      /* construct a message to send */
        send(consumer, &m);
    }
}

void consumer(void)
{
    int item, i;
    message m;
    for (i = 0; i < N; i++)
        send(producer, &m);           /* send N empties */
    while (TRUE) {
        receive(producer, &m);        /* get message containing item */
        item = extract_item(&m);      /* extract item from message */
        send(producer, &m);           /* send back empty reply */
        consume_item(item);           /* do something with item */
    }
}
  • 48. Classical IPC Problems The Dining Philosophers Problem Scenario: five philosophers are seated around a common round table for lunch. Each philosopher has a plate of spaghetti, and there is one fork between each pair of plates; a philosopher needs two forks to eat. The philosophers alternate thinking and eating. What is the solution (program) for each philosopher that does what it is supposed to do and never gets stuck?
  • 49. Attempt 1: when a philosopher is hungry, it picks up its left fork and waits for the right fork; when it gets it, it eats for a while and puts both forks back on the table. Problem: what happens if all five philosophers take their left fork simultaneously? None can ever get a right fork – deadlock. Attempt 2: after taking the left fork, check for the right fork. If it is not available, the philosopher puts down the left one, waits for some time, and then repeats the whole process. Problem: if all five philosophers again take their left forks simultaneously, they may all put them down, wait, and pick them up again in lockstep, forever – starvation.
  • 50. Scheduling  Which process is given control of the CPU, and for how long?  By switching the processor among the processes, the OS can make the computer more productive. CPU-bound: processes that use the CPU until the quantum expires. I/O-bound: processes that use the CPU briefly and generate an I/O request. CPU-bound processes have long CPU bursts while I/O-bound processes have short CPU bursts. When to schedule: 1. When a new process is created. 2. When a process terminates. 3. When an I/O interrupt occurs. 4. When a clock interrupt occurs.
  • 51. DIFFERENCES BETWEEN PREEMPTIVE AND NON-PREEMPTIVE Non-preemptive: non-preemptive algorithms are designed so that once a process enters the running state (is allocated the processor), it is not removed from the processor until it has completed its service time (or it explicitly yields the processor). context_switch() is called only when the process terminates or blocks. Preemptive: preemptive algorithms are driven by the notion of prioritized computation. The process with the highest priority should always be the one currently using the processor. If a process is currently using the processor and a new process with a higher priority enters the ready list, the process on the processor should be removed and returned to the ready list until it is once again the highest-priority process in the system.
  • 52. Categories of scheduling algorithms: 1. Batch 2. Interactive 3. Real time In order to design a good scheduling algorithm, it is necessary to have some idea of what a good algorithm should do. Scheduling criteria: All systems – Fairness: giving each process a fair share of the CPU. Policy enforcement: seeing that the stated policy is carried out. Balance: keeping all parts of the system busy. Batch systems – Throughput: maximize jobs per hour. Turnaround time: minimize the time between submission and termination. CPU utilization: keep the CPU busy all the time.
  • 53. Interactive systems – Response time: respond to requests quickly. Real-time systems – Meeting deadlines: avoid losing data. Predictability: avoid quality degradation in multimedia. Scheduling Algorithms: First-Come First-Served (FCFS)  Processes are scheduled in the order they are received.  Once a process has the CPU, it runs to completion – non-preemptive.  Easily implemented by managing a simple queue or by storing the time each process was received.
  • 54. Some formulas Arrival time: the time at which a process arrives in the ready queue. Completion time: the time at which a process completes its execution. Burst time: the time required by a process for CPU execution. Turnaround time = completion time - arrival time. Waiting time = turnaround time - burst time.
  • 55. • Problems: • No guarantee of good response time. • Large average waiting time. Shortest Job First (SJF): • The processing times are known in advance. • SJF selects the process with the shortest expected processing time. In case of a tie, FCFS scheduling is used. • Advantages: • Reduces the average waiting time over FCFS. • Favors short jobs at the cost of long ones. • Problems: • Estimating the run time to completion. • Not applicable in timesharing systems.
  • 56.
  • 57. Shortest-Remaining-Time-First (SRTF) The preemptive version of SJF. • Any time a new process enters the pool of processes to be scheduled, the scheduler compares the expected value of its remaining processing time with that of the currently scheduled process. If the new process's time is less, the currently scheduled process is preempted. Merits: • Lower average waiting time than SJF. • Useful in timesharing. Demerits: • Very high overhead compared to SJF. • Requires additional computation. • Favors short jobs; long jobs can be victims of starvation.
  • 58. • Scenario: consider the following four processes with the length of the CPU burst given in milliseconds:

Process  Arrival Time  Burst Time
A        0.0           7
B        2.0           4
C        4.0           1
D        5.0           4

SJF:
  • 59. Average waiting time with preemptive SJF (SRTF) = (9 + 1 + 0 + 2)/4 = 3; with non-preemptive SJF it is (0 + 6 + 3 + 7)/4 = 4.
  • 60.
  • 61. Round Robin scheduling Each process is assigned a time interval (quantum); after the quantum expires, the running process is preempted and a new process is allowed to run. The preempted process is placed at the back of the ready list. Advantages: Fair allocation of the CPU across processes. Used in timesharing systems. Low average waiting time when process lengths vary widely. Performance depends on quantum size. Quantum size: if the quantum is very large, each process is given as much time as it needs for completion, and RR degenerates to the FCFS policy. If the quantum is very small, the system is busy just switching from one process to another, and the overhead of context switching degrades system efficiency.
  • 62. Example of Round Robin scheduling. Consider a quantum size of 4 milliseconds.
  • 63. Priority scheduling • Each process is assigned a priority value, and the runnable process with the highest priority is allowed to run. • FCFS or RR can be used in case of a tie. • To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock tick. Assigning priority Static: • Some processes always have higher priority than others. • Problem: starvation.
  • 64. • Dynamic: • Priority chosen by the system. • Decrease the priority of CPU-bound processes. • Increase the priority of I/O-bound processes. • Many different policies are possible... • E.g.: priority = (time waiting + processing time) / processing time.
  • 66. Some questions on unit 2 1. Differentiate the terms process and program with suitable examples. 2. What are the possible operations on a process? In how many ways can processes be created and terminated? 3. Explain the function of the process table. 4. How does a thread differ from a process? What are the advantages of the multithreaded concept? 5. Define race condition with an example. 6. Define critical section. How can race conditions be avoided by mutual exclusion with busy waiting? 7. What is inter-process communication? How are the sleep and wakeup primitives used in the producer-consumer problem? 8. What do you mean by process scheduling? Schedule the following jobs according to FCFS, SJF and SRTF, and calculate the average waiting time for each algorithm.

Job  Arrival time  Burst time
J1   1             3
J2   2             4
J3   4             5
  • 67. Lab assignment: 1. WAP that gives the concept of system calls in the Windows operating system. 2. WAP that implements the lock variable. 3. WAP that implements strict alternation. 4. WAP that implements the Peterson solution. 5. WAP that implements the sleep and wakeup pair in the producer-consumer problem. Note: Each student should have an individual lab sheet. Due date: Magh 10 Thank you, this is the end of unit 2
  • 68. Unit 3: Memory management  Memory is an important resource that must be carefully managed.  Modern computer systems have a memory hierarchy: a small amount of fast cache memory, hundreds of MB of medium-speed primary memory (RAM), and tens or hundreds of GB of slow, cheap, non-volatile disk storage. It is the job of the operating system to coordinate how these memories are used.  The part of the operating system that manages the memory hierarchy is usually called the memory manager.  Its job is to keep track of which parts of memory are in use and which are not,  to allocate memory to processes when they need it and de-allocate it when they are done,  and to manage swapping between main memory and disk when RAM is too small to hold all the processes.
  • 69. Basic memory management Memory management systems can be divided into two basic classes:  those that move processes back and forth between memory and disk during execution (swapping and paging), and those that do not. 1. Monoprogramming without swapping or paging  The simplest management technique.  Runs one job at a time, sharing memory between the user program and the OS. The placement of the OS may be in one of the following arrangements. Fig: Three ways of organizing memory
  • 70.  When the system is organized in this way, only one job runs at a time.  As soon as the user types a command, the OS copies the requested program from disk to memory and executes it.  When the process finishes, the OS displays the command prompt and waits for another request. Multiprogramming with fixed partitions  The easiest way to achieve multiprogramming is simply to divide memory up into n (possibly unequal) partitions.  This is done at system startup.  When a job arrives, it can be put into the input queue for the smallest partition large enough to hold it.  Since partitions are fixed, any space not used by a job is wasted.
  • 71. Disadvantage: small jobs may have to wait to get into memory, even though plenty of memory is empty. Alternative: maintain a single queue, as shown in the figure. Whenever a partition becomes free, the job closest to the front of the queue that fits in it is loaded into the partition and run.
  • 72. Memory management with swapping Swapping consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk. The process is illustrated in the figure below. Fig: Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory.
  • 73. Differences between fixed partitions and swapping  The number, locations and sizes of partitions may vary dynamically in this approach; they are fixed in the previous one.  Complexity arises in allocating, de-allocating and keeping track of memory. Swapping creates multiple holes in memory.  It may be possible to combine them all into one big hole by moving all processes downward; this is known as memory compaction, but it is very time consuming. Main issue: how to allocate memory  If processes are created with a fixed size that never changes, allocation is simple: the OS allocates exactly what is needed.  If process sizes vary dynamically at run time, a problem occurs when a process tries to grow.
  • 74.  If a hole is adjacent to the process, the process is allowed to grow into it.  If no adjacent hole exists, then either the process has to be moved to a hole big enough for it, or one or more processes have to be swapped out. #Solution:  Allocate a little bit of extra memory whenever a process is swapped in or moved.  If a program has two growing segments, the other approach shown below can be used.
  • 75. Fig: (a) allocating space for a growing data segment; (b) allocating space for a growing stack and a growing data segment.
  • 76. Memory management with bit maps When memory is assigned dynamically, the OS must manage it. There are two ways to keep track of memory usage: bit maps and free lists. Bit maps: memory is divided into a number of allocation units; corresponding to each allocation unit there is a bit in the bit map, which is 0 if the unit is free and 1 if it is in use.
  • 77. Problem: when it has been decided to bring a k-unit process into memory, the memory manager must search the bit map to find a run of k consecutive 0 bits in the map. Memory management with linked lists  Another way of managing memory.  Maintain a linked list of free and allocated segments, where a segment is either a process or a hole between two processes. Each entry in the list specifies: process or hole, starting address, length, and a pointer to the next entry.  A terminating process normally has two neighbors (except when it is at the very top or bottom of memory).  Updating the list requires replacing a P (process) entry by an H (hole) entry, possibly merging it with neighboring holes. Possible combinations: Figure 4-6. Four neighbor combinations for the terminating process, X.
  • 78. Several algorithms can be used to allocate memory: First fit: the manager scans the list and uses the first hole big enough for the process, then breaks the hole into two segments, one for the process and one for the unused remainder. Next fit: works like first fit, except that instead of always searching from the beginning of the list, it starts from the place where it left off the previous time. Best fit: finds the smallest hole that is adequate for the process. Worst fit: always takes the largest hole, so that the hole broken off is big enough to be useful.
  • 79. Virtual memory  Virtual memory is a concept associated with the ability to address a memory space much larger than the available physical memory.  The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory available for it. The OS keeps the parts of the program currently in use in main memory, and the rest on the disk.  All microprocessors now support virtual memory. Virtual memory can be implemented by the two most commonly used methods, paging and segmentation, or a mix of both.  "VM is memory that appears to work beyond its physical capacity."
  • 80. Paging Paging is a technique used to implement virtual memory. • Virtual address space vs. physical address space • The set of all virtual (logical) addresses generated by a program is the virtual address space; the set of all physical addresses corresponding to these virtual addresses is the physical address space. • The run-time mapping from virtual address to physical address is done by a hardware device called the memory management unit (MMU).
  • 81. The virtual address space of a process is divided into fixed-size blocks called pages, and the corresponding same-size blocks in main memory are called frames. When a process is to be executed, its pages are loaded into any available memory frames from the backing store (HDD).  Paging permits the physical address space of a process to be noncontiguous.  Traditionally, support for paging has been handled by hardware, but recent designs implement it by closely integrating the hardware and the OS.
  • 82. This example shows how a 64KB program can run in 32KB of physical memory. The complete copy is stored on disk, and pieces are brought into memory as needed. Fig: Mapping between logical and physical address
  • 83.  With a 64KB virtual address space, 32KB of physical memory, and a 4KB page size, we get 16 virtual pages and 8 frames.  When a page is not mapped to a frame, a page fault occurs; one of the little-used pages must then be evicted from RAM to make room for the new page. Address mapping is done by the page table. The 16-bit incoming virtual address is divided into: • a 4-bit page number (p), used as an index into the page table, which contains the base address of each page in physical memory; • a 12-bit page offset (d), combined with the base address to define the physical memory address that is sent to the memory unit.
  • 85. Page table  The purpose of the page table is to map virtual pages onto page frames. This function can be represented in mathematical notation as:  page_frame = page_table(page_number)  The virtual page number is used as an index into the page table to find the corresponding page frame.  The exact layout of a page table entry is highly machine dependent, but a common structure across machines is:
  • 86. • Page frame number: the goal of the lookup is to locate this. • Present/absent bit: if the present/absent bit is set, the virtual address is mapped to the corresponding physical address; if it is clear, a trap called a page fault occurs. • Protection bits: tell what kinds of access are permitted: read, write, or read only. • Modified bit (dirty bit): records whether the page has been changed since it was loaded; if the page has been modified, it must be written back to disk before eviction. • Referenced bit: set whenever the page is referenced; used in page replacement.
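The translation performed by the MMU for the 16-bit example above (4-bit page number, 12-bit offset, 4KB pages) can be sketched as follows; the page-table contents and `None` for an absent page are illustrative assumptions:

```python
# Sketch of the MMU mapping from slides 83-86: a 16-bit virtual address is
# split into a 4-bit page number and a 12-bit offset (4 KB pages). The page
# table maps page number -> frame number; None models a clear present bit.
PAGE_SIZE = 4096  # 2**12

def translate(vaddr, page_table):
    page = vaddr >> 12          # top 4 bits: virtual page number
    offset = vaddr & 0xFFF      # low 12 bits: offset within the page
    frame = page_table[page]
    if frame is None:
        # present/absent bit is clear: trap to the OS (page fault)
        raise RuntimeError("page fault on page %d" % page)
    return frame * PAGE_SIZE + offset

# 16 virtual pages, 8 physical frames; unmapped pages marked None.
page_table = [2, 1, 6, 0, 4, 3, None, 5] + [None] * 8
print(hex(translate(0x20AC, page_table)))  # page 2 -> frame 6 → 0x60ac
```

The offset passes through unchanged; only the page number is replaced by a frame number, which is why page and frame sizes must match.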
  • 87. Page replacement algorithms  When a page fault occurs, the OS has to choose a page to remove from memory to make room for the page that has to be brought in.  An algorithm used to choose which page to evict from main memory is known as a page replacement algorithm. Principle of optimality: to obtain optimal performance, the page to replace is the one that will not be used for the longest time in the future. Optimal Page Replacement (OPR)
  • 89. Advantages: • It is the optimal page-replacement algorithm; it guarantees the lowest possible page-fault rate. Problems: • Unrealizable: at the time of the page fault, the OS has no way of knowing when each of the pages will be referenced next. • It is therefore not used in practical systems.
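OPR can only be simulated offline, on a reference string that is known in advance; the following sketch (names and reference string are illustrative) shows why: the victim is chosen by looking into the *future* of the reference string, which a real OS cannot do:

```python
# Sketch of Optimal Page Replacement on a known reference string.
# Only possible offline, which is why the slide calls it unrealizable.
def optimal(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: no fault
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)           # free frame available
        else:
            # Evict the page whose next use is furthest in the future
            # (pages never used again are the best victims of all).
            future = refs[i + 1:]
            victim = max(
                frames,
                key=lambda p: future.index(p) if p in future else len(future) + 1,
            )
            frames[frames.index(victim)] = page
    return faults

print(optimal([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # → 7
```

In practice this serves only as a yardstick against which realizable algorithms such as FIFO are measured.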
  • 90. FIFO (First-In, First-Out): the oldest page in memory, i.e., the one that arrived first, is the one selected for replacement.
  • 91. Advantages: • Easy to understand and program. • Distributes a fair chance to all pages. Problems: • FIFO is likely to replace heavily (or constantly) used pages that are still needed for further processing. Second chance algorithm:  A modification of FIFO.  When a page is selected, we inspect its reference bit; if it is 0, we proceed to replace that page.  If it is 1, we give the page a second chance and move on to select the next page. • When a page gets a second chance, its reference bit is cleared and its arrival time is reset to the current time.
  • 92. Problems: • If all the pages have been referenced, second chance degenerates into pure FIFO.
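The second chance algorithm above can be sketched with a FIFO queue of (page, reference bit) pairs; this is an illustrative simulation (the queue representation and the way hits set the reference bit are assumptions made for the sketch):

```python
# Sketch of the second chance algorithm: FIFO plus a reference bit.
from collections import deque

def second_chance(refs, n_frames):
    queue, faults = deque(), 0           # each entry: [page, ref_bit]
    for page in refs:
        for entry in queue:
            if entry[0] == page:         # hit: hardware would set the bit
                entry[1] = 1
                break
        else:
            faults += 1                  # page fault
            if len(queue) == n_frames:
                while queue[0][1] == 1:  # referenced: give a second chance
                    old = queue.popleft()
                    old[1] = 0           # clear the reference bit ...
                    queue.append(old)    # ... and treat it as newly arrived
                queue.popleft()          # victim: oldest page with bit 0
            queue.append([page, 0])
    return faults

print(second_chance([1, 2, 3, 1, 4, 5], 3))  # → 5
```

If every queued page has its bit set, the `while` loop clears them all and the algorithm falls back to evicting the original head, which is exactly the degeneration to pure FIFO noted above.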
  • 94. Segmentation  Paged virtual memory is one-dimensional: virtual addresses go from 0 to some maximum, one address after another.  For many problems, having two or more separate address spaces is better.  For example, a compiler has many tables that are built up as compilation proceeds:  the source text;  the symbol table, containing the names and attributes of variables;  the table containing all the integer and floating-point constants;  the parse tree, containing the syntactic analysis of the program;  the stack used for procedure calls within the compiler.
  • 95. In a one-dimensional virtual address space, the first four tables grow continuously as compilation proceeds, and the last one grows and shrinks in unpredictable ways. In a one-dimensional memory, each table would have to be allocated a contiguous chunk of virtual address space. Consider what happens if a program has an exceptionally large number of variables but a normal amount of everything else: the chunk of address space allocated for the symbol table may fill up even though there is plenty of room in the other tables. Fig: a one-dimensional address space holding the stack, parse tree, constant table, source text, and symbol table.
  • 96. Solution: The general solution to these problems is to provide the machine with many completely independent address spaces, called segments.
  • 97. • Each segment has its own name and size. • Different segments can grow or shrink independently, without affecting the others; so the size of a segment may change during execution. • The process of providing independent address spaces to a program is known as segmentation.
  • 98. Deadlock A process in a multiprogramming system is said to be deadlocked if it is waiting for a particular event that will never occur. Deadlock example: • All automobiles are trying to cross. • Traffic is completely stopped. • It is not possible to proceed without backing some cars up. Traffic deadlock
  • 99. Resource deadlock System model: a process requests a resource before using it and releases it after using it. • 1. Request the resource. • 2. Use the resource. • 3. Release the resource. • If the resource is not available when it is requested, the requesting process is forced into a wait state. When waiting processes can never run again because the resources they have requested are held by other waiting processes, the situation is called a deadlock.
  • 100. Conditions for deadlock 1. Mutual exclusion: each resource is either currently assigned to exactly one process or is available. 2. Hold and wait: processes hold resources already allocated to them while waiting for additional resources. 3. No preemption: resources previously granted cannot be forcibly taken away from a process. 4. Circular wait: there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
  • 101. Deadlock modeling These four conditions can be modeled using directed graphs. These graphs have two kinds of nodes: processes, shown as circles, and resources, shown as squares.  An arc from a resource node (square) to a process node (circle) means that the resource has previously been requested by, granted to, and is currently held by that process.  An arc from a process to a resource means that the process is currently blocked waiting for that resource. Fig: (a) holding a resource, (b) requesting a resource, (c) deadlock.
  • 102. In fig (c): process P1 holds resource R1 and needs resource R2 to continue; process P2 holds resource R2 and needs resource R1 to continue: deadlock. A deadlock situation can arise only if all four conditions hold simultaneously in the system. Deadlock handling strategies: • We can use a protocol to prevent or avoid deadlocks, ensuring that the system never enters a deadlock state. • We can allow the system to enter a deadlock state, detect it, and recover. • We can ignore the problem altogether and pretend that deadlocks never occur in the system.
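The detect-and-recover strategy reduces to finding a cycle in the directed graph described above; the following sketch (an illustrative depth-first search, with single-instance resources assumed) shows the P1/R1/P2/R2 situation of fig (c) being flagged as a deadlock:

```python
# Sketch: a resource-allocation graph with single-instance resources is
# deadlocked iff it contains a cycle. Edges: resource -> process = held,
# process -> resource = requested (as in the slide's figure).
def has_cycle(graph):
    """graph: node -> list of successor nodes. DFS with three colors."""
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on the stack / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Fig (c): P1 holds R1 and requests R2; P2 holds R2 and requests R1.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))  # → True
```

Breaking the cycle (e.g., by preempting one resource or killing one process) is what recovery amounts to in this model.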
  • 103. File system Overview  File concept  File attributes  File operations  File types  File structure  Access methods  Directory structure
  • 104. File concept • How do we store large amounts of data in the computer? • What happens when a process that is using some data terminates or is killed? • How can the same data be made available to multiple processes? The solution to all these problems is to store information on disks or other external media in units called files. A file is a named collection of related information that normally resides on a secondary storage device such as a disk or tape.
  • 105. • Commonly, files represent programs (both source and object forms) and data; data files may be numeric, alphanumeric, or binary. • Information stored in files must be persistent, i.e., not affected by power failures and system reboots. • Files are managed by the OS. • The part of the OS that is responsible for managing files is known as the file system.
  • 106. File system issues • How are files created? • How are they named? • How are they structured? • What operations are allowed on files? • How are they protected? • How are they accessed and used? • How is the file system implemented?
  • 107. File naming  When a process creates a file, it gives the file a name; when the process terminates, the file continues to exist and can be accessed by other processes.  A file is named for the convenience of its human users and is referred to by its name. A name is a string of characters, which may include digits and some special characters (e.g., 2, !, %).  Many OSs support two-part file names separated by a period; the part following the period is called the file extension.  The extension usually indicates something about the file (e.g., file.c is a C source file). Some systems allow two or more extensions, as in Unix: proc.c.Z is a C source file compressed using the Ziv-Lempel algorithm.
  • 108. File structure • Files must have a structure that is understood by the OS. Files can be structured in several ways. The most common structures are: • Unstructured • Record structured • Tree structured
  • 110. Unstructured: • Consists of an unstructured sequence of bytes or words. The OS does not know or care what is in the file; any meaning must be imposed by user-level programs. • Provides maximum flexibility; users can put anything they want in their files and name them any way that is convenient. • Both Unix and Windows use this approach. Record structured: • A file is a sequence of fixed-length records, each with some internal structure. • Each read operation returns one record, and each write operation overwrites or appends one record. • Many old mainframe systems used this structure.
  • 111. Tree structured: • A file consists of a tree of records, not necessarily all the same length. • Each record contains a key field in a fixed position; the tree is sorted on the key to allow rapid searching. • The basic operation is to get the record with a specific key. • Used on large mainframes for commercial data processing.
  • 112. File types Many OSs support several types of files: • Regular files: contain user information; generally ASCII or binary. • Directories: system files for maintaining the structure of the file system. • Character special files: related to I/O, used to model serial I/O devices such as terminals, printers, and networks. • Block special files: used to model disks.
  • 113. ASCII files: • Consist of lines of text. • They can be displayed and printed as is, and can be edited with an ordinary text editor. Binary files: • Consist of a sequence of bytes only. • They have some internal structure known to the programs that use them (e.g., executable or archive files).
  • 115. Direct access • Files whose bytes or records can be read in any order. Based on the disk model of a file, since disks allow random access to any block. • Used for immediate access to large amounts of information. File attributes • In addition to its name and data, all other information about a file is termed its file attributes. • File attributes vary from system to system. • Some common attributes are listed here.
  • 117. File operations • The OS provides system calls to perform operations on files. Some common calls are: • Create: if disk space is available, creates a new file without data. • Delete: deletes a file to free up disk space. • Open: before using a file, a process must open it. • Close: when all accesses are finished, the file should be closed to free up internal table space. • Read: reads data from a file.
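The create/open/write/close/read/delete cycle above can be illustrated with Python's low-level `os` calls, which map closely onto these system calls (the file path is a throwaway temporary file chosen for the sketch):

```python
# Sketch: the file-operation system calls exercised via Python's os module.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # create + open for writing
os.write(fd, b"hello, file system")            # write data into the file
os.close(fd)                                   # close frees the table entry

fd = os.open(path, os.O_RDONLY)                # open again, for reading
data = os.read(fd, 100)                        # read up to 100 bytes
os.close(fd)

print(data.decode())  # → hello, file system
os.remove(path)       # delete frees the disk space
```

Note that the descriptor returned by `os.open` is exactly the "internal table space" the Close call is said to free: leaving files open eventually exhausts the per-process descriptor table.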
  • 119. Directory structure Single-level directory: • All files are contained in the same directory. • Easy to support and understand, but difficult to manage with a large number of files or with multiple users. • Files are difficult to manage if two or more files have the same name, i.e., there are overwrite problems.
  • 120. Two-level directory: • A separate directory for each user. • Used on multiuser computers and on simple network computers. • It causes problems when users want to cooperate on some task and access one another's files. It also causes problems when a single user has a large number of files.
  • 121. Hierarchical directory: • A generalization of the two-level structure to a tree of arbitrary height. • This allows users to create their own subdirectories and organize their files accordingly. • Nearly all modern file systems are organized in this manner.