2. Operating System
• OS is an interface between user and computer.
• It is a program that manages computer resources.
• It acts as an intermediary between a user and those resources.
3. Topics :
• Functions of OS
• Operations of OS
• Operations of a process
• Scheduling algorithms
i) FCFS scheduling
ii) SJF scheduling
iii) RR scheduling
• Paging
• File system implementation
• Cryptography as a security tool
4. Functions of OS
Following are some of the important functions of an operating system.
Memory Management
Processor Management
Device Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
5. Memory Management
An Operating System does the following activities for memory management:
1. Keeps track of primary memory, i.e., which parts of it are in use and by whom,
and which parts are not in use.
2. In multiprogramming, the OS decides which process will get memory
when and how much.
3. Allocates the memory when a process requests it to do so.
4. De-allocates the memory when a process no longer needs it or has been
terminated.
Memory management is the management of main memory.
Main memory is a large array of words/bytes, where each word/byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in main memory.
6. Processor Management
In multiprogramming environment, the OS decides which process gets the
processor when and for how much time. This function is called process scheduling.
An Operating System does the following activities for processor management:
1. Keeps track of the processor and the status of processes. The program responsible for this
task is known as the traffic controller.
2. Allocates the processor (CPU) to a process.
3. De-allocates processor when a process is no longer required.
Device Management
An Operating System manages device communication via their respective
drivers. It does the following activities for device management:
1.Keeps track of all devices. The program responsible for this task is known as the
I/O controller.
2.Decides which process gets the device when and for how much time.
3.Allocates the device in the most efficient way.
4.De-allocates devices.
7. File Management
A file system is normally organized into directories for easy navigation and
usage. These directories may contain files and other directories.
An Operating System does the following activities for file management:
1.Keeps track of information, location, uses, status etc. The collective facilities are often
known as file system.
2.Decides who gets the resources.
3.Allocates the resources.
4.De-allocates the resources.
Security : By means of password and similar other techniques, it prevents unauthorized
access to programs and data.
Control over system performance : Recording delays between request for a service and
response from the system.
Job accounting : Keeping track of time and resources used by various jobs and users.
Error detecting aids : Production of dumps, traces, error messages, and other
debugging and error detecting aids.
Coordination between other software and users : Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the
computer systems.
8. Operations of OS
• The modern operating systems are interrupt driven.
If there are no processes to execute,
no I/O devices to service,
and no users to whom to respond,
then the operating system will sit quietly, waiting for something to happen.
• Events are almost always signalled by the occurrence of an interrupt or a trap.
A trap / an exception : is a software-generated interrupt caused either by an error
or by a specific request from a user program that an OS service be performed.
Ex: division by zero ( or ) invalid memory access
• An interrupt service routine is provided to deal with the interrupt.
• A properly designed OS must ensure that an incorrect (or malicious) program cannot cause
other programs to execute incorrectly
9. Dual-Mode:
• To ensure the proper execution of the operating system we have to distinguish
between the execution of operating-system code and user defined code.
There are two separate modes of operation:
1. User mode
2. Kernel mode (also called supervisor mode, system mode, or privileged mode)
Mode bit :
• a bit is added to the h/w of the computer to indicate the current mode
• with this we can distinguish between a task
that is executed on behalf of the OS and one that is executed on behalf of the user.
• However, when a user application requests a service from the OS (via a system call),
the system must transition from user to kernel mode to fulfil the request.
• At system boot time, the hardware starts in kernel mode. The operating system is
then loaded and starts user applications in user mode.
0 - Kernel
1 - User
11. Timer:
A timer can be set to interrupt the computer after a specified period.
The period may be:
fixed ( Ex : 1/60 second ) or
variable ( Ex : from 1 ms to 1 s ).
Variable timer :
• implemented by a fixed-rate clock and a counter.
• The operating system sets the counter.
• Every time the clock ticks, the counter is decremented.
• When the counter reaches 0, an interrupt occurs.
12. Operations on Processes :
The processes in most systems can be created and deleted dynamically.
Thus, these systems must provide a mechanism for process creation and termination.
➢ Process creation
➢ Process termination
1 . Process Creation :
• A process may create several new processes.
Each of these new processes may in turn create other processes, forming a tree of
processes.
pid ( process identifier ):
The pid provides a unique value (integer) for each process in the system, and it can be
used as an index to access various attributes of a process within the kernel.
Concurrent execution
13. Typical processes in the Linux process tree:
• init process : the root parent process
• kthreadd process : responsible for creating additional kernel processes
• sshd process : responsible for managing clients that connect to the system
• login process : responsible for managing clients that directly log onto the system
14. When a process creates a new process
Two possibilities for execution exist:
1.The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
Two possibilities for address-space exist:
1. The child process is a duplicate of the parent process
2. The child process has a new program loaded into it
fork()
A new process is created by the fork() system call.
The new process consists of a copy of the address space of the original process.
This mechanism allows the parent process to communicate easily with its child process.
exec()
to replace the process’s memory space with a new program.
The exec() system call loads a binary file into memory and starts its execution.
In this manner, the two processes are able to communicate and then go their separate ways.
wait()
If the parent has nothing else to do while the child runs, it can issue a wait() system call
to move itself off the ready queue until the child terminates.
15.
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    pid_t pid;

    pid = fork();            /* create a child process */
    if (pid < 0) {           /* fork failed */
        fprintf(stderr, "Fork Failed");
        return 1;
    }
    else if (pid == 0) {     /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                   /* parent process */
        wait(NULL);          /* wait for the child to terminate */
        printf("Child Complete");
    }
    return 0;
}
C programs for creating a separate process using the UNIX and Windows APIs
#include <stdio.h>
#include <windows.h>

int main(VOID)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* create a child process running mspaint */
    if (!CreateProcess(NULL, "C:\\WINDOWS\\system32\\mspaint.exe",
            NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        fprintf(stderr, "Create Process Failed");
        return -1;
    }
    /* parent waits for the child to complete */
    WaitForSingleObject(pi.hProcess, INFINITE);
    printf("Child Complete");
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
}
16. 2 . Process Termination:
exit()
A process terminates when it finishes executing its final statement
and asks the operating system to delete it by using the exit() system call.
Termination can occur in other circumstances as well.
1. A process can cause the termination of another process via an appropriate system call.
Ex : TerminateProcess() in Windows
2. A parent may terminate the execution of one of its children for a variety of reasons, such as
The child has exceeded its usage of some of the resources that it has been allocated.
The task assigned to the child is no longer required.
The parent is exiting, and the OS doesn’t allow a child to continue if its parent terminates.
Cascading termination :The phenomenon in which if a process terminates then all
its children must also be terminated.
Ex :
/* exit with status 1 */
exit(1);
17. Zombie process : a process that has terminated, but whose parent has not yet called wait().
Once the parent calls wait(), the pid of the zombie process is released.
Now consider what would happen if a parent did not invoke wait() and instead terminated,
thereby leaving its child processes as orphans.
Linux and UNIX address this scenario by assigning the init process as the new parent to
orphan processes.
The init process periodically invokes wait(), thereby allowing the exit status of any orphaned
process to be collected and releasing the orphan's process identifier and process-table entry.
18. Scheduling algorithms
(i) First-Come, First-Served (FCFS) Scheduling:
The simplest CPU-scheduling algorithm is the FCFS scheduling algorithm.
In this, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue.
When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue.
The running process is then removed from the queue.
Pros : The code for FCFS scheduling is simple to write and understand.
Cons :The average waiting time under the FCFS policy is often quite long.
Example : consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds (FCFS is non-preemptive):

Process   Burst time
P1            24
P2             3
P3             3
19. If the processes arrive in the order P1, P2, P3 and are served in FCFS order,
we get the result shown in the following Gantt chart (a bar chart that illustrates a
particular schedule, including the start and finish times of each of the participating
processes):

| P1 | P2 | P3 |
0    24   27   30
The waiting time is
0 milliseconds for process P1
24 milliseconds for process P2
27 milliseconds for process P3.
The average waiting time is = (0 + 24 + 27)/3
= 17 milliseconds.
If the processes arrive in the order P2, P3, P1, however, the results will be as
shown in the following Gantt chart:

| P2 | P3 | P1 |
0    3    6    30

The average waiting time = (6 + 0 + 3)/3 = 3 milliseconds.
This reduction is substantial.
Thus, the average waiting time under an FCFS
policy is generally not minimal and may vary
substantially if the processes’ CPU burst times vary
greatly.
20. (ii) Shortest Job First (SJF):
• This is also known as Shortest Job Next (SJN).
• It can be run as a non-preemptive algorithm or as a preemptive one (shortest-remaining-time-first).
• It is the best approach to minimize waiting time.
• Easy to implement in batch systems, where the required CPU time is known in advance.
• Impossible to implement in interactive systems, where the required CPU time is unknown.
• The processor should know in advance how much time the process will take.
Ex :
SJF ( Shortest Job First ) : out of all available processes, SJF selects the process
having the shortest burst time.
21.
Process   AT   BT   CT   TAT (CT-AT)   WT (TAT-BT)
P1         0    5    5        5             0
P2         1    3    9        8             5
P3         2    1    6        4             3
P4         3    3   12        9             6
P5         4    5   17       13             8

Gantt chart:
| P1 | P3 | P2 | P4 | P5 |
0    5    6    9    12   17

Average turnaround time = (5 + 8 + 4 + 9 + 13) / 5 = 39 / 5 = 7.8 ms
Average waiting time = (0 + 5 + 3 + 6 + 8) / 5 = 22 / 5 = 4.4 ms
22. (iii) Round-Robin (RR) Scheduling:
This algorithm is designed especially for time-sharing systems.
It is similar to FCFS, but preemption is added to enable the system to switch
between processes.
A small unit of time, called a time quantum (10 - 100 milliseconds), is defined.
The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue, allocating the CPU to each
process for a time interval of up to 1 time quantum.
To implement RR scheduling, we again treat the ready queue as a FIFO queue of
processes.
New processes are added to the tail of the ready queue.
If a process does not complete before its CPU time expires, the CPU is preempted
and given to the next process waiting in the queue.
RR is preemptive.
23. Example : time quantum = 20

Process   Burst time
P1            53
P2            17
P3            68
P4            24

Gantt chart:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162

Remaining burst after round 1: P1 = 33 (53 - 20), P3 = 48, P4 = 4 (P2 finished at 37).
Remaining burst after round 2: P1 = 13, P3 = 28 (P4 finished at 121).
Remaining burst after round 3: P3 = 8 (P1 finished at 134); P3 finishes at 162.
24. Paging:
• It’s a memory management technique
• In this technique, the process address space is broken into blocks of the same size
(a power of 2) called pages
• The size of the process is measured in the number of pages.
• Similarly, main memory is divided into small fixed-sized blocks of (physical)
memory called frames
• The size of a frame is kept the same as that of a page to have optimum
utilization of the main memory and to avoid external fragmentation.
• Size of the page = size of the frame.
• Paging avoids external fragmentation
26. In the figure, every address generated by the CPU is divided into two parts:
1. a page number (p)
2. a page offset (d).
The page number is used as an index into the page table.
The page table contains the base address of each page in physical memory.
This base address is combined with the page offset to define the physical memory address that
is sent to the memory unit.
The logical address is as follows:
p is an index into the page table
d is the displacement within the page.
The paging model of memory is
as follows
27. Hardware Support:
• Page table is kept in main memory
• The page-table base register (PTBR) points to
the page table
• The page-table length register (PTLR)
indicates the size of the page table
• In this scheme every data/instruction
access requires two memory accesses.
One for the page table
One for the data/instruction.
• The two memory access problem can be
solved by the use of a special fast-lookup
hardware cache called associative memory
or translation look-aside buffers (TLBs)
28. File system implementation
File systems store several important data structures on the disk:
1. A boot-control block ( per volume ):
known as the boot block in UFS or the partition boot sector in NTFS;
it contains the information the system needs to boot an operating system
from this volume.
2. A volume control block ( per volume ):
known as the superblock in UFS or the master file table in NTFS;
it contains volume details such as the total number of blocks, the block size,
the free-block count, and free-block pointers.
3. A directory structure ( per file system ),
containing file names and pointers to the corresponding FCBs.
29. 4. The File Control Block, FCB, ( per file )
containing details about ownership, size, permissions, dates, etc
There are also several key data structures stored in memory:
1. A system-wide open file table containing a copy of the FCB for every
currently open file in the system, as well as some other related information.
2. A per-process open file table, containing a pointer to the system open
file table as well as some other information.
Allocation Methods:
There are three major methods of storing files on disks:
1. contiguous
2. linked
3. indexed.
30. 1. Contiguous Allocation
• It requires that all blocks of
a file be kept together
contiguously.
• Performance is very fast
• Over-estimation of the file's
final size increases external
fragmentation
• Under-estimation may
require a process to be
aborted
31. 2. Linked Allocation
• Disk files can be stored as
linked lists
• Linked allocation involves
no external fragmentation
• It allows files to grow
dynamically at any time.
• Unfortunately linked
allocation is only efficient
for sequential access files
• Another big problem with
linked allocation is
reliability: if a pointer is
damaged, the rest of the
file becomes unreachable
32. 3. Indexed Allocation
• Indexed Allocation
combines all of the indexes
for accessing each file into a
common block
There are several approaches:
a) Linked scheme
b) Multi level index scheme
c) combined scheme
33. Cryptography as a Security Tool:
Two big questions of security:
Trust - How can the system be sure that the messages received are really from the
source that they say they are, and can that source be trusted?
Confidentiality - How can one ensure that the messages one is sending are
received only by the intended recipient?
Encryption:
The basic idea of encryption is to encode a message so that only the desired
recipient can decode and read it.
34. Steps in the procedure of Encryption :
1. The sender first creates a message, m, in
plaintext.
2. The message is then entered into an
encryption algorithm, E, along
with the encryption key, Ke.
3. The encryption algorithm generates the
ciphertext, c.
4. The ciphertext can then be sent over an
unsecure network, where it may be
received by attackers.
5. The recipient enters the ciphertext into
a decryption algorithm, D, along with the
decryption key, Kd.
6. The decryption algorithm re-generates
the plaintext message, m.
35. Types of Encryption :
1. Symmetric Encryption
2. Asymmetric Encryption
1. Symmetric Encryption
In this, the same key is used for both encryption and decryption, and it must be guarded.
Two symmetric encryption algorithms developed by NIST:
1. The Data Encryption Standard [DES] : messages are broken down into 64-bit chunks,
each of which is encrypted using a 56-bit key. DES is known as a block cipher.
2. The Advanced Encryption Standard [AES] : developed by NIST
in 2001 to replace DES; uses key lengths of 128, 192, or 256 bits, and encrypts in
blocks of 128 bits
36. 2. Asymmetric Encryption
• In this decryption key Kd is not the same as the encryption key Ke
• means the encryption key can be made publicly available, and only the
decryption key needs to be kept secret ( or vice-versa)
• One of the most widely used asymmetric encryption algorithms is RSA
(named after Rivest, Shamir, and Adleman)
RSA Algorithm :
• RSA is based on two large prime numbers, p and q, ( on the order of 512 bits
each ), and their product N = p * q.
• Ke and Kd must satisfy the relationship:
( Ke * Kd ) % [ ( p - 1 ) * ( q - 1 ) ] == 1
• The encryption algorithm is:
c = E(Ke)(m) = m^Ke % N
• The decryption algorithm is:
m = D(Kd)(c) = c^Kd % N
37. An example using small numbers:
Let p = 7 and q = 13
Then N = 7 * 13 = 91
( p - 1 ) * ( q - 1 ) = 6 * 12 = 72
Select Ke < 72 and (Ke,72) =1
Let Ke=5
Now select Kd, such that ( Ke * Kd ) % 72 = = 1
( 5 * Kd ) % 72 = = 1
Let Kd=29
The public key = (Ke,N) = ( 5, 91 )
The private key = (Kd ,N) = ( 29, 91 )
Let the message, m = 42
Encrypt: c = m^Ke % N=42^5 % 91 = 35
Decrypt: m= c^Kd % N= 35^29 % 91 = 42
Note
• Asymmetric encryption is
much more expensive than
symmetric encryption
• It is not normally used for
large transmissions.
• Asymmetric encryption is
suitable for small
messages