Operating System
Engr. Syed Zaid Irshad
What Do Operating Systems Do?
● A computer system consists of four components:
○ Hardware
○ Operating System
○ Application Programs
○ Users
● An operating system acts as a bridge between the user and the hardware.
● It runs the applications used by the user and manages hardware resources in an
efficient manner.
● An operating system consists of two parts:
○ Kernel (the one program that runs at all times on the computer)
○ System programs (associated with the OS but part of neither the kernel nor the application programs)
Cont.
● How an operating system is seen differs with the point of view:
○ User view (provider of the interface and of performance)
○ System view (manager of time, space, devices, and much more)
● Definition of an operating system:
○ It is a system program that provides an interface between the user and the computer. When the computer
boots up, the operating system is the first program that loads.
Computer-System Organization
● It consists of three parts:
○ Computer System Operation
■ Initially, a bootstrap program (firmware) stored in ROM/EEPROM runs when we power up
the computer.
■ It initializes the necessary resources and loads the kernel.
■ Then system programs are loaded into memory.
■ The system then waits for an interrupt to occur (called a system call or monitor call when
software initiates the trigger).
○ Storage Structure
■ For a program to be executed, it must be stored in main memory (RAM/DRAM).
■ Storage hierarchy: Registers -> Cache -> Main Memory -> Solid-State Disk -> Magnetic Disk -> Optical
Disk -> Magnetic Tapes
Cont.
○ I/O Structure
■ A large portion of the OS is dedicated to managing I/O devices
Computer-System Architecture
● Single Processor Systems
○ One general-purpose CPU, possibly alongside special-purpose processors for specific tasks.
● Multiprocessor System
○ Also known as parallel systems or multicore systems.
○ Advantages:
■ Increased throughput (ideally more work is done in less time, though in reality coordination creates overhead)
■ Economy of scale (multiple single-processor systems cost more than one multiprocessor
system)
■ Increased reliability (if one CPU fails, the other CPUs can absorb its load)
Cont.
● Multiprocessor System
○ Types of multiprocessors
■ Asymmetric multiprocessing (boss-worker relationship)
■ Symmetric multiprocessing (peer relationship)
■ Multicore systems are multiprocessor systems, but not all multiprocessor systems are
multicore
Cont.
● Clustered Systems
○ A type of multiprocessor system in which multiple systems are joined together to create a
cluster of systems.
○ In asymmetric clustering, one machine stays in hot-standby mode and takes over processing
if the active system fails.
○ Symmetric clustering involves all participating nodes processing instructions while monitoring
each other.
○ If an application is written so that its components can run on separate systems (known as
parallelization), we may be able to reduce its execution time.
Operating-System Structure
● Multiprogramming
○ It allows multiple processes to run by switching between them.
● Time Sharing
○ It allows multiple processes/users to use the same resources in their respective time slots.
○ Response time is usually under one second, which gives the illusion of parallel processing.
○ To ensure a reasonable response time, swapping is performed, which moves processes in
and out of memory.
○ Virtual memory goes further, allowing a process to execute without residing entirely in memory.
Cont.
● If memory is too small to hold all the processes, a job pool on disk holds the rest
● If several processes are ready to be brought in, job scheduling decides which
processes go to memory
● If several processes are ready for the CPU, CPU scheduling decides which process
will get CPU time
Operating-System Operations
● The operating system waits for something to happen before performing an action
● This something is either an interrupt or a trap
● An interrupt is raised by the hardware
● A trap is either an error or a request for an operating-system service
● The operating system has to ensure that one interrupt/trap does not affect the other running
programs
Cont.
● Types of Operations
○ Dual-mode and Multimode
■ Kernel mode and user mode (the appropriate mode is assigned to each activity).
○ Timer
■ Limits how long a process can hold the CPU.
Operating Systems Types
● Mainframe operating systems, e.g., IBM z/OS
● Server operating systems, e.g., Linux
● Multiprocessor operating systems, e.g., Unix (symmetric architecture)
● Personal computer operating systems, e.g., macOS
● Handheld computer operating systems, e.g., Android
● Embedded operating systems, e.g., uC/OS
● Sensor-node operating systems, e.g., TinyOS
● Real-time operating systems, e.g., QNX (used in automobiles)
● Smart card operating systems, e.g., Java Card
Process Management
● Scheduling processes and threads on the CPUs
● Creating and deleting both user and system processes
● Suspending and resuming processes
● Providing mechanisms for process synchronization
○ Restrict the access of shared data/resources at the same time
● Providing mechanisms for process communication
○ Convey information between multiple processes
Memory Management
● Keeping track of which parts of memory are currently being used and who is using them
● Deciding which processes (or parts of processes) and data to move into and out of
memory
● Allocating and deallocating memory space as needed
Storage Management
● File-System Management
○ Creating and deleting files
○ Creating and deleting directories to organize files
○ Supporting primitives for manipulating files and directories
○ Mapping files onto secondary storage
○ Backing up files on stable (nonvolatile) storage media
● Mass-Storage Management
○ Free-space management
○ Storage allocation
○ Disk scheduling (Requests that need to use disk)
Cont.
● Caching
○ If the same data is shared among multiple processors, each CPU cache must hold a consistent
copy of that information.
○ Keeping the caches consistent is called cache coherency
● I/O Systems
○ A memory-management component that includes buffering, caching, and spooling (overlapping
the output of one job with the I/O of others)
○ A general device-driver interface
○ Drivers for specific hardware devices
Protection and Security
● Protection, then, is any mechanism for controlling the access of processes or users to the
resources defined by a computer system.
● This mechanism must provide means to specify the controls to be imposed and to
enforce the controls.
● A system can have adequate protection but still be prone to failure and allow
inappropriate access.
● It is the job of security to defend a system from external and internal attacks.
Kernel Data Structures
● Main memory is organized as an array
● Stacks are used to manage function calls
● Tasks that are waiting are organized in queues
● Linux uses trees (e.g., a red-black tree) for CPU scheduling
● Hashing is used to retrieve data quickly
● Bitmaps are used to track the availability of resources
Computing Environments
● Traditional Computing (Desktops, Laptops, Online Portals)
● Mobile Computing (Handheld Devices)
● Distributed Computing (Physically separated systems attached using network)
● Client Server Computing
○ Compute-server system (the user sends a request; the server performs the task and returns results)
○ File-server system (clients create, read, update, and delete files on the server, e.g., a web server serving browsers)
● Peer-to-Peer Computing (Multiple nodes joined in network)
● Virtualization (Multiple environments created using emulator)
Cont.
● Cloud Computing
○ Public cloud (available to anyone who pays), e.g., Microsoft Azure
○ Private cloud (owned and run by one organization), e.g., HP data centers
○ Hybrid cloud (combination of public and private), e.g., VMware Cloud on AWS
○ Software as a Service (SaaS): applications available via the internet, e.g., Microsoft 365
○ Platform as a Service (PaaS): a software stack ready for use via the internet, e.g., Google App Engine
○ Infrastructure as a Service (IaaS): servers/storage available over the internet, e.g., Amazon Web
Services
● Real-Time Embedded System
Operating-System Services
● User Interface
● Program Execution
● I/O Operations
● File-System Manipulation
● Communications
● Error Detection
● Resource Allocation
● Logging
● Protection & Security
User and Operating-System Interface
● There are three fundamental approaches for users to interact with the OS:
○ Command interpreter (Windows PowerShell, Linux terminal)
○ Graphical User Interface (Keyboard/mouse)
○ Touch-Screen Interface (Gestures by hand)
System Calls
● System Calls are the ways to provide essential services to applications/users.
● These services include:
○ Input/Output (I/O) Operations
○ Process Creation and Management
○ Memory Allocation and Management
○ File Management
● A program running in user mode requests a kernel service by issuing a system call, passing the kernel the
core information describing the request.
Cont.
● Most common System Calls are:
○ read(): Reads data from a file/device
○ write(): Writes data to a file/device
○ open(): Opens a file/device
○ close(): Closes a file/device
○ fork(): Creates a new process that is a copy of the calling process.
○ exec(): Replaces the current process image with a new process image.
○ getpid(): Returns the process ID of the calling process.
○ exit(): Terminates the calling process and returns control to the operating system.
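● The sketch below shows how a few of these calls fit together on a POSIX system; it is a minimal illustration, and the program run by the child (ls) is just an example choice.

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

int main() {
    pid_t pid = fork();                 // create a copy of this process
    if (pid == 0) {
        // Child: replace the process image with a new program
        execlp("ls", "ls", "-l", (char *)nullptr);
        perror("execlp");               // reached only if exec fails
        return 1;
    }
    // Parent: wait for the child to terminate
    int status;
    waitpid(pid, &status, 0);
    std::printf("parent %d: child %d finished\n", getpid(), pid);
    return 0;
}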
POSIX System Calls
● Portable Operating System Interface (POSIX) is a family of standards developed by the IEEE in the 1980s and adopted by ISO in the 1990s
● Its goal is to promote compatibility between different operating systems so that they are able to run the
same applications/software
● These standards cover:
○ Standardized file I/O operations, including file access and manipulation.
○ Process management, including process creation, termination, and communication.
○ Interprocess communication (IPC), including shared memory, message queues, and semaphores.
○ System administration, including user and group management, system logging, and time management.
○ Network interfaces and protocols, including sockets and network file systems.
Cont.
● Process management
○ pid = fork( ) Create a child process identical to the parent
○ pid = waitpid(pid, &statloc, options) Wait for a child to terminate
○ s = execve(name, argv, environp) Replace a process’ core image
○ exit(status) Terminate process execution and return status
● File management
○ fd = open(file, how, ...) Open a file for reading, writing, or both
○ s = close(fd) Close an open file
○ n = read(fd, buffer, nbytes) Read data from a file into a buffer
○ n = write(fd, buffer, nbytes) Write data from a buffer into a file
○ position = lseek(fd, offset, whence) Move the file pointer
○ s = stat(name, &buf) Get a file’s status information
Cont.
● Directory- and file-system management
○ s = mkdir(name, mode) Create a new directory
○ s = rmdir(name) Remove an empty directory
○ s = link(name1, name2) Create a new entry, name2, pointing to name1
○ s = unlink(name) Remove a directory entry
○ s = mount(special, name, flag) Mount a file system
○ s = umount(special) Unmount a file system
● Miscellaneous
○ s = chdir(dir name) Change the working directory
○ s = chmod(name, mode) Change a file’s protection bits
○ s = kill(pid, signal) Send a signal to a process
○ seconds = time(&seconds) Get the elapsed time since Jan. 1, 1970
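● As a minimal illustration of the file-management calls above, the sketch below opens a file, writes to it, seeks back to the beginning, reads the data back, and closes it (the file name demo.txt is arbitrary):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *msg = "hello\n";
    write(fd, msg, strlen(msg));        // write data from a buffer

    lseek(fd, 0, SEEK_SET);             // move the file pointer to the start
    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; std::printf("read back: %s", buf); }

    close(fd);
    return 0;
}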
System Services
● System services (also called system programs) can be divided into the following categories:
○ File management (create, delete, copy, rename, print, etc.)
○ Status information (date, time, available space, number of users, etc.)
○ File Modification (modify and create stored files’ content)
○ Programming Language Support
■ Compilers: Convert program from one language to another
■ Assemblers: Convert program to machine language
■ Debuggers: Test & debug other programs
■ Interpreters: Execute programs directly, without requiring them to be compiled first
○ Program loading and execution (load compiled program to memory)
○ Communication (Connection between processes/threads)
○ Background Services
Linkers and Loaders
● Before any program can be used, its source code is compiled, which generates an object file
● The object file is then passed to the linker, which finds all the supporting code needed for successful
execution and creates an executable file
● The executable file is then passed to the loader, which is responsible for loading it into memory
● Dynamically linked libraries (DLLs) are also added if required
● The linker and loader may be a single executable program or separate programs working together to fulfill
this duty
Why Applications Are Operating-System
Specific
● Currently, our choice of OS depends on which applications it can run
● This is because not every application is able to run on every OS
● If applications were portable, our choice would instead be based on the utilities the OS provides
● The following are a few ways an application can run on multiple OSes:
○ Write the application in an interpreted language (e.g., Python/Ruby)
○ Write the application for a virtual machine that ships with its runtime environment (RTE), which runs on the
host OS
○ Use a standard API to develop the application
● Still, the following challenges remain:
○ Each OS has its own format for the layout of executables
○ CPU instruction sets differ
○ OS system calls differ
Operating-System Structure
There are six common designs for operating systems:
1. Monolithic systems (the OS runs as a single program in kernel mode), e.g., Windows 98/MS-DOS
2. Layered systems (the OS is divided into multiple independent layers working together), e.g., Windows XP
3. Microkernels (the OS is divided into smaller pieces and only a minimal portion runs in kernel mode), e.g., Hurd
4. Client-server systems, e.g., Windows Server
5. Virtual machines (the OS is cloned for each user), e.g., Oracle VirtualBox
a. Hypervisor (Type 1 runs directly on hardware, Type 2 runs on a host OS)
6. Exokernels (rather than cloning the OS, the machine's resources are partitioned per user's requirements), e.g., Nemesis
Process Concept
● A process in an operating system is a program in execution, with its own address space and resources,
managed by the operating system for efficient and secure operation of the system.
● A single process consists of the following sections:
○ Text (Holds executable code)
○ Data (Holds global variables)
○ Heap (Dynamic Memory allocated during execution)
○ Stack (Temporary storage for invoking functions)
Cont.
● A process may be in one of the following states:
○ New (being created)
○ Running (being executed)
○ Waiting (idle until some event occurs)
○ Ready (waiting to be assigned to a processor)
○ Terminated (finished execution)
Cont.
● Each process is represented by a Process Control Block, also called a Task Control Block, which consists of
the following pieces:
○ Process State
○ Program Counter (address of next instruction for the process)
○ CPU Registers (information about which registers are needed for execution)
○ CPU-scheduling Information (process priority and scheduling parameters)
○ Memory-management Information
○ Accounting Information (stats about CPU usage, process numbers etc.)
○ I/O Status Information (list of I/O devices used/needed for process)
● Threads
○ A unit of execution within a process; each thread executes one task at a time.
Process Scheduling
● The process scheduler is responsible for selecting an available process so that CPU utilization is
maximized, which is the main purpose of multiprogramming
● The number of processes currently residing in memory is known as the degree of multiprogramming
● Processes can be broadly classified as:
○ I/O bound (more I/O than computation), e.g., online media players
○ CPU bound (more computation than I/O), e.g., multimedia-processing software
● A process moves through the following:
○ Scheduling queues (wait for resources, then enter the ready queue)
○ CPU scheduling (pick a process from the ready queue)
○ Context switch (toggle between processes)
Operations on Processes
● Process creation
○ A process creates a new process; then either
■ The parent executes concurrently with the child, or
■ The parent waits for the child to complete its execution
○ Address space of the new process
■ The child is a duplicate of the parent, or
■ The child has a new program loaded into it
● Process termination
○ A child may be terminated because
■ It has used up its allotted resources
■ It is no longer needed
■ Its parent is terminated
Cont.
● If a process terminates (either normally or abnormally), then all its children must also be terminated.
This phenomenon, referred to as cascading termination, is normally initiated by the operating system
● A child process that has terminated but whose parent has not yet called wait() (to ask for the status of the
process) is known as a zombie process
● If the parent process terminates without invoking wait(), the child process is referred to as an orphan
process
Interprocess Communication
● Sharing of resources and data between two processes is referred to as interprocess communication
● The following are reasons for this type of communication:
○ Information sharing (processes interested in the same resource)
○ Computation speedup (break a process into manageable subprocesses that run in parallel for faster computation)
○ Modularity (ability to break a system into separate components)
● There are two models of interprocess communication:
○ Shared Memory
○ Message Passing
IPC in Shared-Memory Systems
● In shared-memory systems, Inter-Process Communication (IPC) refers to the mechanism of
communication between processes that share the same physical memory.
● IPC is essential for coordinating and synchronizing activities among multiple processes in a
shared-memory system.
○ Shared Memory: Processes can communicate by reading and writing to a shared memory region.
○ Semaphores: Semaphores are used to synchronize access to shared resources, such as shared memory regions.
○ Mutexes: Mutexes (short for mutual exclusion) are used to protect critical sections of code from simultaneous
access by multiple processes.
○ Condition Variables: Condition variables are used to signal and wait for specific conditions to occur before a
process can proceed.
○ Message Passing: In message passing, processes communicate by sending and receiving messages.
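● A minimal sketch of the shared-memory mechanism using the POSIX shm_open/mmap interface; the object name "/demo_shm" and the region size are illustrative (on Linux, link with -lrt):

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

int main() {
    // Create (or open) a named shared-memory object and size it
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);
    // Map it into this process's address space
    char *region = static_cast<char *>(
        mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    std::strcpy(region, "hello via shared memory");  // visible to other processes
    munmap(region, 4096);
    close(fd);
    shm_unlink("/demo_shm");            // remove the object when done
    return 0;
}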
IPC in Message-Passing Systems
● In message-passing systems, Inter-Process Communication (IPC) refers to the mechanism of
communication between processes that do not share the same physical memory.
○ Message Queues: Processes communicate by sending and receiving messages through a message queue.
○ Pipes: A pipe is a unidirectional communication channel between two processes. One process writes data to the
pipe and the other process reads data from the pipe.
○ Sockets: A socket is a bidirectional communication channel between two processes over a network.
○ Remote Procedure Calls (RPCs): In an RPC, a process can call a procedure or function that runs on another
process. The process that makes the call sends a message to the other process, which then executes the
procedure and returns the result.
Cont.
● Operations:
○ Send / Receive
● Communication
○ Direct / Indirect
● Type of Communication
○ Blocking send (the sender blocks until the message is received by the other process)
○ Non-blocking send (the sender sends the message and resumes execution)
○ Blocking receive (the receiver blocks until a message is available)
○ Non-blocking receive (the receiver retrieves either a valid message or null)
● Buffering
○ Zero Capacity (queue length is zero)
○ Bounded capacity (queue length is n (finite))
○ Unbounded capacity (queue length is infinite)
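● A minimal sketch of message passing between related processes using a POSIX pipe; the parent sends a short message and the child receives it:

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>
#include <cstring>

int main() {
    int fds[2];
    pipe(fds);                          // fds[0]: read end, fds[1]: write end
    if (fork() == 0) {
        close(fds[1]);                  // child only reads
        char buf[32];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; std::printf("child got: %s\n", buf); }
        return 0;
    }
    close(fds[0]);                      // parent only writes
    const char *msg = "ping";
    write(fds[1], msg, strlen(msg) + 1);
    close(fds[1]);
    wait(nullptr);
    return 0;
}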
Threads
● A thread is a basic unit of CPU utilization; it comprises:
○ a thread ID
○ a program counter (PC)
○ a register set
○ and a stack.
● It shares with other threads belonging to the same process:
○ its code section
○ data section
○ and other operating-system resources, such as open files and signals.
● A traditional process has a single thread of control.
● If a process has multiple threads of control, it can perform more than one task at a time.
Cont.
Single Thread
// Scan the whole array sequentially on one thread
for (int i = 0; i < ARRAY_SIZE; i++) {
    if (arr[i] > largestNumber) {
        largestNumber = arr[i];
    }
}
Multi Thread
// Each thread is assigned one chunk of the array; the last thread also
// takes any leftover elements
public LargestNumberFinder(int threadIndex) {
    this.threadIndex = threadIndex;
    int chunkSize = ARRAY_SIZE / THREAD_COUNT;
    this.startIndex = threadIndex * chunkSize;
    this.endIndex = (threadIndex == THREAD_COUNT - 1)
            ? ARRAY_SIZE : (threadIndex + 1) * chunkSize;
}
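● The Java fragments above are partial. As a self-contained equivalent, the following C++ sketch (names and data are illustrative) splits the array into chunks, lets each thread find its chunk's maximum, and combines the partial results:

#include <thread>
#include <vector>
#include <algorithm>
#include <iostream>

int main() {
    const int THREAD_COUNT = 4;
    std::vector<int> arr = {7, 3, 42, 19, 8, 56, 23, 11};
    std::vector<int> partial(THREAD_COUNT, arr[0]);   // per-thread maxima
    std::vector<std::thread> threads;

    int chunk = static_cast<int>(arr.size()) / THREAD_COUNT;
    for (int t = 0; t < THREAD_COUNT; t++) {
        int start = t * chunk;
        // The last thread also takes any leftover elements
        int end = (t == THREAD_COUNT - 1) ? static_cast<int>(arr.size())
                                          : start + chunk;
        threads.emplace_back([&arr, &partial, start, end, t] {
            partial[t] = *std::max_element(arr.begin() + start, arr.begin() + end);
        });
    }
    for (auto &th : threads) th.join();
    std::cout << "largest = "
              << *std::max_element(partial.begin(), partial.end()) << '\n';
    return 0;
}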
Cont.
● Benefits of multithreaded programming:
○ Responsiveness
■ An application can continue running even if part of it is blocked or performing a lengthy operation
○ Resource sharing
■ Threads share the memory and resources of their process by default
○ Economy
■ Allocating memory/resources to a process is costly (time consuming); creating multiple threads that share the
same resources is more economical
○ Scalability
■ On a multiprocessor, a multithreaded process can utilize the available cores more efficiently
Multicore Programming
● Helps to create concurrent systems for deployment on multicore processor and multiprocessor
systems
● Programming Challenges:
○ Identifying tasks (divide the work into tasks that can run concurrently)
○ Balance (ensure the tasks perform roughly equal work of equal value)
○ Data splitting (the data must be divided so it can be processed on separate cores)
○ Data dependency (the data must be examined for dependencies between tasks)
○ Testing & debugging (multithreaded programs are harder to test and debug)
AMDAHL’S LAW
● Amdahl’s Law is a formula that identifies potential performance gains from adding additional
computing cores to an application that has both serial (nonparallel) and parallel components.
● If S is the portion of the application that must be performed serially on a system with N processing
cores, the formula appears as follows:
○ Speedup ≤ 1/(S+(1-S)/N)
● Exercise: create a graph showing how much performance is gained if the application consists of 36% serial
instructions, for 1 to 32 cores
● Also find the minimum number of cores for the maximum speed gain
Answer: about 3428 cores for a 2.77634x speedup (the theoretical limit is 1/0.36 ≈ 2.778x)
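● The following sketch tabulates the Amdahl's-law speedup for this exercise (S = 0.36) from 1 to 32 cores and prints the theoretical limit 1/S; the output format is arbitrary:

#include <cstdio>

int main() {
    const double S = 0.36;  // serial fraction of the application
    // Amdahl's law: speedup <= 1 / (S + (1 - S) / N)
    for (int n = 1; n <= 32; n++) {
        double speedup = 1.0 / (S + (1.0 - S) / n);
        std::printf("N = %2d  speedup = %.4f\n", n, speedup);
    }
    std::printf("theoretical limit = %.4f\n", 1.0 / S);  // as N grows
    return 0;
}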
Cont.
● There are two types of parallelism:
○ Data parallelism (distributing subsets of the same data across multiple computing cores and performing the same operation on each subset)
○ Task parallelism (distributing different threads across multiple cores, each performing a unique operation)
Multithreading Models
● User threads map onto kernel threads in one of three ways:
○ Many-to-One (many user threads mapped to one kernel thread)
○ One-to-One (each user thread mapped to its own kernel thread)
○ Many-to-Many (many user threads multiplexed onto a smaller or equal number of kernel threads)
Thread Libraries
● A thread library provides the programmer with an API for creating and managing threads.
● There are two primary ways of implementing a thread library.
○ The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library
exist in user space. This means that invoking a function in the library results in a local function call in user space and not a system call.
○ The second approach is to implement a kernel-level library supported directly by the operating system. In this case, code and data
structures for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call to the
kernel.
● Three main thread libraries are in use today:
○ POSIX Pthreads: the threads extension of the POSIX standard; may be provided as either a user-level or a kernel-level
library.
○ Windows: the Windows thread library is a kernel-level library available on Windows systems.
○ Java: the Java thread API allows threads to be created and managed directly in Java programs.
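● A minimal Pthreads sketch (compile with -lpthread): one worker thread prints a message and main() waits for it to finish; the worker function and its argument are illustrative:

#include <pthread.h>
#include <cstdio>

void *worker(void *arg) {
    int id = *static_cast<int *>(arg);
    std::printf("hello from thread %d\n", id);
    return nullptr;
}

int main() {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, nullptr, worker, &id);  // create the worker thread
    pthread_join(tid, nullptr);                  // wait for it to terminate
    return 0;
}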
Threading Issues
● Race conditions: A race condition occurs when two or more threads access a shared resource in an
unpredictable order, leading to incorrect results or program crashes. This happens when one thread
modifies the shared resource while another thread is reading or modifying it.
● Deadlocks: A deadlock occurs when two or more threads are waiting for each other to release a
shared resource. As a result, none of the threads can make progress, and the program freezes.
● Starvation: Starvation occurs when one or more threads are prevented from accessing a shared
resource indefinitely, usually due to higher-priority threads hogging the resource.
● Priority inversion: Priority inversion occurs when a high-priority thread is blocked by a low-priority
thread that is holding a shared resource. This can cause the high-priority thread to wait longer than it
should, leading to performance degradation.
CPU Scheduling
● CPU – I/O Burst Cycle
○ Process execution consists of a cycle of CPU execution and I/O wait.
● CPU Scheduler
○ The selection process is carried out by the CPU scheduler, which selects a process from the processes in memory
that are ready to execute and allocates the CPU to that process.
● Preemptive and Nonpreemptive Scheduling
○ Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until
it releases it either by terminating or by switching to the waiting state.
● Dispatcher
○ Gives control of the CPU’s core to the process selected by the CPU scheduler
Cont.
● Preemptive and Nonpreemptive Scheduling
a. When a process switches from the running state to the waiting state (for example, as the result of an I/O request
or an invocation of wait() for the termination of a child process)
b. When a process switches from the running state to the ready state (for example, when an interrupt occurs)
c. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
d. When a process terminates
● For situations a and d, there is no choice in terms of scheduling. A new process (if one exists in the
ready queue) must be selected for execution. There is a choice, however, for situations b and c.
● When scheduling takes place only under circumstances a and d, we say that the scheduling scheme is
non preemptive or cooperative. Otherwise, it is preemptive.
● Virtually all modern operating systems including Windows, macOS, Linux, and UNIX use preemptive
scheduling algorithms.
Scheduling Criteria
● CPU Utilization
○ Should range from 40% (lightly loaded) to 90% (heavily loaded)
● Throughput
○ Number of processes completed per time unit
● Turnaround Time
○ Interval between time of submission and time of completion
● Waiting Time
○ Sum of periods spent waiting in ready queue
● Response Time
○ Interval between time of submission and first response
Formulae
● Wait Time = Turnaround Time - Burst Time
● Turnaround Time = Finish Time - Arrival Time
● Finish Time (current) = Finish Time (previous) + Burst Time
Scheduling Algorithms
1. First Come First Served
2. Shortest Job First
3. Round Robin
4. Priority
5. Multilevel Queue
6. Multilevel Feedback Queue
First Come First Served
● FCFS is a simple and easy-to-implement scheduling algorithm.
● In FCFS, the CPU is allocated to the first process that arrives, and the process runs until it completes its
execution or gets blocked.
● FCFS is a non-preemptive scheduling algorithm, which means that once a process starts executing, it
cannot be preempted until it completes its execution or blocks.
● FCFS suffers from the convoy effect, where a long-running process can hold up the entire system, even
if there are shorter processes waiting.
● FCFS is suitable for batch processing and low-traffic systems where the response time is not critical.
● The average waiting time for the FCFS algorithm can be long, especially if there are many long-running
processes.
● FCFS does not take into account the priority of the processes, so high-priority processes may have to
wait for a long time.
Cont.
PID P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT 0 5 7 3 6 9 2 4 1 5 3 6 5 4 9
BT 2 6 5 8 4 1 9 3 2 5 4 7 9 6 8
Cont.
● Formulae for Calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also Calculate Average Turnaround Time and Wait Time
○ Turnaround time = 36 units
○ Wait time = 30.73 units
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

struct Process {
    int pid;
    int arrival_time;
    int burst_time;
    int finish_time;
    int wait_time;
    int turnaround_time;
};

int main() {
    int n;
    float avg_waiting_time = 0.0, avg_turnaround_time = 0.0;
    cout << "Enter the number of processes: ";
    cin >> n;
    vector<Process> p(n);

    // Taking input for the arrival time and burst time of the processes
    for (int i = 0; i < n; i++) {
        cout << "Enter arrival time and burst time for process " << i + 1 << ": ";
        cin >> p[i].arrival_time >> p[i].burst_time;
        p[i].pid = i + 1;
    }

    // Sorting the processes according to their arrival time (FCFS order)
    sort(p.begin(), p.end(), [](const Process &a, const Process &b) {
        return a.arrival_time < b.arrival_time;
    });

    // Calculating finish time, waiting time and turnaround time of the processes
    int finish_time = 0;
    for (int i = 0; i < n; i++) {
        // If the CPU is idle until this process arrives, start at its arrival time
        finish_time = max(finish_time, p[i].arrival_time) + p[i].burst_time;
        p[i].finish_time = finish_time;
        p[i].turnaround_time = p[i].finish_time - p[i].arrival_time;
        p[i].wait_time = p[i].turnaround_time - p[i].burst_time;
        avg_waiting_time += p[i].wait_time;
        avg_turnaround_time += p[i].turnaround_time;
    }

    // Printing the results
    avg_waiting_time /= n;
    avg_turnaround_time /= n;
    cout << "PID\tArrival Time\tBurst Time\tFinish Time\tWaiting Time\tTurnaround Time\n";
    for (int i = 0; i < n; i++) {
        cout << p[i].pid << "\t" << p[i].arrival_time << "\t\t" << p[i].burst_time
             << "\t\t" << p[i].finish_time << "\t\t" << p[i].wait_time
             << "\t\t" << p[i].turnaround_time << endl;
    }
    cout << "Average waiting time: " << avg_waiting_time << endl;
    cout << "Average turnaround time: " << avg_turnaround_time << endl;
    return 0;
}
Shortest Job First
● In its basic form, SJF is a non-preemptive scheduling algorithm: once a process starts executing, it runs
until it completes.
● SJF selects the process with the smallest burst time to be executed next, which leads to a shorter average
waiting time and turnaround time compared to other scheduling algorithms.
● SJF can be either preemptive or non-preemptive. In preemptive SJF, a process with a smaller burst time can
preempt a currently executing process with a longer burst time. In non-preemptive SJF, a process with a
shorter burst time has to wait until the currently executing process completes its execution.
● SJF can suffer from starvation, where a process with a long burst time may have to wait indefinitely if many
shorter processes keep arriving.
● SJF is suitable for batch processing and high-traffic systems where the response time is critical.
● SJF requires knowledge of the burst time of all the processes in advance, which may not be feasible in
real-time systems.
Cont.
PID P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT 0 5 7 3 6 9 2 4 1 5 3 6 5 4 9
BT 2 6 5 8 4 1 9 3 2 5 4 7 9 6 8
Cont.
● Formulae for Calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also Calculate Average Turnaround Time and Wait Time
○ Turnaround time = 27.20 units
○ Wait time = 21.93 units
Round Robin
● RR is a preemptive scheduling algorithm, which means that a process can be preempted after its time slice
expires, and the CPU can be allocated to another process.
● In RR, each process is allocated a fixed time slice or quantum, which can range from a few milliseconds to
several seconds, depending on the system configuration.
● After a process completes its time slice, it is preempted and placed at the end of the ready queue, and the next
process in the queue is allocated the CPU.
● RR provides fairness in CPU allocation, as each process is given an equal chance to execute, regardless of its
priority or burst time.
● RR can suffer from high context switching overhead, especially if the time slice is too small or the number of
processes in the queue is large.
● RR is suitable for interactive systems and systems with a mix of short and long-running processes.
● The time slice in RR should be chosen carefully to balance the trade-off between fairness and overhead.
Cont.
PID P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT 0 5 7 3 6 9 2 4 1 5 3 6 5 4 9
BT 2 6 5 8 4 1 9 3 2 5 4 7 9 6 8
Cont.
● Formulae for Calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also Calculate Average Turnaround Time and Wait Time
○ Turnaround time = 47.33 units
○ Wait time = 40.06 units
Priority
● Priority scheduling is a CPU scheduling algorithm that assigns priorities to each process and selects the process
with the highest priority to execute first.
● The priority of a process is typically determined by its characteristics, such as its time-criticality, importance,
and resource requirements.
● The process with the highest priority is allocated the CPU first, and if two or more processes have the same
priority, then the scheduling algorithm may use other criteria, such as first-come-first-served (FCFS) or
round-robin scheduling.
● Priority scheduling can be implemented in several ways, including preemptive and non-preemptive methods.
● In preemptive priority scheduling, the CPU can be taken away from a running process if a higher priority
process arrives.
● In non-preemptive priority scheduling, a running process keeps the CPU until it finishes or voluntarily gives up
the CPU.
Cont.
PID P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT 0 5 7 3 6 9 2 4 1 5 3 6 5 4 9
BT 2 6 5 8 4 1 9 3 2 5 4 7 9 6 8
Cont.
● Formulae for Calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also Calculate Average Turnaround Time and Wait Time
○ Preemptive
■ Turnaround time = 34.2 units
■ Wait time = 28.93 units
○ Non-preemptive
■ Turnaround time = 35 units
■ Wait time = 29.73 units
Multilevel Queue
● Multilevel queue scheduling is a CPU scheduling algorithm that divides processes into separate
queues, each with its own scheduling algorithm.
● Each queue is typically assigned a different priority level based on the type of process or its priority.
For example, one queue may be dedicated to time-critical processes, while another may be for
background processes.
● The multilevel queue scheduling algorithm uses a combination of scheduling techniques, such as FCFS,
round-robin, and priority scheduling, to schedule processes in each queue.
● The scheduling algorithm can be either preemptive or non-preemptive, depending on the
requirements of the system.
Multilevel Feedback Queue
● In the MLFQ algorithm, each process is initially assigned to the highest priority queue, and the CPU is
allocated to the process in that queue.
● If a process uses up its allocated time slice in a given queue, it is moved to a lower-priority queue.
● If a process continues to use up its time slice in a lower-priority queue, it is moved down to an even
lower-priority queue.
● This process of moving a process down the priority levels is called demotion.
● If a process releases the CPU before its time slice is used up, it can move up to a higher-priority queue.
This process of moving a process up the priority levels is called promotion.
● The purpose of the feedback mechanism is to allow processes that require more CPU time to move up
to higher-priority queues, while processes that use less CPU time move down to lower-priority queues.
Thread Scheduling
● Thread scheduling is the process of assigning CPU time to different threads of a process in a
multi-threaded environment.
● Round Robin: Each thread is allocated a fixed time slice or quantum of CPU time, after which it is
preempted and replaced by the next thread in the queue.
● Priority-based scheduling: Threads are allocated CPU time based on their priority, with higher priority
threads getting more CPU time than lower priority threads.
● Fair-share scheduling: CPU time is allocated to threads based on a predefined allocation scheme, such
as the number of threads in a group or the amount of memory used by a thread.
● Thread-specific scheduling: The operating system can use different scheduling algorithms for different
threads of a process, based on their requirements.
Multiprocessor Scheduling
● Multiprocessor scheduling is the process of allocating tasks to multiple processors or cores in a parallel
computing environment.
● Load balancing: In load balancing, tasks are assigned to processors or cores based on their current load
or utilization.
● Task decomposition: In task decomposition, a large task is divided into smaller subtasks that can be
executed in parallel.
● Gang scheduling: In gang scheduling, a group of related tasks is scheduled to execute simultaneously
on different processors or cores.
● Priority scheduling: In priority scheduling, tasks are assigned priorities based on their importance or
criticality, and the processor or core with the highest priority task is allocated CPU time first.
● Round-robin scheduling: In round-robin scheduling, each processor or core is allocated a fixed time
slice or quantum of CPU time, and tasks are assigned to processors or cores in a rotating fashion.
The Critical Section Problem
● The problem occurs when multiple threads or processes attempt to access a shared resource or a
critical section of code that must not be executed concurrently by more than one thread or process.
● The critical section refers to a portion of the code that accesses shared data, resources, or variables.
● The goal of the problem is to ensure that only one thread or process at a time executes the critical
section, to avoid race conditions, inconsistencies, and data corruption.
● The Critical Section Problem is an essential concept in concurrent programming and plays a crucial role
in ensuring correct and reliable operation of multi-threaded and multi-process programs.
Cont.
● A solution to the critical-section problem must satisfy the following three requirements:
○ Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their
critical sections.
○ Progress. If no process is executing in its critical section and some processes wish to enter their critical sections,
then only those processes that are not executing in their remainder sections can participate in deciding which
will enter its critical section next, and this selection cannot be postponed indefinitely.
○ Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before that request
is granted.
Semaphores
● Semaphores are a type of synchronization mechanism used in multi-threaded or multi-process programs
to control access to shared resources or critical sections of code.
● A semaphore is a variable that can be accessed by multiple threads or processes and used to
coordinate their access to shared resources.
● Semaphores are of two types:
○ Binary Semaphores: Binary semaphores can take only two values, 0 or 1, and are used to indicate the availability
of a shared resource. A thread or process acquires the semaphore by changing its value from 1 to 0, and releases it
by setting the value back to 1.
○ Counting Semaphores: Counting semaphores can take any non-negative integer value and are used to control the
number of threads or processes that can access a shared resource. A thread or process can acquire the
semaphore by decrementing its value, and release it by incrementing its value.
Cont.
● Working of Semaphore:
○ A semaphore is initialized to a certain value, depending on the number of threads or processes that can access
the shared resource.
○ When a thread or process wants to access the shared resource, it tries to acquire the semaphore by
decrementing its value.
○ If the semaphore value is greater than or equal to zero, the thread or process can proceed to access the shared
resource.
○ If the semaphore value is less than zero, the thread or process is blocked, and its request is added to a queue of
waiting threads or processes.
○ When a thread or process releases the semaphore by incrementing its value, the next waiting thread or process
in the queue is unblocked and allowed to access the shared resource.
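● A minimal sketch of a counting semaphore in use, with the POSIX semaphore API; here the count is initialized to 2, so at most two threads are inside the region at once (the thread bodies are illustrative; compile with -lpthread):

#include <semaphore.h>
#include <pthread.h>
#include <cstdio>

sem_t sem;

void *worker(void *arg) {
    long id = reinterpret_cast<long>(arg);
    sem_wait(&sem);                 // acquire: decrement, block if count is 0
    std::printf("thread %ld in critical region\n", id);
    sem_post(&sem);                 // release: increment, wake a waiter
    return nullptr;
}

int main() {
    sem_init(&sem, 0, 2);           // at most 2 threads inside at once
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], nullptr, worker, reinterpret_cast<void *>(i));
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], nullptr);
    sem_destroy(&sem);
    return 0;
}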
Mutex Locks
● Mutex locks, or simply mutexes, are a type of synchronization mechanism used in multi-threaded
programs to prevent concurrent access to shared resources or critical sections of code.
● A mutex is a binary semaphore that can be locked or unlocked by threads to synchronize access to a
shared resource.
● Working of Mutex:
○ A thread that needs to access a shared resource tries to acquire the mutex lock. If the lock is available (unlocked),
the thread acquires the lock and proceeds to execute the critical section of code.
○ If another thread also tries to acquire the same lock, it will be blocked until the first thread releases the lock by
unlocking it.
○ Once the first thread completes its execution in the critical section, it unlocks the mutex, allowing other threads
to acquire it and access the shared resource.
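● A minimal sketch of a mutex protecting a shared counter, using C++'s standard library; lock_guard acquires the lock on construction and releases it when it goes out of scope:

#include <mutex>
#include <thread>
#include <iostream>

std::mutex m;
long counter = 0;

void increment() {
    for (int i = 0; i < 100000; i++) {
        std::lock_guard<std::mutex> lock(m);  // enter critical section
        ++counter;                            // only one thread at a time here
    }                                         // lock released automatically
}

int main() {
    std::thread t1(increment), t2(increment);
    t1.join();
    t2.join();
    std::cout << "counter = " << counter << '\n';  // always 200000
    return 0;
}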
Monitor
● A monitor is a high-level synchronization mechanism used in concurrent programming languages to
provide a structured way of controlling access to shared resources or critical sections of code.
● A monitor is implemented as an abstract data type that encapsulates shared resources and provides
methods or procedures for accessing and modifying them.
● Features:
○ Mutual Exclusion: A monitor ensures that only one thread can execute a critical section of code at a time,
preventing race conditions and ensuring data consistency.
○ Condition Variables: A monitor provides condition variables, which allow threads to wait for certain conditions to
be met before proceeding with their execution. Condition variables can be used to implement
producer-consumer models or other types of synchronization patterns.
○ Data Abstraction: A monitor encapsulates shared resources and provides methods or procedures for accessing
and modifying them.
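● A minimal monitor-style sketch: a bounded buffer whose shared state is protected by a mutex, with condition variables used to wait for "not full" and "not empty"; the class and member names are illustrative:

#include <mutex>
#include <condition_variable>
#include <queue>
#include <cstddef>

class BoundedBuffer {
    std::mutex m;
    std::condition_variable not_full, not_empty;
    std::queue<int> buf;
    static constexpr std::size_t capacity = 8;   // fixed size of the buffer
public:
    void put(int item) {
        std::unique_lock<std::mutex> lock(m);
        not_full.wait(lock, [this] { return buf.size() < capacity; });
        buf.push(item);
        not_empty.notify_one();                  // wake a waiting consumer
    }
    int get() {
        std::unique_lock<std::mutex> lock(m);
        not_empty.wait(lock, [this] { return !buf.empty(); });
        int item = buf.front();
        buf.pop();
        not_full.notify_one();                   // wake a waiting producer
        return item;
    }
};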
Spinlocks
● When a thread tries to acquire a spin lock and finds that the lock is already held by another thread, it
spins in a loop, periodically checking if the lock has become available.
● The thread continues to spin until the lock is released by the thread currently holding it.
● Once the lock is released, the spinning thread acquires the lock and continues its execution.
● Spin locks are generally used for short-duration, non-blocking operations, and they are well-suited for
situations where the time spent waiting for the lock to be released is expected to be short.
● Spin locks are lightweight and efficient because they avoid the overhead of blocking the thread, which
can save time and improve performance.
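● A minimal spinlock sketch built on std::atomic_flag: lock() busy-waits until its test_and_set succeeds, and unlock() clears the flag:

#include <atomic>

class Spinlock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // Spin until this thread flips the flag from clear to set
        while (flag.test_and_set(std::memory_order_acquire)) { /* spin */ }
    }
    void unlock() {
        flag.clear(std::memory_order_release);
    }
};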
Liveness
● Liveness refers to a property of a concurrent system that guarantees that certain events will eventually
occur.
● A concurrent system is considered to be live if it is always able to make progress and respond to events
in a timely manner.
● Liveness is often contrasted with safety, which refers to the property of a system that guarantees that
certain events will never occur.
● In other words, a system is considered safe if it is free from errors or violations of critical properties.
● Liveness properties are important in concurrent systems because they ensure that the system is able
to make progress even in the presence of delays, failures, or other types of unexpected events.
Cont.
● Properties of Liveness:
○ Termination: A system satisfies the termination property if every process eventually terminates and does not get
stuck in an infinite loop.
○ Progress: A system satisfies the progress property if it is always able to make progress towards a desired outcome
or goal, even in the presence of delays or failures.
○ Livelock freedom: A system satisfies the livelock freedom property if it does not get stuck in a state where all
processes are active but no progress is being made.
○ Deadlock freedom: A system satisfies the deadlock freedom property if it does not get stuck in a state where
multiple processes are waiting for each other to release resources.
Two-phase Locking
● Two-phase locking (2PL) is a concurrency control mechanism used in operating systems to ensure
mutual exclusion and prevent conflicts between processes accessing shared resources.
● It involves two phases: the growing phase and the shrinking phase. In the growing phase, a process
acquires locks on all the resources it needs to complete its operations, and is not allowed to release
any locks until it has acquired all the locks it needs.
● In the shrinking phase, the process releases all the locks it has acquired in reverse order, ensuring that
the locks are released in a consistent order and no deadlocks occur.
The Readers-Writers Problem
● The Readers-Writers problem is a classic synchronization problem in computer science, which arises
when multiple processes or threads need to access a shared resource, such as a file, a database, or a
piece of memory.
● In this problem, there are two types of processes:
○ Readers
○ Writers
● Readers only read the shared resource, while writers modify it.
● The goal is to design a solution that allows multiple readers to access the resource simultaneously, but
only one writer can access it at a time.
Cont.
● Possible Solutions:
○ Readers-preference solution: In this solution, multiple readers are allowed to access the shared resource
simultaneously, but a writer can only access the resource when no readers are accessing it. This solution
prioritizes readers over writers.
○ Writer-preference solution: In this solution, a writer is given priority over readers. This means that a writer can
access the resource even if there are readers already accessing it. However, the writer must wait until all the
current readers have finished reading before modifying the resource.
○ Fairness solution: In this solution, the system tries to be fair to both readers and writers by alternating access to
the resource: after a writer has modified the resource, access passes to the readers that were waiting, and once
they finish, to the next waiting writer.
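● A minimal sketch of the readers-writers pattern using std::shared_mutex (C++17): many readers may hold the lock concurrently, while a writer holds it exclusively; the data and function names are illustrative:

#include <shared_mutex>

std::shared_mutex rw;
int shared_data = 0;

int reader() {
    std::shared_lock<std::shared_mutex> lock(rw);  // shared: readers coexist
    return shared_data;
}

void writer(int value) {
    std::unique_lock<std::shared_mutex> lock(rw);  // exclusive: one writer
    shared_data = value;
}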
The Dining-Philosophers Problem
● The Dining-Philosophers problem is another classic synchronization problem in computer science,
which involves a set of philosophers who share a circular table and alternate between thinking and
eating.
● Each philosopher has a bowl of rice and chopsticks on either side of their bowl.
● However, there are only a limited number of chopsticks available, and each philosopher needs two
chopsticks to eat.
● The problem is to design a solution that allows the philosophers to eat without creating a deadlock,
where all philosophers are waiting for chopsticks to become available.
Cont.
● Possible Solutions:
○ Resource hierarchy: One solution is to assign a unique number to each chopstick and require the philosophers to
always pick up the chopstick with the lowest number first. This ensures that there can never be a deadlock, as no
two philosophers will ever pick up the same two chopsticks at the same time.
○ Arbitrator solution: Another solution is to introduce an arbitrator, who is responsible for allocating the chopsticks
to the philosophers. The arbitrator ensures that no two philosophers are eating with the same chopsticks at the
same time, thus avoiding deadlocks.
○ Chandy/Misra solution: The Chandy/Misra solution involves introducing a "request" and "permission" system,
where a philosopher can only pick up the chopsticks if they receive permission from both the philosopher on
their left and the philosopher on their right. This ensures that no two philosophers will pick up the same
chopsticks at the same time.
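● A minimal sketch of the resource-hierarchy solution: each philosopher always locks the lower-numbered chopstick first, so no circular wait can form (N and the function body are illustrative):

#include <mutex>
#include <algorithm>

constexpr int N = 5;
std::mutex chopstick[N];

void philosopher(int i) {
    int left = i, right = (i + 1) % N;
    int first = std::min(left, right);    // lower-numbered chopstick first
    int second = std::max(left, right);
    std::lock_guard<std::mutex> a(chopstick[first]);
    std::lock_guard<std::mutex> b(chopstick[second]);
    /* eat */
}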
Synchronization within the Kernel
● In Windows, the kernel provides various synchronization mechanisms such as mutexes, semaphores,
spin locks, and critical sections to ensure synchronization among threads and processes.
○ Windows also provides a dispatcher object that manages the execution of threads and processes, ensuring that
only one thread or process is executing in the kernel at a time.
○ This mechanism is known as kernel-mode scheduling.
● In Linux, synchronization within the kernel is primarily achieved through spinlocks and semaphores.
○ Spin locks are used to protect data structures and prevent data races, while semaphores are used to block or
unblock threads based on the availability of resources.
○ Additionally, Linux also uses kernel preemption to allow for preemptive multitasking within the kernel, allowing
higher-priority tasks to preempt lower-priority ones.
Resources
● Resources refer to any component or entity that a computer system requires to perform a task or
complete a process.
● They can be physical or virtual, and can be divided into several categories, such as hardware, software,
network, data, and human resources.
● Hardware resources include physical components such as CPUs, memory, hard drives, and
input/output devices.
● Software resources include the programs and applications that run on a computer; network resources
include routers, switches, modems, and cables; data resources include the information and data files
stored on a computer system
● Human resources refer to the people who use and interact with computer systems.
Cont.
● A resource can only be used in following sequence:
a. Request. The thread requests the resource. If the request cannot be granted immediately (for example, if a
mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the
resource.
b. Use. The thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access
its critical section).
c. Release. The thread releases the resource.
● Examples of Request and Release:
a. request() and release() of a device
b. open() and close() of a file
c. allocate() and free() memory system calls
d. wait() and signal() operations on semaphores
e. acquire() and release() of a mutex lock
Deadlock
● A deadlock is a situation where two or more processes are blocked or waiting for each other to release
resources that they are holding, preventing them from making progress.
● These deadlocks typically occur in systems where processes are competing for a finite set of resources,
such as shared memory, file access, or network connections.
● To prevent deadlocks, operating systems use various techniques such as resource allocation
algorithms, process scheduling algorithms, and deadlock detection and recovery algorithms.
● These techniques aim to ensure that resources are allocated fairly and efficiently, and that deadlocks
are avoided or resolved in a timely manner.
Deadlock in Multithreaded Applications
● Deadlocks can occur in multithreaded applications where multiple threads are competing for shared
resources.
● For example:
○ Two threads, T1 and T2, each need two resources, R1 and R2, to complete their task.
○ Suppose T1 acquires R1 and T2 acquires R2, and then T1 requests R2 while T2 requests R1.
○ A deadlock occurs, as both threads are waiting for each other to release the resource they need.
○ Both threads are blocked and unable to proceed.
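● A minimal sketch of the scenario above using two C++ mutexes as R1 and R2; running it will usually hang, demonstrating the deadlock:

#include <mutex>
#include <thread>

std::mutex r1, r2;

void t1() {
    std::lock_guard<std::mutex> a(r1);   // T1 holds R1
    std::lock_guard<std::mutex> b(r2);   // ...and waits for R2
}

void t2() {
    std::lock_guard<std::mutex> a(r2);   // T2 holds R2
    std::lock_guard<std::mutex> b(r1);   // ...and waits for R1
}

int main() {
    std::thread a(t1), b(t2);
    a.join();
    b.join();                            // likely never reached
    return 0;
}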
Deadlock Characterization
● The following four conditions are necessary for a deadlock to occur:
○ Mutual exclusion: At least one resource must be held in a non-sharable mode, meaning only one process can
access the resource at a time.
○ Hold and wait: A process must be holding at least one resource and waiting for another resource that is currently
being held by another process.
○ No preemption: Resources cannot be preempted or forcibly taken away from a process that is holding them.
○ Circular wait: A circular chain of two or more processes exists, where each process is waiting for a resource that is
held by the next process in the chain.
Resource Allocation Graph
● A Resource Allocation Graph (RAG) is a visual representation of the allocation of resources in a system
that helps in identifying deadlocks in a system.
● It is commonly used in operating systems to manage resources.
● In a RAG, resources are represented by rectangular nodes and processes are represented by circular
nodes.
● An arrow from a process to a resource node represents a request, and an arrow from a resource to a
process represents an allocation.
● A cycle in the graph indicates a potential deadlock in the system, and can be analyzed to identify cycles
and take appropriate actions to break the cycle and prevent a deadlock.
Cont.
● Draw RAG for the following:
○ R7 -> P5, P4 -> R5
○ R7 -> P3, R6 -> P4
○ P5 -> R1, R6 -> P6
○ R1 -> P1, R0 -> P6
○ P3 -> R5, R5 -> P5
○ R5 -> P1, P1 -> R6
○ R4 -> P3, R6 -> P3
● Identify deadlock
Methods for Handling Deadlocks
● There are three ways a deadlock is handled:
○ We can ignore the problem altogether and pretend that deadlocks never occur in the system.
○ We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked
state.
○ We can allow the system to enter a deadlocked state, detect it, and recover.
● Accordingly, there are four methods for handling deadlocks:
○ Prevention
○ Avoidance
○ Detection and Recovery
○ Ignorance
Deadlock Prevention
● To prevent a deadlock, at least one of the four necessary conditions has to be prevented from holding.
○ The mutual-exclusion condition must hold, meaning at least one resource must be non sharable.
○ To ensure that the hold-and-wait condition never occurs in the system, a protocol must be used that requires
each thread to request and be allocated all its resources before it begins execution.
○ The third necessary condition for deadlocks is that there be no preemption of resources that have already been
allocated. To break it, a protocol is used: if a thread requests a resource that cannot be granted immediately, all
resources the thread is currently holding are preempted and added to the list of resources for which the thread is
waiting. The thread is restarted only when it can regain its old resources as well as the new ones it is requesting.
○ One way to ensure that circular wait never holds is to impose a total ordering of all resource types and to require
that each thread requests resources in an increasing order of enumeration.
Deadlock Avoidance
● Deadlock avoidance is a technique used to prevent deadlocks from occurring by dynamically assessing
the safety of each resource request before granting it.
● It requires the system to have a prior knowledge of the maximum resources needed, which can be
difficult to obtain in some cases.
● However, it is a useful technique for handling deadlocks in systems where deadlock prevention is not
feasible or practical.
● The most popular algorithm for deadlock avoidance is the Banker's algorithm, which checks whether
granting a resource request would leave the system in a safe state before granting or denying it.
Banker’s Algorithm
● The Banker's algorithm considers the following inputs:
○ The total number of resources of each type in the system.
○ The number of resources of each type that are currently available.
○ The maximum demand of each process, which is the maximum number of resources of each type that a process may
need.
○ The number of resources of each type currently allocated to each process.
● To determine if a request for resources can be granted, the Banker's algorithm uses the following steps:
○ The process makes a request for a certain number of resources.
○ The system checks if the request can be granted by verifying that the number of available resources is greater than or
equal to the number of resources requested by the process.
○ The system temporarily allocates the requested resources to the process.
○ The system checks if the resulting state is safe by simulating the allocation of resources to all processes. If the system can
allocate resources to all processes and avoid deadlock, then the request is granted. Otherwise, the request is denied, and
the system returns to its previous state.
Cont.
● Types of Data Structures used:
○ Available (1-D array for available resources)
○ Work (1-D working copy of Available used while running the safety algorithm)
○ Max (2-D array for max resources each process can request)
○ Allocation (2-D array for currently assigned resources to each process)
○ Need (2-D array for remaining resources required by each process)
● Banker’s algorithm comprises two algorithms:
○ Safety algorithm
○ Resource request algorithm
Cont.
● The Safety algorithm proceeds as follows:
a. Set the Work array equal to the available resources of each type, and set Finish[i] to false for all processes.
b. Search for a process i such that Finish[i] is false and Need[i] is less than or equal to the Work array. If such a
process exists, add Allocation[i] to the Work array, set Finish[i] to true, and repeat step b. If no
such process exists, proceed to step c.
c. If all processes can complete their execution (i.e., all values in the Finish array are set to true), the system is
in a safe state. Otherwise, the system is in an unsafe state.
d. If the system would be in an unsafe state, the Banker's algorithm denies the resource request, and the system
returns to its previous state.
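● A minimal sketch of the Safety algorithm in C++; Available, Max, and Allocation are assumed to be supplied by the caller, and Need is computed as Max - Allocation:

#include <vector>

// Returns true if the state described by available, max, and allocation
// admits a safe sequence.
bool isSafe(std::vector<int> available,
            const std::vector<std::vector<int>> &max,
            const std::vector<std::vector<int>> &allocation) {
    int n = static_cast<int>(allocation.size());   // number of processes
    int m = static_cast<int>(available.size());    // number of resource types
    std::vector<bool> finish(n, false);
    std::vector<int> work = available;             // step a: Work = Available
    bool progressed = true;
    while (progressed) {                           // step b: find a runnable process
        progressed = false;
        for (int i = 0; i < n; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < m; j++)
                if (max[i][j] - allocation[i][j] > work[j]) { fits = false; break; }
            if (fits) {                            // Need[i] <= Work: let i finish
                for (int j = 0; j < m; j++)
                    work[j] += allocation[i][j];   // reclaim its allocation
                finish[i] = true;
                progressed = true;
            }
        }
    }
    for (bool f : finish)                          // step c: all finished => safe
        if (!f) return false;
    return true;
}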
Cont.
● The resource request algorithm proceeds as follows:
○ If the request for resources from process P is greater than its need, deny the request.
○ If the request for resources from process P is greater than the Available resources, deny the request.
○ Temporarily allocate the requested resources to process P.
○ Use the Safety algorithm to determine if the system is in a safe state after the allocation. If the system is in a safe
state, grant the request, and update the Available, Allocation, and Need data structures accordingly. If the system
is not in a safe state, deny the request, and restore the previous state.
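To make the two algorithms concrete, here is a minimal Python sketch (illustrative only; the state used in the demo at the bottom is a hypothetical example, not data from these slides):

def is_safe(available, allocation, need):
    # Safety algorithm: simulate letting processes run to completion.
    work = available[:]                      # Work starts as a copy of Available
    finish = [False] * len(allocation)
    order = []                               # a safe sequence, if one exists
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish; it releases its allocation back to Work.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order                # safe iff every process can finish

def request(pid, req, available, allocation, need):
    # Resource-request algorithm for process pid.
    if any(r > n for r, n in zip(req, need[pid])):
        return False                         # request exceeds declared need
    if any(r > a for r, a in zip(req, available)):
        return False                         # not enough free resources
    # Tentatively allocate, then test safety.
    available[:] = [a - r for a, r in zip(available, req)]
    allocation[pid] = [a + r for a, r in zip(allocation[pid], req)]
    need[pid] = [n - r for n, r in zip(need[pid], req)]
    if is_safe(available, allocation, need)[0]:
        return True                          # grant: the state remains safe
    # Unsafe: roll back to the previous state and deny.
    available[:] = [a + r for a, r in zip(available, req)]
    allocation[pid] = [a - r for a, r in zip(allocation[pid], req)]
    need[pid] = [n + r for n, r in zip(need[pid], req)]
    return False

# Hypothetical 5-process, 3-resource-type state:
avail = [3, 3, 2]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(avail, alloc, need))                 # (True, [1, 3, 4, 0, 2])
print(request(0, [0, 2, 0], avail, alloc, need))   # True: still safe after granting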
Cont.
● Find the following:
○ How many resources of type A, B, C, D are there?
○ What are the contents of the Need matrix?
○ Is the system in a safe state? If it is, find the safe sequence.
● The number of resources of each type is given in the question. (If not, sum each column of the Allocation matrix and add the corresponding Available entry.)
● Need matrix (Need = Max − Allocation; rows P0–P4, columns A B C D):
○ P0: 0 1 0 0
○ P1: 0 4 2 1
○ P2: 1 0 0 1
○ P3: 0 0 2 0
○ P4: 0 6 4 2
● Safe Sequence
○ P0, P3, P4, P1, P2
Resource Trajectory
● Number of processes: 2 (P0, P1)
● Threads created by each process:
○ P0 -> t1, t2, t3, t4
○ P1 -> ta, tb, tc, td
○ t1 (Request R1), t2 (Request R2), t3 (Release R1), t4 (Release R2), ta (Request R2), tb (Request R1), tc (Release
R2), td (Release R1)
● Define safe and unsafe state using Resource Trajectory
● A state of the system is called safe if the system can allocate all the resources requested by all the processes, in some order, without entering a deadlock.
● If no such ordering can be guaranteed, the state of the system is called unsafe; an unsafe state may, but does not necessarily, lead to deadlock.
Deadlock Detection
● Deadlock detection is a technique used in computer systems to identify situations where multiple
processes are waiting for each other to release resources that they need in order to proceed.
● Several algorithms are used for handling and detecting deadlocks, including the banker's algorithm (an avoidance algorithm), the wait-for graph algorithm, and the resource-allocation graph algorithm.
● The banker's algorithm is a resource allocation and deadlock avoidance algorithm that ensures that
the system will be in a safe state before allocating resources to a process.
● The wait-for graph algorithm uses a directed graph whose edges mean "process A is waiting for a resource held by process B"; a deadlock exists if and only if this graph contains a cycle.
● Finally, the resource-allocation graph algorithm uses a directed graph to represent the allocation of resources to processes; a cycle in this graph is a necessary condition for deadlock (and a sufficient one when every resource type has a single instance).
● Once a deadlock has been detected, various techniques can be used to resolve it, such as resource preemption, process termination, or rolling a process back to an earlier safe state.
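As an illustration of the wait-for graph approach, a minimal Python sketch of cycle detection via depth-first search (the example edges are hypothetical):

def has_cycle(graph):
    # graph maps each process to the list of processes it is waiting for.
    WHITE, GRAY, BLACK = 0, 1, 2             # unvisited / on current path / done
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:  # back edge: a cycle, hence deadlock
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(p, WHITE) == WHITE and dfs(p) for p in graph)

# Hypothetical wait-for edges: P1 waits for P2, P2 for P3, P3 for P1.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True: deadlock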
Recovery from Deadlock
● There are two options for breaking a deadlock.
○ Process and Thread Termination
■ Abort all deadlocked processes
■ Abort one process at a time until the deadlock cycle is eliminated
○ Resource Preemption
■ Victim selection
■ Rollback process to safe state for restart
■ Starvation: if the same process is repeatedly chosen as the victim, it may never obtain its required resources and can starve
Communication Deadlocks
● Communication deadlocks occur when two or more processes are waiting for each other to send or
receive data, resulting in a deadlock.
● This can occur in a distributed system when two or more processes are waiting for a message from
each other, but none of them can proceed until they receive the message.
● In a shared memory system, a communication deadlock can occur when two or more processes are
waiting to acquire a lock on a shared resource.
● To prevent communication deadlocks, several techniques can be used, such as avoiding circular
dependencies between processes, using timeouts to prevent processes from waiting indefinitely for a
message or lock, using a deadlock detection algorithm to identify and resolve deadlocks, and
implementing a protocol that ensures that processes acquire locks in a consistent order.
Address Binding
● Address binding is the process of mapping a logical or symbolic address used by a program to a
physical address in computer memory.
● Address binding can occur at three stages: compile time, load time, and run time.
○ Compile-time binding assigns physical memory addresses to program variables and instructions when the program is compiled, so the program must be loaded at a known, fixed address;
○ load-time binding generates relocatable code and fixes the addresses when the program is loaded into memory;
○ run-time binding assigns physical memory addresses to program variables and instructions during execution, which requires hardware support such as an MMU.
○ Dynamic address binding is a type of run-time binding that allows programs to use shared libraries without
having to know the physical addresses of the library code in memory. The MMU will map the logical addresses
used by the program to the physical addresses of the shared library code.
Logical Versus Physical Address Space
● The logical address space is the set of all addresses used by a program or process, while the physical
address space is the set of all addresses used by the hardware of the computer system.
● The logical address space is used by the program or process to address memory locations, while the
physical address space is managed by the hardware and divided into smaller units called pages or
frames.
● The translation from logical addresses to physical addresses is performed by the memory management
unit (MMU) of the computer system, which uses a mapping table to translate the logical address used
by a program to a physical address in memory.
● The use of logical address space provides several advantages, such as simplifying the process of
programming, allowing for the efficient use of physical memory, and providing a mechanism for
memory protection.
Contiguous Memory Allocation
● Contiguous memory allocation is a memory management technique used by operating systems to
allocate memory to processes.
● Each process is placed in a single contiguous region of physical memory, so its entire logical address space maps onto one unbroken block.
● Advantages of contiguous memory allocation include being easy to implement and efficient in terms of
memory usage, but it can lead to fragmentation of memory.
● To overcome this issue, some operating systems use memory compaction techniques to defragment
the memory, but this can be a time-consuming process and can affect the performance of the system.
Memory Protection
● There are two ways to protect memory:
○ Software-based
■ The operating system checks whether a process has permission to access the desired memory; only if it does is the access allowed, otherwise the process is flagged and terminated
○ Hardware-based
■ Two registers are used to ensure that a process uses only its allocated memory: the limit register and the relocation (base) register
■ The limit register holds the size of the process's logical address range (its upper limit)
■ The relocation register provides the starting point of the process in physical memory; each logical address is first checked against the limit and then added to the relocation value
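A minimal Python sketch of the hardware check described above (names and values are illustrative):

def translate(logical_address, relocation, limit):
    # Every access is checked against the limit register first ...
    if not (0 <= logical_address < limit):
        raise MemoryError("addressing error: trap to the operating system")
    # ... and then relocated by adding the base (relocation) register.
    return relocation + logical_address

# Process loaded at physical address 0x4000 with a 0x1000-byte allocation.
print(hex(translate(0x0200, relocation=0x4000, limit=0x1000)))   # 0x4200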
Dynamic Storage Allocation
● Dynamic storage allocation is a technique used by computer programs to allocate memory dynamically
during runtime.
● It can improve the efficiency of memory usage and make programs more flexible and adaptable.
● There are several techniques used for dynamic storage allocation, such as heap-based allocation,
stack-based allocation, and garbage collection.
● Heap-based allocation involves allocating memory from a pool of free memory known as the heap,
while stack-based allocation involves allocating memory from a stack.
● Garbage collection involves automatically deallocating memory that is no longer being used by a
program.
Cont.
● Dynamic storage allocation can be divided into three common strategies: Best Fit, First Fit, and Worst
Fit.
○ Best Fit: the allocator finds the smallest free block of memory that can accommodate the requested memory
size
○ First Fit: the allocator finds the first free block of memory that can accommodate the requested memory size
○ Worst Fit: the allocator finds the largest free block of memory that can accommodate the requested memory size
● Each strategy has its advantages and disadvantages, and the choice of strategy depends on the specific
requirements of the program and the characteristics of the memory being allocated.
● First Fit is the simplest and most commonly used strategy because it strikes a balance between speed
and fragmentation.
Cont.
● Example: a DMA controller needs to transfer a block of data of size 500 bytes into main memory. Suppose the DMA controller has access to the following free memory blocks in main memory:
○ Block A: Starting address 1000, size 500 bytes
○ Block B: Starting address 2000, size 1500 bytes
○ Block C: Starting address 4000, size 1000 bytes
● Find Best Fit
● Find First Fit
● Find Worst Fit
Cont.
● Best Fit
○ Block A
● First Fit
○ Block A
● Worst Fit
○ Block B
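A small Python sketch of the three strategies, checked against this example (block names and sizes as given above):

def first_fit(blocks, size):
    # blocks: list of (name, start_address, block_size) in address order.
    for name, _, block_size in blocks:
        if block_size >= size:
            return name                                          # first block big enough
    return None

def best_fit(blocks, size):
    fits = [b for b in blocks if b[2] >= size]
    return min(fits, key=lambda b: b[2])[0] if fits else None    # smallest fit

def worst_fit(blocks, size):
    fits = [b for b in blocks if b[2] >= size]
    return max(fits, key=lambda b: b[2])[0] if fits else None    # largest fit

blocks = [("A", 1000, 500), ("B", 2000, 1500), ("C", 4000, 1000)]
print(best_fit(blocks, 500), first_fit(blocks, 500), worst_fit(blocks, 500))
# -> A A B, matching the answers above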
Segmentation
● Segmentation is a memory management technique used in operating systems to organize and manage
memory as logical segments or regions.
● Segments can be shared between multiple processes, allowing for more efficient use of memory.
● Segmentation provides several advantages over other memory management techniques, such as
dynamic memory allocation, protection from unauthorized access, and sharing of memory between
multiple processes.
● However, segmentation also has some disadvantages, such as external fragmentation, which can lead to inefficient use of memory and reduced system performance.
Fragmentation
● Fragmentation in main memory refers to the phenomenon where the available memory becomes
fragmented into small free blocks of memory, making it difficult to allocate a large contiguous block of
memory to a program.
● There are two types of fragmentation that can occur in main memory:
○ external fragmentation, which occurs when there are many small free blocks of memory scattered throughout
the memory space, and
○ internal fragmentation, which occurs when the allocated memory block is larger than the required memory block
and the unused portion of the block is wasted.
● To reduce the impact of fragmentation, various techniques can be employed, such as compaction,
virtual memory, and memory allocation algorithms
Cont.
● Compaction is a technique used to eliminate external fragmentation by relocating allocated blocks of
memory to form a larger block of free memory,
● while virtual memory is a technique used to overcome external fragmentation by allowing programs to
access memory that is not physically available in main memory.
● Memory allocation algorithms, such as Best Fit, First Fit, and Worst Fit, can be used to minimize the
amount of fragmentation that occurs during dynamic memory allocation and deallocation.
Paging
● Paging is a technique that allows the operating system to allocate memory (logical) to a process in
small fixed-size chunks called "pages".
● Each page is a contiguous block of logical memory; the corresponding fixed-size block of physical memory is called a frame. Pages can be swapped between the RAM and the hard disk independently of other pages.
● This allows the operating system to efficiently manage memory and swap out pages that are not
currently being used to free up space in the RAM for other processes.
● Each CPU-generated address contains a page number and a page offset
● When a program references a logical address, the paging system translates it to a physical address by
looking up the corresponding entry in the page table.
● If the page is not currently in the RAM, a page fault occurs, and the operating system retrieves the
page from the hard disk and loads it into a free frame in the RAM.
● The page table is then updated to reflect the new location of the page.
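A minimal Python sketch of the page-number/offset split and page-table lookup (the page size and table contents are hypothetical):

PAGE_SIZE = 4096                             # 4 KB pages
page_table = {0: 5, 1: 9, 2: 42}             # virtual page -> physical frame

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:               # page not resident: page fault
        raise LookupError("page fault: OS loads the page into a free frame")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x12F8)))                # page 1 -> frame 9 -> 0x92f8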
Translation Lookaside Buffer
● A Translation Lookaside Buffer (TLB) is a hardware cache used in modern computer systems to improve
memory access speeds.
● It stores recently accessed virtual-to-physical address translations, allowing the CPU to avoid
repeatedly accessing the page table in main memory to resolve address translations.
● When a program accesses memory, the MMU first checks the TLB to see if the translation is already
stored there.
● If the translation is found, the MMU can immediately use the cached physical address to access the
memory, without accessing the page table in main memory.
● If the translation is not found in the TLB, the MMU must access the page table in main memory to
resolve the translation.
● The size of the TLB is limited by hardware constraints, and larger TLBs generally result in better
performance due to fewer page table accesses.
Effective Memory-Access Time
● The effective memory-access time (EMAT) is the average time it takes to access a memory location in a
computer system, taking into account the time it takes to access the cache and main memory. To find
the EMAT, you need to know the cache hit rate, the cache access time, the main memory access time,
and the block transfer time.
● Here is the formula to calculate the EMAT:
● EMAT = cache hit rate x cache access time + (1 - cache hit rate) x (main memory access time + block
transfer time)
Cont.
● To use this formula, you need to know the following terms:
● Cache hit rate: The percentage of memory access requests that are found in the cache. It can be
calculated as the number of cache hits divided by the total number of memory access requests.
● Cache access time: The time it takes to access the cache, including the time it takes to check if the
memory location is in the cache.
● Main memory access time: The time it takes to access main memory if the memory location is not in
the cache.
● Block transfer time: The time it takes to transfer a block of data between the cache and main memory.
This is typically the time it takes to transfer an entire cache line or block of data.
Cont.
● For example, suppose a computer system has a cache hit rate of 90%, a cache access time of 1 ns, a
main memory access time of 100 ns, and a block transfer time of 10 ns.
● The EMAT can be calculated as follows:
○ cache hit rate = 90%
○ cache access time = 1 ns
○ main memory access time = 100 ns
○ block transfer time = 10 ns
○ EMAT = 0.9 x 1 ns + (1 - 0.9) x (100 ns + 10 ns) = 11.9 ns
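The same calculation as a short sketch:

def emat(hit_rate, cache_ns, memory_ns, transfer_ns):
    # Hits cost only the cache access; misses pay memory access + block transfer.
    return hit_rate * cache_ns + (1 - hit_rate) * (memory_ns + transfer_ns)

print(f"{emat(0.90, 1, 100, 10):.1f} ns")    # 11.9 ns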
Page Table
● A page table is a data structure used by a virtual memory system to keep track of the mapping
between virtual addresses and physical addresses.
● It is typically stored in main memory and is accessed on every memory access, so its design can have a
significant impact on system performance.
● The page table is organized as a table of entries, where each entry corresponds to a page of virtual
memory.
● It contains information about the physical page frame that is currently mapped to the virtual page, as
well as other control bits and metadata.
● The size and structure of the page table can vary depending on the virtual memory architecture of the
system, such as hierarchical page tables to reduce the size and improve performance, or hashed page
tables to improve lookup speed.
Structure of the Page Table
● The main components of a page table entry (PTE) include:
○ Page frame number (PFN): This is the physical address of the page frame that is currently mapped to the virtual
page.
○ Valid/Invalid bit: This bit indicates whether the virtual page is currently mapped to a physical page frame. If the
bit is set to "valid", the page is currently mapped, and the PFN field contains the physical address of the page
frame. If the bit is set to "invalid", the virtual page is not currently mapped, and the PFN field is ignored.
○ Protection bits: These bits define the access rights for the page. They determine whether the page is read-only or
read-write, whether it can be executed, and whether it can be accessed by privileged or unprivileged code.
○ Dirty bit: This bit indicates whether the page has been modified since it was last written to disk. It is used to
optimize page replacement policies and reduce the number of unnecessary writes to disk.
○ Reference bit: This bit indicates whether the page has been accessed recently. It is used to optimize page
replacement policies and reduce the number of unnecessary page swaps.
○ Page table pointer: In systems that use hierarchical page tables, the page table pointer field is used to store a
pointer to the next level of the page table.
Cont.
Virtual Page Number | PFN | Valid/Invalid Bit | Protection Bits | Dirty Bit | Reference Bit
         0          | 100 |         1         |       RW        |     0     |       1
         1          | 200 |         1         |       RO        |     0     |       0
         2          | 400 |         0         |        -        |     -     |       -
Swapping
● Swapping is a memory management technique used by operating systems to temporarily remove
pages or portions of a process's working memory from physical memory and move them to secondary
storage, such as a hard disk or solid-state drive (SSD).
● This frees up space in physical memory, which can be used to load other processes or pages.
● It involves several steps: selecting pages to swap out, writing those pages to disk and updating the page table, and later reading the pages back from disk and updating the page table again.
● Swapping can cause performance issues, but is necessary for managing memory and allowing multiple
processes to share a limited amount of physical memory.
Virtual Memory
● Virtual memory is a computer memory management technique that allows a computer to use more
memory than it physically has available.
● It is a way of temporarily transferring pages of data from random access memory (RAM) to disk
storage.
● When a program needs to access data or instructions that are not currently in RAM, the operating
system moves the required data from the hard disk into RAM.
● The operating system then maps the virtual address requested by the program to a physical address in
RAM, allowing the program to access the requested data or instructions.
● Virtual memory also provides security benefits by isolating processes and preventing them from
accessing each other's memory spaces.
Demand Paging
● Demand paging is a memory management technique used by operating systems to optimize the use of
physical memory.
● It works by dividing a program into pages, which are the smallest unit of memory that can be loaded
into physical memory.
● When a program accesses a page that is not currently in physical memory, a page fault occurs and the operating system loads that page into memory from disk storage.
● Loading pages only on demand can significantly reduce the amount of physical memory required to run a program, but servicing page faults introduces performance overhead.
Cont.
● Performance of Demand Paging
○ Effective Access Time = (1 − probability_of_a_page_fault) × memory_access_time + probability_of_a_page_fault
× page_fault_time
○ Probability_of_a_page_fault = 5/999
○ Memory_access_time = 200 ns
○ Page_fault_time = 8 ms
○ = (1 − 5/999) × 200 ns + (5/999) × 8 ms
○ = (1 − 0.005005) × 200 ns + (0.005005) × 8 ms
○ = 0.994995 × 200 ns + 0.04004 ms
○ = 198.999 ns + 0.04004 ms
○ = 0.000198999 ms + 0.04004 ms
○ = 0.040239 ms
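The same computation in a short sketch (all times in nanoseconds):

fault_prob = 5 / 999                          # probability of a page fault
memory_ns = 200                               # memory access time
fault_ns = 8_000_000                          # 8 ms page-fault service time

eat = (1 - fault_prob) * memory_ns + fault_prob * fault_ns
print(f"{eat:.1f} ns = {eat / 1e6:.6f} ms")   # ~40239.0 ns = 0.040239 ms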
Page Fault
● A page fault is a type of interrupt that occurs when a program tries to access a page of memory that is
not currently in physical memory (RAM).
● The operating system must retrieve the requested page from disk and load it into physical memory; the routine that does this is known as the page-fault handler.
● Virtual memory is a technique that allows a program to use more memory than is physically available
by temporarily transferring pages of data from RAM to disk storage.
● When a program requests data that is not currently in RAM, the operating system will move the
required data from disk storage to RAM.
● If the required data is not in any of the pages in RAM, a page fault will occur. Modern operating
systems are optimized to minimize the frequency of page faults and to handle them efficiently when
they do occur.
Copy-on-Write
● Copy-on-write (COW) is a technique used in computer programming and operating systems to
optimize memory usage and improve performance.
● When a process requests to make a copy of a block of memory, the operating system sets up a
reference to the original block of memory and marks it as copy-on-write.
● This means that the original block of memory is shared between the two processes until one of them
tries to modify it.
● COW is commonly used in forked processes, where a child process is created from a parent process.
● When the child process modifies a memory block, the operating system creates a new copy of the
memory block, so that the changes made by the child process do not affect the memory of the parent
process.
● COW can significantly reduce the memory overhead of creating new processes or making copies of
memory blocks, but it does add some overhead to the process of modifying memory blocks.
Page Replacement
● Page replacement is a technique used by computer operating systems to manage memory when there
is not enough physical memory (RAM) to store all the data needed by running programs.
● When a process references a page that is not present in physical memory, a page fault occurs and the operating system must find a free page frame into which to load the requested page.
● Page replacement algorithms determine which page frame to evict when there is no free page frame
available in memory.
● There are several common page replacement algorithms, such as LRU, FIFO, Optimal, and Clock.
● LRU evicts the page that has not been accessed for the longest time, FIFO evicts the page that was loaded into memory first, Optimal evicts the page that will not be needed for the longest time in the future, and Clock approximates LRU by cycling through the frames and evicting the first page whose reference bit is clear.
Least Recently Used
● LRU page replacement is an algorithm used by computer operating systems to manage memory when
there is not enough physical memory (RAM) to store all the data needed by running programs.
● It evicts the least recently used page from memory when a new page needs to be loaded and there are
no free page frames available.
● The LRU algorithm is designed to minimize the number of page faults by prioritizing pages that are
more likely to be used again in the near future.
● To reduce the number of page faults and improve system performance, the system must track the access history of pages; this is commonly done with per-page timestamps or counters updated on every access, or with a stack or linked list of page numbers ordered by recency.
Cont.
● Replace the page in the frame whose last use is furthest in the past
● Example (3 frames; "-" means the frame is still empty):
○ Page:    7 0 1 2 0 3 0 4 2 3 0 3 2
○ Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0
○ Frame 2: - 0 0 0 0 0 0 0 0 3 3 3 3
○ Frame 3: - - 1 1 1 3 3 3 2 2 2 2 2
First-In, First-Out
● FIFO page replacement is a simple and commonly used algorithm for managing memory in computer
operating systems.
● It operates by evicting the oldest page in memory, which was loaded into memory first, when a new
page needs to be loaded into memory and there are no free page frames available.
● However, it has several shortcomings, such as the "Belady's Anomaly" issue, where increasing the
number of page frames in memory can actually increase the number of page faults.
● Additionally, it does not take into account the access patterns of pages or their frequency of use.
Despite its limitations, the FIFO page replacement algorithm is still widely used in some operating
systems, particularly those with limited resources and simpler memory management requirements.
Cont.
● Replace the page in the frame that was loaded into memory first (the oldest page)
● Example (3 frames; "-" means the frame is still empty):
○ Page:    7 0 1 2 0 3 0 4 2 3 0 3 2
○ Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0
○ Frame 2: - 0 0 0 0 3 3 3 2 2 2 2 2
○ Frame 3: - - 1 1 1 1 0 0 0 3 3 3 3
Optimal Page Replacement
● The Optimal page replacement algorithm, also known as the MIN (minimum) algorithm, is a theoretical page replacement algorithm that incurs the fewest possible page faults and therefore serves as a benchmark for other page replacement algorithms.
● It works by predicting which pages are least likely to be used in the future and evicting those pages
from memory.
● To implement the Optimal algorithm, the operating system needs to have perfect knowledge of the
future memory access pattern of the program, which is not possible in real-world situations.
● Despite its theoretical nature, the Optimal algorithm can be useful in some situations where the
memory access patterns of a program are known in advance.
● It can provide a performance metric that can be used to evaluate the effectiveness of other page
replacement algorithms.
Cont.
● Replace the page that will not be used for the longest time in the future
● Example (3 frames; "-" means the frame is still empty):
○ Page:    7 0 1 2 0 3 0 4 2 3 0 3 2
○ Frame 1: 7 7 7 2 2 2 2 2 2 2 2 2 2
○ Frame 2: - 0 0 0 0 0 0 4 4 4 0 0 0
○ Frame 3: - - 1 1 1 3 3 3 3 3 3 3 3
Page Faults in each Replacement Algorithm
● LRU
○ 9 page faults
● FIFO
○ 10 page faults
● OPR (Optimal)
○ 7 page faults (the counts follow from the traces above; see the sketch below)
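A compact simulator (a minimal sketch) that reproduces these counts from the reference string used above:

def simulate(refs, n_frames, policy):
    mem, faults = [], 0
    for t, page in enumerate(refs):
        if page in mem:
            if policy == "LRU":               # on a hit, refresh recency
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1
        if len(mem) == n_frames:              # memory full: pick a victim
            if policy in ("FIFO", "LRU"):
                mem.pop(0)                    # oldest loaded / least recently used
            else:                             # OPT: farthest next use (or never used)
                future = refs[t + 1:]
                key = lambda p: future.index(p) if p in future else len(future) + 1
                mem.remove(max(mem, key=key))
        mem.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
for policy in ("LRU", "FIFO", "OPT"):
    print(policy, simulate(refs, 3, policy))  # LRU 9, FIFO 10, OPT 7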
Allocation of Frames
● The allocation of frames is the process of assigning a certain amount of memory to a running program
or process.
● In a computer system, physical memory is divided into fixed-size chunks called "frames". These frames
are allocated to processes or programs running on the system to store their data and instructions.
● There are several techniques used to allocate frames, such as fixed partitioning, dynamic partitioning,
paging, and segmentation.
● Fixed partitioning involves dividing memory into fixed-size partitions, while dynamic partitioning
involves dividing memory into variable-sized partitions.
● Page allocation involves dividing memory into fixed-size pages, while segmentation involves dividing
memory into variable-sized segments.
Cont.
● If each process gets an equal number of frames, it is known as equal allocation, whereas if the allocation is based on the size or need of each process, it is called proportional allocation.
● In equal allocation each process gets (free_frames / total_processes) frames.
● In proportional allocation each process p_i receives a_i = (s_i / S) × m frames, where s_i is the size of process p_i in pages, S is the total number of pages of all processes, and m is the number of free frames.
● Example: two processes contain 10 and 127 pages respectively, and there are 80 free frames; identify how many frames each process will get.
● a_0 = (10/137) × 80 ≈ 5.8 → 6
● a_1 = (127/137) × 80 ≈ 74.2 → 74
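The same arithmetic as a sketch:

def proportional_allocation(page_counts, free_frames):
    total_pages = sum(page_counts)                       # S
    # round() may make the sum differ from free_frames by one in general;
    # a real allocator would distribute any remainder explicitly.
    return [round(s / total_pages * free_frames) for s in page_counts]

print(proportional_allocation([10, 127], 80))            # [6, 74]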
Thrashing
● Thrashing is a phenomenon that occurs in computer systems when the system spends a significant
amount of time and resources continuously swapping pages between physical memory and virtual
memory, without making any real progress in executing the program.
● It usually occurs when a system is under heavy load and there is not enough physical memory
available to store all the pages needed by the active processes.
● To prevent thrashing, it is important to ensure that the system has enough physical memory to handle
the workload, use efficient memory management techniques, and monitor the system's memory
usage and page fault rates.
Address Translation
● Address translation is the process of converting a virtual memory address used by a process to a
physical memory address used by the system's memory management unit (MMU).
● This allows for efficient use of memory resources by allowing the operating system to manage the
mapping of virtual addresses to physical addresses and allocate and free memory as needed.
Cont.
● A process generates a virtual address of 0x12F8. The page size is 4KB and the process's page table maps virtual page number 0x1 to physical page frame number 0x2A.
○ What is the page number for the virtual address?
○ What is the offset for the virtual address?
○ What is the physical memory address corresponding to the virtual address?
● Note: You can assume that each page table entry is 4 bytes in size, and that the physical memory starts
at address 0x00000.
Cont.
● What is the page number for the virtual address?
○ Page_Number = Virtual_Address / Page_Size = 0x12F8/4KB = 0x1
● What is the offset for the virtual address?
○ Offset = Virtual_Address % Page_Size = 0x12F8 % 4KB = 0x2F8
● What is the physical memory address corresponding to the virtual address?
○ Physical_Memory_Address = (Physical_Page_Frame_Number * Page_Size) + Offset
○ = (0x2A * 4KB) + 0x2F8
○ = 0x2A000 + 0x2F8
○ = 0x2A2F8
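A short sketch that verifies the calculation:

PAGE_SIZE = 4 * 1024                          # 4 KB = 0x1000

va = 0x12F8
page, offset = divmod(va, PAGE_SIZE)
pa = 0x2A * PAGE_SIZE + offset                # frame number from the page table
print(hex(page), hex(offset), hex(pa))        # 0x1 0x2f8 0x2a2f8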
HDD Scheduling
● HDD (Hard Disk Drive) scheduling is the process of managing access to data stored on a hard disk drive.
● It is important in computer systems where multiple processes or users are accessing the hard disk
simultaneously.
● The main objective of HDD scheduling is to optimize the use of the disk resources while maintaining high
performance and minimizing the time required to access data.
● There are several approaches to HDD scheduling, such as FCFS, SSTF, SCAN, and C-SCAN.
○ FCFS services requests in the order in which they arrive, while
○ SSTF services the request that requires the least amount of movement of the read/write head.
○ SCAN services requests in a sweeping motion from one end of the disk to the other, then reverses direction and services requests on the way back.
○ C-SCAN services requests only in one direction, then jumps to the other end of the disk and services requests in the same
direction again.
Cont.
● Example: 98, 183, 37, 122, 14, 124, 65, 67, and head is on 53
○ FCFS: 98, 183, 37, 122, 14, 124, 65, 67
○ SSTF: 65, 67, 37, 14, 98, 122, 124, 183
○ SCAN: 65, 67, 98, 122, 124, 183, 37, 14
○ C-SCAN: 65, 67, 98, 122, 124, 183, 14, 37
● To find the total distance the head moves in each case, take the sum of absolute differences of successive positions (note that the SCAN and C-SCAN orders above reverse or jump at the last request rather than at the physical end of the disk, i.e., the LOOK/C-LOOK variants):
○ FCFS: |53-98|+|98-183|+|183-37|+|37-122|+|122-14|+|14-124|+|124-65|+|65-67| = 640
○ SSTF: |53-65|+|65-67|+|67-37|+|37-14|+|14-98|+|98-122|+|122-124|+|124-183| = 236
○ SCAN : |53-65|+|65-67|+|67-98|+|98-122|+|122-124|+|124-183|+|183-37|+|37-14| = 299
○ C-SCAN: |53-65|+|65-67|+|67-98|+|98-122|+|122-124|+|124-183|+|183-14|+|14-37| = 322
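A sketch that reproduces these totals (service orders as listed above):

def seek_distance(start, order):
    total, position = 0, start
    for track in order:
        total += abs(position - track)        # head movement for this request
        position = track
    return total

start = 53
orders = {
    "FCFS":   [98, 183, 37, 122, 14, 124, 65, 67],
    "SSTF":   [65, 67, 37, 14, 98, 122, 124, 183],
    "SCAN":   [65, 67, 98, 122, 124, 183, 37, 14],
    "C-SCAN": [65, 67, 98, 122, 124, 183, 14, 37],
}
for name, order in orders.items():
    print(name, seek_distance(start, order))  # 640, 236, 299, 322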
NVM Scheduling
● NVM (Non-Volatile Memory) scheduling is the process of managing access to non-volatile memory
devices such as flash memory.
● It is important in systems that use NVM as a storage medium, such as SSDs and hybrid memory
systems.
● The main objective of NVM scheduling is to optimize the use of the NVM resources while maintaining
high performance and minimizing wear on the NVM devices.
● There are several approaches to NVM scheduling, such as queue-based scheduling, deadline-based
scheduling, and group-based scheduling.
○ Queue-based scheduling uses a queue to manage incoming requests to the NVM devices, while
○ deadline-based scheduling assigns a deadline to each request and services requests based on their deadline.
○ Group-based scheduling groups requests based on their access patterns and services them in batches, which
reduces the number of erase and write operations required.
Application I/O Interface
● An Application I/O Interface (AIO) is a software layer that provides a standard interface between
applications and I/O devices.
● It abstracts the underlying hardware details and provides a common set of functions that applications
can use to interact with I/O devices, regardless of the specific device or platform being used.
● One of the key benefits of an AIO interface is that it allows applications to be written in a
platform-independent manner, while also simplifying the process of writing device drivers.
● Overall, an AIO provides a convenient and consistent way for applications to interact with I/O devices,
while also allowing for flexibility and platform independence.
Kernel I/O Subsystem
● The Kernel I/O Subsystem is a core component of an operating system that provides a unified and
efficient way for applications to access I/O devices.
● It is responsible for managing the flow of data between applications and the hardware devices, and
ensuring that I/O operations are performed in a reliable and secure manner.
● It typically includes device drivers, system calls, library functions, and system-level services that handle
tasks such as buffering, caching, and queuing of I/O requests.
● It provides a consistent and standardized interface for applications to access I/O devices, and can
improve the performance and efficiency of I/O operations by optimizing the use of system resources.
● Overall, the Kernel I/O Subsystem plays a critical role in providing a reliable and efficient way for
applications to access I/O devices, and is a core component of most modern operating systems.
Transforming I/O Requests to Hardware
Operations
● Transforming I/O requests to hardware operations involves a number of steps that are typically handled by the Kernel I/O
Subsystem of an operating system.
● These steps include:
○ application requests I/O operation: an application issues a system call to request an I/O operation specifying the device and data to
be transferred;
○ I/O request queued: the Kernel I/O Subsystem places the I/O request in a queue, and schedules the request to be processed by the
appropriate device driver;
○ device driver prepares hardware operation: the device driver prepares the necessary hardware commands to perform the requested
I/O operation;
○ hardware operation executed: the hardware performs the requested I/O operation, and data is transferred between the device and
memory;
○ interrupt generated: the device generates an interrupt signal to inform the CPU that the operation has finished;
○ interrupt handled: the Kernel I/O Subsystem wakes up the application thread that initiated the I/O request and signals that the
operation has completed;
○ data returned to application: the application receives the data that was transferred during the I/O operation and resumes execution.
File Concept
● File Attributes:
○ Name, Identifier, Type, Location, Size, Protection, Timestamps and User Identification
● File Operations
○ Creating, Opening, Writing, Reading, Repositioning within file, Deleting, Truncating
● File Structure
○ Contiguous allocation: This structure stores a file as a contiguous block of data on the storage device. It is simple and efficient but can
lead to fragmentation as files are added, deleted, and modified.
○ Linked allocation: This structure stores a file as a linked list of data blocks on the storage device. Each block contains a pointer to the
next block in the file. It is flexible and can handle files of any size, but can be slow to access as each block must be read separately.
○ Indexed allocation: This structure uses an index to store the addresses of the data blocks that make up a file. The index is stored
separately from the file data, making it faster to access and less susceptible to fragmentation.
○ Combined allocation: This structure combines elements of both contiguous and linked allocation to provide a more efficient solution.
The file is stored as a contiguous block of data until it reaches a certain size, after which it is stored using linked allocation.
Access Methods
● Sequential Access: In this method, data is accessed in a sequential manner, i.e., the computer reads data from
the beginning of a file or storage device and continues to read data until it reaches the end of the file. This
method is commonly used for reading data from tapes.
● Direct Access: This method allows the computer to read data from any point in a file or storage device. With
direct access, the computer can jump directly to the desired location and retrieve the data, rather than having
to read all the data in sequence. This method is commonly used for reading data from hard disk drives.
● Random Access: This method allows the computer to access any location in a storage device directly and
quickly. Random access is used in memory devices such as RAM and cache, where data can be accessed in any
order.
● Indexed Access: This method involves the use of an index to locate data within a file or storage device. An
index is a data structure that contains pointers to the locations of data in the file. This method is commonly
used for reading data from databases.
Directory Structure
● Single-level directory structure: In this structure, all files are stored in a single directory. This approach is simple
but can become unmanageable when the number of files grows.
● Two-level directory structure: This structure uses a root directory to contain multiple user directories. Each
user directory can then contain its own set of files. This approach is more organized than a single-level
directory structure but can still become difficult to manage as the number of users and files increases.
● Hierarchical directory structure: This structure uses a tree-like hierarchy of directories to organize files. Each
directory can contain its own set of files or subdirectories, creating a logical organization of files. This approach
is used in most modern operating systems.
● Indexed directory structure: This structure uses an index to organize files. The index contains pointers to the
data, allowing the system to quickly access specific files without having to scan the entire directory.
● Virtual file system: A virtual file system is a file structure that provides an interface for accessing different types
of files and storage devices. This allows the operating system to manage all file types and storage devices using
a single file structure.
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf
Operating System.pdf

More Related Content

What's hot

Operating Systems Basics
Operating Systems BasicsOperating Systems Basics
Operating Systems Basicsnishantsri
 
Peripheral devices
Peripheral devicesPeripheral devices
Peripheral devicesBurhan Ahmed
 
Operating Systems
Operating SystemsOperating Systems
Operating SystemsDan Hess
 
Functions Of Operating System
Functions Of Operating SystemFunctions Of Operating System
Functions Of Operating SystemDr.Suresh Isave
 
Operating systems system structures
Operating systems   system structuresOperating systems   system structures
Operating systems system structuresMukesh Chinta
 
Function of Operating system
Function of Operating systemFunction of Operating system
Function of Operating systemAmit Mehla
 
Introduction to Operating System
Introduction to Operating SystemIntroduction to Operating System
Introduction to Operating SystemDivya S
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OSKumar Pritam
 
Introduction to Operating Systems
Introduction to Operating SystemsIntroduction to Operating Systems
Introduction to Operating SystemsDamian T. Gordon
 
Operating Systems
Operating SystemsOperating Systems
Operating Systemsvampugani
 
Introduction to Operating Systems
Introduction to Operating SystemsIntroduction to Operating Systems
Introduction to Operating SystemsMukesh Chinta
 
chapter 1 introduction to operating system
chapter 1 introduction to operating systemchapter 1 introduction to operating system
chapter 1 introduction to operating systemAisyah Rafiuddin
 
Types of operating system unit 1 by Ram K Paliwal
Types of operating system  unit 1 by Ram K PaliwalTypes of operating system  unit 1 by Ram K Paliwal
Types of operating system unit 1 by Ram K PaliwalRam Paliwal
 
Software and hardware
Software and hardwareSoftware and hardware
Software and hardwaremeryy21
 

What's hot (20)

Operating Systems Basics
Operating Systems BasicsOperating Systems Basics
Operating Systems Basics
 
Peripheral devices
Peripheral devicesPeripheral devices
Peripheral devices
 
Operating Systems
Operating SystemsOperating Systems
Operating Systems
 
Os ppt
Os pptOs ppt
Os ppt
 
Functions Of Operating System
Functions Of Operating SystemFunctions Of Operating System
Functions Of Operating System
 
Operating systems system structures
Operating systems   system structuresOperating systems   system structures
Operating systems system structures
 
Function of Operating system
Function of Operating systemFunction of Operating system
Function of Operating system
 
Operating system
Operating systemOperating system
Operating system
 
Introduction to Operating System
Introduction to Operating SystemIntroduction to Operating System
Introduction to Operating System
 
Batch operating system
Batch operating system Batch operating system
Batch operating system
 
operating system
operating systemoperating system
operating system
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OS
 
Introduction to Operating Systems
Introduction to Operating SystemsIntroduction to Operating Systems
Introduction to Operating Systems
 
Memory management
Memory managementMemory management
Memory management
 
Operating Systems
Operating SystemsOperating Systems
Operating Systems
 
memory hierarchy
memory hierarchymemory hierarchy
memory hierarchy
 
Introduction to Operating Systems
Introduction to Operating SystemsIntroduction to Operating Systems
Introduction to Operating Systems
 
chapter 1 introduction to operating system
chapter 1 introduction to operating systemchapter 1 introduction to operating system
chapter 1 introduction to operating system
 
Types of operating system unit 1 by Ram K Paliwal
Types of operating system  unit 1 by Ram K PaliwalTypes of operating system  unit 1 by Ram K Paliwal
Types of operating system unit 1 by Ram K Paliwal
 
Software and hardware
Software and hardwareSoftware and hardware
Software and hardware
 

Similar to Operating System.pdf

Introduction to operating system
Introduction to operating systemIntroduction to operating system
Introduction to operating systemAkshay Ithape
 
Operating Systems & Applications
Operating Systems & ApplicationsOperating Systems & Applications
Operating Systems & ApplicationsMaulen Bale
 
linux monitoring and performance tunning
linux monitoring and performance tunning linux monitoring and performance tunning
linux monitoring and performance tunning iman darabi
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OSAJAL A J
 
MK Sistem Operasi.pdf
MK Sistem Operasi.pdfMK Sistem Operasi.pdf
MK Sistem Operasi.pdfwisard1
 
operatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxoperatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxkrishnajoshi70
 
os unit 1 (2).pptx. introduction to operating systems
os unit 1 (2).pptx. introduction to operating systemsos unit 1 (2).pptx. introduction to operating systems
os unit 1 (2).pptx. introduction to operating systemsssuser6aef00
 
introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptxSHUJEHASSAN
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecturejeetesh036
 
Engg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdfEngg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdfnikhil287188
 
Operating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdfOperating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdfFahanaAbdulVahab
 
Nt introduction(os)
Nt introduction(os)Nt introduction(os)
Nt introduction(os)NehaTadam
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.pptFarhanaMariyam1
 

Similar to Operating System.pdf (20)

Operating System concepts
Operating System conceptsOperating System concepts
Operating System concepts
 
Introduction to operating system
Introduction to operating systemIntroduction to operating system
Introduction to operating system
 
Operating Systems & Applications
Operating Systems & ApplicationsOperating Systems & Applications
Operating Systems & Applications
 
linux monitoring and performance tunning
linux monitoring and performance tunning linux monitoring and performance tunning
linux monitoring and performance tunning
 
Operating System
Operating SystemOperating System
Operating System
 
Unit I OS CS.ppt
Unit I OS CS.pptUnit I OS CS.ppt
Unit I OS CS.ppt
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
MK Sistem Operasi.pdf
MK Sistem Operasi.pdfMK Sistem Operasi.pdf
MK Sistem Operasi.pdf
 
unit1 part1.ppt
unit1 part1.pptunit1 part1.ppt
unit1 part1.ppt
 
operatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxoperatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptx
 
os unit 1 (2).pptx. introduction to operating systems
os unit 1 (2).pptx. introduction to operating systemsos unit 1 (2).pptx. introduction to operating systems
os unit 1 (2).pptx. introduction to operating systems
 
Distributive operating system
Distributive operating systemDistributive operating system
Distributive operating system
 
introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptx
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecture
 
Engg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdfEngg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdf
 
Operating system
Operating systemOperating system
Operating system
 
Operating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdfOperating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdf
 
Nt introduction(os)
Nt introduction(os)Nt introduction(os)
Nt introduction(os)
 
OS Content.pdf
OS Content.pdfOS Content.pdf
OS Content.pdf
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.ppt
 

More from Syed Zaid Irshad

More from Syed Zaid Irshad (20)

DBMS_Lab_Manual_&_Solution
DBMS_Lab_Manual_&_SolutionDBMS_Lab_Manual_&_Solution
DBMS_Lab_Manual_&_Solution
 
Data Structure and Algorithms.pptx
Data Structure and Algorithms.pptxData Structure and Algorithms.pptx
Data Structure and Algorithms.pptx
 
Design and Analysis of Algorithms.pptx
Design and Analysis of Algorithms.pptxDesign and Analysis of Algorithms.pptx
Design and Analysis of Algorithms.pptx
 
Professional Issues in Computing
Professional Issues in ComputingProfessional Issues in Computing
Professional Issues in Computing
 
Reduce course notes class xi
Reduce course notes class xiReduce course notes class xi
Reduce course notes class xi
 
Reduce course notes class xii
Reduce course notes class xiiReduce course notes class xii
Reduce course notes class xii
 
Introduction to Database
Introduction to DatabaseIntroduction to Database
Introduction to Database
 
C Language
C LanguageC Language
C Language
 
Flowchart
FlowchartFlowchart
Flowchart
 
Algorithm Pseudo
Algorithm PseudoAlgorithm Pseudo
Algorithm Pseudo
 
Computer Programming
Computer ProgrammingComputer Programming
Computer Programming
 
ICS 2nd Year Book Introduction
ICS 2nd Year Book IntroductionICS 2nd Year Book Introduction
ICS 2nd Year Book Introduction
 
Security, Copyright and the Law
Security, Copyright and the LawSecurity, Copyright and the Law
Security, Copyright and the Law
 
Computer Architecture
Computer ArchitectureComputer Architecture
Computer Architecture
 
Data Communication
Data CommunicationData Communication
Data Communication
 
Information Networks
Information NetworksInformation Networks
Information Networks
 
Basic Concept of Information Technology
Basic Concept of Information TechnologyBasic Concept of Information Technology
Basic Concept of Information Technology
 
Introduction to ICS 1st Year Book
Introduction to ICS 1st Year BookIntroduction to ICS 1st Year Book
Introduction to ICS 1st Year Book
 
Using the set operators
Using the set operatorsUsing the set operators
Using the set operators
 
Using subqueries to solve queries
Using subqueries to solve queriesUsing subqueries to solve queries
Using subqueries to solve queries
 

Recently uploaded

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfphamnguyenenglishnb
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
Romantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxRomantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxsqpmdrvczh
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfUjwalaBharambe
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxLigayaBacuel1
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomnelietumpap1
 

Recently uploaded (20)

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
Romantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxRomantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptx
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choom
 

Operating System.pdf

processes will go to memory
● If many processes are ready for the CPU, then CPU Scheduling decides which process will get CPU time
Operating-System Operations
● The operating system waits for something to happen in order to perform an action
● This something is either an interrupt or a trap
● An interrupt is raised by the hardware
● A trap is either an error or a request for an operating-system service
● The operating system has to ensure that one interrupt/trap doesn't affect other running programs
Cont.
● Types of Operations
○ Dual-mode and Multimode
■ Kernel mode and user mode (assign the suitable mode to each process)
○ Timer
■ Specifies the duration for which a process may hold the CPU
Operating Systems Types
● Mainframe Operating Systems, e.g. IBM z/OS
● Server Operating Systems, e.g. Linux
● Multiprocessor Operating Systems, e.g. UNIX (symmetric architecture)
● Personal Computer Operating Systems, e.g. macOS
● Handheld Computer Operating Systems, e.g. Android
● Embedded Operating Systems, e.g. uC/OS
● Sensor-Node Operating Systems, e.g. TinyOS
● Real-Time Operating Systems, e.g. QNX (used in automobiles)
● Smart Card Operating Systems, e.g. Java Card
Process Management
● Scheduling processes and threads on the CPUs
● Creating and deleting both user and system processes
● Suspending and resuming processes
● Providing mechanisms for process synchronization
○ Restrict simultaneous access to shared data/resources
● Providing mechanisms for process communication
○ Convey information between multiple processes
Memory Management
● Keeping track of which parts of memory are currently being used and who is using them
● Deciding which processes (or parts of processes) and data to move into and out of memory
● Allocating and deallocating memory space as needed
Storage Management
● File-System Management
○ Creating and deleting files
○ Creating and deleting directories to organize files
○ Supporting primitives for manipulating files and directories
○ Mapping files onto secondary storage
○ Backing up files on stable (nonvolatile) storage media
● Mass-Storage Management
○ Free-space management
○ Storage allocation
○ Disk scheduling (ordering the requests that need to use the disk)
Cont.
● Caching
○ If the same data is shared among multiple processors, then each CPU cache must hold the same information.
○ Keeping the caches consistent in this way is called cache coherency.
● I/O Systems
○ A memory-management component that includes buffering, caching, and spooling (staging the same data between different devices)
○ A general device-driver interface
○ Drivers for specific hardware devices
Protection and Security
● Protection is any mechanism for controlling the access of processes or users to the resources defined by a computer system.
● This mechanism must provide means to specify the controls to be imposed and to enforce them.
● A system can have adequate protection but still be prone to failure and allow inappropriate access.
● It is the job of security to defend a system from external and internal attacks.
Kernel Data Structures
● Main memory is organized as an array
● Stacks are used to implement function calls
● Tasks that are waiting are organized in queues
● Linux uses trees for CPU scheduling
● Hashing is used to retrieve data quickly
● Bitmaps are used to track the availability of resources
Computing Environments
● Traditional Computing (desktops, laptops, online portals)
● Mobile Computing (handheld devices)
● Distributed Computing (physically separated systems connected over a network)
● Client-Server Computing
○ Compute-server system (the user sends a request to the server to perform a task)
○ File-server system (the user manipulates files directly, e.g. through web browsers)
● Peer-to-Peer Computing (multiple nodes joined in a network)
● Virtualization (multiple environments created using an emulator)
Cont.
● Cloud Computing
○ Public Cloud (available to anyone who pays), e.g. Microsoft Azure
○ Private Cloud (owned by the organization itself), e.g. HP Data Centers
○ Hybrid Cloud (combination of public and private), e.g. VMware Cloud on AWS
○ Software as a Service, SaaS (applications available via the internet), e.g. Microsoft 365
○ Platform as a Service, PaaS (a software stack ready for use via the internet), e.g. Google App Engine
○ Infrastructure as a Service, IaaS (servers/storage available over the internet), e.g. Amazon Web Services
● Real-Time Embedded Systems
Operating-System Services
● User Interface
● Program Execution
● I/O Operations
● File-System Manipulation
● Communications
● Error Detection
● Resource Allocation
● Logging
● Protection & Security
User and Operating-System Interface
● There are three fundamental approaches for users to interact with the OS:
○ Command Interpreter (Windows PowerShell, Linux terminal)
○ Graphical User Interface (keyboard/mouse)
○ Touch-Screen Interface (hand gestures)
System Calls
● System calls are the way the operating system provides essential services to applications/users.
● These services include:
○ Input/Output (I/O) Operations
○ Process Creation and Management
○ Memory Allocation and Management
○ File Management
● When a program running in user mode needs a service, it requests it from the kernel, passing along the core information describing the request.
Cont.
● The most common system calls are:
○ read(): Reads from a file/device
○ write(): Writes to a file/device
○ open(): Opens a file/device
○ close(): Closes a file/device
○ fork(): Creates a new process that is a copy of the calling process.
○ exec(): Replaces the current process image with a new process image.
○ getpid(): Returns the process ID of the calling process.
○ exit(): Terminates the calling process and returns control to the operating system.
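● A minimal sketch of how these calls combine on a POSIX system (the ls command run by the child is just an illustrative choice):

#include <sys/types.h>
#include <sys/wait.h>   // waitpid()
#include <unistd.h>     // fork(), getpid(), execlp()
#include <cstdio>
#include <cstdlib>

int main() {
    pid_t pid = fork();                  // create a child process (a copy of this one)
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                      // child: replace its image with the ls program
        printf("child pid = %d\n", (int)getpid());
        execlp("ls", "ls", "-l", (char*)NULL);
        perror("execlp");                // reached only if exec fails
        exit(1);
    }
    waitpid(pid, NULL, 0);               // parent: wait for the child to terminate
    printf("parent pid = %d, child %d finished\n", (int)getpid(), (int)pid);
    return 0;
}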
POSIX System Calls
● Portable Operating System Interface (POSIX) is a family of standards developed by the IEEE in the 1980s and adopted by ISO in the 1990s
● Its goal is to promote compatibility between different operating systems so that they are able to run the same applications/software
● These standards cover:
○ Standardized file I/O operations, including file access and manipulation.
○ Process management, including process creation, termination, and communication.
○ Interprocess communication (IPC), including shared memory, message queues, and semaphores.
○ System administration, including user and group management, system logging, and time management.
○ Network interfaces and protocols, including sockets and network file systems.
Cont.
● Process management
○ pid = fork(): Create a child process identical to the parent
○ pid = waitpid(pid, &statloc, options): Wait for a child to terminate
○ s = execve(name, argv, environp): Replace a process' core image
○ exit(status): Terminate process execution and return status
● File management
○ fd = open(file, how, ...): Open a file for reading, writing, or both
○ s = close(fd): Close an open file
○ n = read(fd, buffer, nbytes): Read data from a file into a buffer
○ n = write(fd, buffer, nbytes): Write data from a buffer into a file
○ position = lseek(fd, offset, whence): Move the file pointer
○ s = stat(name, &buf): Get a file's status information
Cont.
● Directory- and file-system management
○ s = mkdir(name, mode): Create a new directory
○ s = rmdir(name): Remove an empty directory
○ s = link(name1, name2): Create a new entry, name2, pointing to name1
○ s = unlink(name): Remove a directory entry
○ s = mount(special, name, flag): Mount a file system
○ s = umount(special): Unmount a file system
● Miscellaneous
○ s = chdir(dirname): Change the working directory
○ s = chmod(name, mode): Change a file's protection bits
○ s = kill(pid, signal): Send a signal to a process
○ seconds = time(&seconds): Get the elapsed time since Jan. 1, 1970
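● A minimal sketch of the file-management calls in use (the file name demo.txt is just illustrative):

#include <fcntl.h>     // open() and the O_* flags
#include <unistd.h>    // read(), write(), lseek(), close()
#include <cstdio>

int main() {
    int fd = open("demo.txt", O_CREAT | O_RDWR, 0644); // open (or create) a file
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "hello\n", 6);                 // write 6 bytes at the current offset
    lseek(fd, 0, SEEK_SET);                  // move the file pointer back to the start

    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));  // read the bytes back into a buffer
    if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

    close(fd);                               // release the file descriptor
    return 0;
}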
System Services
● System services can be divided into the following:
○ File Management (create, delete, copy, rename, print, etc.)
○ Status Information (date, time, available space, number of users, etc.)
○ File Modification (modify and create the content of stored files)
○ Programming Language Support
■ Compilers: Convert a program from one language to another
■ Assemblers: Convert a program to machine language
■ Debuggers: Test and debug other programs
■ Interpreters: Execute a program without requiring it to be compiled beforehand
○ Program loading and execution (load a compiled program into memory)
○ Communication (connections between processes/threads)
○ Background Services
Linkers and Loaders
● Before any program can be used, its source code is compiled, which generates an object file
● The object file is then passed to the linker, which finds all the supporting code needed for successful execution and creates an executable file
● The newly created executable file is then passed to the loader, which is responsible for loading it into memory
● If required, dynamically linked libraries (DLLs) are also added
● The linker and loader may be a single executable program or separate programs working together to fulfill this duty
Why Applications Are Operating-System Specific
● Currently our choice of OS depends on which applications it can run
● This is because not every application is able to run on every OS
● If that were the case, our choice would be based on the utilities the OS provides
● The following are a few ways an application can run on all types of OS:
○ Write the application in an interpreted language (e.g. Python/Ruby)
○ Write the application for a virtual machine that ships with the application (a runtime environment, RTE)
○ Use a standard API to develop the application
● Still, the following challenges remain:
○ OS-specific binary format for executables
○ CPU instruction set
○ OS system calls
Operating-System Structure
There are six designs for operating systems:
1. Monolithic systems (the OS runs as a single program in kernel mode), e.g. Windows 98/DOS
2. Layered systems (the OS is divided into multiple independent layers working together), e.g. Windows XP
3. Microkernels (the OS is divided into smaller chunks and only one portion runs in the kernel), e.g. Hurd
4. Client-server systems, e.g. Windows Server
5. Virtual machines (the OS is cloned for each user), e.g. Oracle VirtualBox
   a. Hypervisor (Type I runs directly on hardware, Type II runs on a host OS)
6. Exokernels (rather than cloning the OS, partition it based on user requirements), e.g. Nemesis
Process Concept
● A process is a program in execution, with its own address space and resources, managed by the operating system for efficient and secure operation of the system.
● A single process consists of the following sections:
○ Text (holds executable code)
○ Data (holds global variables)
○ Heap (memory dynamically allocated during execution)
○ Stack (temporary storage for invoking functions)
Cont.
● A process may be in one of the following states:
○ New (being created)
○ Running (being executed)
○ Waiting (idle until some event occurs)
○ Ready (waiting to be assigned to a processor)
○ Terminated (finished execution)
Cont.
● Each process is represented by a Process Control Block, also called a Task Control Block, which consists of the following pieces:
○ Process State
○ Program Counter (address of the next instruction for the process)
○ CPU Registers (contents of the registers the process needs to resume execution)
○ CPU-scheduling Information (process priority and scheduling parameters)
○ Memory-management Information
○ Accounting Information (statistics about CPU usage, process numbers, etc.)
○ I/O Status Information (list of I/O devices used/needed by the process)
● Threads
○ A thread is a subdivision of a process that executes one task at a time.
Process Scheduling
● The process scheduler is responsible for selecting an available process so that CPU utilization can be maximized, which is the main purpose of multiprogramming
● The number of processes currently residing in memory is known as the degree of multiprogramming
● Processes can be broadly classified as:
○ I/O bound (more I/O than computation), e.g. online media players
○ CPU bound (more computation than I/O), e.g. multimedia processing software
● A process moves through the following:
○ Scheduling Queues (wait for resources, then move to the ready queue)
○ CPU Scheduling (pick a process from the ready queue)
○ Context Switch (toggle between processes)
Operations on Processes
● Process creation
○ A process creates a new process
■ The parent executes alongside the child
■ The parent waits for the child to complete its execution
○ Address space of the new process
■ The child is a duplicate of the parent process
■ The child has a new program loaded into it
● Process termination
○ A child may be terminated because it
■ Has used up its resources
■ Is no longer needed
■ Has a parent that is terminating
Cont.
● If a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system
● A subprocess that has terminated while its parent has not yet called wait() (to ask for the status of the child) is known as a zombie process
● If the parent process terminates without invoking wait(), the child process is referred to as an orphan process
Interprocess Communication
● Sharing of resources between two processes is referred to as interprocess communication
● The following are the reasons for this type of communication:
○ Information sharing (processes interested in the same resource)
○ Computation speedup (break a process into manageable subprocesses for faster computation)
○ Modularity (ability to break a system into separate components)
● There are two models of interprocess communication:
○ Shared Memory
○ Message Passing
IPC in Shared-Memory Systems
● In shared-memory systems, Inter-Process Communication (IPC) refers to the mechanism of communication between processes that share the same physical memory.
● IPC is essential for coordinating and synchronizing activities among multiple processes in a shared-memory system.
○ Shared Memory: Processes can communicate by reading and writing to a shared memory region.
○ Semaphores: Semaphores are used to synchronize access to shared resources, such as shared memory regions.
○ Mutexes: Mutexes (short for mutual exclusion) are used to protect critical sections of code from simultaneous access by multiple processes.
○ Condition Variables: Condition variables are used to signal and wait for specific conditions to occur before a process can proceed.
○ Message Passing: In message passing, processes communicate by sending and receiving messages.
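● A minimal sketch of a shared-memory region using the POSIX shm_open()/mmap() calls (the object name /demo_shm is just illustrative; older Linux systems may need linking with -lrt):

#include <fcntl.h>     // O_* flags
#include <sys/mman.h>  // shm_open(), mmap(), shm_unlink()
#include <unistd.h>    // ftruncate()
#include <cstdio>
#include <cstring>

int main() {
    const char* name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  // create the shared object
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, 4096);                              // give the region a size

    char* region = (char*)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);    // map it into the address space
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "written through shared memory"); // any process mapping the same
                                                      // object sees this data
    munmap(region, 4096);
    shm_unlink(name);                                 // remove the object when done
    return 0;
}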
IPC in Message-Passing Systems
● In message-passing systems, Inter-Process Communication (IPC) refers to the mechanism of communication between processes that do not share the same physical memory.
○ Message Queues: Processes communicate by sending and receiving messages through a message queue.
○ Pipes: A pipe is a unidirectional communication channel between two processes. One process writes data to the pipe and the other process reads data from the pipe.
○ Sockets: A socket is a bidirectional communication channel between two processes over a network.
○ Remote Procedure Calls (RPCs): In an RPC, a process can call a procedure or function that runs in another process. The caller sends a message to the other process, which executes the procedure and returns the result.
Cont.
● Operations:
○ Send / Receive
● Communication:
○ Direct / Indirect
● Types of communication:
○ Blocking Send (the sender blocks until the message is received by the other process)
○ Non-Blocking Send (the sender sends the message and resumes execution)
○ Blocking Receive (the receiver blocks until a message is available)
○ Non-Blocking Receive (the receiver retrieves either a valid message or null)
● Buffering:
○ Zero capacity (queue length is zero)
○ Bounded capacity (queue length is finite, n)
○ Unbounded capacity (queue length is infinite)
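● A minimal sketch of message passing between a parent and child through a pipe; the child's read() is a blocking receive that waits until data arrives:

#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                                    // child: the receiver
        close(fd[1]);                                  // close the unused write end
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1); // blocks until a message arrives
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                                      // parent: the sender
    const char* msg = "hello via pipe";
    write(fd[1], msg, strlen(msg));                    // send the message
    close(fd[1]);
    waitpid(pid, NULL, 0);
    return 0;
}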
Threads
● A thread is a basic unit of CPU utilization; it comprises:
○ a thread ID
○ a program counter (PC)
○ a register set
○ and a stack.
● It shares with other threads belonging to the same process:
○ its code section
○ data section
○ and other operating-system resources, such as open files and signals.
● A traditional process has a single thread of control.
● If a process has multiple threads of control, it can perform more than one task at a time.
Cont.
● Single thread: one loop scans the entire array for the largest element.
for (int i = 0; i < ARRAY_SIZE; i++) {
    if (arr[i] > largestNumber) {
        largestNumber = arr[i];
    }
}
● Multithreaded: each thread is assigned one chunk of the array; the last thread also takes any leftover elements.
public LargestNumberFinder(int threadIndex) {
    this.threadIndex = threadIndex;
    int chunkSize = ARRAY_SIZE / THREAD_COUNT;
    this.startIndex = threadIndex * chunkSize;
    this.endIndex = (threadIndex == THREAD_COUNT - 1)
            ? ARRAY_SIZE
            : (threadIndex + 1) * chunkSize;
}
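● For comparison, a complete, runnable C++ sketch of the same chunking idea using std::thread (the array contents, size, and thread count are made up for illustration):

#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int ARRAY_SIZE = 1000, THREAD_COUNT = 4;
    std::vector<int> arr(ARRAY_SIZE);
    for (int i = 0; i < ARRAY_SIZE; i++) arr[i] = (i * 37) % 1009; // sample data

    std::vector<int> localMax(THREAD_COUNT);
    std::vector<std::thread> threads;
    int chunk = ARRAY_SIZE / THREAD_COUNT;

    for (int t = 0; t < THREAD_COUNT; t++) {
        int start = t * chunk;
        int end = (t == THREAD_COUNT - 1) ? ARRAY_SIZE : start + chunk;
        threads.emplace_back([&arr, &localMax, t, start, end] {
            // each thread finds the largest element of its own chunk
            localMax[t] = *std::max_element(arr.begin() + start, arr.begin() + end);
        });
    }
    for (auto& th : threads) th.join();   // wait for every chunk to finish

    printf("largest = %d\n", *std::max_element(localMax.begin(), localMax.end()));
    return 0;
}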
Cont.
● Benefits of multithreaded programming:
○ Responsiveness
■ Allows an application to continue running even if part of it is blocked or performing a lengthy operation
○ Resource Sharing
■ Allows multiple threads to share the same memory
○ Economy
■ Allocating memory/resources for process creation is costly (time consuming); creating multiple threads that share the same resources is more efficient
○ Scalability
■ On a multiprocessor, multithreaded processes may utilize the available capacity more efficiently
Multicore Programming
● Helps to create concurrent systems for deployment on multicore processor and multiprocessor systems
● Programming challenges:
○ Identifying Tasks (divide the work into concurrent tasks)
○ Balance (ensure the tasks perform equal work of equal value)
○ Data Splitting (the data must be divided to run on separate cores)
○ Data Dependency (the data must be examined for dependencies)
○ Testing & Debugging (it is more difficult to test/debug multithreaded tasks)
AMDAHL'S LAW
● Amdahl's Law is a formula that identifies potential performance gains from adding additional computing cores to an application that has both serial (nonparallel) and parallel components.
● If S is the portion of the application that must be performed serially on a system with N processing cores, the formula appears as follows:
○ Speedup ≤ 1 / (S + (1 - S)/N)
● Exercise: create a graph that shows how much performance is gained if the application consists of 36% serial instructions, for 1 to 32 cores.
● Also find the minimum number of cores for the maximum speed gain.
● Answer: about 3,428 cores give a speedup of 2.77634x, essentially the maximum gain, since the speedup can never exceed 1/S = 1/0.36 ≈ 2.778.
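● A short C++ sketch that reproduces these numbers from the formula (S = 0.36 is the serial fraction from the exercise):

#include <cstdio>

int main() {
    const double S = 0.36;                       // serial fraction
    for (int n = 1; n <= 32; n *= 2) {           // cores: 1, 2, 4, 8, 16, 32
        double speedup = 1.0 / (S + (1.0 - S) / n);
        printf("N = %2d  speedup = %.5f\n", n, speedup);
    }
    printf("limit as N grows: %.5f\n", 1.0 / S); // the speedup can never exceed 1/S
    return 0;
}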
Cont.
● There are two types of parallelism:
○ Data Parallelism (distributing the same data across multiple computing cores, or performing the same operation on different data)
○ Task Parallelism (distributing different threads across multiple cores, each performing a unique operation)
Thread Libraries
● A thread library provides the programmer with an API for creating and managing threads.
● There are two primary ways of implementing a thread library:
○ The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library exist in user space. This means that invoking a function in the library results in a local function call in user space and not a system call.
○ The second approach is to implement a kernel-level library supported directly by the operating system. In this case, code and data structures for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call to the kernel.
● Three main thread libraries are in use today:
○ POSIX Pthreads: the threads extension of the POSIX standard, may be provided as either a user-level or a kernel-level library.
○ Windows: the Windows thread library is a kernel-level library available on Windows systems.
○ Java: the Java thread API allows threads to be created and managed directly in Java programs.
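● A minimal Pthreads sketch (compile with -lpthread; the worker function is just illustrative):

#include <pthread.h>
#include <cstdio>

void* worker(void* arg) {
    int id = *(int*)arg;
    printf("hello from thread %d\n", id);
    return NULL;
}

int main() {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  // spawn a thread running worker()
    pthread_join(tid, NULL);                  // wait for it to finish
    return 0;
}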
Threading Issues
● Race conditions: A race condition occurs when two or more threads access a shared resource in an unpredictable order, leading to incorrect results or program crashes. This happens when one thread modifies the shared resource while another thread is reading or modifying it.
● Deadlocks: A deadlock occurs when two or more threads are waiting for each other to release a shared resource. As a result, none of the threads can make progress, and the program freezes.
● Starvation: Starvation occurs when one or more threads are prevented from accessing a shared resource indefinitely, usually due to higher-priority threads hogging the resource.
● Priority inversion: Priority inversion occurs when a high-priority thread is blocked by a low-priority thread that is holding a shared resource. This can cause the high-priority thread to wait longer than it should, leading to performance degradation.
CPU Scheduling
● CPU – I/O Burst Cycle
○ Process execution consists of a cycle of CPU execution and I/O wait.
● CPU Scheduler
○ The selection process is carried out by the CPU scheduler, which selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.
● Preemptive and Nonpreemptive Scheduling
○ Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
● Dispatcher
○ Gives control of the CPU's core to the process selected by the CPU scheduler
Cont.
● Preemptive and Nonpreemptive Scheduling: CPU-scheduling decisions may take place under the following four circumstances:
a. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process)
b. When a process switches from the running state to the ready state (for example, when an interrupt occurs)
c. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
d. When a process terminates
● For situations a and d, there is no choice in terms of scheduling. A new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for situations b and c.
● When scheduling takes place only under circumstances a and d, we say that the scheduling scheme is nonpreemptive or cooperative. Otherwise, it is preemptive.
● Virtually all modern operating systems, including Windows, macOS, Linux, and UNIX, use preemptive scheduling algorithms.
Scheduling Criteria
● CPU Utilization
○ Should range from 40% to 90%
● Throughput
○ Number of processes completed per time unit
● Turnaround Time
○ Interval between the time of submission and the time of completion
● Waiting Time
○ Sum of the periods spent waiting in the ready queue
● Response Time
○ Interval between the time of submission and the first response
Formulae
● Wait Time = Turnaround Time - Burst Time
● Turnaround Time = Finish Time - Arrival Time
● Finish Time (current process) = Finish Time (previous process) + Burst Time
Scheduling Algorithms
1. First Come First Served
2. Shortest Job First
3. Round Robin
4. Priority
5. Multilevel Queue
6. Multilevel Feedback Queue
First Come First Served
● FCFS is a simple and easy-to-implement scheduling algorithm.
● In FCFS, the CPU is allocated to the first process that arrives, and the process runs until it completes its execution or gets blocked.
● FCFS is a non-preemptive scheduling algorithm, which means that once a process starts executing, it cannot be preempted until it completes its execution or blocks.
● FCFS suffers from the convoy effect, where a long-running process can hold up the entire system, even if there are shorter processes waiting.
● FCFS is suitable for batch processing and low-traffic systems where the response time is not critical.
● The average waiting time for the FCFS algorithm can be long, especially if there are many long-running processes.
● FCFS does not take into account the priority of the processes, so high-priority processes may have to wait for a long time.
Cont.
PID: P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT:   0  5  7  3  6  9  2  4  1   5   3   6   5   4   9
BT:   2  6  5  8  4  1  9  3  2   5   4   7   9   6   8
Cont.
● Formulae for calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also calculate the average turnaround time and wait time:
○ Average turnaround time = 36 units
○ Average wait time = 30.73 units
#include <iostream>
#include <vector>
using namespace std;

struct Process {
    int pid;
    int arrival_time;
    int burst_time;
    int finish_time;
    int wait_time;
    int turnaround_time;
};

int main() {
    int n;
    float avg_waiting_time = 0.0, avg_turnaround_time = 0.0;
    cout << "Enter the number of processes: ";
    cin >> n;
    vector<Process> p(n);

    // Taking input for the arrival time and burst time of the processes
    for (int i = 0; i < n; i++) {
        cout << "Enter arrival time and burst time for process " << i + 1 << ": ";
        cin >> p[i].arrival_time >> p[i].burst_time;
        p[i].pid = i + 1;
    }

    // Sorting the processes according to their arrival time (bubble sort)
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (p[j].arrival_time > p[j + 1].arrival_time) {
                Process temp = p[j];
                p[j] = p[j + 1];
                p[j + 1] = temp;
            }
        }
    }

    // Calculating finish time, waiting time and turnaround time of the processes
    int finish_time = 0;
    for (int i = 0; i < n; i++) {
        if (p[i].arrival_time > finish_time)
            finish_time = p[i].arrival_time;  // CPU stays idle until the process arrives
        p[i].finish_time = finish_time + p[i].burst_time;
        p[i].turnaround_time = p[i].finish_time - p[i].arrival_time;
        p[i].wait_time = p[i].turnaround_time - p[i].burst_time;
        finish_time = p[i].finish_time;
        avg_waiting_time += p[i].wait_time;
        avg_turnaround_time += p[i].turnaround_time;
    }

    // Printing the results
    avg_waiting_time /= n;
    avg_turnaround_time /= n;
    cout << "PID\tArrival Time\tBurst Time\tFinish Time\tWaiting Time\tTurnaround Time\n";
    for (int i = 0; i < n; i++) {
        cout << p[i].pid << "\t" << p[i].arrival_time << "\t\t" << p[i].burst_time
             << "\t\t" << p[i].finish_time << "\t\t" << p[i].wait_time << "\t\t"
             << p[i].turnaround_time << endl;
    }
    cout << "Average waiting time: " << avg_waiting_time << endl;
    cout << "Average turnaround time: " << avg_turnaround_time << endl;
    return 0;
}
Shortest Job First
● SJF selects the process with the smallest burst time to be executed next, which leads to a shorter average waiting time and turnaround time compared to other scheduling algorithms.
● SJF can be either preemptive or non-preemptive. In preemptive SJF, a process with a smaller burst time can preempt a currently executing process with a longer remaining burst time. In non-preemptive SJF, a process with a shorter burst time has to wait until the currently executing process completes its execution.
● SJF can suffer from starvation, where a long-running process may have to wait for a long time if many shorter processes keep arriving.
● SJF is suitable for batch processing and high-traffic systems where the response time is critical.
● SJF requires knowledge of the burst time of all the processes in advance, which may not be feasible in real-time systems.
Cont.
PID: P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT:   0  5  7  3  6  9  2  4  1   5   3   6   5   4   9
BT:   2  6  5  8  4  1  9  3  2   5   4   7   9   6   8
Cont.
● Formulae for calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also calculate the average turnaround time and wait time:
○ Average turnaround time = 27.20 units
○ Average wait time = 21.93 units
Round Robin
● RR is a preemptive scheduling algorithm, which means that a process can be preempted after its time slice expires, and the CPU can be allocated to another process.
● In RR, each process is allocated a fixed time slice or quantum, which can range from a few milliseconds to several seconds, depending on the system configuration.
● After a process completes its time slice, it is preempted and placed at the end of the ready queue, and the next process in the queue is allocated the CPU.
● RR provides fairness in CPU allocation, as each process is given an equal chance to execute, regardless of its priority or burst time.
● RR can suffer from high context-switching overhead, especially if the time slice is too small or the number of processes in the queue is large.
● RR is suitable for interactive systems and systems with a mix of short and long-running processes.
● The time slice in RR should be chosen carefully to balance the trade-off between fairness and overhead.
Cont.
PID: P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT:   0  5  7  3  6  9  2  4  1   5   3   6   5   4   9
BT:   2  6  5  8  4  1  9  3  2   5   4   7   9   6   8
Cont.
● Formulae for calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also calculate the average turnaround time and wait time:
○ Average turnaround time = 47.33 units
○ Average wait time = 40.06 units
Priority
● Priority scheduling is a CPU scheduling algorithm that assigns a priority to each process and selects the process with the highest priority to execute first.
● The priority of a process is typically determined by its characteristics, such as its time-criticality, importance, and resource requirements.
● The process with the highest priority is allocated the CPU first; if two or more processes have the same priority, the scheduler may use other criteria, such as first-come-first-served (FCFS) or round-robin scheduling.
● Priority scheduling can be implemented in several ways, including preemptive and non-preemptive methods.
● In preemptive priority scheduling, the CPU can be taken away from a running process if a higher-priority process arrives.
● In non-preemptive priority scheduling, a running process keeps the CPU until it finishes or voluntarily gives up the CPU.
Cont.
PID: P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
AT:   0  5  7  3  6  9  2  4  1   5   3   6   5   4   9
BT:   2  6  5  8  4  1  9  3  2   5   4   7   9   6   8
Cont.
● Formulae for calculation:
○ Wait Time = Turnaround Time - Burst Time
○ Turnaround Time = Finish Time - Arrival Time
○ Finish Time (current) = Finish Time (previous) + Burst Time
● Also calculate the average turnaround time and wait time:
○ Preemptive
■ Average turnaround time = 34.2 units
■ Average wait time = 28.93 units
○ Non-preemptive
■ Average turnaround time = 35 units
■ Average wait time = 29.73 units
Multilevel Queue
● Multilevel queue scheduling is a CPU scheduling algorithm that divides processes into separate queues, each with its own scheduling algorithm.
● Each queue is typically assigned a different priority level based on the type of process or its priority. For example, one queue may be dedicated to time-critical processes, while another may be for background processes.
● The multilevel queue scheduling algorithm uses a combination of scheduling techniques, such as FCFS, round-robin, and priority scheduling, to schedule processes in each queue.
● The scheduling algorithm can be either preemptive or non-preemptive, depending on the requirements of the system.
Multilevel Feedback Queue
● In the MLFQ algorithm, each process is initially assigned to the highest-priority queue, and the CPU is allocated to the process in that queue.
● If a process uses up its allocated time slice in a given queue, it is moved to a lower-priority queue.
● If a process continues to use up its time slice in a lower-priority queue, it is moved down to an even lower-priority queue.
● This process of moving a process down the priority levels is called demotion.
● If a process releases the CPU before its time slice is used up, it can move up to a higher-priority queue. This process of moving a process up the priority levels is called promotion.
● The purpose of the feedback mechanism is to let processes that consume a lot of CPU time sink to lower-priority queues, while processes that use less CPU time rise to higher-priority queues.
Thread Scheduling
● Thread scheduling is the process of assigning CPU time to the different threads of a process in a multithreaded environment.
● Round Robin: Each thread is allocated a fixed time slice or quantum of CPU time, after which it is preempted and replaced by the next thread in the queue.
● Priority-based scheduling: Threads are allocated CPU time based on their priority, with higher-priority threads getting more CPU time than lower-priority threads.
● Fair-share scheduling: CPU time is allocated to threads based on a predefined allocation scheme, such as the number of threads in a group or the amount of memory used by a thread.
● Thread-specific scheduling: The operating system can use different scheduling algorithms for different threads of a process, based on their requirements.
Multiprocessor Scheduling
● Multiprocessor scheduling is the process of allocating tasks to multiple processors or cores in a parallel computing environment.
● Load balancing: Tasks are assigned to processors or cores based on their current load or utilization.
● Task decomposition: A large task is divided into smaller subtasks that can be executed in parallel.
● Gang scheduling: A group of related tasks is scheduled to execute simultaneously on different processors or cores.
● Priority scheduling: Tasks are assigned priorities based on their importance or criticality, and the highest-priority task is allocated CPU time first.
● Round-robin scheduling: Each processor or core is allocated a fixed time slice or quantum of CPU time, and tasks are assigned to processors or cores in a rotating fashion.
The Critical Section Problem
● The problem occurs when multiple threads or processes attempt to access a shared resource or a critical section of code that must not be executed concurrently by more than one thread or process.
● The critical section refers to a portion of the code that accesses shared data, resources, or variables.
● The goal is to ensure that only one thread or process at a time executes the critical section, to avoid race conditions, inconsistencies, and data corruption.
● The critical-section problem is an essential concept in concurrent programming and plays a crucial role in ensuring correct and reliable operation of multithreaded and multiprocess programs.
Cont.
● A solution to the critical-section problem must satisfy the following three requirements:
○ Mutual exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
○ Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
○ Bounded waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Semaphores
● Semaphores are a type of synchronization mechanism used in multithreaded or multiprocess programs to control access to shared resources or critical sections of code.
● A semaphore is a variable that can be accessed by multiple threads or processes and used to coordinate their access to shared resources.
● Semaphores are of two types:
○ Binary Semaphores: Binary semaphores can take only two values, 0 or 1, and are used to indicate the availability of a shared resource. A thread or process acquires the semaphore by changing its value from 1 to 0, and releases it by setting the value back to 1.
○ Counting Semaphores: Counting semaphores can take any non-negative integer value and are used to control the number of threads or processes that can access a shared resource. A thread or process acquires the semaphore by decrementing its value, and releases it by incrementing its value.
Cont.
● Working of a semaphore (see the sketch below):
○ A semaphore is initialized to a certain value, depending on the number of threads or processes that can access the shared resource at once.
○ When a thread or process wants to access the shared resource, it tries to acquire the semaphore by decrementing its value.
○ If the resulting value is greater than or equal to zero, the thread or process can proceed to access the shared resource.
○ If the resulting value is less than zero, the thread or process is blocked, and its request is added to a queue of waiting threads or processes.
○ When a thread or process releases the semaphore by incrementing its value, the next waiting thread or process in the queue is unblocked and allowed to access the shared resource.
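● A minimal counting-semaphore sketch using C++20's std::counting_semaphore (the limit of 3 concurrent users and the sleep are made up for illustration):

#include <chrono>
#include <cstdio>
#include <semaphore>   // C++20
#include <thread>
#include <vector>

std::counting_semaphore<3> slots(3);    // at most 3 threads in the region at once

void useResource(int id) {
    slots.acquire();                    // "wait": take a slot, block if none are left
    printf("thread %d is using the resource\n", id);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    slots.release();                    // "signal": return the slot, wake a waiter
}

int main() {
    std::vector<std::thread> ts;
    for (int i = 0; i < 8; i++) ts.emplace_back(useResource, i);
    for (auto& t : ts) t.join();
    return 0;
}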
Mutex Locks
● Mutex locks, or simply mutexes, are a type of synchronization mechanism used in multithreaded programs to prevent concurrent access to shared resources or critical sections of code.
● A mutex is a binary semaphore that can be locked or unlocked by threads to synchronize access to a shared resource.
● Working of a mutex:
○ A thread that needs to access a shared resource tries to acquire the mutex lock. If the lock is available (unlocked), the thread acquires the lock and proceeds to execute the critical section of code.
○ If another thread tries to acquire the same lock, it is blocked until the first thread releases the lock by unlocking it.
○ Once the first thread completes its execution of the critical section, it unlocks the mutex, allowing other threads to acquire it and access the shared resource.
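● A minimal mutex sketch using std::mutex (the shared counter is just an illustrative resource):

#include <cstdio>
#include <mutex>
#include <thread>

std::mutex m;
long counter = 0;

void increment(int times) {
    for (int i = 0; i < times; i++) {
        std::lock_guard<std::mutex> lock(m); // acquire; released at end of scope
        ++counter;                           // the critical section
    }
}

int main() {
    std::thread t1(increment, 100000), t2(increment, 100000);
    t1.join(); t2.join();
    printf("counter = %ld\n", counter);      // always 200000 with the lock in place
    return 0;
}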
Monitor
● A monitor is a high-level synchronization mechanism used in concurrent programming languages to provide a structured way of controlling access to shared resources or critical sections of code.
● A monitor is implemented as an abstract data type that encapsulates shared resources and provides methods or procedures for accessing and modifying them.
● Features:
○ Mutual Exclusion: A monitor ensures that only one thread can execute a critical section of code at a time, preventing race conditions and ensuring data consistency.
○ Condition Variables: A monitor provides condition variables, which allow threads to wait for certain conditions to be met before proceeding with their execution. Condition variables can be used to implement producer-consumer models or other synchronization patterns.
○ Data Abstraction: A monitor encapsulates shared resources and exposes only the methods or procedures for accessing and modifying them.
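● A minimal producer-consumer sketch built from a mutex and a condition variable, the same ingredients a monitor encapsulates (the buffer capacity and item count are made up for illustration):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> buffer;
const size_t CAPACITY = 4;

void producer() {
    for (int i = 0; i < 10; i++) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return buffer.size() < CAPACITY; }); // wait for space
        buffer.push(i);
        cv.notify_all();                                        // wake the consumer
    }
}

void consumer() {
    for (int i = 0; i < 10; i++) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !buffer.empty(); });          // wait for an item
        printf("consumed %d\n", buffer.front());
        buffer.pop();
        cv.notify_all();                                        // wake the producer
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join(); c.join();
    return 0;
}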
Spinlocks
● When a thread tries to acquire a spinlock and finds that the lock is already held by another thread, it spins in a loop, repeatedly checking whether the lock has become available.
● The thread continues to spin until the lock is released by the thread currently holding it.
● Once the lock is released, the spinning thread acquires the lock and continues its execution.
● Spinlocks are generally used for short-duration, non-blocking operations, and they are well suited to situations where the time spent waiting for the lock is expected to be short.
● Spinlocks are lightweight and efficient because they avoid the overhead of blocking the thread, which can save time and improve performance.
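● A minimal spinlock sketch built on std::atomic_flag (the shared counter is just an illustrative resource):

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic_flag flag = ATOMIC_FLAG_INIT;
int shared = 0;

void work() {
    for (int i = 0; i < 100000; i++) {
        while (flag.test_and_set(std::memory_order_acquire)) {
            // spin: keep retrying until the flag is cleared by the holder
        }
        ++shared;                                  // the critical section
        flag.clear(std::memory_order_release);     // release the lock
    }
}

int main() {
    std::thread t1(work), t2(work);
    t1.join(); t2.join();
    printf("shared = %d\n", shared);               // always 200000
    return 0;
}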
Liveness
● Liveness refers to a property of a concurrent system that guarantees that certain events will eventually occur.
● A concurrent system is considered live if it is always able to make progress and respond to events in a timely manner.
● Liveness is often contrasted with safety, which refers to the property of a system that guarantees that certain events will never occur.
● In other words, a system is considered safe if it is free from errors or violations of critical properties.
● Liveness properties are important in concurrent systems because they ensure that the system is able to make progress even in the presence of delays, failures, or other unexpected events.
Cont.
● Properties of liveness:
○ Termination: A system satisfies the termination property if every process eventually terminates and does not get stuck in an infinite loop.
○ Progress: A system satisfies the progress property if it is always able to make progress towards a desired outcome or goal, even in the presence of delays or failures.
○ Livelock freedom: A system satisfies the livelock-freedom property if it does not get stuck in a state where all processes are active but no progress is being made.
○ Deadlock freedom: A system satisfies the deadlock-freedom property if it does not get stuck in a state where multiple processes are waiting for each other to release resources.
Two-Phase Locking
● Two-phase locking (2PL) is a concurrency control mechanism used in operating systems to ensure mutual exclusion and prevent conflicts between processes accessing shared resources.
● It involves two phases: the growing phase and the shrinking phase.
○ In the growing phase, a process acquires locks on all the resources it needs to complete its operations, and is not allowed to release any locks until it has acquired all the locks it needs.
○ In the shrinking phase, the process releases the locks it has acquired, typically in reverse order, and may not acquire any new locks; releasing locks in a consistent order keeps the protocol well behaved.
The Readers-Writers Problem
● The Readers-Writers problem is a classic synchronization problem in computer science, which arises when multiple processes or threads need to access a shared resource, such as a file, a database, or a piece of memory.
● In this problem, there are two types of processes:
○ Readers
○ Writers
● Readers only read the shared resource, while writers modify it.
● The goal is to design a solution that allows multiple readers to access the resource simultaneously, but only one writer can access it at a time.
Cont.
● Possible solutions:
○ Readers-preference solution: Multiple readers are allowed to access the shared resource simultaneously, but a writer can only access the resource when no readers are accessing it. This solution prioritizes readers over writers.
○ Writer-preference solution: A writer is given priority over readers: once a writer is waiting, newly arriving readers are held back, and the writer only has to wait until the readers already in progress have finished before modifying the resource.
○ Fairness solution: The system tries to be fair to both readers and writers by alternating access to the resource, so that neither readers nor writers are starved.
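● A minimal sketch using C++17's std::shared_mutex, which lets many readers hold the lock at once while writers get exclusive access (the standard leaves the reader/writer priority policy to the implementation):

#include <cstdio>
#include <shared_mutex>
#include <thread>

std::shared_mutex rw;
int data = 0;

void reader(int id) {
    std::shared_lock<std::shared_mutex> lock(rw); // many readers may hold this together
    printf("reader %d sees %d\n", id, data);
}

void writer(int value) {
    std::unique_lock<std::shared_mutex> lock(rw); // a writer waits for exclusive access
    data = value;
}

int main() {
    std::thread w(writer, 42), r1(reader, 1), r2(reader, 2);
    w.join(); r1.join(); r2.join();
    return 0;
}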
The Dining-Philosophers Problem
● The Dining-Philosophers problem is another classic synchronization problem in computer science, which involves a set of philosophers who share a circular table and alternate between thinking and eating.
● Each philosopher has a bowl of rice, with a chopstick on either side of the bowl.
● There are only a limited number of chopsticks available, and each philosopher needs two chopsticks to eat.
● The problem is to design a solution that allows the philosophers to eat without creating a deadlock, where all philosophers are waiting for chopsticks to become available.
Cont.
● Possible solutions (a sketch of the first follows below):
○ Resource hierarchy: Assign a unique number to each chopstick and require the philosophers to always pick up the chopstick with the lower number first. This ensures that a circular wait can never form, so no deadlock can occur.
○ Arbitrator solution: Introduce an arbitrator who is responsible for allocating the chopsticks to the philosophers. The arbitrator ensures that no two philosophers are eating with the same chopstick at the same time, thus avoiding deadlocks.
○ Chandy/Misra solution: Introduce a "request" and "permission" system, where a philosopher can only pick up the chopsticks after receiving permission from both the philosopher on their left and the philosopher on their right. This ensures that no two philosophers pick up the same chopstick at the same time.
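● A minimal sketch of the resource-hierarchy solution: every philosopher locks the lower-numbered chopstick first, so no circular wait can form (five philosophers and three rounds of eating are made up for illustration):

#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

const int N = 5;
std::mutex chopstick[N];

void philosopher(int i) {
    int left = i, right = (i + 1) % N;
    int first = std::min(left, right);    // always lock the lower-numbered
    int second = std::max(left, right);   // chopstick first (resource hierarchy)
    for (int round = 0; round < 3; round++) {
        std::lock_guard<std::mutex> a(chopstick[first]);
        std::lock_guard<std::mutex> b(chopstick[second]);
        printf("philosopher %d is eating\n", i);
    }
}

int main() {
    std::vector<std::thread> ts;
    for (int i = 0; i < N; i++) ts.emplace_back(philosopher, i);
    for (auto& t : ts) t.join();
    return 0;
}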
Synchronization within the Kernel
● In Windows, the kernel provides various synchronization mechanisms such as mutexes, semaphores, spin locks, and critical sections to ensure synchronization among threads and processes.
○ Windows also provides a dispatcher object that manages the execution of threads and processes, ensuring that only one thread or process is executing in the kernel at a time.
○ This mechanism is known as kernel-mode scheduling.
● In Linux, synchronization within the kernel is primarily achieved through spinlocks and semaphores.
○ Spinlocks are used to protect data structures and prevent data races, while semaphores are used to block or unblock threads based on the availability of resources.
○ Additionally, Linux uses kernel preemption to allow preemptive multitasking within the kernel, allowing higher-priority tasks to preempt lower-priority ones.
Resources
● Resources refer to any component or entity that a computer system requires to perform a task or complete a process.
● They can be physical or virtual, and can be divided into several categories, such as hardware, software, network, data, and human resources.
● Hardware resources include physical components such as CPUs, memory, hard drives, and input/output devices.
● Software resources include the programs and applications that run on a computer.
● Network resources include routers, switches, modems, and cables.
● Data resources include the information and data files stored on a computer system.
● Human resources refer to the people who use and interact with computer systems.
Cont.
● A resource can only be used in the following sequence:
a. Request: the thread requests the resource. If the request cannot be granted immediately (for example, if a mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the resource.
b. Use: the thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access its critical section).
c. Release: the thread releases the resource.
● Examples of request and release:
a. request() and release() of a device
b. open() and close() of a file
c. allocate() and free() memory system calls
d. wait() and signal() operations on semaphores
e. acquire() and release() of a mutex lock
Deadlock
● A deadlock is a situation where two or more processes are blocked, each waiting for the other to release resources that it is holding, preventing them from making progress.
● Deadlocks typically occur in systems where processes are competing for a finite set of resources, such as shared memory, file access, or network connections.
● To prevent deadlocks, operating systems use various techniques such as resource allocation algorithms, process scheduling algorithms, and deadlock detection and recovery algorithms.
● These techniques aim to ensure that resources are allocated fairly and efficiently, and that deadlocks are avoided or resolved in a timely manner.
Deadlock in Multithreaded Applications
● Deadlocks can occur in multithreaded applications where multiple threads are competing for shared resources.
● For example (see the sketch below):
○ Two threads, T1 and T2, each need two resources, R1 and R2, to complete their tasks.
○ T1 acquires R1 and T2 acquires R2.
○ T1 then requests R2 while T2 requests R1.
○ A deadlock occurs, as each thread is waiting for the other to release the resource it needs.
○ Both threads are blocked and unable to proceed.
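● A sketch of this scenario in C++. Locking r1 then r2 in one thread and r2 then r1 in the other can deadlock; here std::scoped_lock (C++17) acquires both locks atomically, so the program always completes:

#include <cstdio>
#include <mutex>
#include <thread>

std::mutex r1, r2;   // the two resources R1 and R2

void threadT1() {
    // a deadlock-prone version would lock r1, then r2
    std::scoped_lock lock(r1, r2);   // acquires both without risk of a cycle
    printf("T1 holds R1 and R2\n");
}

void threadT2() {
    // a deadlock-prone version would lock r2, then r1
    std::scoped_lock lock(r2, r1);   // argument order doesn't matter here
    printf("T2 holds R1 and R2\n");
}

int main() {
    std::thread a(threadT1), b(threadT2);
    a.join(); b.join();
    return 0;
}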
Deadlock Characterization
● The following four conditions are necessary for a deadlock to occur:
○ Mutual exclusion: At least one resource must be held in a non-sharable mode, meaning only one process can access the resource at a time.
○ Hold and wait: A process must be holding at least one resource and waiting for another resource that is currently being held by another process.
○ No preemption: Resources cannot be preempted or forcibly taken away from a process that is holding them.
○ Circular wait: A circular chain of two or more processes exists, where each process is waiting for a resource that is held by the next process in the chain.
Resource Allocation Graph
● A Resource Allocation Graph (RAG) is a visual representation of the allocation of resources in a system that helps in identifying deadlocks.
● It is commonly used in operating systems to manage resources.
● In a RAG, resources are represented by rectangular nodes and processes are represented by circular nodes.
● An arrow from a process to a resource represents a request, and an arrow from a resource to a process represents an allocation.
● A cycle in the graph indicates a potential deadlock; the graph can be analyzed to identify cycles and take appropriate actions to break them and prevent a deadlock.
Cont.
● Draw the RAG for the following edges:
○ R7 -> P5, P4 -> R5
○ R7 -> P3, R6 -> P4
○ P5 -> R1, R6 -> P6
○ R1 -> P1, R0 -> P6
○ P3 -> R5, R5 -> P5
○ R5 -> P1, P1 -> R6
○ R4 -> P3, R6 -> P3
● Identify any deadlock
Methods for Handling Deadlocks
● There are three ways a deadlock can be handled:
○ We can ignore the problem altogether and pretend that deadlocks never occur in the system.
○ We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state.
○ We can allow the system to enter a deadlocked state, detect it, and recover.
● These give four methods for handling deadlocks:
○ Prevention
○ Avoidance
○ Detection and Recovery
○ Ignorance
Deadlock Prevention
● To prevent a deadlock, at least one of the four necessary conditions must be denied:
○ Mutual exclusion: this condition generally cannot be denied, because some resources are intrinsically non-sharable; it must hold for at least one resource.
○ To ensure that the hold-and-wait condition never occurs in the system, a protocol can require each thread to request and be allocated all its resources before it begins execution.
○ The third necessary condition for deadlocks is that there be no preemption of resources that have already been allocated. To deny it, a protocol preempts all resources a thread is currently holding when it requests a resource that cannot be granted immediately, and adds them to the list of resources for which the thread is waiting. The thread is restarted when it can regain its old resources as well as the new ones it is requesting.
○ One way to ensure that circular wait never holds is to impose a total ordering of all resource types and to require that each thread requests resources in an increasing order of enumeration.
Deadlock Avoidance
● Deadlock avoidance is a technique used to prevent deadlocks from occurring by dynamically assessing the safety of each resource request before granting it.
● It requires the system to have prior knowledge of the maximum resources needed, which can be difficult to obtain in some cases.
● However, it is a useful technique for handling deadlocks in systems where deadlock prevention is not feasible or practical.
● The most popular algorithm for deadlock avoidance is the Banker's algorithm, which uses the system's allocation state to determine whether a resource request can safely be granted or must be denied.
Banker's Algorithm
● The Banker's algorithm considers the following inputs:
○ The total number of resources of each type in the system.
○ The number of resources of each type that are currently available.
○ The maximum demand of each process, which is the maximum number of resources of each type that a process may need.
○ The number of resources of each type currently allocated to each process.
● To determine if a request for resources can be granted, the Banker's algorithm uses the following steps:
○ The process makes a request for a certain number of resources.
○ The system checks if the request can be granted by verifying that the number of available resources is greater than or equal to the number of resources requested by the process.
○ The system temporarily allocates the requested resources to the process.
○ The system checks if the resulting state is safe by simulating the allocation of resources to all processes. If the system can allocate resources to all processes and avoid deadlock, then the request is granted. Otherwise, the request is denied, and the system returns to its previous state.
Cont.
● Types of data structures used:
○ Available (1-D array: the number of available resources of each type)
○ Work (1-D array: a working copy of Available used by the safety algorithm)
○ Max (2-D array: the maximum resources each process may request)
○ Allocation (2-D array: the resources currently assigned to each process)
○ Need (2-D array: the remaining resources required by each process, Max - Allocation)
● The Banker's algorithm comprises two algorithms:
○ Safety algorithm
○ Resource-request algorithm
Cont.
● The Safety algorithm proceeds as follows:
a. Set the Work array equal to the available resources of each type, and set Finish[i] to false for every process.
b. Search for a process i such that Finish[i] is false and Need[i] is less than or equal to the Work array. If such a process exists, add Allocation[i] to the Work array, set Finish[i] to true, and repeat step b. If no such process exists, proceed to step c.
c. If all processes can complete their execution (i.e., all values in the Finish array are true), then the system is in a safe state. Otherwise, the system is in an unsafe state.
d. If the system is in an unsafe state, the Banker's algorithm denies the resource request, and the system returns to its previous state.
Cont.
● The resource-request algorithm proceeds as follows:
○ If the request for resources from process P is greater than its Need, deny the request.
○ If the request for resources from process P is greater than the Available resources, deny the request.
○ Temporarily allocate the requested resources to process P.
○ Use the Safety algorithm to determine if the system is in a safe state after the allocation. If the system is in a safe state, grant the request, and update the Available, Allocation, and Need data structures accordingly. If the system is not in a safe state, deny the request, and restore the previous state.
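● A minimal C++ sketch of the Safety algorithm (the 3-process, 2-resource instance in main() is a made-up example, not the exercise from these slides):

#include <cstdio>
#include <vector>

// Returns true if the system is in a safe state, printing one safe sequence.
bool isSafe(std::vector<int> work,                        // starts as a copy of Available
            const std::vector<std::vector<int>>& alloc,   // Allocation matrix
            const std::vector<std::vector<int>>& need) {  // Need = Max - Allocation
    int n = (int)alloc.size(), m = (int)work.size();
    std::vector<bool> finish(n, false);
    int done = 0;
    while (done < n) {
        bool progress = false;
        for (int i = 0; i < n; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < m; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                       // pretend process i runs to completion
                for (int j = 0; j < m; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
                done++;
                printf("P%d can finish\n", i);
            }
        }
        if (!progress) return false;          // no process can proceed: unsafe
    }
    return true;
}

int main() {
    std::vector<int> available = {3, 3};
    std::vector<std::vector<int>> alloc = {{0, 1}, {2, 0}, {3, 0}};
    std::vector<std::vector<int>> need  = {{5, 3}, {1, 2}, {4, 0}};
    printf("%s\n", isSafe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}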
(The Allocation, Max, and Available table for the following exercise was given as a figure on the original slide and is not preserved here.)
Cont.
● Find the following:
○ How many resources of types A, B, C, and D are there?
○ What are the contents of the Need matrix?
○ Is the system in a safe state? If it is, find the safe sequence.
● The number of resources is already given in the question. (If not, find the sum of the Allocation and Available values.)
● Need matrix:
○     A B C D
○ P0: 0 1 0 0
○ P1: 0 4 2 1
○ P2: 1 0 0 1
○ P3: 0 0 2 0
○ P4: 0 6 4 2
● Safe sequence:
○ P0, P3, P4, P1, P2
Resource Trajectory
● Number of processes: 2 (P0, P1)
● Threads created by each process:
○ P0 -> t1, t2, t3, t4
○ P1 -> ta, tb, tc, td
○ t1 (request R1), t2 (request R2), t3 (release R1), t4 (release R2), ta (request R2), tb (request R1), tc (release R2), td (release R1)
● Exercise: define the safe and unsafe states using a resource trajectory
● A state of the system is called safe if the system can allocate all the resources requested by all the processes without entering a deadlock.
● If the system cannot fulfill the requests of all processes, then the state of the system is called unsafe.
(A series of resource-trajectory diagrams for this example followed here.)
Deadlock Detection
● Deadlock detection is a technique used in computer systems to identify situations where multiple processes are waiting for each other to release resources that they need in order to proceed.
● There are several algorithms for detecting deadlocks in computer systems, including the Banker's algorithm, the wait-for graph algorithm, and the resource allocation graph algorithm.
● The Banker's algorithm is a resource allocation and deadlock avoidance algorithm that ensures the system will be in a safe state before allocating resources to a process.
● The wait-for graph algorithm uses a directed graph to represent the wait-for relationships between processes.
● The resource allocation graph algorithm uses a directed graph to represent the allocation of resources to processes.
● Once a deadlock has been detected, various techniques can be used to resolve it, such as resource preemption, process termination, or rollback and reallocation.
  • 121. Recovery from Deadlock ● There are two options for breaking a deadlock. ○ Process and Thread Termination ■ Abort all deadlocked processes ■ Abort one process at a time until the deadlock cycle is eliminated ○ Resource Preemption ■ Victim selection ■ Rollback of the process to a safe state for restart ■ Starvation: if the same process is repeatedly selected as the victim, it may never obtain its required resources and can starve
  • 122. Communication Deadlocks ● Communication deadlocks occur when two or more processes are waiting for each other to send or receive data, resulting in a deadlock. ● This can occur in a distributed system when two or more processes are waiting for a message from each other, but none of them can proceed until they receive the message. ● In a shared memory system, a communication deadlock can occur when two or more processes are waiting to acquire a lock on a shared resource. ● To prevent communication deadlocks, several techniques can be used, such as avoiding circular dependencies between processes, using timeouts to prevent processes from waiting indefinitely for a message or lock, using a deadlock detection algorithm to identify and resolve deadlocks, and implementing a protocol that ensures that processes acquire locks in a consistent order.
  • 123. Address Binding ● Address binding is the process of mapping a logical or symbolic address used by a program to a physical address in computer memory. ● There are two types of address bindings: compile-time binding and run-time binding. ○ Compile-time binding involves assigning physical memory addresses to program variables and instructions at the time of compilation, ○ while run-time binding involves assigning physical memory addresses to program variables and instructions at run-time. ○ Dynamic address binding is a type of run-time binding that allows programs to use shared libraries without having to know the physical addresses of the library code in memory. The MMU will map the logical addresses used by the program to the physical addresses of the shared library code.
  • 124. Logical Versus Physical Address Space ● The logical address space is the set of all addresses used by a program or process, while the physical address space is the set of all addresses used by the hardware of the computer system. ● The logical address space is used by the program or process to address memory locations, while the physical address space is managed by the hardware and divided into smaller units called pages or frames. ● The translation from logical addresses to physical addresses is performed by the memory management unit (MMU) of the computer system, which uses a mapping table to translate the logical address used by a program to a physical address in memory. ● The use of logical address space provides several advantages, such as simplifying the process of programming, allowing for the efficient use of physical memory, and providing a mechanism for memory protection.
  • 125. Contiguous Memory Allocation ● Contiguous memory allocation is a memory management technique used by operating systems to allocate memory to processes. ● Each process is allocated a single contiguous block of memory, large enough to hold its entire address space. ● Advantages of contiguous memory allocation include being easy to implement and efficient in terms of memory usage, but it can lead to fragmentation of memory. ● To overcome this issue, some operating systems use memory compaction techniques to defragment the memory, but this can be a time-consuming process and can affect the performance of the system.
  • 126. Memory Protection ● There are two ways to protect memory: ○ Software-based ■ The operating system checks whether a process has permission to access the requested memory; only if it does is the access allowed, otherwise the process is flagged and terminated ○ Hardware-based ■ Two registers ensure that a process uses only its allocated memory: the limit register and the relocation register ■ The limit register holds the size of the process's logical address space (its upper bound) ■ The relocation register provides the starting point of the process in physical memory
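To make the hardware-based scheme concrete, here is a toy Python sketch of the relocation/limit check; the register values are made-up, and a real MMU performs this comparison in hardware on every memory access.

    RELOCATION = 14000    # relocation register: where the process starts in physical memory
    LIMIT = 3000          # limit register: size of the process's logical address space

    def translate(logical_address):
        # every logical address must fall below the limit; otherwise the
        # hardware traps to the OS and the process is terminated
        if not (0 <= logical_address < LIMIT):
            raise MemoryError("addressing error: trap to operating system")
        return RELOCATION + logical_address       # physical address

    print(translate(100))     # -> 14100
    # translate(4000) would raise: it lies outside the allocated region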
  • 127. Dynamic Storage Allocation ● Dynamic storage allocation is a technique used by computer programs to allocate memory dynamically during runtime. ● It can improve the efficiency of memory usage and make programs more flexible and adaptable. ● There are several techniques used for dynamic storage allocation, such as heap-based allocation, stack-based allocation, and garbage collection. ● Heap-based allocation involves allocating memory from a pool of free memory known as the heap, while stack-based allocation involves allocating memory from a stack. ● Garbage collection involves automatically deallocating memory that is no longer being used by a program.
  • 128. Cont. ● Dynamic storage allocation can be divided into three common strategies: Best Fit, First Fit, and Worst Fit. ○ Best Fit: the allocator finds the smallest free block of memory that can accommodate the requested memory size ○ First Fit: the allocator finds the first free block of memory that can accommodate the requested memory size ○ Worst Fit: the allocator finds the largest free block of memory that can accommodate the requested memory size ● Each strategy has its advantages and disadvantages, and the choice of strategy depends on the specific requirements of the program and the characteristics of the memory being allocated. ● First Fit is the simplest and most commonly used strategy because it strikes a balance between speed and fragmentation.
  • 129. Cont. ● A DMA controller needs to transfer a block of data of size 400 bytes to main memory. Suppose the DMA controller has access to the following free memory blocks in main memory: ○ Block A: Starting address 1000, size 500 bytes ○ Block B: Starting address 2000, size 1500 bytes ○ Block C: Starting address 4000, size 1000 bytes ● Find Best Fit ● Find First Fit ● Find Worst Fit
  • 130. Cont. ● Best Fit ○ Block A ● First Fit ○ Block A ● Worst Fit ○ Block B
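The three strategies can be sketched in a few lines of Python; the block table mirrors the example above, and the 400-byte request size is the assumption under which the listed answers hold.

    # name: (starting address, size in bytes)
    blocks = {"A": (1000, 500), "B": (2000, 1500), "C": (4000, 1000)}

    def first_fit(blocks, request):
        for name, (_, size) in blocks.items():    # dicts preserve insertion order
            if size >= request:
                return name                       # first block big enough
        return None

    def best_fit(blocks, request):
        fits = [(size, name) for name, (_, size) in blocks.items() if size >= request]
        return min(fits)[1] if fits else None     # smallest block that fits

    def worst_fit(blocks, request):
        fits = [(size, name) for name, (_, size) in blocks.items() if size >= request]
        return max(fits)[1] if fits else None     # largest block that fits

    print(best_fit(blocks, 400), first_fit(blocks, 400), worst_fit(blocks, 400))
    # -> A A B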
  • 131. Segmentation ● Segmentation is a memory management technique used in operating systems to organize and manage memory as logical segments or regions. ● Segments can be shared between multiple processes, allowing for more efficient use of memory. ● Segmentation provides several advantages over other memory management techniques, such as dynamic memory allocation, protection from unauthorized access, and sharing of memory between multiple processes. ● However, segmentation also has some disadvantages, such as fragmentation, which can lead to inefficient use of memory and reduced system performance.
  • 132. Fragmentation ● Fragmentation in main memory refers to the phenomenon where the available memory becomes fragmented into small free blocks of memory, making it difficult to allocate a large contiguous block of memory to a program. ● There are two types of fragmentation that can occur in main memory: ○ external fragmentation, which occurs when there are many small free blocks of memory scattered throughout the memory space, and ○ internal fragmentation, which occurs when the allocated memory block is larger than the required memory block and the unused portion of the block is wasted. ● To reduce the impact of fragmentation, various techniques can be employed, such as compaction, virtual memory, and memory allocation algorithms
  • 133. Cont. ● Compaction is a technique used to eliminate external fragmentation by relocating allocated blocks of memory to form a larger block of free memory, ● while virtual memory is a technique used to overcome external fragmentation by allowing programs to access memory that is not physically available in main memory. ● Memory allocation algorithms, such as Best Fit, First Fit, and Worst Fit, can be used to minimize the amount of fragmentation that occurs during dynamic memory allocation and deallocation.
  • 134. Paging ● Paging is a technique that allows the operating system to allocate memory (logical) to a process in small fixed-size chunks called "pages". ● Each page is a contiguous block of logical memory (the corresponding fixed-size block of physical memory is called a frame) that can be swapped between the RAM and the hard disk independently of other pages. ● This allows the operating system to efficiently manage memory and swap out pages that are not currently being used to free up space in the RAM for other processes. ● Each CPU-generated address contains a page number and a page offset. ● When a program references a logical address, the paging system translates it to a physical address by looking up the corresponding entry in the page table. ● If the page is not currently in the RAM, a page fault occurs, and the operating system retrieves the page from the hard disk and loads it into a free frame in the RAM. ● The page table is then updated to reflect the new location of the page.
  • 135. Translation Lookaside Buffer ● A Translation Lookaside Buffer (TLB) is a hardware cache used in modern computer systems to improve memory access speeds. ● It stores recently accessed virtual-to-physical address translations, allowing the CPU to avoid repeatedly accessing the page table in main memory to resolve address translations. ● When a program accesses memory, the MMU first checks the TLB to see if the translation is already stored there. ● If the translation is found, the MMU can immediately use the cached physical address to access the memory, without accessing the page table in main memory. ● If the translation is not found in the TLB, the MMU must access the page table in main memory to resolve the translation. ● The size of the TLB is limited by hardware constraints, and larger TLBs generally result in better performance due to fewer page table accesses.
  • 136. Effective Memory-Access Time ● The effective memory-access time (EMAT) is the average time it takes to access a memory location in a computer system, taking into account the time it takes to access the cache and main memory. To find the EMAT, you need to know the cache hit rate, the cache access time, the main memory access time, and the block transfer time. ● Here is the formula to calculate the EMAT: ● EMAT = (cache hit rate x cache access time) + (1 - cache hit rate) x (main memory access time + block transfer time)
  • 137. Cont. ● To use this formula, you need to know the following terms: ● Cache hit rate: The percentage of memory access requests that are found in the cache. It can be calculated as the number of cache hits divided by the total number of memory access requests. ● Cache access time: The time it takes to access the cache, including the time it takes to check if the memory location is in the cache. ● Main memory access time: The time it takes to access main memory if the memory location is not in the cache. ● Block transfer time: The time it takes to transfer a block of data between the cache and main memory. This is typically the time it takes to transfer an entire cache line or block of data.
  • 138. Cont. ● For example, suppose a computer system has a cache hit rate of 90%, a cache access time of 1 ns, a main memory access time of 100 ns, and a block transfer time of 10 ns. ● The EMAT can be calculated as follows: ○ cache hit rate = 90% ○ cache access time = 1 ns ○ main memory access time = 100 ns ○ block transfer time = 10 ns ○ EMAT = 0.9 x 1 ns + (1 - 0.9) x (100 ns + 10 ns) = 11.9 ns
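The same calculation, written out as a small Python function so the numbers above can be reproduced (all times in nanoseconds):

    def emat(hit_rate, cache_ns, memory_ns, transfer_ns):
        # average access time: hits are served from cache; misses also pay
        # the main-memory access plus the block transfer
        return hit_rate * cache_ns + (1 - hit_rate) * (memory_ns + transfer_ns)

    print(emat(0.90, 1, 100, 10))    # -> 11.9 (ns)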
  • 139. Page Table ● A page table is a data structure used by a virtual memory system to keep track of the mapping between virtual addresses and physical addresses. ● It is typically stored in main memory and is accessed on every memory access, so its design can have a significant impact on system performance. ● The page table is organized as a table of entries, where each entry corresponds to a page of virtual memory. ● It contains information about the physical page frame that is currently mapped to the virtual page, as well as other control bits and metadata. ● The size and structure of the page table can vary depending on the virtual memory architecture of the system, such as hierarchical page tables to reduce the size and improve performance, or hashed page tables to improve lookup speed.
  • 140. Structure of the Page Table ● The main components of a page table entry (PTE) include: ○ Page frame number (PFN): This is the physical address of the page frame that is currently mapped to the virtual page. ○ Valid/Invalid bit: This bit indicates whether the virtual page is currently mapped to a physical page frame. If the bit is set to "valid", the page is currently mapped, and the PFN field contains the physical address of the page frame. If the bit is set to "invalid", the virtual page is not currently mapped, and the PFN field is ignored. ○ Protection bits: These bits define the access rights for the page. They determine whether the page is read-only or read-write, whether it can be executed, and whether it can be accessed by privileged or unprivileged code. ○ Dirty bit: This bit indicates whether the page has been modified since it was last written to disk. It is used to optimize page replacement policies and reduce the number of unnecessary writes to disk. ○ Reference bit: This bit indicates whether the page has been accessed recently. It is used to optimize page replacement policies and reduce the number of unnecessary page swaps. ○ Page table pointer: In systems that use hierarchical page tables, the page table pointer field is used to store a pointer to the next level of the page table.
  • 141. Cont. ● Example page table (one row per virtual page): ○ VPN | PFN | Valid/Invalid | Protection | Dirty | Reference ○ 0 | 100 | 1 | RW | 0 | 1 ○ 1 | 200 | 1 | RO | 0 | 0 ○ 2 | 400 | 0 | – | – | – (invalid: the PFN and remaining fields are ignored)
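A minimal sketch of how such entries might be represented in software; the dataclass fields and the 4 KB page size are illustrative assumptions, since real page tables are hardware-defined structures.

    from dataclasses import dataclass

    PAGE_SIZE = 4096    # assumed 4 KB pages

    @dataclass
    class PTE:
        pfn: int            # page frame number
        valid: bool         # valid/invalid bit
        protection: str     # e.g. "RW" or "RO"
        dirty: bool = False
        referenced: bool = False

    # the example table above, keyed by virtual page number
    page_table = {0: PTE(100, True, "RW", False, True),
                  1: PTE(200, True, "RO", False, False),
                  2: PTE(400, False, "")}    # invalid: the PFN field is ignored

    def lookup(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        entry = page_table.get(vpn)
        if entry is None or not entry.valid:
            raise LookupError("page fault")      # the OS must bring the page in
        entry.referenced = True                  # hardware sets the reference bit
        return entry.pfn * PAGE_SIZE + offset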
  • 142. Swapping ● Swapping is a memory management technique used by operating systems to temporarily remove pages or portions of a process's working memory from physical memory and move them to secondary storage, such as a hard disk or solid-state drive (SSD). ● This frees up space in physical memory, which can be used to load other processes or pages. ● It involves several steps: selecting pages to swap out, writing them to disk and updating the page table, and later reading the pages back from disk and updating the page table again. ● Swapping can cause performance issues, but is necessary for managing memory and allowing multiple processes to share a limited amount of physical memory.
  • 143. Virtual Memory ● Virtual memory is a computer memory management technique that allows a computer to use more memory than it physically has available. ● It is a way of temporarily transferring pages of data from random access memory (RAM) to disk storage. ● When a program needs to access data or instructions that are not currently in RAM, the operating system moves the required data from the hard disk into RAM. ● The operating system then maps the virtual address requested by the program to a physical address in RAM, allowing the program to access the requested data or instructions. ● Virtual memory also provides security benefits by isolating processes and preventing them from accessing each other's memory spaces.
  • 144. Demand Paging ● Demand paging is a memory management technique used by operating systems to optimize the use of physical memory. ● It works by dividing a program into pages, which are the smallest unit of memory that can be loaded into physical memory. ● When a program needs to access a page of memory that is not currently in physical memory, the operating system will load that page into memory from disk storage. ● This process is known as a page fault and can significantly reduce the amount of physical memory required to run a program, but it can also introduce performance overhead.
  • 145. Cont. ● Performance of Demand Paging ○ Effective Access Time = (1 − probability_of_a_page_fault) × memory_access_time + probability_of_a_page_fault × page_fault_time ○ Probability_of_a_page_fault = 5/999 ○ Memory_access_time = 200 ns ○ Page_fault_time = 8 ms ○ = (1 − 5/999) × 200 ns + (5/999) × 8 ms ○ = (1 − 0.005005) × 200 ns + (0.005005) × 8 ms ○ = 0.994995 × 200 ns + 0.04004 ms ○ = 198.999 ns + 0.04004 ms ○ = 0.000198999 ms + 0.04004 ms ○ = 0.040239 ms
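The arithmetic above is easy to verify in Python once every time is converted to nanoseconds:

    p_fault = 5 / 999
    eat_ns = (1 - p_fault) * 200 + p_fault * 8_000_000    # 8 ms = 8,000,000 ns
    print(eat_ns / 1_000_000, "ms")                       # -> ~0.040239 ms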
  • 146. Page Fault ● A page fault is a type of interrupt that occurs when a program tries to access a page of memory that is not currently in physical memory (RAM). ● The operating system must retrieve the requested page from disk and load it into physical memory; the routine that does this is known as the page-fault handler. ● Virtual memory is a technique that allows a program to use more memory than is physically available by temporarily transferring pages of data from RAM to disk storage. ● When a program requests data that is not currently in RAM, the operating system will move the required data from disk storage to RAM. ● If the required data is not in any of the pages in RAM, a page fault will occur. Modern operating systems are optimized to minimize the frequency of page faults and to handle them efficiently when they do occur.
  • 147. Copy-on-Write ● Copy-on-write (COW) is a technique used in computer programming and operating systems to optimize memory usage and improve performance. ● When a process requests to make a copy of a block of memory, the operating system sets up a reference to the original block of memory and marks it as copy-on-write. ● This means that the original block of memory is shared between the two processes until one of them tries to modify it. ● COW is commonly used in forked processes, where a child process is created from a parent process. ● When the child process modifies a memory block, the operating system creates a new copy of the memory block, so that the changes made by the child process do not affect the memory of the parent process. ● COW can significantly reduce the memory overhead of creating new processes or making copies of memory blocks, but it does add some overhead to the process of modifying memory blocks.
  • 148. Page Replacement ● Page replacement is a technique used by computer operating systems to manage memory when there is not enough physical memory (RAM) to store all the data needed by running programs. ● When a process requests a memory page that is not present in memory, a page fault occurs and the operating system must find a page frame in physical memory into which to load the requested page. ● Page replacement algorithms determine which page frame to evict when there is no free page frame available in memory. ● There are several common page replacement algorithms, such as LRU, FIFO, Optimal, and Clock. ● LRU evicts the page that has not been accessed for the longest time, FIFO evicts the page that was loaded into memory first, Optimal evicts the page that will not be needed for the longest time in the future, and Clock (second chance) approximates LRU by evicting the first page, scanning in circular order, whose reference bit is clear.
  • 149. Least Recently Used ● LRU page replacement is an algorithm used by computer operating systems to manage memory when there is not enough physical memory (RAM) to store all the data needed by running programs. ● It evicts the least recently used page from memory when a new page needs to be loaded and there are no free page frames available. ● The LRU algorithm is designed to minimize the number of page faults by prioritizing pages that are more likely to be used again in the near future. ● To track recency of use, one common approach is to keep an access history for each page: either a counter or timestamp that is updated on every access, or a stack of page numbers implemented as a doubly linked list with the most recently used page on top.
  • 150. Cont. ● Replace the page whose last use is farthest in the past ● Example (3 frames, "." = frame still empty): ● Page: 7 0 1 2 0 3 0 4 2 3 0 3 2 ● Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0 ● Frame 2: . 0 0 0 0 0 0 0 0 3 3 3 3 ● Frame 3: . . 1 1 1 3 3 3 2 2 2 2 2
  • 151. First-In, First-Out ● FIFO page replacement is a simple and commonly used algorithm for managing memory in computer operating systems. ● It operates by evicting the oldest page in memory, which was loaded into memory first, when a new page needs to be loaded into memory and there are no free page frames available. ● However, it has several shortcomings, such as the "Belady's Anomaly" issue, where increasing the number of page frames in memory can actually increase the number of page faults. ● Additionally, it does not take into account the access patterns of pages or their frequency of use. Despite its limitations, the FIFO page replacement algorithm is still widely used in some operating systems, particularly those with limited resources and simpler memory management requirements.
  • 152. Cont. ● Replace the page that was loaded into memory earliest ● Example (3 frames, "." = frame still empty): ● Page: 7 0 1 2 0 3 0 4 2 3 0 3 2 ● Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0 ● Frame 2: . 0 0 0 0 3 3 3 2 2 2 2 2 ● Frame 3: . . 1 1 1 1 0 0 0 3 3 3 3
  • 153. Optimal Page Replacement ● The Optimal page replacement algorithm, also known as the MIN (Minimum) algorithm, is a theoretical page replacement algorithm that yields the lowest possible page-fault rate and therefore bounds the performance of every other page replacement algorithm. ● It works by evicting the page that will not be used for the longest time in the future. ● To implement the Optimal algorithm, the operating system needs perfect knowledge of the future memory access pattern of the program, which is not possible in real-world situations. ● Despite its theoretical nature, the Optimal algorithm can be useful in some situations where the memory access patterns of a program are known in advance. ● It provides a performance baseline that can be used to evaluate the effectiveness of other page replacement algorithms.
  • 154. Cont. ● Replace the page that will not be used for the longest time in the future ● Example (3 frames, "." = frame still empty): ● Page: 7 0 1 2 0 3 0 4 2 3 0 3 2 ● Frame 1: 7 7 7 2 2 2 2 2 2 2 2 2 2 ● Frame 2: . 0 0 0 0 0 0 4 4 4 0 0 0 ● Frame 3: . . 1 1 1 3 3 3 3 3 3 3 3
  • 155. Page Faults in each Replacement Algorithm ● LRU ○ 9 page faults ● FIFO ○ 10 page faults ● OPR ○ 7 page faults ● (The counts follow from the traces on slides 150, 152, and 154: every new value entering a frame is a fault.)
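These counts can be cross-checked with a small simulator; it assumes 3 frames and the reference string from the preceding slides, and uses simple list operations rather than the hardware-assisted structures a real kernel would use.

    def simulate(policy, refs, nframes=3):
        frames, faults, last_use = [], 0, {}
        for i, page in enumerate(refs):
            last_use[page] = i                        # recency, used by LRU
            if page in frames:
                continue                              # hit: nothing to do
            faults += 1
            if len(frames) < nframes:
                frames.append(page)                   # free frame available
            elif policy == "FIFO":
                frames.pop(0)
                frames.append(page)                   # evict the oldest arrival
            elif policy == "LRU":
                victim = min(frames, key=lambda p: last_use[p])
                frames[frames.index(victim)] = page   # evict least recently used
            elif policy == "OPT":
                def next_use(p):                      # distance to next reference
                    rest = refs[i + 1:]
                    return rest.index(p) if p in rest else float("inf")
                victim = max(frames, key=next_use)    # evict farthest future use
                frames[frames.index(victim)] = page
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
    for policy in ("LRU", "FIFO", "OPT"):
        print(policy, simulate(policy, refs))         # -> LRU 9, FIFO 10, OPT 7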
  • 156. Allocation of Frames ● The allocation of frames is the process of assigning a certain amount of memory to a running program or process. ● In a computer system, physical memory is divided into fixed-size chunks called "frames". These frames are allocated to processes or programs running on the system to store their data and instructions. ● There are several techniques used to allocate frames, such as fixed partitioning, dynamic partitioning, paging, and segmentation. ● Fixed partitioning involves dividing memory into fixed-size partitions, while dynamic partitioning involves dividing memory into variable-sized partitions. ● Page allocation involves dividing memory into fixed-size pages, while segmentation involves dividing memory into variable-sized segments.
  • 157. Cont. ● If each process gets an equal number of frames, it is known as equal allocation; if the allocation is based on the size or need of each process, it is called proportional allocation. ● In equal allocation each process gets (free_frames / total_processes) frames. ● In proportional allocation each process receives a_i = (s_i / S) × m, where s_i is the size (in pages) of process i, S is the total number of pages over all processes, m is the number of free frames, and a_i is the number of frames allocated to process i. ● Consider two processes containing 10 and 127 pages respectively, with 80 free frames in total; identify how many frames each process will get. ● a0 = (10/137) × 80 = 5.8 ≈ 6 ● a1 = (127/137) × 80 = 74.1 ≈ 74
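The worked example, reproduced in Python; round() is one reasonable way to settle the fractional frames, though a real allocator must also make sure the rounded shares still sum to m.

    sizes = [10, 127]    # s_i: pages per process
    m = 80               # free frames
    S = sum(sizes)       # 137 pages in total
    for i, s in enumerate(sizes):
        print(f"a{i} = {round(s / S * m)}")    # -> a0 = 6, a1 = 74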
  • 158. Thrashing ● Thrashing is a phenomenon that occurs in computer systems when the system spends a significant amount of time and resources continuously swapping pages between physical memory and virtual memory, without making any real progress in executing the program. ● It usually occurs when a system is under heavy load and there is not enough physical memory available to store all the pages needed by the active processes. ● To prevent thrashing, it is important to ensure that the system has enough physical memory to handle the workload, use efficient memory management techniques, and monitor the system's memory usage and page fault rates.
  • 159. Address Translation ● Address translation is the process of converting a virtual memory address used by a process to a physical memory address used by the system's memory management unit (MMU). ● This allows for efficient use of memory resources by allowing the operating system to manage the mapping of virtual addresses to physical addresses and allocate and free memory as needed.
  • 160. Cont. ● A process generates a virtual address of 0x12F8. The page size is 4KB and the process's page table maps virtual page number 0x1 to physical page frame number 0x2A. ○ What is the page number for the virtual address? ○ What is the offset for the virtual address? ○ What is the physical memory address corresponding to the virtual address? ● Note: You can assume that each page table entry is 4 bytes in size, and that the physical memory starts at address 0x00000.
  • 161. Cont. ● What is the page number for the virtual address? ○ Page_Number = Virtual_Address / Page_Size = 0x12F8/4KB = 0x1 ● What is the offset for the virtual address? ○ Offset = Virtual_Address % Page_Size = 0x12F8 % 4KB = 0x2F8 ● What is the physical memory address corresponding to the virtual address? ○ Physical_Memory_Address = (Physical_Page_Frame_Number * Page_Size) + Offset ○ = (0x2A * 4KB) + 0x2F8 ○ = 0x2A000 + 0x2F8 ○ = 0x2A2F8
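The same computation in Python; the single-entry page table reflects the mapping given in the question.

    PAGE_SIZE = 0x1000                      # 4 KB
    page_table = {0x1: 0x2A}                # virtual page number -> frame number

    vaddr = 0x12F8
    vpn, offset = divmod(vaddr, PAGE_SIZE)  # vpn = 0x1, offset = 0x2F8
    paddr = page_table[vpn] * PAGE_SIZE + offset
    print(hex(vpn), hex(offset), hex(paddr))    # -> 0x1 0x2f8 0x2a2f8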
  • 162. HDD Scheduling ● HDD (Hard Disk Drive) scheduling is the process of managing access to data stored on a hard disk drive. ● It is important in computer systems where multiple processes or users are accessing the hard disk simultaneously. ● The main objective of HDD scheduling is to optimize the use of the disk resources while maintaining high performance and minimizing the time required to access data. ● There are several approaches to HDD scheduling, such as FCFS, SSTF, SCAN, and C-SCAN. ○ FCFS services requests in the order in which they arrive, while ○ SSTF services the request that requires the least amount of movement of the read/write head. ○ SCAN services requests in a sweeping motion from one end of the disk to the other, then back to the starting point. ○ C-SCAN services requests only in one direction, then jumps to the other end of the disk and services requests in the same direction again.
  • 163. Cont. ● Example: 98, 183, 37, 122, 14, 124, 65, 67, and head is on 53 ○ FCFS: 98, 183, 37, 122, 14, 124, 65, 67 ○ SSTF: 65, 67, 37, 14, 98, 122, 124, 183 ○ SCAN: 65, 67, 98, 122, 124, 183, 37, 14 ○ C-SCAN: 65, 67, 98, 122, 124, 183, 14, 37 ● To find the total distance the head has to move in each, we take the sum of absolute differences: ○ FCFS: |53-98|+|98-183|+|183-37|+|37-122|+|122-14|+|14-124|+|124-65|+|65-67| = 640 ○ SSTF: |53-65|+|65-67|+|67-37|+|37-14|+|14-98|+|98-122|+|122-124|+|124-183| = 236 ○ SCAN: |53-65|+|65-67|+|67-98|+|98-122|+|122-124|+|124-183|+|183-37|+|37-14| = 299 ○ C-SCAN: |53-65|+|65-67|+|67-98|+|98-122|+|122-124|+|124-183|+|183-14|+|14-37| = 322
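A short Python sketch that reproduces the four totals; note that, as in the slide's working, SCAN and C-SCAN are computed turning around at the last pending request rather than at the physical end of the disk.

    requests = [98, 183, 37, 122, 14, 124, 65, 67]
    head = 53

    def total_movement(order, start=head):
        return sum(abs(b - a) for a, b in zip([start] + order[:-1], order))

    fcfs = requests[:]
    up   = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    scan  = up + down                                     # sweep up, then back down
    cscan = up + sorted(r for r in requests if r < head)  # jump back, same direction

    pending, pos, sstf = requests[:], head, []
    while pending:                                        # SSTF: closest request next
        nxt = min(pending, key=lambda r: abs(r - pos))
        sstf.append(nxt)
        pending.remove(nxt)
        pos = nxt

    for name, order in [("FCFS", fcfs), ("SSTF", sstf),
                        ("SCAN", scan), ("C-SCAN", cscan)]:
        print(name, order, total_movement(order))
    # -> FCFS 640, SSTF 236, SCAN 299, C-SCAN 322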
  • 164. NVM Scheduling ● NVM (Non-Volatile Memory) scheduling is the process of managing access to non-volatile memory devices such as flash memory. ● It is important in systems that use NVM as a storage medium, such as SSDs and hybrid memory systems. ● The main objective of NVM scheduling is to optimize the use of the NVM resources while maintaining high performance and minimizing wear on the NVM devices. ● There are several approaches to NVM scheduling, such as queue-based scheduling, deadline-based scheduling, and group-based scheduling. ○ Queue-based scheduling uses a queue to manage incoming requests to the NVM devices, while ○ deadline-based scheduling assigns a deadline to each request and services requests based on their deadline. ○ Group-based scheduling groups requests based on their access patterns and services them in batches, which reduces the number of erase and write operations required.
  • 165. Application I/O Interface ● An Application I/O Interface (AIO) is a software layer that provides a standard interface between applications and I/O devices. ● It abstracts the underlying hardware details and provides a common set of functions that applications can use to interact with I/O devices, regardless of the specific device or platform being used. ● One of the key benefits of an AIO interface is that it allows applications to be written in a platform-independent manner, while also simplifying the process of writing device drivers. ● Overall, an AIO provides a convenient and consistent way for applications to interact with I/O devices, while also allowing for flexibility and platform independence.
  • 166. Kernel I/O Subsystem ● The Kernel I/O Subsystem is a core component of an operating system that provides a unified and efficient way for applications to access I/O devices. ● It is responsible for managing the flow of data between applications and the hardware devices, and ensuring that I/O operations are performed in a reliable and secure manner. ● It typically includes device drivers, system calls, library functions, and system-level services that handle tasks such as buffering, caching, and queuing of I/O requests. ● It provides a consistent and standardized interface for applications to access I/O devices, and can improve the performance and efficiency of I/O operations by optimizing the use of system resources. ● Overall, the Kernel I/O Subsystem plays a critical role in providing a reliable and efficient way for applications to access I/O devices, and is a core component of most modern operating systems.
  • 167. Transforming I/O Requests to Hardware Operations ● Transforming I/O requests to hardware operations involves a number of steps that are typically handled by the Kernel I/O Subsystem of an operating system. ● These steps include: ○ application requests I/O operation: an application issues a system call to request an I/O operation specifying the device and data to be transferred; ○ I/O request queued: the Kernel I/O Subsystem places the I/O request in a queue, and schedules the request to be processed by the appropriate device driver; ○ device driver prepares hardware operation: the device driver prepares the necessary hardware commands to perform the requested I/O operation; ○ hardware operation executed: the hardware performs the requested I/O operation, and data is transferred between the device and memory; ○ interrupt generated: the device generates an interrupt signal to inform the CPU that the operation has finished; ○ interrupt handled: the Kernel I/O Subsystem wakes up the application thread that initiated the I/O request and signals that the operation has completed; ○ data returned to application: the application receives the data that was transferred during the I/O operation and resumes execution.
  • 168. File Concept ● File Attributes: ○ Name, Identifier, Type, Location, Size, Protection, Timestamps and User Identification ● File Operations ○ Creating, Opening, Writing, Reading, Repositioning within file, Deleting, Truncating ● File Structure ○ Contiguous allocation: This structure stores a file as a contiguous block of data on the storage device. It is simple and efficient but can lead to fragmentation as files are added, deleted, and modified. ○ Linked allocation: This structure stores a file as a linked list of data blocks on the storage device. Each block contains a pointer to the next block in the file. It is flexible and can handle files of any size, but can be slow to access as each block must be read separately. ○ Indexed allocation: This structure uses an index to store the addresses of the data blocks that make up a file. The index is stored separately from the file data, making it faster to access and less susceptible to fragmentation. ○ Combined allocation: This structure combines elements of both contiguous and linked allocation to provide a more efficient solution. The file is stored as a contiguous block of data until it reaches a certain size, after which it is stored using linked allocation.
  • 169. Access Methods ● Sequential Access: In this method, data is accessed in a sequential manner, i.e., the computer reads data from the beginning of a file or storage device and continues to read data until it reaches the end of the file. This method is commonly used for reading data from tapes. ● Direct Access: This method allows the computer to read data from any point in a file or storage device. With direct access, the computer can jump directly to the desired location and retrieve the data, rather than having to read all the data in sequence. This method is commonly used for reading data from hard disk drives. ● Random Access: This method allows the computer to access any location in a storage device directly and quickly. Random access is used in memory devices such as RAM and cache, where data can be accessed in any order. ● Indexed Access: This method involves the use of an index to locate data within a file or storage device. An index is a data structure that contains pointers to the locations of data in the file. This method is commonly used for reading data from databases.
  • 170. Directory Structure ● Single-level directory structure: In this structure, all files are stored in a single directory. This approach is simple but can become unmanageable when the number of files grows. ● Two-level directory structure: This structure uses a root directory to contain multiple user directories. Each user directory can then contain its own set of files. This approach is more organized than a single-level directory structure but can still become difficult to manage as the number of users and files increases. ● Hierarchical directory structure: This structure uses a tree-like hierarchy of directories to organize files. Each directory can contain its own set of files or subdirectories, creating a logical organization of files. This approach is used in most modern operating systems. ● Indexed directory structure: This structure uses an index to organize files. The index contains pointers to the data, allowing the system to quickly access specific files without having to scan the entire directory. ● Virtual file system: A virtual file system is a file structure that provides an interface for accessing different types of files and storage devices. This allows the operating system to manage all file types and storage devices using a single file structure.