Maximum CPU utilization is obtained with multiprogramming
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
CPU burst followed by I/O burst
CPU burst distribution is of main concern
Process scheduling involves assigning system resources like CPU time to processes. There are three levels of scheduling - long, medium, and short term. The goals of scheduling are to minimize turnaround time, waiting time, and response time for users while maximizing throughput, CPU utilization, and fairness for the system. Common scheduling algorithms include first come first served, priority scheduling, shortest job first, round robin, and multilevel queue scheduling. Newer algorithms like fair share scheduling and lottery scheduling aim to prevent starvation.
Round-robin scheduling assigns the CPU to processes in time quanta of 4 time units. With three processes all arriving at time 0 and burst times of 24, 3, and 3 time units, the average turnaround time works out to 15.67 time units and the average waiting time to 5.67 time units.
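The round-robin figures above can be reproduced with a short simulation (a minimal sketch; it assumes all processes arrive at time 0, as in the example):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin for processes that all arrive at t=0.
    Returns (average turnaround time, average waiting time)."""
    n = len(bursts)
    remaining = list(bursts)
    queue = deque(range(n))
    t = 0
    completion = [0] * n
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run one quantum or less
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # not finished: back of the queue
        else:
            completion[i] = t
    turnaround = completion                # arrival time is 0 for every process
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    return sum(turnaround) / n, sum(waiting) / n

avg_tat, avg_wt = round_robin([24, 3, 3], quantum=4)
print(round(avg_tat, 2), round(avg_wt, 2))  # 15.67 5.67
```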
Context switching allows a system to switch between processes by saving the state of the current process and loading the saved state for a new process. This allows multiple processes to share resources like the CPU and gives the appearance of parallel processing. Context switching has advantages like enabling multitasking but also has disadvantages like requiring time for the switching process itself.
The document discusses CPU scheduling in operating systems. It describes how the CPU scheduler selects processes that are ready to execute and allocates the CPU to one of them. The goals of CPU scheduling are to maximize CPU utilization, minimize waiting times and turnaround times. Common CPU scheduling algorithms discussed are first come first serve (FCFS), shortest job first (SJF), priority scheduling, and round robin scheduling. Multilevel queue scheduling is also mentioned. Examples are provided to illustrate how each algorithm works.
Scheduling is a method used to allocate computing resources like processor time, bandwidth, and memory to processes, threads, and applications. It aims to balance system load, ensure equal distribution of resources, and prioritize processes according to set rules. There are different types of scheduling including long-term, medium-term, and short-term scheduling. Scheduling algorithms decide which process from the ready queue is allocated the CPU based on whether the policy is preemptive or non-preemptive. Common algorithms include first-come first-served, shortest job first, priority scheduling, and round-robin scheduling.
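For the simplest of these policies, first-come first-served, each process's waiting time is just the sum of the burst times ahead of it. A minimal sketch (burst times are the classic convoy-effect example, chosen for illustration):

```python
def fcfs(bursts):
    """FCFS waiting times for processes arriving at t=0 in list order:
    each process waits for the total burst time of those before it."""
    waiting, elapsed = [], 0
    for b in bursts:
        waiting.append(elapsed)
        elapsed += b
    return waiting

# A long job arriving first delays everyone behind it (convoy effect).
print(fcfs([24, 3, 3]))  # [0, 24, 27]
```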
This document covers CPU scheduling algorithms and worked examples, scheduling problems, real-time scheduling algorithms and their issues, and multiprocessing and multicore scheduling.
The document discusses various CPU scheduling algorithms including first come first served, shortest job first, priority, and round robin. It describes the basic concepts of CPU scheduling and criteria for evaluating algorithms. Implementation details are provided for shortest job first, priority, and round robin scheduling in C++.
The document discusses several key process scheduling policies and algorithms:
1. Scheduling policies such as maximizing throughput and minimizing response time aim to optimize different performance metrics, such as job completion time.
2. Common scheduling algorithms include first come first served (FCFS), shortest job next (SJN), priority scheduling, round robin, and multilevel queues. Each has advantages for different workload types.
3. The document also covers process synchronization challenges like deadlock and livelock that can occur when processes contend for shared resources in certain ordering. Methods to avoid or recover from such issues are important for system design.
Scheduling plays one of the most important roles in an OS. This presentation describes process scheduling in a creative, easy-to-understand way.
Operating Systems Process Scheduling Algorithms, by sathish sak
The document discusses various CPU scheduling algorithms used in operating systems including first-come, first-served (FCFS), round robin (RR), shortest job first (SJF), and shortest remaining time first (SRTF). It explains the assumptions, goals, and tradeoffs of each algorithm such as minimizing response time, maximizing throughput, and ensuring fairness. Examples are provided to illustrate how each algorithm works and its performance compared to others under different conditions involving job lengths. Predicting future job lengths is also discussed as it can impact the performance of algorithms like SRTF.
CPU scheduling allows processes to share the CPU by pausing execution of some processes to allow others to run. The scheduler selects which process in memory runs on the CPU. There are four types of scheduling decisions: when a process pauses for I/O, switches from running to ready, finishes I/O, or terminates. Scheduling can be preemptive, where a higher priority process interrupts a running one, or non-preemptive. Common algorithms are first come first serve, shortest job first, priority, and round robin. Real-time scheduling aims to process data without delays and ensures the highest priority tasks run first.
1. The timing behavior of the OS must be predictable: OS services must have an upper bound on their execution time.
2. The OS must manage timing and scheduling: the OS may have to be aware of task deadlines (unless scheduling is done off-line).
3. The OS must be fast.
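Deadline awareness can be illustrated with earliest-deadline-first (EDF) selection, a common real-time policy (a minimal sketch; the task names and deadlines are hypothetical):

```python
def edf_pick(ready):
    """Pick the ready task with the earliest absolute deadline.
    Each task is a (name, deadline) tuple."""
    return min(ready, key=lambda task: task[1])

ready = [("logger", 50), ("sensor", 12), ("control", 30)]
print(edf_pick(ready))  # ('sensor', 12)
```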
The document discusses contiguous memory allocation and strategies for partition allocation such as first fit, best fit, and worst fit. It defines contiguous memory allocation as each process being contained in a single contiguous section of memory. It also describes fragmentation as either external, where total memory exists but is not contiguous, or internal, where allocated memory is slightly larger than requested. To reduce external fragmentation, it suggests compaction by shuffling memory contents to place all free memory together. Paging and segmentation are presented as two complementary techniques to non-contiguous allocation.
First Come First Serve & Shortest Job First (FCFS & SJF), by Adeel Rasheed
This document compares three CPU scheduling algorithms: first-come, first-served (FCFS); shortest job first (SJF); and preemptive FCFS. It provides examples of each with processes of different burst times and calculates their average turnaround times and waiting times. FCFS has the highest average turnaround time of 27 time units. Preemptive FCFS improves this to 13 time units by allowing short jobs to preempt longer jobs. SJF has the lowest average turnaround and waiting times of 13 and 7 time units respectively by always selecting the shortest job to run next.
Monitors provide mutual exclusion and condition variables to synchronize processes. A monitor consists of private variables and procedures, public entry procedures that form its interface, and initialization procedures. Condition variables allow processes to wait for events within a monitor. When a condition variable is signaled, either the signaling process waits (Hoare semantics) or the released process waits to reacquire the monitor (Mesa semantics).
1) A semaphore consists of a counter, a waiting list, and wait() and signal() methods. Wait() decrements the counter and blocks the caller if the counter becomes negative, while signal() increments the counter and wakes a blocked process if any are waiting (that is, if the counter was negative).
2) The dining philosophers problem is solved using semaphores to lock access to shared chopsticks, with one philosopher designated as a "weirdo" to avoid deadlock by acquiring locks in a different order.
3) The producer-consumer problem uses three semaphores - a counting semaphore for full slots, a counting semaphore for empty slots, and a binary semaphore for exclusive access to the buffer - to coordinate producers adding items to and consumers removing items from a bounded buffer.
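The bounded-buffer coordination just described can be sketched with Python's threading primitives (a minimal sketch; the buffer capacity and item count are arbitrary choices, not from the original document):

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # exclusive access to the buffer
consumed = []

def producer():
    for item in range(5):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                 # announce a filled slot

def consumer():
    for _ in range(5):
        full.acquire()                 # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # announce a free slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

With a single producer appending in order and a single consumer removing FIFO, the output is deterministic even though the interleaving is not.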
This document summarizes key concepts from Chapter 5 of the textbook "Operating System Concepts - 8th Edition" regarding CPU scheduling. It introduces CPU scheduling as the basis for multiprogrammed operating systems. Various scheduling algorithms are described such as first-come first-served, shortest job first, priority scheduling, and round robin. Criteria for evaluating scheduling algorithms include CPU utilization, throughput, turnaround time, waiting time, and response time. Ready queues can be partitioned into multiple levels with different scheduling policies to implement multilevel queue and feedback queue scheduling.
This presentation explains how frames are allocated in an operating system, the algorithms used for frame allocation, and thrashing, to clarify these ideas.
This document discusses different CPU scheduling algorithms. It describes the First Come First Serve (FCFS), Shortest Job First (SJF), Priority Scheduling (PS), and Round Robin (RR) algorithms. Each algorithm is evaluated based on criteria like average turnaround time, waiting time, and CPU utilization. FCFS is found to have the highest CPU utilization but higher average turnaround times. SJF provides the lowest average turnaround times but can cause starvation of longer jobs. RR provides fairness but the time quantum setting impacts efficiency. The best algorithm depends on the specific performance measures and system requirements.
This document discusses different types of scheduling algorithms used by operating systems to determine which process or processes will run on the CPU. It describes preemptive and non-preemptive scheduling, and provides examples of common scheduling algorithms like first-come, first-served (FCFS), shortest job first (SJF), round robin, and priority-based scheduling. Formulas for calculating turnaround time and waiting time are also presented.
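The formulas mentioned above are typically turnaround time = completion time minus arrival time, and waiting time = turnaround time minus burst time. A minimal sketch with hypothetical values:

```python
def metrics(arrival, burst, completion):
    turnaround = completion - arrival   # total time spent in the system
    waiting = turnaround - burst        # time spent ready but not running
    return turnaround, waiting

print(metrics(arrival=0, burst=24, completion=30))  # (30, 6)
```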
Here are the key steps in Preemptive SJF scheduling:
1. Jobs A, B, C, D arrive in that order with the given burst times
2. Job B has the shortest burst time of 4 so it runs first
3. Job A runs next once B completes its 4-unit burst
4. Job D has the next shortest burst of 5 so it runs
5. Job C has the next shortest burst of 9 so it runs
6. Job A completes after running a total of 7 + 7 = 14 units
The average wait time is (9 + 0 + 15 + 2) / 4 = 6.5
This example shows how preemptive SJF (shortest remaining time first) always selects the job with the least remaining work.
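A shortest-remaining-time-first scheduler can be sketched as follows (a minimal unit-time simulation; the arrival and burst times here are hypothetical, not the ones from the steps above):

```python
def srtf(jobs):
    """jobs: {name: (arrival, burst)}. Returns completion times.
    At every time unit, run the arrived job with the least remaining work."""
    remaining = {name: burst for name, (_, burst) in jobs.items()}
    completion, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:
            t += 1                     # CPU idle: no job has arrived yet
            continue
        n = min(ready, key=lambda name: remaining[name])
        remaining[n] -= 1              # run the chosen job for one unit
        t += 1
        if remaining[n] == 0:
            completion[n] = t
            del remaining[n]
    return completion

print(srtf({"A": (0, 7), "B": (2, 4), "C": (4, 1)}))  # {'C': 5, 'B': 7, 'A': 12}
```

Note how A is preempted by B at t=2, and B in turn by C at t=4, because each newcomer has less remaining work.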
The document discusses the actions taken by the kernel during a context switch between processes. It explains that a context switch involves suspending the currently running process, storing its context in the Process Control Block (PCB), and loading and resuming the context of another process from its PCB. The PCB contains information about the process state, registers, memory management, and more. Context switching has significant overhead as it involves saving and loading all this process context data.
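The save-and-restore mechanism described above can be modeled as copying register state into and out of a PCB (a toy model; the fields shown are illustrative, and real PCBs hold far more state):

```python
class PCB:
    """Toy process control block: holds a process's saved CPU context."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "ready"
        self.context = {"pc": 0, "sp": 0, "registers": {}}

def context_switch(cpu, old, new):
    old.context = dict(cpu)       # save the running process's context in its PCB
    old.state = "ready"
    cpu.clear()
    cpu.update(new.context)       # load the next process's saved context
    new.state = "running"

cpu = {"pc": 104, "sp": 5000, "registers": {"r0": 7}}
p1, p2 = PCB(1), PCB(2)
p1.state = "running"
context_switch(cpu, p1, p2)
print(p1.context["pc"], cpu["pc"])  # 104 0
```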
This document discusses semaphores, which are integer variables that coordinate access to shared resources. It describes counting semaphores, which allow multiple processes to access a critical section simultaneously up to a set limit, and binary semaphores, which only permit one process at a time. Key differences are that counting semaphores can have any integer value while binary semaphores are limited to 0 or 1, and counting semaphores allow multiple slots while binary semaphores provide strict mutual exclusion. Limitations of semaphores include potential priority inversion issues and deadlocks if not used properly.
- Paging is a memory management technique that divides logical memory into fixed-size pages and physical memory into frames. When a process is executed, its pages are loaded into any available frames. This allows physical memory to be non-contiguous while avoiding external fragmentation.
- Address translation uses a page table containing the frame number for each process page. A logical address is divided into a page number, which indexes the page table, and a page offset, which combined with the frame base address gives the physical memory location.
- Segmentation divides a process into variable-sized segments, each with a base and limit defined in a segment table. A logical address has a segment number and offset; the offset is checked against the segment's limit and added to the base to form the physical address.
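The paging translation described above (split the logical address into page number and offset, then combine the frame base with the offset) can be sketched directly (the page size and page table contents are hypothetical):

```python
PAGE_SIZE = 1024  # bytes per page and per frame

def translate(logical, page_table):
    page = logical // PAGE_SIZE        # index into the page table
    offset = logical % PAGE_SIZE       # position within the page
    frame = page_table[page]           # frame currently holding this page
    return frame * PAGE_SIZE + offset  # physical address

page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number
print(translate(2100, page_table))     # page 2, offset 52 -> 7*1024 + 52 = 7220
```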
The document discusses memory management techniques used in operating systems. It describes logical vs physical addresses and how relocation registers map logical addresses to physical addresses. It covers contiguous and non-contiguous storage allocation, including paging and segmentation. Paging divides memory into fixed-size frames and pages, using a page table and translation lookaside buffer (TLB) for address translation. Segmentation divides memory into variable-sized segments based on a program's logical structure. Virtual memory and demand paging are also covered, along with page replacement algorithms like FIFO, LRU and optimal replacement.
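The LRU page-replacement policy mentioned above can be sketched with an ordered dictionary that tracks recency of use (the reference string and frame count are arbitrary examples):

```python
from collections import OrderedDict

def lru_faults(references, frames):
    """Count page faults under LRU with a fixed number of frames."""
    memory = OrderedDict()   # pages ordered oldest-use first
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # just used: now most recent
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], frames=3))  # 6
```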
The document discusses different CPU scheduling algorithms used in operating systems. It describes non-preemptive and preemptive scheduling and explains the key differences. It then covers four common scheduling algorithms - first come first served (FCFS), round robin, priority scheduling, and shortest job first (SJF) - and compares their advantages and disadvantages.
This document discusses deadlocks and techniques for handling them. It begins by defining the four necessary conditions for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. It then describes three approaches to handling deadlocks: prevention, avoidance, and detection and recovery. Prevention aims to ensure one of the four conditions never holds. Avoidance uses more information to determine if a request could lead to a deadlock. Detection and recovery allows deadlocks but detects and recovers from them after the fact. The document provides examples of different prevention techniques like limiting resource types that can be held, ordering resource types, and preemption. It also explains the banker's algorithm for deadlock avoidance.
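The banker's algorithm mentioned above grants a request only if the resulting state is safe, i.e. some ordering lets every process finish. A minimal sketch of the safety check (the allocation and maximum matrices are hypothetical):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: can every process run to completion in some order?"""
    need = [[m - a for m, a in zip(mx, al)]
            for mx, al in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

allocation = [[0, 1], [2, 0]]
maximum    = [[1, 2], [3, 1]]
print(is_safe([1, 1], allocation, maximum))  # True
```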
This document discusses various concepts and algorithms related to process scheduling. It covers basic concepts like CPU bursts and scheduling criteria. It then describes several common scheduling algorithms like FCFS, SJF, priority scheduling, and round robin. It also discusses more advanced topics like multiple processor scheduling, thread scheduling, and load balancing.
This document discusses threads and threading models. It defines a thread as the basic unit of CPU utilization consisting of a program counter, stack, and registers. Threads allow for simultaneous execution of tasks within the same process by switching between threads rapidly. There are three main threading models: many-to-one maps many user threads to one kernel thread; one-to-one maps each user thread to its own kernel thread; many-to-many maps user threads to kernel threads in a variable manner. Popular thread libraries include POSIX pthreads and Win32 threads.
In this presentation, I explain threads, the types of threads, their advantages and disadvantages, the difference between processes and threads, and multithreading and its types.
The document discusses process scheduling and the five states a process can be in: new, ready, running, waiting, and terminated. It explains that scheduling is the arrangement of processes in execution order and involves assigning processes to a processor based on their state. The goals of scheduling are high throughput and CPU utilization together with low turnaround, waiting, and response times.
Threads allow a process to divide work into multiple simultaneous tasks. On a single processor system, multithreading uses fast context switching to give the appearance of simultaneity, while on multi-processor systems the threads can truly run simultaneously. There are benefits to multithreading like improved responsiveness and resource sharing.
There are several mechanisms for inter-process communication (IPC) in UNIX systems, including message queues, shared memory, and semaphores. Message queues allow processes to exchange data by placing messages into a queue that can be accessed by other processes. Shared memory allows processes to communicate by declaring a section of memory that can be accessed simultaneously. Semaphores are used to synchronize processes so they do not access critical sections at the same time.
The document discusses different process scheduling algorithms used by operating systems. It introduces key concepts like processes, CPU bursts, turnaround time and waiting time. It then describes common scheduling policies like preemptive and non-preemptive. Specific algorithms covered include First Come First Served (FCFS), Shortest Job First (SJF), Round Robin (RR) and Priority-based scheduling. Examples are provided to illustrate how each algorithm works.
The document discusses interprocess communication and summarizes the key points about client-server and group communication patterns. It describes the Java API for internet protocols, which provides datagram and stream communication using UDP and TCP. Specifically, it outlines how UDP supports message passing through datagrams, while TCP provides reliable, ordered streams between processes.
This document discusses various inter-process communication (IPC) types including shared memory, mapped memory, pipes, FIFOs, message queues, sockets, and signals. Shared memory allows processes to directly read and write to the same region of memory, requiring synchronization between processes. Mapped memory permits processes to communicate by mapping the same file into memory. Pipes and FIFOs allow for sequential data transfer between related and unrelated processes. Message queues provide a way for processes to exchange messages via a common queue. Signals are used to asynchronously notify processes of events.
This document discusses various inter-process communication (IPC) mechanisms in Linux, including pipes, FIFOs, and message queues. Pipes allow one-way communication between related processes, while FIFOs (named pipes) allow communication between unrelated processes through named pipes that persist unlike anonymous pipes. Message queues provide more robust messaging between unrelated processes by allowing messages to be queued until received and optionally retrieved out-of-order or by message type. The document covers the key functions and system calls for creating and using each IPC mechanism in both shell and C programming.
This document discusses the slides for Unit 2 of the Operating Systems course. It includes an index of lecture topics that will be covered, such as process concepts and threads, scheduling criteria and algorithms, thread scheduling, case studies of UNIX/Linux and Windows operating systems, and revision. Key concepts that will be covered include processes and threads, process state diagrams, process control blocks, CPU scheduling queues, producer-consumer problem solutions, scheduling criteria and algorithms like FCFS, SJF, priority and round robin, and thread scheduling models.
The document discusses processes and process scheduling in operating systems. It defines a process as a program in execution that contains a program counter, stack, and data section. Processes can be in various states like new, ready, running, waiting, and terminated. A process control block contains information about each process like its state, program counter, memory allocation, and more. Scheduling aims to optimize CPU utilization, throughput, turnaround time, waiting time, and response time using algorithms like first come first serve, shortest job first, priority, and round robin scheduling.
The document discusses Windows XP's scheduling algorithm. It uses a priority-based, preemptive approach with 32 priority levels divided into variable and real-time classes. The scheduler ensures the highest priority thread runs by maintaining queues for each priority level and traversing from highest to lowest. Threads start at the process' base priority and may have their priority lowered after time quantums expire to limit CPU consumption of compute-intensive threads.
Here are the steps to solve this problem:
a) Non-Preemptive Priority Scheduling:
- Process order based on priority: P2, P4, P1, P3
- Number of context switches = 3
b) Round Robin Scheduling with time slice = 2:
- Process order: P1, P2, P1, P4, P1, P3, P1
- Number of context switches = 6
c) With RR, the behavior depends on the time slice size. With a small time slice of 2ms, most processes cannot complete within one time slice. This leads to a larger number of context switches compared to priority scheduling.
d)
The document discusses different CPU scheduling algorithms used in operating systems including first-come, first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It provides examples of how each algorithm works and compares their performance based on criteria like average waiting time, throughput, turnaround time, and response time. FCFS can lead to the convoy effect, with longer processes blocking shorter ones. SJF provides optimal waiting times but requires knowing future process lengths. Priority scheduling can starve low-priority processes unless aging is applied. RR gives each process a time slice or quantum before switching to ensure fairness.
The document discusses different CPU scheduling algorithms:
1. First Come First Served scheduling allocates the CPU to the longest-waiting process first, which can result in shorter processes waiting behind longer ones (convoy effect).
2. Shortest Job First scheduling allocates CPU to the process with the shortest estimated run time, minimizing average wait time. Preemptive SJF allows interrupting the current process if a shorter one arrives.
3. Priority scheduling assigns priority levels and allocates CPU to the highest priority ready process. Preemption and aging policies address starvation of lower priority processes.
4. Round Robin scheduling allocates a time quantum (e.g. 10-100ms) to each ready process
This document discusses various CPU scheduling algorithms and concepts. It covers scheduling criteria like CPU utilization and turnaround time. Algorithms discussed include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also covers multiple queue scheduling, real-time scheduling, and ways to evaluate scheduling algorithm performance like deterministic modeling and simulation.
CPU scheduling decides which processes run when multiple are ready. It aims to make the system efficient, fast and fair. There are different scheduling algorithms like first come first serve (FCFS), shortest job first (SJF), priority scheduling, and round robin. Multi-level feedback queue scheduling uses multiple queues and allows processes to move between queues based on their CPU usage to prioritize shorter interactive processes.
The document discusses various CPU scheduling concepts and algorithms. It covers basic concepts like CPU-I/O burst cycles and scheduling criteria. It then describes common scheduling algorithms like first come first served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also discusses more advanced topics like multi-level queue scheduling, multi-processor scheduling, and thread scheduling in Linux.
CPU scheduling involves selecting which process to execute next from among processes in memory. There are several criteria for evaluating CPU scheduling algorithms, including CPU utilization, throughput, turnaround time, waiting time, and response time. Common algorithms include first-come, first-served (FCFS), shortest-job-first (SJF), priority scheduling, and round robin (RR). Multilevel queue and feedback queue scheduling involve partitioning processes into multiple queues that use different scheduling policies.
The document discusses CPU scheduling in operating systems. It covers basic concepts like CPU bursts and scheduling criteria. It then describes common scheduling algorithms like FCFS, SJF, priority scheduling, and round robin. It also discusses thread scheduling, multiprocessor scheduling, and examples of scheduling in Solaris, Windows, and Linux.
This document summarizes key concepts in CPU scheduling, including:
1) CPU scheduling algorithms like FCFS, SJF, priority, and round robin and how they optimize different criteria.
2) The role of dispatchers in context switching between processes.
3) Scheduling criteria like CPU utilization, throughput, turnaround time.
4) Advanced scheduling techniques like multilevel queues, multilevel feedback queues, and multiprocessor scheduling.
1. Process management is an integral part of operating systems for allocating resources, enabling information sharing, and protecting processes. The OS maintains data structures describing each process's state and resource ownership.
2. Processes go through discrete states and events can cause state changes. Scheduling selects processes to run from ready, device, and job queues using algorithms like round robin, shortest job first, and priority scheduling.
3. CPU scheduling aims to maximize utilization and throughput while minimizing waiting times using criteria like response time, turnaround time, and fairness between processes.
This document discusses various CPU scheduling algorithms and concepts. It covers scheduling criteria like CPU utilization and turnaround time. Algorithms discussed include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also covers multiple processor scheduling, real-time scheduling, and evaluating scheduling algorithms.
The CPU scheduling chapter discusses key concepts like CPU utilization, the CPU-I/O burst cycle, and the histogram of CPU burst times. The CPU scheduler selects processes from the ready queue to allocate the CPU. Scheduling can be preemptive or nonpreemptive. Preemptive scheduling can cause race conditions when processes share data. The dispatcher switches processes and handles context switching. Scheduling aims to optimize criteria like CPU utilization, throughput, turnaround time, waiting time and response time. First-come first-served scheduling considers processes in the order they arrive, while shortest-job-first is optimal but requires knowing the next CPU burst length, which can be estimated. Shortest-remaining-time-first is the preemptive version of shortest-job-first.
This document discusses CPU scheduling in operating systems. It begins by introducing CPU scheduling and describing the goals of scheduling algorithms. It then explains common scheduling algorithms like first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). The document also covers multilevel queue scheduling, thread scheduling, multiple processor scheduling, and real-time CPU scheduling.
CPU scheduling determines which process will be assigned to the CPU for execution. There are several types of scheduling algorithms:
First-come, first-served (FCFS) assigns processes in the order they arrive without preemption. Shortest-job-first (SJF) selects the process with the shortest estimated run time, but may result in starvation of longer processes. Priority scheduling assigns priorities to processes and selects the highest priority process, but low priority processes risk starvation.
This document discusses distributed operating systems and CPU scheduling. It covers basic concepts of CPU scheduling like processes, context switching, and dispatching. It then discusses different scheduling algorithms like first-come first-served, shortest job first, priority scheduling, and round robin. It also covers multiple processor scheduling, real-time scheduling, and algorithm evaluation. Deadlocks are discussed including characterization, handling methods like prevention, avoidance, and detection. Memory management techniques like swapping, paging, segmentation and their implementation are also summarized.
The operating system maintains information about each process in a data structure called a process control block (PCB). The PCB is created when a new process is started by a user and contains information like the process state, program counter, CPU registers, scheduling information, memory management details, accounting information, and I/O status. PCBs allow the OS to efficiently manage and switch between processes by providing all necessary process details in one place.
2. Reference:-
After Reading This Topic. . .
•Goals of processor scheduling.
•Preemptive vs. non-preemptive scheduling.
•Role of priorities in scheduling.
•Scheduling criteria.
•Common scheduling algorithms.
3. Reference:-
What is Process Scheduling?
Assignment of the processor to processes to accomplish the work
Deciding when, and to which process, the processor should be assigned is process scheduling
When more than one process is runnable, the CPU must decide which to run first
The part of the OS concerned with this decision is called the Scheduler
The algorithm it uses to make this decision is called the Scheduling Algorithm
http://www.cs.kent.edu/~rmuhamma/
4. Reference:-
Basic Concept
■ Maximum CPU utilization obtained with multiprogramming
■ CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
■ CPU burst followed by I/O burst
■ CPU burst distribution is of main concern
Silberchatz, Galvin and Gagne Operating System, 9th Edition
5. Reference:-
Policy To Process Scheduling
Decides which process runs at a given time
Different schedulers have different goals:
•Maximize throughput
•Minimize latency
•Prevent indefinite postponement
•Complete process by given deadline
•Maximize processor utilization
http://www.cs.kent.edu/~rmuhamma/
6. Reference:-
What the Scheduler Tries to Achieve
•Maximize throughput
•Minimize response time
•Maximize resource utilization
•Avoid indefinite postponement
•Enforce priorities
•Minimize overhead
•Ensure predictability
•Policy enforcement
Different objectives depending on the system
Deitel & Deitel, Operating System
8. Reference:-
Scheduling Levels
High-level scheduling
•Determines which jobs can compete for resources
•Controls number of processes in system at one time
Intermediate-level scheduling
•Determines which processes can compete for processors
•Responds to fluctuations in system load
Low-level scheduling
•Assigns priorities
•Assigns processors to processes
Deitel & Deitel, Operating System
10. Reference:-
Preemptive vs. Nonpreemptive Scheduling
Preemptive processes
•Can be removed from their current processor
•Can lead to improved response times
•Important for interactive environments
•Preempted processes remain in memory
Nonpreemptive processes
•Run until completion or until they yield control of a processor
•Unimportant processes can block important ones indefinitely
http://www.cs.kent.edu/~rmuhamma/
11. Reference:-
CPU Scheduler
Short-term scheduler selects from among the processes in the ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
Silberchatz, Galvin and Gagne Operating System, 9th Edition
12. Reference:-
Scheduling Criteria
■ CPU utilization – keep the CPU as busy as possible
■ Throughput – # of processes that complete their execution per time unit
■ Turnaround time – amount of time to execute a particular process
■ Waiting time – amount of time a process has been waiting in the ready queue
■ Response time – amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environments)
Silberchatz, Galvin and Gagne Operating System, 9th Edition
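These criteria can be illustrated with a small worked example. The arrival, first-run, finish, and burst values below are hypothetical numbers chosen only for illustration:

```python
# Hypothetical trace of one process: arrives at t=0, first gets the CPU
# at t=2, finishes at t=12, and uses 7 time units of CPU in total.
arrival, first_run, finish, burst = 0, 2, 12, 7

turnaround = finish - arrival        # total time spent in the system
waiting = turnaround - burst         # time spent sitting in the ready queue
response = first_run - arrival       # time until the first response

print(turnaround, waiting, response)  # 12 5 2
```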
13. Reference:-
Scheduling Algorithm Optimization Criteria
■ Max CPU utilization
■ Max throughput
■ Min turnaround time
■ Min waiting time
■ Min response time
Silberchatz, Galvin and Gagne Operating System, 9th Edition
16. Reference:-
FIFO
Process   Burst Time
P1        24
P2        3
P3        3
■ Suppose that the processes arrive in the order: P1, P2, P3
The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
■ Waiting time for P1 = 0; P2 = 24; P3 = 27
■ Average waiting time: (0 + 24 + 27)/3 = 17
17. Reference:-
FIFO (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
■ The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
■ Waiting time for P1 = 6; P2 = 0; P3 = 3
■ Average waiting time: (6 + 0 + 3)/3 = 3
■ Much better than previous case
■ Convoy effect – short process behind long process
– Consider one CPU-bound and many I/O-bound processes
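The two orderings above can be checked with a minimal FCFS sketch. The function name and structure are illustrative, not from the slides; all processes are assumed to arrive at time 0 in the given order:

```python
# Minimal FCFS (FIFO) sketch: each process waits for all earlier bursts.
def fcfs_waiting_times(bursts):
    """Return per-process waiting times, given burst times in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waits until every earlier burst has finished
        clock += burst
    return waits

w = fcfs_waiting_times([24, 3, 3])   # order P1, P2, P3
print(w, sum(w) / len(w))            # [0, 24, 27] 17.0
w = fcfs_waiting_times([3, 3, 24])   # order P2, P3, P1
print(w, sum(w) / len(w))            # [0, 3, 6] 3.0
```

The same three bursts give an average wait of 17 or 3 depending only on arrival order, which is the convoy effect in miniature.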
18. Reference:-
Round-Robin (RR) Scheduling
■ Round-robin scheduling
– Based on FIFO
– Processes run only for a limited amount of time called a time slice or quantum
– Preemptible
– Requires the system to maintain several processes in memory to minimize overhead
– Often used as part of more complex algorithms
19. Reference:-
Round Robin (RR) Scheduling
■ Each process gets a small unit of CPU time (time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
■ If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n−1)q time units.
■ Timer interrupts every quantum to schedule the next process
■ Performance
– q large ⇒ RR degenerates to FIFO
– q small ⇒ context-switch overhead dominates; q must be large with respect to context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
■ The Gantt chart is:
■ Typically, higher average turnaround than SJF, but better response
■ q should be large compared to context switch time
■ q usually 10ms to 100ms, context switch < 10 usec
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
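A minimal RR simulation (the function name `round_robin` is my own) reproduces this schedule for q = 4:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin for processes all arriving at t=0.
    bursts: list of (name, burst). Returns (completion_times, gantt)."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    t, gantt, done = 0, [], {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run for at most one quantum
        gantt.append((name, t, t + run))
        t += run
        remaining[name] -= run
        if remaining[name]:
            queue.append(name)                # preempted: back of the queue
        else:
            done[name] = t
    return done, gantt

done, gantt = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], 4)
print(done)   # completion times: P1=30, P2=7, P3=10
print(gantt)  # segment boundaries 0, 4, 7, 10, 14, 18, 22, 26, 30
```

Subtracting each burst from its completion time gives waiting times of 6, 4 and 7, i.e. the average of about 5.67 time units quoted for this example.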
Round-Robin (RR) Scheduling
■ Selfish round-robin scheduling
– Increases priority as process ages
– Two queues
■ Active
■ Holding
– Favors older processes to avoid unreasonable delays
Round-Robin (RR) Scheduling
■ Quantum size
– Determines response time to interactive requests
– Very large quantum size
■ Processes run for long periods
■ Degenerates to FIFO
– Very small quantum size
■ System spends more time context switching than running processes
– Middle-ground
■ Long enough for interactive processes to issue I/O request
■ Batch processes still get majority of processor time
Shortest-Process-First (SPF) Scheduling
■ Scheduler selects process with smallest time to finish
– Lower average wait time than FIFO
■ Reduces the number of waiting processes
– Potentially large variance in wait times
– Nonpreemptive
■ Results in slow response times to arriving interactive requests
– Relies on estimates of time-to-completion
■ Can be inaccurate or falsified
– Unsuitable for use in modern interactive systems
Shortest-Process-First (SPF) Scheduling
■ Associate with each process the length of its next CPU
burst
– Use these lengths to schedule the process with the
shortest time
■ SJF is optimal – gives minimum average waiting time for a
given set of processes
– The difficulty is knowing the length of the next CPU
request
– Could ask the user
Example of SJF
Process  Arrival Time  Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
■ SJF scheduling chart
■ Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
| P4 | P1 | P3 | P2 |
0    3    9    16   24
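Note that the slide's chart schedules all four processes as if they were available at time 0, despite the listed arrival times. Under that assumption, the calculation can be sketched as follows (the helper name `sjf_waiting_times` is my own):

```python
def sjf_waiting_times(procs):
    """Non-preemptive SJF with all processes available at t=0,
    as in the slide's chart. procs: list of (name, burst)."""
    t, waits = 0, {}
    for name, burst in sorted(procs, key=lambda p: p[1]):
        waits[name] = t          # shortest remaining job runs next
        t += burst
    return waits

procs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]
print(sjf_waiting_times(procs))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} -> average (3 + 16 + 9 + 0)/4 = 7
```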
Determining Length of Next CPU Burst
■ Can only estimate the length – should be similar to the previous one
– Then pick process with shortest predicted next CPU burst
■ Can be done by using the length of previous CPU bursts, using
exponential averaging
■ Commonly, α set to ½
■ Preemptive version called shortest-remaining-time-first
1. t_n = actual length of the nth CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α t_n + (1 − α) τ_n
Examples of Exponential Averaging
■ α = 0
– τ_{n+1} = τ_n
– Recent history does not count
■ α = 1
– τ_{n+1} = t_n
– Only the actual last CPU burst counts
■ If we expand the formula, we get:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
■ Since both α and (1 − α) are less than or equal to 1, each successive term has less
weight than its predecessor
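The recurrence above is easy to verify numerically. In this sketch the function name is my own, and the burst sequence with τ_0 = 10 and α = ½ follows the textbook's worked figure, but treat those values as illustrative:

```python
def predict_next_burst(tau0, bursts, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    Returns the sequence of predictions, starting with the initial guess."""
    tau, history = tau0, [tau0]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        history.append(tau)
    return history

# alpha = 1/2: predictions track the bursts with smoothing
print(predict_next_burst(10, [6, 4, 6, 4, 13, 13, 13]))
# alpha = 0: recent history never counts, prediction stays at tau0
print(predict_next_burst(10, [6, 4], alpha=0))   # [10, 10, 10]
# alpha = 1: only the last actual burst counts
print(predict_next_burst(10, [6, 4], alpha=1))   # [10, 6, 4]
```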
Example of Shortest-remaining-time-first
■ Now we add the concepts of varying arrival times and preemption to the analysis
Process  Arrival Time  Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
■ Preemptive SJF Gantt Chart
■ Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
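A unit-step simulation (my own sketch, not the textbook's code) reproduces these numbers: at every time unit the scheduler picks the ready process with the shortest remaining time, preempting whatever was running.

```python
def srtf(procs):
    """Preemptive SJF (shortest-remaining-time-first), simulated one
    time unit at a time. procs: list of (name, arrival, burst).
    Returns a dict of completion times."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    t, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1               # CPU idle until next arrival
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1        # run one unit, then re-evaluate
        t += 1
        if remaining[n] == 0:
            done[n] = t
            del remaining[n]
    return done

done = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(done)  # P2 finishes at 5, P4 at 10, P1 at 17, P3 at 26
```

Waiting time is completion − arrival − burst: 9, 0, 15 and 2, giving the 26/4 = 6.5 msec average above.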
Multilevel Queue
■ Ready queue is partitioned into separate queues, eg:
– foreground (interactive)
– background (batch)
■ Each process remains permanently in its assigned queue
■ Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
■ Scheduling must be done between the queues:
– Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e., 80% to foreground in RR
– 20% to background in FCFS
Multilevel Feedback Queues
■ Different processes have different needs
– Short I/O-bound interactive processes should generally run before
processor-bound batch processes
– Behavior patterns not immediately obvious to the scheduler
■ Multilevel feedback queues
– Arriving processes enter the highest-level queue and execute with higher
priority than processes in lower queues
– Long processes repeatedly descend into lower levels
■ Gives short processes and I/O-bound processes higher priority
■ Long processes will run when short and I/O-bound processes terminate
– Processes in each queue are serviced using round-robin
■ Processes entering a higher-level queue preempt running processes
Multilevel Feedback Queues
■ Algorithm must respond to changes in environment
– Move processes to different queues as they alternate between
interactive and batch behavior
■ Example of an adaptive mechanism
– Adaptive mechanisms incur overhead that often is offset by
increased sensitivity to process behavior
Multilevel Feedback Queue
■ A process can move between the various queues; aging can be
implemented this way
■ Multilevel-feedback-queue scheduler defined by the following
parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter
when that process needs service
Example of Multilevel Feedback Queue
■ Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
■ Scheduling
– A new job enters queue Q0 which is served
FCFS
■ When it gains CPU, job receives 8
milliseconds
■ If it does not finish in 8
milliseconds, job is moved to queue
Q1
– At Q1 job is again served FCFS and
receives 16 additional milliseconds
■ If it still does not complete, it is
preempted and moved to queue Q2
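The three-queue scheme can be sketched as follows. This is a simplified model under assumptions of my own: all jobs arrive at time 0, queues are drained in strict priority order, and later arrivals never preempt (the function name `mlfq` and the job names are also mine):

```python
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    """Three-level feedback queue sketch: Q0 and Q1 are RR with the
    given quanta, Q2 runs jobs to completion (FCFS). All jobs arrive
    at t=0. jobs: list of (name, burst). Returns an execution log of
    (name, queue_level, start, end) segments."""
    queues = [deque(jobs), deque(), deque()]
    t, log = 0, []
    for level in range(3):
        q = queues[level]
        while q:
            name, rem = q.popleft()
            run = rem if level == 2 else min(quanta[level], rem)
            log.append((name, level, t, t + run))
            t += run
            if run < rem:                      # quantum expired: demote
                queues[level + 1].append((name, rem - run))
    return log

# A 30 ms job uses its 8 ms in Q0, 16 more in Q1, and finishes in Q2:
print(mlfq([("A", 30)]))
# A 5 ms job never leaves Q0, so it runs ahead of the long job's Q1/Q2 work:
print(mlfq([("A", 30), ("B", 5)]))
```

The log for the second call shows B completing at t = 13 while A is still being demoted, which is the behavior the slides describe: short jobs keep higher priority, long jobs sink toward FCFS.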