This document discusses various concepts related to CPU scheduling. It begins with definitions of scheduling and explains that the CPU requires a mechanism to allocate time to different processes in a fair manner. It then covers key scheduling concepts like scheduling levels (high, intermediate, low), types (preemptive vs non-preemptive), objectives, and algorithms like FCFS, SJF, priority scheduling, and round robin. The document provides examples and comparisons of different scheduling techniques.
The document discusses operating system concepts including CPU scheduling, process states, and scheduling algorithms. It covers historical perspectives on CPU scheduling and bursts, preemptive vs. nonpreemptive scheduling, and scheduling criteria. Common scheduling algorithms like first-come, first-served (FCFS), shortest-job-first (SJF), priority, and round robin are described. The roles of long-term and short-term schedulers are defined.
This document discusses various CPU scheduling algorithms such as FCFS, SJF, priority scheduling, and round robin. It covers the basic concepts of scheduling, criteria for evaluating algorithms, examples of single processor and multiprocessor scheduling, and examples of scheduling in operating systems like Windows and Mac OS. The goal of scheduling is to allocate CPU time effectively among competing processes or threads.
The document discusses various CPU scheduling algorithms including first come first served, shortest job first, priority, and round robin. It describes the basic concepts of CPU scheduling and criteria for evaluating algorithms. Implementation details are provided for shortest job first, priority, and round robin scheduling in C++.
Task scheduling is needed to manage every process that contends for a processor. No single algorithm performs best on every problem: FCFS can outperform the others when burst times are short, while Round Robin handles many simultaneous processes better, and the order in which processes arrive cannot be predicted. Average waiting time is the standard measure for evaluating a scheduling algorithm, and several techniques have been applied to keep CPU performance steady. The objective of this paper is to compare three algorithms, FCFS, SJF, and Round Robin, and determine which is most suitable for a given kind of workload.
The document discusses various scheduling algorithms: First Come First Served (FCFS), Shortest Job First (SJF), priority scheduling, and round robin (RR). FCFS schedules processes in the order that they arrive. SJF selects the process with the shortest estimated runtime. Priority scheduling prioritizes processes based on assigned priorities. Round robin gives each process a time slice or quantum to use the CPU before switching to another process. Examples are provided to illustrate how each algorithm works.
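The FCFS rule described above can be sketched in a few lines: each process waits for the combined burst time of everything ahead of it in the queue. This is a minimal sketch; the process names and burst times are hypothetical.

```python
def fcfs_waits(bursts):
    """FCFS: each process waits for the total burst time of those ahead of it.
    bursts: list of (name, burst) pairs in arrival order. Returns {name: wait}."""
    waits, elapsed = {}, 0
    for name, burst in bursts:
        waits[name] = elapsed  # time already consumed by earlier arrivals
        elapsed += burst
    return waits

# Hypothetical arrival order and burst times
w = fcfs_waits([("P1", 24), ("P2", 3), ("P3", 3)])
avg = sum(w.values()) / len(w)  # (0 + 24 + 27) / 3 = 17.0
```

Reversing the arrival order so the short jobs run first drops the average wait sharply, which is exactly the long-wait ("convoy") behavior FCFS is criticized for.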
This document discusses various CPU scheduling algorithms and concepts. It covers scheduling criteria like CPU utilization and turnaround time. Algorithms discussed include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also covers multiple processor scheduling, real-time scheduling, and evaluating scheduling algorithms.
This document discusses various concepts and algorithms related to process scheduling. It covers basic concepts like CPU bursts and scheduling criteria. It then describes several common scheduling algorithms like FCFS, SJF, priority scheduling, and round robin. It also discusses more advanced topics like multiple processor scheduling, thread scheduling, and load balancing.
This Presention contains Cpu scheduling algorithms,Scheduling Criteria,process sychroization,mutilevel feed back que,critical section problem anad semaphores,Synchoroniztion hardware
CPU scheduling allows processes to share the CPU by pausing execution of some processes to allow others to run. The scheduler selects which process in memory runs on the CPU. There are four types of scheduling decisions: when a process pauses for I/O, switches from running to ready, finishes I/O, or terminates. Scheduling can be preemptive, where a higher priority process interrupts a running one, or non-preemptive. Common algorithms are first come first serve, shortest job first, priority, and round robin. Real-time scheduling aims to process data without delays and ensures the highest priority tasks run first.
CPU scheduling decides which processes run when multiple are ready. It aims to make the system efficient, fast and fair. There are different scheduling algorithms like first come first serve (FCFS), shortest job first (SJF), priority scheduling, and round robin. Multi-level feedback queue scheduling uses multiple queues and allows processes to move between queues based on their CPU usage to prioritize shorter interactive processes.
The document discusses different CPU scheduling algorithms used in operating systems. It describes non-preemptive and preemptive scheduling and explains the key differences. It then covers four common scheduling algorithms - first come first served (FCFS), round robin, priority scheduling, and shortest job first (SJF) - and compares their advantages and disadvantages.
The document discusses various CPU scheduling algorithms used in operating systems. It describes the main objective of CPU scheduling as maximizing CPU utilization by allowing multiple processes to share the CPU. It then explains different scheduling criteria like throughput, turnaround time, waiting time and response time. Finally, it summarizes common scheduling algorithms like first come first served, shortest job first, priority scheduling and round robin scheduling.
Here are the key steps in preemptive SJF (shortest-remaining-time-first) scheduling, using the classic example where jobs A, B, C, D arrive at times 0, 1, 2, 3 with burst times 8, 4, 9, 5:
1. Job A starts at time 0, since it is the only job present.
2. When job B arrives at time 1, its 4-unit burst is shorter than A's 7 remaining units, so B preempts A and runs to completion at time 5.
3. Job D has the next shortest remaining burst of 5, so it runs from time 5 to 10.
4. Job A resumes with its 7 remaining units and finishes at time 17.
5. Job C, with the longest burst of 9, runs last and finishes at time 26.
6. Each job's wait is its completion time minus its arrival time and burst: A waits 9, B waits 0, C waits 15, D waits 2.
The average wait time is (9 + 0 + 15 + 2) / 4 = 6.5
This example shows how preemptive SJF lets short jobs finish quickly at the cost of interrupting longer ones.
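The example above can be checked with a short simulation of shortest-remaining-time-first (preemptive SJF) scheduling. The arrival times 0, 1, 2, 3 and burst times 8, 4, 9, 5 used below are an assumption, reconstructed to match the stated per-job waits of 9, 0, 15, and 2:

```python
def srtf(procs):
    """Simulate shortest-remaining-time-first (preemptive SJF) scheduling,
    one time unit at a time. procs: list of (name, arrival, burst).
    Returns {name: wait_time} where wait = completion - arrival - burst."""
    remaining = {name: burst for name, _, burst in procs}
    finish, t = {}, 0
    while remaining:
        ready = [name for name, arr, _ in procs
                 if name in remaining and arr <= t]
        if not ready:
            t += 1  # CPU idles until the next arrival
            continue
        name = min(ready, key=lambda n: remaining[n])  # least remaining time
        remaining[name] -= 1
        t += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = t
    return {name: finish[name] - arr - burst for name, arr, burst in procs}

jobs = [("A", 0, 8), ("B", 1, 4), ("C", 2, 9), ("D", 3, 5)]
waits = srtf(jobs)
avg = sum(waits.values()) / len(waits)  # 6.5
```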
The document discusses various scheduling algorithms used in operating systems including:
- First Come First Serve (FCFS) scheduling which services processes in the order of arrival but can lead to long waiting times.
- Shortest Job First (SJF) scheduling which prioritizes the shortest processes first to minimize waiting times. It can be preemptive or non-preemptive.
- Priority scheduling assigns priorities to processes and services the highest priority process first, which can potentially cause starvation of low priority processes.
- Round Robin scheduling allows equal CPU access to all processes by allowing each a small time quantum or slice before preempting to the next process.
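The round-robin time-quantum idea in the last bullet can be sketched with a simple queue-based simulation. This assumes all jobs arrive at time 0; the job names, bursts, and quantum are hypothetical.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin with a fixed time quantum; all jobs arrive at t=0.
    bursts: {name: burst}. Returns {name: completion_time}."""
    queue = deque(bursts)        # ready queue in arrival order
    remaining = dict(bursts)
    t, done = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # run one quantum, or less
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = t
        else:
            queue.append(name)   # unfinished job rejoins the back of the queue
    return done

# Hypothetical jobs and a quantum of 4 time units
finish = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
```

Note how the short jobs P2 and P3 finish early even though the long job P1 arrived first; a larger quantum would make the schedule degenerate toward FCFS.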
This document discusses different CPU scheduling algorithms. It describes the First Come First Serve (FCFS), Shortest Job First (SJF), Priority Scheduling (PS), and Round Robin (RR) algorithms. Each algorithm is evaluated based on criteria like average turnaround time, waiting time, and CPU utilization. FCFS is found to have the highest CPU utilization but higher average turnaround times. SJF provides the lowest average turnaround times but can cause starvation of longer jobs. RR provides fairness but the time quantum setting impacts efficiency. The best algorithm depends on the specific performance measures and system requirements.
The document discusses different scheduling algorithms used by operating systems. It begins by explaining that the scheduler decides which process to activate from the ready queue when multiple processes are runnable. There are long-term, medium-term, and short-term schedulers that control admission of jobs, memory allocation, and CPU sharing respectively. The goal of scheduling is to optimize system performance and resource utilization while providing responsive service. Common algorithms include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round-robin. FCFS schedules processes in the order of arrival while SJF selects the shortest job first. Priority scheduling preempts lower priority jobs.
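A non-preemptive variant of the priority scheduling just mentioned can be sketched as a sort on priority. This assumes all jobs are ready at time 0 and that a lower number means a higher priority; the names, priorities, and bursts are hypothetical.

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all jobs ready at t=0.
    procs: list of (name, priority, burst); a lower number means a higher
    priority. Returns (run order, {name: wait})."""
    order = sorted(procs, key=lambda p: p[1])  # highest priority first
    waits, t = {}, 0
    for name, _, burst in order:
        waits[name] = t
        t += burst
    return [p[0] for p in order], waits

# Hypothetical priorities and bursts
run_order, waits = priority_schedule(
    [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 5, 1), ("P5", 2, 5)])
avg = sum(waits.values()) / len(waits)  # 8.2
```

If high-priority jobs kept arriving, a low-priority job like P4 could wait indefinitely; aging (gradually raising the priority of waiting jobs) is the usual remedy for that starvation.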
This document discusses various CPU scheduling algorithms used in operating systems. It describes the roles of the CPU scheduler and dispatcher in selecting processes to run. Common scheduling criteria like CPU utilization, throughput, turnaround time, and waiting time are discussed. First Come First Served (FCFS) scheduling is illustrated with an example. Shortest Job First (SJF) scheduling, in both preemptive and non-preemptive variants, is explained with examples. Round Robin scheduling and its time-quantum concept are outlined. Other algorithms, such as priority scheduling, are also briefly covered.
This document summarizes a chapter on CPU scheduling from a textbook. It discusses key concepts in CPU scheduling like scheduling criteria, algorithms, and evaluation methods. The major scheduling algorithms covered are first-come, first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also discusses more advanced topics like multilevel queue scheduling, multilevel feedback queues, multiple processor scheduling, and real-time scheduling.
CPU scheduling allows multiple processes to share the CPU using time multiplexing. There are three main types of schedulers: long-term, short-term, and medium-term. Scheduling algorithms can be preemptive or non-preemptive. Common algorithms include first-come, first-served (FCFS), shortest job first (SJF), and round robin scheduling (RRS). FCFS uses a FIFO queue, SJF aims to minimize average waiting time, and RRS provides fair time sharing between processes.
17 CPU scheduling and scheduling criteria (myrajendra)
This document discusses CPU scheduling and scheduling criteria. It covers the CPU scheduler, or short-term scheduler, which selects a process from the ready queue and allocates the CPU to it. It describes various scheduling criteria, such as CPU utilization, throughput, turnaround time, waiting time, and response time, that are used to compare scheduling algorithms. The goal of scheduling is to keep the CPU busy while maximizing throughput and minimizing the waiting and turnaround times of processes.
This document provides an overview of CPU scheduling concepts and algorithms. It discusses the goals of CPU scheduling including maximizing CPU utilization and minimizing response time. Common single-processor scheduling algorithms like FCFS, SJF, priority, and round robin are described. The document also introduces the concept of multi-level queue scheduling to handle different process types and priorities.
The document discusses different CPU scheduling algorithms used in operating systems. It provides an overview of scheduling concepts like processes waiting in the ready queue for CPU time, and the dispatcher that allocates the CPU to the selected process. It then describes in detail common scheduling algorithms like FCFS, SJF, priority scheduling, and round robin. It also discusses more advanced topics like multilevel queue scheduling, factors that influence algorithm performance like time quantum size, and methods for evaluating scheduling algorithms.
This document discusses different CPU scheduling algorithms. It covers scheduling criteria like CPU utilization and waiting time. It then describes common scheduling algorithms like First Come First Served, Shortest Job First, Priority Scheduling, and Round Robin. For each algorithm, it provides an example of how processes would be scheduled using a Gantt chart and calculates the average waiting time.
This document discusses different CPU scheduling algorithms. It introduces CPU scheduling and describes how processes are managed through job queues, ready queues, and I/O queues. The main scheduling algorithms covered are First Come First Served, Shortest Job First, and Priority Scheduling. The algorithms differ in how they determine which waiting process executes next, using criteria like throughput, waiting time, and response time.
The document discusses processes, CPU scheduling, and process synchronization. It covers:
- Process concepts including states like running, ready, waiting, and terminated.
- CPU scheduling algorithms like first come first serve, round robin, shortest job first, and priority scheduling. Scheduling objectives are maximizing CPU utilization and minimizing wait time.
- Process synchronization is needed when multiple processes access shared resources. The critical section problem arises when processes need exclusive access to a critical section of code. Solutions ensure mutual exclusion, progress, and bounded waiting.
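The mutual-exclusion requirement in the last bullet can be sketched with a lock guarding a shared counter. This is a minimal illustration; the thread count and iteration counts are arbitrary.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # critical section: only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
# With the lock held around each update, no increments are lost
```

Without the lock, the read-modify-write of `counter += 1` can interleave across threads and silently drop updates, which is precisely the critical-section problem.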
This presentation summarizes six different CPU scheduling algorithms: First Come First Serve (FCFS), Shortest Job First (SJF), Shortest Remaining Time, Priority Scheduling, Round Robin Scheduling, and Multilevel Queue Scheduling. For each algorithm, the presentation provides a brief overview of how it works, its advantages, and disadvantages. It also includes an example calculation of turnaround time and waiting time for FCFS and a comparison chart of the different algorithms. The presentation concludes that scheduling algorithms should not affect system behavior but can impact efficiency and response time, and the best are adaptive to changes.
Process management - This ppt contains all required information regarding oper... (ApurvaLaddha)
The document discusses processes and process management. It defines a process as an active program in execution. Processes fall into two categories - system processes started by the OS and user processes started by the user. Each process runs independently and has its own memory space. Process management allows controlling processes by starting, ending, and setting priorities. Processes pass through states like new, ready, running, wait, and termination. The OS performs operations on processes like creation, scheduling, execution, and deletion. Schedulers like long-term, short-term, and medium-term manage processes. A process control block tracks process information.
LM10,11,12 - CPU scheduling algorithms and its processes (manideepakc)
The document discusses CPU scheduling in operating systems. It covers key concepts like processes alternating between CPU and I/O bursts, the role of the CPU scheduler and dispatcher in selecting the next process to run. It also describes different scheduling algorithms like FCFS, SJF, priority, and round robin scheduling and compares their advantages and disadvantages in optimizing criteria like CPU utilization, wait time, and throughput.
Scheduling is a method used to allocate computing resources like processor time, bandwidth, and memory to processes, threads, and applications. It aims to balance system load, ensure equal distribution of resources, and prioritize processes according to set rules. There are different types of scheduling including long-term, medium-term, and short-term scheduling. Scheduling algorithms decide which process from the ready queue is allocated the CPU based on whether the policy is preemptive or non-preemptive. Common algorithms include first-come first-served, shortest job first, priority scheduling, and round-robin scheduling.
It covers CPU scheduling algorithms, examples, scheduling problems, real-time scheduling algorithms and issues, and multiprocessing and multicore scheduling.
CPU scheduling is the process of determining which process will own the CPU for execution while other processes are on hold. The main task of CPU scheduling is to ensure that whenever the CPU would otherwise sit idle, the OS selects one of the processes available in the ready queue for execution.
This document provides an overview of CPU scheduling concepts and algorithms. It discusses key scheduling concepts like multiprogramming and processes. It then covers various scheduling algorithms like first-come first-served, shortest job first, priority-based, and round robin. It also discusses scheduling criteria, multilevel queues, multiple processor scheduling, real-time scheduling, and how scheduling algorithms are evaluated. The goal of scheduling is to optimize criteria like wait time, response time, and throughput.
This document discusses CPU scheduling in operating systems. It covers basic scheduling concepts like multiprogramming and preemptive scheduling. It then describes the role of the scheduler and dispatcher in selecting which process runs on the CPU. Several common scheduling algorithms are explained like first-come first-served, shortest job first, priority scheduling, and round robin. Factors for evaluating scheduling performance and examples of scheduling in Linux and real-time systems are also summarized.
In computing, scheduling is the action... (nathansel1)
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows. The scheduling activity is carried out by a process called scheduler.
A process represents a program in execution. It progresses sequentially through different states from start to termination. A process has sections for stack, heap, text, and data in memory. Shortest Job First (SJF) scheduling allocates the CPU to the process with the shortest estimated run time remaining. It aims to minimize average waiting times but requires knowing future process durations. First Come First Served (FCFS) scheduling handles processes in the order they arrive without preemption, but is prone to the convoy effect where short jobs wait behind long ones.
The document discusses process scheduling in operating systems. It describes how the scheduler determines which process moves from the ready state to the running state on the CPU. The main goals of scheduling are to keep the CPU busy and provide minimum response times. There are non-preemptive and preemptive schedulers. Processes exist in different queues depending on their state. The three main types of schedulers are long-term, short-term, and medium-term schedulers. Common scheduling algorithms include first-come, first-served; shortest job first; priority; round robin; and multilevel queue scheduling.
The document discusses various concepts related to process management in operating systems including process scheduling, CPU scheduling, and process synchronization. It defines a process as a program in execution and describes the different states a process can be in during its lifecycle. It also discusses process control blocks which maintain information about each process, and various scheduling algorithms like first come first serve, shortest job first, priority and round robin scheduling.
This document discusses different types of process schedulers in an operating system:
1. Long term schedulers determine which programs are admitted to the system and load them into memory.
2. Short term (CPU) schedulers select ready processes and allocate the CPU to one of them.
3. Medium term schedulers handle swapped out processes and can reintroduce processes back into memory.
Context switching allows multiple processes to share the CPU by storing and restoring a process's state from its process control block when switching between processes. This switching is computationally expensive so some hardware uses multiple register sets.
The document discusses process scheduling in operating systems. It defines process scheduling as the activity of selecting which process runs on the CPU. It describes the different queues operating systems use to manage processes, including ready, job, and device queues. It also discusses long-term, short-term, and medium-term schedulers and their roles in managing processes over different timescales. Context switching and cooperating processes are also summarized.
The document discusses CPU scheduling algorithms. It begins by explaining the basic concepts of CPU scheduling, including that the CPU scheduler selects ready processes to execute on the CPU. This allows for multi-programming by switching the CPU among ready processes instead of waiting for each process to finish. The document then discusses different scheduling algorithms like first come first served and shortest job first, and evaluates them based on criteria like CPU utilization, throughput, turnaround time, and waiting time.
Process scheduling involves assigning system resources like CPU time to processes. There are three levels of scheduling - long, medium, and short term. The goals of scheduling are to minimize turnaround time, waiting time, and response time for users while maximizing throughput, CPU utilization, and fairness for the system. Common scheduling algorithms include first come first served, priority scheduling, shortest job first, round robin, and multilevel queue scheduling. Newer algorithms like fair share scheduling and lottery scheduling aim to prevent starvation.
This document discusses various CPU scheduling algorithms including first-come first-served, shortest job first, priority scheduling, round robin scheduling, multilevel queue scheduling, multilevel feedback queue scheduling, and scheduling techniques for multiple processors and multicore processors. It provides examples and comparisons of how each algorithm works and considerations for optimization.
The document discusses various aspects of process scheduling and CPU scheduling. It describes the different queues that an operating system maintains for processes in different states. These include ready queues for processes ready to execute, and device queues for processes waiting on I/O. It also covers schedulers for long term, short term, and medium term scheduling and different scheduling algorithms like FCFS, priority scheduling, and round robin scheduling. Context switching is described as the mechanism to store and restore process states to enable time sharing of the CPU between processes.
2. 2
• Scheduling
The CPU is the brain of the computer system; it performs all the
processing inside the computer.
Programs enter the system, become processes, and are handled by
the OS.
In multi-programming/multi-user systems there are many processes;
each process waits for its turn to utilize the CPU and to perform some
useful task.
That is why the CPU requires some type of mechanism to serve all
the processes uniformly.
"This mechanism is handled by the OS, and this management of the
CPU is called CPU/processor scheduling".
EXPLANATION: When more than one process is in the Ready state,
the OS must decide which one runs first and which ones wait.
The part of the OS which makes this decision is called the "Scheduler";
the algorithm it uses is called the scheduling algorithm.
izazroghani@gmail.com
3. 3
• Scheduling
Scheduling affects the performance of the system because it
determines which processes will wait and which will progress.
Simply put, the problem of determining when the processor
should be assigned to which process (currently in memory) is called
scheduling.
Scheduling is a fundamental OS function.
Since the CPU is such an important resource, it is very important to
develop good scheduling algorithms.
4. 4
• Scheduling Objectives
Scheduling typically attempts to achieve some combination of the
following goals, and the overall scheduling effort is intended to meet
the system's performance and behavior requirements.
THROUGHPUT: Maximize the number of jobs processed/completed
per unit time.
UTILIZATION OF RESOURCES: Maximize utilization of other
resources (disks, printers etc).
RESPONSE TIME: Minimize the response time for interactive users
(on-line users/time sharing) i.e. provide tolerable response time.
TURNAROUND TIME: Minimize the time users wait for their output
(batch users) = waiting time + computation time + I/O time.
FAIRNESS: To make sure that each process gets its share of the
CPU i.e. treated equally.
EFFICIENCY: To keep the CPU busy 100% of the time.
5. 5
• Scheduling Objectives
GRACEFUL DEGRADATION: If the system becomes overloaded, it
should not 'collapse'; it should avoid taking on further load and
temporarily reduce the level of service (response time).
6. 6
• CPU I/O Burst Cycle
The execution of a process consists of an alternation of CPU bursts
and I/O bursts.
A process begins and ends with a CPU burst. In between, CPU
activity is suspended whenever an I/O operation is needed.
I/O BOUND: If the CPU bursts are relatively short compared to the
I/O bursts, then the process is said to be I/O bound.
For example, if the processor is capable of making rapid changes to
a large database stored on a disk faster than the drive mechanism
can perform the read and write operations, the computer is
input/output-bound.
CPU Bound: If the CPU bursts are relatively long compared to the
I/O bursts, then the process is said to be CPU bound.
For example, if a processor is involved in long arithmetic
computations, then the process is CPU bound.
8. 8
• SCHEDULING LEVELS
In a process's life cycle, scheduling can be applied at THREE
levels/points. The scheduling levels describe the point at which a
scheduling mechanism can be used.
• High level Scheduling
Also called Job/Admission/Long-term scheduling.
When a job is entered into the system and becomes a process, that
is high-level scheduling.
After admission, a process starts competing actively for the
resources of the system.
HLS controls the admission of jobs into the system, i.e. it decides
which newly submitted jobs are to be converted into processes and
put into the READY queue to compete for the CPU.
This activity is only really applicable to batch systems.
9. 9
• Intermediate level Scheduling (ILS)
Also called Medium level scheduling (MLS).
MLS applies to systems where a process within the system
(but not currently running, i.e. in the Ready queue) can be swapped
out of memory onto disk (virtual memory) in order to reduce the
system load.
MLS determines which processes shall be allowed to compete for
the CPU and which shall be suspended/blocked, using virtual
memory.
MLS acts as a buffer between the admission of jobs to the system
and the assignment of the CPU to these jobs.
10. 10
• Low level Scheduling (LLS)
The LLS is the most complex and significant of the scheduling levels.
LLS determines which Ready process will be assigned to the
CPU when it is next available, and actually assigns the CPU to this
process.
LLS is performed by the Dispatcher, i.e. it dispatches a process to
the CPU; it operates many times per second.
The dispatcher must therefore reside all the time in main memory.
The LLS will be invoked whenever the current process relinquishes
(leaves/gives up) control (because of an I/O call or an interrupt).
A number of policies have been devised for use in LLS, each of
which has its own advantages and disadvantages.
HOME WORK: Pages # 287….291 from H.M. Deitel.
11. 11
• Scheduling Levels
[Diagram: job states across the scheduling levels: job waiting for
entry → job waiting for initiation → suspended process waiting for
initiation → active processes → running processes → completed]
13. 13
• SCHEDULING TYPES
Scheduling types determine how processes use the CPU after
dispatching, i.e. at the LLS.
Either the process remains in the CPU until its completion, or it
remains there only for a short time (a quantum or time slice).
• Preemptive Scheduling
Preemption basically means that a process may be forcibly removed
from the CPU even if it does not want to release it, i.e. it is still
executing but a higher-priority process needs the CPU.
In a preemptive scheme, the LLS may remove a process from the
RUNNING state in order to allow another process to run.
In a preemptive scheme, a running process may be forced to yield
the CPU (thus returning to the ready list) by an external event rather
than by its own action. Such external events can be:
A higher-priority process enters the system.
A waiting process becomes ready (because of an I/O interrupt).
14. 14
• Non-Preemptive Scheduling
In a non-preemptive scheme, a process once given the processor
will be allowed to run until it completes, terminates, or incurs an
I/O wait.
In other words, a process cannot "forcibly" lose the processor.
Non-preemptive scheduling is also known as "run-to-completion",
which is slightly inaccurate, since the process will lose the processor
if it incurs an I/O wait.
ASSIGNMENT:
Q1: Compare Preemptive with Non-preemptive scheduling?
15. 15
• Preemptive VS Non-Preemptive Scheduling
Preemptive scheduling is useful where a high-priority process
requires rapid attention.
In real-time systems and in interactive timesharing systems,
preemptive scheduling is important.
In preemptive scheduling, interrupts disturb the efficiency of the
CPU.
In non-preemptive scheduling systems, short jobs are made to wait
by long jobs.
Non-preemptive scheduling is easy to implement; it is simple.
In a non-preemptive scheme, high-priority jobs cannot displace
waiting jobs.
16. 16
• Scheduling Criteria
There are many scheduling algorithms, and various criteria to judge
their performance. Different algorithms may favor different types of
processes. A few criteria are:
Max CPU utilization – try to keep the CPU as busy as possible.
Max throughput – number of processes completed in a unit
time.
Min turnaround time – time needed to execute a process, from
submission to completion.
Min response time and waiting time.
A CPU scheduling algorithm should try to maximize the first two
criteria and minimize the rest.
18. 18
• FIFO or FCFS (First-come-first-served)
The simplest scheduling algorithm/policy.
As the name implies, the FCFS policy simply assigns the processor to the
process which is first in the READY queue.
This is a non-preemptive scheme (run-to-completion).
FAIR in a sense: jobs run to completion in arrival order.
UNFAIR in a sense: long jobs make short jobs wait a long time, and
unimportant jobs make important jobs wait.
Its response time is predictable (by Gantt chart).
FIFO/FCFS is rarely used as the master scheme in today's systems,
but it is often embedded within other schemes.
[Queue diagram: ... P4 P3 P2 P1 → CPU, served in FIFO order]
19. 19
• Illustration of FCFS policy
We assume the processes arrive in the READY queue in the
numbered sequence; we can then calculate how long each process
has to wait.

Job   Est. Burst/Run Time   Waiting
1     2 (unit time)         0
2     60                    2
3     1                     62
4     3                     63
5     50                    66

The average waiting time = (0 + 2 + 62 + 63 + 66) / 5 = 38.6 unit time.
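The waiting-time column follows directly from running the jobs in arrival order: each job waits for the total burst time of everything ahead of it. A minimal sketch in Python (the function name is illustrative, not from the slides):

```python
def fcfs_waiting_times(burst_times):
    """Under FCFS, each job's waiting time is the sum of the burst
    times of all jobs that arrived before it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this job waits for everything before it
        elapsed += burst
    return waits

bursts = [2, 60, 1, 3, 50]           # jobs 1..5 from the table above
waits = fcfs_waiting_times(bursts)
print(waits)                          # [0, 2, 62, 63, 66]
print(sum(waits) / len(waits))        # 38.6
```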
20. 20
• SJF (Shortest Job First – Non-Preemptive)
It is a Non-preemptive scheduling policy.
In this policy, the waiting job with the shortest estimated run time is
dispatched next.
It is a priority scheme in which SJF favors short jobs at the expense of
longer jobs.
If we use this scheme on the jobs in Table 1 above, we get a rather
different picture, as shown in Table 2.

Job   Est. Burst/Run Time   Waiting
3     1                     0
1     2 (unit time)         1
4     3                     3
5     50                    6
2     60                    56
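Table 2 can be reproduced by sorting the same jobs by burst time before accumulating the waits. A sketch, assuming all five jobs are already in the READY queue at time 0 (the non-preemptive case shown here); the function name is illustrative:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with every job ready at time 0: dispatch in
    increasing burst-time order, accumulating each job's waiting time."""
    waits, elapsed = {}, 0
    for job, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[job] = elapsed
        elapsed += burst
    return waits

waits = sjf_waiting_times({1: 2, 2: 60, 3: 1, 4: 3, 5: 50})
print(waits)   # {3: 0, 1: 1, 4: 3, 5: 6, 2: 56}, matching Table 2
```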
21. 21
• SJF (preemptive version)
We also have a preemptive version of SJF.
In the preemptive version of SJF, a long job in the queue may be
delayed by a succession of smaller jobs arriving in the queue.
In the example of Table 2, it is assumed that the job list is constant;
but in practice, before time 6, when job 5 is due to start, another
job of length, say, 10 minutes (call it x) could arrive and be placed
ahead of job 5.
This queue-jumping effect could recur (occur again) many times,
effectively preventing job 5 from starting at all; this situation is
known as Starvation.

Job   Est. Burst/Run Time   Waiting
3     1                     0
1     2 (unit time)         1
4     3                     3
x     10                    (runs at time 6, ahead of job 5)
5     50                    16
2     60                    66
22. 22
• SRT ( Shortest Remaining Time)
SRT is a preemptive version of SJF, and is useful in timesharing
environment.
In SRT, suppose a process "A" is dispatched to the CPU and starts
processing; however, during its execution another process, say "B",
arrives whose run time is shorter than job A's remaining run time.
Job "A" will then be preempted and job "B" will be allowed to execute.
SRT favors short jobs even more than SJF, since a currently running
long job can be ousted (removed) by a new shorter one.
DANGER OF STARVATION:
The danger of starvation of longer jobs also exists in this scheme:
longer jobs wait for shorter jobs.
Implementation of SRT requires an estimate of the total run time and
measurement of the elapsed run time.
So SRT has a higher overhead than SJF.
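The preemption rule can be made concrete with a unit-time simulation: at every tick, the arrived process with the shortest remaining time gets the CPU. A sketch under illustrative assumptions (a real scheduler would be event-driven rather than stepping one time unit at a time):

```python
import heapq

def srt_schedule(procs):
    """procs: list of (name, arrival, burst). Simulate Shortest
    Remaining Time one time unit at a time; return completion times."""
    procs = sorted(procs, key=lambda p: p[1])          # by arrival
    remaining = {name: burst for name, _, burst in procs}
    done, t, i, ready = {}, 0, 0, []
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= t:     # admit arrivals
            heapq.heappush(ready, (remaining[procs[i][0]], procs[i][0]))
            i += 1
        if not ready:                                  # CPU idle: jump ahead
            t = procs[i][1]
            continue
        rem, name = heapq.heappop(ready)               # shortest remaining runs
        remaining[name] -= 1
        t += 1
        if remaining[name] == 0:
            done[name] = t
        else:
            heapq.heappush(ready, (remaining[name], name))
    return done

# A (burst 50) starts at t=0; B (burst 10) arrives at t=5 and preempts A.
print(srt_schedule([('A', 0, 50), ('B', 5, 10)]))   # {'B': 15, 'A': 60}
```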
23. 23
• HRN (Highest Response Ratio Next)
To correct some of the weaknesses (Danger of starvation) of the
SJF policy, Brinch Hansen developed the HRN strategy.
HRN is a Non-preemptive scheduling, (Run-to-completion).
In HRN, the job with the highest priority value is selected for
running.
The priority of a job is a function of its service (run) time and the
time it has waited.
Dynamic priorities are calculated in HRN according to the formula:
Priority, P = (time waiting + run time) / run time
Longer and shorter jobs are both given favorable treatment.
(Time waiting + run time) is the job's response time, which gives the
policy its name.
24. 24
• HRN
When processes first appear in the READY queue, the "time
waiting" will be zero and hence P will equal 1 for all processes.
Consider two jobs 'A' and 'B', with run times of 10 and 50 minutes
respectively. After each has waited 5 minutes, their respective
priorities are:
A: P = (5 + 10) / 10 = 1.5    B: P = (5 + 50) / 50 = 1.1
On this basis, the shorter job A will be selected.
However, if 'A' has just arrived (wait time = 0), then according to
the formula the priority of process 'A' will be P = 1; and since the
priority of process 'B' is P = 1.1, in this situation process 'B' will
be chosen in preference to 'A'.
This technique ensures that a job cannot be starved.
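Both situations on this slide follow directly from the formula. A small sketch (the helper names are illustrative, not from the slides):

```python
def hrn_priority(time_waiting, run_time):
    """Brinch Hansen's response ratio: P = (time waiting + run time) / run time."""
    return (time_waiting + run_time) / run_time

def hrn_next(jobs):
    """jobs: {name: (time_waiting, run_time)}. Dispatch the job with
    the highest response ratio next (non-preemptive)."""
    return max(jobs, key=lambda name: hrn_priority(*jobs[name]))

# The slide's example: A (run time 10) and B (run time 50).
print(hrn_priority(5, 10))                       # 1.5
print(hrn_priority(5, 50))                       # 1.1
print(hrn_next({'A': (5, 10), 'B': (5, 50)}))    # 'A' (both waited 5 min)
print(hrn_next({'A': (0, 10), 'B': (5, 50)}))    # 'B' (A just arrived)
```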
25. 25
• ASSIGNMENT
Q1: GIVE AN EXAMPLE OF HRN POLICY WITH THE HELP OF
GANTT CHART?
WHAT IS ONE OF THE MAJOR PROBLEM WITH PRIORITY
BASED SCHEDULING AND WHAT IS THEIR SOLUTION?
BOOK: CHAPTER # 6, BY SILBERSCHATZ
26. 26
• ROUND ROBIN SCHEDULING (RRS)
The oldest, simplest, fairest and most widely used scheduling policy.
In the RR scheme, a process is selected for running from the READY queue in
FIFO (First in First out) sequence.
Each process which enters the CPU is given a limited amount of CPU
time, called a "time slice" or "quantum".
If the process runs beyond its time slice, it is interrupted and
returned to the end of the READY queue.
In other words, each active process is given a time slice in rotation.
A time slice is usually 10 – 100 milliseconds.
[Queue diagram: ... P4 P3 P2 P1 → CPU; a preempted process returns
to the tail of the queue]
27. 27
• RR Scheduling
RR is effective in timesharing environment in which the system
needs to guarantee reasonable response time for interactive users.
The RR scheme is preemptive, but preemption occurs only on
expiry of the time quantum.
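The rotation of time slices can be sketched with a simple queue, assuming all processes are ready at time 0 and ignoring context-switch overhead (the function name is illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (process, burst time). Return the sequence of
    (process, slice length) CPU slices under round robin; a process
    that exceeds its quantum rejoins the tail of the queue."""
    queue = deque(bursts)
    slices = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        slices.append((name, run))
        if remaining > run:                    # quantum expired: requeue
            queue.append((name, remaining - run))
    return slices

print(round_robin([('P1', 5), ('P2', 3)], quantum=2))
# [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```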
28. 28
• Multi-level Feedback Queues (MFQ)
The MFQ scheme is an attempt to provide a more adaptive policy
which treats processes on the basis of their past behavior.
The figure shows a typical setup for an MFQ system. It consists of a
number of separate queues of entries which represent active
processes.
Each queue represents a different priority, with the top queue being
the highest priority and lower queues successively lower priorities.
Within each queue, the queued processes are treated in
FIFO/FCFS fashion, with a time quantum being allotted to each
process in the queue.
A new process enters the system at the end of the top queue and
executes for the time slice allotted to it.
Upon expiry of the quantum, it moves to the end of the next lower
queue.
Eventually it reaches the lowest-level queue, where an RR scheme
is applied and it executes there until the process completes.
30. 30
•Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0, which is served FCFS. When it gains the
CPU, the job receives 8 milliseconds. If it does not finish in 8
milliseconds, the job is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
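The three-queue example can be sketched as follows, under two simplifying assumptions stated here: every job is present at time 0, and a running job is never preempted by a new arrival in a higher queue (a full MFQ scheduler would allow that). The function name is illustrative:

```python
from collections import deque

def mfq(bursts):
    """Three-queue multilevel feedback sketch from the example:
    Q0 (quantum 8) -> Q1 (quantum 16) -> Q2 (FCFS, run to completion).
    bursts: {name: total CPU time}. Returns (name, queue, run) slices."""
    quanta = [8, 16, None]                    # None = run to completion
    queues = [deque(bursts.items()), deque(), deque()]
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        q = quanta[level]
        run = remaining if q is None else min(q, remaining)
        trace.append((name, level, run))
        if remaining > run:                   # quantum expired: demote
            queues[level + 1].append((name, remaining - run))
    return trace

print(mfq({'J1': 30}))   # [('J1', 0, 8), ('J1', 1, 16), ('J1', 2, 6)]
```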
31. 31
ASSIGNMENT
Q1: WHAT IS DISPATCHER, AND WHAT IS MEANT BY
DISPATCH LATENCY?
Q2: WHAT ARE THE ADVANTAGES AND DISADVANTAGES OF
PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING
ALGORITHMS?
Q3: WHAT ARE STATIC AND DYNAMIC PRIORITIES?
Q4: WHAT IS CONVOY EFFECT IN FCFS SCHEDULING
ALGORITHM?