This document covers process scheduling in operating systems: basic process-management concepts, the goals of CPU scheduling (minimizing waiting time and turnaround time while maximizing CPU utilization), and several CPU scheduling algorithms in detail, including first-come, first-served (FCFS), shortest job first (SJF), priority-based scheduling, and round robin (RR), with examples illustrating how each algorithm works.
Lecture 4 - Process Scheduling.pptx
1. Operating Systems: CSE 3204
ASTU, Department of CSE
January 4, 2023
Chapter Two: Process Management
Lecture 4: Process Scheduling
2. Basic Concepts
• In a single-process/single-processor system, only one process can run at a time.
• If there are multiple processes in the system, some processes must wait until the CPU is free and can be allocated to them.
• At any instant, multiple processes may be waiting to occupy the CPU (e.g. just after the currently running process yields control of the CPU).
• Which process gets the CPU once the current process finishes its turn is a very important question, not only for keeping the system functioning: as we will see, the decision about which process runs next can be made in many ways, and it can affect the performance of the system.
3. Process scheduling
• The particular way in which a process is selected from a queue of processes in order to assign it the CPU is called a CPU scheduling algorithm.
• While a CPU scheduling algorithm assigns the processor(s) to process(es), the general problem of process scheduling is a little broader: it also covers keeping processes in different queues, in memory, in the swap area, or on the CPU, while keeping an eye on system performance.
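To make this concrete, here is a minimal sketch in Python (the Process record, its fields, and the burst values are illustrative assumptions, not taken from the slides): the "scheduling algorithm" is simply the rule used to pick one process out of the ready queue.

```python
from collections import namedtuple

# Hypothetical process record; names and fields are illustrative only.
Process = namedtuple("Process", ["pid", "arrival_time", "burst_time"])

def select_fcfs(ready_queue):
    """FCFS policy: pick the process that arrived earliest."""
    return min(ready_queue, key=lambda p: p.arrival_time)

def select_sjf(ready_queue):
    """SJF policy: pick the process with the shortest CPU burst."""
    return min(ready_queue, key=lambda p: p.burst_time)

def schedule_next(ready_queue, policy):
    """The scheduling algorithm is just the policy used to pick
    one process out of the ready queue."""
    chosen = policy(ready_queue)
    ready_queue.remove(chosen)
    return chosen

ready = [Process("P1", 0, 24), Process("P2", 1, 3), Process("P3", 2, 3)]
print(schedule_next(list(ready), select_fcfs).pid)  # P1 (earliest arrival)
print(schedule_next(list(ready), select_sjf).pid)   # P2 (shortest burst)
```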
4. CPU Scheduling
• Almost all resources are scheduled before use.
• The CPU is one of the primary resources of a computer.
• CPU scheduling is the basis of multiprogrammed operating systems.
• By switching the CPU among processes, the operating system can be made more productive.
• Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more runnable processes than available CPUs.
• This assignment is carried out by software known as the scheduler and the dispatcher.
5. CPU - I/O burst
• A process's execution timeline is divided into time slices in which it either executes on the processor or completes an I/O operation. There are two types of "bursts" on the timeline:
• CPU burst: the time allocated to a process, or required by a process, to execute on the CPU.
• I/O burst: the time allocated to, or required by, a process to perform its I/O operation.
• CPU - I/O burst cycle: if we look carefully at the execution timeline of the processes in the system, most processes alternate between CPU and I/O operations, i.e. on the timeline a CPU burst is followed by an I/O burst, and so on. In this alternating sequence, CPU-intensive processes have larger CPU bursts, while I/O-intensive processes have larger I/O burst requirements.
[Figure: CPU - I/O burst cycle]
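A small sketch of the idea (Python; the burst sequence and durations are made up for illustration): by summing the CPU and I/O portions of a process's alternating burst timeline, we can tell whether it is CPU-intensive or I/O-intensive.

```python
# A process's timeline as an alternating sequence of (kind, duration) bursts.
# The example durations are invented for illustration.
timeline = [("cpu", 4), ("io", 30), ("cpu", 5), ("io", 28), ("cpu", 3)]

cpu_time = sum(d for kind, d in timeline if kind == "cpu")
io_time = sum(d for kind, d in timeline if kind == "io")

# An I/O-intensive process spends most of its life in I/O bursts;
# a CPU-intensive one spends most of it in CPU bursts.
label = "I/O-bound" if io_time > cpu_time else "CPU-bound"
print(f"CPU time={cpu_time}, I/O time={io_time} -> {label}")
```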
6. CPU Scheduling
• Whenever the CPU finishes executing a process, the operating system must select another process from the ready queue. (Per the state-transition diagram, a process must be in the ready queue, and not in any other state, to be scheduled next on the CPU.)
• This selection of the next process from the ready queue is done by the scheduler.
• The selection is carried out by the short-term scheduler (CPU scheduler): it selects a process from the processes in memory that are ready for execution and allocates the CPU to it.
• Although CPU bursts differ from computer to computer and from process to process, they tend to follow the frequency curve shown below, with a large number of short CPU bursts and a small number of long CPU bursts.
[Figure: distribution of CPU burst durations]
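As a toy illustration of this frequency curve (synthetic data under the common textbook approximation that burst lengths are roughly exponentially distributed; these are not real measurements), the following sketch bins 1000 random burst lengths and prints a crude histogram; most bursts fall in the shortest bins.

```python
import random

random.seed(42)
# Synthetic CPU burst lengths; illustrative numbers only.
bursts = [random.expovariate(1 / 5.0) for _ in range(1000)]  # mean ~5 ms

# Bucket the bursts into 8 ms-wide bins and print a crude histogram.
bins = [0] * 8
for b in bursts:
    bins[min(int(b // 8), 7)] += 1
for i, count in enumerate(bins):
    print(f"{i*8:3d}-{i*8+7:3d} ms: {'#' * (count // 20)} ({count})")
```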
7. Types of CPU Schedulers
There are three types of process schedulers based on
the source and destination location of the process
being scheduled
• Short term Scheduler(CPU scheduler)
• Medium term Scheduler
• Long term Scheduler
8. Short-term scheduler (CPU scheduler)
• The short-term scheduler, also called the CPU scheduler, is
responsible for selecting a job from the ready queue
and dispatching the selected job for execution on the CPU.
• This scheduler is invoked frequently and should be
implemented very efficiently, with minimum
scheduling overhead.
• How much time a process is allowed on the CPU
depends on the CPU scheduling algorithm used.
9. Types of CPU schedulers
There are two types of CPU schedulers:
a) Preemptive scheduler:
Preemptive scheduling takes effect when a process switches
from the running state to the ready state or from the waiting
state to the ready state. The resources (mainly CPU
cycles) are allocated to a process for a limited
amount of time and are then taken away; the process
is placed back in the ready queue if it still has CPU
burst time remaining, and it stays in the
ready queue until it gets its next chance to execute.
10. b) Non-preemptive scheduler: Non-preemptive
scheduling takes effect when a process terminates or
switches from the running to the waiting state. In this
scheme, once the resources (CPU cycles) are allocated
to a process, the process holds the CPU until it
terminates or reaches a waiting state. Non-preemptive
scheduling never interrupts a process running on the
CPU in the middle of its execution; instead, it waits
until the process completes its CPU burst, and only then
can it allocate the CPU to another process.
11. When preemptive and non-preemptive are used
• CPU scheduling decisions may take place when a process:
1. switches from the running to the waiting state
2. switches from the running to the ready state
3. switches from the waiting to the ready state
4. terminates/exits
• When scheduling occurs only in cases 1 and 4, the scheduling scheme is
called non-preemptive or cooperative; any scheme that also schedules in
cases 2 and 3 is termed preemptive
• In non-preemptive scheduling, once the CPU is allocated to a process, the
process keeps using the CPU until it either finishes its execution or enters a
waiting state
It can be used on virtually any hardware, since it does not require the special
timer hardware needed by preemptive scheduling
• In preemptive scheduling, an interrupt causes the currently running process to
give up the CPU and be replaced by another process (cases 2 and 3)
Preemption affects the design of the operating-system kernel and
incurs costs associated with access to shared data
12. Dispatcher
• The dispatcher is the module that gives control of the CPU to the
process selected by the short-term scheduler.
• This function involves the following:
• switching context
• switching to user mode
• jumping to the proper location in the user program to
restart that program
• The dispatcher should be as fast as possible, since it is invoked
during every process switch. The time it takes for the dispatcher to
stop one process and start another running is known as the dispatch
latency.
13. Medium-term and long-term schedulers
Medium-term: which process to swap in or out?
• Controls which processes remain resident in memory and which jobs must be
swapped out to reduce the degree of multiprogramming
Long-term: which process to admit?
• Determines which programs are admitted to the system for processing, and
thereby controls the degree of multiprogramming
• Attempts to keep a balanced mix of processor-bound and I/O-bound processes
14. CPU Scheduling Criteria
The most common criteria used to compare scheduling algorithms:
CPU utilization
• The fraction of time the CPU is in use (ratio of in-use time to total observation
time)
Throughput
• The number of job completions in a period of time (jobs/second)
Turnaround time
• The interval from the submission of a process to its completion
• It is the sum of the periods spent waiting to get into memory, waiting in the
ready queue, executing on the CPU, and doing I/O
Waiting time
• The sum of the periods spent waiting in the ready queue
Service time
• The time required by a device to handle a request
Response time
• The amount of time from the submission of a request until the first response
is produced
15. CPU Scheduling Optimization criteria
• Maximize CPU utilization
• Maximize throughput
• Minimize turnaround time
• Minimize waiting time
• Minimize response time
Note that:
it is desirable to maximize CPU utilization and throughput
and to minimize turnaround time, waiting time, and response
time.
16. CPU Scheduling Algorithms
• Scheduling deals with the problem of deciding which of the outstanding
requests is to be allocated resources.
• Scheduling algorithms are used for distributing resources among parties
that simultaneously and asynchronously request them; in an OS they are
used to share CPU time among both threads and processes
• The main purposes of scheduling algorithms are to minimize resource
starvation and to ensure fairness among the parties utilizing the
resources.
• There are many different scheduling algorithms:
1. First-Come, First-Served (FCFS)
2. Shortest Job First (SJF)
3. Priority-Based Scheduling
4. Round-Robin Scheduling
5. Multi-Level Queues
6. Multi-Level Feedback Queues
17. 1. First-Come, First-Served (FCFS)
• In this algorithm, the process that requests the CPU first is allocated the
CPU first
• The algorithm is implemented with a FIFO queue
• Arriving jobs are inserted at the tail (rear) of the ready queue, and
the process to be executed next is removed from the front (head) of
the ready queue
• The relative importance of jobs is measured only by their arrival time
• The average waiting time is often quite long
• Throughput can be low, since long processes can hog the CPU
• Turnaround time, waiting time, and response time can be high
• A long CPU-bound process may hog the CPU and force shorter
processes to wait for a prolonged period
• This may lead to a long queue of ready jobs building up in the ready
queue (the convoy effect)
18. 1. First-Come, First-Served (FCFS) (cont.)
• The convoy effect results in lower CPU and device utilization
• It is a non-preemptive algorithm
• A process runs until it blocks for I/O or terminates
• It favors CPU-bound processes:
• A CPU-bound process can monopolize the processor
• I/O-bound processes have to wait until the CPU-bound process
completes
• I/O-bound processes may have to wait even after their I/Os are
completed (poor device utilization)
• Better I/O device utilization could be achieved if I/O-bound processes
had higher priority
19. 1. First-Come, First-Served (FCFS) (cont.)
Example 1:
Consider the following processes, all arriving at time zero, with the length of
the CPU burst given in milliseconds:

Process   Burst Time
P1        27
P2         9
P3         3

• If the processes arrive in the order P1, P2, P3 and are served in FCFS
order, the resulting Gantt chart and waiting times are:

| P1 | P2 | P3 |
0    27   36   39

• Waiting time for P1 is 0 ms, meaning it starts immediately
• Waiting time for P2 is 27 ms before starting
• Waiting time for P3 is 36 ms
• Average waiting time = (0 + 27 + 36)/3 = 21 ms
20. 1. First-Come, First-Served (FCFS) (cont.)
* What if the processes of Example 1 arrived in the order P2, P3, P1? What
would the average waiting time be? Check: [avg. waiting time = 7 ms]. What
do you notice from this?

Example 2:

Process   Arrival time   Service time
1         0              8
2         1              4
3         2              9
4         3              5

| P1 | P2 | P3 | P4 |
0    8    12   21   26

Waiting time for P1 = 0; P2 = 8-1; P3 = 12-2; P4 = 21-3
Average waiting time = ((0) + (8-1) + (12-2) + (21-3))/4 = 35/4 = 8.75 ms
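Both examples (and the reordering question above) can be checked mechanically. The following is a minimal Python sketch, not part of the original slides; the helper name fcfs_waits is ours:

def fcfs_waits(procs):
    # procs: list of (name, arrival_time, burst_time) in queue order.
    time, waits = 0, {}
    for name, arrival, burst in procs:
        start = max(time, arrival)    # the CPU may sit idle until the job arrives
        waits[name] = start - arrival
        time = start + burst
    return waits

# Example 1 (all arrive at 0, order P1, P2, P3):
w = fcfs_waits([("P1", 0, 27), ("P2", 0, 9), ("P3", 0, 3)])
print(sum(w.values()) / 3)    # 21.0 ms

# Example 1 reordered as P2, P3, P1 (the question above):
w = fcfs_waits([("P2", 0, 9), ("P3", 0, 3), ("P1", 0, 27)])
print(sum(w.values()) / 3)    # 7.0 ms -- serving short jobs first helps

# Example 2:
w = fcfs_waits([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(sum(w.values()) / 4)    # 8.75 ms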
21. 2. Shortest Job First (SJF), Shortest Job Next (SJN)
• SJF policy selects the job with the shortest (expected) processing time first.
• With this strategy, the scheduler places the process with the least estimated
processing time remaining next in the queue. This requires advance knowledge
of, or estimates of, the time required for each process to complete
• Two schemes:
Non-preemptive – once the CPU is given to a process, it cannot be preempted
during the current CPU burst
Preemptive – if a new process arrives with a CPU burst length less than the
remaining time of the current process, preempt. This preemptive scheme is
known as Shortest-Remaining-Time-First (SRTF)
• One major difficulty with SJF is the need to know or estimate the processing time
of each job (it amounts to predicting the future!)
• SJF is optimal – it gives the minimum average waiting time for a given set of
processes
• Starvation is possible, especially in a busy system with many small processes
being run
22. 2. Shortest Job First (SJF), Shortest Job Next (SJN)
Example: SJF

Process   Arrival time   Service time
1         0              7
2         2              4
3         4              1
4         5              4

a. Non-preemptive:
Average waiting time = (0 + (8-2) + (7-4) + (12-5))/4 = 4

b. Preemptive (SRTF):
Average waiting time = ((11-2) + (5-4) + (4-4) + (7-5))/4 = 3
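Both averages can be reproduced with a short simulation. The Python sketch below is ours, not from the slides; ties on remaining time go to the process listed first, which here means the earlier arrival:

def sjf(procs, preemptive):
    # procs: dict name -> (arrival, burst); returns waiting time per process.
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1                                  # CPU idle: advance the clock
            continue
        p = min(ready, key=lambda x: remaining[x])  # shortest (remaining) job
        if preemptive:                              # SRTF: re-decide every time unit
            remaining[p] -= 1
            t += 1
        else:                                       # run the chosen job to completion
            t += remaining[p]
            remaining[p] = 0
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    return {p: finish[p] - a - b for p, (a, b) in procs.items()}

procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
for pre in (False, True):
    w = sjf(procs, preemptive=pre)
    print(sum(w.values()) / len(w))   # 4.0 (non-preemptive), then 3.0 (SRTF)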
23. 3. Priority Based Scheduling
• Assign each process a priority and schedule the highest-priority process first.
All processes with the same priority are scheduled FCFS
• The priority may be determined by the user or by some default mechanism
• The system may determine the priority based on memory
requirements, time limits, or other resource usage
• The CPU is allocated to the process with the highest priority
• Can be preemptive or non-preemptive
• Problem: starvation – low-priority processes may never execute
• Solution: aging – as time progresses, increase the priority of waiting
processes
• There is a delicate balance between giving favorable response to interactive
jobs and not starving batch jobs
24. 3. Priority Based Scheduling (cont.)
EXAMPLE: Consider the following processes, all arriving at time zero, with the
length of the CPU burst and the priority of each given (a smaller number means
a higher priority):

Process   Burst Time   Priority
1         10           3
2          1           1
3          2           4
4          1           5
5          5           2

Using priority scheduling, we would schedule these processes according to
the following Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
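Because every process arrives at time 0, the schedule is simply the processes sorted by priority, and the waiting times follow directly. A quick Python check (ours, not from the slides):

# (name, burst, priority); a lower priority number means higher priority
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

t, waits = 0, {}
for name, burst, prio in sorted(procs, key=lambda p: p[2]):
    waits[name] = t               # all arrive at 0, so WT = time before first run
    t += burst
print(waits)                              # P2: 0, P5: 1, P1: 6, P3: 16, P4: 18
print(sum(waits.values()) / len(waits))   # 8.2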
25. 4. Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and
added to the end of the ready queue
• The name of the algorithm comes from the round-robin principle known
from other fields, where each person takes an equal share of something in
turn
• If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units
• Performance depends on the choice of the time quantum q:
• If q is large, RR degenerates into FIFO (FCFS)
• If q is small, context-switch overhead dominates: q must be large with
respect to the context-switch time, otherwise the overhead is too high
26. 4. Round Robin (RR) (cont.)
Example 1: RR with time quantum = 20

Process   Burst time
P1        53
P2        17
P3        68
P4        24

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162

Typically RR gives a higher average turnaround time than SJF, but better
response time.
27. 4. Round Robin (RR) (cont.)
Example 2: RR with time quantum = 4, no priority-based preemption

Process   Arrival time   Service time
1         0              8
2         1              4
3         2              9
4         3              5

Completion times: P1 = 20, P2 = 8, P3 = 26, P4 = 25
Average turnaround time = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25
(The corresponding waiting times are TAT - BT: 12, 3, 15, and 17, averaging 11.75.)
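The RR timeline is easy to get wrong by hand, mainly because of where an expiring process re-enters the queue. The sketch below (ours) queues new arrivals ahead of the process whose quantum just expired, which reproduces both examples:

from collections import deque

def rr(procs, q):
    # procs: list of (name, arrival, burst), sorted by arrival time.
    pending, ready = deque(procs), deque()
    t, finish = 0, {}
    while pending or ready:
        if not ready:                          # CPU idle until the next arrival
            t = max(t, pending[0][1])
        while pending and pending[0][1] <= t:  # admit everything that has arrived
            ready.append(pending.popleft())
        name, arrival, rem = ready.popleft()
        t += min(q, rem)                       # run for one quantum, or less
        while pending and pending[0][1] <= t:  # arrivals during this quantum
            ready.append(pending.popleft())
        if rem > q:
            ready.append((name, arrival, rem - q))  # quantum expired: requeue
        else:
            finish[name] = t                        # process completed
    return finish

print(rr([("P1", 0, 53), ("P2", 0, 17), ("P3", 0, 68), ("P4", 0, 24)], q=20))
# Example 1: {'P2': 37, 'P4': 121, 'P1': 134, 'P3': 162}
print(rr([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)], q=4))
# Example 2: {'P2': 8, 'P1': 20, 'P4': 25, 'P3': 26}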
28. 5. Multilevel Queue Scheduling
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm
• foreground – RR
• background – FCFS
• Scheduling must also be done between the queues, e.g.:
• Fixed-priority scheduling (i.e., serve all from foreground, then from
background) – possibility of starvation
• Time slicing – each queue gets a certain share of CPU time which it can
schedule amongst its processes; e.g., 80% to foreground (RR)
and 20% to background (FCFS)
29. 5. Multilevel Queue Scheduling
For example, one could have separate queues for system, interactive, batch,
favored, and unfavored processes.
30. 6. Multilevel Feedback Queue Scheduling
• A process can move between the various queues
• aging can be implemented this way
• Multilevel-feedback-queue scheduler defined by the following
parameters:
• number of queues
• scheduling algorithms for each queue
• method to determine when to upgrade a process
• method to determine when to demote a process
• method used to determine which queue a process will enter
when that process needs service
31. 6. Multilevel Feedback Queue
Example:
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR with time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling (see the sketch after this list)
– A new job enters queue Q0; when it gains the CPU, the job receives 8
milliseconds. If it does not finish within 8 milliseconds, it is moved to
queue Q1
– In Q1 the job receives 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2
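A minimal Python sketch of this three-queue scheme (ours; for brevity it assumes all jobs are present at time 0 and omits the preemption of lower-queue jobs by new arrivals that a real MLFQ would perform):

from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    # Q0: RR with q=8, Q1: RR with q=16, Q2: FCFS (runs to completion).
    queues = [deque(bursts.items()), deque(), deque()]
    t, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)
        t += run
        if rem > run:
            queues[level + 1].append((name, rem - run))     # unfinished: demote
        else:
            finish[name] = t
    return finish

print(mlfq({"A": 5, "B": 30, "C": 12}))   # {'A': 5, 'C': 41, 'B': 47}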
33. CPU Scheduling: using priorities
Here is how priorities are used in Windows systems:
[Figure: Windows priority classes and relative thread priorities]
34. Scheduling Algorithms
• Real-time systems
• Hard real-time systems – required to complete a critical task within a
guaranteed amount of time
• Soft real-time computing – requires that critical processes receive priority over
less fortunate ones
• Multiple-processor scheduling
Different rules apply for homogeneous and heterogeneous processors.
Load sharing distributes the work so that all processors have an equal
amount to do.
Each processor can schedule from a common ready queue (equal machines),
OR a master-slave arrangement can be used
• Thread scheduling
• Local scheduling – how the user-level threads library decides which thread to
put onto an available LWP (Light Weight Process) – process contention scope
• Global scheduling – how the kernel decides which kernel thread to run next
35. Linux Scheduling
• Two algorithms:
• time-sharing and real-time
• Time-sharing
– Prioritized credit-based – process with most credits is scheduled next
– Credit subtracted when timer interrupt occurs
– When credit = 0, another process chosen
– When all processes have credit = 0, recrediting occurs
• Based on factors including priority and history
• Real-time
– Soft real-time
– POSIX.1b compliant – two scheduling classes:
• FCFS and RR
• The highest-priority process always runs first
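As a rough illustration of the recrediting step (our sketch, loosely modeled on the pre-2.6 kernel's rule of roughly counter = counter/2 + priority; the real kernel logic has more detail):

def recredit(procs):
    # Runs when every runnable process has exhausted its credits. A process
    # that was blocked keeps half of its leftover credits, so I/O-bound
    # (interactive) processes end up ahead of pure CPU hogs.
    for p in procs:
        p["credits"] = p["credits"] // 2 + p["priority"]

procs = [{"name": "cpu_hog", "credits": 0, "priority": 4},
         {"name": "editor",  "credits": 3, "priority": 4}]  # editor was blocked
recredit(procs)
print([(p["name"], p["credits"]) for p in procs])  # cpu_hog: 4, editor: 5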
36. Algorithm Evaluation Summary
• Which algorithm is the best?
• The answer depends on many factors:
• the system workload (extremely variable)
• hardware support for the dispatcher
• relative importance of performance criteria (response time, CPU
utilization, throughput...)
• the evaluation method used (each has its limitations...)
• Which algorithm works best is application dependent:
• a general-purpose OS will typically use preemptive, priority-based
round-robin scheduling
• a real-time OS will typically use priorities without preemption
37. Terminology for Examples
• AT: Arrival time of a process
• BT: Burst time of a process
• CT: completion time of a process
• WT: waiting time of a process
• TAT: turnaround time of a process
• ST: Scheduled time of a process
38. Formulas
• Turnaround time is the total time a process is
present in the system, regardless of whether the
process was waiting, doing I/O, or executing:
TAT = CT - AT = BT + WT
Weighted TAT = (CT - AT) / BT
• Waiting time of a process:
WT = TAT - BT = CT - AT - BT
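These formulas translate directly into code. A tiny Python helper (ours), checked against process P2 of the FCFS example worked out on the following slides (AT = 1, BT = 3, CT = 5):

def metrics(at, bt, ct):
    tat = ct - at              # TAT = CT - AT
    wt = tat - bt              # WT = TAT - BT = CT - AT - BT
    weighted_tat = tat / bt    # weighted TAT = (CT - AT) / BT
    return tat, wt, weighted_tat

print(metrics(at=1, bt=3, ct=5))   # (4, 1, 1.33...)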
39. Gantt Chart
• A graphical representation of process-scheduling
information
• A timeline represented by rectangular blocks, where each
block carries a process ID
• In the example below, P1 is scheduled for execution at time 0
and P2 at time 2
• The schedule length is 11 units of time
[Figure: example Gantt chart timeline]
40. • Schedule length (SL) is the difference between the maximum
completion time of any process and the minimum arrival
time of any process.
**(With 3 processes, 3! = 6 different schedules are possible.)
SL = Max(CT) - Min(AT)
= 11 - 0
= 11
Throughput is the number of processes completed per unit
time:
Th = (no. of processes completed) / (schedule length)
Th = 3/11 ≈ 0.27
41. FCFS example
• Selection criterion: the basic criterion for selecting a
process is its AT (arrival time)
• Mode: non-preemptive
• Assumptions: context-switch time is negligible; each
process has only a CPU burst time and zero I/O burst
time
• Example: Given the following processes, compute TAT and
WT for each process using FCFS

SN   PID   AT   BT
1    P1    0    2
2    P2    1    3
3    P3    2    5
4    P4    3    4
5    P5    4    1
42. Solution
• First compute the sum of all burst times:
ΣBT = 2 + 3 + 5 + 4 + 1 = 15
So on the Gantt chart we need a timeline of length 15, because in this
amount of time all the processes will finish their bursts (the CPU is never
idle here, since each process arrives before the previous ones finish).
Now let us build the Gantt chart for the FCFS schedule.
Step 1: Select the process whose AT is smallest: here P1 arrived at
time = 0, so we select P1 and schedule it on the processor.
43. Gantt chart for P1
• For process P1, the scheduled time (ST) is 0, because it was given the
processor at time 0. Its burst time is 2 units, so after completing its burst
it finishes; the CT of P1 is therefore 2. Its WT is zero because it was
scheduled immediately. Also TAT = CT - AT = 2 - 0 = 2.
44. Gantt chart for P2
• P2 arrived at AT = 1, but at that time the CPU was held by P1. In the
non-preemptive setting, P2 must wait for P1. Time 2 is therefore the
scheduled time (ST) of P2. Its BT is 3, so its completion time is
CT = ST + BT = 2 + 3 = 5.
• TAT(P2) = CT(P2) - AT(P2) = 5 - 1 = 4
WT(P2) = TAT(P2) - BT(P2) = 4 - 3 = 1
45. Complete Gantt chart
• It is important to compare the AT and the scheduled time of each
process in the Gantt chart:

| P1 | P2 | P3 | P4 | P5 |
0    2    5    10   14   15

• Here the schedule length is SL = 15 - 0 = 15
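The remaining rows of the table can be filled in exactly as was done for P1 and P2. A short Python sketch (ours) that prints ST, CT, TAT, and WT for all five processes:

procs = [("P1", 0, 2), ("P2", 1, 3), ("P3", 2, 5), ("P4", 3, 4), ("P5", 4, 1)]

t = 0
print("PID  AT  BT  ST  CT  TAT  WT")
for pid, at, bt in procs:       # FCFS: the list is already in arrival order
    st = max(t, at)             # scheduled time: wait for the CPU to free up
    ct = st + bt                # completion time
    print(f"{pid:>3} {at:3} {bt:3} {st:3} {ct:3} {ct - at:4} {ct - at - bt:3}")
    t = ct
# Output: P1 and P2 match slides 43-44; P3, P4, P5 complete at 10, 14, and 15,
# with waiting times 3, 7, and 10 respectively (schedule length 15).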
47. Non-preemptive Shortest Job First
algorithm
Criteria
• The process selected for execution next is the one whose burst
time is least.
• Non-preemptive SJF allows a process to finish its burst once
scheduled and does not force it to yield the processor.
• In the preemptive version of SJF, if another process with a
shorter BT arrives while a process is running, we force the
running process to leave the CPU and execute the shorter-BT
process first.
49. Schedule the first process
• Since at time 0 only P1 was in the system, we have to schedule it.
• Processes P4 and P6 have shorter burst times, but they have not arrived
at time 0.
• P1 will execute up to time = 3; in the meantime P2 and P3 will arrive.
• Now AT(P2) < AT(P3), but BT(P3) < BT(P2), so after the completion of
P1, P3 will be selected.
50. Full solution
• Here the SL is 13
• Each process is selected and scheduled based on its BT
• You can compute the individual TAT, WT, ST, CT, etc.
51. Assignment: SJF-NP
Question
• Solve the same process set using NP-SJF and compute TAT, CT,
WT, and ST for each process
• Then repeat using preemptive SJF