This document discusses various CPU scheduling algorithms. It provides details on First Come First Serve (FCFS), Shortest Job First (SJF), Priority Scheduling, Round Robin Scheduling, and Multi-Level Queue Scheduling. Examples are given to illustrate how each algorithm works. The key points covered include how each algorithm determines which process gets CPU time, whether they are preemptive or not, and how to calculate waiting times.
The document discusses various CPU scheduling algorithms including First-Come-First-Served (FCFS), Shortest Job First (SJF), Priority Scheduling, Round Robin Scheduling, and examples of how each works. It provides details on how each algorithm prioritizes and schedules processes waiting in the ready queue for CPU allocation. Examples are given to illustrate the average waiting times that can result from different scheduling approaches.
Task scheduling is needed to manage every process that contends for a processor in a multiprogrammed system. No single algorithm performs best on every workload: FCFS can be preferable when burst times are short, while Round Robin handles many concurrent processes better, and we cannot predict which process will arrive next. Average waiting time is the standard measure for rating a scheduling algorithm, and several techniques have been applied to keep CPU performance steady. The objective of this paper is to compare three algorithms, FCFS, SJF, and Round Robin, and to determine which is more suitable for a given workload.
The document discusses several topics related to CPU scheduling, including basic concepts, scheduling criteria, algorithms like first-come first-served (FCFS) and shortest-job-first (SJF), determining CPU burst lengths, priority scheduling, and approaches to scheduling on multiple processors. Key concepts are that the CPU scheduler selects ready processes to run and aims to maximize CPU utilization while minimizing waiting times using algorithms like FCFS, SJF, priority, and round robin scheduling. Load sharing and affinity are considerations for multiprocessor scheduling.
Priority scheduling assigns priorities to processes and executes the highest priority process first. If processes have equal priorities, they are executed in first come first served order. The example document shows how priority scheduling works by assigning priorities from 1 to 5 to five processes, then calculating their waiting times and turnaround times based on executing the highest priority processes first. Round robin scheduling assigns a fixed time quantum to each process, preempting and resuming processes to ensure all get CPU time and avoid starvation. The example shows how time is divided between processes using a 2ms time quantum.
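The round-robin behaviour described above can be sketched as a short simulation. The process names and burst times below are made up for illustration; the 2 ms quantum matches the example:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin; return the completion time of each process.

    bursts: dict mapping process name -> CPU burst (ms); all arrive at t=0.
    """
    remaining = dict(bursts)
    queue = deque(bursts)                # FIFO order of arrival
    t = 0
    completion = {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])  # run for one quantum, or less if finishing
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = t             # process finished
        else:
            queue.append(p)               # preempted: back of the queue
    return completion

# Hypothetical workload: three processes sharing the CPU with a 2 ms quantum
done = round_robin({"P1": 5, "P2": 3, "P3": 4}, quantum=2)
# P2 finishes at 9 ms, P3 at 11 ms, P1 at 12 ms
```

Because every process gets the CPU within one full round of the queue, no process starves.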
Operating Systems Third Unit - Fourth Semester - Engineering (Yogesh Santhan)
The document describes CPU scheduling concepts in a multiprogramming operating system. It discusses how CPU scheduling depends on CPU bursts and I/O waits as processes alternate between the two states. The scheduler selects processes from the ready queue to run on the CPU. Scheduling can be preemptive, occurring when a process changes states, or non-preemptive. Common scheduling algorithms like first-come, first-served, shortest job first, priority, and round robin are described. Optimization criteria for scheduling like CPU utilization, throughput, turnaround time and waiting time are also covered.
This presentation covers CPU scheduling algorithms, scheduling criteria, process synchronization, multilevel feedback queues, the critical-section problem, semaphores, and synchronization hardware.
CPU scheduling is the process by which the operating system selects which process to execute next from among the processes in memory that are ready to execute. The CPU scheduler selects processes from the ready queue. The goal of CPU scheduling is to maximize CPU utilization and throughput while minimizing waiting time and response time. Common CPU scheduling algorithms include first-come-first-served (FCFS), which services processes in the order they arrive, and shortest job first (SJF), which selects the process with the shortest estimated run time to execute next.
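As a rough sketch of the difference between these two policies (the burst times below are made up), the average waiting time can be computed for jobs served in arrival order versus shortest-first:

```python
def avg_waiting(bursts):
    """Average waiting time when bursts run back-to-back in the given order
    (all processes assumed to arrive at time 0)."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)   # a process waits until everything before it finishes
        t += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                 # hypothetical burst times (ms), in arrival order
fcfs = avg_waiting(bursts)          # FCFS: serve 24, 3, 3 -> average wait 17 ms
sjf = avg_waiting(sorted(bursts))   # SJF: serve 3, 3, 24 -> average wait 3 ms
```

Letting the long job go first makes the two short jobs wait behind it, which is why SJF minimizes average waiting time when burst lengths are known.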
operating systems, ch-05 (CPU Scheduling), 3rd level, College of Computers, Seiyun University. Operating systems for third-level students of the College of Computers, Seiyun University, Lecture 05.
CPU scheduling decides which processes run when multiple are ready. It aims to make the system efficient, fast and fair. There are different scheduling algorithms like first come first serve (FCFS), shortest job first (SJF), priority scheduling, and round robin. Multi-level feedback queue scheduling uses multiple queues and allows processes to move between queues based on their CPU usage to prioritize shorter interactive processes.
The document discusses processes and process scheduling in operating systems. It defines a process as a program in execution that contains a program counter, stack, and data section. Processes can be in various states like new, ready, running, waiting, and terminated. A process control block contains information about each process like its state, program counter, memory allocation, and more. Scheduling aims to optimize CPU utilization, throughput, turnaround time, waiting time, and response time using algorithms like first come first serve, shortest job first, priority, and round robin scheduling.
Introduction to shortest job first scheduling (Zeeshan Iqbal)
The document describes an experiment on implementing the shortest job first (SJF) CPU scheduling algorithm. It includes pseudocode to calculate the waiting times and turnaround times of processes based on their burst times under SJF scheduling. The pseudocode loops through the processes, calculates their waiting times as the previous process's burst time plus its waiting time, and sums the total waiting and turnaround times to find the average times.
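A runnable version of the pseudocode described above might look like the following. The burst times are hypothetical, and all processes are assumed to arrive at time 0:

```python
def sjf_times(bursts):
    """Non-preemptive SJF: sort by burst time, then compute each waiting time
    as the previous process's waiting time plus its burst."""
    order = sorted(bursts)
    wait = [0] * len(order)                 # the first (shortest) job never waits
    for i in range(1, len(order)):
        wait[i] = wait[i - 1] + order[i - 1]
    turnaround = [w + b for w, b in zip(wait, order)]
    return order, wait, turnaround

# Hypothetical bursts (ms)
order, wait, turnaround = sjf_times([6, 8, 7, 3])
avg_wait = sum(wait) / len(wait)            # average waiting time under SJF
```

For these bursts the service order is 3, 6, 7, 8 and the average waiting time works out to 7 ms.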
Algorithms (Shanmuganathan C)
The document discusses various CPU scheduling algorithms used in operating systems. It begins by listing common scheduling algorithms like First-Come First-Served (FCFS), Shortest Job First (SJF), Priority Scheduling, Round Robin, Multilevel Queue Scheduling, and Multilevel Feedback Queue Scheduling. It then provides examples and explanations of how FCFS, SJF, and priority scheduling work. It discusses concepts like waiting time, turnaround time, starvation, and aging. The document also covers round robin scheduling and provides examples comparing different time quanta. Finally, it discusses priority-based real-time scheduling algorithms like Rate Monotonic Scheduling and conditions under which deadlines may be missed.
The document discusses different CPU scheduling algorithms used in operating systems. It describes non-preemptive and preemptive scheduling and explains the key differences. It then covers four common scheduling algorithms - first come first served (FCFS), round robin, priority scheduling, and shortest job first (SJF) - and compares their advantages and disadvantages.
The document discusses different scheduling algorithms used by operating systems. It begins by explaining that the scheduler decides which process to activate from the ready queue when multiple processes are runnable. There are long-term, medium-term, and short-term schedulers that control admission of jobs, memory allocation, and CPU sharing respectively. The goal of scheduling is to optimize system performance and resource utilization while providing responsive service. Common algorithms include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round-robin. FCFS schedules processes in the order of arrival while SJF selects the shortest job first. Priority scheduling preempts lower priority jobs.
The document discusses various scheduling algorithms: First Come First Served (FCFS), Shortest Job First (SJF), priority scheduling, and round robin (RR). FCFS schedules processes in the order that they arrive. SJF selects the process with the shortest estimated runtime. Priority scheduling prioritizes processes based on assigned priorities. Round robin gives each process a time slice or quantum to use the CPU before switching to another process. Examples are provided to illustrate how each algorithm works.
This document discusses various CPU scheduling algorithms and concepts. It covers scheduling criteria like CPU utilization and turnaround time. Algorithms discussed include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also covers multiple queue scheduling, real-time scheduling, and ways to evaluate scheduling algorithm performance like deterministic modeling and simulation.
The document discusses key concepts related to process management in operating systems, including processes, process states, process scheduling, and interprocess communication. It defines a process as a program in execution that includes a program counter, stack, and data section. Processes can be in one of several states like new, running, waiting, ready, and terminated. Process scheduling is managed using data structures like process control blocks and queues to track process states and allocate CPU resources. Common scheduling algorithms like FCFS, SJF, priority, and round robin are also covered.
* Using SJF preemptive scheduling:
P2 will execute from time 0 to 5 ms.
P3 will execute from time 5 to 8 ms.
P1 will execute from time 8 to 18 ms.
P4 will execute from time 18 to 38 ms.
P5 will execute from time 38 to 40 ms.
Total waiting time = (10-5) + (8-5) + (18-0) + (38-5) + (40-10) = 5 + 3 + 18 + 33 + 30 = 89
Average waiting time = Total waiting time / Number of processes = 89/5 = 17.8 ms
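A quick check of the arithmetic, using the waiting-time terms stated above:

```python
# Waiting-time terms taken from the schedule above
terms = [(10 - 5), (8 - 5), (18 - 0), (38 - 5), (40 - 10)]
total = sum(terms)             # 5 + 3 + 18 + 33 + 30 = 89
average = total / len(terms)   # 89 / 5 = 17.8 ms
```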
* Using Priority preemptive scheduling
This document discusses various CPU scheduling algorithms and concepts. It covers scheduling criteria like CPU utilization and turnaround time. Algorithms discussed include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also covers multiple processor scheduling, real-time scheduling, and evaluating scheduling algorithms.
This document discusses different CPU scheduling algorithms and their criteria for evaluation. It describes that a good scheduling algorithm should maximize CPU utilization and throughput while minimizing waiting time, response time, and turnaround time. Common algorithms mentioned are First Come First Served (FCFS), Shortest Job First (SJF), Priority Scheduling, and Round Robin. Gantt charts are used to visually represent the scheduling of different processes over time on the CPU. Examples are provided to demonstrate how each algorithm works.
Operating System - FCFS and Priority Scheduling Algorithm and Code (Tamir Azrab)
The presentation contains a brief explanation of the first-come-first-served algorithm, its code, and some sample solved examples. It also includes priority scheduling: its algorithm, implementation, and an example.
A great time saver if you have a presentation on the same topics.
The document discusses various CPU scheduling algorithms used in operating systems. It describes the main objective of CPU scheduling as maximizing CPU utilization by allowing multiple processes to share the CPU. It then explains different scheduling criteria like throughput, turnaround time, waiting time and response time. Finally, it summarizes common scheduling algorithms like first come first served, shortest job first, priority scheduling and round robin scheduling.
The document discusses various CPU scheduling concepts and algorithms. It covers basic concepts like CPU-I/O burst cycles and scheduling criteria. It then describes common scheduling algorithms like first come first served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also discusses more advanced topics like multi-level queue scheduling, multi-processor scheduling, and thread scheduling in Linux.
2. Scheduling Algorithms
CPU scheduling algorithms deal with the problem of deciding which process in the ready queue should be allocated the CPU.
The commonly used scheduling algorithms are FCFS, SJF, Priority, Round Robin, Multi-Level Queue, and Multi-Level Feedback Queue scheduling.
15-Feb-2011
www.eazynotes.com
4. First-Come-First-Served Scheduling (FCFS)
In this scheduling, the process that requests the CPU first is allocated the CPU first.
Thus, the name First-Come-First-Served.
The implementation of FCFS is easily managed with a FIFO queue.

Ready Queue:  Tail → D  C  B  A → Head → CPU
5. First-Come-First-Served Scheduling (FCFS)
When a process enters the ready queue, its PCB is linked to the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue.
The FCFS scheduling algorithm is non-preemptive.
Once the CPU is allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
6. Example of FCFS Scheduling
Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds:

Process   Burst Time (ms)
P1        24
P2        3
P3        3
7. Example of FCFS Scheduling
Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting Time for P1 = 0 milliseconds
Waiting Time for P2 = 24 milliseconds
Waiting Time for P3 = 27 milliseconds
8. Example of FCFS Scheduling
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (0 + 24 + 27) / 3
                     = 51 / 3
                     = 17 milliseconds
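The calculation above can be sketched in a few lines of Python. Because every process arrives at time 0 and FCFS is non-preemptive, each process simply waits for the total burst time of all processes ahead of it. The function name `fcfs_waiting_times` is ours, not from the slides:

```python
# Minimal sketch of FCFS waiting-time calculation (all arrivals at t = 0).
def fcfs_waiting_times(burst_times):
    waiting, elapsed = [], 0
    for burst in burst_times:
        waiting.append(elapsed)   # a process waits until all earlier ones finish
        elapsed += burst
    return waiting

wt = fcfs_waiting_times([24, 3, 3])   # arrival order P1, P2, P3
print(wt)                             # [0, 24, 27]
print(sum(wt) / len(wt))              # 17.0
```

Running it with the reversed order [3, 3, 24] reproduces the second example's average of 3 ms, illustrating how strongly FCFS depends on arrival order.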
9. Example of FCFS Scheduling
Suppose that the processes arrive in the order: P2, P3, P1.
The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

Waiting Time for P2 = 0 milliseconds
Waiting Time for P3 = 3 milliseconds
Waiting Time for P1 = 6 milliseconds
10. Example of FCFS Scheduling
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (0 + 3 + 6) / 3
                     = 9 / 3
                     = 3 milliseconds
Thus, the average waiting time depends on the order in which the processes arrive.
12. Shortest Job First Scheduling (SJF)
In SJF, the process with the least estimated execution time is selected from the ready queue for execution.
It associates with each process the length of its next CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
If two processes have the same length of next CPU burst, FCFS scheduling is used to break the tie.
The SJF algorithm can be preemptive or non-preemptive.
13. Non-Preemptive SJF
In non-preemptive SJF, the CPU is assigned to the process with the least CPU burst time, and the process keeps the CPU until it terminates.
Advantage: It gives the minimum average waiting time for a given set of processes.
Disadvantage: It requires knowledge of how long a process will run, and this information is usually not available.
14. Preemptive SJF
In preemptive SJF, the process with the smallest estimated remaining run-time is executed first.
Whenever a new process enters the ready queue, the scheduler compares its expected run-time with the remaining time of the currently running process.
If the new process's time is less, the currently running process is preempted and the CPU is allocated to the new process.
15. Example of Non-Preemptive SJF
Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds:

Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3
16. Example of Non-Preemptive SJF
The Gantt chart for the schedule is:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

Waiting Time for P4 = 0 milliseconds
Waiting Time for P1 = 3 milliseconds
Waiting Time for P3 = 9 milliseconds
Waiting Time for P2 = 16 milliseconds
17. Example of Non-Preemptive SJF
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (0 + 3 + 9 + 16) / 4
                     = 28 / 4
                     = 7 milliseconds
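When all processes arrive at time 0, non-preemptive SJF is simply FCFS applied to the processes sorted by burst time. The following Python sketch (the function name `sjf_waiting_times` is ours) reproduces the figures above:

```python
# Sketch: non-preemptive SJF, all processes arriving at t = 0.
# Sorting by burst time gives the execution order; waiting times then
# accumulate exactly as in FCFS.
def sjf_waiting_times(bursts):
    order = sorted(bursts, key=lambda p: p[1])    # (name, burst) by burst time
    waiting, elapsed = {}, 0
    for name, burst in order:
        waiting[name] = elapsed
        elapsed += burst
    return waiting

wt = sjf_waiting_times([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(wt)                        # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(wt.values()) / 4)      # 7.0
```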
19. Example of Preemptive SJF
Consider the following set of processes. These processes arrived in the ready queue at the times given in the table:

Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5
20. Example of Preemptive SJF
The Gantt chart for the schedule is:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Waiting Time for P1 = (10 – 1) – 0 = 9
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 17 – 2 = 15
Waiting Time for P4 = 5 – 3 = 2
21. Example of Preemptive SJF
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (9 + 0 + 15 + 2) / 4
                     = 26 / 4
                     = 6.5 milliseconds
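Preemptive SJF (also called shortest-remaining-time-first) can be simulated one millisecond at a time: at each tick, run the ready process with the least remaining time. This Python sketch (function name `srtf_waiting_times` is ours) computes each waiting time as completion − arrival − burst, matching the table above:

```python
# Sketch of preemptive SJF (shortest remaining time first), simulated
# one millisecond at a time.
def srtf_waiting_times(procs):            # procs: {name: (arrival, burst)}
    remaining = {n: b for n, (a, b) in procs.items()}
    waiting, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        current = min(ready, key=lambda n: remaining[n])   # least remaining time
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:
            del remaining[current]
            arrival, burst = procs[current]
            waiting[current] = t - arrival - burst         # waiting time
    return waiting

wt = srtf_waiting_times({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(wt)   # P2 = 0, P4 = 2, P1 = 9, P3 = 15
```

The tick-by-tick loop mirrors the explanation on the next slides: P1 is preempted at t = 1 when P2 arrives with a shorter burst, and P3 and P4 never preempt P2.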
22. Explanation of the Example
Process P1 is started at time 0, as it is the only process in the queue.
Process P2 arrives at time 1 and its burst time is 4 milliseconds.
This burst time is less than the remaining time of process P1 (7 milliseconds).
So, process P1 is preempted and P2 is scheduled.
23. Explanation of the Example
Process P3 arrives at time 2. Its burst time is 9 milliseconds, which is larger than the remaining time of P2 (3 milliseconds). So, P2 is not preempted.
Process P4 arrives at time 3. Its burst time is 5 milliseconds. Again, it is larger than the remaining time of P2 (2 milliseconds). So, P2 is not preempted.
24. Explanation of the Example
After the termination of P2, the process with the shortest next CPU burst, i.e. P4, is scheduled.
After P4, process P1 (7 milliseconds remaining) and then P3 (9 milliseconds) are scheduled.
25. Priority Scheduling
In priority scheduling, a priority is associated with each process.
Processes are executed in sequence according to their priority.
The CPU is allocated to the process with the highest priority.
If the priorities of two or more processes are equal, FCFS is used to break the tie.
26. Priority Scheduling
Priority scheduling can be preemptive or non-preemptive.
Preemptive Priority Scheduling: the scheduler allocates the CPU to a newly arrived process if its priority is higher than the priority of the running process.
Non-Preemptive Priority Scheduling: the running process is not interrupted even if the new process has a higher priority. In this case, the new process is placed at the head of the ready queue.
28. Priority Scheduling
Problem: Consider a process X with priority 10. If processes with higher priority are frequently inserted into the ready queue, process X may never be scheduled; it will face starvation.
Solution: Aging is a technique which gradually increases the priority of processes that are victims of starvation.
30. Example of Priority Scheduling
Consider the following set of processes that arrive at time 0, with the length of the CPU burst time in milliseconds. The priority of each process is also given (a lower number means a higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
31. Example of Priority Scheduling
The Gantt chart for the schedule is:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Waiting Time for P2 = 0
Waiting Time for P5 = 1
Waiting Time for P1 = 6
Waiting Time for P3 = 16
Waiting Time for P4 = 18
32. Example of Priority Scheduling
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (0 + 1 + 6 + 16 + 18) / 5
                     = 41 / 5
                     = 8.2 milliseconds
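With all arrivals at time 0 and no preemption, priority scheduling is just FCFS over the processes sorted by priority number. A short Python sketch (the function name `priority_waiting_times` is ours) reproduces the average above:

```python
# Sketch: non-preemptive priority scheduling, all arrivals at t = 0.
# A lower priority number means a higher priority, as in the example above.
def priority_waiting_times(procs):        # procs: [(name, burst, priority)]
    waiting, elapsed = {}, 0
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waiting[name] = elapsed
        elapsed += burst
    return waiting

wt = priority_waiting_times([("P1", 10, 3), ("P2", 1, 1),
                             ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)])
print(wt)                      # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(wt.values()) / 5)    # 8.2
```

Note that Python's `sorted` is stable, so processes with equal priority keep their arrival (list) order, which is exactly the FCFS tie-break described earlier.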
34. Another Example of Priority Scheduling
Processes P1, P2 and P3 arrive with the arrival times, burst times and priorities listed in the table below. Scheduling is preemptive:

Process   Arrival Time   Burst Time   Priority
P1        0              10           3
P2        1              5            2
P3        2              2            1
35. Another Example of Priority Scheduling
The Gantt chart for the schedule is:

| P1 | P2 | P3 | P2 | P1 |
0    1    2    4    8    17

Waiting Time for P1 = (8 – 1) – 0 = 7
Waiting Time for P2 = 4 – 2 = 2
Waiting Time for P3 = 0
36. Another Example of Priority Scheduling
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (7 + 2 + 0) / 3
                     = 9 / 3
                     = 3 milliseconds
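The preemptive schedule above can be checked with a millisecond-by-millisecond simulation, analogous to the preemptive SJF sketch earlier but selecting by priority instead of remaining time. Using the standard definition waiting = completion − arrival − burst, the simulation gives P1 = 7, P2 = 2 and P3 = 0. The function name `preemptive_priority_waiting_times` is ours:

```python
# Sketch of preemptive priority scheduling, simulated one millisecond at a
# time. A lower priority number means a higher priority.
def preemptive_priority_waiting_times(procs):   # {name: (arrival, burst, prio)}
    remaining = {n: b for n, (a, b, p) in procs.items()}
    waiting, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        current = min(ready, key=lambda n: procs[n][2])   # highest priority
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:
            del remaining[current]
            arrival, burst, _ = procs[current]
            waiting[current] = t - arrival - burst
    return waiting

wt = preemptive_priority_waiting_times(
    {"P1": (0, 10, 3), "P2": (1, 5, 2), "P3": (2, 2, 1)})
print(wt)   # waiting times: P3 = 0, P2 = 2, P1 = 7
```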
37. Round Robin Scheduling (RR)
In Round Robin scheduling, processes are dispatched in FIFO order but are given only a small amount of CPU time at a time.
This small amount of time is known as the Time Quantum or Time Slice.
A time quantum is generally from 10 to 100 milliseconds.
38. Round Robin Scheduling (RR)
If a process does not complete before its time slice expires, the CPU is preempted and given to the next process in the ready queue. The preempted process is then placed at the tail of the ready queue.
If a process completes before its time slice expires, the process itself releases the CPU. The scheduler then proceeds to the next process in the ready queue.
39. Round Robin Scheduling (RR)
Round Robin scheduling is always preemptive, as no process is allocated the CPU for more than one time quantum at a stretch.
If a process's CPU burst time exceeds one time quantum, that process is preempted and put back at the tail of the ready queue.
The performance of Round Robin scheduling depends on several factors:
the size of the time quantum, and
the context-switching overhead.
40. Example of Round Robin Scheduling
Consider the following set of processes that arrive at time 0, with the length of the CPU burst time in milliseconds. The time quantum is 2 milliseconds.

Process   Burst Time
P1        10
P2        5
P3        2
41. Example of Round Robin Scheduling
The Gantt chart for the schedule is:

| P1 | P2 | P3 | P1 | P2 | P1 | P2 | P1 | P1 |
0    2    4    6    8    10   12   13   15   17

Waiting Time for P1 = 0 + (6 – 2) + (10 – 8) + (13 – 12) = 4 + 2 + 1 = 7
Waiting Time for P2 = 2 + (8 – 4) + (12 – 10) = 2 + 4 + 2 = 8
Waiting Time for P3 = 4
42. Example of Round Robin Scheduling
Average Waiting Time = (Total Waiting Time) / (No. of Processes)
                     = (7 + 8 + 4) / 3
                     = 19 / 3
                     = 6.33 milliseconds
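Round Robin maps naturally onto a FIFO queue: pop a process, run it for at most one quantum, and requeue it if it has burst time left. This Python sketch (function name `rr_waiting_times` is ours) reproduces the waiting times above, using waiting = finish − burst since all processes arrive at t = 0:

```python
from collections import deque

# Sketch of Round Robin scheduling with a 2 ms time quantum,
# all processes arriving at t = 0.
def rr_waiting_times(procs, quantum=2):   # procs: [(name, burst)]
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    finish, t = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)            # preempted: back to the tail
        else:
            finish[name] = t
    bursts = dict(procs)
    return {n: finish[n] - bursts[n] for n in bursts}   # waiting = finish - burst

wt = rr_waiting_times([("P1", 10), ("P2", 5), ("P3", 2)])
print(wt)                                 # {'P1': 7, 'P2': 8, 'P3': 4}
```

Varying the `quantum` argument shows the trade-off mentioned above: a very large quantum degenerates into FCFS, while a very small one multiplies context switches.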
44. Multi-Level Queue Scheduling (MLQ)
Multi-Level Queue scheduling classifies processes according to their types.
For example, an MLQ scheduler commonly divides processes into interactive (foreground) and batch (background) processes.
These two classes have different response-time requirements, so they have different scheduling needs.
Also, interactive processes have higher priority than batch processes.
45. Multi-Level Queue Scheduling (MLQ)
In this scheduling, the ready queue is divided into several queues, called subqueues.
Processes are assigned to subqueues based on properties such as memory size, priority or process type.
Each subqueue has its own scheduling algorithm. For example, interactive processes may use the Round Robin algorithm while batch processes use FCFS.
47. Multi-Level Feedback Queue Scheduling (MFQ)
Multi-Level Feedback Queue scheduling is an enhancement of MLQ. In this scheme, processes can move between queues.
Processes are separated into different queues on the basis of their CPU burst times:
If a process consumes a lot of CPU time, it is placed in a lower-priority queue.
If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue.
Such aging prevents starvation.
49. Multi-Level Feedback Queue Scheduling (MFQ)
The top-priority queue is given the smallest CPU time quantum.
If the quantum expires before the process terminates, the process is placed at the tail of the next lower queue.
Again, if it does not complete there, it is put into the lowest-priority queue, where processes run under FCFS scheduling.
If a process becomes a victim of starvation, it is promoted to the next higher-priority queue.
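The demotion mechanism can be sketched as follows. The three-level structure, the quanta of 2 ms and 4 ms, and the function name `mlfq_finish_times` are illustrative assumptions, not values from the slides; for brevity the sketch also omits aging and assumes all processes arrive at t = 0:

```python
from collections import deque

# Hypothetical 3-level feedback queue: quanta of 2 ms and 4 ms, then FCFS
# at the lowest level. A process that exhausts its quantum is demoted.
def mlfq_finish_times(procs):             # procs: [(name, burst)], arrivals at 0
    quanta = [2, 4, None]                 # None = run to completion (FCFS)
    queues = [deque(), deque(), deque()]
    for name, burst in procs:
        queues[0].append((name, burst))   # everyone starts in the top queue
    finish, t = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty
        name, rem = queues[level].popleft()
        run = rem if quanta[level] is None else min(quanta[level], rem)
        t += run
        rem -= run
        if rem == 0:
            finish[name] = t
        else:
            queues[min(level + 1, 2)].append((name, rem))    # demote one level
    return finish

# Short burst C finishes quickly in the top queue; long burst B drifts down.
print(mlfq_finish_times([("A", 3), ("B", 6), ("C", 1)]))
```

The short process C completes entirely within the top queue, while the CPU-heavy process B sinks to a lower level, which is exactly the behaviour the slides describe.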