Presentation on the paper: "Virtual time round-robin O(1) proportional share scheduler"
If you need a copy of this presentation, feel free to msg me at parang[DOT]saraf[AT]gmail
Deadlocks occur when processes are waiting for resources held by other processes, resulting in a circular wait. Four conditions must be met: mutual exclusion, hold and wait, no preemption, and circular wait. Deadlocks can be handled through avoidance, prevention, or detection and recovery. Avoidance algorithms allocate resources only if it ensures the system remains in a safe state where deadlocks cannot occur. Prevention methods make deadlocks impossible by ensuring at least one condition is never satisfied, such as through collective or ordered resource requests. Detection finds existing deadlocks by analyzing resource allocation graphs or wait-for graphs to detect cycles.
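The circular-wait detection mentioned above can be sketched as a cycle search over a wait-for graph. A minimal Python sketch, assuming the graph is a dict of adjacency lists (the process names and graphs are invented examples, not from the document):

```python
# Deadlock detection sketch: look for a cycle in a wait-for graph.
# An edge p -> q means "process p is waiting for a resource held by q".

def has_cycle(wait_for):
    """Depth-first search for a cycle in a directed wait-for graph."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:        # back edge: circular wait
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1: a circular wait, hence a deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```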
The document discusses several algorithms for process synchronization and mutual exclusion in distributed systems:
- Token ring algorithm passes an exclusive token around a logical ring of processes to control access to shared resources.
- Ricart-Agrawala algorithm uses timestamped request and reply messages so that all processes agree on which one may enter the critical section next.
- Lamport's mutual exclusion algorithm uses request queues and timestamps to ensure only one process is in the critical section at a time.
- Election algorithms like the Bully and Ring algorithms elect a coordinator process to handle synchronization and recovery from failures.
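A common thread in Lamport's and Ricart-Agrawala's algorithms is that requests are totally ordered by a (logical timestamp, process id) pair, so every process agrees on who goes first. A small Python sketch of that ordering (the request data below is a made-up example):

```python
# Requests are ordered by (lamport_timestamp, process_id);
# ties on the timestamp are broken by the process id.
import heapq

requests = [(5, "P3"), (3, "P1"), (5, "P2"), (4, "P4")]

heapq.heapify(requests)
order = [heapq.heappop(requests)[1] for _ in range(4)]
print(order)  # ['P1', 'P4', 'P2', 'P3']
```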
This document provides an overview of distributed operating systems. It discusses the motivation for distributed systems including resource sharing, reliability, and computation speedup. It describes different types of distributed operating systems like network operating systems where users are aware of multiple machines, and distributed operating systems where users are not aware. It also covers network structures, topologies, communication structures, protocols, and provides an example of networking. The objectives are to provide a high-level overview of distributed systems and discuss the general structure of distributed operating systems.
Remote Procedure Calls (RPC) allow a program to execute a procedure in another address space without needing to know where it is located. RPC uses client and server stubs that conceal the underlying message passing between client and server processes. The client stub packs the procedure call into a message and sends it to the server stub, which unpacks it and executes the procedure before returning any results. This makes remote procedure calls appear as local procedure calls to improve transparency. IDL is used to define interfaces and generate client/server stubs automatically to simplify development of distributed applications using RPC.
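The stub mechanism can be illustrated with a toy marshalling round trip; here the "network" is just a local function call, and every name (`remote_add`, `server_dispatch`, the `add` procedure) is invented for this sketch:

```python
# Toy RPC stubs: the client stub marshals the call into a message, the
# server stub unmarshals it, runs the procedure, and marshals the result.
import json

def server_dispatch(message: str) -> str:
    """Server stub: unpack the message, call the procedure, pack the result."""
    call = json.loads(message)
    procedures = {"add": lambda a, b: a + b}
    result = procedures[call["proc"]](*call["args"])
    return json.dumps({"result": result})

def remote_add(a, b):
    """Client stub: looks like a local call, hides the marshalling."""
    message = json.dumps({"proc": "add", "args": [a, b]})
    reply = server_dispatch(message)   # stand-in for the network hop
    return json.loads(reply)["result"]

print(remote_add(2, 3))  # 5
```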
Contents
1. Concepts of Real-Time Systems (RTS)
2. Characteristics of RTS
3. Types of RTS
4. Comparison between Types of RTS
5. Applications of RTS
6. Challenges of RTS
Symmetric encryption uses the same key to encrypt and decrypt data, providing confidentiality. Keys must be distributed securely between parties. Common approaches involve using a key distribution center (KDC) that shares secret keys with parties and can provide temporary session keys. Link encryption protects data as it travels over each network link, while end-to-end encryption protects data for its entire journey but leaves some header data unencrypted. Key distribution, storage, renewal and replacement are important aspects of maintaining security when using symmetric encryption.
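The core property above — one shared key for both encryption and decryption — can be shown with a toy XOR one-time pad in Python. This is a teaching sketch, not a real cipher; production systems should use a vetted cryptography library:

```python
# Toy symmetric encryption: the SAME key encrypts and decrypts.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at noon"
key = secrets.token_bytes(len(plaintext))  # shared secret, e.g. from a KDC

ciphertext = xor_bytes(plaintext, key)     # encrypt
recovered = xor_bytes(ciphertext, key)     # decrypt with the same key

print(recovered == plaintext)  # True
```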
The document discusses various CPU scheduling algorithms including first come first served, shortest job first, priority, and round robin. It describes the basic concepts of CPU scheduling and criteria for evaluating algorithms. Implementation details are provided for shortest job first, priority, and round robin scheduling in C++.
Memory management is the process of controlling and coordinating a computer's memory, assigning blocks of it to running programs so as to optimize overall system performance. Memory management can be done in hardware, in the operating system, and in programs and applications. More information on memory management: http://www.transtutors.com/homework-help/computer-science/memory-management.aspx
This document discusses various topics related to synchronization in distributed systems, including distributed algorithms, logical clocks, global state, and leader election. It provides definitions and examples of key synchronization concepts such as coordination, synchronization, and determining global states. Examples of logical clock algorithms like Lamport clocks and vector clocks are provided. Challenges around clock synchronization and calculating global system states are also summarized.
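The Lamport clock rule summarized above fits in a few lines of Python (the class and variable names are ours, not the document's): increment on each local event, and on receipt jump to max(local, received) + 1.

```python
# Minimal Lamport logical clock sketch.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event or message send: advance the counter."""
        self.time += 1
        return self.time

    def receive(self, msg_time):
        """Message receipt: jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.tick()           # a's clock: 1
t_recv = b.receive(t_send)  # b's clock: max(0, 1) + 1 = 2
print(t_send, t_recv)       # 1 2
```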
This document discusses different types of interconnection network topologies for parallel machines. It provides details on:
1) Linear array networks have nodes connected in a line, with a diameter of n-1, a node degree of 2, and a bisection width of 1.
2) Mesh networks connect nodes in a grid, with a diameter of 2(n-1), a node degree of 4, and a bisection width of n for an n×n mesh.
3) Hypercube networks connect each node to log2(N) neighbors, one per dimension, giving a diameter and node degree of log2(N) and a bisection width of N/2 (that is, 2^(n-1) for a network with N = 2^n nodes).
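These figures can be double-checked with a few lines of Python (N is the node count; the mesh is n×n; the hypercube has N = 2^n nodes):

```python
# Topology metrics for the three interconnection networks above.
import math

def linear_array(N):
    return {"diameter": N - 1, "degree": 2, "bisection": 1}

def mesh(n):                       # n x n grid, no wraparound links
    return {"diameter": 2 * (n - 1), "degree": 4, "bisection": n}

def hypercube(N):                  # N must be a power of two
    n = int(math.log2(N))
    return {"diameter": n, "degree": n, "bisection": N // 2}

print(linear_array(16))  # {'diameter': 15, 'degree': 2, 'bisection': 1}
print(mesh(4))           # {'diameter': 6, 'degree': 4, 'bisection': 4}
print(hypercube(16))     # {'diameter': 4, 'degree': 4, 'bisection': 8}
```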
RANDOM ACCESS PROTOCOL IN COMMUNICATION AMOGHA A K
In random access, each station has the right to send data. However, if more than one station tries to send at the same time, a collision will occur; random access protocols exist to handle such collisions.
In the random access method, no station is superior to another and none is assigned control over the others.
When a station has data to send, it uses a procedure defined by the protocol to decide whether or not to send.
The document discusses different types of dynamic interconnection networks, graphically demonstrating single and multiple bus interconnection networks. It also covers switch-based interconnection networks, showing the mechanisms of crossbar, single-stage, and multistage interconnection networks, and graphically explains the working principles of omega, Benes, and baseline networks.
This document discusses two common models for distributed computing communication: message passing and remote procedure calls (RPC). It describes the basic primitives and design issues for each model. For message passing, it covers synchronous vs asynchronous and blocking vs non-blocking primitives. For RPC, it explains the client-server model and how stubs are used to convert parameters and return results between machines. It also discusses binding, parameter passing techniques, and ensuring error handling and execution semantics.
Distributed shared memory (DSM) provides processes with a shared address space across distributed memory systems. DSM exists only virtually through primitives like read and write operations. It gives the illusion of physically shared memory while allowing loosely coupled distributed systems to share memory. DSM refers to applying this shared memory paradigm using distributed memory systems connected by a communication network. Each node has CPUs, memory, and blocks of shared memory can be cached locally but migrated on demand between nodes to maintain consistency.
Control Units: Microprogrammed and Hardwired (abdosaidgkv)
The document discusses control units in CPUs. There are two main methods for implementing control units: hardwired and microprogrammed. A hardwired control unit generates control signals through circuitry using logic gates, while a microprogrammed control unit generates control signals by executing a stored microprogram. Overall, hardwired control units are faster but less flexible, while microprogrammed control units are slower but more flexible and modifiable.
Deadlock Detection in Distributed Systems (DHIVYADEVAKI)
The document discusses deadlocks in computing systems. It defines deadlocks and related concepts like livelock and starvation. It presents various approaches to deal with deadlocks including detection and recovery, avoidance through runtime checks, and prevention by restricting resource requests. Graph-based algorithms are described for detecting and preventing deadlocks by analyzing resource allocation graphs. The Banker's algorithm is introduced as a static prevention method. Finally, it discusses ways to eliminate the conditions required for deadlocks, like mutual exclusion, hold-and-wait, and circular wait.
A loader and a linker are both system software: the loader loads the object code assembled by an assembler, and the linker links together the separately compiled blocks of a large program. Both work at the bottom of the software stack (i.e. closer to the hardware), and both have machine-dependent and machine-independent features.
This document discusses key aspects of distributed file systems including file caching schemes, file replication, and fault tolerance. It describes different cache locations, modification propagation techniques, and methods for replica creation. File caching schemes aim to reduce network traffic by retaining recently accessed files in memory. File replication provides increased reliability and availability through independent backups. Distributed file systems must also address being stateful or stateless to maintain information about file access and operations.
The document discusses various scheduling algorithms used in operating systems including:
- First Come First Serve (FCFS) scheduling which services processes in the order of arrival but can lead to long waiting times.
- Shortest Job First (SJF) scheduling which prioritizes the shortest processes first to minimize waiting times. It can be preemptive or non-preemptive.
- Priority scheduling assigns priorities to processes and services the highest priority process first, which can potentially cause starvation of low priority processes.
- Round Robin scheduling allows equal CPU access to all processes by allowing each a small time quantum or slice before preempting to the next process.
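The round-robin behaviour in the last bullet can be sketched with a queue: each process runs for at most one quantum, then goes to the back of the line. The burst times below are made-up example data:

```python
# Round-robin scheduling sketch.
from collections import deque

def round_robin(bursts, quantum):
    """Return the timeline of (pid, run_for) slices."""
    queue = deque((pid, t) for pid, t in bursts.items())
    timeline = []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((pid, run))
        if remaining > run:
            queue.append((pid, remaining - run))  # preempted: requeue
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```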
The document discusses various algorithms for achieving distributed mutual exclusion and process synchronization in distributed systems. It covers centralized, token ring, Ricart-Agrawala, Lamport, and decentralized algorithms. It also discusses election algorithms for selecting a coordinator process, including the Bully algorithm. The key techniques discussed are using logical clocks, message passing, and quorums to achieve mutual exclusion without a single point of failure.
A fault tolerant system is able to continue operating despite failures in hardware or software components. It gracefully degrades performance as more faults occur rather than collapsing suddenly. The goal is to ensure the probability of total system failure remains acceptably small. Redundancy is a key technique, with hardware redundancy using multiple redundant components and voting on outputs to mask faults. Static pairing and N modular redundancy are two hardware redundancy methods.
A Distributed File System (DFS) is simply a classical model of a file system distributed across multiple machines. The purpose is to promote sharing of dispersed files.
This document summarizes the Banker's Algorithm, which is used to determine if a set of pending processes can safely acquire resources or if they should wait due to limited resources. It outlines the key data structures used like Available, Max, Allocation, and Need matrices to track current resources. The Safety Algorithm is described to check if the system is in a safe state by finding a process that can terminate and release resources. The Resource-Request Algorithm simulates allocating resources to a process and checks if it leads to a safe state before actual allocation.
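A minimal sketch of the Safety Algorithm described above: repeatedly find a process whose Need can be met from the current Work vector, let it finish, and reclaim its Allocation. The matrices below are illustrative textbook-style numbers, not data from this document:

```python
# Banker's Safety Algorithm sketch.
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    safe_sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                safe_sequence.append(i)
                progress = True
    return all(finished), safe_sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```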
The document discusses CPU scheduling in Windows operating systems. It covers scheduling algorithms used in different versions of Windows like Windows 3.1x, 95, NT, XP, 7, and 8. The key points discussed are:
- Windows uses a pre-emptive, priority-based scheduler with 32 priority levels and multi-level queues.
- The dispatcher determines thread execution order based on priority class and relative priority within class.
- Interactive threads get priority boosts after waits to improve response time.
- The foreground process in Windows XP gets preferential treatment over background processes.
- Later versions introduced improvements like user-mode scheduling and CPU cycle-based scheduling.
Unit 1: Architecture of Distributed Systems (karan2190)
The document discusses the architecture of distributed systems. It describes several models for distributed system architecture including:
1) The mini computer model which connects multiple minicomputers to share resources among users.
2) The workstation model where each user has their own workstation and resources are shared over a network.
3) The workstation-server model combines workstations with centralized servers to manage shared resources like files.
The document discusses process management in distributed operating systems, focusing on process migration: moving a running process from one node to another to optimize resource utilization. It covers key aspects such as process freezing, address space transfer, message forwarding, and maintaining communication between related processes that have migrated to different nodes. Efficient process migration requires mechanisms that provide transparency, minimize interference and overhead, and handle heterogeneity between system nodes.
The document discusses the concept of virtual memory. Virtual memory allows a program to access more memory than what is physically available in RAM by storing unused portions of the program on disk. When a program requests data that is not currently in RAM, it triggers a page fault that causes the needed page to be swapped from disk into RAM. This allows the illusion of more memory than physically available through swapping pages between RAM and disk as needed by the program during execution.
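The page-fault mechanism described above can be illustrated with a toy demand-paging simulation using FIFO replacement (the reference string and frame count are invented examples):

```python
# Count page faults for a reference string with a fixed number of
# physical frames, evicting the oldest resident page (FIFO).
from collections import deque

def count_faults(references, num_frames):
    frames = deque()           # FIFO order of resident pages
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1        # page fault: bring the page in from disk
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest resident page
            frames.append(page)
    return faults

print(count_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], num_frames=3))  # 9
```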
The document discusses alternatives to round robin reading for developing fluent readers. Round robin reading is discouraged as it decreases comprehension and creates anxiety. Alternatives mentioned include paired reading, reciprocal teaching, echo reading, choral reading, and shared reading. These strategies allow for more reading practice in an engaging way and help develop skills of fluent readers like self-correcting errors. The document encourages teachers to implement these strategies and seek help from literacy specialists.
The document discusses fair-share scheduling and traditional UNIX scheduling. For fair-share scheduling, it describes how CPU cycles are distributed among user groups based on their weights. It provides an example where three groups with different number of users are each allocated 33.3% of CPU cycles, which are then divided among the users in each group. For traditional UNIX scheduling, it explains the goal of ensuring good interactive response time while preventing background processes from starving. Priorities are recomputed every second based on process type, execution history, base priority, and user-adjustable nice value. Processes are divided into priority bands or queues.
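The fair-share arithmetic described above is easy to reproduce: each group gets an equal slice of the CPU (one third for three groups), which is then split evenly among that group's users. The group sizes here are illustrative, not from the document:

```python
# Fair-share scheduling arithmetic: equal share per group,
# divided evenly among the users within each group.
groups = {"A": 1, "B": 2, "C": 5}   # group -> number of users

group_share = 1.0 / len(groups)     # each group gets 1/3 of the CPU
per_user = {g: group_share / n for g, n in groups.items()}

for g, share in per_user.items():
    print(f"group {g}: {groups[g]} user(s), {share:.1%} of CPU each")
```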
This presentation covers CPU scheduling algorithms, scheduling criteria, process synchronization, multilevel feedback queues, the critical section problem, semaphores, and synchronization hardware.
This document discusses the evolution of software defined networking (SDN) and application-centric infrastructure. It describes how SDN has progressed from early implementations using OpenFlow (SDN 1.0) to separating the control and data planes (SDN 2.0) to the current approach of an application-centric infrastructure with a centralized controller and policy-based automation (SDN 3.0). It emphasizes how the new approach simplifies infrastructure management, enables intelligent services, and provides dynamic security through a centralized control plane.
A microkernel is the minimal core of an operating system, providing only the basic mechanisms needed to implement an OS: low-level address space management, thread management, and interprocess communication. Unlike with monolithic kernels, drivers, protocols, file systems, and other low-level services are developed in user space. This increases security and stability, since less code runs in kernel mode, and new features can be added without recompiling the kernel. Examples of microkernels include Mach and L4.
The document provides an overview of operating systems, including:
1) An operating system controls application program execution and acts as an interface between applications and hardware. Main objectives are convenience, efficiency, and ability to evolve.
2) Early operating systems evolved from serial processing to simple batch systems to multiprogrammed batch systems and time-sharing systems to better manage resources and improve processor utilization.
3) Modern operating systems employ processes to structure programs in execution, manage memory and system resources, and provide security and scheduling functions. Processes allow concurrent multi-tasking and protection between programs.
This document discusses symmetric multiprocessing (SMP). It defines SMP as processing programs with multiple processors that share a common operating system and memory. The key aspects of SMP are:
- Processors share memory and I/O bus or data path
- A single operating system controls all processors
- SMP architecture allows processors to access same memory simultaneously
- Advantages of SMP include improved performance, availability, and incremental scalability
- Limitations include additional operating system complexity and potential bottlenecking of the master CPU.
This document discusses microkernel operating system design. It begins by explaining the differences between kernel mode and user mode. It then describes different approaches to system structure, including monolithic, layered, and microkernel designs. The main advantages of microkernels are that they improve reliability, security, and extensibility by running most operating system services outside the kernel in user space. Specific microkernel-based operating systems mentioned include Mach, QNX, MINIX 3, and Apple MacOS Server.
This document discusses operating system architecture and kernel types. It defines the kernel as the fundamental part of the OS that provides secure access to hardware and decides resource allocation. Kernels can take different forms: monolithic kernels have all services in kernel space for good performance but are difficult to maintain; microkernels minimize the kernel to essential functions and put most services in user space for better modularity but more overhead; hybrid kernels combine aspects of monolithic and microkernels; nano and exokernels are more minimal.
This document discusses symmetric and asymmetric multiprocessor systems. Symmetric multiprocessing involves two or more identical processors that connect to a single shared memory and are controlled by a single operating system instance. An example is a Solaris system that can use multiple processors equally. Asymmetric multiprocessing means processors are not treated equally, such as only allowing one processor to perform I/O. Burroughs B5000/B5500 systems are examples where only one processor could access peripherals. Symmetric multiprocessing provides better load balancing and fault tolerance compared to asymmetric multiprocessing, which is simpler but can fail entirely if the operating system processor fails.
Here are the key steps in Preemptive SJF scheduling:
1. Jobs A, B, C, D arrive in that order with the given burst times
2. Job B has the shortest burst time of 4, so it preempts the running job and completes its 4 unit burst first
3. Job D has the next shortest remaining burst of 5, so it runs next
4. Job A then resumes and runs its remaining burst to completion
5. Job C, with the longest burst of 9, runs last
The average wait time is (9 + 0 + 15 + 2) / 4 = 6.5
This example shows how preemptive SJF always runs the job with the shortest remaining time.
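The burst times referenced above are not reproduced in this summary; the quoted waits (9, 0, 15, 2) are consistent with jobs A, B, C, D arriving at times 0, 1, 2, 3 with bursts 8, 4, 9, 5, which the sketch below assumes (a unit-step simulation, not the original document's example):

```python
def srtf_average_wait(jobs):
    """Preemptive SJF (shortest remaining time first), simulated 1 tu at a time."""
    remaining = {name: burst for name, (arrival, burst) in jobs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:              # no job has arrived yet; CPU idles
            t += 1
            continue
        job = min(ready, key=lambda n: remaining[n])  # shortest remaining time wins
        remaining[job] -= 1
        t += 1
        if remaining[job] == 0:
            finish[job] = t
            del remaining[job]
    waits = [finish[n] - arrival - burst for n, (arrival, burst) in jobs.items()]
    return sum(waits) / len(waits)

# Assumed arrivals/bursts; they reproduce the waits 9, 0, 15, 2 quoted above
jobs = {"A": (0, 8), "B": (1, 4), "C": (2, 9), "D": (3, 5)}
print(srtf_average_wait(jobs))  # 6.5
```

With these inputs, B preempts A at t = 1, and then D, A, and C finish in that order.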
1. The document discusses the basics of operating systems, including definitions, architecture, and booting process. It describes how an OS manages hardware and software resources and acts as an interface between users, applications, and hardware.
2. The key components of an OS architecture include the user, user applications, system programs, the operating system itself, and underlying hardware. The booting process involves POST, BIOS initialization, and loading the operating system kernel into memory.
3) The document provides several examples of OS functions like process management, memory management, file management, I/O management, security, and user interfaces. It also discusses different types of OSs such as batch, multiprogramming, and time-sharing systems.
The document discusses the First Come First Serve (FCFS) disk scheduling algorithm. FCFS is the simplest disk scheduling algorithm as requests are served in the order they arrive. It is easy to program and intrinsically fair, but does not provide optimal disk head movement as requests are not ordered for proximity. The document provides an example of the FCFS algorithm applied to a disk request queue, showing the path the disk head takes to fulfill the requests and the total head movement distance.
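The example queue from that document is not reproduced here, so the sketch below uses hypothetical track numbers and a hypothetical starting head position; only the order-of-arrival accounting matters:

```python
def fcfs_head_movement(start, requests):
    """Serve disk requests strictly in arrival order; total the head travel."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)  # seek distance for this request
        pos = track
    return total

# Hypothetical queue: head at track 50, requests served exactly as they arrived
print(fcfs_head_movement(50, [82, 170, 43, 140, 24]))  # 460
```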
The document discusses different types of scheduling algorithms. It describes cyclic scheduling, where a set of periodic tasks are executed repeatedly in a defined cycle. Round robin scheduling is also covered, where each task gets a time slice to execute in a cyclic queue before the next task runs. The round robin algorithm aims to be fair by giving each task an equal share of CPU time. Examples of using these algorithms for orchestra robots and VoIP are provided.
Ramification of games_round robin by Lester B. Panem
In a round robin tournament, the number of teams determines whether there will be a bye in each round or not. An even number of teams means no bye, while an odd number of teams means one bye per round. The number of games is calculated using the formula N(N-1)/2, where N is the number of teams. The document then provides examples of how 5, 6, 7, 8, 9, and 10 team tournaments would be structured over multiple rounds based on this formula.
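The pairings behind those examples can be generated with the standard circle method (team names here are placeholders). For 5 teams this yields one bye per round and N(N-1)/2 = 10 games:

```python
def round_robin_rounds(teams):
    """Circle method: fix the first team, rotate the rest; odd counts get a BYE."""
    teams = list(teams)
    if len(teams) % 2:                    # odd number of teams -> one bye per round
        teams.append("BYE")
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        rounds.append([(teams[i], teams[n - 1 - i]) for i in range(n // 2)])
        teams.insert(1, teams.pop())      # rotate every position except the first
    return rounds

rounds = round_robin_rounds(["T1", "T2", "T3", "T4", "T5"])
games = [g for r in rounds for g in r if "BYE" not in g]
print(len(rounds), len(games))  # 5 10
```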
The document discusses different CPU scheduling algorithms used in operating systems. It describes non-preemptive and preemptive scheduling and explains the key differences. It then covers four common scheduling algorithms - first come first served (FCFS), round robin, priority scheduling, and shortest job first (SJF) - and compares their advantages and disadvantages.
Building a Microkernel architecture based on the pipes-and-filters pattern (corehard_by)
1. The microkernel architectural pattern separates a minimal core functionality from extended functionality and customer-specific parts.
2. A microkernel system consists of a microkernel, internal servers that implement core functionality, external servers that implement interfaces and APIs, adapters that interface between clients and external servers, and clients.
3. This separation of concerns allows all versions of an application to share a common core while providing customized functionality and interfaces, improving flexibility, extensibility, and portability.
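A toy sketch of those roles (all class and method names are invented for illustration): the microkernel only routes requests, an internal server implements a piece of core functionality, and an external server exposes a client-facing API on top of the kernel:

```python
class Microkernel:
    """Minimal core: routes requests to registered internal servers."""
    def __init__(self):
        self._servers = {}
    def register(self, name, server):
        self._servers[name] = server
    def call(self, name, request):
        return self._servers[name].handle(request)

class SpellCheckServer:            # internal server: one piece of core functionality
    def handle(self, request):
        return f"checked: {request}"

class ExternalServer:              # exposes a client-facing API on top of the kernel
    def __init__(self, kernel):
        self._kernel = kernel
    def spell_check(self, text):   # adapter-style method that clients call
        return self._kernel.call("spell", text)

kernel = Microkernel()
kernel.register("spell", SpellCheckServer())
api = ExternalServer(kernel)
print(api.spell_check("hello"))  # checked: hello
```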
The Email Analyzer interface helps an analyst visualize the email network and identify local groups of people who frequently exchange emails among themselves. This interface was developed as an entry for VAST Challenge 2014.
For more information, please visit: http://people.cs.vt.edu/parang/ or contact parang at firstname at cs vt edu
Slides: Safeguarding Abila through Multiple Data Perspectives (Parang Saraf)
Abstract: This paper introduces a system for visual analysis of news articles, emails, GPS tracking data, financial transactions and streaming micro-blog data. The system was developed in response to the 2014 VAST Grand Challenge and comprises several interfaces for mining textual, network, spatio-temporal, financial, and streaming data.
Abstract: This paper introduces a system for visualization and analysis of geo-spatial and temporal data from call center and microblog sources. The system provides a streaming client for interacting with the data in real time. The system presents the data in a partitioned format using coordinated visualizations that allow the analyst to view the data in multiple dimensions simultaneously. This allows the user to see patterns that occur in space and time. This project was developed in response to the VAST 2014 Mini-Challenge 3.
Abstract: This paper introduces a system for visual analysis of GPS tracking and financial data. The system was developed in response to VAST Mini-Challenge 2 and comprises different interfaces for mining spatio-temporal and financial data.
This document summarizes two interfaces presented as part of a VAST challenge solution: a News Analyzer and Email Analyzer. The News Analyzer allows exploration of news articles to identify events using search, entity recognition, and visualization tools. The Email Analyzer visualizes email networks through a co-occurrence matrix, radial graph, and word cloud to examine relationships. Both interfaces were designed specifically for the challenge and utilize techniques like search engines, named entity recognition, spectral co-clustering, and D3 visualization.
The News Analyzer interface provides novel ways to visualize and analyze news articles collected from different sources. An analyst can identify trending entities such as names of people, organizations, and locations in the news articles and can also see how the corresponding trends have evolved over time. This was developed as an entry for VAST Challenge 2014.
EMBERS AutoGSR: Automated Coding of Civil Unrest Events (Parang Saraf)
Abstract: We describe the EMBERS AutoGSR system that conducts automated coding of civil unrest events from news articles published in multiple languages. The nuts and bolts of the AutoGSR system constitute an ecosystem of filtering, ranking, and recommendation models to determine if an article reports a civil unrest event and, if so, proceed to identify and encode specific characteristics of the civil unrest event such as the when, where, who, and why of the protest. AutoGSR has been deployed for the past 6 months, continually processing data 24x7 in languages such as Spanish, Portuguese, English and encoding civil unrest events in 10 countries of Latin America: Argentina, Brazil, Chile, Colombia, Ecuador, El Salvador, Mexico, Paraguay, Uruguay, and Venezuela. We demonstrate the superiority of AutoGSR over both manual approaches and other state-of-the-art encoding systems for civil unrest.
EMBERS at 4 years: Experiences operating an Open Source Indicators Forecastin... (Parang Saraf)
Abstract: EMBERS is an anticipatory intelligence system forecasting population-level events in multiple countries of Latin America. A deployed system from 2012, EMBERS has been generating alerts 24x7 by ingesting a broad range of data sources including news, blogs, tweets, machine coded events, currency rates, and food prices. In this paper, we describe our experiences operating EMBERS continuously for nearly 4 years, with specific attention to the discoveries it has enabled, correct as well as missed forecasts, and lessons learnt from participating in a forecasting tournament including our perspectives on the limits of forecasting and ethical considerations.
Slides: Forex-Foreteller: Currency Trend Modeling using News Articles (Parang Saraf)
Abstract: Financial markets are quite sensitive to unanticipated news and events. Identifying the effect of news on the market is a challenging task. In this demo, we present Forex-foreteller (FF) which mines news articles and makes forecasts about the movement of foreign currency markets. The system uses a combination of language models, topic clustering, and sentiment analysis to identify relevant news articles. These articles along with the historical stock index and currency exchange values are used in a linear regression model to make forecasts. The system has an interactive visualizer designed specifically for touch-sensitive devices which depicts forecasts along with the chronological news events and financial data used for making the forecasts.
Slides: Epidemiological Modeling of News and Rumors on Twitter (Parang Saraf)
This document presents research on modeling the spread of news and rumors on Twitter using epidemiological models. It finds that a SEIZ (Susceptible-Exposed-Infected-Skeptics) model more accurately describes the spread of information on Twitter compared to a simpler SIS (Susceptible-Infected-Susceptible) model, especially at the initial stages. The researchers analyzed several real-world events and found the SEIZ model produced lower errors between the model and actual tweet volumes. Parameters extracted from fitting the SEIZ model to data could help identify rumors versus factual news on Twitter. Limitations include not incorporating information about followers or population characteristics.
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent... (Parang Saraf)
Abstract: Topic modeling techniques have been widely used to uncover dominant themes hidden inside an unstructured document collection. Though these techniques first originated in the probabilistic analysis of word distributions, many deep learning approaches have been adopted recently. In this paper, we propose a novel neural network based architecture that produces distributed representation of topics to capture topical themes in a dataset. Unlike many state-of-the-art techniques for generating distributed representation of words and documents that directly use neighboring words for training, we leverage the outcome of a sophisticated deep neural network to estimate the topic labels of each document. The networks, for topic modeling and generation of distributed representations, are trained concurrently in a cascaded style with better runtime without sacrificing the quality of the topics. Empirical studies reported in the paper show that the distributed representations of topics represent intuitive themes using smaller dimensions than conventional topic modeling approaches.
EMBERS AutoGSR: Automated Coding of Civil Unrest Events (Parang Saraf)
EMBERS AutoGSR is a novel, web-based framework that generates a comprehensive database of validated civil unrest events using minimal human effort. AutoGSR has been deployed for the past 6 months, continually processing data 24x7 in an automated fashion. The system extracts civil unrest events of the type "who protested where, when and why?" from news articles published in over 7 languages and collected from 16 countries.
DMAP: Data Aggregation and Presentation Framework (Parang Saraf)
DMAP (Data Mining and Automation for Platforms) is an online framework that presents a wide variety of official data, news and information about companies. It was developed with an aim to act as a one-stop platform for displaying everything official related to a company and its competitors. It aggregates data from the following sources: Bing News, companies' official blogs, RSS sources, Facebook, Twitter, Google Trends, Crunchbase, financial data sources, and Alexa Web Analytics.
The aggregated information is shown in an intuitive fashion that allows a user to perform exploratory analysis of a particular company. Specifically, the interface makes it easy to do comparative analysis of a company with respect to its competitors. The user can use filters to select a given set of companies, data sources and date ranges. All the information is presented within the context of the framework, so the user doesn't have to visit different domains. The fetched news articles are cleaned, enriched, and presented in a manner that allows an analyst to navigate through news articles using named entities. For each news article, named entities (people, locations, organizations, and company names) are extracted. For a given search query, the interface returns matching news articles along with associated entities, which are shown using word clouds. This allows for easy discovery of connections between entities.
The framework was developed as part of a 2015 summer internship with The Center for Global Enterprise (CGE). Conceptualization, design and development of the framework were done by me during the three-month period.
Concurrent Inference of Topic Models and Distributed Vector Representations (Parang Saraf)
Bayesian Model Fusion for Forecasting Civil Unrest (Parang Saraf)
Abstract: With the rapid rise in social media, alternative news sources, and blogs, ordinary citizens have become information producers as much as information consumers. Highly charged prose, images, and videos spread virally, and stoke the embers of social unrest by alerting fellow citizens to relevant happenings and
spurring them into action. We are interested in using Big Data approaches to generate forecasts of civil unrest from open source indicators. The heterogeneous nature of the data, coupled with the rich and diverse origins of civil unrest, calls for a multi-model approach to such forecasting. We present a modular approach wherein a collection of models use overlapping sources of data to independently forecast protests. Fusion of alerts into one single alert stream becomes a key system informatics problem and we present a statistical framework to accomplish such fusion. Given an alert from one of the numerous models, the decision space
for fusion has two possibilities: (i) release the alert or (ii) suppress the alert. Using a Bayesian decision theoretical framework, we present a fusion approach for releasing or suppressing alerts. The resulting system enables real-time decisions and more importantly tuning of precision and recall. Supplementary materials for this article are available online.
‘Beating the News’ with EMBERS: Forecasting Civil Unrest using Open Source In... (Parang Saraf)
Abstract: We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific
predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over base-rate methods and its capability to forecast significant societal happenings.
Safeguarding Abila through Multiple Data Perspectives (Parang Saraf)
This document describes a visual analytics system developed to analyze multiple datasets related to the disappearance of employees from an organization. The system allows analysis of unstructured news articles, email headers, GPS tracking data, financial transactions, and streaming microblog data through different interactive interfaces. It was created as a solution to the 2014 VAST Grand Challenge to help law enforcement investigate the case by combining insights from all the different data sources.
Award: the Mini-Challenge 3 system won the "Honorable Mention for Effective Presentation" award.
3. Proportional Share Scheduling
• Given a set of clients with associated weights, a
proportional share scheduler should allocate
resources to each client in proportion to its
respective weight.
• Two types of proportional sharing:
o Varying frequency
o Varying time quantum
4. Perfect Fairness
• An ideal state in which each client receives service
exactly proportional to its share:
WA(t1, t2) = (t2 − t1) · SA / ∑i Si
where
• WA(t1, t2) is the amount of service received by client A
during the time interval (t1, t2).
• SA is the proportional share of client A.
• ∑i Si is the sum of the shares of all the clients.
5. Service Time Error (EA)
• The error EA is the difference between the amount of
time allocated to the client during the interval (t1, t2)
under the given algorithm and the amount of time that
would have been allocated under an ideal scheme that
maintains perfect fairness for all clients over all intervals:
EA = WA(t1, t2) − (t2 − t1) · SA / ∑i Si
• Measured in time units (tu).
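The two definitions above are easy to make concrete. A minimal sketch (not from the paper) that computes a client's perfectly fair allocation and its service time error, assuming unit time quanta and exact fractions:

```python
from fractions import Fraction

def ideal_allocation(share, total_shares, t1, t2):
    """Service a client would receive under perfect fairness over (t1, t2)."""
    return Fraction(share, total_shares) * (t2 - t1)

def service_time_error(received, share, total_shares, t1, t2):
    """EA = WA(t1, t2) minus the perfectly fair allocation over (t1, t2)."""
    return Fraction(received) - ideal_allocation(share, total_shares, t1, t2)

# Client A holds share 3 out of 6 total shares, observed over 4 time units:
# perfect fairness would give it 2 tu; if it actually ran 3 tu, its error is +1 tu.
print(service_time_error(3, 3, 6, 0, 4))  # 1
```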
8. Weighted Round-Robin (WRR)
• Clients are placed in a queue and allowed to
execute in turn.
• WRR provides proportional sharing by running all
clients with the same frequency but adjusting the
size of their time quanta.
• Scheduling overhead: O(1)
9. WRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Queue: A (3 tu) → B (2 tu) → C (1 tu)
• The service time error ranges from −1 tu to +1.5 tu.
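The claimed error range can be checked by replaying one WRR round quantum by quantum. A small sketch (the shares 3, 2, 1 and unit quanta are taken from the example above):

```python
from fractions import Fraction

shares = {"A": 3, "B": 2, "C": 1}
total = sum(shares.values())  # 6

# One WRR round: each client runs a burst equal to its share.
schedule = ["A"] * 3 + ["B"] * 2 + ["C"] * 1

received = {c: 0 for c in shares}
errors = []
for t, running in enumerate(schedule, start=1):
    received[running] += 1
    for c, s in shares.items():
        # E_c(0, t) = W_c(0, t) - t * S_c / sum(S_i)
        errors.append(Fraction(received[c]) - Fraction(t * s, total))

print(min(errors), max(errors))  # -1 3/2, i.e. the -1 tu to +1.5 tu range
```

The −1 tu extreme is hit by B just before its burst starts; the +1.5 tu extreme by A just after its burst ends.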
10. Fair-Share Scheduling
• Fair-Share provides proportional sharing among
users by adjusting the priorities of a user’s clients in a
suitable way.
• Proportional sharing is achieved by running the
clients at different frequencies.
• Scheduling overhead: O(1) – O(N)
Where N is the number of clients
12. Lottery Scheduling
• Each client is given a number of tickets proportional
to its share.
• A ticket is randomly selected by the scheduler and
the client that owns the selected ticket is scheduled
to run for a time quantum.
• Scheduling Overhead: O(log N) – O(N).
• Scheduling accuracy is probabilistic: because selection
is random, allocations match the ticket proportions only
on average over many quanta.
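Lottery scheduling is easy to sketch: each quantum, draw a ticket uniformly at random and run its owner. A minimal illustration (the ticket counts mirror the 3, 2, 1 shares used in the other examples; the seed is only there to make the sketch repeatable):

```python
import random
from collections import Counter

tickets = {"A": 3, "B": 2, "C": 1}
rng = random.Random(42)

def lottery_pick(tickets, rng):
    """Draw one ticket uniformly at random; the owning client runs next."""
    clients = list(tickets)
    return rng.choices(clients, weights=[tickets[c] for c in clients], k=1)[0]

# Over many quanta the allocation approaches 3:2:1 only on average --
# any single short interval can deviate, which is why accuracy is probabilistic.
counts = Counter(lottery_pick(tickets, rng) for _ in range(6000))
print(counts)
```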
13. Weighted Fair Queuing (WFQ)
• Virtual Time VTi(t)
o is a measure of the degree to which a client has received its
proportional allocation relative to other clients.
o advances at a rate inversely proportional to the client's share:
VTA(t) = WA(t) / SA
Where
WA(t) is the amount of service received by client A
SA is the share of client A
• Virtual Finish Time (VFT)
is the virtual time after running the client for one time quantum:
VFTA(t) = (WA(t) + Q) / SA, where Q is the time quantum.
14. Weighted Fair Queuing (WFQ)
• Clients are ordered in a queue sorted from smallest
to largest VFT.
• The first client in the queue is selected for execution.
• After the client is executed, its VFT is updated and
the client is inserted back into the queue.
• Its position in the queue is determined by its
updated VFT.
15. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: A (1/3), B (1/2), C (1/1)
16. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: B (1/2), A (2/3), C (1/1)
Execution so far: A
17. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: A (2/3), C (1/1), B (2/2)
Execution so far: A B
18. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: C (1/1), B (2/2), A (3/3)
Execution so far: A B A
19. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: B (2/2), A (3/3), C (2/1)
Execution so far: A B A C
20. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: A (3/3), B (3/2), C (2/1)
Execution so far: A B A C B
21. WFQ Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
• Initial VFTs are 1/3, 1/2 and 1/1 respectively.
Queue: A (4/3), B (3/2), C (2/1)
Execution so far: A B A C B A
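The walkthrough above can be reproduced mechanically. A minimal WFQ sketch (not the paper's code): a heap keyed on VFT, ties broken in queue order, exact fractions to avoid floating-point drift, and a unit time quantum assumed throughout:

```python
import heapq
from fractions import Fraction

def wfq_schedule(shares, quanta):
    """Run WFQ for the given number of unit quanta; returns the execution order."""
    heap, seq = [], 0
    for name, share in shares:
        # Initial VFT is Q / S: the virtual time after one quantum of service.
        heapq.heappush(heap, (Fraction(1, share), seq, name, share))
        seq += 1
    order = []
    for _ in range(quanta):
        vft, _, name, share = heapq.heappop(heap)  # smallest VFT runs first
        order.append(name)
        seq += 1                                   # later insertion breaks VFT ties
        heapq.heappush(heap, (vft + Fraction(1, share), seq, name, share))
    return order

# Shares 3, 2, 1 reproduce the execution row built up on the slides:
print(wfq_schedule([("A", 3), ("B", 2), ("C", 1)], 6))  # ['A', 'B', 'A', 'C', 'B', 'A']
```

The tie-break matters at the fourth quantum, when A, B, and C all sit at VFT 1: the client that has waited longest (C) goes first, matching the slides.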
22. WFQ Performance
• For the last example, the service time error ranges from
−5/6 tu to +1 tu.
• It never drops below −1 tu: a client can never fall behind
its ideal allocation by more than a single time quantum.
• Scheduling Overhead: O(log N) – O(N).
• The closest of the algorithms discussed to perfect
proportional sharing.
23. Virtual-Time Round-Robin (VTRR)
Algorithm:
1) Order the clients in the run queue from largest to smallest
share. Unlike fair queuing, a client's position in the run
queue only changes when its share changes.
2) Starting from the beginning of the queue, run each client
for one time quantum in a round-robin manner.
3) In step 2, if a client has received more than its
proportional allocation, skip the remaining clients in the
run queue and start again from the beginning. Since the
clients with larger shares are placed first in the queue,
this allows them to receive more service than the lower-
share clients at the end of the queue.
24. Basic VTRR Algorithm
For each client, VTRR associates the following:
1) Share: the resource rights of a client.
2) Virtual Finish Time (VFT): the number of time quanta the
client will have received after its next quantum, divided by
its share.
3) Time Counter: tracks the number of quanta the client must
receive before a period is over.
4) ID Number: uniquely identifies a client.
5) Run State: whether the client can be executed or not.
25. Basic VTRR Algorithm
VTRR maintains a scheduler state with the following:
1) Time Quantum: the length of one time quantum.
2) Run Queue: queue of all runnable clients, sorted from
largest to smallest share.
3) Total Shares: sum of the shares of all the clients.
4) Queue Virtual Time (QVT): a measure of what a client's
VFT should be if it has received exactly its proportional
share allocation.
26. Basic VTRR Algorithm
Queue Virtual Time (QVT)
QVT advances whenever any client executes, at a rate
inversely proportional to the total shares (the same for
every client):
QVT(t + Q) = QVT(t) + Q / ∑i Si
Where,
Q is the system time quantum
Si is the share of client i
The difference between the QVT and a client's virtual time is
a measure of whether the client has consumed its
proportional allocation of resources or not.
27. Basic VTRR Algorithm
Role of Time Counters
o Scheduling Cycle: a sequence of allocations whose length
is equal to the sum of all client shares. In our example it is
6 (3 + 2 + 1).
o The time counter for each client is reset at the beginning of
each scheduling cycle to the client's share value, and is
decremented every time the client receives a time quantum.
o At the end of a scheduling cycle, the time counter of every
client should be equal to zero.
28. Basic VTRR Algorithm
Role of Time Counters
o Time Counter Invariant: for any two consecutive clients A
and B in the queue, with A ahead of B,
TCA ≥ TCB
o Preserving the invariant: if the counter value of the next
client is greater than that of the current client, the
scheduler makes the next client the current client and
executes it for a quantum unconditionally.
29. Basic VTRR Algorithm
VFT Inequality: to run the next client C in the run queue,
the scheduler requires
VFTC(t) − QVT(t) < Q / SC
Where,
Q is the time quantum
SC is the share of client C
o Only if the VFT inequality holds does the scheduler select
and execute the next client in the run queue.
o If at any point the VFT inequality does not hold, the
scheduler returns to the beginning of the run queue
and selects the first client to execute.
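Slides 23–29 can be combined into a compact simulation. The sketch below is one interpretation of the algorithm rather than the paper's Linux code; it assumes a unit quantum, exact fractions, that the front client always runs, that a strictly larger time counter forces the next client to run, and that otherwise the VFT inequality decides:

```python
from fractions import Fraction

def vtrr_schedule(shares, quanta):
    """shares: list of (name, share), largest share first. Returns execution order."""
    names = [n for n, _ in shares]
    s = dict(shares)
    total = sum(s.values())
    vft = {n: Fraction(1, s[n]) for n in names}  # VFT after the next quantum
    counter = dict(s)                            # time counters start at the shares
    qvt = Fraction(0)
    order, pos = [], 0                           # start at the front of the queue
    for _ in range(quanta):
        c = names[pos]
        order.append(c)
        vft[c] += Fraction(1, s[c])
        counter[c] -= 1
        qvt += Fraction(1, total)                # QVT advances by Q / sum(S_i)
        if sum(counter.values()) == 0:           # scheduling cycle over: reset
            counter = dict(s)
            pos = 0
            continue
        nxt = pos + 1
        if nxt == len(names):
            pos = 0                              # end of queue: wrap to the front
        elif counter[names[nxt]] > counter[c]:
            pos = nxt                            # preserve the time counter invariant
        elif vft[names[nxt]] - qvt < Fraction(1, s[names[nxt]]):
            pos = nxt                            # VFT inequality holds: run next client
        else:
            pos = 0                              # inequality fails: back to the front
    return order

# Shares 3, 2, 1 reproduce the slides' execution row for one cycle:
print(vtrr_schedule([("A", 3), ("B", 2), ("C", 1)], 6))  # ['A', 'B', 'C', 'A', 'B', 'A']
```

Note how the sixth quantum goes to A: C's inequality (2 − 1 < 1) fails, so the scheduler returns to the front of the queue, exactly as in the example that follows.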
30. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 3 2 1
Client A B C
VFT 1/3 1/2 1/1
Execution
QVT 1/6 2/6 3/6 4/6 5/6 6/6
31. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 3 2 1
Client A B C
VFT 1/3 1/2 1/1
Execution
QVT 1/6 2/6 3/6 4/6 5/6 6/6
VFT inequality check: 1/3 − 2/6 < 1/3 ✓
32. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 2 2 1
Client A B C
VFT 2/3 1/2 1/1
Execution A
QVT 1/6 2/6 3/6 4/6 5/6 6/6
VFT inequality check: 1/2 − 3/6 < 1/2 ✓
33. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 2 1 1
Client A B C
VFT 2/3 2/2 1/1
Execution A B
QVT 1/6 2/6 3/6 4/6 5/6 6/6
VFT inequality check: 1/1 − 4/6 < 1/1 ✓
34. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 2 1 0
Client A B C
VFT 2/3 2/2 2/1
Execution A B C
QVT 1/6 2/6 3/6 4/6 5/6 6/6
VFT inequality check: 2/3 − 5/6 < 1/3 ✓
35. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 1 1 0
Client A B C
VFT 3/3 2/2 2/1
Execution A B C A
QVT 1/6 2/6 3/6 4/6 5/6 6/6
VFT inequality check: 2/2 − 6/6 < 1/2 ✓
36. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 1 0 0
Client A B C
VFT 3/3 3/2 2/1
Execution A B C A B
QVT 1/6 2/6 3/6 4/6 5/6 6/6
37. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 1 0 0
Client A B C
VFT 3/3 3/2 2/1
Execution A B C A B
QVT 1/6 2/6 3/6 4/6 5/6 6/6
VFT inequality check: 3/3 − 7/6 < 1/3 ✓
38. VTRR Example
• 3 clients A, B, and C having shares 3, 2, and 1
respectively.
Time Counter 1 0 0
Client A B C
VFT 3/3 3/2 2/1
Execution A B C A B A
QVT 1/6 2/6 3/6 4/6 5/6 6/6
39. VTRR Dynamic Considerations
Inserting a new client
o A new client is inserted into the run queue so that the run
queue remains sorted from largest to smallest client share.
o The client's implicit virtual time is set to the current QVT,
so its VFT becomes:
VFTA = QVT + Q / SA
o The initial counter value is scaled to the portion of the
scheduling cycle that remains, so that the time counter
invariant is preserved.
40. VTRR Dynamic Considerations
Removing and re-inserting an existing client
o When a client becomes not runnable, it is removed from
the queue, and its last-previous and last-next clients are
recorded.
o Its VFT is no longer updated while it is out of the queue;
on re-insertion the VFT is recomputed from the current
QVT, as for a newly inserted client.
o Similarly, if the client is re-inserted in the same cycle in
which it was removed, the counter is set to the minimum
of CA (the value a newly inserted client would receive)
and its previous counter value.
41. VTRR Complexity
• A client is selected to execute in O(1) time.
• Updating the client’s VFT and the current run queue
are both O(1) time operations.
• The complete counter reset takes O(N) time, where
N is the number of clients.
o However, this reset is done only once per scheduling cycle,
and a cycle contains at least N quanta.
o As a result, the cost of resetting the time counters is
amortized across the allocations in the cycle, giving an
effective running time of O(1) per scheduling decision.
42. VTRR: Measurements & Results
• Conducted simulation studies to compare the
proportional sharing accuracy of VTRR against both
WRR and WFQ.
• The system used had the following configuration:
o Gateway 2000 E1400 system
o One 433 MHz Intel Celeron CPU
o 128 MB RAM
o 10 GB hard drive
o Red Hat Linux 6.1 distribution running Linux 2.2.12-20 kernel.
• Timestamps were read from hardware cycle
counter registers.
48. Conclusion & Future Work
• VTRR is simple to implement and easy to integrate
into existing commercial operating systems.
o Implementing VTRR requires just 100 lines of code in Linux.
• VTRR combines the benefits of accurate
proportional share resource management with very
low overhead.
• Future work involves evaluating VTRR in a multiprocessor
context:
o Group Ratio Round-Robin: O(1) Proportional Share
Scheduling for Uniprocessor and Multiprocessor Systems;
2005 USENIX Annual Technical Conference