From a feasibility perspective, preemptive scheduling strictly dominates non-preemptive scheduling: every task system that is feasible under non-preemptive scheduling is also feasible under preemptive scheduling, but the converse is not always true.
Preemptive scheduling carries overhead of its own: context switching, and arbitrating access to shared resources in critical sections so that only one task can access a resource at any instant of time.
The objective is therefore to reduce the number of preemptions.
Determine the largest chunk size in which each task can be scheduled non-preemptively. Moreover, if a task needs a shared resource for a period shorter than this chunk, access to the resource can be arbitrated simply by having the task execute non-preemptively while holding it.
A sporadic task system in which every real-time sporadic task is defined by
τ_i = (e_i, d_i, p_i)
e_i => worst-case execution requirement
d_i => relative deadline
p_i => minimum inter-arrival separation
It is further assumed that the task system is schedulable under preemptive scheduling.
Objective: to determine the largest value q_i for each τ_i ∈ τ such that τ remains feasible if the jobs of τ_i are scheduled in non-preemptive chunks, each of size no larger than q_i.
A priority-driven scheduling algorithm, with higher priority assigned to the request with the earlier deadline.
EDF is optimal on a uniprocessor: if EDF cannot schedule a task set on a uniprocessor, no scheduling algorithm can.
If all tasks are periodic and have relative deadlines equal to their periods, this algorithm will feasibly schedule a periodic task set as long as its utilization factor satisfies U = summation(e_i / p_i) ≤ 1.
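As a minimal sketch (the task tuples (e_i, p_i) are hypothetical, not from the text), the implicit-deadline EDF test reduces to a utilization sum:

```python
def edf_feasible(tasks):
    """EDF feasibility for periodic tasks whose relative deadlines
    equal their periods: schedulable iff total utilization <= 1."""
    return sum(e / p for e, p in tasks) <= 1

# Utilization 1/4 + 2/5 + 1/10 = 0.75, so the set is schedulable.
print(edf_feasible([(1, 4), (2, 5), (1, 10)]))
```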
Define the demand bound function DBF(τ_i, t) as the largest cumulative execution requirement of all jobs that can be generated by τ_i that have both their arrival times and deadlines within a contiguous interval of length t.
DBF over an interval [t_0, t_0 + t) is maximised when the first job arrives at t_0 and successive jobs arrive as soon as possible, i.e., at t_0 + p_i, t_0 + 2p_i, …
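That early-and-dense arrival pattern yields a standard closed form; the sketch below assumes integer parameters and is not taken verbatim from the paper:

```python
from math import floor

def dbf(e, d, p, t):
    """Demand bound function of a sporadic task (e, d, p) over any
    interval of length t: the first job arrives at the interval start,
    later jobs arrive every p time units."""
    if t < d:
        return 0  # no job can have both arrival and deadline inside
    return (floor((t - d) / p) + 1) * e

# Task (e=2, d=5, p=10): one job fits in a length-5 interval, two jobs
# (4 execution units) in a length-15 interval.
print(dbf(2, 5, 10, 5), dbf(2, 5, 10, 15))
```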
Approach: compute the largest values of q_i such that the infeasibility conditions are not satisfied.
Suppose a system τ is not schedulable under EDF, and derive properties that it must satisfy.
As τ is not schedulable, it must generate a legal collection of jobs on which EDF would miss some deadline. Let σ(τ) denote the smallest such legal collection of jobs, t_f the instant at which a deadline is missed, and t_a the earliest arrival time of any job in σ(τ).
The processor is never idle over [t_a, t_f) in the EDF schedule of σ(τ).
At most one job in σ(τ) has a deadline greater than t_f.
If no job has a deadline > t_f, then the cumulative demand over [t_a, t_f) exceeds the length of the interval: Σ_i DBF(τ_i, t_f − t_a) > t_f − t_a.
Suppose instead there is one job with deadline > t_f. Let τ_j denote the task that generates this job, and let [t_1, t_2] denote the last contiguous time interval during which it executes in non-preemptive mode. Then the blocking chunk plus the demand of the remaining tasks exceeds the interval: q_j + Σ_{i ≠ j} DBF(τ_i, t_f − t_1) > t_f − t_1, where t_f − t_1 < d_j.
From these we conclude that a restricted sporadic task system is not schedulable if there is some t > 0 with Σ_i DBF(τ_i, t) > t, or there is a task τ_j and some t < d_j with q_j + Σ_{i ≠ j} DBF(τ_i, t) > t.
From these conditions, the value of q_i can be calculated iteratively while performing the feasibility analysis.
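A rough sketch of that computation, assuming integer timing parameters and the condition that q_j plus the demand of the other tasks must not exceed t for any relevant t below d_j (function names and the skip-when-no-demand refinement are mine, not the paper's):

```python
from math import floor

def dbf(e, d, p, t):
    """Demand bound function of a sporadic task over length t."""
    return 0 if t < d else (floor((t - d) / p) + 1) * e

def largest_chunk(tasks, j):
    """Largest non-preemptive chunk q_j for tasks[j] = (e, d, p):
    whenever other tasks have demand over a window of length t < d_j,
    the chunk must leave them room, i.e. q_j <= t - demand."""
    e_j, d_j, p_j = tasks[j]
    q = e_j  # a chunk never needs to be larger than the task itself
    for t in range(1, d_j):
        others = sum(dbf(e, d, p, t)
                     for k, (e, d, p) in enumerate(tasks) if k != j)
        if others:  # with no competing demand, nothing can be blocked
            q = min(q, t - others)
    return max(q, 0)
```

With a single task the loop imposes no constraint, so the whole task can run as one chunk; a short-deadline competitor shrinks the chunk of a longer task.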
In uniprocessor systems, preemptive scheduling dominates non-preemptive scheduling with respect to feasibility, if one ignores the overheads of preemption (context switching, resource sharing, etc.), which can be significant for many applications.
The author gives an algorithm to determine the largest chunk size that can be scheduled non-preemptively while the system still meets all its deadlines.
Advantages :
If a resource is required for a time less than this chunk size, then the resource access can be arbitrated by simply having the task use the resource non-preemptively, instead of using complex resource sharing algorithms.
Run-time scheduling is simplified by the knowledge that a task that gains access to the shared processor may run for a certain time before being preempted by another task. Context-switching overhead is reduced.
Other attempts in this direction include Baker's, which assigns preemption levels and allows a task to preempt only tasks at a lower level than itself.
7.
Deadline Fair Scheduling: Bridging the Theory and Practice of Proportionate Fair Scheduling in Multiprocessor Systems Abhishek Chandra, Micah Adler and Prashant Shenoy - 2001
Streaming audio and video applications have timing constraints, but unlike hard real-time applications, occasional violations of their deadlines do not have catastrophic consequences.
A P-Fair scheduler allows an application to request x_i time units every y_i time quanta and guarantees that over any T quanta, a continuously running application receives between floor[(x_i / y_i) · T] and ceiling[(x_i / y_i) · T] quanta of service.
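Those bounds are simple to compute; a sketch with hypothetical names:

```python
from math import floor, ceil

def pfair_bounds(x, y, T):
    """Quanta a continuously running task requesting x units every y
    quanta must receive over any window of T quanta."""
    share = x / y
    return floor(share * T), ceil(share * T)

# A task asking for 1 quantum in every 3 gets 3 or 4 quanta out of 10.
print(pfair_bounds(1, 3, 10))
```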
Simulations have shown that in practical deployments of P-Fair schedulers, asynchrony among the processors and frequent arrivals and departures of tasks can make the system non-work-conserving.
Most of the research focuses on theoretical analysis of these schedulers.
The authors here have considered practical issues of implementing the scheduler into a multi-processor operating system kernel.
Also, to make the system work conserving, the P-Fair scheduler is coupled with an auxiliary scheduler.
Also as this is a multiprocessor environment, processor affinities are to be considered for better performance by making use of cached data.
This is a strong notion of fairness: at any given instant, no application is more than one quantum away from its due share.
Let Φ_i denote the share of processor bandwidth requested by task i in a p-processor system.
Then over any T time quanta, a continuously running application should receive between floor[(Φ_i / Σ_j Φ_j) · pT] and ceiling[(Φ_i / Σ_j Φ_j) · pT] quanta of service.
DFS schedules each task periodically according to its share Φ_i. It uses an eligibility criterion to determine the tasks eligible for scheduling; once scheduled, a task becomes ineligible until its next period begins. Each eligible task is stamped with an internally generated deadline, and DFS schedules the eligible tasks in earliest-deadline-first order.
Each task is associated with a share Φ_i, a start tag S_i and a finish tag F_i. When a task executes, its start tag is updated at the end of the quantum to S_i + q/Φ_i, where q is the duration for which the task ran. If a suspended task wakes up, its S_i is set to the maximum of its current S_i and the current virtual time.
The finish tag F_i is updated to S_i + q′/Φ_i, where q′ is the maximum time for which the task can run the next time it is scheduled.
At each scheduling instant, the scheduler determines the eligible tasks using an eligibility criterion and then computes deadlines for them; both operations use S_i and F_i.
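The tag bookkeeping can be sketched as follows (class and function names are mine, not the paper's):

```python
class Task:
    def __init__(self, phi):
        self.phi = phi      # requested share of processor bandwidth
        self.start = 0.0    # start tag S_i
        self.finish = 0.0   # finish tag F_i

def charge(task, q, q_next):
    """After the task ran for q time units, advance its start tag;
    q_next is the longest it may run when scheduled next."""
    task.start += q / task.phi
    task.finish = task.start + q_next / task.phi

def wake(task, virtual_time):
    """A task waking from suspension cannot reclaim the share it
    slept through: its start tag jumps to the current virtual time."""
    task.start = max(task.start, virtual_time)
```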
DFS has been proven work conserving under the assumptions of a fixed task set and synchronized, fixed-length quanta. Neither assumption need hold in a typical multiprocessor system.
DFS is combined with an auxiliary scheduler to make the system work conserving. It maintains two queues: one containing the eligible tasks and one containing all tasks. If there are no eligible tasks for DFS to schedule, the auxiliary scheduler uses the second queue and schedules a task that is currently ineligible.
To take processor affinities into account, instead of sorting on deadline alone, a combination (e.g. linear) of deadline and affinity can be used, where affinity is 0 for the processor on which the task last ran and 1 for all others. The authors call this the goodness factor; the scheduler picks the task with the smallest goodness factor, making this a global scheduling policy.
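A linear goodness factor might look like the following sketch; the weight alpha is a hypothetical parameter, since the paper only says some combination of deadline and affinity is used:

```python
def goodness(deadline, last_cpu, cpu, alpha=1.0):
    """Smaller is better: deadline plus an affinity penalty that is 0
    on the processor the task last ran on and 1 elsewhere."""
    affinity = 0 if cpu == last_cpu else 1
    return deadline + alpha * affinity

# On CPU 0, a task that last ran there beats an equal-deadline task
# that last ran elsewhere.
print(goodness(5, 0, 0), goodness(5, 1, 0))
```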
12.
Early Release Fair Scheduling James H Anderson and Anand Srinivasan - 2000
P-Fair scheduling algorithms schedule tasks by breaking them into quantum-length subtasks which are given intermediate deadlines.
In P-Fair, if some subtask of a task executes early within its window, the task becomes ineligible for scheduling until the start of the window of its next subtask.
Each task T is associated with a period T.p and an execution cost T.e. Every T.p time units, an invocation of T with cost T.e takes place; this is a job of T. Each job of a task must complete before the next job of the same task can begin.
T can be allocated on different processors, provided it is not scheduled on more than one processor at the same time.
T.e / T.p is the weight of a task. It is assumed that the weight is strictly less than 1 (weight 1 task would require a dedicated processor which makes the scheduling decision easier).
A P-Fair scheduler allows an application to request x_i time units every y_i time quanta and guarantees that over any T quanta, a continuously running application receives between floor[(x_i / y_i) · T] and ceiling[(x_i / y_i) · T] quanta of service.
This is a strong notion of fairness: at any given instant, no application is more than one quantum away from its due share.
Assumptions:
Quantum duration is fixed.
Set of tasks in the system is fixed.
Lag of a task T is defined as the difference between the amount of time allocated to the task and what would have been allocated to it in an ideal system.
The Early Release Scheduling Algorithm is derived by dropping the -1 lag constraint.
Lag(T, t) < 1
Every P-Fair schedule is ER-Fair, but the converse need not be true.
Every ER-Fair schedule is periodic:
Lag(T, t) = 0 for t = T.p, 2T.p, 3T.p, …
The reason is that for these values of t, (T.e / T.p) · t is an integer. By the constraint Lag(T, t) < 1, the lag must then be zero or negative; but a negative lag would mean the task received more time than it requested. Hence Lag(T, t) = 0 for these values of t.
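This argument is easy to check numerically (a sketch; lag is the fluid (ideal) allocation minus the actual allocation):

```python
def lag(e, p, allocated, t):
    """Lag of a task with cost e and period p at time t, given the
    quanta actually allocated so far.  Computed as e*t/p to keep the
    division exact at period boundaries."""
    return e * t / p - allocated

# At t = p and t = 2p the fluid share (e/p)*t is an integer, so a task
# that received exactly one job's worth per period has lag 0.
print(lag(2, 3, 2, 3), lag(2, 3, 4, 6))
```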
Baruah et al. showed that a periodic task set has a P-Fair schedule on a system of M processors iff summation(T.e / T.p) ≤ M. Since every P-Fair schedule is ER-Fair, the same feasibility condition applies to ER-Fair schedules.
Each subtask T_i of T has an associated pseudo-release and pseudo-deadline (referred to simply as release and deadline):
r(T_i) = floor[(i−1) · T.p / T.e]
r(T_i) is the first slot into which T_i can be scheduled.
d(T_i) = ceiling[i · T.p / T.e] − 1
w(T_i) = [r(T_i), d(T_i)]
w(T_i) is the window of the subtask.
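With weight T.e/T.p, the windows can be computed in integer arithmetic (a sketch; names are mine):

```python
def window(i, e, p):
    """Window [release, deadline] of the i-th subtask (1-indexed) of a
    task with execution cost e and period p, i.e. weight e/p."""
    release = ((i - 1) * p) // e          # floor((i-1) / (e/p))
    deadline = (i * p + e - 1) // e - 1   # ceil(i / (e/p)) - 1
    return release, deadline

# Weight 2/3: subtask windows [0, 1] and [1, 2] inside period 3.
print(window(1, 2, 3), window(2, 2, 3))
```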
The only difference between P-Fair and ER-Fair lies in the eligibility criterion. In P-Fair, a subtask T_i is eligible at time t if t ∈ w(T_i) and T_{i−1} has been scheduled before t but T_i has not. In ER-Fair, if T_i and T_{i+1} are part of the same job, then T_{i+1} becomes eligible for execution immediately after T_i executes.
The authors propose the ER-Fair scheduling algorithm by dropping the −1 lag constraint of P-Fair scheduling, so that some subtasks can execute early, before the start of their windows. This overcomes the non-work-conserving nature of P-Fair schedules.
A hybrid system can also be proposed in which only a few selected tasks may be released early. This is useful if a small subset of tasks is subject to stringent response-time requirements.
It may also be possible to determine dynamically when and by how much the subtasks might be released early.
18.
Bounds on the Performance of Heuristic Algorithm for Multiprocessor Scheduling of Hard-Real Time Tasks Fuxing Wang, Krithi Ramamritham and John A Stankovic - 1992
To determine feasible non-preemptive schedules on multiprocessor systems, list scheduling can be used. While list scheduling has a good worst-case schedule length, it does not have good average-case performance.
A heuristic alternative is the H scheduling algorithm. It has good average-case performance with respect to meeting deadlines, but a poor worst-case schedule length.
The goal is to combine the features of the list and H scheduling algorithms so that the result performs well both in finding feasible schedules and in its schedule-length bound.
To assign a set of real-time tasks to processors and additional resources such that all tasks meet their resource and timing requirements.
Given
A set of m identical processors in a homogeneous multiprocessor system. Each processor is capable of executing any task.
A set of r resources, such as data sets and buffers. Resources may be discrete with multiple instances, or continuous (a resource is continuous if a task can request any portion of it), and are renewable (a resource is renewable if its total amount is fixed; resources are not consumed by the tasks).
A set of n tasks, each characterized by its worst case computation time, its deadline and its resource requirement vector.
Assume that the tasks are aperiodic, independent and non-preemptable, and that the resources requested by a task are held throughout the task's execution time.
The performance criterion is to minimize the maximum completion time.
A scheduling problem with these characteristics is computationally hard, so a heuristic approach is taken.
Every task has a priority defined (may depend on deadline, resource requirement or some combination).
Tasks are arranged in a ready queue sorted on their priority values.
When a processor becomes idle, it scans the ready queue and selects the first task that does not violate resource constraints.
List scheduling does not have good average-case performance: moving a lower-priority task up in the schedule while higher-priority tasks are blocked by resource constraints can cause the higher-priority tasks to miss their deadlines.
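The selection step above can be sketched as follows (the task representation is mine); it also illustrates the weakness just noted, since a blocked high-priority task is simply skipped:

```python
def pick_next(ready_queue, free):
    """Scan the priority-ordered ready queue and return the first task
    whose resource demands can be met from the free resources."""
    for task in ready_queue:
        if all(free.get(r, 0) >= n for r, n in task["needs"].items()):
            return task
    return None

# The highest-priority task wants 2 buffers but only 1 is free, so the
# scheduler skips it and runs the lower-priority task instead.
queue = [{"name": "hi", "needs": {"buf": 2}},
         {"name": "lo", "needs": {"buf": 1}}]
print(pick_next(queue, {"buf": 1})["name"])
```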
The H Scheduling Algorithm
Priority is calculated as
h(T_i) = d_i + W_i · b_i
The higher the value of h, the lower the priority.
The tasks are sorted on this priority value.
Whenever a processor becomes free, it schedules the task with the highest priority.
Unlike list scheduling, it does not try to be greedy about processor usage.
The authors have proposed a combination of the list and H scheduling algorithms, calling it the H_k scheduling algorithm.
It uses the same heuristic as H scheduling algorithm, but tries to be greedy to a certain degree with respect to processor usage.
H_k maintains a variable t_ck which divides the schedule into two parts. In the first portion, every sub-interval either has at least k busy processors, or has fewer than k busy processors but adding any task would cause resource contention.
t_ck is formally set to the maximum possible value such that in any sub-interval [x, y] of [0, t_ck), either at least k processors are busy, or fewer than k processors are busy but no other task can be added because its addition would cause a resource conflict.
H_k then applies the highest-priority-first rule to schedule a task that can fit into the partial schedule before t_ck.
The time complexity of the H_k scheduling algorithm is very high when k > 2; only H_2 has time complexity comparable to those of the H and list scheduling algorithms. Also, for non-uniform tasks the schedule-length bound does not improve for k > 2, so the authors focused mainly on H_2.
Analyses showed that H_2 produces better worst-case schedule-length bounds than the H algorithm, and is almost as good as the H algorithm at finding feasible schedules.