Processes are distributed equitably, so that no node is idle while there is a queue at some node</li></li></ul><li>Features of Good Global Scheduling<br /><ul><li>No a-priori knowledge about processes
Dynamic assignment elements may work in isolation in the non-cooperative mode. In the cooperative mode they share information so that a more uniform assignment can be made</li></li></ul><li>Issues in Designing Load Balancing Algorithms<br /><ul><li>Load estimation policy
The threshold is derived from the average workload of all the nodes; state information is exchanged so that the threshold can be determined at any time
One may use a single-threshold policy or a dual-threshold policy</li></li></ul><li>Location Policies<br /><ul><li>Once the transfer policy is decided, where to locate a process is determined by one of the following methods
Poll randomly chosen nodes. Stop when a partner is found or the polling limit is reached</li></li></ul><li>Priority Assignment policies<br /><ul><li>Priority assignment rules are required for local as well as remote processes
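The random-polling location policy above can be sketched as follows (a minimal sketch: `find_transfer_partner`, the node list and the suitability test are illustrative names, and the poll limit is an assumed tunable):

```python
import random

POLL_LIMIT = 5  # assumed tunable: maximum probes per transfer attempt

def find_transfer_partner(nodes, is_suitable, poll_limit=POLL_LIMIT):
    """Poll randomly chosen nodes without repetition; stop at the first
    suitable partner or when the polling limit is reached."""
    candidates = list(nodes)
    for _ in range(min(poll_limit, len(candidates))):
        node = random.choice(candidates)
        candidates.remove(node)
        if is_suitable(node):  # e.g. the node's queue is below the threshold
            return node
    return None  # no partner found: execute the process locally
```

If polling fails, the sender simply keeps the process, so the policy degrades gracefully under high load.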
The rule depends on the numbers of local and remote processes: local processes get priority if they are more numerous, and vice versa</li></li></ul><li>Migration limiting policies<br /><ul><li>Uncontrolled
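The count-based priority rule above might look like this (a sketch: `pick_next` and the queue representation are assumptions, not from the original):

```python
def pick_next(local_queue, remote_queue):
    """Whichever class currently has more processes gets priority:
    pop the next process from the larger queue (local wins ties)."""
    if local_queue and len(local_queue) >= len(remote_queue):
        return local_queue.pop(0)
    if remote_queue:
        return remote_queue.pop(0)
    return local_queue.pop(0) if local_queue else None
```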
Any process arriving at a node is welcome. Can cause instability
Controlled: use a migration count on a process to limit how many times it may be migrated. Some designers favor a migration count of 1; others favor a value greater than 1, particularly for long tasks</li></li></ul><li>Load-sharing approach<br /><ul><li>Issues in load-sharing algorithms
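The migration-limiting policy above can be sketched with a per-process migration count (names are illustrative; a limit of 1 reflects the "migrate at most once" design camp):

```python
MIGRATION_LIMIT = 1  # assumed: the "migrate at most once" design choice

def may_migrate(process):
    """Allow a further migration only while the count is under the limit."""
    return process["migration_count"] < MIGRATION_LIMIT

def migrate(process, destination):
    """Move the process and charge one migration against its count."""
    if not may_migrate(process):
        raise RuntimeError("migration limit reached; execute locally")
    process["migration_count"] += 1
    process["node"] = destination
```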
Lightly loaded nodes broadcast that they can receive processes, or poll randomly for heavily loaded nodes with which to share load. A node is eligible to send a process only if sending that process does not bring its own load below the threshold, making it lightly loaded.</li></li></ul><li>State Information Exchange Policies<br /><ul><li>Broadcast when state changes
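The sender-eligibility rule above reduces to a one-line check (a sketch; the queue-length threshold value is an assumption):

```python
THRESHOLD = 2  # assumed queue-length threshold separating light and heavy load

def can_send(queue_length, threshold=THRESHOLD):
    """A node may ship a process out only if doing so does not drop its
    own queue below the threshold, i.e. make it lightly loaded itself."""
    return queue_length - 1 >= threshold
```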
A state-information request message is sent out only when a node goes below/above the threshold
Robustness: other than failure of the current execution node, nothing else should matter
Communication between co-processes of a job: if co-processes are distributed they should be able to communicate freely</li></li></ul><li>Process Migration Mechanisms<br /><ul><li>Migration activities
Freezing on source node, restarting on destination node
If the process is not executing a system call, block it immediately
If the process is executing a system call and sleeping at an interruptible priority level, block it immediately.
If the process is executing a system call and sleeping at a non-interruptible priority level waiting for a kernel event to occur, delay blocking until the system call is complete.</li></li></ul><li>Fast & Slow I/O Operations<br /><ul><li>Freezing occurs after fast I/O operations such as disk I/O are completed.
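The three freezing rules above amount to a small decision table (a sketch; the function name and result strings are illustrative):

```python
def freeze_action(in_system_call, interruptible_sleep):
    """When may a process be blocked for freezing?  Mirrors the rules:
    not in a system call, or sleeping interruptibly -> block now;
    sleeping non-interruptibly inside a system call -> wait for it."""
    if not in_system_call or interruptible_sleep:
        return "block immediately"
    return "delay blocking until the system call completes"
```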
Waiting for slow I/O such as terminal I/O may not be feasible.
Some means of continuing slow I/O after migration will be required
A complete path name from the new node is created</li></li></ul><li>Re-instating on Destination Node<br /><ul><li>A new empty process state is created on destination node
In some implementations this may have a different process id for a while
After the migration happens completely the id may be changed to the original id and the original process deleted
In some cases, such as when slow I/O is kept open, special handling may be needed; a system call may have to be executed again.</li></li></ul><li>Address Space Transfer Mechanisms<br /><ul><li>The processor state needs to be transferred. This includes I/O state (I/O queues, I/O buffer contents, interrupt signals, etc.) as well as the process id, user and group identifiers, open files and other in-memory kernel data
Process’s address space that includes code, data and stack
Processor state memory is of the order of kilobytes while the process address space typically is in megabytes. There’s some flexibility in transferring address space</li></li></ul><li>Transfer Mechanisms<br /><ul><li>Total freezing
The source process is frozen until not only the process state but also the address space transfer is complete; this gives a longer freeze time
Transfer on reference: address space transfer takes place only when the restarted, migrated process makes a reference to somewhere in the address space</li></li></ul><li>Message Forwarding Mechanisms<br /><ul><li>Kinds of messages
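The on-reference mechanism above can be sketched as demand-fetching of pages (the class and page representation are hypothetical):

```python
class OnDemandAddressSpace:
    """Pages stay on the source node until the restarted process
    actually references them; each first touch fetches one page."""

    def __init__(self, source_pages):
        self.source = source_pages  # pages still held by the source node
        self.local = {}             # pages already copied to the destination

    def read(self, page_no):
        if page_no not in self.local:  # "page fault" at the destination
            self.local[page_no] = self.source.pop(page_no)  # fetch from source
        return self.local[page_no]
```

This shortens the freeze time compared with total freezing, at the cost of keeping the source node involved after the restart.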
Messages that arrive at the source node after the source process is frozen but before the destination process has started
Messages that arrive at the source node after the process has started at the destination node
Messages sent to the migrant process from other nodes after execution at the destination has started
System wide links are maintained. On completion of migration the links held by kernels of communicating processes are updated to point to the destination link. </li></li></ul><li>Handling Co-processes<br /><ul><li>Disallowing separation of co-processes
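The link-update scheme above can be sketched with a process-to-node registry (all names here are illustrative):

```python
links = {"P1": "nodeA"}  # hypothetical registry: process -> current node

def deliver(process, message, mailboxes):
    """Route a message to whatever node the link currently points at."""
    mailboxes.setdefault(links[process], []).append((process, message))

def complete_migration(process, destination):
    """On completion of migration the kernels' links are updated to
    point at the destination, so later messages go straight there."""
    links[process] = destination
```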
Disallow migration of a process whose children are yet to complete
When a process does migrate, ensure all the children go with it
A home node is defined and all communications go through this node, so that co-processes can still communicate when separated; network traffic increases, though</li></li></ul><li>Process Migration in Heterogeneous Systems<br /><ul><li>The main issue is the different data representations on different machines
Good for producer-consumer situations, where one part produces output that another part consumes. Threads can be set up as a pipeline, the output of one thread being consumed by the next in sequence</li></li></ul><li>Issues in Designing a Threads Package<br /><ul><li>Threads creation
Creation may be static or dynamic. Static creation fixes all the required threads at compile time, while dynamic creation creates threads as needed through a system call; stack size is a parameter, along with the scheduling priority and the process that will contain the thread, and the system call returns a thread id
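Dynamic creation is what most thread libraries provide; a minimal Python illustration (Python's `threading.Thread` takes a callable rather than the stack-size/priority parameters mentioned above, so this only shows the create/start/join pattern):

```python
import threading

results = []
lock = threading.Lock()

def worker(task_id):
    with lock:                 # protect the shared list
        results.append(task_id)

# Dynamic creation: threads are spawned as needed at run time, and the
# create call returns a handle (cf. the thread id in the text).
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait until every thread has run
```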
A thread is scheduled on the last node it ran on, on the assumption that some of the address space it used may still be in the cache</li></li></ul><li>Signal Handling<br /><ul><li>A signal must be handled no matter what
Typically a routine within the process handles the signal to ensure this
Signals should not get lost; exceptions can interfere with handling and overwrite global variables
Exception handlers for all types of exception should be present so that exceptions are managed properly</li></li></ul><li>Implementing Threads Package<br /><ul><li>User level
A runtime system manages the threads, and a status-information table is also maintained; each entry holds a thread's state, registers, priority, etc.
The kernel schedules the process; the runtime system then divides the time quantum among the threads in the process
Kernel level: no separate runtime system is required; the thread status information table is maintained at kernel level, and the kernel schedules threads the same way a process is scheduled: single-level scheduling</li></li></ul><li>User Level vs. Kernel Level Threads Packages<br /><ul><li>A user level package can be implemented on top of an OS that does not support threads
Two-level scheduling provides scheduling flexibility
Switching is faster in a user level package, as the runtime system rather than the kernel performs it
At user level, a thread can be stopped only by stopping its whole process, since the kernel sees (and can stop) only the containing process
Threads should not be allowed to make blocking system calls, as a blocking call would block the whole process; jacket calls around such system calls can be used</li>
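A jacket call wraps the blocking system call with a non-blocking readiness check; a Unix-flavoured sketch (the function name is hypothetical, and in a real user-level package the `None` return would trigger a switch to another thread):

```python
import os
import select

def jacket_read(fd, n):
    """Jacket around read(): poll with select() first; if no data is
    ready, return control instead of blocking the whole process."""
    ready, _, _ = select.select([fd], [], [], 0)  # zero timeout: just poll
    if not ready:
        return None  # the runtime would schedule another thread here
    return os.read(fd, n)
```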