Thread scheduling in Operating Systems

Notes
  • A thread has a private storage area (used for DLLs), a register set (the status of the processor), and a stack (a user stack when running in user mode and a kernel stack for kernel mode)
  • As is clear from here, threads of the same process share the code and data sections
  • What exactly is medium-grained parallelism? In one sentence: a single application is a collection of threads, and those threads usually interact frequently, affecting the performance of the entire application
  • Processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue (a minimal sketch of this global-queue idea follows these notes)
  • Explain the 2nd point… when a processor becomes available, a scheduling algorithm is run to select the next thread according to the preference of the programmer
  • Add a point about the Mach OS: a refinement of the load-sharing technique is used in the Mach operating system [BLAC90, WEND89]. The operating system maintains a local run queue for each processor and a shared global run queue. The local run queue is used by threads that have been temporarily bound to a specific processor. A processor examines the local run queue first, to give bound threads absolute preference over unbound threads. As an example of the use of bound threads, one or more processors could be dedicated to running processes that are part of the operating system. The central queue may be a bottleneck when more than one processor looks for work at the same time
  • There are two observations regarding this extreme strategy that indicate better-than-expected performance: (1) in a highly parallel system, with tens or hundreds of processors, each of which represents a small fraction of the cost of the system, processor utilization is no longer an extremely important metric for effectiveness or performance; (2) total avoidance of process switching during the lifetime of a program should result in a substantial speedup of that program
  • As seen from the diagram, the scheduler converts a thread's priority into a global priority and then schedules the highest-priority thread first. Real-time processes are given the highest priority
  • Priority and time slice are inversely proportional. Interactive processes have a higher priority; CPU-bound processes have a lower priority
  • Interactivity is determined by the sleep time of the task, i.e., how long it has been waiting for I/O. Tasks that are more interactive have longer sleep times
  • A runnable task is considered eligible for execution as long as it has time remaining in its time quantum. Runnable tasks are maintained on a runqueue data structure that contains two priority arrays. On multiprocessors, each processor schedules the highest-priority task from its own runqueue. When the active array is exhausted, the two arrays are exchanged (see the two-array sketch after the slide transcript)
  • Schedules threads using a priority-based, preemptive scheduling algorithm.
  • Priorities are divided into classes according to the Win32 API. Each thread has a base priority; when released from a wait operation, its priority is increased by the dispatcher (a small numeric sketch of this boost appears at the end of the transcript)
  • Add an example of foreground & background processes
  • Thread scheduling in Operating Systems
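
A minimal sketch, in C, of the load-sharing idea described in the notes above (and on slides 9 to 11): a single global queue of ready threads, protected by a mutex, from which each idle processor pulls the next thread. The tcb_t type, the enqueue_ready/dequeue_ready helpers, and processor_idle_loop are hypothetical names invented for illustration; this is not how any particular kernel implements it.

    #include <stdio.h>
    #include <stddef.h>
    #include <pthread.h>

    /* Hypothetical thread control block for the load-sharing scheme:
     * threads are not bound to any processor; they all wait on one
     * global ready queue. */
    typedef struct tcb {
        const char *name;
        void (*entry)(const char *);
        struct tcb *next;
    } tcb_t;

    static tcb_t *ready_head = NULL;                 /* global queue of ready threads      */
    static pthread_mutex_t ready_lock =              /* the central queue needs mutual     */
        PTHREAD_MUTEX_INITIALIZER;                   /* exclusion (slide 11)               */

    static void enqueue_ready(tcb_t *t) {
        pthread_mutex_lock(&ready_lock);
        t->next = ready_head;                        /* LIFO for brevity; a real queue is FIFO */
        ready_head = t;
        pthread_mutex_unlock(&ready_lock);
    }

    static tcb_t *dequeue_ready(void) {
        pthread_mutex_lock(&ready_lock);
        tcb_t *t = ready_head;
        if (t) ready_head = t->next;
        pthread_mutex_unlock(&ready_lock);
        return t;                                    /* NULL when nothing is ready */
    }

    /* Each processor, when idle, selects the next thread from the global
     * queue.  In a real kernel this loop never returns; here it stops when
     * the queue is empty so the sketch can run to completion. */
    static void processor_idle_loop(int cpu) {
        tcb_t *t;
        while ((t = dequeue_ready()) != NULL) {
            printf("CPU %d runs %s\n", cpu, t->name);
            t->entry(t->name);
        }
    }

    static void demo_work(const char *name) { (void)name; /* thread body placeholder */ }

    int main(void) {
        tcb_t t1 = { "T1", demo_work, NULL }, t2 = { "T2", demo_work, NULL };
        enqueue_ready(&t1);
        enqueue_ready(&t2);
        processor_idle_loop(0);                      /* one "processor" drains the queue */
        return 0;
    }

The single ready_lock is exactly the bottleneck the Mach note mentions: when several processors look for work at the same time they serialize on the central queue, which is what Mach's per-processor local run queues backed by a shared global queue are meant to relieve.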

    1. 1. THREAD SCHEDULING By- Nitish Gulati
    2. 2. THREADS… A thread is the basic unit of CPU utilization. It comprises a thread ID, a private storage area, a register set, and a stack
    3. 3. Processes: Single Threaded vs. Multi Threaded (diagram)
    4. 4. TYPES OF THREADS: User level threads and Kernel level threads
    5. 5. OPERATING SYSTEM ARCHITECTURE (layered diagram): user application, kernel, hardware
    6. 6. COMPARISON. Support: user-level threads are managed without the kernel (by a thread library); kernel-level threads are managed by the OS kernel. Implementation: user-level threads use the many-to-many and many-to-one thread models; kernel-level threads use the one-to-one model. Examples: user-level thread support depends on the application; kernel-level threads are found in Windows XP, Solaris 9, and Linux
    7. 7. NEED OF THREAD SCHEDULING? To exploit the power of parallelism in a multiprocessor. Utilized in medium grained parallelism.
    8. 8. APPROACHES TO THREAD SCHEDULING Load Sharing Dedicated Processor Assignment Dynamic Scheduling
    9. 9. LOAD SHARING (diagram: a global queue of ready threads T1-T5 served by processors P1-P4)
    10. 10. ADVANTAGES OF LOAD SHARING: Load is distributed evenly across the processors. No centralized scheduler is required. The global queue can be organized and accessed using any standard scheduling scheme
    11. 11. DISADVANTAGES OF LOAD SHARING Central queue needs mutual exclusion
    12. 12. DEDICATED PROCESSOR ASSIGNMENT: When an application is scheduled, each of its threads is assigned to a processor that remains dedicated to it until the application completes. Some processors may be idle. There is no multiprogramming of processors
    13. 13. Test results for a multiprocessor system with 16 processors: speedup drops off when the number of threads exceeds the number of processors
    14. 14. DYNAMIC SCHEDULING: The number of threads in a process is altered dynamically by the application. The operating system adjusts the load to improve utilization
    15. 15. CASE STUDY: SOLARIS. Schedules threads based on priority. Priority classes: Real Time, System, Interactive, Time Sharing
    16. 16. SOLARIS SCHEDULING
    17. 17. FEATURES OF PRIORITY IN SOLARIS. Priorities are altered dynamically (chart: time quantum versus priority). Benefits: good response time to interactive processes and good throughput
    18. 18. CASE STUDY: LINUX. Uses a priority-based, preemptive scheduling algorithm. Interactive tasks are assigned higher priority. Priority scheme: real-time priorities [0-99] and nice values [100-140]
    19. 19. RELATIONSHIP BETWEEN TIME SLICE AND PRIORITIES
    20. 20. (Chart: time quantum versus priority)
    21. 21. LIST OF TASKS INDEXED ACCORDING TO PRIORITY
    22. 22. CASE STUDY: WINDOWS XP. Uses a priority-based, preemptive scheduling algorithm. The dispatcher uses a 32-level priority scheme: the real-time class covers priorities 16-31 and the variable class covers priorities 1-15
    23. 23. Priorities are divided into classes. Each thread has a base priority. A thread's priority is increased by the dispatcher when it is released from a wait; the increase depends on the operation it was waiting for
    24. 24. EXAMPLE: initial priority 8 rises to 11 after the increase; initial priority 9 rises to 12
    25. 25. THANK YOU
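
A minimal sketch, in C, of the runqueue idea from slides 18 to 21 and the notes: each processor keeps two priority arrays, picks the highest-priority task from the active array, moves tasks whose quantum has run out to the expired array, and exchanges the two arrays when the active one is empty. The struct fields, the 140-level range as used here, and the time-slice formula are illustrative assumptions, not the Linux kernel's actual data structures.

    #include <stddef.h>
    #include <stdio.h>

    #define NPRIO 140                         /* priority levels 0..139, as on slide 18 */

    /* Illustrative task and per-processor runqueue; these are not the
     * kernel's real struct task_struct / struct runqueue. */
    struct task {
        const char *name;
        int prio;                             /* 0 = highest priority               */
        int slice_ms;                         /* remaining time quantum             */
        struct task *next;
    };

    struct runqueue {
        struct task *active[NPRIO];           /* tasks that still have quantum left */
        struct task *expired[NPRIO];          /* tasks whose quantum ran out        */
    };

    /* Higher priority (lower number) gets a longer slice, matching the
     * time-quantum-versus-priority chart on slide 20.  The formula is made up. */
    static int timeslice_for(int prio) { return 200 - prio; }

    /* Pick the highest-priority runnable task; if the active array is empty,
     * exchange the two arrays ("when exhausted the 2 arrays are exchanged")
     * and try once more. */
    static struct task *pick_next(struct runqueue *rq) {
        for (int pass = 0; pass < 2; pass++) {
            for (int p = 0; p < NPRIO; p++) {
                if (rq->active[p]) {
                    struct task *t = rq->active[p];
                    rq->active[p] = t->next;
                    return t;
                }
            }
            for (int p = 0; p < NPRIO; p++) { /* swap active and expired */
                struct task *tmp = rq->active[p];
                rq->active[p] = rq->expired[p];
                rq->expired[p] = tmp;
            }
        }
        return NULL;                          /* nothing runnable on this processor */
    }

    /* A task that has used up its quantum is refilled and parked in expired. */
    static void quantum_expired(struct runqueue *rq, struct task *t) {
        t->slice_ms = timeslice_for(t->prio);
        t->next = rq->expired[t->prio];
        rq->expired[t->prio] = t;
    }

    int main(void) {
        static struct runqueue rq;            /* zero-initialised: both arrays empty */
        struct task a = { "editor", 110, 0, NULL }, b = { "compile", 125, 0, NULL };
        quantum_expired(&rq, &a);             /* park both tasks in expired...       */
        quantum_expired(&rq, &b);
        struct task *t = pick_next(&rq);      /* ...forcing the array exchange       */
        printf("next: %s (prio %d, slice %d ms)\n", t->name, t->prio, t->slice_ms);
        return 0;
    }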

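Finally, a tiny numeric sketch of the Windows XP example on slide 24: a variable-class thread starts at its base priority and is boosted by the dispatcher when it is released from a wait, never above 15 and never below its base. The boosted_priority helper and the boost value of 3 are assumptions read off the slide's numbers; this is not the Win32 API.

    #include <stdio.h>

    /* Variable-class priorities in Windows XP run from 1 to 15; the dispatcher
     * raises a thread's priority when it is released from a wait, by an amount
     * that depends on what it was waiting for.  This helper is purely
     * illustrative. */
    static int boosted_priority(int base, int boost) {
        int p = base + boost;
        if (p > 15) p = 15;                   /* stay inside the variable class  */
        return p < base ? base : p;           /* never drop below base priority  */
    }

    int main(void) {
        /* The example on slide 24: 8 -> 11 and 9 -> 12, i.e. a boost of 3. */
        printf("8 -> %d\n", boosted_priority(8, 3));
        printf("9 -> %d\n", boosted_priority(9, 3));
        return 0;
    }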