Transcript

  • 1. Operating Systems Principles Process Management and Coordination Lecture 2: Processes and Their Interaction (Lecturer: 虞台文)
  • 2. Content
    • The Process Notion
    • Defining and Instantiating Processes
      • Precedence Relations
      • Implicit Process Creation
      • Dynamic Creation With fork And join
      • Explicit Process Declarations
    • Basic Process Interactions
      • Competition: The Critical Problem
      • Cooperation
    • Semaphores
      • Semaphore Operations and Data
      • Mutual Exclusion
      • Producer/Consumer Situations
    • Event Synchronization
  • 3. The Process Notion Operating Systems Principles Process Management and Coordination Lecture 2: Processes and Their Interaction
  • 4. What is a process?
    • A process is a program in execution .
      • Also called a task .
    • It includes
      • Program itself (i.e., code or text)
      • Data
      • a thread of execution (possibly several threads)
      • Resources (such as files)
      • Execution info (process relation information kept by OS)
    • Multiple processes may exist in a system simultaneously.
  • 5. Virtualization
    • Conceptually,
      • each process has its own CPU and main memory;
      • processes are running concurrently.
    • Many computers are equipped with a single CPU .
    • To achieve concurrency, the following are needed:
      • CPU sharing → to virtualize the CPU
      • Virtual memory → to virtualize the memory
    • Usually done by the kernel of the OS
      • Each process may be viewed in isolation .
      • The kernel provides a few simple primitives for process interaction.
  • 6. Physical/Logical Concurrencies
    • An OS must handle a high degree of parallelism .
    • Physical concurrency
      • Multiple CPUs or Processors required
    • Logical concurrency
      • Time-share CPU
  • 7. Interaction among Processes
    • The OS and user applications are viewed as a collection of processes , all running concurrently .
    • These processes
      • operate largely independently of one another;
      • cooperate by sharing memory or by sending messages and synchronization signals to each other; and
      • compete for resources .
  • 8. Why Use Process Structure?
    • Hardware-independent solutions
      • Processes cooperate and compete correctly, regardless of the number of CPUs
    • Structuring mechanism
      • Tasks are isolated with well-defined interfaces
  • 9. Defining and Instantiating Processes Operating Systems Principles Process Management and Coordination Lecture 2: Processes and Their Interaction
  • 10. A Process Flow Graph User session at a workstation
  • 11. Serial and Parallel Processes S/P notation: S(p1, ..., pn) denotes the serial execution of processes p1 through pn; P(p1, ..., pn) denotes their parallel execution.
  • 12. Properly Nested Process Flow Graphs S/P notation: S(p1, ..., pn) for serial and P(p1, ..., pn) for parallel execution of processes p1 through pn. A process flow graph is properly nested if it can be described by the functions S and P , and only function composition .
  • 13. Properly Nested Process Flow Graphs (figure: one properly nested and one improperly nested graph)
  • 14. Example: Evaluation of Arithmetic Expressions Expression tree Process flow graph
  • 15. Implicit Process Creation
    • Processes are created dynamically using language constructs
      • no process declaration.
    • cobegin/coend
      • syntax:
      • cobegin C1 // C2 // … // Cn coend
      • meaning:
        • All C i may proceed concurrently
        • When all terminate, the statement following cobegin/coend continues.
  • 16. Implicit Process Creation (figure: serial execution C1; C2; C3; C4; versus concurrent execution cobegin C1 // C2 // C3 // C4 coend)
  • 17. Example: Use of cobegin/coend User session at a workstation Initialize; cobegin Time_Date // Mail // Edit; cobegin Complile; Load; Execute // Edit; cobegin Print // Web coend coend coend ; Terminate
  • 18. Data Parallelism
    • Same code is applied to different data
    • The forall statement
      • syntax:
      • forall ( parameters ) statements
      • Meaning:
        • Parameters specify set of data items
        • Statements are executed for each item concurrently
  • 19. Example: Matrix Multiplication
    • Each inner product is computed sequentially
    • All inner products are computed in parallel
      • forall ( i:1..n, j:1..m ){
      • A[i][j] = 0;
      • for ( k=1; k<=r; ++k )
      • A[i][j] = A[i][j] + B[i][k]*C[k][j];
      • }
  • 20. Explicit Process Creation
    • cobegin/coend
      • limited to properly nested graphs
    • forall
      • limited to data parallelism
    • fork/join
      • can express arbitrary functional parallelism
    (cobegin/coend and forall : implicit process creation; fork/join : explicit process creation)
  • 21. The fork/join/quit primitives
    • Syntax: fork x
    • Meaning:
      • create new process that begins executing at label x
    • Syntax: join t,y
    • Meaning: t = t - 1; if (t == 0) goto y;
      • The operation must be indivisible . (Why?)
    • Syntax: quit
    • Meaning:
      • Process termination
  • 22. Example t1 t2 Synchronization needed here. Use down-counter t1=2 and t2=3 for synchronization.
  • 23. Example The starting point of process p i has label L i . t1 t2
      • t1 = 2; t2 = 3;
      • L1: p1 ; fork L2; fork L5; fork L7; quit ;
      • L2: p2 ; fork L3; fork L4; quit ;
      • L5: p5 ; join t1,L6; quit ;
      • L7: p7 ; join t2,L8; quit ;
      • L4: p4 ; join t1,L6; quit ;
      • L3: p3 ; join t2,L8; quit ;
      • L6: p6 ; join t2,L8; quit ;
      • L8: p8 ; quit;
  • 24. The Unix fork procid = fork (); if (procid==0) do_ child _processing else do_ parent _processing
  • 25. The Unix fork procid = fork (); if (procid==0) do_ child _processing else do_ parent _processing
    • Replicates calling process.
    • Parent and child are identical
    • except for the value of procid .
    Use procid to diverge parent and child.
  • 26. Explicit Process Declarations
    • Designate piece of code as a unit of execution
      • Facilitates program structuring
    • Instantiate:
      • Statically (like cobegin ) or
      • Dynamically (like fork )
  • 27. Explicit Process Declarations Syntax: process p { declarations_for_p ; executable_code_for_p ; }; process type p { declarations_for_p ; executable_code_for_p ; };
  • 28. Example: Explicit Process Declarations
    • process p{
    • process p1{
    • declarations_for_p1 ;
    • executable_code_for_p1;
    • }
    • process type p2{
    • declarations_for_p2;
    • executable_code_for_p2;
    • }
    • other_declaration_for_p ;
    • ...
    • q = new p2;
    • ...
    • }
    declarations_for_p; executable_code_for_p;
  • 29. Example: Explicit Process Declarations
    • process p{
    • process p1{
    • declarations_for_p1 ;
    • executable_code_for_p1;
    • }
    • process type p2{
    • declarations_for_p2;
    • executable_code_for_p2;
    • }
    • other_declaration_for_p ;
    • ...
    • q = new p2;
    • ...
    • }
    similar to cobegin/coend similar to fork
  • 30. Basic Process Interactions Operating Systems Principles Process Management and Coordination Lecture 2: Processes and Their Interaction
  • 31. Competition and Cooperation
    • Competition
      • Processes compete for resources
      • Each process could exist without the other
    • Cooperation
      • Each process aware of the other
      • Process synchronization
      • Exchange information with one another
        • Share memory
        • Message passing
  • 32. Resource Competition Shared resource, e.g., common data
  • 33. Resource Competition
    • When several processes may asynchronously access a common data area , it is necessary to protect the data from simultaneous change by two or more processes.
    • Otherwise , the updated area may be left in an inconsistent state.
  • 34. Race Conditions
    • When two or more processes / threads are executing concurrently , the result can depend on the precise interleaving of the two instruction streams.
    • Race conditions may cause:
      • undesired computation results .
      • bugs which are hard to reproduce !
  • 35. Example x = 0; cobegin p1: ... x = x + 1; ... // p2: ... x = x + 1; ... coend What should the value of x be after both processes execute?
  • 36. Example p1: . . . R1 = x; R1 = R1 + 1; x = R1; . . . p2: . . . R2 = x; R2 = R2 + 1; x = R2; . . . R1 = 0 R1 = 1 x = 1 R2 = 1 R2 = 2 x = 2
  • 37. Example p1: . . . R1 = x; R1 = R1 + 1; x = R1; . . . p2: . . . R2 = x; R2 = R2 + 1; x = R2; . . . R1 = 0 R1 = 1 x = 1 x = 1 R2 = 0 R2 = 1
  • 38. The Critical Section (CS)
    • Any section of code involved in reading and writing a shared data area is called a critical section.
    • Mutual Exclusion → At most one process is allowed to enter a critical section at a time.
  • 39. The Critical Problem
    • Guarantee mutual exclusion : At any time, at most one process is executing within its CSi .
        • cobegin
        • p1: while(1) { CS1 ; program1;}
        • //
        • p2: while(1) { CS2 ; program2;}
        • //
        • ...
        • //
        • pn: while(1) { CSn ; programn;}
        • coend
  • 40. The Critical Problem
    • Guarantee mutual exclusion : At any time, at most one process is executing within its CSi .
    • In addition, we need to prevent mutual blocking :
      • A process outside of its CS must not prevent other processes from entering their CSs. (No "dog in the manger")
      • Process must not be able to repeatedly reenter its CS and starve other processes ( fairness )
      • Processes must not block each other forever ( deadlock )
      • Processes must not repeatedly yield to each other (“after you”--“after you” livelock )
  • 41. Software Solutions
    • Solve the problem without taking advantage of special machine instructions and other hardware.
  • 42. Algorithm 1
      • int turn = 1;
      • cobegin
      • p1: while (1) {
      • while ( turn ==2); /*wait*/
      • CS1;
      • turn = 2;
      • program1;
      • }
      • //
      • p2: while (1) {
      • while ( turn ==1); /*wait*/
      • CS2;
      • turn = 1;
      • program2;
      • }
      • coend
  • 43. Algorithm 1
    • Mutual Exclusion
    • No mutual blocking
      • No dog in manger
      • Fairness
      • No deadlock
      • No livelock
      • int turn = 1;
      • cobegin
      • p1: while (1) {
      • while ( turn ==2); /*wait*/
      • CS1;
      • turn = 2;
      • program1;
      • }
      • //
      • p2: while (1) {
      • while ( turn ==1); /*wait*/
      • CS2;
      • turn = 1;
      • program2;
      • }
      • coend
         What happens if p1 fails?
  • 44. Algorithm 2
      • int c1 = 0, c2 = 0;
      • cobegin
      • p1: while (1) {
      • c1 = 1;
      • while (c2); /*wait*/
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • //
      • p2: while (1) {
      • c2 = 1;
      • while (c1); /*wait*/
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • coend
  • 45. Algorithm 2
    • Mutual Exclusion
    • No mutual blocking
      • No dog in manger
      • Fairness
      • No deadlock
      • No livelock
      • int c1 = 0, c2 = 0;
      • cobegin
      • p1: while (1) {
      • c1 = 1;
      • while (c2); /*wait*/
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • //
      • p2: while (1) {
      • c2 = 1;
      • while (c1); /*wait*/
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • coend
         What happens if c1 = 1 and c2 = 1? (Both processes wait forever: deadlock.)
  • 46. Algorithm 3
      • int c1 = 0, c2 = 0;
      • cobegin
      • p1: while (1) {
      • c1 = 1;
      • if (c2) c1=0;
      • else{
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • }
      • //
      • p2: while (1) {
      • c2 = 1;
      • if (c1) c2=0;
      • else{
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • }
      • coend
  • 47. Algorithm 3
    • Mutual Exclusion
    • No mutual blocking
      • No dog in manger
      • Fairness
      • No deadlock
      • No livelock
      • int c1 = 0, c2 = 0;
      • cobegin
      • p1: while (1) {
      • c1 = 1;
      • if (c2) c1=0;
      • else{
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • }
      • //
      • p2: while (1) {
      • c2 = 1;
      • if (c1) c2=0;
      • else{
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • }
      • coend
        Under unfortunate timing, the fairness and livelock requirements may be violated.
  • 48. Algorithm 3
      • p1: while (1) {
      • c1 = 1;
      • if (c2) c1=0;
      • else{
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • }
      • p2: while (1) {
      • c2 = 1;
      • if (c1) c2=0;
      • else{
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • }
    May violate the fairness requirement.
  • 49. Algorithm 3
      • p1: while (1) {
      • c1 = 1;
      • if (c2) c1=0;
      • else{
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • }
      • p2: while (1) {
      • c2 = 1;
      • if (c1) c2=0;
      • else{
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • }
    May violate the livelock requirement.
  • 50. Algorithm 4 (Peterson)
      • int c1 = 0, c2 = 0, willWait;
      • cobegin
      • p1: while (1) {
      • c1 = 1;
      • willWait = 1;
      • while (c2 && (willWait==1)); /*wait*/
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • //
      • p2: while (1) {
      • c2 = 1;
      • willWait = 2;
      • while (c1 && (willWait==2)); /*wait*/
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • coend
  • 51. Algorithm 4 (Peterson)
    • Mutual Exclusion
    • No mutual blocking
      • No dog in manger
      • Fairness
      • No deadlock
      • No livelock
      • int c1 = 0, c2 = 0, willWait;
      • cobegin
      • p1: while (1) {
      • c1 = 1;
      • willWait = 1;
      • while (c2 && (willWait==1)); /*wait*/
      • CS1;
      • c1 = 0;
      • program1;
      • }
      • //
      • p2: while (1) {
      • c2 = 1;
      • willWait = 2;
      • while (c1 && (willWait==2)); /*wait*/
      • CS2;
      • c2 = 0;
      • program2;
      • }
      • coend
  • 52. Cooperation: Producer/Consumer Buffer Producer Consumer Producer must not overwrite any data before the Consumer can remove it. Consumer must be able to wait for the Producer when the latter falls behind and does not fill the buffer on time. Deposit Remove
  • 53. Client/Server Architecture Processes communicate by message passing .
  • 54. Competition and Cooperation
    • Problems with software solutions:
      • Difficult to program and to verify
      • Competition and cooperation use entirely different solutions
      • Processes loop while waiting (busy-wait)
  • 55. Process States (figure: ready and running states; dispatch moves a process from the ready_queue to running, and a time-out returns it to ready)
  • 56. Process States (figure: a blocked state is added; a running process that calls wait(si) is moved to the wait-queue of semaphore si, with one wait-queue per semaphore s1 ... sn)
  • 57. Process States (figure: the blocked process remains in the wait-queue until it is signaled)
  • 58. Process States (figure: a running process calls signal(si), moving a process from the wait-queue of si back to ready)
  • 59. P/V Operators (figure: wait(si) corresponds to the P operation and signal(si) to the V operation)
  • 60. Dijkstra’s Semaphores
    • Dijkstra (1968) introduced two new operations, called P and V , that considerably simplify the coordination of concurrent processes .
      • Universally applicable to competition and cooperation among any number of processes.
      • Avoid the performance degradation resulting from busy-waiting.
  • 61. Semaphores
    • A semaphore s is a non-negative integer, a down-counter, with two operations P and V .
    • P and V are indivisible operations (atomic)
    P(s) /* wait(s) */ { if (s>0) s-- ; else{ queue the process on s , change its state to blocked, schedule another process } } V(s) /* signal(s) */ { if (s==0 && queue is not empty){ pick a process from the queue on s , change its state from blocked to ready; } else s++ ; } There is a wait-queue associated with each semaphore. s
  • 62. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s
  • 63. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 P(s) Process 4 Process 3 Process 2 Process 1 s s
  • 64. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Program2 P(s) Process 4 Process 3 Process 2 Process 1 s s
  • 65. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s
  • 66. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Program4 Process 4 Process 3 Process 2 Process 1 s s Process 3
  • 67. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s Process 3 Process 3 Process 4
  • 68. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s Process 3 Process 4
  • 69. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s Process 3 Process 4
  • 70. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s Process 4
  • 71. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 Process 4 Process 3 Process 2 Process 1 s s Process 4
  • 72. Semaphores There is a wait-queue associated with each semaphore. P(s) CS1 V(s) Program1 P(s) CS2 V(s) Program2 P(s) CS3 V(s) Program3 P(s) CS4 V(s) Program4 V(s) V(s) CS4 Process 4 Process 3 Process 2 Process 1 s s
  • 73. Semaphores
    • A semaphore s is a non-negative integer with two operations P and V .
    • P and V are indivisible operations (atomic)
    P(s) /* wait(s) */ { if (s>0) s-- ; else{ queue the process on s , change its state to blocked, schedule another process } } V(s) /* signal(s) */ { if (s==0 && queue is not empty){ pick a process from the queue on s , change its state from blocked to ready; } else s++ ; } There is a wait-queue associated with each semaphore. s
  • 74. Mutual Exclusion with Semaphores semaphore mutex = 1; /* binary semaphore */ cobegin p1: while (1) { P ( mutex ); CS1; V ( mutex ); program1; } // p2: while (1) { P ( mutex ); CS2; V ( mutex ); program2; } // . . . // pn: while (1) { P ( mutex ); CSn; V ( mutex ); programn; } coend ;
  • 75. Producer/Consumer with Semaphores // counting semaphore semaphore s = 0; /* zero items produced */ cobegin p1: ... P ( s ); /* wait for signal from other process if needed */ ... // p2: ... V ( s ); /* send signal to wake up an idle process */ ... coend ; Consumer Producer
  • 76. Producer/Consumer with Semaphores // counting semaphore semaphore s = 0; /* zero items produced */ cobegin p1: ... P ( s ); /* wait for signal from other process if needed */ ... // p2: ... V ( s ); /* send signal to wake up an idle process */ ... coend ;
  • 77. The Bounded Buffer Problem semaphore e = n, f = 0, b = 1; cobegin Producer: while (1) { Produce_next_record; P ( e ); /* wait if no cell is empty */ P ( b ); /* mutually exclusive access to the buffer */ Add_to_buf; V ( b ); /* release the buffer */ V ( f ); /* signal data available */ } // Consumer: while (1) { P ( f ); /* wait if data not available */ P ( b ); /* mutually exclusive access to the buffer */ Take_from_buf; V ( b ); /* release the buffer */ V ( e ); /* signal empty cell available */ Process_record; } coend e : the number of empty cells f : the number of items available b : mutual exclusion
  • 78. Event Synchronization Operating Systems Principles Process Management and Coordination Lecture 2: Processes and Their Interaction
  • 79. Events
    • An event denotes some change in the state of the system that is of interest to a process.
    • Events are signaled principally through hardware interrupts and traps , either directly or indirectly.
  • 80. Two Types of Events
    • Synchronous event (e.g. I/O completion)
      • Process waits for it explicitly
      • Constructs: E.wait, E.post
    • Asynchronous event (e.g. arithmetic error)
      • Process provides event handler
      • Invoked whenever event is posted
  • 81. Case study: Windows 2000
    • WaitForSingleObject
    • WaitForMultipleObjects
    Process blocks until the object is signaled.
    object type: signaled when
      process: all threads complete
      thread: terminates
      semaphore: incremented
      mutex: released
      event: posted
      timer: expires
      queue: item placed on queue
      file: I/O operation terminates