Operating Systems

  • The OS is a program that works very intimately with the hardware and provides a stable way to interact with it.
  • There is no hard and fast rule as to what an OS should provide. Some OSes provide a very good GUI, while others do not even have a full-screen editor. Nevertheless, there is one program that is always running, called the 'kernel'. The OS is a platform consisting of a specific set of libraries and infrastructure for applications to be built upon and to interact with each other.
  • This is the memory hierarchy. As we move up the triangle, access time decreases but cost increases. Also, data that is in the cache must also be in main memory.
  • A little more about caches here. There are generally two mechanisms for updating data from the cache to its backing store.
  • open(), close(), exec(), fork(), etc.
  • Modern operating systems are interrupt driven. There is a structure called the interrupt vector that maps each interrupt to its corresponding action. Keyboard example. Distinguish between the execution of OS code and user code.
  • Interview questions on the possible states a process can be in
  • At any instant, only one PCB is in the active state. When an interrupt occurs, the current context of the running process is saved. State save and state restore together are called a context switch.
  • A process may create several processes. The creating process is called the parent and the new processes are called its children. Every process has a unique pid.
  • execl, execlp, execle, execv, execvp - execute a file

    1. OS
    2. Operating System  Just a program  Provides a stable, consistent way for applications to deal with the hardware
    3. What constitutes an OS?  Kernel  System Programs  Application Programs
    4. Storage Device Hierarchy
    5. Cache  Writing policies  Write back  Initially, writing is done only to the cache  Mark entries as dirty for later writing to the backing store  Write through  Writing is done synchronously both to the cache and to the backing store  Replacement policy?
    6. System Calls  Instructions that allow access to privileged or sensitive resources on the CPU  System calls provide a level of portability  Parameters to system calls can be passed through registers, tables, or the stack  Is "printf" a system call?
    7. System Calls  File copy program  Acquire input file name  Acquire output file name  Open input file  If input file doesn't exist, abort  Create output file  If output file cannot be created, abort  Read from input file  Write to output file  Repeat until read fails  Close output file  Terminate normally by returning closing status to OS
    8. Operating System Operation  Interrupt driven  Dual-mode operation  "Kernel" mode (or "supervisor" or "protected")  "User" mode: normal programs are executed
    9. Operating System Operation  Timer  Interrupts the computer after a specified period  Expiry may be treated as a fatal error or may give the program more time  Penny, Penny, Penny
    10. Process  A program in execution  A single 'thread' of execution  Uniprogramming: one thread at a time  Multiprogramming: more than one thread at a time
    11. Process States  New: process is being created  Ready: process is waiting to run  Running: instructions are being executed  Waiting: process is waiting for some event to occur  Terminated: the process has finished execution
    12. Process Control Block (PCB)  Only one PCB is active at a time
    13. Process Creation  [Figure: process tree rooted at init (pid = 1), with csh shells (pid = 7778, pid = 1400) spawning vi, cat, emacs, and ls]  A parent process followed by its child processes
    14. Process Creation – fork()
    15. A Question (Microsoft)
    16. Inter-Process Communication  Shared-memory systems  A process creates a shared-memory segment  Other processes attach it to their address space  Message-passing systems  send(P, message)  receive(id, message)
    17. IPC - Pipes  Pipes are OS-level communication links between processes  Pipes are treated as file descriptors in most OSes
    18. Multithreading  Each thread has its own stack – its current execution state  Threads encapsulate concurrency
    19. Multithreading  Advantages  Responsiveness  Resource sharing  Economy  Scalability  Pthreads  A POSIX standard defining an API for thread creation and synchronization
    20. A Question (Google)
    21. Scheduling  Scheduling: deciding which threads are given access to resources from moment to moment  The CPU should not be idle  At least one process should use the CPU
    22. Scheduling  Scheduling: deciding which threads are given access to resources from moment to moment
    23. Scheduling  Goals/criteria  Minimize response time  Maximize throughput  Fairness  First-Come, First-Served (FCFS) scheduling  Run until done  Short jobs get stuck behind long ones  [Gantt chart: P1, then P2, then P3, with time marks at 24, 27, 30]
    24. Preemption  Capability to preempt a process in execution  Execution is prioritized  Higher-priority processes preempt lower-priority ones
    25. Round Robin (RR)  Each process gets a small unit of CPU time (a time quantum)  After the quantum expires, the process is preempted and added to the end of the ready queue  With N processes in the ready queue and time quantum q, no process waits more than (N-1)q time units  Performance  Large q -> behaves like FCFS  q must be large relative to the context-switch time, otherwise the overhead is too high
    26. Round Robin (RR)  Process burst times: P1 = 53, P2 = 8, P3 = 68, P4 = 24  Gantt chart (quantum 20): P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3, with boundaries at 0, 20, 28, 48, 68, 88, 108, 112, 125, 145, 153  Waiting time for P1 = ?, P2 = ?, P3 = ?, P4 = ?  Average waiting time = ?  Average completion time = ?
    27. Round Robin (RR)  Process burst times: P1 = 53, P2 = 8, P3 = 68, P4 = 24  Gantt chart (quantum 20): P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3, with boundaries at 0, 20, 28, 48, 68, 88, 108, 112, 125, 145, 153  Waiting time for P1 = (68-20) + (112-88) = 72, P2 = 20, P3 = 85, P4 = 88  Average waiting time = (72 + 20 + 85 + 88)/4 = 66.25  Average completion time = (125 + 28 + 153 + 112)/4 = 104.5
    28. Round Robin (RR)  Same schedule as the previous slide  Pros and cons:  Better for short jobs  Context-switching overhead adds up for long jobs
    29. What if we know the future?  Shortest Job First (SJF)  Run whichever job has the least amount of computation to do  Shortest Remaining Time First (SRTF)  Preemptive version of SJF: if a job arrives with a shorter time to completion than the remaining time of the current job, immediately preempt the CPU  The idea is to get short jobs out of the system  Big effect on short jobs, only a small effect on long ones  The result is better average response time
    30. Synchronization  Most of the time, threads work on separate data, so scheduling doesn't matter  But what happens when they work on a shared variable?  Atomic operations  An operation that always runs to completion or not at all  Indivisible  The fundamental building block
    31. Synchronization  Synchronization  Using atomic operations to ensure cooperation between threads  Mutual exclusion  Only one thread does a particular thing at a time  Critical section  A piece of code that only one thread can execute at once  Lock  Prevents someone from doing something
    32. Synchronization  [Layered figure: programs sit on a higher-level API (locks, semaphores, monitors, send/receive), which is built on hardware primitives (load/store, disable interrupts, test&set, compare&swap)]  Everything is pretty painful if the only atomic primitives are load and store
    33. Semaphores  A kind of generalized lock  Definition: a semaphore has a non-negative integer value and supports the following two operations  P(): an atomic operation that waits for the semaphore to become positive, then decrements it by 1  Also called the wait() operation  V(): an atomic operation that increments the semaphore by 1, waking up a waiting P(), if any  Also called the signal() operation
    34. Semaphore Implementation
    35. Semaphore vs Mutex  No, the purposes of a mutex and a semaphore are different  A mutex is a locking mechanism used to synchronize access to a resource; ownership is associated with a mutex, and only the owner can release the lock  A semaphore is a signaling mechanism ("I am done, you can carry on" kind of signal)  More at http://www.geeksforgeeks.org/mutex-vs-semaphore/  Are a binary semaphore and a mutex the same?
    36. Readers-Writers  Problem: several readers and writers accessing the same file  Need to control access to the buffer  If several readers are reading: no problem  If a writer is writing while a reader is reading: problem  Variations  First-writers-then-readers  First-readers-then-writers
    37. Readers-Writers
    38. Deadlocks  P0: wait(Q); wait(S); some code here; signal(Q); signal(S)  P1: wait(S); wait(Q); some code here; signal(S); signal(Q)  A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does
    39. Deadlock Requirements  Mutual exclusion  Only one thread at a time can use a resource  Hold and wait  A thread holding at least one resource is waiting to acquire additional resources held by other threads  No preemption  Resources are released only voluntarily by the thread holding them, after the thread is finished  Circular wait  There exists a set {T1, ..., Tn} of waiting threads  T1 is waiting for a resource held by T2  T2 is waiting for a resource held by T3  ...  Tn is waiting for a resource held by T1
    40. Some Techniques  Deadlock detection  Resource-allocation graph, etc.  Prevention  e.g. break the circular-wait condition  Avoidance  Banker's algorithm
