OS
1
Operating System
 Just a program
 Provides a stable, consistent way for applications
to deal with the hardware
2
What constitutes an OS? 3
 Kernel
 System Programs
 Application Programs
Storage Device Hierarchy 5
Cache
 Writing policies
 Write back
 Initially, writing is done only to the cache
 Mark them as dirty for later writing to the
backing store
 Write through
 Write is done synchronously both to the cache
and to the backing store
6
 Replacement Policy?
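The two writing policies can be illustrated with a toy Python sketch (a dictionary standing in for the backing store; this is an illustration of the policy, not how hardware caches are built):

```python
class WriteBackCache:
    """Write-back sketch: writes hit only the cache and mark the entry dirty;
    the backing store is updated later, on flush/eviction."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
        self.dirty = set()

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)            # remember to write back later

    def read(self, key):
        if key not in self.cache:      # miss: fill from the backing store
            self.cache[key] = self.backing[key]
        return self.cache[key]

    def flush(self):
        for key in self.dirty:         # the deferred write to the backing store
            self.backing[key] = self.cache[key]
        self.dirty.clear()

store = {"a": 0}
c = WriteBackCache(store)
c.write("a", 42)
before = store["a"]    # still 0: the write only reached the cache
c.flush()
after = store["a"]     # 42 after the write-back
```

A write-through cache would simply do `self.backing[key] = value` inside `write()`, trading write latency for a backing store that is never stale.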
System Calls
 Instructions that allow access to privileged or
sensitive resources on the CPU
 System calls provide a level of portability
 Parameters to system calls can be passed
through registers, tables or stack
 Is “printf” a system call?
7
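On the `printf` question: `printf` itself is not a system call; it is a C library routine that formats and buffers output and eventually issues the `write()` system call. A sketch using Python's `os.write`, which is a thin wrapper around `write(2)` on a raw file descriptor:

```python
import os

# print()/printf go through buffered library code; underneath, the bytes
# reach the kernel via the write() system call. os.write calls it directly.
msg = b"hello from write(2)\n"
n = os.write(1, msg)   # fd 1 is stdout; bypasses stdio/print buffering
# write() returns the number of bytes written, so n == len(msg)
```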
System Calls
 File copy program
 Acquire input file name
 Acquire output file name
 Open input file
 If file doesn't exist, abort
 Create output file
 If file cannot be created, abort
 Read from input file
 Write to output file
 Repeat until read fails
 Close output file
 Terminate normally by returning closing status to OS
8
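The file-copy sequence above can be sketched directly on the low-level system-call wrappers (a minimal sketch; buffer size and error handling kept deliberately simple):

```python
import os, tempfile

def copy_file(src, dst, bufsize=4096):
    """Follows the slide's sequence: open input (abort if it doesn't exist),
    create output (abort if it can't be created), read/write until EOF, close."""
    in_fd = os.open(src, os.O_RDONLY)      # raises OSError if the file is missing
    out_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        while True:
            chunk = os.read(in_fd, bufsize)   # read system call
            if not chunk:                     # empty read = EOF (the "read fails" case)
                break
            os.write(out_fd, chunk)           # write system call
    finally:
        os.close(in_fd)                       # close both files
        os.close(out_fd)

# tiny demo with throwaway files
with tempfile.TemporaryDirectory() as d:
    src, dst = os.path.join(d, "in.txt"), os.path.join(d, "out.txt")
    open(src, "wb").write(b"hello" * 100)
    copy_file(src, dst)
    copied = open(dst, "rb").read()
```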
Operating System Operation
 Interrupt driven
 Dual Mode Operation
 “Kernel” mode (or “supervisor” or “protected”)
 “User” mode: Normal programs executed
9
Operating System Operation
 Timer
 Interrupts the computer after a specified period
 On expiry, the OS may treat it as a fatal error or may give the program more time
10
Penny, Penny, Penny
Process
 A program in execution
 A single 'thread' of execution
 Uniprogramming: one thread at a time
 Multiprogramming: more than one thread at a time
11
Process States
 New : Process is being created
 Ready : Process waiting to run
 Running : Instructions are being executed
 Waiting : Process waiting for some event to occur
 Terminated : The process has finished execution
12
Process Control Block (PCB)
 Only one PCB active at a time
13
Process Creation
[Process tree: init (pid = 1) spawns two csh shells (pid = 7778 and pid = 1400); the shells in turn spawn vi, cat, emacs, and ls]
 A parent process creates child processes
14
Process Creation – Fork() 15
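A minimal fork() sketch (POSIX-only; Python's `os.fork` is a thin wrapper over the system call, so the same parent/child logic applies as in C):

```python
import os

# fork() duplicates the calling process: the child sees a return value of 0,
# the parent sees the child's pid (this is how trees like init -> csh -> vi grow).
pid = os.fork()
if pid == 0:
    os._exit(7)                         # child: terminate with status 7
else:
    _, status = os.waitpid(pid, 0)      # parent: wait for (reap) the child
    child_status = os.WEXITSTATUS(status)
    # child_status is the value the child passed to _exit, i.e. 7
```

In a shell, fork() is typically followed by one of the exec family of calls in the child to run a different program.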
A Question (Microsoft) 16
Inter Process Communication 17
 Shared-Memory Systems
 A process creates a shared-memory segment
 Other processes attach it to their address space
 Message-Passing Systems
 send(P, message)
 receive(id, message)
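The send/receive primitives can be sketched with a per-receiver mailbox (a toy sketch using a thread-safe queue; the names `send`, `receive`, and `mailbox_P` are illustrative, not a real OS API):

```python
import queue, threading

# Message passing: each receiver owns a mailbox; send(P, message) deposits
# into P's mailbox, and receive blocks until a message arrives.
mailbox_P = queue.Queue()

def send(mailbox, message):
    mailbox.put(message)

def receive(mailbox):
    return mailbox.get()      # blocks until a message is available

sender = threading.Thread(target=send, args=(mailbox_P, "ping"))
sender.start()
msg = receive(mailbox_P)      # waits for the sender's message
sender.join()
```

Unlike shared memory, nothing here is mapped into both address spaces: all data moves through the (kernel-mediated) channel.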
IPC - Pipes 18
 Pipes are OS level communication links between processes
 Pipes are treated as file descriptors in most OS
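Since pipes are exposed as file descriptors, the same read/write calls used for files work on them. A minimal sketch:

```python
import os

r, w = os.pipe()                     # kernel pipe: (read fd, write fd)
os.write(w, b"through the pipe")    # one end writes...
os.close(w)                          # ...and closes, signalling EOF to the reader
data = os.read(r, 1024)              # ...while the other end reads
os.close(r)
# data holds exactly the bytes that were written
```

In practice the two ends usually live in different processes: a parent creates the pipe, then fork()s, and each side closes the end it doesn't use.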
Multithreading 19
 Each thread has its own stack – current execution state
 Threads encapsulate concurrency
Multithreading 20
 Advantages
 Responsiveness
 Resource Sharing
 Economy
 Scalability
 Pthreads
 POSIX standard defining an API for thread creation
and synchronization
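The create/join pattern of the Pthreads API looks like this in Python's `threading` module (a sketch of the same idea, not the C API itself; `threading.Thread` plays the role of `pthread_create`/`pthread_join`):

```python
import threading

results = []

def worker(tid):
    # each thread executes this function on its own stack
    results.append(tid * tid)

# analogous to pthread_create followed by pthread_join
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for every thread to finish
# results holds 0, 1, 4, 9 in some scheduler-dependent order
```

Note the threads all append to one shared list: resource sharing is the point, and also the reason synchronization (later slides) is needed.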
A Question (Google) 21
Scheduling 22
Scheduling – Deciding which threads are given access to resources from moment to moment
The CPU should not be idle: at least one process should be using the CPU
Scheduling 23
Scheduling – Deciding which threads are given access to resources
from moment to moment
Scheduling 24
 Goals/Criteria
 Minimize Response Time
 Maximize Throughput
 Fairness
 First-Come, First-Served (FCFS) Scheduling
 Run until done
 Short jobs get behind long ones
Gantt chart: P1(0–24) | P2(24–27) | P3(27–30)
Preemption 25
 Capability to preempt a
process in execution
 Execution prioritized
 Higher priority processes
preempt the lower ones
Round Robin (RR) 26
 Each process gets a small unit of CPU time (a time quantum)
 After quantum expires, process is preempted and added to the
end of ready queue
 N processes in ready queue and time quantum is q
 No process waits more than (N-1)q time units
 Performance
 q large -> FCFS
 q must be large with respect to context switch, otherwise
too much overhead
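Round robin with simultaneous arrivals is easy to simulate; the sketch below (job names and the quantum of 20 are taken from the example on the following slides) computes completion and waiting times:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for jobs that all arrive at t = 0.
    Returns {name: (completion_time, waiting_time)}."""
    remaining = dict(bursts)
    queue = deque(bursts)          # FIFO ready queue, in arrival order
    t, results = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            results[name] = (t, t - bursts[name])  # waiting = completion - burst
        else:
            queue.append(name)     # quantum expired: back of the ready queue
    return results

res = round_robin({"P1": 53, "P2": 8, "P3": 68, "P4": 24}, quantum=20)
# res["P2"] == (28, 20): P2 finishes at t=28 after waiting 20 time units
```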
Round Robin (RR) 27
Gantt chart (time quantum q = 20):
P1(0–20) | P2(20–28) | P3(28–48) | P4(48–68) | P1(68–88) | P3(88–108) | P4(108–112) | P1(112–125) | P3(125–145) | P3(145–153)
Burst times: P1 = 53, P2 = 8, P3 = 68, P4 = 24
Waiting time for
 P1 = ?, P2 = ?, P3 = ?, P4 = ?
Average Waiting Time = ?
Average Completion Time = ?
Round Robin (RR) 28
Gantt chart (time quantum q = 20):
P1(0–20) | P2(20–28) | P3(28–48) | P4(48–68) | P1(68–88) | P3(88–108) | P4(108–112) | P1(112–125) | P3(125–145) | P3(145–153)
Burst times: P1 = 53, P2 = 8, P3 = 68, P4 = 24
Waiting time for
 P1 = (68−20) + (112−88) = 72, P2 = 20, P3 = 85, P4 = 88
Average Waiting Time = (72 + 20 + 85 + 88) / 4 = 66.25
Average Completion Time = (125 + 28 + 153 + 112) / 4 = 104.5
Round Robin (RR) 29
Gantt chart (time quantum q = 20):
P1(0–20) | P2(20–28) | P3(28–48) | P4(48–68) | P1(68–88) | P3(88–108) | P4(108–112) | P1(112–125) | P3(125–145) | P3(145–153)
Burst times: P1 = 53, P2 = 8, P3 = 68, P4 = 24
Pros and Cons:
 Better for short jobs
 Context-switching overhead adds up for long jobs
What if we know future? 30
 Shortest Job First (SJF)
 Run whatever job has the least amount of
computation to do
 Shortest Remaining Time First (SRTF)
 Preemptive version of SJF: if job arrives and
has a shorter time to completion than the
remaining time on the current job,
immediately preempt CPU
Basically
 Idea is to get short jobs out of the system
 Big effect on short jobs, only small effect on long ones
 Result is better average response time
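For jobs all present at time 0, non-preemptive SJF is just "sort by burst, run in order". A sketch, reusing the burst times from the RR example to show the improvement:

```python
def sjf_waiting(bursts):
    """Non-preemptive SJF with all jobs present at t = 0:
    the shortest remaining job always runs next, to completion."""
    t, waiting = 0, {}
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waiting[name] = t      # a job waits for everything scheduled before it
        t += burst
    return waiting

w = sjf_waiting({"P1": 53, "P2": 8, "P3": 68, "P4": 24})
# order is P2, P4, P1, P3; average waiting time is (0+8+32+85)/4 = 31.25,
# roughly half of what RR achieved on the same workload
```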
Synchronization 31
 Most of the time, threads are working on separate data, so scheduling doesn't matter
 But what happens when they work on a shared variable?
Atomic Operations
 An operation that always runs to completion or not at all
 Indivisible
 Fundamental Building Block
Synchronization 32
 Synchronization
 Using atomic operations to ensure cooperation between
threads
 Mutual Exclusion
 Only one thread does a particular thing at a time
 Critical Section
 Piece of code that only one thread can execute at once
 Lock
 Prevents someone from doing something
Synchronization 33
Hardware primitives: Load/Store, Disable Ints, Test&Set, Comp&Swap
Higher-level API: Locks, Semaphores, Monitors, Send/Receive
Programs: shared programs built on the higher-level API
Everything is pretty painful if the only atomic primitives are load and store
Semaphores 34
 A kind of generalized lock
 Definition: A semaphore has a non-negative integer value and
supports the following two operations
 P() : An atomic operation that waits for semaphore to
become positive, then decrements it by 1
 Also called the wait() operation
 V() : An atomic operation that increments the semaphore
by 1, waking up a waiting P, if any
 Also called the signal() operation
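A semaphore initialized to 0 makes a clean signaling example: the consumer's P() blocks until the producer's V(). A sketch with `threading.Semaphore` (`acquire` is P/wait, `release` is V/signal):

```python
import threading

events = []
sem = threading.Semaphore(0)   # value 0: the first P() must wait for a V()

def consumer():
    sem.acquire()              # P(): blocks until the semaphore becomes positive
    events.append("consumed")

def producer():
    events.append("produced")
    sem.release()              # V(): increments and wakes the waiting consumer

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
# the semaphore enforces the order: "produced" always precedes "consumed"
```

Initialized to 1 instead of 0, the same object behaves like a binary semaphore and can guard a critical section, which is why it is called a generalized lock.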
Semaphores Implementation 35
Semaphore vs Mutex 36
 Nope, the purposes of a mutex and a semaphore are different
 A mutex is a locking mechanism used to synchronize access to a
resource. Ownership is associated with a mutex: only the owner
can release the lock.
 A semaphore is a signaling mechanism ("I am done, you can carry
on" kind of signal)
 More at http://www.geeksforgeeks.org/mutex-vs-semaphore/
 Are binary semaphore and mutex same?
Readers-Writers 37
 Problem : Several readers and writers accessing the same file
 Need to control access to buffer
 If several readers are reading, no problem
 If a writer is writing while a reader is reading, the reader may see inconsistent data
 Variations
 First-Writers-Then-Readers
 First-Readers-Then-Writers
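A first-readers-then-writers solution can be built from a mutex plus a reader count; the first reader locks writers out, the last reader lets them back in. A sketch (the `RWLock` class and its method names are illustrative, not a standard library API):

```python
import threading

class RWLock:
    """First-readers preference: readers share, writers get exclusive access.
    Known caveat: writers can starve if readers keep arriving."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # protects the reader count
        self._write = threading.Lock()   # held by a writer, or by the reader group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._write.acquire()    # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._write.release()    # last reader lets writers in

    def acquire_write(self):
        self._write.acquire()

    def release_write(self):
        self._write.release()

data = {"v": 0}
rw = RWLock()
seen = []

def writer():
    for _ in range(100):
        rw.acquire_write()
        data["v"] += 1                  # exclusive access
        rw.release_write()

def reader():
    for _ in range(100):
        rw.acquire_read()
        seen.append(data["v"])          # many readers may be here at once
        rw.release_read()

ts = [threading.Thread(target=writer)] + [threading.Thread(target=reader) for _ in range(3)]
for t in ts:
    t.start()
for t in ts:
    t.join()
# every observed value is a consistent count between 0 and 100
```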
Readers-Writers 38
Deadlocks 39
P0: wait(Q); wait(S); …some code here…; signal(Q); signal(S)
P1: wait(S); wait(Q); …some code here…; signal(S); signal(Q)
A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
Deadlock Requirements 40
 Mutual Exclusion
 Only one thread at a time can use a resource
 Hold and Wait
 Thread holding at least one resource is waiting to acquire
additional resources held by other threads
 No preemption
 Resources are released only voluntarily by the thread holding the
resource, after thread is finished with it
 Circular Wait
 There exists a set {T1, …, Tn} of waiting threads
 T1 is waiting for a resource that is held by T2
 T2 is waiting for a resource that is held by T3
 …
 Tn is waiting for a resource that is held by T1
Some Techniques 41
 Deadlock Detection
 Resource allocation graph etc.
 Prevention
 Circular Wait
 Avoidance
 Banker's algorithm
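Breaking the circular-wait condition is often the simplest prevention technique: impose one global order on lock acquisition. A sketch (ordering by `id()` here is an illustrative choice; any fixed total order works):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
done = []

def task(name, want_first, want_second):
    # Prevention: ignore the caller's preferred order and always acquire
    # locks in one global order, so no cycle of waiters can form.
    first, second = sorted((want_first, want_second), key=id)
    with first:
        with second:
            done.append(name)   # both resources held: do the work

# Without the ordering, t1 (A then B) and t2 (B then A) could deadlock,
# exactly like the P0/P1 wait(Q)/wait(S) example on the earlier slide.
t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a))
t1.start()
t2.start()
t1.join()
t2.join()
# both tasks complete: no deadlock
```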


Editor's Notes

  • #3 OS is a program which is very intimate with the hardware. Provides a stable way to interact with the hardware
  • #4 There is no hard and fast rule as to what an OS should provide. Some OSes provide a very good GUI while some do not even have a full-screen editor. Nevertheless, there is one program which is always running, called the 'kernel'. A platform that consists of a specific set of libraries and infrastructure for applications to be built upon and to interact with each other
  • #6 This is the memory hierarchy. As we move up the triangle, access time decreases but cost increases. Also, data that is in the cache must be in main memory
  • #7 A little more about caches here. There are generally two mechanisms to update data from the cache to its backing store
  • #8 Open(), Close(), Exec(), Fork() etc.
  • #10 Modern operating systems are interrupt driven. There is something called the interrupt vector, which maintains a map from each interrupt to the corresponding action. Keyboard example. Distinguish between the execution of OS code and user code
  • #13 Interview questions on the possible states a process can be in
  • #14 At any instant, only one PCB is in the active state. When an interrupt occurs, save the current context of the running process. State save and state restore: this task is called a context switch
  • #15 A process may create several processes. The creating process is called a parent and the new processes are called children. There is a unique pid for every process.
  • #16 execl, execlp, execle, execv, execvp - execute a file