Week 2: Introduction & Operating System Services
IT241 Operating Systems
Moussa Academy
WWW.MOUSSAACADEMY.COM
Contents
I. What Operating Systems Do
II. Computer System Organization
III. Computer System Operations
IV. Types of OS
V. Components of OS
VI. Virtualization
VII. Computing Environments
VIII. Operating System Services
IX. System Calls
I. What Operating Systems Do
What is an Operating System?
A program that acts as an intermediary between the user and the computer hardware
Important Goals
• Providing abstractions for the user.
• Using hardware in an optimized way.
Kernel
• A program that runs all the time on the computer (part of the operating
system)
Type of Programs
• System Program: Comes with the operating system. (Not part of the Kernel)
• Application Program: Any program that is not related to the operating
system.
• Middleware: software that lies between an operating system and the
applications running on it and adds additional services to the applications.
Note: General-purpose and mobile operating systems typically include middleware
II. Computer System Organization
Key Points About
Computer Organization
• A computer system consists of one or more CPUs and device controllers connected through a common bus.
• Bus: the shared wiring that connects the CPU and the different controllers and carries data between them
I/O Operation Overview
EXPLAIN THE STEPS OF AN I/O OPERATION:
1. A program requests an I/O operation.
2. The device driver loads the appropriate registers in the device controller, telling it which operation to perform.
3. The controller starts transferring data to/from its local buffer.
4. When the operation is done, the device controller informs the device driver that the request is complete.
5. The device driver returns control to the OS.
6. The device driver also returns status information: either the operation is done, or the device is busy.
7. The controller informs the driver that it is done by raising an Interrupt.
Interrupts
• Hardware triggers an interrupt by sending a signal to the CPU over the system bus
EXPLAIN THE STEPS OF INTERRUPT HANDLING:
1. The OS preserves the state of the CPU by storing the registers and the program counter.
2. It determines which type of interrupt occurred.
3. Separate segments of code determine the appropriate action to take for each type of interrupt.
Each controller has a device
driver managed by the OS
III. Computer System Operations
Key Points About
Computer
System
Operation
• I/O devices and CPU run concurrently
• CPU moves data from/to main memory to/from the buffers.
• Device controller informs CPU the operation is done by causing an Interrupt
Common
Functions of
Interrupts
• The operating system is interrupt driven
• Interrupt transfers the control to the interrupt service routine (ISR) through the interrupt
vector
• Address of interrupted instruction (the instruction that called/caused the interrupt) is saved.
• A Trap or an Exception is a software generated interrupt
I/O Structure
Two Types of I/O Structures
1. After I/O starts, control returns to the user
program only upon completion of the I/O operation.
2. After I/O starts, control returns to
the user program without waiting
for the completion of the I/O
operation.
• System call: Request to the
OS to allow user to wait for
I/O completion.
• Device-status table: contains
an entry for each I/O device.
Computer
Startup
• Firmware: Typically stored in ROM or EPROM
• Bootstrap Program Loads the operating system kernel and starts the daemons
• Daemons: services provided outside of the kernel
Main memory can be
viewed as a cache for
secondary memory.
Storage Structure
• Main Memory: the large storage medium that the CPU can access directly
1. Random access
2. Volatile
3. Typically implemented as Dynamic Random-Access Memory (DRAM)
• Secondary Storage: an extension of main memory that provides nonvolatile storage capacity
• Hard Disk Drives (HDD)
• Non-volatile Memory (NVM) devices
These are the most popular secondary storage devices.
Storage Hierarchy
• Storage Systems organized in hierarchy with respect to Speed, Cost, Volatility.
• Caching: Copying information into a faster system.
• Device Driver: provides interface between the controller and the kernel.
Direct Memory
Access Structure
• Direct Memory Access: the device controller transfers blocks of data directly to main memory without CPU
intervention
• An interrupt is generated per block, rather than per byte.
-Tracks are divided into sectors
IV. Types of OS
1. Multiprogramming
(Batch System)
• Process after process
2. Multitasking
(Time Sharing)
• CPU switches jobs so frequently that the user can interact while the jobs are running.
• Low Response Time (< 1 second)
• Swapping moves processes in and out of memory when they do not all fit.
• Virtual Memory allows the execution of processes that are not completely in
memory.
3. Dual-Mode
Operation
• There are two modes:
1. User Mode.
2. Kernel Mode.
• Mode bit: 0 (kernel) or 1 (user) specifies the current mode.
• Privileged instructions can only be executed in Kernel Mode
V. Components of OS
1. Timer
• Timers are needed to prevent infinite loops or processes using too many
resources.
• Timers interrupt the computer after some time period.
• A counter is decremented by the physical clock (which is set by the OS in
kernel mode)
• When the counter reaches zero, an interrupt is generated.
• Control is given to the operating system and process scheduling is done.
2. Process Management
• Process:
• A program in execution
• Unit of work within the system
• A Program is a passive entity
• A Process is an active entity
• Single-threaded process has one program counter
• Where each instruction is done sequentially
• Multi-threaded process has one program counter for each thread.
• Process Management Activities:
1. Process synchronization
2. Inter-process communication (IPC)
3. Deadlock handling
3. Memory Management
• Memory Management Activities
1. Keeping track of which parts of memory are being used
2. Allocating and deallocating memory as needed
We will study it in
future chapters
4. File-System Management • Files: logical storage unit
• Directories (folders): organize files
• Access Control: Who can access what files & folders
• File-System Management Activities:
1. Manipulation of files & directories
2. Mapping files onto secondary storage
3. Backup
4. Caching
WHAT IS CACHING? WHY DO WE DO IT?
• Information in use is copied from slower media to faster media
EXPLAIN THE STEPS OF USING CACHING:
1. When data is accessed, we first check for it in the faster medium (the cache).
• If the data is there, we use it directly from the cache.
• If it is not, the data is retrieved from the slower medium, copied into the cache, and used there.
5. I/O Subsystem
• I/O Management Activities
1. Manage device drivers
Cache Coherency: making sure that the data in the cache stays consistent with the other copies of it.
Caching does not help if the data we want to cache is larger than the total size of the cache.
VI. Virtualization
What is Virtualization
• Allows operating systems to run applications within other operating systems.
• Emulation: used when the source CPU type is different from the target CPU type
• Virtualization: the guest OS is natively compiled for the same CPU type as the host
Computer System
Architecture
• Multiprocessors
• Asymmetric: each processor is assigned a specific task
• Symmetric: each processor performs all tasks
Also known as: Parallel system,
tightly coupled systems
Single-core (multiprocessor) system: 2+ processors, each with 1 core
Multi-core system: 1 processor chip with 2+ cores
Host OS: the actual OS running on the physical machine
Guest OS: the virtualized OS running on top of the host
VII. Computing Environments
Traditional
• Stand-alone general-purpose machines
• Portals provide web access to internal systems
• Network computers (thin clients): limited machines that rely on servers for computation and storage
Client Server
• Compute-server system: provides an interface to request services
• File-server system: provides an interface to send/receive files
Cloud Computing
• Extension of virtualization
WHAT ARE THE TYPES OF CLOUDS?
• Public cloud: available via internet to anyone
• Private cloud: run by a company for its usage
• Hybrid cloud: includes both public and private
WHAT DOES CLOUD COMPUTING PROVIDE TO US?
• Software as a Service (SaaS): one or more applications available via the
internet
• Platform as a Service (PaaS): a software stack ready for applications to
use via the internet
• Infrastructure as a Service (IaaS): servers or storage available over the
internet
VIII. Operating System Services
Services Provided
by OS to User
MENTION SERVICES/BENEFITS THAT OS PROVIDE
• User Interface
• Program Execution
• I/O Operations
• File-system manipulation
• Process Communication
• Error-detection
• Resource allocation
• Logging
• Registry
• Background services
• Protection and Security
o Protection: Ensuring that access to the system is controlled
o Security: authentication (passwords)
• Programming language support
o Compilers, assemblers, debuggers, interpreters
IX. System Calls
What Are System Calls • Interfaces provided by the operating system
System Call Implementation
• A table of system calls, indexed by system-call number, is maintained by the system-call interface
• The system-call interface invokes the intended system call and returns its result
System Call Parameter
Passing
EXPLAIN WAYS WE CAN PASS PARAMETERS IN SYSTEM CALLS
1. Simplest: passing the parameters in registers
2. Parameters stored in a block or table in memory, and the address of the block passed in a register
3. Parameters pushed onto the stack by the program and popped off by the OS
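For illustration, a minimal C sketch of how a user program reaches a system call through the standard library wrapper (POSIX write() here; the file descriptor and message are just example values):

#include <string.h>
#include <unistd.h>   /* POSIX wrapper for the write system call */

int main(void) {
    const char *msg = "Hello, kernel!\n";
    /* The library wrapper places the parameters (fd, buffer address, count)
       where the ABI expects them and traps into the kernel; the system-call
       interface then uses the call number to index its table and invoke the
       actual write system call. */
    write(1, msg, strlen(msg));   /* fd 1 = standard output */
    return 0;
}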
Notes on System Calls
• Apps compiled on one operating system are usually not executable on other operating
systems.
• Each OS has its own unique system calls.
• Application Binary Interface (ABI):
Architecture equivalent of API
Note: the 2nd and 3rd methods do not limit the number of parameters being passed.
REMEMBER:
- Background services are also known as services, subsystems, or daemons.
- Apps can also be made available on multiple operating systems.
Week 3: Processes, Threads & Concurrency
IT241 Operating Systems
Contents
I. Process Concept
II. Process Scheduling
III. Operations on Processes
IV. Inter Process Communication (IPC)
V. Multicore Programming
VI. Multithreading Architecture
I. Process Concept
What is a process
A Program in execution
• Process execution is sequential
Process Layout
WHAT ARE THE PARTS OF THE PROCESS:
1. Executable code (Text Section)
2. Data Section (global variables)
3. Heap Section (dynamically allocated memory)
4. Stack Section (temporary data, parameters, local variables)
Process States
• New: the process is being created
• Ready: waiting to be assigned to a processor
• Running: instructions are being executed
• Waiting: waiting for some event to occur
• Terminated: the process has finished execution
Process Control
Block (PCB)
WHAT INFORMATION DOES THE PCB CONTAIN:
1. Process State
2. Program Counter (contains address of next instruction)
3. Registers
4. Scheduling Information (includes priority queues)
5. Memory Management Information (page tables, segment tables)
6. Accounting Information
7. I/O status information (allocated I/O devices, open files)
PCB is how process gets
represented in the OS
also known as
Task Control Block
II. Process Scheduling
Process Scheduler
• Selects which process the CPU will execute next
WHAT IS THE GOAL OF THE PROCESS SCHEDULER:
• Maximize CPU Usage.
• Maintains Scheduling Queues
 Ready Queue
 Wait Queue
Context Switch
WHAT ARE THE STEPS OF A CONTEXT SWITCH:
1. Saving the state of the old process
2. Loading the state of the new process
Notes:
• Context-switch time is pure overhead (no useful work is done while switching).
• Some hardware provides multiple sets of registers per CPU, so multiple contexts can be loaded at once.
III. Operations on Processes
Process Creation
• Parent Process: Process that creates another process
• Child Process: Process that gets created
• On Execution:
- The parent continues to execute concurrently with its children, or
- the parent waits until all of its children have terminated.
• Address Space of the New Process:
- A duplicate of the parent process, or
- A new program loaded into it
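A minimal POSIX sketch of these ideas (fork() creates the child as a duplicate of the parent, exec loads a new program into it, and the parent waits; /bin/ls is just an example program):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                      /* child is a duplicate of the parent */
    if (pid < 0) {
        perror("fork");                      /* creation failed */
        return 1;
    } else if (pid == 0) {
        execlp("/bin/ls", "ls", (char *)NULL);  /* child: load a new program */
        perror("exec");                      /* reached only if exec fails */
        exit(1);
    } else {
        wait(NULL);                          /* parent waits for the child to terminate */
        printf("child finished\n");
    }
    return 0;
}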
Process Termination
WHAT ARE THE REASONS FOR TERMINATION OF CHILD PROCESS:
• High usage of resources.
• Task assigned to the child is not required anymore.
• Parent process is terminating
IV. Inter Process Communication (IPC)
Cooperating Processes
• A process can be either independent or cooperating.
REASONS FOR PROCESS COOPERATION:
1. Information Sharing
2. Computation speedup
3. Modularity
Shared Memory
• A region of memory that cooperating processes share to exchange information
• Under the control of the processes, not the OS
• Requires synchronization between the processes
Models of IPC:
- Shared Memory
- Message Passing
A process must have a
parent
Same program, data as
parent
Message Passing
• Processes communicate with each other with no shared variables.
• Operations:
 Send
 Receive
• Message size can be fixed or variable.
• Has Implementation Issues
Direct
Communication
• Processes must explicitly name each other (the sender names the receiver and vice versa).
WHAT ARE THE PROPERTIES OF A DIRECT COMMUNICATION LINK:
1. Established automatically
2. Associated with exactly one pair of processes
3. Between each pair there exists only one link
4. Usually bidirectional (may be unidirectional)
Indirect
Communication
• Messages are sent to and received from mailboxes (ports)
WHAT ARE THE PROPERTIES OF AN INDIRECT COMMUNICATION LINK:
1. Established only if processes share a common mailbox
2. Associated with many processes
3. Unidirectional or bidirectional
Threading
WHAT ARE THE BENEFITS OF THREADING:
1. Responsiveness
2. Resource Sharing
3. Economical (cheap)
4. Scalability
A thread is a lightweight unit of execution within a process.
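A minimal Pthreads sketch of a user thread created through a library (the worker function and its argument are illustrative only):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {                      /* runs concurrently with main() */
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, (void *)1L);  /* create the thread */
    pthread_join(tid, NULL);                         /* wait for it to finish */
    return 0;
}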
V. Multicore Programming
Multicore
Challenges
WHAT ARE THE MULTICORE CHALLENGES:
1. Dividing activities
2. Balance
3. Data splitting
4. Data dependency
5. Testing and debugging
Parallelism and
Concurrency
• Parallelism: performing more than one task at the same time
• Concurrency: allowing more than one task to make progress (not necessarily simultaneously)
• Data parallelism: distributing (subsets of) the data across multiple cores
• Task parallelism: distributing threads (tasks) across multiple cores
Types of Threads
User Threads:
• Managed by a user-level thread library
- Pthreads
- Windows Threads
Kernel Threads:
• Managed by the kernel
• Supported by most general-purpose OSes
VI. Multithreading Architecture
One to One
• Each user thread maps to a kernel thread
• Creating a user thread results in creating a kernel thread
• More concurrent than many-to-one
• The number of threads per process may be restricted because of the overhead
Many to
One
• Many user threads map to a single kernel thread
• One thread blocking causes all of them to block
Many to
Many
• Many user threads map to many kernel threads
• Allows the OS to create a sufficient number of kernel threads
Threading
Issues
• Semantics of system calls (fork and exec)
• Synchronous and asynchronous signal handling
• Thread cancellation
• Local storage
• Scheduler Activations
Week 4: CPU Scheduling
IT241 Operating Systems
Contents
I. Basic Concepts
II. Scheduling Algorithms
III. Thread Scheduling
IV. Multiple-Processor Scheduling
I. Basic Concepts
Main Points
• Maximum CPU utilization is obtained with multiprogramming
• Process execution consists of a cycle of
CPU execution and I/O wait
• Each CPU burst is followed by an I/O burst.
CPU Scheduler • Select which process to execute next from the ready queue
WHAT ARE THE DECISIONS ASSOCIATED WITH CPU SCHEDULING:
1. Running to Waiting State
2. Running to Ready State
3. Waiting to Ready
4. Termination
WHAT ARE THE TYPES OF SCHEDULING:
1. Non-preemptive: the process keeps the CPU until it finishes or blocks
2. Preemptive: the running process can be kicked off the CPU to let another process execute
Dispatcher
• Gives control of the CPU to the selected process
 Switching context
 Switching mode
- Jumping to the proper location in the user program to restart it
• Dispatch Latency: the time it takes for the dispatcher to
stop one process and start another one.
Scheduling Criteria
• CPU Utilization: keeping the CPU as busy as possible (efficiency)
• Throughput: amount of work done per unit time
• Turnaround time: total time from submission of a process to its completion
• Waiting time: amount of time waited by the process in the ready queue
• Response time: amount of time between the request and the response of the
process
Can result in a
race condition
II. Scheduling Algorithms
1. First Come First Served
(FCFS)
• Most basic algorithm
• Convoy effect: short processes stuck behind a long process
2. Shortest Job First
(SJF)
• Uses length of the jobs to select which job to execute first
• Gives minimum average waiting time
• Difficulty in knowing the length of the next CPU request
• Preemptive version called Shortest Remaining Time First
• Determining the length of the next CPU burst:
- Ask the user.
- Estimate it.
Estimating Length of Next
CPU Burst
• Can be done by using the length of previous CPU
burst using exponential averaging
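For reference, the usual exponential-averaging formula is τ(n+1) = α · t(n) + (1 − α) · τ(n), where t(n) is the length of the n-th actual CPU burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1. With α = 1/2, for example, the new prediction is simply the average of the last actual burst and the last prediction.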
3. Round Robin
• Each process gets a small unit of CPU time (a time quantum); when the quantum expires, the process is preempted and added to the end of the ready
queue.
• If the Quantum is large, then it will behave like FIFO
• If the Quantum is small, it will result in a lot of context switches and the overhead
will be too high
Priority Scheduling • A priority number is associated with each process
• The process with the highest priority will execute first.
• Can cause starvation
• The starvation can be solved using Aging
4. Multi Level Queue
• Has separate queues, each with its own priority.
Notes:
- Starvation: processes never executing (waiting indefinitely).
- Aging: as time progresses, the priority of a waiting process increases.
- Quantum: a (short) period of time.
III. Thread Scheduling
Main Points
• Threads are scheduled not processes
• In the Many-to-One and Many-to-Many models, the thread library schedules user-level
threads to run on a lightweight process (LWP)
- Known as process-contention scope (PCS)
- Scheduling is based on a priority set by the programmer.
• Kernel threads scheduled onto available CPU is system-contention scope (SCS).
IV. Multiple-Processor Scheduling
Main Points
• Symmetric multiprocessing (SMP): each processor is self-scheduling
• Either all threads are in a common ready queue,
• or each processor has its own private queue of threads
Multi-Core Processors
• Faster and consume less power than multiple single-core chips
• Hardware multithreading takes advantage of memory stalls: while one thread waits for memory, the core can make progress on another thread
Multi-Threaded Multi-Core
System
• Each core has more than one hardware thread
• Chip multithreading (CMT): assigns each core multiple hardware threads
WHAT ARE THE LEVELS OF SCHEDULING:
1. The OS decides which software thread
to run on each logical CPU.
2. Each core decides which hardware thread
to run on the physical core.
Load Balancing
• Load Balancing: attempting to keep the workload evenly
distributed across all processors.
• Push Migration: a periodic task pushes work from overloaded processors to less busy processors.
• Pull Migration: idle processors pull waiting tasks from busy processors.
NUMA and CPU
Scheduling
• If the OS is NUMA-aware, it will assign memory close to the CPU that the thread
is currently running on.
Intel refers to this as
hyperthreading
Week 5: Synchronization Tools & Synchronization Examples
IT241 Operating Systems
Contents
I. Critical Section Problem
II. Peterson's Solution
III. Hardware Synchronization
IV. Mutex Locks
V. Semaphores
VI. Classical Synchronization Problems
I. Critical Section Problem
Main Points
• Processes run concurrently.
• Concurrent access to data may lead to data inconsistency.
• Maintaining data consistency requires mechanisms to ensure cooperation of processes.
• Critical Section: a segment of code in which the process changes shared data, which may result in data inconsistency or other
synchronization problems.
• To Solve this: each process must ask permission to enter the critical section
WHAT ARE THE STEPS TO ENTER A CRITICAL SECTION:
1. Ask permission in the entry section.
2. After the critical section, an exit section is executed.
3. The rest of the code is the remainder section.
Critical
Section
Requirements
1. Mutual Exclusion: if a process is executing in its critical section, no other process can be executing in its critical section.
2. Progress: the decision of which process enters its critical section next cannot be postponed indefinitely if no process is currently in its critical
section.
3. Bounded Waiting: there is a limit on how many times other processes may enter their critical sections
after a process has requested to enter its critical section and before that request is granted.
Race
Condition
• Several processes access and change the same data at the same time, and the outcome depends on the order of access (e.g., two forked
processes racing to be assigned the next process ID)
II. Peterson’s Solution
Peterson’s Solution
• Two process model
• Sharing a memory in which there is a turn variable that indicates whose turn is to enter
the critical section.
• A flag array is used to indicate if a process is ready to enter the critical section
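For illustration, the classic structure of Peterson's solution for process i (with j the other process), using the shared turn variable and flag array described above:

/* shared variables: int turn;  bool flag[2]; */

while (true) {
    flag[i] = true;            /* I am ready to enter my critical section */
    turn = j;                  /* but let the other process go first if it also wants in */
    while (flag[j] && turn == j)
        ;                      /* busy wait */

    /* ---- critical section ---- */

    flag[i] = false;           /* exit section */

    /* ---- remainder section ---- */
}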
Analysis on Peterson’s
Solution
• Peterson’s solution is not guaranteed to work as compilers may reorder applications
that have no dependencies.
• Inconsistency in multithreaded environments
III. Hardware Synchronization
Main Points
• Many systems provide support for implementing critical section code.
• Uniprocessors could disable interrupts.
 Code would execute without preemption.
 Generally, too inefficient on multiprocessor systems
Hardware Instructions
• Special instructions that allow us to test and modify the content of a word atomically.
1. Test and Set Instruction
2. Compare and Swap Instruction
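Their usual definitions, as C-like pseudocode (each function body is assumed to be executed atomically by the hardware):

/* executed atomically */
bool test_and_set(bool *target) {
    bool rv = *target;          /* return the old value...              */
    *target = true;             /* ...and set the word to true (locked) */
    return rv;
}

/* executed atomically */
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)     /* swap only if the word holds the expected value */
        *value = new_value;
    return temp;                /* caller inspects the returned old value */
}

/* a simple lock built on test_and_set (busy-waiting spinlock) */
while (test_and_set(&lock))
    ;                           /* spin */
/* critical section */
lock = false;                   /* release the lock */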
IV. Mutex Locks
Atomic Variables • Provide uninterruptible (atomic) updates on basic data types (integers and booleans)
Mutex Lock
• The previous solutions are complicated.
• The simplest tool is a mutex lock (a variable indicating whether the lock is held or not)
• Protecting a critical section by:
 Acquire() a lock (locking)
 Release() the lock (unlocking)
• Acquire() and release() must be atomic.
 Usually implemented via hardware instructions
This solution requires
busy waiting
Lock called spinlock
Busy Waiting: Constantly
waiting and checking for a
condition to happen before
resuming execution
V. Semaphores
Semaphore
• Synchronization tool that provides ways for processes to synchronize their activities.
• Can only be accessed via two atomic operations.
 Wait()
 Signal()
WHAT ARE THE TYPES OF SEMPAHORES
1. Counting Semaphore: Integer value with no restriction
2. Binary Semaphore: 0 or 1 (same as mutex lock)
Semaphore
Implementation
• Must guarantee that no two processes can execute wait() or signal() on the same semaphore at the same time.
• The implementation could still use busy waiting in its own critical section:
- the implementation code is short, and
- there is little busy waiting if the critical section is rarely occupied.
• However, applications may spend a lot of time in critical sections, so busy waiting is not a good
solution for them.
Semaphore
Implementation
Without Busy Wait
• Each semaphore has an associated waiting queue.
• Each entry has a value, and a pointer to the next record.
• There exist two operations:
- Block: place the process calling the operation on the appropriate waiting queue
- Wakeup: remove one of the processes from the waiting queue and place it in the ready
queue
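A sketch of wait() and signal() with a waiting queue instead of busy waiting (C-like pseudocode; block() and wakeup() are the two operations described above, and each function body is assumed to execute atomically):

typedef struct {
    int value;
    struct process *list;       /* queue of processes waiting on this semaphore */
} semaphore;

void wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add the calling process to S->list */
        block();                /* suspend the caller instead of spinning */
    }
}

void signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);              /* move P to the ready queue */
    }
}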
Can implement a
counting semaphore as a
binary semaphore.
VI. Classical Synchronization Problems
Types
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
1. Bounded-Buffer
Problem
• N buffers, each buffer can hold one item
• Mutex Semaphore initialized to 1
• Full initialized to 0
• Empty initialized to value N
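The standard producer/consumer structure built on these three semaphores (pseudocode; producing, inserting, removing, and consuming items are abstracted away):

/* semaphores: mutex = 1, full = 0, empty = N */

/* Producer */
while (true) {
    /* produce an item */
    wait(empty);                /* wait for a free buffer slot */
    wait(mutex);                /* enter the critical section */
    /* add the item to the buffer */
    signal(mutex);
    signal(full);               /* one more item is available */
}

/* Consumer */
while (true) {
    wait(full);                 /* wait for an available item */
    wait(mutex);
    /* remove an item from the buffer */
    signal(mutex);
    signal(empty);              /* one more free slot */
    /* consume the item */
}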
2. Readers and
Writers Problem
• Problem Statement: allowing multiple readers to read at the same time, only one
writer can access the dataset.
• A dataset is shared among a number of concurrent processes
• Readers: only reads the dataset
• Writers: can both write and read
• Rw_mutex initialized to 1
• Mutex initialized to 1
• Read_count initialized to 0
Readers and Writers
Problem Variation
• Once a writer is ready to write, no newly arrived readers is allowed to read.
• Both the first and second variation of the problem may lead to starvation
• Problem is solved on system by providing reader-writer locks
3. Dining
Philosophers
Problem
• N philosophers sit at a round table with a bowl of rice in the middle.
• They occasionally try to pick up 2 chopsticks to eat from the bowl (a philosopher needs two
chopsticks to eat)
Kernel Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor system
• On multiprocessor systems, spinlocks are used
• Provides dispatcher objects which may act as mutexes, semaphores, events, timers
• Events: a condition variable
• Timers: Notify threads when time expires
• Dispatcher Objects are either signaled state (object available) or non-signaled state
(thread will block)
Week 6: Deadlocks
IT241 Operating Systems
Contents
I. System Model
II. Deadlock Characterization
III. Methods for Handling Deadlock
IV. Deadlock Prevention
V. Deadlock Avoidance
VI. Avoidance Algorithms
VII. Deadlock Detection
VIII. Recovery From Deadlock
I. System Model
Main Points
• System consists of resources ( CPU cycles, memory space, I/O devices)
• Process utilizes a resource as follows: request, use, release.
II. Deadlock Characterization
Main Points
WHAT ARE THE CONDITIONS THAT MAKE DEADLOCK ARISE:
1. Mutual Exclusion: only one process can use a resource at a time.
2. Hold and Wait: A process holding a resource waiting to acquire additional resources held by
another process.
3. No preemption: Resources can only be released voluntarily.
4. Circular wait: P0 waiting for P1, P1 waiting for P2, P2 waiting for P0
Resource-
Allocation
Graph
• Request Edge: a directed edge from a process P to a resource type R
• Assignment Edge: a directed edge from R to P
Basic Facts
on Cycles
and
Deadlock
• If the graph has no cycles, then there is no deadlock.
• If the graph contains a cycle:
- if there is only one instance per resource type, there is a deadlock
- if there are several instances per resource type, there is a possibility of a deadlock
III. Methods for Handling Deadlock
Methods
1. Ensuring the system will never enter a deadlock state:
- Deadlock prevention
- Deadlock avoidance
2. Allow the system to enter a deadlock state and then recover.
3. Ignore the problem and pretend that deadlocks never occur in the system
IV. Deadlock Prevention
Main Points
• Just invalidate one of the four necessary conditions for deadlock.
• Mutual Exclusion: Not required for sharable resources but must hold for non-sharable resources.
• Hold and Wait: Must guarantee that when a process requests a resource, it does not hold any
other resources.
- Requires allocating all of the resources needed by the process before it begins execution.
• No Preemption:
 Releasing all resources of a process in case it is requesting another resource that cannot be
immediately allocated.
 Preempted resources are added to the list of resources for which the process is waiting for.
 Process will be restarted only when it can regain its old resources, as well as the new ones that
it was requesting.
• Circular Wait:
- Imposing a total ordering of all resource types and requiring that each process request resources in
an increasing order of enumeration.
V. Deadlock Avoidance
Main Points
• Requires system to have additional information (priori information).
• Simplest and most useful model requires each process to declare the maximum number of
resources it may need.
• Dynamically examines the resource allocation state to ensure that circular wait never happens.
• The resource-allocation state is defined by the number of available and allocated resources, and by the
maximum demands of the processes.
Safe State
• When a process requests a resource, the system decides whether granting it leaves the system in a safe state.
• Safe state: there exists an ordering of the processes such that for each process Pi, the resources Pi may still request can be satisfied by the
currently available resources plus the resources held by the processes Pj with j < i.
• If the resources Pi needs are not immediately available, Pi can wait until all Pj have finished.
• When Pi terminates, Pi+1 can obtain its needed resources.
Basic Facts on
Safe State and
Deadlock
• If the system is in a safe state, then there is no deadlock.
• If the system is not in a safe state, a deadlock may happen.
• Avoidance: ensure that a system will never enter an unsafe state.
May Cause Starvation
VI. Avoidance Algorithms
Main Points
• Single instance of each resource type: use a resource-allocation graph
• Multiple instances of a resource type: use the Banker's Algorithm
Resource-Allocation Graph
Scheme
• Claim edge converts to request edge when a process requests a resource.
• Request edge is converted to an assignment edge when the resource is allocated to
the process.
• Resources must be claimed a priori in the system.
Resource-Allocation Graph
Algorithm
• Request can be granted only if converting the request edge into an assignment edge
that does not form a cycle.
Banker’s Algorithm
• Used for multiple instances of resource types.
• Each process must a priori claim maximum use
Data Structures for
Banker’s Algorithm
• Available (vector): the number of available instances of each resource type
• Max (matrix): the maximum number of resources needed by each process (for
each resource type)
• Allocation (matrix): keeps track of the resources currently allocated to each process.
• Need (matrix): keeps track of how much each process may still need of each resource
type (Need = Max − Allocation).
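For illustration, a sketch in C of the safety check at the heart of the Banker's Algorithm, using the Available, Allocation, and Need structures above (N and M are assumed compile-time bounds on the number of processes and resource types):

/* returns 1 if the current state is safe, 0 otherwise */
int is_safe(int n, int m, int Available[M], int Allocation[N][M], int Need[N][M]) {
    int Work[M], Finish[N] = {0};               /* Finish[i] = 0: P_i not yet finished */
    for (int j = 0; j < m; j++) Work[j] = Available[j];

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < n; i++) {
            if (Finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < m; j++)
                if (Need[i][j] > Work[j]) { ok = 0; break; }
            if (ok) {                            /* P_i could run to completion...       */
                for (int j = 0; j < m; j++)
                    Work[j] += Allocation[i][j]; /* ...and then release its resources    */
                Finish[i] = 1;
                progress = 1;
            }
        }
    }
    for (int i = 0; i < n; i++)
        if (!Finish[i]) return 0;                /* some process can never finish: unsafe */
    return 1;                                    /* a safe sequence exists                */
}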
VII. Deadlock Detection
Single Instance of
Each Resource Type
• Maintain a wait-for graph.
• Periodically invoke an algorithm that searches for a cycle in the graph; if a cycle exists,
then there is a deadlock (the algorithm requires on the order of n² operations, where n is the number of vertices).
Several Instances of
a Resource Type
• Available : vector of length m indicating the number of available resources for each type.
• Allocation: a matrix (n x m) that defines the number of resources of each type allocated to
a process
• Request: a matrix (n x m) indicating the request of each process.
Detection Algorithm
(not very important)
Detection Algorithm
Usage
• If the detection algorithm is invoked at an arbitrary point in time, there may be many cycles in the resource graph, and we
CANNOT tell which of the deadlocked processes caused the deadlock
VIII. Recovery From Deadlock
Process Termination
WHAT ARE THE POSSIBLE WAYS TO RECOVER FROM DEADLOCK?
1. Abort all deadlocked processes.
2. Abort one process at a time until the deadlock cycle is eliminated
Resource
Preemption
WHAT ARE THE STEPS FOR RESOURCE PREEMPTION?
• Selecting a victim
• Rollback: Returning to some safe state
• Starvation
Some process may always be
the victim.
Note: n is the number of vertices in the wait-for graph.
Week 7: Main Memory
IT241 Operating Systems
Contents
I. Memory Introduction
II. Contiguous Allocation
III. Paging
IV. Page Table Structure
V. Swapping
I. Memory Introduction
Main Points
• Main memory and registers are the only storage CPU can directly access
• Register Access is done in one CPU clock.
• Main memory access can take many cycles, causing the CPU to stall.
• The cache sits between main memory and the CPU registers.
Protection
• A process can only access the addresses in its own address space.
- HOW? Use base and limit registers.
• The CPU checks every memory access generated in user mode:
- address >= base
- address < base + limit
Memory
Binding
AT WHAT STAGES CAN ADDRESS BINDING
HAPPEN?
1. Compile Time:
• Memory locations known
• Generates Absolute code.
2. Load Time:
• Locations are unknown at compile time
• Generates relocatable code.
3. Execution Time: at run time (Need hardware support).
Address
Space
• Logical Address(Virtual Address) is generated by the CPU.
• Physical Address: address that is seen by the memory unit.
• WHEN ARE THEY EQUAL?
1. compile time
2. load-time address-binding schemes.
• WHEN ARE THEY DIFFERENT?
1. execution-time address-binding scheme.
Memory
Management
Unit
(MMU)
• The MMU maps logical addresses to physical addresses dynamically.
- Physical Address =
Logical (user) Address + relocation (base) register
- This is called execution-time binding
(the mapping is applied on every memory reference).
II. Contiguous Allocation
Main Points
• Main memory is usually divided into two partitions:
- the OS, held in low memory together with the interrupt vector;
- user processes, held in high memory.
• Base register contains value of the smallest physical address.
• Limit register contains the range of logical addresses
Variable
Partition
• Hole: block of available memory scattered throughout memory
WHAT IS THE INFORMATION MAINTAINED BY THE OS:
1. Allocated partitions.
2. Free Partitions (holes).
Dynamic
Storage
Allocation
• First-fit: Allocate the first hole that is big enough.
• Best-fit: Allocate the smallest hole that is big enough (must search the entire list).
• Worst-fit: Allocate the largest hole (must search the entire list).
• First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
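A quick worked example with hypothetical numbers: suppose the free holes are 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB, and processes of 212 KB, 417 KB, 112 KB, and 426 KB arrive in that order. First-fit places 212 in the 500 KB hole, 417 in the 600 KB hole, 112 in the 288 KB remainder of the 500 KB hole, and then cannot place 426. Best-fit places 212 in 300, 417 in 500, 112 in 200, and 426 in 600, so everything fits. Worst-fit places 212 in 600, 417 in 500, 112 in the 388 KB remainder of the 600 KB hole, and cannot place 426. In this particular example, best-fit happens to be the best choice.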
Fragmentation
• External Fragmentation: enough total free memory exists to satisfy a request, but it is scattered in holes between allocated blocks
• Internal Fragmentation: allocated memory may be slightly larger than requested; the unused space inside an allocated block is wasted
• With first-fit, for every N allocated blocks, about 0.5 N blocks are lost to fragmentation (the 50-percent rule).
Compaction
• Reduce external fragmentation by compaction:
- Shuffle memory contents to place all free memory together in one large block.
- Compaction is only possible if relocation is dynamic and done at execution time.
III. Paging
Main Points
• WHAT ARE THE MAIN STEPS NEEDED IN PAGING?
1. Divide physical memory into frames
2. Divide logical memory into pages
3. Keep track of all free frames
4. Set up a page table to translate logical to physical addresses
• Still have Internal fragmentation
To run a program of size N
pages, need to find N free
frames and load program
Address
Translation
HOW IS AN ADDRESS DESCRIBED IN TERMS OF PAGES:
1. Page Number: used as an index into a page table which contains the base address of each page in
the physical memory.
2. Page offset: combined with base address to define the physical memory address
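A tiny C sketch of the split, assuming for illustration a 4 KB page size (so the offset is the low 12 bits of the logical address):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u          /* assumed page size: 4 KB = 2^12 bytes */
#define OFFSET_BITS 12u

int main(void) {
    uint32_t logical = 0x00012ABCu;                 /* example logical address */
    uint32_t page    = logical >> OFFSET_BITS;      /* page number p */
    uint32_t offset  = logical & (PAGE_SIZE - 1);   /* page offset d */
    /* physical address = frame base looked up in the page table for 'page',
       plus 'offset' (the page-table lookup itself is not shown here) */
    printf("page = %u, offset = %u\n", page, offset);
    return 0;
}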
IV. Page Table Structure
Page Table
Implementation
• Page-table base register (PTBR): points to the page table.
• Page-table length register (PTLR): indicates the size of the page table.
• Every data/instruction requires 2 memory accesses (one for page table and one for the data).
• This problem can resolve using a special fast-look-up hardware cache called translation look-
aside buffers (TLBs)
Translation
Look-aside
Buffer
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry
• On a TLB miss, value is loaded into the TLB for faster access next time.
 Some entries can be wired down for permanent fast access.
Logical address layout: page number p (m − n bits) | page offset d (n bits), for a logical address space of size 2^m and a page size of 2^n.
Also called Associative Memory
Effective
Access Time
• Hit Ratio: percentage of times that a page number is found in TLB.
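For example, with hit ratio α, memory access time m, and a TLB lookup time small enough to ignore, the effective access time is roughly EAT = α · m + (1 − α) · 2m, since a TLB miss costs one extra memory access to read the page table. With m = 100 ns and α = 0.80, EAT = 0.8 · 100 + 0.2 · 200 = 120 ns; with α = 0.99 it drops to about 101 ns.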
Memory
Protection
• Adding a protection bit with each frame (read or read-write permission)
• Valid-invalid bit is attached to each entry in the page table
 Valid: the associated page is in the process logical address space.
 Invalid: the page is not in the process logical address space.
Inverted Page
Table
• One entry for each real page of memory.
• Decreases memory needed to store each page table.
• Using a hash table, we can limit the search to one (or at most a few) page-table entries.
V. Swapping
Main Points
• Backing store : A fast disk large enough to accommodate copies of memory images.
• Roll out, roll in: a swapping variant used for priority-based scheduling algorithms (a lower-priority
process is swapped out so a higher-priority process can be loaded)
Any violation will
result in a trap
Major part of
swap time is
transfer time
We can also use
PTLR for this.
Week 9: Virtual Memory
IT241 Operating Systems
Contents
I. Virtual Memory
II. Demand Paging
III. Allocation of Frames
I. Virtual Memory
Definitions
• Virtual Memory: separation of logical and physical memory.
• Virtual Address Space: logical view of how process is stored in memory.
Main Points
• Program can execute when it is partially loaded into the memory.
• Each program takes less memory while running (so more programs can run at the same time)
- Increases CPU utilization and throughput,
- with no increase in response time or turnaround time.
Implementation
Q: MENTION THE TWO METHODS TO IMPLEMENT THE VIRTUAL
MEMORY?
1. Demand Paging
2. Demand Segmentation
II. Demand Paging
Main Points
• Could bring the entire process into memory at load time,
• or bring a page into memory only when it is needed (demand paging):
 Less unnecessary I/O needed.
 Less memory needed.
• Similar to paging with swapping
• Lazy Swapper: Not swapping a page unless it is
needed.
• Pager: A swapper that deals with pages.
• If the pages needed are already in memory:
- no different from normal (non-demand) paging.
• Else:
- need to detect the missing page and load it into memory.
• VALID/INVALID BIT: v = in memory, i = not in memory
Page Fault
WHAT HAPPENS WHEN AN INSTRUCTION RESULTS IN A PAGE FAULT?
1. A trap is generated to the OS.
2. Save the user registers and process state.
3. The OS looks at a table: if the reference is invalid, abort; otherwise the page is simply not in memory.
4. If the page is not present in memory, find a free frame.
5. Swap the page into the frame via a scheduled disk operation.
6. Set the valid/invalid bit to v (valid).
7. Restore the user registers and process state.
8. Restart the instruction that resulted in the page fault.
Page Replacement
• Happens when there are no free frames in memory.
• We want to minimize the number of page faults.
• A modify (dirty) bit is used so that only modified pages are written back to disk.
WHAT ARE THE STEPS FOR PAGE REPLACEMENT?
1. Find the location of the desired page on disk.
2. Find a free frame.
3. If no free frame is found, use a page-replacement algorithm to select a victim frame.
4. Bring page into the free frame and update page and frame tables.
5. Restart instruction that caused the trap (page fault)
Types of
Replacement
Algorithms
1. FIFO (First-In-First-Out)
2. Optimal
3. LRU (Least-Recently-Used)
Without needing
to change code
1. FIFO
• First-In-First-Out approach for each frame.
• Belady's Anomaly: adding more frames can result in more page faults.
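A small worked example: for the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, FIFO replacement gives 9 page faults with 3 frames but 10 page faults with 4 frames, which is exactly Belady's anomaly.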
2. Optimal
• Replace pages that will not be used for the longest period of time.
• Not implementable in practice, as we cannot know the future.
• Used as a benchmark to measure how well other algorithms perform.
3. Least
Recently
Used (LRU)
• Uses past knowledge.
• Replaces the page that has not been used for the longest period of time.
Second-Chance
Algorithm
• It is an LRU approximation.
• Generally FIFO, plus a hardware reference bit.
• If the page to be replaced has a reference bit of:
- 0: replace it
- 1: set the bit to 0 and leave the page (give it a second chance).
12-page faults (better than FIFO)
III. Allocation of Frames
Main Points
• Total frames in the system are the maximum number of allocations.
• Allocation Schemes: 1. Fixed Allocation 2. Priority Allocation
Definitions
• (Fixed) Equal allocation: divide the total number of frames equally among the processes.
• (Fixed) Proportional allocation: allocate frames in proportion to the size of each process (see the small example after this list).
• Global Replacement: select a frame as a victim from the whole system.
• Local Replacement: select the victim frame only from the frames of the process that
caused the page fault.
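A quick proportional-allocation example: with m = 62 free frames and two processes of sizes s1 = 10 pages and s2 = 127 pages, process 1 receives about (10 / 137) × 62 ≈ 4 frames and process 2 receives about (127 / 137) × 62 ≈ 57 frames.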
Thrashing
• Thrashing: A process is busy swapping pages in and out.
• Thrashing leads to:
 Low CPU Utilization
 OS thinking it needs to increase degree of multiprogramming
Thrashing occurs when the size of the locality > total memory size
Week 10: Mass Storage Systems
IT241 Operating Systems
Contents
I. Mass Storage Structure Overview
II. HDD Scheduling
III. Selecting a Disk-Scheduling Algorithm
IV. Storage Attachment
V. RAID Structure
I. Mass Storage Structure Overview
Definitions
• Transfer Rate: Rate at which data flow between drive and computer
• Positioning time (Random Access Time): time to move arm to desired cylinder (seek time)
and time to move it to the desired sector (rotational latency)
• Head crash: results when the disk head contacts the disk surface.
Hard Disk Drives
(HDD)
WHAT ARE THE TYPES OF STORAGES?
1. Hard Disk Drives (HDD)
2. Nonvolatile Memory (NVM) or NVMe (NVM-Express).
3. Volatile Memory (VM)
• Performance:
1. Transfer Rate: 6 Gb/sec (theoretical)
2. Effective Transfer Rate: 1 Gb/sec (real)
3. Seek time: 3 ms – 12 ms (commonly about 9 ms)
• Access latency = Avg. access time = avg. seek time + avg. latency (rotational delay)
• Avg. I/O time = avg. access time + (amount to transfer / transfer rate) + controller overhead
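A worked example with assumed numbers: transferring a 4 KB block on a drive with a 9 ms average seek time, 7200 RPM rotation (about 4.17 ms average rotational latency), a 1 Gb/sec effective transfer rate (so the 4 KB transfer itself takes roughly 0.03 ms), and 0.1 ms controller overhead gives an average I/O time of about 9 + 4.17 + 0.03 + 0.1 ≈ 13.3 ms.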
Nonvolatile Memory
(NVM)
• When packaged in a disk-drive form factor, called a solid-state disk (SSD).
• Includes: USB drives, DRAM, surface-mounted storage on motherboard, mobiles storage.
Advantages:
1. More reliable than HDDs
- No head crash.
- No mechanical parts.
2. Much faster than HDDs.
3. No moving parts.
Disadvantages:
1. More expensive.
2. Shorter life span.
3. Less capacity.
4. Buses can be too slow.
5. Can't overwrite in place.
6. Erases happen in blocks.
7. Can only be erased a limited number of times
before wearing out.
NAND flash memory
(NVM Memory)
• Controller maintains flash translation layer (FTL) table.
• Garbage Collection: Free invalid page space.
• Overprovisioning: working space for GC
Magnetic Tape
• Access time slower than HDD.
• Random-access slower than HDD.
• Not useful as secondary storage and mainly used for backup
Volatile Memory
• DRAM can be used as a mass-storage device, but it is not true secondary storage because it is volatile.
• RAM drives: present raw block devices to the OS and can be formatted with a file system.
• RAM is used as high-speed temporary storage.
Disk Attachment
• Storage accessed through I/O busses.
WHAT ARE THE TYPES OF DISK ATTACHMENT?
1. Advanced Technology Attachment (ATA)
2. Serial ATA (SATA)
3. eSATA
4. Serial Attached SCSI (SAS)
5. Universal Serial Bus (USB)
6. Fibre Channel (FC)
• Because NVM is faster than HDD, NVMe was created (connecting directly to the PCI bus).
• Data transfers are carried out by controllers called host-bus adapters (HBAs).
Address Mapping
• Disk drives are addressed like 1-d array of logical blocks.
• Logical to physical mapping made easy,
 Except for: Bad sectors, non-constant # of sectors per track.
II. HDD Scheduling
Main Points
• Disk Bandwidth = total number of bytes transferred / total time between the first request and the completion of the last transfer
WHAT ARE THE SOURCES OF I/O REQUESTS?
1. OS.
2. System Processes.
3. User Processes.
• I/O Request includes:
1. I/O Mode
2. Disk Addresses
3. Memory Addresses
4. Number of sectors to transfer.
• OS maintains queue of requests, per disk or device.
• In the past, OS was responsible for queue management.
• Now, it is built into the storage devices.
WHAT ARE THE SCHEDULING ALGORITHMS USED?
1. FCFS
2. SCAN
3. C-SCAN
FCFS
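A standard worked example (illustrative request queue): with pending requests at cylinders 98, 183, 37, 122, 14, 124, 65, 67 and the head starting at cylinder 53, FCFS services the requests in arrival order, giving a total head movement of 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 cylinders.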
SCAN
• Called Elevator Algorithm.
• The disk arm starts at one end of the disk and services
requests as it moves toward the other end.
• If requests are uniformly dense, the largest density of pending requests ends up at the
other end of the disk, and those requests wait the longest.
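For the same example queue used for FCFS above (head at cylinder 53; requests at 98, 183, 37, 122, 14, 124, 65, 67; cylinders 0–199): if the arm is moving toward 0, SCAN services 37 and 14, continues to cylinder 0, then reverses and services 65, 67, 98, 122, 124, and 183, for a total head movement of 53 + 183 = 236 cylinders, far less than FCFS.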
C-SCAN
• Provides a more uniform wait time than SCAN.
• Head moves from one end to the other but, it resets to
the beginning of the disk (without servicing any
requests while reversing).
• Treats the cylinders as a circular list
III. Selecting Disk-scheduling Algorithm
Main Points
• SSTF is common and has natural appeal.
• SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
• To avoid starvation, Linux implements the Deadline scheduler.
• NOOP and CFQ (completely fair queueing) are also available on RHEL 7 (Red Hat Enterprise Linux).
Deadline
Scheduler
• Separate Read and Write Queues (More priority to read).
• 4 Queues (2 read, 2 write).
• 1 read and 1 write queue are sorted in LBA order (implementing C-SCAN).
• 1 read and 1 write queue are kept in FCFS order.
• If any request in an FCFS queue is older than the configured age (default 500 ms), the LBA queue containing that request is
selected for the next batch of I/O requests.
Storage Device
Management
• Low-level formatting / physical formatting: dividing the disk into sectors that the controller can
read/write.
• The OS needs to record its own data structures on the disk.
• Partition the disk into groups of cylinders (each treated as a logical disk).
• To increase efficiency, most file systems group blocks into clusters.
- Disk I/O is done in blocks.
- File I/O is done in clusters.
• Root Partition: contains the OS and file systems.
• At mount time, the file system is checked:
- if all the metadata is correct, add it to the mount table;
- if not, fix it and try again.
• Bootstrap loader: a program stored in the boot blocks.
• Sector sparing is used to handle bad blocks.
IV. Storage Attachment
Storage
Attachment
HOW DO COMPUTERS ACCESS STORAGE?
1. Host-attached Storage (HAS), through local I/O ports.
- To attach many devices, USB, FireWire, and Thunderbolt are used.
- High-end systems use Fibre Channel (FC).
2. Network-attached Storage (NAS).
- Common protocols are NFS and CIFS.
- Implemented via remote procedure calls (RPCs).
- iSCSI uses an IP network to carry the SCSI protocol.
3. Cloud Storage, which is API based.
Storage Arrays
• Avoid NAS drawbacks (using network bandwidth)
WHAT ARE THE FEATURES PROVIDED BY A STORAGE ARRAY TO HOSTS?
1. Ports to connect hosts to the array
2. Memory and controlling software
3. RAID
4. Shared storage
5. Snapshots, clones, thin provisioning, replication, deduplication
Storage Area
Network
• Storage access is controlled via LUN masking
• Easy to add or remove storage.
V. RAID Structure
Main Points
• RAID: Redundant Array of Inexpensive Disks
• Mean time to repair: the exposure time during which
another failure could cause data loss.
• Increases mean time to failure.
• Frequently combined with NVRAM to
improve write performance.
• Arranged into six different levels.
• Disk striping uses a group of disks as one
storage unit.
• RAID alone does not prevent or detect data
corruption but, Adding checksums does.
• RAID1: Mirroring/Shadowing (keeps
duplicate)
• RAID 1+0/RAID 0+1:
 Striped mirrors
 High performance
 Reliability.
• RAID 4,5,6 Uses much less redundancy.
Object Storage
• Object Storage Management software like Hadoop File system (HDFS) and Ceph.
- Typically stores N copies of the data, across N systems
- Horizontally scalable
- Content addressable, unstructured.
Week 11: I/O Systems
IT241 Operating Systems
Contents
I. I/O Hardware
II. Application I/O Interface
III. Kernel I/O Subsystem
- Error Handling
- I/O Protection
- Power Management
IV. Transforming I/O Requests to Hardware Operations
I. I/O Hardware
Introduction
WHAT ARE THE TYPES OF I/O DEVICES?
1. Storage.
2. Transmission.
3. Human Interface.
WHAT ARE THE COMMON CONCEPTS OF I/O INTERFACES?
1. Port: connection point for a device.
2. Bus: a shared set of wires, either a daisy chain or a shared direct-access bus.
- PCI/PCIe is used in PCs and
servers.
- An expansion bus connects relatively
slow devices.
- Serial-attached SCSI (SAS) is a disk
interface.
3. Controller: operates a port, a bus, or a
device.
- Either integrated or on a separate circuit
board.
• Devices usually have registers where the device driver places commands, addresses, and data;
some devices also provide a FIFO buffer for data.
• Data-in register: host reads input.
• Data-out register: host sends output.
• Status register: contains status about instructions (completed, available, error).
• Control register: written by host to start command, change mode of device.
• Memory-mapped I/O: device data/registers mapped to processor address space.
Polling
• Reading the busy bit in a polling loop takes 3 instruction cycles:
1. read the status register,
2. extract the status bit,
3. branch if not zero.
WHAT ARE THE STEPS OF POLLING?
1. The host reads the busy bit repeatedly until it becomes 0 (busy waiting).
2. The host sets the write bit and writes the data into the data-out register.
3. The host sets the command-ready bit.
4. The controller notices the command-ready bit and sets the busy bit while it performs the operation.
5. When done, the controller clears the command-ready bit and the busy bit.
Interrupts
• The CPU checks the interrupt-request line after each instruction.
• Interrupt handler: the routine that receives and services the interrupt.
WHAT ARE THE TYPES OF INTERRUPTS?
1. Maskable: can be ignored or delayed.
2. Non-maskable: must be handled immediately.
• Interrupt vector: dispatches interrupts to the correct handlers.
• The interrupt mechanism is also used for exceptions.
• A system call executes via a trap, which triggers the kernel to handle the request.
• Multi-CPU systems can handle multiple interrupts concurrently.
• Interrupts are used for time-sensitive processing.
WHAT ARE THE INTERRUPT HANDLING FEATURES?
1. Deferring interrupt handling during critical processing.
2. Dispatching to the proper handler without polling all devices.
3. Multilevel (prioritized) interrupts.
4. A way to get the OS's attention directly, e.g. division by zero raises a trap.
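As a rough illustration of how an interrupt vector dispatches to handlers, here is a toy table of function pointers; the vector size, handler names, and dispatch() helper are illustrative assumptions, not a real kernel interface.

```c
#include <stdio.h>

/* Toy interrupt vector: an array of handler function pointers indexed by
 * interrupt number. Real CPUs define the vector format in hardware. */
#define NVECTORS 4

static void timer_handler(void)    { puts("timer tick"); }
static void keyboard_handler(void) { puts("key pressed"); }
static void default_handler(void)  { puts("unexpected interrupt"); }

static void (*interrupt_vector[NVECTORS])(void) = {
    timer_handler, keyboard_handler, default_handler, default_handler,
};

/* Dispatch: hardware supplies the interrupt number; the kernel simply
 * indexes the vector and calls the registered handler. */
static void dispatch(unsigned irq)
{
    if (irq < NVECTORS)
        interrupt_vector[irq]();
    else
        default_handler();
}

int main(void) { dispatch(0); dispatch(1); return 0; }
```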
Direct Memory
Access
(DMA)
• Used to avoid programmed I/O (byte-at-a-time transfer) for large data movement.
• Requires a DMA controller.
• The OS writes a DMA command block into memory.
WHAT ARE THE CONTENTS OF THE COMMAND BLOCK?
1. Source and destination addresses.
2. Read or write mode.
3. Byte count.
• The OS then writes the location of the command block to the DMA controller.
• Cycle stealing: the DMA controller takes memory-bus cycles from the CPU so that transfers proceed without CPU involvement.
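A hedged sketch of what a DMA command block might contain; the struct name, field layout, and flag meanings are assumptions for illustration only, since every DMA controller defines its own format.

```c
#include <stdint.h>

/* Hypothetical DMA command block (layout is illustrative). */
struct dma_command {
    uint64_t src;         /* source address                             */
    uint64_t dst;         /* destination address                        */
    uint32_t byte_count;  /* number of bytes to transfer                */
    uint32_t flags;       /* e.g. read vs. write, interrupt on completion */
};
```

The OS would fill in such a block, write its address into a controller register, and then wait for the completion interrupt.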
II. Application I/O Interface
Main Points
• Each OS has its own I/O subsystem and device driver frameworks.
WHAT ARE THE DEVICE VARIATIONS?
1. Stream or block.
2. Sequential or random-access.
3. Synchronous or asynchronous.
4. Sharable or dedicated.
5. Speed of operation.
6. Read-write, read-only, or write-only.
Characteristics
HOW CAN I/O DEVICES BE GROUPED BY OS?
1. Block I/O.
2. Character I/O (stream).
3. Memory-mapped file access.
4. Network sockets.
Block Devices
• Disk drives.
• Commands include read, write, seek.
• Raw I/O, Direct I/O, file-system access.
Character Devices
• Keyboards, mice, serial ports.
• Commands include get() and put().
• Library layers on top allow line editing.
Network Devices
• Different enough from block and character devices to have their own interface.
• Linux, Unix, and Windows separate the network protocol from the network operations (socket interface).
• Commands include select().
• Varying approaches:
 pipes, FIFOs, streams, queues, mailboxes.
Clock and Timers
• Provide current time, elapsed time, timer.
• Programmable interval timer: used for timings and periodic interrupts.
Non-blocking I/O
And
Asynchronous I/O
• Blocking: the process is suspended until the I/O completes.
• Non-blocking: the I/O call returns as much data as is available.
 Can be implemented via multi-threading.
 Returns quickly with a count of bytes read or written.
• Asynchronous: the process runs while the I/O executes, but this is more difficult to use.
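A minimal POSIX sketch of non-blocking I/O; the device path /dev/ttyS0 is only an example, and error handling is reduced to the essentials.

```c
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

/* Non-blocking read: the call returns whatever is available instead of
 * suspending the process. */
int main(void)
{
    int fd = open("/dev/ttyS0", O_RDONLY | O_NONBLOCK);  /* illustrative path */
    if (fd < 0) { perror("open"); return 1; }

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n >= 0)
        printf("read %zd bytes\n", n);            /* may be 0 or a partial read */
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
        printf("no data available right now\n");  /* non-blocking: no suspend   */
    else
        perror("read");

    close(fd);
    return 0;
}
```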
Vectorized I/O
• Vectorized I/O: allows one system call to perform multiple I/O operations.
• This method is called scatter-gather.
 Decreases context-switching and system-call overhead.
 Some versions provide atomicity.
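A small scatter-gather example using the standard POSIX writev() call (the Unix vectorized-I/O interface noted in the margin); the buffer contents are arbitrary.

```c
#include <sys/uio.h>
#include <unistd.h>
#include <string.h>

/* writev(): one system call writes from several separate buffers,
 * here to standard output. */
int main(void)
{
    char *hdr = "header: ", *body = "payload\n";
    struct iovec iov[2] = {
        { .iov_base = hdr,  .iov_len = strlen(hdr)  },
        { .iov_base = body, .iov_len = strlen(body) },
    };
    ssize_t n = writev(STDOUT_FILENO, iov, 2);  /* both buffers in one call */
    return n < 0 ? 1 : 0;
}
```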
III. Kernel I/O Subsystem.
1. Scheduling
• Some OSs order I/O requests via a per-device queue.
• Some OSs try to be fair and apply Quality of Service techniques (e.g. IPQoS).
2. Buffering
• Buffering: storing data in memory while transferring it between devices.
WHY BUFFERING?
1. Speed mismatch.
2. Transfer-size mismatch.
3. Maintaining copy semantics.
• Double buffering: two buffers, so one can be filled while the other is emptied.
3. Caching
• Caching: a faster device holds a copy of the data.
• A key factor in performance.
• Sometimes combined with buffering.
4. Spooling
• Spooling: holding output for a device.
• Used when a device can serve only one request at a time, e.g. printing.
5. Device Reservation
• Provides exclusive access to a device.
• Allocation and de-allocation system calls.
• The system must watch out for deadlocks.
In Unix, ioctl()
In Unix, readv()
Error Handling
Main Points
• The OS can recover from a failed disk read, an unavailable device, or a transient write failure.
• Most system calls return an error number or code when an I/O request fails.
• The system keeps problem-report logs.
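A typical user-level illustration of error reporting: a failed I/O system call returns -1 and sets errno. The file path is deliberately invalid so that the call fails.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/nonexistent/file", O_RDONLY);   /* this open will fail */
    if (fd < 0) {
        /* errno holds the error code; strerror() gives a readable message */
        fprintf(stderr, "open failed: errno=%d (%s)\n", errno, strerror(errno));
        return 1;
    }
    close(fd);
    return 0;
}
```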
I/O Protection
Main Points
• All I/O instructions are privileged.
• I/O must be performed via system calls, and memory-mapped and I/O port locations must be protected.
Power Management
Main Points • Cloud computing environments move virtual machines between servers to balance load and reduce power consumption.
IV. Transforming I/O requests to hardware Operations
I/O Life Cycle
Week 12: File System & File-System Implementation
Contents
Contents .................................................................................................................................................1
I. File Concept....................................................................................................................................2
II. Access Methods ..........................................................................................................................2
III. Disk Structure .............................................................................................................................3
IV. Protection....................................................................................................................................5
V. File-system Structure ..................................................................................................................5
VI. File-system Operations ...............................................................................................................6
VII. Directory Implementation...........................................................................................................6
VIII. Allocation Methods.................................................................................................................6
IX. Free-space Management .............................................................................................................8
I. File Concept
Main Points
• Contiguous logical address space.
• Types:
 Data: numeric, character, or binary.
 Program.
File attributes
1. Name 2. Identifier 3. Type 4. Location
5. Size 6. Protection 7. Time, date, and
user identification
File
Operations
1. Create 2. Write 3. Read
4. Seek(Reposition) 5. Delete 6. Truncate
7. Open 8. Close
File Locking
• Shared lock: a reader lock.
• Exclusive lock: a writer lock.
• Mandatory: access is denied depending on the locks held and requested.
• Advisory: processes can check the status of locks and decide what to do.
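A short sketch of advisory locking with the POSIX fcntl() interface; the file name is illustrative, and a real program would handle errors and signals more carefully.

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

/* Advisory record locking: cooperating processes must all check the lock;
 * the kernel does not block processes that ignore it. */
int main(void)
{
    int fd = open("data.txt", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lk = {
        .l_type   = F_WRLCK,   /* exclusive (writer) lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 = lock the whole file */
    };
    if (fcntl(fd, F_SETLKW, &lk) == -1) { perror("lock"); return 1; }

    /* ... critical section: update the file ... */

    lk.l_type = F_UNLCK;       /* release the lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}
```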
II. Access Methods
Main Points
• Sequential Access
1. Read next.
2. Write next.
3. Reset
4. No read after last write (rewrite).
• Direct Access (see the lseek() sketch below)
1. Read n.
2. Write n.
3. Seek (position) to n, after which the sequential operations apply:
 read next, write next, rewrite n.
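A minimal sketch of direct (random) access using POSIX lseek(); the record size and file name are assumptions for illustration.

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

#define RECORD_SIZE 128          /* illustrative fixed record size */

int main(void)
{
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    long n = 7;                  /* read record number 7 directly */
    char rec[RECORD_SIZE];
    if (lseek(fd, n * RECORD_SIZE, SEEK_SET) == (off_t)-1) {
        perror("lseek");
        return 1;
    }
    ssize_t got = read(fd, rec, sizeof rec);
    printf("read %zd bytes of record %ld\n", got, n);

    close(fd);
    return 0;
}
```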
Other
Access
Methods
• Generally, involves creating an index.
• If the index is too large, we can create an index to the index.
• IBM indexed sequential-access method (ISAM)
 Small master index
 File sorted on a key.
 Done by OS.
• VMS OS provides index and relative files.
Write uses a write pointer
Read uses a read pointer
III. Disk Structure
Main Points
• A disk can be divided into partitions (also known as minidisks or slices), and each can be:
1. Protected against failures by RAID.
2. Raw (without a file system).
3. Formatted (with a file system).
• Volume: an entity containing a file system.
• There can be many special-purpose file systems within the same OS on a computer.
Operations
on Directory
1. Search for a file
2. Create a file.
3. Delete a file.
4. List a directory.
5. Rename a file.
6. Traverse file system.
Directory
Organization
• The directory is organized to obtain:
 Efficiency: locating a file quickly.
 Naming: convenient for users.
WHAT ARE THE TYPES OF DIRECTORIES?
1. Single-level.
2. Two-level.
3. Tree-structured.
4. Acyclic-graph.
5. General-graph.
Single Level
Directory
• Naming problem: every file must have a unique name (one namespace for all users).
• Grouping problem: files cannot be grouped.
Two-level
Directory
• Efficient searching.
• No grouping capability.
• Can have same file name for different users.
Tree-structured
directory
Acyclic-
graph
directories
• Shared subdirectories and files.
• Two different names (Aliasing).
• Deleting a shared file or directory can leave dangling pointers. How do we solve this?
1. Back pointers, so that all references can be found and removed (e.g. a daisy-chain organization).
2. Entry-hold-count solution: delete the entry only when the count drops to zero.
• New directory entry type:
 Link: a pointer to an existing file.
 Resolve the link: follow the pointer to locate the file.
General-
graph
Directory
HOW DO WE GUARANTEE NO
CYCLES?
1. Allow links only to files, not to subdirectories.
2. Garbage collection.
3. Run a cycle-detection algorithm every time a new link is added.
IV. Protection
Main Points
WHAT ARE THE TYPES OF ACCESS?
1. Read 2. Write 3. Execute
4. Append 5. Delete 6. List
V. File-system Structure
Main Points
• File structure: a collection of related information.
• File system: resides on secondary storage (disk) and is organized into layers.
 Provides a user interface to storage, mapping logical to physical addresses.
 Provides efficient and convenient access for storing, locating, and retrieving data.
• File Control Block (FCB): a structure containing information about a file.
• Device driver: controls the physical device.
File System
Layers
• Device drivers manage the I/O devices at the I/O-control layer.
• The basic file system is given a request such as "retrieve block 123" and translates it to device-driver commands.
 It also manages memory buffers and caches.
• The file-organization module translates logical blocks to physical blocks,
 and manages free space and disk allocation.
• The logical file system manages metadata information.
• It translates a file name into a file number, file handle, and location.
• It also manages directories and provides protection.
• Layering is useful for reducing complexity and redundancy.
• Windows uses the FAT, FAT32, and NTFS file systems.
• Linux uses the ext3 and ext4 file systems.
Newer ones:
ZFS, GoogleFS, Oracle ASM.
VI. File-system Operations
Main Points
• Boot control block: contains the info the system needs to boot the OS from that volume.
• Volume control block (superblock, master file table): contains volume details.
• The OS maintains an FCB for each file.
In-Memory
File System
Structures
• Mount table: stores file-system mounts and file-system types.
• System-wide open-file table: contains a copy of the FCB of each open file.
• Per-process open-file table: contains pointers to the appropriate entries in the system-wide open-file table.
VII. Directory Implementation
Main Points
HOW CAN WE IMPLEMENT A DIRECTORY?
1. Linear List
 Simple to program
 Time consuming to execute.
2. Hash Table
 Shorted search time.
 Collisions: two files hash to the same function.
VIII. Allocation Methods
Main Points
• Refers to how disk blocks are allocated.
HOW CAN WE ALLOCATE DISK BLOCKS?
1. Contiguous.
2. Linked.
3. File Allocation Table (FAT).
4. Indexed.
Contiguous
Allocation
• Best performance in most cases.
• Simple.
• Problems include:
 Finding space on the disk for a file.
 Knowing the size of a file in advance.
 External fragmentation, which requires compaction either off-line (downtime) or on-line.
• Extent-based systems
 Many newer file systems use a modified contiguous allocation scheme.
 Extent: a contiguous set of disk blocks.
Linked
Allocation
• Each file is a linked list of blocks.
• The file ends at a nil pointer.
• No external fragmentation.
• Each block contains a pointer to the next block.
• When a new block is needed, the free-space management system is called.
• Clustering blocks improves efficiency but increases internal fragmentation.
• Locating a block can result in many I/O requests and disk seeks.
FAT
• The beginning of the volume has a table, indexed by block number.
• Works like linked allocation, but the chain of pointers lives in the table, which can be cached in memory for fast traversal.
• New block allocation is simple.
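A toy sketch of a FAT-style chain walk, assuming an in-memory copy of the table; the table size and end-of-chain marker are illustrative, not from any specific FAT variant.

```c
#include <stdint.h>

#define FAT_ENTRIES   4096
#define END_OF_CHAIN  0xFFFFFFFFu    /* illustrative end-of-chain marker */

static uint32_t fat[FAT_ENTRIES];    /* fat[b] = block that follows b    */

/* Count how many blocks a file occupies by following its chain in the
 * table, starting from the file's first block. */
static unsigned file_block_count(uint32_t first_block)
{
    unsigned count = 0;
    for (uint32_t b = first_block; b != END_OF_CHAIN; b = fat[b])
        count++;
    return count;
}
```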
Indexed
Allocation
Method
• Indexed allocation: each file has its own index block of pointers to its data blocks.
• For small files, random access needs only one index-block read.
• For large files, a linked scheme or multi-level indexing is used.
 For example, a two-level index scheme.
• A combined scheme is used by UNIX UFS (direct blocks plus single, double, and triple indirect blocks).
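A hedged sketch of a UFS-style combined scheme as an on-disk inode structure; the field names and the number of direct pointers are illustrative assumptions.

```c
#include <stdint.h>

#define NDIRECT 12   /* illustrative number of direct block pointers */

/* Combined scheme: small files fit entirely in the direct pointers,
 * larger files spill into single/double/triple indirect blocks. */
struct inode_disk {
    uint32_t size;                 /* file size in bytes                   */
    uint32_t direct[NDIRECT];      /* data blocks addressed directly       */
    uint32_t single_indirect;      /* block of pointers to data blocks     */
    uint32_t double_indirect;      /* block of pointers to indirect blocks */
    uint32_t triple_indirect;      /* one more level for very large files  */
};
```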
Performance
• The best method depends on the file-access pattern.
 Contiguous is great for both sequential and random access.
• Linked is good for sequential access, not random.
• Declare the access type at creation and select either contiguous or linked.
• Indexed is more complex:
 A single block access could require two index-block reads and then the data-block read.
 Clustering can help improve throughput and reduce CPU overhead.
• For NVM:
 Old algorithms use many CPU cycles (there is no disk head to optimize for).
 The goal is to reduce CPU cycles and the instruction path needed per I/O.
IX. Free-space Management
Main Points • File system maintains a free-space list.
Linked Free
Space List on
Disk
• Linked list
 Cannot easily get contiguous space.
 No wasted space.
 No need to traverse the entire list.
• Grouping: modify the linked list so the first free block stores the addresses of the next n-1 free blocks.
• Counting is used because free space is frequently contiguous (principle of locality).
 Keep the address of the first free block and a count of the following contiguous free blocks.
 The free-space list then has entries containing a free address and a count.
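A small sketch of the counting approach: each entry records the first block of a contiguous free run and how many free blocks follow it. The struct and helper are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of a counting-based free-space list. */
struct free_run {
    uint32_t first_block;   /* address of the first free block in the run */
    uint32_t count;         /* number of contiguous free blocks           */
};

/* The total number of free blocks is simply the sum of the counts. */
static uint64_t total_free(const struct free_run *runs, size_t n)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += runs[i].count;
    return sum;
}
```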
Space Maps
• Used in ZFS.
• Designed for the huge amount of metadata I/O on very large file systems.
• The device is divided into metaslab units.
• Each metaslab has an associated space map (which uses the counting algorithm).
• Free-block activity is logged rather than rewriting the whole free list.
• Metaslab activity: load the space map into memory in a balanced-tree structure, indexed by offset.
Week 13: Security & Protection
Contents
Contents .................................................................................................................................................1
I. Security...........................................................................................................................................2
II. Program Threats..........................................................................................................................3
III. System and Network Threats .....................................................4
IV. Goal of Protection.......................................................................................................................5
V. Principles of Protection...............................................................................................................5
VI. Domain of Protection..................................................................................................................5
VII. Access Matrix .............................................................................................................................6
VIII. Implementation of Access Matrix...........................................................................................7
I. Security
Introduction
• Intruders (crackers) attempt to breach security.
• Threat: potential security violation.
• Attack: attempt to breach security.
Security Violation
Categories
WHAT ARE THE SECURITY VIOLATION CATEGORIES?
1. Breach of confidentiality: unauthorized reading of data.
2. Breach of integrity: unauthorized modification of data.
3. Breach of availability: unauthorized destruction of data.
4. Theft of service: unauthorized use of resources.
5. Denial of service (DoS): prevention of legitimate use.
Security Violation
Methods
WHAT ARE THE SECURITY VIOLATION METHODS?
1. Masquerading (authentication breach): pretending to be an authorized user.
2. Replay attack: replaying a captured message, possibly with modifications.
3. Man-in-the-middle attack: the intruder sits in the data flow, masquerading as the sender to the receiver and vice versa.
4. Session hijacking: intercepting an ongoing session to bypass authentication.
5. Privilege escalation: a very common attack; gaining access to resources that a user is not supposed to have.
Security Measure
Levels
1. Physical: Servers, Data centers, terminals
2. Application.
3. Operating System: protection mechanisms, debugging.
4. Network: Interruption, DOS, intercepted communications.
• Security is as weak as the weakest link in the chain.
• Humans are a risk due to phishing and social engineering.
Impossible to have
absolute security.
3 | P a g e www.MoussaAcademy.com
00201007153601
II. Program Threats
Definitions
• Malware: software designed to exploit, disable, damage computer.
• Trojan horse: type of malware that is disguised as a legitimate program.
• Spyware: a program, often installed alongside legitimate software, that displays ads and captures user data.
• Ransomware: locks data/files via encryption and demands money to decrypt them.
• Keystroke logger: captures keystrokes to grab passwords and credit-card numbers.
Main Points
• Other threats include trap doors, logic bombs.
• Most threats try to violate the principle of least privilege.
• Goal: Leave Remote Access Tool (RAT) for repeated access.
Code Injection
• Code-injection attack: a bug in system code allows attacker code to be added or modified.
• Results from poor programming practice or unsafe (low-level) languages.
• Can be carried out by script kiddies, since ready-made tools exist.
• Most often achieved via a buffer overflow (see the sketch below).
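A classic illustration of the bug class behind code injection, assuming C on a conventional stack; the function names are made up, and the "safer" variant simply bounds the copy.

```c
#include <string.h>
#include <stdio.h>

/* Vulnerable pattern: copying untrusted input into a fixed-size buffer
 * with no bounds check. Input longer than 16 bytes overruns the stack
 * and can overwrite the return address. */
void vulnerable(const char *untrusted)
{
    char buf[16];
    strcpy(buf, untrusted);      /* BUG: no length check */
    printf("%s\n", buf);
}

/* Safer variant: bound the copy explicitly. */
void safer(const char *untrusted)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", untrusted);
    printf("%s\n", buf);
}

int main(void) { safer("hello"); return 0; }
```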
Viruses
• Code embedded into a legitimate program.
• Designed to infect other computers.
• Specific to CPU architecture and OS.
• Usually carried via E-mail or macro.
WHAT ARE THE CATEGORIES
OF VIRUSES?
1. File/ parasitic.
2. Boot/ memory.
3. Macro.
4. Source code.
5. Polymorphic (avoids having a virus
signature).
6. Encrypted.
7. Stealth.
8. Multipartite
9. Armored.
4 | P a g e www.MoussaAcademy.com
00201007153601
• Trojan Horse
 Code that misuses its environment, e.g. allowing a program written by one user to be executed with another user's privileges.
 Examples: spyware, browser pop-ups, covert channels.
• Trap Door
 Specific user identifier or password that circumvents normal security procedures.
 Could be included in a compiler.
Windows
WHY IS WINDOWS TARGETED FOR MOST ATTACKS?
1. Most Common OS.
2. Everyone is an administrator.
3. Monoculture considered harmful.
III. System and Network Threats
Network Attacks
• Harder to detect and prevent than local attacks.
• It is difficult to establish a shared secret on which to base authentication.
• There are no physical limits once the system is connected to the internet.
• It is difficult to determine the location of a connected system, as only an IP address is available.
Main Points
• Worm: a standalone program that uses a spawn mechanism to copy itself.
• The Morris Internet worm:
 Exploited UNIX networking features.
 Exploited the trust-relationship mechanism used by rsh to access friendly systems.
 Grappling hook: a small program (99 lines of C code) uploaded to the target.
 The hooked system then uploaded the main worm code.
Denial of Service
(DOS)
• Overload the targeted computer (send too many requests).
• Distributed denial-of-service (DDoS): the attack comes from multiple sites at once (multiple computers).
• How many half-open connections the OS can handle must be considered at the start of the handshake (SYN flooding).
 It can be hard to tell the difference between being a target and suddenly being popular.
• Port scanning: looking for ports accepting network connections (can be used for good or evil).
 Nmap: scans all ports in a given IP range.
 Nessus: has a database of protocols and bugs to apply against a system.
 Scans are often launched from zombie systems, which decreases traceability.
Covert channels: transfer information between processes.
Monoculture: a group of computers running identical software.
IV. Goal of Protection
Main Points • Ensure that each object is accessed correctly and only by the processes that are allowed to do so.
V. Principles of Protection
Guiding Principle
• Principle of least privilege.
• Programs, users, and systems should only be given the needed privileges.
• Setting permissions properly can limit the damage if a bug is exploited.
• Can be Static (during life of system, life of process).
• Can be Dynamic (changed by process as needed) such as: domain switching, privilege
escalation.
• Compartmentalization: protecting each individual system component through permissions.
• Grain aspect
 Rough-grained privilege management easier, simpler, but least privilege now done
in large chunks.
 Fine-grained management more complex, more overhead, but more protective.
• Audit trail: recording all protection-oriented activities.
• Defense in depth: no single principle is a panacea for security vulnerabilities.
VI. Domain of Protection
Main Points
• Rings of protection separate functions into domains and order them hierarchically.
• Process should only have access to objects it currently
requires completing its task (the need-to-know principle).
• Associations can be static or dynamic.
• If dynamic, processes can domain switch.
Domain
Structure
• Domain = user: which objects can be accessed depends on the identity of the user.
• Domain = process: access depends on the identity of the process.
• Domain = procedure: access is limited to objects defined within the procedure (e.g. its local variables).
VII. Access Matrix
Main
Points
• Rows = domains.
• Columns = objects.
Usage
• If a process in domain Di tries to perform an operation on object Oj, that operation must appear in the access-matrix entry for (Di, Oj).
• The user who creates an object can define the access column for that object.
• Dynamic protection:
 Owner of Oi.
 Copy an operation from Oi to Oj (denoted by "*").
 Control: Di can modify Dj's access rights.
 Transfer: switch from domain Di to Dj.
• Mechanism: the OS provides the access matrix plus rules.
 The matrix may only be changed by authorized agents.
• Policy: the user dictates the policy of who can access what.
• Does not solve the general confinement problem.
ACCESS MATRIX WITH COPY RIGHTS
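A toy access matrix in C to make the check concrete: rows are domains, columns are objects, and an operation is allowed only if it appears in the corresponding entry. The sizes, object names, and bit encoding are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

enum { OP_READ = 1 << 0, OP_WRITE = 1 << 1, OP_EXEC = 1 << 2 };

#define NDOMAINS 3
#define NOBJECTS 4

/* Rows = domains, columns = objects; each entry is a mask of allowed ops. */
static unsigned char access_matrix[NDOMAINS][NOBJECTS] = {
    /* F1                 F2       F3       printer  */
    {  OP_READ,           0,       OP_READ, 0        },  /* D0 */
    {  0,                 0,       0,       OP_WRITE },  /* D1 */
    {  OP_READ | OP_WRITE, OP_EXEC, 0,      0        },  /* D2 */
};

/* An operation is permitted only if it appears in entry (domain, object). */
static bool allowed(int domain, int object, unsigned op)
{
    return (access_matrix[domain][object] & op) != 0;
}

int main(void)
{
    printf("D0 read F1:  %d\n", allowed(0, 0, OP_READ));   /* 1 = allowed */
    printf("D0 write F1: %d\n", allowed(0, 0, OP_WRITE));  /* 0 = denied  */
    return 0;
}
```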
VIII. Implementation of Access Matrix
Global Table
• Store ordered triples (domain, object, right-set).
• Table could be large and will not fit main memory.
• Difficult to group objects.
Access List for
Objects
• Row = capability list (key) .
• Column = access-control list for one object.
• Resulting per-object list (domain, right-set).
• Easily extended to contain default set.
Capability List for
Domains
• Instead of being object-based, the list is domain-based.
• A capability list for a domain lists its objects together with the operations allowed on them.
• Capability: an object represented by its name or address.
• The capability list is associated with the domain but is never directly accessible to the domain (like a secured pointer).
Lock Key
• Compromise between access lists and capability lists.
• Each object has a list of unique bit patterns, called locks.
• Each domain has a list of unique bit patterns, called keys.
Week 14: Virtual Machines & Networks and Distributed Systems
Contents
Contents .................................................................................................................................................1
I. Overview ........................................................................................................................................2
II. Benefits and Features..................................................................................................................2
III. Types of Virtual Machines and Implementations.......................................................................3
IV. Operating System Components...................................................................................................5
V. Distributed Systems ....................................................................................................................6
VI. Distributed File Systems.............................................................................................................7
I. Overview
System Models
Implementation
of VMMs
WHAT ARE THE TYPES OF HYPERVISORS?
1. Type 0: hardware-based solution via firmware.
 IBM LPARs, Oracle LDOMs.
2. Type 1: OS-like software that provides virtualization.
 VMware ESX, Joyent SmartOS, Citrix XenServer.
 Also includes general-purpose OSs that provide VMM functions, such as Windows with Hyper-V and Red Hat Linux with KVM.
3. Type 2: applications that run on standard OSs.
 VMware Workstation and Fusion, Parallels Desktop, Oracle VirtualBox.
WHAT ARE THE OTHER VARIATIONS OF HYPERVISORS?
1. Paravirtualization: the guest OS is modified to work with the VMM.
2. Programming-environment virtualization: the VMM does not virtualize real hardware but instead creates an optimized virtual system (used by Oracle Java and Microsoft .NET).
3. Emulators: allow applications written for one hardware environment to run on a different one.
4. Application containment: not virtualization, but provides similar features by segregating applications, making them more secure and manageable.
 Oracle Solaris Zones, BSD Jails, IBM AIX WPARs.
• Much variation is due to breadth, depth, and importance of virtualization.
II. Benefits and Features
Main Points
• Templating: Create OS + application VM.
• Live Migration: move running VM from one host to another (No interruption of access).
• Cloud computing: Templating + Live Migration.
 Using APIs, programs tell the cloud infrastructure to create new guests, VMs, and virtual desktops.
Virtual
Machine
Management
Service
3 | P a g e www.MoussaAcademy.com
00201007153601
III. Types of Virtual Machines and Implementations
VM Life
Cycle
WHAT IS THE LIFE CYCLE OF A VM?
1. Created by the VMM.
2. Resources are assigned to it (number of cores, memory, networking details, storage details).
 In Type 0, resources are usually dedicated.
 In other types, resources are shared, or a mix.
Type 0
Hypervisor
• Implemented in firmware.
• Smaller feature set than the other types.
• Each guest has dedicated hardware.
• I/O is a challenge, as it is difficult to have enough devices and controllers for every guest.
• The VMM implements a control partition running daemons that provide shared I/O.
• Can provide virtualization-within-virtualization (a guest can itself run a VMM).
Type 1
Hypervisor
• Found in company data centers (data-center OSs).
 Move guests between systems to balance performance.
 Snapshots and cloning.
• Special-purpose operating systems that run natively on the hardware.
 Rather than providing a system-call interface, they create, run, and manage guest OSs.
 Can run on Type 0 hypervisors but not on other Type 1s.
 Guests generally do not know they are running in a VM.
 Implement device drivers for the host hardware.
 Provide traditional OS services such as CPU and memory management.
• Another variation is a general-purpose OS that also provides VMM functionality.
 Red Hat Enterprise Linux with KVM, Windows with Hyper-V, Oracle Solaris.
 Perform normal duties as well as VMM duties.
 Typically fewer features than dedicated Type 1 hypervisors.
• These treat guest OSs as just another process.
Type 2
Hypervisor
• Very little OS involvement in virtualization.
• The VMM is simply another process, run and managed by the host, which requires no changes to the host OS.
• The host does not know it is running a VMM with guests.
• Poorer overall performance, as it cannot take advantage of some hardware features.
Can lead to virtual machine sprawl
due to its simplicity.
Paravirtualization
• Does not fit the strict definition of virtualization.
• Less needed as hardware support for VMs grows.
• Xen was the leader in the paravirtualization space.
• Paravirtualization allowed virtualization of older CPUs without binary translation.
• The guest OS had to be modified to run on the paravirtualized VMM.
Programming
Environment
Virtualization
• Not really virtualization.
• Similar to interpreted languages.
• A programming language designed to run on a virtual machine (e.g. the Java Virtual Machine, JVM).
Emulation
• Virtualization requires the guest CPU to be the same type as the host CPU; emulation removes that requirement.
• Emulation lets a program run on a different CPU architecture.
• It translates each instruction from the guest CPU into instructions for the native CPU.
• Useful when the guest was compiled for a different architecture than the host.
• Slower than native code.
• Very popular, especially in gaming (running old console software).
Application
Containment
• One of the goals of virtualization is segregation of applications.
 This can be done without full virtualization if the applications are compiled for the host OS.
• Oracle containers / zones:
 One kernel running (the host OS).
 Each zone has its own view of resources such as addresses, ports, networking stacks, and user accounts.
 CPU and memory are divided between zones.
IV. Operating System Components
CPU
Scheduling
• When virtualized, even a single-CPU system acts like a multiprocessor one.
• If there are not enough CPUs (more virtual CPUs than physical ones), the result is CPU overcommitment.
• VMM cycle stealing: guests do not get all the CPU cycles they expect.
• Some VMMs provide an application to run in each guest to fix the time-of-day clock and provide other integration features.
I/O
• Easier for VMMs to integrate, since I/O already has a lot of variation, but still complicated.
• Networking is complicated, as both host and guest need network access.
 The VMM can bridge the guest onto the network.
 The VMM can provide Network Address Translation (NAT).
Storage
Management
• Both boot disk and general disk access need to be provided.
• In Type 1, storage is provided by the VMM as a disk image.
• In Type 2, it is stored as files in the host OS.
• Physical-to-virtual (P-to-V): convert a native disk into the VMM's image format.
• Virtual-to-physical (V-to-P): convert the VMM's image format into a native disk format.
Live
Migration
• Moving guests between systems without interrupting access.
WHAT ARE THE STEPS OF LIVE MIGRATION?
1. The source VMM establishes a connection with the target VMM.
2. The target creates a new guest (e.g. a new VCPU).
3. The source sends the guest's read-only pages to the target.
4. The source sends the guest's read-write pages to the target.
5. Step 4 is repeated, because pages modified during the transfer (dirty pages) must be re-sent.
6. When the remaining set of dirty pages becomes very small, the VMM freezes the guest and sends the remaining state.
7. The target starts running the previously frozen guest.
V. Distributed Systems
Overview
• Distributed system: a collection of loosely coupled nodes.
• Site: the location of a machine.
• Nodes can be processors, computers, machines, or hosts.
• Nodes may be arranged client-server, peer-to-peer, or hybrid.
 Client-server: the server has resources that a client wants to use.
 Peer-to-peer: each node shares equal responsibilities.
• Communication over the network is done by message passing.
Reasons for
Distributed
Systems
WHAT ARE THE REASONS FOR DISTRIBUTED SYSTEMS?
1. Resource sharing
 Sharing files, information, printing.
 Using remote GPUs.
2. Computation speedup
 Distribute the needed processing across multiple computers.
 Load balancing: moving jobs to more lightly loaded sites.
3. Reliability: detect and recover from failures.
Design Issues of
Distributed
Systems
1. Robustness: making the system highly fault-tolerant.
 Failure detection: detecting a hardware failure is difficult, so a heartbeat protocol is used.
 Reconfiguration and recovery: when a link becomes available again, the information that was not broadcast must be sent again.
2. Transparency: the system should appear like a conventional (centralized) system.
3. Scalability: the system should easily accept new resources.
 React gracefully to increased load.
 Adding more resources helps, but it may generate indirect load.
 Data compression and deduplication cut down the storage and network capacity used.
VI. Distributed File Systems
Definitions
• Distributed File System (DFS): a file system whose clients, servers, and storage are distributed among machines.
• Service: software running on one or more machines, providing a function to a priori unknown clients.
• Server: the service software running on a single machine.
• Client: a process that can invoke a service.
Main Points
• Low level inter-machine interface for cross-machine interaction.
WHAT ARE THE WIDELY USED ARCHITECTURES?
1. Client-server model.
2. Cluster-based model.
WHAT ARE THE CHALLENGES?
1. Naming and transparency.
2. Remote file access.
3. Caching and caching consistency.
Client-
Server
Model
• Servers store files and metadata on their storage.
• Clients contact a server to request files.
• Design problems
 If the server crashes, the whole system fails.
 The server is a bottleneck (can cause problems with scalability and bandwidth).
• Examples: NFS, OpenAFS.
Cluster-
based Model
• Built to be more fault-tolerant and scalable than
client-server DFS.
• Clients connected to master metadata server
where multiple servers have portions of files.
• File chunks replicated n times.
• Examples: Google File System (GFS), Hadoop
Distributed File System (HDFS).
WHAT WAS GFS INFLUENCED BY?
1. Hardware failure should be expected routinely.
2. Most files are changed by appending new data (rather than overwriting existing data).
3. A modularized software layer (MapReduce) sits on top of GFS to carry out large-scale parallel computations.
• Hadoop framework is also stackable and modularized.
Naming and
Transparency
• Naming: the mapping between logical and physical objects.
• Multi-level mapping: an abstraction of a file that hides its low-level details.
• Transparent DFS: hides where on the network a file is stored.
• A file may be replicated multiple times, so the mapping returns the set of locations of the replicas.
• Location transparency: the file name does not reveal the file's physical location.
• Location independence: the file name does not have to be changed when the file's physical location changes.
• Most DFSs use static, location-transparent mapping for user-level names.
 OpenAFS supports file migration.
 Hadoop supports file migration, but without POSIX standards, hiding location information from clients.
 Amazon S3 provides storage on demand via APIs, placing and moving data as necessary.
WHAT ARE THE NAMING-SCHEME APPROACHES?
1. File name = host name + local name (neither location transparent nor location independent).
2. Attach remote directories to local directories.
 Gives the appearance of a coherent directory tree.
 Only previously mounted remote directories can be accessed transparently.
3. A single global name structure spanning all files.
 If a server is unavailable, some directories on different machines also become unavailable.
Remote File
Access
• Remote-service mechanism (one approach to transfers):
 A request for access is sent to the server; the server performs the access and the results are forwarded back to the user.
 RPC is the most common way of implementing remote service.
• Reduce network traffic by caching recently accessed blocks locally.
• Cache-consistency problem: keeping the cached copies consistent with the master file.
 Caching in a DFS can be viewed as network virtual memory.
Caching and
Caching
Consistency
WHAT ARE THE ADVANTAGES OF DISK CACHES?
1. Reliability.
2. Cached data kept on disk does not need to be fetched again during recovery.
WHAT ARE THE ADVANTAGES OF MAIN MEMORY CACHES?
1. Workstations can be diskless.
2. Quicker data access.
3. Performance speeds up as memories get bigger.
4. The server cache is in main memory regardless of where user caches are.
5. A single caching mechanism serves both servers and users.
Cache
Update
Policy
• Write-through: write data to the server as soon as it is placed in any cache.
 Reliable, but poor performance.
• Write-back (delayed-write): modifications are written to the server later.
 Write accesses complete quickly; some data may be overwritten in the cache before it is ever written back, so it never needs to be written at all.
 Unreliable, as unwritten data is lost if the machine crashes.
 Variation 1: scan the cache regularly and flush blocks that have changed since the last scan.
 Variation 2: write-on-close: write the data back to the server when the file is closed.
Consistency
• Client-initiated approach
 The client initiates a validity check.
 The server checks whether the local data is consistent with the master copy.
• Server-initiated approach
 The server keeps records of the files each client caches and reacts when inconsistencies are possible.
• In cluster-based DFSs
 Cache consistency is more complicated,
 due to the presence of a metadata server and replicated data chunks.
 GFS allows random writes with concurrent writers.
 HDFS allows append-only write operations.

More Related Content

Similar to IT241 - Full Summary.pdf

Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.pptshreesha16
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxakhilagajjala
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.pptFarhanaMariyam1
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OSAJAL A J
 
Introduction to Operating Systems - Mary Margarat
Introduction to Operating Systems - Mary MargaratIntroduction to Operating Systems - Mary Margarat
Introduction to Operating Systems - Mary MargaratMary Margarat
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301cpjcollege
 
OPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTESOPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTESsuthi
 
cs-intro-os.ppt
cs-intro-os.pptcs-intro-os.ppt
cs-intro-os.pptinfomerlin
 
opearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxopearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxssuser3dfcef
 
Introduction to Operating Systems.pdf
Introduction to Operating Systems.pdfIntroduction to Operating Systems.pdf
Introduction to Operating Systems.pdfHarika Pudugosula
 

Similar to IT241 - Full Summary.pdf (20)

Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.ppt
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptx
 
1_to_10.pdf
1_to_10.pdf1_to_10.pdf
1_to_10.pdf
 
Ch1 introduction
Ch1   introductionCh1   introduction
Ch1 introduction
 
ch1.ppt
ch1.pptch1.ppt
ch1.ppt
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.ppt
 
Operating System Overview.pdf
Operating System Overview.pdfOperating System Overview.pdf
Operating System Overview.pdf
 
Os1
Os1Os1
Os1
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
Operating System
Operating SystemOperating System
Operating System
 
Introduction to Operating Systems - Mary Margarat
Introduction to Operating Systems - Mary MargaratIntroduction to Operating Systems - Mary Margarat
Introduction to Operating Systems - Mary Margarat
 
Os introduction
Os introductionOs introduction
Os introduction
 
Os introduction
Os introductionOs introduction
Os introduction
 
Operating system Chapter One
Operating system Chapter OneOperating system Chapter One
Operating system Chapter One
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301
 
OPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTESOPERATING SYSTEM - SHORT NOTES
OPERATING SYSTEM - SHORT NOTES
 
cs-intro-os.ppt
cs-intro-os.pptcs-intro-os.ppt
cs-intro-os.ppt
 
opearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxopearating system notes mumbai university.pptx
opearating system notes mumbai university.pptx
 
Introduction to Operating Systems.pdf
Introduction to Operating Systems.pdfIntroduction to Operating Systems.pdf
Introduction to Operating Systems.pdf
 
OS chapter 1.pptx
OS chapter 1.pptxOS chapter 1.pptx
OS chapter 1.pptx
 

Recently uploaded

Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGSujit Pal
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...gurkirankumar98700
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 

Recently uploaded (20)

Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAG
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 

IT241 - Full Summary.pdf

  • 1. Week 2: Introduction & Operating System Services IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 2. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. What Operating System Do............................................................................................................1 II. Computer System Organization..................................................................................................2 III. Computer System Operations .....................................................................................................3 IV. Types of OS ................................................................................................................................5 V. Components of OS......................................................................................................................6 VI. Virtualization ..............................................................................................................................8 VII. Computing Environments...........................................................................................................9 VIII. Operating System Services....................................................................................................10 IX. System Calls..............................................................................................................................10 I. What Operating System Do What is an Operating System A Program that is in-between user and the computer hardware Important Goals • Providing abstractions for the user. • Using hardware in an optimized way. Kernel • A program that runs all the time on the computer (part of the operating system) Type of Programs • System Program: Comes with the operating system. (Not part of the Kernel) • Application Program: Any program that is not related to the operating system. • Middleware: software that lies between an operating system and the applications running on it and adds additional services to the applications. Note: General purpose OSes and mobile computing include a middleware
  • 3. 2 | P a g e www.MoussaAcademy.com 00201007153601 II. Computer System Organization Key Points About Computer Organization • The computer can consist of more than one CPU and other controllers which are connected through a bus. • Bus: a wire connecting between different controllers I/O Operation Overview EXPLAIN STEPS OF I/O OPERATION: 1. A program requests an I/O operation 2. The device controller is loaded to determine which operation to do 3. The controller starts to transfer data to a buffer. 4. When operation is done, the device controller informs the device that the request is done. 5. The device controller gives control to the OS 6. The device driver also returns a status with either operation done, or the device is busy. 7. The controller informs the driver that it is done by an Interrupt Interrupts • Hardware trigger interrupt by passing a signal to the CPU by the system bus EXPLAIN STEPS OF INTERRUPT HANDLING: 1. The OS preserve state of the CPU by storing the registers and the program counter. 2. Determines which type of interrupt occurred 3. Segments of code to determine the appropriate action taken by each type of interrupt. Each controller has a device driver managed by the OS
  • 4. 3 | P a g e www.MoussaAcademy.com 00201007153601 III. Computer System Operations Key Points About Computer System Operation • I/O devices and CPU run concurrently • CPU moves data from/to main memory to/from the buffers. • Device controller informs CPU the operation is done by causing an Interrupt Common Functions of Interrupts • The operating system is interrupt driven • Interrupt transfers the control to the interrupt service routine (ISR) through the interrupt vector • Address of interrupted instruction (the instruction that called/caused the interrupt) is saved. • A Trap or an Exception is a software generated interrupt I/O Structure Two Types of I/O Structures 1. After I/O starts, control returns to the user program only when I/O operation completion. 2. After I/O starts, control returns to the user program without waiting for the completion of the I/O operation. • System call: Request to the OS to allow user to wait for I/O completion. • Device-status table: contains an entry for each I/O device. Computer Startup • Firmware: Typically stored in ROM or EPROM • Bootstrap Program Loads the operating system kernel and starts the daemons • Daemons: services provided outside of the kernel Main memory can be viewed as a cache for secondary memory.
  • 5. 4 | P a g e www.MoussaAcademy.com 00201007153601 Storage Structure • Main Memory: Large Storage media that the CPU can access directly 1. Random Access 2. Volatile 3. Dynamic Random Access Memory (DRAM) • Secondary Storage: Expansion of Main Memory that provides nonvolatile capacity. • Hard disk Drives (HDD) • Non-volatile Memory (NVM) Most popular. Storage Hierarchy • Storage Systems organized in hierarchy with respect to Speed, Cost, Volatility. • Caching: Copying information into a faster system. • Device Driver: provides interface between the controller and the kernel. Direct Memory Access Structure • Direct Access: Device controller transfers blocks of data to main memory without CPU permission • An interrupt is generated per block, rather than per byte. -Tracks are divided into sectors
  • 6. 5 | P a g e www.MoussaAcademy.com 00201007153601 IV. Types of OS 1. Multiprogramming (Batch System) • Process after process 2. Multitasking (Time Sharing) • CPU switches jobs so frequently that the user can interact while the jobs are running. • Low Response Time (< 1 second) • Swapping moves processes that do not fit in the memory. • Virtual Memory allows the execution of processes that are not completely available in the memory. 3. Dual-Mode Operation • There are two modes: 1. User Mode. 2. Kernel Mode. • Mode bit: 0 or 1 to specify current mode. • Some instructions are only executed in Kernel Mode
  • 7. 6 | P a g e www.MoussaAcademy.com 00201007153601 V. Components of OS 1. Timer • Timers are needed to prevent infinite loops or processes using too many resources. • Timers interrupt the computer after some time period. • A counter is decremented by the physical clock (which is set by the OS in kernel mode) • When counter is set to zero, an interrupt is generated. • Control is given to the operating system and process scheduling is done. 2. Process Management • Process: • A program in execution • Unit of work within the system • A Program is a passive entity • A Process is an active entity • Single-threaded process has one program counter • Where each instruction is done sequentially • Multi-threaded process has one program counter for each thread. • Process Management Activities: 1. Process synchronization 2. Inter-process communication (IPC) 3. Deadlock handling 3. Memory Management • Memory Management Activities 1. Keeps track of which parts of memory is being used 2. Allocating and deallocation memory. We will study it in future chapters
  • 8. 7 | P a g e www.MoussaAcademy.com 00201007153601 4. File-System Management • Files: logical storage unit • Directories (folders): organize files • Access Control: Who can access what files & folders • File-System Management Activities: 1. Manipulation of files & directories 2. Mapping files onto secondary storage 3. Backup 4. Caching WHAT IS CACHING? WHY WE DO IT? • Information in use is copied from slower mediums to faster mediums EXPLAIN STEPS OF USING CACHING 1. When data is being accessed, we check for the data at faster mediums. • If the data exists then, we can get the data directly from the cache. • If it does not exist, data is retrieved and copied to cache and used there. 5. I/O Subsystem • I/O Management Activities 1. Manage device drivers Cache Coherency: making sure that the data in the cache is consistent. Cache can be flawed if the data that we want to cache is larger than the total size of cache.
VI. Virtualization
What is Virtualization
• Allows operating systems to run applications within other operating systems.
• Emulation: when the source CPU type is different from the target CPU type.
• Virtualization: the guest OS is natively compiled for the same CPU type as the host.
Computer System Architecture
• Multiprocessors (also known as parallel systems, tightly coupled systems)
  • Asymmetric: each processor is assigned a specific task.
  • Symmetric: each processor performs all tasks.
• Single-core system: 2+ processors, each with 1 core.
• Multi-core system: 1 processor with 2+ cores.
Host OS: the actual OS running on the machine.
Guest OS: the virtual OS running inside the virtual machine.
VII. Computing Environments
Traditional
• Stand-alone general-purpose machines.
• Portals provide web access to internal systems.
• Network computers (thin clients) provide web access to internal systems.
Client Server
• Compute-server system: provides an interface to request services.
• File-server system: provides an interface to send/receive files.
Cloud Computing
• Extension of virtualization.
WHAT ARE THE TYPES OF CLOUDS?
• Public cloud: available via the internet to anyone.
• Private cloud: run by a company for its own use.
• Hybrid cloud: includes both public and private components.
WHAT DOES CLOUD COMPUTING PROVIDE TO US?
• Software as a Service (SaaS): one or more applications available via the internet.
• Platform as a Service (PaaS): a software stack ready for applications to use, available via the internet.
• Infrastructure as a Service (IaaS): servers or storage made available over the internet.
VIII. Operating System Services
Services Provided by OS to User
MENTION SERVICES/BENEFITS THAT THE OS PROVIDES
• User Interface
• Program Execution
• I/O Operations
• File-system manipulation
• Process Communication
• Error detection
• Resource allocation
• Logging
• Registry
• Background services (also known as services, subsystems, or daemons)
• Protection and Security
  o Protection: ensuring that access to the system is controlled.
  o Security: authentication (passwords).
• Programming language support
  o Compilers, assemblers, debuggers, interpreters.
IX. System Calls
What Are System Calls
• Interfaces to OS services provided by the operating system.
System Calls Implementation
• A table of system calls is maintained by the system-call interface.
• The system-call interface invokes the intended system call and returns its result.
System Call Parameter Passing
EXPLAIN WAYS WE CAN PASS PARAMETERS IN SYSTEM CALLS
1. Simplest: passing the parameters in registers.
2. Parameters stored in a block or table in memory; the block's address is passed in a register.
3. Parameters pushed onto the stack.
Note: the 2nd and 3rd methods do not limit the number of parameters being passed.
Notes on System Calls
• Apps compiled on one operating system are usually not executable on other operating systems (apps can also be built as multi-operating-system applications).
• Each OS has its own unique system calls.
• Application Binary Interface (ABI): the architecture-level equivalent of an API.
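As a small illustration of how a user program reaches a system call, the sketch below (Linux/glibc assumed; not part of the original notes) makes the same request twice: once through the libc wrapper write() and once through the raw system-call interface, where parameters are passed in registers as in method 1 above.

/* Minimal sketch, assuming Linux and glibc. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void) {
    const char *msg = "hello via write()\n";

    /* Library wrapper: libc places the parameters and traps into the kernel. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* Raw interface: invoke the same kernel service by its syscall number. */
    syscall(SYS_write, STDOUT_FILENO, "hello via syscall()\n", 20);

    return 0;
}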
  • 12. Week 3: Process & Threads & Concurrency IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 13. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Process Concept..............................................................................................................................2 II. Process Scheduling .....................................................................................................................3 III. Operations on Process.................................................................................................................4 IV. Inter Process Communication (IPC)...........................................................................................4 V. Multicore Programming..............................................................................................................6 VI. Multithreading Architecture........................................................................................................7
I. Process Concept
What is a process
• A program in execution.
• Process execution is sequential.
Process Layout
WHAT ARE THE PARTS OF THE PROCESS:
1. Executable code (Text Section)
2. Data Section (global variables)
3. Heap Section (dynamically allocated memory)
4. Stack Section (temporary data, parameters, local variables)
Process States
• New
• Ready
• Running
• Waiting: waiting for some event to occur
• Terminated
Process Control Block (PCB)
WHAT INFORMATION DOES THE PCB CONTAIN:
1. Process State
2. Program Counter (contains the address of the next instruction)
3. Registers
4. Scheduling Information (priority, pointers to scheduling queues)
5. Memory-Management Information (page tables, segment tables)
6. Accounting Information
7. I/O Status Information (allocated I/O devices, open files)
The PCB is how a process is represented in the OS; also known as a Task Control Block.
  • 15. 3 | P a g e www.MoussaAcademy.com 00201007153601 II. Process Scheduling Process Scheduler • Selects which process to execute next by the CPU WHAT IS THE GOAL OF THE PROCESS SCHEDULER: • Maximize CPU Usage. • Maintains Scheduling Queues  Ready Queue  Wait Queue Context Switch WHAT ARE THE STEPS OF CONTEXT SWITCH 1. Saving the state of the old process 2. Loading the state of the new process 3. Context switch is an overhead 4. Some hardware provides multiple context switch Selects which process to execute by the CPU next
  • 16. 4 | P a g e www.MoussaAcademy.com 00201007153601 III. Operations on Process Process Creation • Parent Process: Process that creates another process • Child Process: Process that gets created • On Execution:  Parents continue to execute concurrently with its children.  Parent waits until all children are terminated. • Address Space of New Processes  Duplicate of the parent process New Program Process Termination WHAT ARE THE REASONS FOR TERMINATION OF CHILD PROCESS: • High usage of resources. • Task assigned to the child is not required anymore. • Parent process is terminating IV. Inter Process Communication (IPC) Process Creation • A process can be either independent or cooperating. REASONS FOR PROCESS COOPERATION: 1. Information Sharing 2. Computation speedup 3. Modularity Shared Memory • An area that processes share information • Under control of the processes not the OS • Has Synchronizing problem Models of IPC: - Shared Memory - Message Passing A process must have a parent Same program, data as parent
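A minimal sketch of the parent/child relationship described above (POSIX assumed; not part of the original notes): the parent creates a child with fork(), the child replaces its image with a new program via exec(), and the parent waits for the child to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* duplicate the parent's address space */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child process */
        execlp("ls", "ls", "-l", (char *)NULL);  /* load a new program image */
        perror("exec");              /* reached only if exec fails */
        exit(1);
    } else {                         /* parent process */
        wait(NULL);                  /* parent waits until the child terminates */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}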
Message Passing
• Processes communicate with each other with no shared variables.
• Operations:
  - Send
  - Receive
• Message size can be fixed or variable.
• Has implementation issues.
Direct Communication
• Processes must name each other explicitly.
WHAT ARE THE PROPERTIES OF A DIRECT COMMUNICATION LINK:
1. Established automatically.
2. Associated with exactly one pair of processes.
3. Between each pair there exists only one link.
4. Usually bidirectional (may be unidirectional).
Indirect Communication
• Messages are sent to and received from mailboxes (ports).
WHAT ARE THE PROPERTIES OF AN INDIRECT COMMUNICATION LINK:
1. Established only if the processes share a common mailbox.
2. May be associated with many processes.
3. Unidirectional or bidirectional.
Threading
A thread is a lightweight unit of execution within a process.
WHAT ARE THE BENEFITS OF THREADING:
1. Responsiveness
2. Resource Sharing
3. Economy (threads are cheap to create)
4. Scalability
V. Multicore Programming
Multicore Challenges
WHAT ARE THE MULTICORE CHALLENGES:
1. Dividing activities
2. Balance
3. Data splitting
4. Data dependency
5. Testing and debugging
Parallelism and Concurrency
• Parallelism: performing more than one task at the same time.
• Concurrency: allowing more than one task to make progress.
• Data parallelism: distributing subsets of the same data across multiple cores.
• Task parallelism: distributing threads (tasks) across multiple cores.
Types of Threads
• User Threads: managed by a thread library (e.g., Pthreads, Windows Threads).
• Kernel Threads: managed by the kernel; most general-purpose OSes use them.
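A small sketch of the data parallelism idea mentioned above (Pthreads assumed; the array and thread count are made up for illustration): the same work, summing a slice of an array, is distributed across two threads that can run on different cores.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[2];

static void *sum_slice(void *arg) {
    long id = (long)arg;                 /* 0 sums the first half, 1 the second */
    for (int i = (int)id * N / 2; i < ((int)id + 1) * N / 2; i++)
        partial[id] += data[i];
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_slice, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("total = %ld\n", partial[0] + partial[1]);
    return 0;
}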
VI. Multithreading Architecture
One to One
• Each user thread maps to one kernel thread.
• Creating a user thread results in creating a kernel thread.
• More concurrency than many-to-one.
• The number of threads per process may be restricted.
Many to One
• Many user threads map to one kernel thread.
• One thread blocking causes all of them to block.
Many to Many
• Many user threads map to many kernel threads.
• Allows the OS to create a sufficient number of kernel threads.
Threading Issues
• Semantics of system calls (fork and exec)
• Synchronous and asynchronous signal handling
• Thread cancellation
• Thread-local storage
• Scheduler activations
  • 20. Week 4: CPU Scheduling IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 21. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Basic Concepts ...............................................................................................................................2 II. Scheduling Algorithms ...............................................................................................................3 III. Thread Scheduling ......................................................................................................................4 IV. Multiple-Processor Scheduling...................................................................................................4
I. Basic Concepts
Main Points
• Best CPU utilization is obtained with multiprogramming.
• Process execution consists of a cycle of CPU execution and I/O wait (CPU bursts and I/O bursts).
• A CPU burst is followed by an I/O burst.
CPU Scheduler
• Selects which process from the ready queue will execute next.
WHAT ARE THE DECISIONS ASSOCIATED WITH CPU SCHEDULING:
1. Running to Waiting state
2. Running to Ready state
3. Waiting to Ready state
4. Termination
WHAT ARE THE TYPES OF SCHEDULING:
1. Non-preemptive: the process keeps running until it has finished.
2. Preemptive: the process can be kicked off the CPU to let another process execute (can result in a race condition).
Dispatcher
• Gives control of the CPU to the selected process:
  - Switching context
  - Switching mode
  - Jumping to the proper location to restart the program
• Dispatch latency: the time the dispatcher takes to stop one process and start another.
Scheduling Criteria
• CPU utilization: keeping the CPU as busy as possible (efficiency).
• Throughput: amount of work done per unit time.
• Turnaround time: amount of time needed to execute a process.
• Waiting time: amount of time the process waits in the ready queue.
• Response time: amount of time between the request and the first response of the process.
II. Scheduling Algorithms
1. First Come First Served (FCFS)
• The most basic algorithm.
• Convoy effect: short processes stuck behind a long process.
2. Shortest Job First (SJF)
• Uses the length of the next CPU burst to select which job to execute first.
• Gives the minimum average waiting time.
• Difficulty: knowing the length of the next CPU request.
• The preemptive version is called Shortest Remaining Time First.
• Determining the length of the next CPU burst:
  - Ask the user.
  - Estimate it.
Estimating Length of Next CPU Burst
• Can be done using the lengths of previous CPU bursts with exponential averaging, as sketched below.
3. Round Robin
• Each process gets a time quantum (a fixed period of time), then the process is preempted and added to the ready queue.
• If the quantum is large, it behaves like FIFO.
• If the quantum is small, there are too many context switches and the overhead becomes too high.
Priority Scheduling
• A priority number is associated with each process.
• The process with the highest priority executes first.
• Can cause starvation (a process never executing, waiting indefinitely).
• Starvation can be solved using aging: as time progresses, the priority of a waiting process increases.
4. Multilevel Queue
• Has separate queues, each with its own priority.
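The exponential-averaging estimate mentioned for SJF can be written as tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n). The sketch below is illustrative only; the burst lengths and the initial guess are made-up values, not part of the original notes.

#include <stdio.h>

int main(void) {
    double alpha = 0.5;                       /* weight of the most recent burst */
    double tau = 10.0;                        /* initial guess for the first burst */
    double bursts[] = {6, 4, 6, 4, 13, 13};   /* observed CPU burst lengths */

    for (int n = 0; n < 6; n++) {
        printf("predicted %.2f, actual %.0f\n", tau, bursts[n]);
        tau = alpha * bursts[n] + (1 - alpha) * tau;   /* update the estimate */
    }
    return 0;
}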
  • 24. 4 | P a g e www.MoussaAcademy.com 00201007153601 III. Thread Scheduling Main Points • Threads are scheduled not processes • Many-to-one and Many-to-Many models, thread library schedules user-level threads to run on LWP  Known as process-contention scope (PCS)  Done by setting a priority queue by the programmer. • Kernel threads scheduled onto available CPU is system-contention scope (SCS). IV. Multiple-Processor Scheduling Main Points • Symmetric multiprocessing (SMP) is where each process is self-scheduling • All threads in a common ready queue • Each processor may have its own private queue of threads Multi-Core Processors • Faster and consumes less power • Takes advantages of memory stalls to make progress
Multi-Threaded Multi-Core System
• Each core has more than one hardware thread (Intel refers to this as hyperthreading).
• Chip multithreading (CMT): assigns each core multiple hardware threads.
WHAT ARE THE LEVELS OF SCHEDULING:
1. The OS decides which software thread to run on a logical CPU.
2. Each core decides which hardware thread to run on the physical core.
Load Balancing
• Load balancing: attempting to keep the workload evenly distributed across processors.
• Push migration: pushing tasks from overloaded processors to less busy processors.
• Pull migration: idle processors pull waiting tasks from busy processors.
NUMA and CPU Scheduling
• If the OS is NUMA-aware, it assigns memory close to the CPU on which the thread is currently running.
  • 26. Week 5: Synchronization tools & Synchronization Examples IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 27. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Critical Section Problem.................................................................................................................2 II. Peterson’s Solution .....................................................................................................................3 III. Hardware Synchronization..........................................................................................................3 IV. Mutex Locks ...............................................................................................................................3 V. Semaphores.................................................................................................................................4 VI. Synchronization Classical Problem ............................................................................................5
I. Critical Section Problem
Main Points
• Processes run concurrently.
• Concurrent access to shared data may lead to data inconsistency.
• Maintaining data consistency requires mechanisms to ensure orderly cooperation of processes.
• Critical Section: the code segment that changes shared values in memory, which may result in data inconsistency or other synchronization problems.
• To solve this: each process must ask permission to enter its critical section.
WHAT ARE THE STEPS TO ENTER CRITICAL SECTIONS
1. Ask permission in the entry section.
2. After the critical section, an exit section is executed.
3. The remainder section follows.
Critical Section Requirements
1. Mutual Exclusion: if a process is executing in its critical section, no other process can be executing in its own critical section.
2. Progress: the decision of which process enters next cannot be postponed indefinitely if no process is currently in its critical section.
3. Bounded Waiting: there is a limit on how many times other processes may enter their critical sections after a process has requested entry and before that request is granted.
Race Condition
• Several processes access and modify the same data concurrently and the outcome depends on the order of access (for example, forking processes that end up with two identical IDs).
  • 29. 3 | P a g e www.MoussaAcademy.com 00201007153601 II. Peterson’s Solution Peterson’s Solution • Two process model • Sharing a memory in which there is a turn variable that indicates whose turn is to enter the critical section. • A flag array is used to indicate if a process is ready to enter the critical section Analysis on Peterson’s Solution • Peterson’s solution is not guaranteed to work as compilers may reorder applications that have no dependencies. • Inconsistency in multithreaded environments III. Hardware Synchronization Main Points • Many systems provide support for implementing critical section code. • Uniprocessors could disable interrupts.  Code would execute without preemption.  Generally, too inefficient on multiprocessor systems Hardware Instructions • Special instructions that allow to test and modify content of a word atomically. 1. Test and Set Instruction 2. Compare and Swap Instruction IV. Mutex Locks Atomic Variables • Uninterruptible update on basic data types (int and Booleans) Mutex Lock • Previous solutions are complicated. • Simplest is just a mutex lock ( variable indicating if it is locked or not) • Protecting a critical section by:  Acquire() a lock (locking)  Release() the lock (unlocking) • Acquire() and release() must be atomic.  Usually implemented via hardware instructions This solution requires busy waiting Lock called spinlock Busy Waiting: Constantly waiting and checking for a condition to happen before resuming execution
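A minimal sketch of Peterson's solution for two threads (i = 0 or 1), not part of the original notes. As the analysis above warns, modern compilers and CPUs may reorder these memory accesses, so this illustrates the idea rather than a production technique.

#include <stdbool.h>

static volatile bool flag[2] = {false, false};  /* flag[i]: thread i wants to enter */
static volatile int turn = 0;                   /* whose turn it is to enter        */

void enter_critical(int i) {
    int other = 1 - i;
    flag[i] = true;          /* announce interest                            */
    turn = other;            /* politely give the turn to the other thread   */
    while (flag[other] && turn == other)
        ;                    /* busy-wait while the other thread is
                                interested and it is its turn                */
}

void exit_critical(int i) {
    flag[i] = false;         /* no longer interested */
}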
V. Semaphores
Semaphore
• A synchronization tool that provides ways for processes to synchronize their activities.
• Can only be accessed via two atomic operations:
  - wait()
  - signal()
WHAT ARE THE TYPES OF SEMAPHORES
1. Counting semaphore: an integer value with no restriction on its range.
2. Binary semaphore: 0 or 1 (same as a mutex lock).
Note: a counting semaphore can be implemented using binary semaphores.
Semaphore Implementation
• Must guarantee that no two processes can execute wait() or signal() on the same semaphore at the same time.
• Could still have busy waiting in the critical section:
  - The implementation code is short.
  - There is little busy waiting if the critical section is rarely occupied.
• Applications may spend a lot of time in critical sections, so busy waiting is not a good solution for them.
Semaphore Implementation Without Busy Waiting
• Each semaphore has an associated waiting queue.
• Each entry has a value and a pointer to the next record.
• There are two operations:
  - block(): place the process invoking it on the appropriate waiting queue.
  - wakeup(): remove one of the processes from the waiting queue and place it in the ready queue.
VI. Synchronization Classical Problems
Types
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
1. Bounded-Buffer Problem
• N buffers, each can hold one item.
• mutex semaphore initialized to 1.
• full initialized to 0.
• empty initialized to N.
2. Readers and Writers Problem
• Problem statement: allow multiple readers to read at the same time, but only one writer may access the data set at a time.
• A data set is shared among a number of concurrent processes:
  - Readers: only read the data set.
  - Writers: can both read and write.
• rw_mutex initialized to 1.
• mutex initialized to 1.
• read_count initialized to 0.
Readers and Writers Problem Variations
• Second variation: once a writer is ready to write, no newly arriving reader is allowed to read.
• Both the first and the second variation may lead to starvation.
• The problem is solved on some systems by providing reader-writer locks.
3. Dining Philosophers Problem
• N philosophers sit at a round table with a bowl of rice in the middle.
• They occasionally try to pick up 2 chopsticks to eat from the bowl (a philosopher needs both chopsticks to eat).
Kernel Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor systems.
• On multiprocessor systems, spinlocks are used.
• Provides dispatcher objects which may act as mutexes, semaphores, events, and timers.
  - Events: similar to a condition variable.
  - Timers: notify threads when time expires.
• Dispatcher objects are either in the signaled state (object available) or the non-signaled state (a thread acquiring it will block).
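A sketch of the bounded-buffer solution using the semaphore initializations listed above (POSIX semaphores assumed; buffer size and function names are illustrative, not from the original notes): mutex protects the buffer, empty counts free slots, full counts filled slots.

#include <pthread.h>
#include <semaphore.h>

#define N 10
static int buffer[N];
static int in = 0, out = 0;
static sem_t mutex, empty, full;

void produce(int item) {
    sem_wait(&empty);            /* wait for a free slot       */
    sem_wait(&mutex);            /* lock the buffer            */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);            /* unlock the buffer          */
    sem_post(&full);             /* signal: one more full slot */
}

int consume(void) {
    sem_wait(&full);             /* wait for a filled slot     */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);            /* signal: one more free slot */
    return item;
}

/* Somewhere in main(): sem_init(&mutex, 0, 1); sem_init(&empty, 0, N);
 * sem_init(&full, 0, 0); then start producer/consumer threads. */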
  • 32. Week 6: Deadlocks IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 33. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. System Model.................................................................................................................................2 II. Deadlock Characterization..........................................................................................................2 III. Methods for Handling Deadlock.................................................................................................2 IV. Deadlock Prevention...................................................................................................................3 V. Deadlock Avoidance...................................................................................................................3 VI. Avoidance Algorithms................................................................................................................4 VII. Deadlocks Detection...................................................................................................................5 VIII. Recovery From Deadlock........................................................................................................5
I. System Model
Main Points
• A system consists of resources (CPU cycles, memory space, I/O devices).
• A process utilizes a resource as follows: request, use, release.
II. Deadlock Characterization
Main Points
WHAT ARE THE CONDITIONS THAT MAKE DEADLOCK ARISE:
1. Mutual Exclusion: only one process can use a resource at a time.
2. Hold and Wait: a process holding a resource is waiting to acquire additional resources held by other processes.
3. No Preemption: resources can only be released voluntarily.
4. Circular Wait: P0 is waiting for P1, P1 is waiting for P2, and P2 is waiting for P0.
Resource-Allocation Graph
• Request edge: a directed edge from P to R (where P is a process and R is a resource type).
• Assignment edge: a directed edge from R to P.
Basic Facts on Cycles and Deadlock
• If the graph has no cycles, then there is no deadlock.
• If the graph contains a cycle:
  - If there is only one instance per resource type, there is a deadlock.
  - If there are several instances per resource type, there is a possibility of deadlock.
III. Methods for Handling Deadlock
Methods
1. Ensuring the system will never enter a deadlock state:
  - Deadlock prevention
  - Deadlock avoidance
2. Allowing the system to enter a deadlock state and then recovering.
3. Ignoring the problem and pretending that deadlocks never occur in the system.
IV. Deadlock Prevention
Main Points
• Invalidate one of the four necessary conditions for deadlock.
• Mutual Exclusion: not required for sharable resources, but must hold for non-sharable resources.
• Hold and Wait: must guarantee that when a process requests a resource, it does not hold any other resources.
  - Requires allocating all of the resources needed by the process before execution (may cause starvation).
• No Preemption:
  - Release all resources of a process if it requests another resource that cannot be immediately allocated.
  - Preempted resources are added to the list of resources for which the process is waiting.
  - The process is restarted only when it can regain its old resources as well as the new ones it was requesting.
• Circular Wait:
  - Impose a total ordering of all resource types and require each process to request resources in an increasing order of enumeration.
V. Deadlock Avoidance
Main Points
• Requires the system to have additional (a priori) information.
• The simplest and most useful model requires each process to declare the maximum number of resources it may need.
• The system dynamically examines the resource-allocation state to ensure that a circular wait can never happen.
• The resource-allocation state is defined by the number of available and allocated resources and by the maximum demands of the processes.
Safe State
• When a process requests a resource, the system decides whether the allocation leaves it in a safe state.
• Safe state: there exists an ordering of the processes such that for each process Pi, the resources Pi may still request can be satisfied by the currently available resources plus the resources held by the processes Pj with j < i.
• If Pi's resource needs are not immediately available, Pi can wait until the processes Pj have finished.
• When Pi terminates, Pi+1 can obtain the needed resources.
Basic Facts on Safe State and Deadlock
• If the system is in a safe state, then there is no deadlock.
• If the system is not in a safe state, a deadlock may happen.
• Avoidance: ensure that the system never enters an unsafe state.
  • 36. 4 | P a g e www.MoussaAcademy.com 00201007153601 VI. Avoidance Algorithms Main Points Single Instance Multiple Instances • Use a resource-allocation graph • Banker’s Algorithm Resource-Allocation Graph Scheme • Claim edge converts to request edge when a process requests a resource. • Request edge is converted to an assignment edge when the resource is allocated to the process. • Resources must be claimed a priori in the system. Resource-Allocation Graph Algorithm • Request can be granted only if converting the request edge into an assignment edge that does not form a cycle. Banker’s Algorithm • Used for multiple instances of resource types. • Each process must a priori claim maximum use Data Structures for Banker’s Algorithm • Available (vector): type of resource available • Max (matrix): maximum number of resources needed by a specific process (for each resource type) • Allocation (matrix): keeps track of the currently allocated types for each process. • Need (matrix): keeps track of how much each process need for each resource type.
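A compact sketch of the Banker's safety check over the structures listed above (Available, Allocation, Need), not part of the original notes; the sizes NPROC/NRES are illustrative. It returns true when a safe sequence exists, following the usual formulation.

#include <stdbool.h>

#define NPROC 5
#define NRES  3

bool is_safe(int available[NRES],
             int allocation[NPROC][NRES],
             int need[NPROC][NRES]) {
    int work[NRES];
    bool finished[NPROC] = {false};

    for (int r = 0; r < NRES; r++) work[r] = available[r];    /* Work = Available */

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)                    /* Need[p] <= Work ? */
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                                    /* pretend p finishes */
                for (int r = 0; r < NRES; r++) work[r] += allocation[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;   /* no process can proceed: unsafe */
    }
    return true;                          /* all processes can finish: safe */
}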
VII. Deadlock Detection
Single Instance of Each Resource Type
• Maintain a wait-for graph (nodes are processes).
• Periodically invoke an algorithm that searches for a cycle in the graph; if a cycle exists, there is a deadlock (the algorithm requires on the order of n^2 operations, where n is the number of vertices).
Several Instances of a Resource Type
• Available: a vector of length m indicating the number of available resources of each type.
• Allocation: an n x m matrix defining the number of resources of each type currently allocated to each process.
• Request: an n x m matrix indicating the current request of each process.
Detection Algorithm
• The full algorithm itself is not very important for this course.
Detection Algorithm Usage
• If the detection algorithm is invoked arbitrarily, there may be many cycles in the graph, and we cannot tell which of the deadlocked processes caused the deadlock.
VIII. Recovery From Deadlock
Process Termination
WHAT ARE THE POSSIBLE WAYS TO RECOVER FROM DEADLOCK?
1. Abort all deadlocked processes.
2. Abort one process at a time until the deadlock cycle is eliminated.
Resource Preemption
WHAT ARE THE STEPS FOR RESOURCE PREEMPTION?
• Selecting a victim.
• Rollback: returning to some safe state.
• Starvation: the same process may always be picked as the victim.
  • 38. Week 7: Main Memory IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 39. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Memory Introduction......................................................................................................................2 II. Contiguous Allocation ................................................................................................................3 III. Paging .........................................................................................................................................4 IV. Page Table Structure...................................................................................................................5 V. Swapping.....................................................................................................................................6
  • 40. 2 | P a g e www.MoussaAcademy.com 00201007153601 I. Memory Introduction Main Points • Main memory and registers are the only storage CPU can directly access • Register Access is done in one CPU clock. • main memory can take many cycles which causes a stall. • Cache is between main memory and CPU registers. Protection • a process can only access the addresses in its address space.  HOW?  use base and limit registers • CPU checks every memory access to be  Memory >= base  Memory < base + limit Memory Binding WHAT ARE THE STAGES THAT DATA BINDINGS CAN HAPPEN? 1. Compile Time: • Memory locations known • Generates Absolute code. 2. Load Time: • Locations are unknown at compile time • Generates relocatable code. 3. Execution Time: at run time (Need hardware support). Address Space • Logical Address(Virtual Address) is generated by the CPU. • Physical Address: address that is seen by the memory unit. • WHEN ARE THEY EQUAL? 1. compile time 2. load-time address-binding schemes. • WHEN ARE THEY DIFFERENT? 1. execution-time address-binding scheme.
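A one-line sketch of the base/limit check described above (the function and type names are illustrative, not from the original notes): every CPU-generated address must satisfy base <= address < base + limit, otherwise the hardware traps to the OS.

#include <stdbool.h>
#include <stdint.h>

bool address_is_legal(uint32_t addr, uint32_t base, uint32_t limit) {
    return addr >= base && addr < base + limit;   /* outside the range => trap */
}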
  • 41. 3 | P a g e www.MoussaAcademy.com 00201007153601 Memory Management Unit (MMU) • MMU maps logical address dynamically.  Logical Address = User Address + relocation (base) register  This is called Execution-time binding (binding happens at each line of code). II. Contiguous Allocation Main Points • Main memory usually consists into two partitions.  OS, in low memory with interrupt vector.  User processes in high memory. • Base register contains value of the smallest physical address. • Limit register contains the range of logical addresses Variable Partition • Hole: block of available memory scattered throughout memory WHAT IS THE INFORMATION MAINTAINED BY THE OS: 1. Allocated partitions. 2. Free Partitions (holes).
Dynamic Storage Allocation
• First fit: allocate the first hole that is big enough.
• Best fit: allocate the smallest hole that is big enough (must search the entire list).
• Worst fit: allocate the largest hole (must search the entire list).
• First fit and best fit are better than worst fit in terms of speed and storage utilization (a first-fit sketch follows below).
Fragmentation
• External fragmentation: free memory scattered between processes in holes too small to use.
• Internal fragmentation: memory left unused inside a block allocated to a process.
• With first fit, for every N allocated blocks, about 0.5 N blocks are lost to fragmentation.
Compaction
• Reduce external fragmentation by compaction:
  - Shuffle memory contents to place all free memory together in one large block.
  - Compaction is possible only if relocation is dynamic and done at execution time.
III. Paging
Main Points
WHAT ARE THE MAIN STEPS NEEDED IN PAGING?
1. Divide physical memory into frames.
2. Divide logical memory into pages.
3. Keep track of all free frames.
4. Set up a page table to translate logical to physical addresses.
• To run a program of N pages, we need to find N free frames and load the program.
• Paging still has internal fragmentation.
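A small sketch of first-fit allocation over a list of free holes, mentioned above; the hole list structure and function name are simplified and illustrative, not part of the original notes.

#include <stddef.h>

struct hole { size_t start, size; struct hole *next; };

/* Return the start address of the first hole big enough, or (size_t)-1. */
size_t first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->size >= request) {        /* first hole that fits wins */
            size_t addr = h->start;
            h->start += request;         /* shrink the hole from the front */
            h->size  -= request;
            return addr;
        }
    }
    return (size_t)-1;                   /* no hole large enough */
}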
Address Translation
HOW IS AN ADDRESS DESCRIBED IN TERMS OF PAGES:
1. Page number (p): used as an index into a page table, which contains the base address of each page in physical memory.
2. Page offset (d): combined with the base address to define the physical memory address.
For a logical address of m bits and a page size of 2^n bytes, the page number uses the high m - n bits and the offset uses the low n bits.
IV. Page Table Structure
Page Table Implementation
• Page-table base register (PTBR): points to the page table.
• Page-table length register (PTLR): indicates the size of the page table.
• Every data/instruction access requires 2 memory accesses (one for the page table and one for the data).
• This problem can be resolved using a special fast-lookup hardware cache called the translation look-aside buffer (TLB), also called associative memory.
Translation Look-aside Buffer
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry.
• On a TLB miss, the translation is loaded into the TLB for faster access next time.
  - Some entries can be wired down for permanent fast access.
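A tiny sketch of splitting a logical address into page number and offset for a page size of 2^n bytes (here n = 12, i.e. 4 KB pages); the sample address is made up for illustration.

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12                      /* n: page size = 4096 bytes */

int main(void) {
    uint32_t logical = 0x12345;
    uint32_t page    = logical >> OFFSET_BITS;                /* p = high m-n bits */
    uint32_t offset  = logical & ((1u << OFFSET_BITS) - 1);   /* d = low n bits    */
    printf("page %u, offset %u\n", page, offset);
    return 0;
}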
Effective Access Time
• Hit ratio: the percentage of times that a page number is found in the TLB (a worked example follows below).
Memory Protection
• Add a protection bit to each frame (read or read-write permission).
• A valid-invalid bit is attached to each entry in the page table:
  - Valid: the associated page is in the process's logical address space.
  - Invalid: the page is not in the process's logical address space.
• Any violation results in a trap. The PTLR can also be used for this check.
Inverted Page Table
• One entry for each real page (frame) of memory.
• Decreases the memory needed to store page tables.
• Using a hash table, we can limit the search to one (or a few) page-table entries.
V. Swapping
Main Points
• Backing store: a fast disk large enough to accommodate copies of memory images.
• Roll out, roll in: a swapping variant used for priority-based scheduling algorithms (a lower-priority process is swapped out so a higher-priority process can be loaded).
• The major part of swap time is transfer time.
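A worked sketch of effective access time with a TLB, tied to the hit ratio defined above. With hit ratio h, one common formulation is EAT = h * (tlb + mem) + (1 - h) * (tlb + 2 * mem); the timings below are illustrative numbers, not measurements from the notes.

#include <stdio.h>

int main(void) {
    double tlb = 10.0, mem = 100.0;   /* nanoseconds, example values  */
    double h = 0.99;                  /* TLB hit ratio                */
    double eat = h * (tlb + mem) + (1.0 - h) * (tlb + 2.0 * mem);
    printf("EAT = %.1f ns\n", eat);   /* 0.99*110 + 0.01*210 = 111.0  */
    return 0;
}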
  • 45. Week 9: Virtual Memory IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 46. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Virtual Memory..............................................................................................................................2 II. Demand Paging...........................................................................................................................2 III. Allocation of Frames...................................................................................................................5
I. Virtual Memory
Definitions
• Virtual Memory: separation of logical memory from physical memory.
• Virtual Address Space: the logical view of how a process is stored in memory.
Main Points
• A program can execute even when it is only partially loaded into memory.
• Each program takes less memory, so more programs can run at the same time:
  - Increases CPU utilization and throughput with no increase in response time or turnaround time.
Implementation
Q: MENTION THE TWO METHODS TO IMPLEMENT VIRTUAL MEMORY?
1. Demand Paging
2. Demand Segmentation
II. Demand Paging
Main Points
• Could bring an entire process into memory at load time, or bring a page into memory only when it is needed:
  - Less unnecessary I/O.
  - Less memory needed.
• Similar to paging combined with swapping.
• Lazy swapper: never swaps a page into memory unless it is needed.
• Pager: a swapper that deals with pages.
• If the pages needed are already in memory:
  - No different from normal (non-demand) paging.
• Else:
  - Need to detect the fault and load the page into memory, without needing to change the program's code.
• Valid/invalid bit: V means the page is in memory, I means it is not in memory.
Page Fault
WHAT HAPPENS WHEN AN INSTRUCTION RESULTS IN A PAGE FAULT?
1. A trap is generated to the OS.
2. User registers and process state are saved.
3. The OS checks a table: if the reference is invalid, abort; otherwise the page is simply not in memory.
4. If the page is not present in memory, find a free frame.
5. Swap the page into the frame via a scheduled disk operation.
6. Set the valid/invalid bit to V (valid).
7. Restore the user registers and process state.
8. Restart the instruction that caused the page fault.
Page Replacement
• Happens when there are no free frames in memory.
• We want to minimize the number of page faults.
• Use a modify (dirty) bit so that only modified pages are written back to disk.
WHAT ARE THE STEPS FOR PAGE REPLACEMENT?
1. Find the location of the desired page on disk.
2. Find a free frame.
3. If no free frame is found, use a page-replacement algorithm to select a victim frame.
4. Bring the page into the (newly) free frame and update the page and frame tables.
5. Restart the instruction that caused the trap (page fault).
Types of Replacement Algorithms
1. FIFO (First-In-First-Out)
2. Optimal
3. LRU (Least Recently Used)
1. FIFO
• First-in-first-out replacement: the oldest page in memory is the victim (a counting sketch follows below).
• Belady's Anomaly: adding more frames can result in more page faults.
2. Optimal
• Replace the page that will not be used for the longest period of time.
• Not implementable, since we cannot read the future.
• Used to measure how well other algorithms perform.
3. Least Recently Used (LRU)
• Uses past knowledge.
• Replaces the page that has not been used for the longest time.
• In the reference-string example from the slides this gives 12 page faults (better than FIFO).
Second-Chance Algorithm
• An LRU approximation.
• Basically FIFO, plus a hardware reference bit.
• If the page selected for replacement has a reference bit of:
  - 0: replace it.
  - 1: set the bit to 0 and leave the page (give it a second chance).
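A small sketch that counts page faults for FIFO replacement on a reference string; the frame count and reference string below are example values, not taken from the notes.

#include <stdio.h>
#include <stdbool.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
    int nrefs = 10, nframes = 3;
    int frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;                 /* next: FIFO victim position */

    for (int i = 0; i < nrefs; i++) {
        bool hit = false;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == refs[i]) { hit = true; break; }
        if (!hit) {                           /* page fault: replace the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}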
  • 50. 5 | P a g e www.MoussaAcademy.com 00201007153601 III. Allocation of Frames Main Points • Total frames in the system are the maximum number of allocations. • Allocation Schemes: 1. Fixed Allocation 2. Priority Allocation Definitions • (Fixed) Equal allocation: divide the total number of frames to each process equally. • (Fixed) Proportional Allocation: Allocate relative to the size of each process. • Global Replacement: select a frame as a victim from the whole system. • Local Replacement: select a frame as a victim from the process that the page is being replaced to. (Select frame which belongs to the process which caused the page fault) Thrashing • Thrashing: A process is busy swapping pages in and out. • Thrashing leads to:  Low CPU Utilization  OS thinking it needs to increase degree of multiprogramming Size of locality > totally memory size
  • 51. Week 10: Mass Storage Systems IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 52. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Mass Storage Structure Overview..................................................................................................2 II. HDD Scheduling.........................................................................................................................4 III. Selecting Disk-scheduling Algorithm.........................................................................................5 IV. Storage Attachment.....................................................................................................................6 V. RAID Structure...........................................................................................................................7
I. Mass Storage Structure Overview
Definitions
• Transfer rate: the rate at which data flows between the drive and the computer.
• Positioning time (random-access time): the time to move the arm to the desired cylinder (seek time) plus the time for the desired sector to rotate under the head (rotational latency).
• Head crash: results when the disk head contacts the disk surface.
Hard Disk Drives (HDD)
WHAT ARE THE TYPES OF STORAGE?
1. Hard Disk Drives (HDD)
2. Nonvolatile Memory (NVM) or NVMe (NVM Express)
3. Volatile Memory (VM)
• Performance:
  1. Transfer rate: 6 Gb/sec (theoretical)
  2. Effective transfer rate: 1 Gb/sec (real)
  3. Seek time: 3 ms to 12 ms (9 ms is common)
• Access latency = average access time = average seek time + average latency (rotational delay)
• Average I/O time = average access time + (amount of data to transfer / transfer rate) + controller overhead
Nonvolatile Memory (NVM)
• If disk-drive-like, called a solid-state disk (SSD).
• Includes: USB drives, DRAM disk replacements, surface-mounted storage on the motherboard, and mobile storage.
Advantages:
1. More reliable than HDDs (no head crash, no mechanical parts).
2. Much faster than HDDs.
3. No moving parts.
Disadvantages:
1. More expensive.
2. Shorter life span.
3. Less capacity.
4. Buses can be too slow.
5. Cannot overwrite in place.
6. Erases happen in blocks.
7. Can only be erased a limited number of times before wearing out.
NAND Flash Memory (NVM)
• The controller maintains a flash translation layer (FTL) table.
• Garbage collection: frees invalid page space.
• Overprovisioning: working space reserved for garbage collection.
Magnetic Tape
• Access time slower than HDD.
• Random access much slower than HDD.
• Not useful as secondary storage; mainly used for backup.
Volatile Memory
• DRAM can be used as a mass-storage device, but not as secondary storage, because it is volatile.
• RAM drives: present raw block devices that can be formatted with a file system.
• RAM is used as high-speed temporary storage.
Disk Attachment
• Storage is accessed through I/O buses.
WHAT ARE THE TYPES OF DISK ATTACHMENT?
1. Advanced Technology Attachment (ATA)
2. Serial ATA (SATA)
3. eSATA
4. Serial Attached SCSI (SAS)
5. Universal Serial Bus (USB)
6. Fibre Channel (FC)
• Because NVM is faster than HDD, NVMe was created (connecting directly to the PCI bus).
• Data transfers are carried out by controllers called host-bus adapters (HBAs).
Address Mapping
• Disk drives are addressed as a 1-D array of logical blocks.
• Logical-to-physical mapping is conceptually easy,
  - except for bad sectors and a non-constant number of sectors per track.
II. HDD Scheduling
Main Points
• Disk bandwidth = total number of bytes transferred / total time between the first request for service and the completion of the last transfer.
WHAT ARE THE SOURCES OF I/O REQUESTS?
1. The OS.
2. System processes.
3. User processes.
• An I/O request includes:
  1. I/O mode
  2. Disk address
  3. Memory address
  4. Number of sectors to transfer
• The OS maintains a queue of requests, per disk or device.
• In the past, the OS was responsible for queue management; now it is built into the storage devices.
WHAT ARE THE SCHEDULING ALGORITHMS USED?
1. FCFS
2. SCAN
3. C-SCAN
FCFS
SCAN
• Also called the elevator algorithm.
• The disk arm starts at one end of the disk and services requests as it moves toward the other end.
• If requests are uniformly dense, the largest density of waiting requests builds up at the other end of the disk, and those requests wait the longest.
C-SCAN
• Provides a more uniform wait time than SCAN.
• The head moves from one end to the other, but then returns to the beginning of the disk without servicing any requests on the return trip.
• Treats the cylinders as a circular list.
III. Selecting a Disk-Scheduling Algorithm
Main Points
• SSTF is common and has natural appeal.
• SCAN and C-SCAN perform better for heavy loads on the disk.
• To avoid starvation, Linux implements the deadline scheduler.
• NOOP and CFQ (completely fair queueing) are also available on RHEL 7 (Red Hat Enterprise Linux).
Deadline Scheduler
• Separate read and write queues (reads get higher priority).
• 4 queues in total (2 read, 2 write).
• 1 read and 1 write queue sorted by LBA (implementing C-SCAN).
• 1 read and 1 write queue in FCFS order.
• If any request in an FCFS queue is older than the configured age (default 500 ms), that FCFS queue is selected for the next batch of I/O requests; otherwise the LBA-sorted queue is used.
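To make the scheduling comparison concrete, the sketch below computes total head movement under FCFS for an example request queue (the cylinder numbers and starting head position are illustrative values, not from the notes); SCAN or C-SCAN would service the same queue with far less movement.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = 8, head = 53, total = 0;

    for (int i = 0; i < n; i++) {          /* FCFS: serve requests in arrival order */
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("FCFS head movement = %d cylinders\n", total);  /* 640 for this queue */
    return 0;
}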
Storage Device Management
• Low-level formatting (physical formatting): dividing the disk into sectors that the controller can read/write.
• The OS also needs to record its own data structures on the disk:
  - Partition the disk into groups of cylinders (each treated as a logical disk).
• To increase efficiency, most file systems group blocks into clusters:
  - Disk I/O is done in blocks.
  - File I/O is done in clusters.
• Root partition: contains the OS and file systems.
• At mount time, the file system is checked:
  - If all the metadata is correct, add it to the mount table.
  - If not, fix it and try again.
• Bootstrap loader: the program stored in the boot blocks.
• Sector sparing is used to handle bad blocks.
IV. Storage Attachment
HOW DO COMPUTERS ACCESS STORAGE?
1. Host-attached storage (HAS), accessed through local I/O ports.
  - To attach many devices, USB, FireWire, or Thunderbolt is used.
  - High-end systems use Fibre Channel (FC).
2. Network-attached storage (NAS).
  - Common protocols are NFS and CIFS.
  - Implemented via remote procedure calls (RPCs).
  - iSCSI uses an IP network to carry the SCSI protocol.
3. Cloud storage, which is API based.
Storage Arrays
• Avoid NAS drawbacks (such as consuming network bandwidth).
WHAT ARE THE FEATURES PROVIDED BY A STORAGE ARRAY TO HOSTS?
1. Ports to connect hosts to the array.
2. Memory and controlling software.
3. RAID.
4. Shared storage.
5. Snapshots, clones, thin provisioning, replication, deduplication.
Storage Area Network (SAN)
• Storage is made available to hosts via LUN masking.
• Easy to add or remove storage.
V. RAID Structure
Main Points
• RAID: Redundant Array of Inexpensive Disks.
• Mean time to repair: the exposure time during which another failure could cause data loss.
• RAID increases the mean time to failure.
• Frequently combined with NVRAM to improve write performance.
• Arranged into six different levels.
• Disk striping uses a group of disks as one storage unit.
• RAID alone does not prevent or detect data corruption, but adding checksums does.
• RAID 1: mirroring/shadowing (keeps a duplicate of each disk).
• RAID 1+0 / RAID 0+1:
  - Striped mirrors / mirrored stripes.
  - High performance and high reliability.
• RAID 4, 5, 6 use much less redundancy.
Object Storage
• Managed by object-storage software such as the Hadoop Distributed File System (HDFS) and Ceph.
  - Typically stores N copies, across N systems.
  - Horizontally scalable.
  - Content addressable, unstructured.
  • 59. Week 11: I/O Systems IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 60. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. I/O Hardware..................................................................................................................................2 II. Application I/O Interface............................................................................................................4 III. Kernel I/O Subsystem.................................................................................................................5 Error Handling ...................................................................................................................................6 I/O Protection.....................................................................................................................................6 Power Management ...........................................................................................................................6 IV. Transforming I/O requests to hardware Operations....................................................................6
I. I/O Hardware
Introduction
WHAT ARE THE TYPES OF I/O DEVICES?
1. Storage.
2. Transmission.
3. Human interface.
WHAT ARE THE COMMON CONCEPTS OF I/O INTERFACES?
1. Port: connection point for a device.
2. Bus: daisy chain or shared direct access.
  - PCI/PCIe is used in PCs and servers.
  - The expansion bus connects relatively slow devices.
  - Serial-attached SCSI (SAS) is a disk interface.
3. Controller: operates the port, bus, and device.
  - Integrated or on a separate circuit board.
• Devices usually have registers where the device driver places commands, addresses, and data; a FIFO buffer may sit behind them.
• Data-in register: read by the host to get input.
• Data-out register: written by the host to send output.
• Status register: contains status about the current command (completed, data available, error).
• Control register: written by the host to start a command or change the mode of the device.
• Memory-mapped I/O: device data and command registers are mapped into the processor's address space.
Polling
• Each polling cycle consists of three steps:
  1. Read the status register.
  2. Extract the busy/status bit.
  3. Branch if it is not zero.
WHAT ARE THE STEPS OF POLLING?
1. The host repeatedly reads the busy bit until it is 0 (busy waiting).
2. The host sets the write bit and writes the data into the data-out register.
3. The host sets the command-ready bit to 1.
4. The controller sets the busy bit and performs the operation.
5. The controller clears the command-ready bit (and the busy bit when done).
  • 62. 3 | P a g e www.MoussaAcademy.com 00201007153601 Interrupts • CPU interrupt-request line is checked by processor after each instruction. • Interrupt handler: receives interrupt. WHAT ARE THE TYPES OF INTERRUPTS? 1. Maskable: can be ignored or delayed. 2. Non-maskable: must be handled immediately. • Interrupt vector: dispatches interrupts to correct handlers. • Interrupt mechanism is also used for exceptions. • System calls executes via trap which triggers kernel to handle request. • Multi-CPU devices can handle multiple interrupts concurrently. • Used for time-sensitive processing. WHAT ARE THE INTERRUPT HANDLING FEATURES? 1. Interrupt handling during critical processing. 2. Dispatching interrupt handler without polling. 3. Multilevel interrupts. 4. Instruction to get the OS attention directly such as: Division by zero (this is a trap). Direct Memory Access (DMA) • Used to avoid programmed I/O. • Requires DMA controller. • OS writes DMA command into memory. WHAT ARE THE CONTENTS OF COMMAND BLOCK? 1. Source and Destination Addresses. 2. Read or Write mode. 3. Bytes count. 4. Writes location of command block to DMA controller. • Cycle Stealing: accessing computer memory without interfering with CPU.
  • 63. 4 | P a g e www.MoussaAcademy.com 00201007153601 II. Application I/O Interface Main Points • Each OS has its own I/O subsystem and device driver frameworks. WHAT ARE THET DEVICE VARIATIONS? 1. Stream or block. 2. Sequential or random-access. 3. Synchronous or asynchronous. 4. Sharable or dedicated. 5. Speed of operation. 6. Read-write, read only or write- only. Characteristics HOW CAN I/O DEVICES BE GROUPED BY OS? 1. Block I/O. 2. Character I/O (stream). 3. Memory-mapped file access. 4. Networks sockets. Block Devices • Disk drives. • Commands includes Read, Write, Seek • Raw I/O, Direct I/O, file-system access. Character Devices • Keyboard, mice, serial ports. • Command includes get(), put(). • Libraries layers allow line editing. Network Devices • Different from block and character devices to have its own interface. • Linux, Unix, Windows use Separate network protocol from network operations. • Commands includes select(). • Varying approaches.  pipes, FIFO, stream, queues, mailboxes.
Clocks and Timers
• Provide the current time, elapsed time, and timers.
• A programmable interval timer is used for timings and periodic interrupts.
Non-blocking I/O and Asynchronous I/O
• Blocking: the process is suspended until the I/O completes.
• Non-blocking: the I/O call returns with as much data as is available.
  - Implemented via multi-threading.
  - Returns with a count of bytes read or written.
• Asynchronous: the process runs while the I/O executes, but is more difficult to use.
Vectorized I/O
• Vectorized I/O: allows one system call to perform multiple I/O operations (in UNIX, readv()).
• This method is called scatter-gather:
  - Decreases context-switching and system-call overhead.
  - Can provide atomicity.
III. Kernel I/O Subsystem
1. Scheduling
• Some I/O request ordering is done via a per-device queue.
• Some OSs try to provide fairness and use quality-of-service techniques (e.g., IPQOS).
2. Buffering
• Buffering: storing data in memory while transferring it between devices.
WHY BUFFERING?
1. Speed mismatch.
2. Transfer-size mismatch.
3. Maintaining copy semantics.
• Double buffering: keeping two copies (buffers) of the data.
3. Caching
• Caching: a faster device holds a copy of the data.
• A primary contributor to performance.
• Sometimes combined with buffering.
4. Spooling
• Spooling: holding output for a device.
• Used when the device can serve only one request at a time, such as printing.
5. Device Reservation
• Exclusive access to a device.
• Allocation and de-allocation system calls.
• Watch out for deadlocks.
Note: in UNIX, device control is exposed through ioctl().
  • 65. 6 | P a g e www.MoussaAcademy.com 00201007153601 Error Handling Main Points • OS can recover from disk read, device unavailable, write failures. • Most return error number or code when I/O request fails. • System hold problem report logs. I/O Protection Main Points • All I/O instructions are privileged. • I/O must be performed via a system call so Memory locations must be protected. Power Management Block Devices • Cloud computing environments move virtual machines between servers IV. Transforming I/O requests to hardware Operations I/O Life Cycle
  • 66. Week 12: File System & File-System Implementation IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 67. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. File Concept....................................................................................................................................2 II. Access Methods ..........................................................................................................................2 III. Disk Structure .............................................................................................................................3 IV. Protection....................................................................................................................................5 V. File-system Structure ..................................................................................................................5 VI. File-system Operations ...............................................................................................................6 VII. Directory Implementation...........................................................................................................6 VIII. Allocation Methods.................................................................................................................6 IX. Free-space Management .............................................................................................................8
  • 68. 2 | P a g e www.MoussaAcademy.com 00201007153601 I. File Concept Main Points • Contiguous logical address space. • Types:  Data can either be Numeric, Character, Binary.  Program. File attributes 1. Name 2. Identifier 3. Type 4. Location 5. Size 6. Protection 7. Time, date, and user identification File Operations 1. Create 2. Write 3. Read 4. Seek(Reposition) 5. Delete 6. Truncate 7. Open 8. Close File Locking • Shared lock: Reader lock. • Exclusive lock: Writer lock. • Mandatory: Access is denied depending on locks held and requested. • Advisory: processes can find the status of locks. II. Access Methods Main Points • Sequential Access 1. Read next. 2. Write next. 3. Reset 4. No read after last write (rewrite). • Direct Access 1. Read n. 2. Write n. 3. Seek n which is equal to:  read next, write next, rewrite n Other Access Methods • Generally, involves creating an index. • If the index is too large, we can create an index to the index. • IBM indexed sequential-access method (ISAM)  Small master index  File sorted on a key.  Done by OS. • VMS OS provides index and relative files. Write uses a write pointer Read uses a read pointer
III. Disk Structure
Main Points
• A disk can be divided into partitions (also known as minidisks or slices). A disk or partition can be:
  1. Part of a RAID set, which protects the disk against failures.
  2. Raw (without a file system).
  3. Formatted (with a file system).
• Volume: an entity containing a file system.
• There can be many special-purpose file systems within the same OS on one computer.
Operations on a Directory
1. Search for a file.
2. Create a file.
3. Delete a file.
4. List a directory.
5. Rename a file.
6. Traverse the file system.
Directory Organization
• The directory is organized to obtain:
  - Efficiency: locating a file quickly.
  - Naming: convenient for users.
WHAT ARE THE TYPES OF DIRECTORIES?
1. Single-level.
2. Two-level.
3. Tree-structured.
4. Acyclic-graph.
5. General-graph.
Single-Level Directory
• Naming problem: each file name can exist only once.
• Grouping problem: no grouping of files is possible.
Two-Level Directory
• Efficient searching.
• No grouping capability.
• Different users can have files with the same name.
Tree-Structured Directory
Acyclic-Graph Directories
• Shared subdirectories and files.
• A shared file can have two different names (aliasing).
• Deleting a file that is still pointed to (if implemented with a list of links) results in dangling pointers.
How do we solve this?
1. Back pointers, e.g., using a daisy-chain organization.
2. Entry-hold-count solution.
• New directory entry type:
  - Link: a pointer to an existing file.
  - Resolving the link: following the pointer to locate the file.
  • 71. 5 | P a g e www.MoussaAcademy.com 00201007153601 General-graph Directory HOW DO WE GUARANTEE NO CYCLES? 1. Allow links only to files, not subdirectories. 2. Garbage collection. 3. Run cycle detection for each new link. IV. Protection Main Points WHAT ARE THE TYPES OF ACCESS? 1. Read 2. Write 3. Execute 4. Append 5. Delete 6. List V. File-system Structure Main Points • File structure: collection of related information. • File system: resides on disk and is organized into layers.  User interface to storage, mapping logical to physical addresses.  Provides efficient and convenient access for storing and retrieving data. • File Control Block (FCB): structure containing information about a file. • Device driver: controls the physical device. File System Layers • Device drivers manage I/O devices at the I/O control layer. • The basic file system is given a request such as "retrieve block 123" and translates it for the device driver.  Manages memory buffers and caches. • The file organization module translates logical blocks to physical blocks,  and manages free space and disk allocation. • The logical file system manages metadata information. • Translates a file name into a file number, file handle, and location. • Manages directories and provides protection. • Layers are useful for reducing complexity and redundancy. • Windows uses the FAT, FAT32, and NTFS file systems. • Linux uses the ext3 and ext4 file systems. Newer file systems include ZFS, GoogleFS, and Oracle ASM.
  • 72. 6 | P a g e www.MoussaAcademy.com 00201007153601 VI. File-system Operations Main Points • Boot Control Block: contains info needed by the system to boot the OS. • Volume Control Block (superblock, master file table): contains volume details. • The OS maintains an FCB for each file. In-Memory File System Structures • Mount table: stores file system mounts and file system types. • System-wide open-file table: contains a copy of the FCB of each open file. • Per-process open-file table: contains pointers to the appropriate entries in the system-wide open-file table. VII. Directory Implementation Main Points HOW CAN WE IMPLEMENT A DIRECTORY? 1. Linear list  Simple to program.  Time-consuming to execute. 2. Hash table  Shorter search time.  Collisions: two file names hash to the same location. (A short sketch of a hash-table directory follows this slide.) VIII. Allocation Methods Main Points • Refers to how disk blocks are allocated. HOW CAN WE ALLOCATE DISK BLOCKS? 1. Contiguous. 2. Linked. 3. File Allocation Table (FAT). 4. Indexed.
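A minimal sketch of a directory implemented as a hash table from file name to an FCB number. The fixed table size, the hash function, and linear probing for collisions are illustrative assumptions; a real implementation would also handle a full table and deletions.

```c
/* Minimal sketch: hash-table directory (toy example, no full-table handling). */
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 8

struct entry { char name[32]; int fcb; int used; };
static struct entry table[TABLE_SIZE];

static unsigned hash(const char *name) {
    unsigned h = 0;
    while (*name) h = h * 31 + (unsigned char)*name++;
    return h % TABLE_SIZE;
}

/* collisions (two names hashing to the same slot) are resolved by linear probing */
static void dir_add(const char *name, int fcb) {
    unsigned i = hash(name);
    while (table[i].used) i = (i + 1) % TABLE_SIZE;
    strncpy(table[i].name, name, 31);
    table[i].fcb = fcb;
    table[i].used = 1;
}

static int dir_lookup(const char *name) {
    unsigned i = hash(name);
    while (table[i].used) {
        if (strcmp(table[i].name, name) == 0) return table[i].fcb;
        i = (i + 1) % TABLE_SIZE;
    }
    return -1;   /* not found */
}

int main(void) {
    dir_add("notes.txt", 5);
    dir_add("report.doc", 9);
    printf("notes.txt -> FCB %d\n", dir_lookup("notes.txt"));
    return 0;
}
```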
  • 73. 7 | P a g e www.MoussaAcademy.com 00201007153601 Contiguous Allocation • Best performance in most cases. • Simple. • Problems include:  Finding space on the disk for a file.  Knowing the size of a file in advance.  External fragmentation, which needs compaction either off-line (downtime) or on-line. • Extent-based systems  Many newer file systems use a modified contiguous allocation scheme.  Extent: a set of contiguous disk blocks. Linked Allocation • Each file is a linked list of blocks. • The file ends at a nil pointer. • No external fragmentation. • Each block contains a pointer to the next block. • When a new block is needed, the free-space management system is called. • Clustering blocks improves efficiency but increases internal fragmentation. • Locating a block can result in many I/O requests and disk seeks. FAT • The beginning of the volume holds a table indexed by block number. • Much like a linked list, but faster on disk and cacheable. • New block allocation is simple. (A short sketch of following a FAT chain appears below.) Indexed Allocation Method • Indexed allocation: each file has its own index block. • For small files we use random access with one block for the index table. • For large files we use a linked scheme or multi-level indexing.  Two-level scheme. • A combined scheme is used by UNIX UFS.
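A minimal sketch of walking a file's block chain through a File Allocation Table. The table contents are made-up sample data; -1 is used here as the end-of-file marker.

```c
/* Minimal sketch: following a file's block chain in a FAT. */
#include <stdio.h>

#define END_OF_FILE -1

int main(void) {
    /* fat[i] holds the number of the block that follows block i in some file */
    int fat[10] = {3, END_OF_FILE, 7, 5, END_OF_FILE,
                   1, END_OF_FILE, 4, END_OF_FILE, END_OF_FILE};

    int start = 2;                        /* the directory entry stores only the first block */
    printf("blocks of the file:");
    for (int b = start; b != END_OF_FILE; b = fat[b])   /* walk the chain in the table */
        printf(" %d", b);
    printf("\n");                         /* prints: 2 7 4 */
    return 0;
}
```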
  • 74. 8 | P a g e www.MoussaAcademy.com 00201007153601 Performance • The best method depends on the file access type.  Contiguous is great for both sequential and random access. • Linked is good for sequential access, not random access. • Access type can be declared at creation (select either contiguous or linked). • Indexed is more complex.  A single block access could require two index-block reads and then the data-block read.  Clustering can help improve throughput and reduce CPU overhead. • For NVM  Older algorithms use many CPU cycles.  The goal is to reduce CPU cycles and the path length needed for I/O. IX. Free-space Management Main Points • The file system maintains a free-space list. Linked Free Space List on Disk • Linked list  Cannot easily get contiguous space.  No wasted space.  No need to traverse the entire list. • Grouping: modify the list to store the addresses of the next n-1 free blocks in the first free block. • Counting is used because free space is frequently contiguous (principle of locality).  Keep the address of the first free block and a count of the following free blocks.  The free-space list then has entries containing a starting address and a count. (A short sketch of this counting representation follows this slide.) Space Maps • Used in ZFS. • Designed to keep metadata I/O manageable on very large file systems. • The device is divided into metaslab units. • Each metaslab has an associated space map (which uses the counting algorithm). • Logs block activity rather than rewriting the free list itself. • Metaslab activity: load the space map into memory in a balanced-tree structure, indexed by offset.
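A minimal sketch of the counting (run-length) free-space representation: each entry stores the first free block of a run and how many contiguous free blocks follow it. The block numbers are made-up sample data.

```c
/* Minimal sketch: free-space management with the counting representation. */
#include <stdio.h>

struct run { int first; int count; };   /* first free block + length of the run */

int main(void) {
    /* equivalent of a free list covering blocks 4-7, 12, and 20-24 */
    struct run free_list[] = { {4, 4}, {12, 1}, {20, 5} };
    int n = sizeof free_list / sizeof free_list[0];

    int total = 0;
    for (int i = 0; i < n; i++) {
        printf("free run: blocks %d..%d\n",
               free_list[i].first, free_list[i].first + free_list[i].count - 1);
        total += free_list[i].count;
    }
    printf("total free blocks: %d\n", total);   /* 10 */
    return 0;
}
```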
  • 75. Week 13: Security & Protection IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 76. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Security...........................................................................................................................................2 II. Program Threats..........................................................................................................................3 III. System and Network Threats.....................................................................................................4 IV. Goal of Protection.......................................................................................................................5 V. Principles of Protection...............................................................................................................5 VI. Domain of Protection..................................................................................................................6 VII. Access Matrix .............................................................................................................................6 VIII. Implementation of Access Matrix...........................................................................................7
  • 77. 2 | P a g e www.MoussaAcademy.com 00201007153601 I. Security Introduction • Intruders (crackers) attempt to breach security. • Threat: potential security violation. • Attack: attempt to breach security. Security Violation Categories WHAT ARE THE SECURITY VIOLATION CATEGORIES? 1. Breach of confidentiality: unauthorized reading of data. 2. Breach of integrity: unauthorized modification of data. 3. Breach of availability: unauthorized destruction of data. 4. Theft of service: unauthorized use of resources. 5. Denial of Service (DoS): prevention of legitimate use. Security Violation Methods WHAT ARE THE SECURITY VIOLATION METHODS? 1. Masquerading (authentication breach): pretending to be an authorized user. 2. Replay attack: repeating a captured message, possibly with modifications. 3. Man-in-the-middle attack: the intruder sits in the data flow, masquerading as the sender to the receiver and vice versa. 4. Session hijacking: intercepting an ongoing session to bypass authentication. 5. Privilege escalation: a very common attack; gaining access to resources a user is not supposed to have. Security Measure Levels 1. Physical: servers, data centers, terminals. 2. Application. 3. Operating system: protection mechanisms, debugging. 4. Network: interruption, DoS, intercepted communications. • Security is only as strong as the weakest link in the chain. • Humans are a risk due to phishing and social engineering. It is impossible to have absolute security.
  • 78. 3 | P a g e www.MoussaAcademy.com 00201007153601 II. Program Threats Definitions • Malware: software designed to exploit, disable, or damage a computer. • Trojan horse: a type of malware disguised as a legitimate program. • Spyware: a program installed alongside legitimate software that displays ads and captures user data. • Ransomware: locks data/files via encryption and demands money to decrypt them. • Keystroke logger: grabs passwords, credit card numbers. Main Points • Other threats include trap doors and logic bombs. • Most threats try to violate the principle of least privilege. • Goal: leave a Remote Access Tool (RAT) behind for repeated access. Code Injection • Code-injection attack: system code has bugs that allow code to be added or modified. • Results from poor programming practices or unsafe (low-level) languages. • Can be run by script kiddies, as ready-made tools already exist. • Typical vehicle: a buffer overflow. (A short sketch of the vulnerable pattern follows this slide.) Viruses • Code embedded into a legitimate program. • Designed to infect other computers. • Specific to CPU architecture and OS. • Usually carried via e-mail or a macro. WHAT ARE THE CATEGORIES OF VIRUSES? 1. File/parasitic. 2. Boot/memory. 3. Macro. 4. Source code. 5. Polymorphic (avoids having a virus signature). 6. Encrypted. 7. Stealth. 8. Multipartite. 9. Armored.
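A minimal sketch of the kind of bug a code-injection / buffer-overflow attack exploits: copying attacker-controlled input into a fixed-size stack buffer with no bounds check, contrasted with a bounded copy. The function names and buffer size are illustrative; this is not taken from any real program.

```c
/* Minimal sketch: the unsafe pattern behind many buffer-overflow exploits. */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);        /* no bounds check: input longer than 16 bytes overwrites the stack */
    printf("%s\n", buf);
}

void safer(const char *input) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);   /* bounded copy prevents the overflow */
    printf("%s\n", buf);
}

int main(void) {
    safer("short and safe");
    /* vulnerable("a string much longer than sixteen bytes...");  would corrupt the stack */
    return 0;
}
```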
  • 79. 4 | P a g e www.MoussaAcademy.com 00201007153601 • Trojan Horse  Code that misuses its environment, e.g., a program written by one user and executed by another.  Spyware, browser pop-ups, covert channels. • Trap Door  A specific user identifier or password that circumvents normal security procedures.  Could be included in a compiler. Windows WHY IS WINDOWS TARGETED FOR MOST ATTACKS? 1. Most common OS. 2. Everyone is an administrator. 3. Monoculture considered harmful. III. System and Network Threats Network Attacks • Harder to detect and prevent. • Difficult to establish a shared secret between the communicating parties. • No physical limits once connected to the internet. • Difficult to determine the location of a connected system, as only its IP address is available. Main Points • Worms: standalone programs that use a spawn mechanism to replicate. • Internet worm  Exploited UNIX networking features.  Exploited the trust-relationship mechanism used by rsh to access friendly systems.  Grappling hook: a small program that uploaded the main worm program (99 lines of C code).  The hooked system then uploaded the main code. Denial of Service (DoS) • Overload the targeted computer (send too many requests). • Distributed Denial-of-Service (DDoS): comes from multiple sites (computers) at once. • How many connections the OS can handle must be considered at the start of the handshake (SYN).  It can be hard to tell the difference between being attacked and simply being popular. • Port scanning: looking for network ports accepting connections (can be used for good or evil).  Nmap: scans all ports in a given IP range.  Nessus: has a database of protocols and known bugs to test against a system.  Often launched from zombie systems, which decreases traceability. Covert channels transfer information between processes. A monoculture is a group of computers running identical software.
  • 80. 5 | P a g e www.MoussaAcademy.com 00201007153601 IV. Goal of Protection Main Points • Ensure that each object is accessed correctly and only by the processes that are allowed to do so. V. Principles of Protection Guiding Principle • Principle of least privilege. • Programs, users, and systems should be given only the privileges they need. • Setting permissions properly can limit the damage if any bugs are exploited. • Can be static (fixed during the life of the system or of a process). • Can be dynamic (changed by a process as needed), e.g., domain switching, privilege escalation. • Compartmentalization: protecting each individual system component through permissions. • Granularity  Coarse-grained privilege management is easier and simpler, but least privilege is then applied in large chunks.  Fine-grained management is more complex and has more overhead, but is more protective. • Audit trail: recording all protection-oriented activities. • Defense in depth: no single principle is a panacea for security vulnerabilities. VI. Domain of Protection Main Points • Rings of protection separate functions into domains and order them hierarchically. • A process should have access only to the objects it currently requires to complete its task (the need-to-know principle). • Associations can be static or dynamic. • If dynamic, processes can domain switch. Domain Structure • Domain = user: which objects can be accessed depends on the identity of the user. • Domain = process: which objects can be accessed depends on the identity of the process. • Domain = procedure: which objects can be accessed corresponds to the local variables defined within the procedure.
  • 81. 6 | P a g e www.MoussaAcademy.com 00201007153601 VII. Access Matrix Main Points • Rows = domains. • Columns = objects. Usage • If a process in Domain Di tries to do operation on Oj then the operation must be in the access matrix. • Users who create object can define access column for that object. • Dynamic Protection:  Owner of Oi.  copy op from Oi to Oj (denoted by “*”).  Control: Di can modify Dj access rights.  Transfer: switch from domain Di to Dj. • Mechanism: OS provides access-matrix + rules.  matrix only changed by authorized users. • Policy: user dictates policy who can access what. • Does not solve general confinement problem. ACCESS MATRIX WITH COPY RIGHTS
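A minimal sketch of an access matrix represented as a 2-D array of right bit masks, checked before a domain performs an operation on an object. The domain/object names (D1..D3, F1, F2) and the specific rights are illustrative sample data, not the matrix from the slide's figure.

```c
/* Minimal sketch: access matrix as a 2-D array; an operation is allowed only
 * if it appears in the entry access(domain, object). */
#include <stdio.h>

enum { READ = 1, WRITE = 2, EXECUTE = 4 };
#define DOMAINS 3
#define OBJECTS 2   /* object 0 = F1, object 1 = F2 */

static const int matrix[DOMAINS][OBJECTS] = {
    { READ,         0       },   /* D1 may only read F1 */
    { READ | WRITE, READ    },   /* D2 may read/write F1 and read F2 */
    { 0,            EXECUTE },   /* D3 may only execute F2 */
};

static int allowed(int domain, int object, int op) {
    return (matrix[domain][object] & op) != 0;   /* the op must be in the matrix entry */
}

int main(void) {
    printf("D2 write F1: %s\n", allowed(1, 0, WRITE) ? "granted" : "denied");
    printf("D1 write F1: %s\n", allowed(0, 0, WRITE) ? "granted" : "denied");
    return 0;
}
```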
  • 82. 7 | P a g e www.MoussaAcademy.com 00201007153601 VIII. Implementation of Access Matrix Global Table • Store ordered triples (domain, object, right-set). • The table could be large and may not fit in main memory. • Difficult to group objects. Access List for Objects • Row = capability list (key). • Column = access-control list for one object. • The resulting per-object list holds (domain, right-set) pairs. • Easily extended to contain a default right-set. (A short sketch of such a per-object list follows this slide.) Capability List for Domains • Instead of being object-based, the list is domain-based. • The capability list for a domain is a list of objects together with the operations allowed on them. • Capability: an object represented by its name or address. • The capability list is associated with a domain but is never directly accessible by the domain (like a secured pointer). Lock-Key • A compromise between access lists and capability lists. • Each object has a list of unique bit patterns, called locks. • Each domain has a list of unique bit patterns, called keys.
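A minimal sketch of one object's access-control list, i.e. a single column of the access matrix stored as (domain, right-set) pairs, with an empty default set for domains not listed. The domain numbers and rights are made-up sample data.

```c
/* Minimal sketch: per-object access-control list lookup. */
#include <stdio.h>

enum { READ = 1, WRITE = 2 };
struct acl_entry { int domain; int rights; };

/* ACL for one object; domains not listed fall back to the (empty) default set */
static const struct acl_entry acl[] = { {1, READ | WRITE}, {2, READ} };
static const int acl_len = sizeof acl / sizeof acl[0];

static int acl_allows(int domain, int op) {
    for (int i = 0; i < acl_len; i++)
        if (acl[i].domain == domain)
            return (acl[i].rights & op) != 0;
    return 0;   /* default: no rights */
}

int main(void) {
    printf("domain 2 write: %s\n", acl_allows(2, WRITE) ? "granted" : "denied");
    printf("domain 1 write: %s\n", acl_allows(1, WRITE) ? "granted" : "denied");
    return 0;
}
```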
  • 83. Week 14: Virtual Machines & Networks and Distributed Systems IT241 Operating Systems Moussa Academy 00201007153601 WWW.MOUSSAACADEMY.COM
  • 84. 1 | P a g e www.MoussaAcademy.com 00201007153601 Contents Contents .................................................................................................................................................1 I. Overview ........................................................................................................................................2 II. Benefits and Features..................................................................................................................2 III. Types of Virtual Machines and Implementations.......................................................................3 IV. Operating System Components...................................................................................................5 V. Distributed Systems ....................................................................................................................6 VI. Distributed File Systems.............................................................................................................7
  • 85. 2 | P a g e www.MoussaAcademy.com 00201007153601 I. Overview System Models Implementation of VMMs WHAT ARE THE TYPES OF HYPERVISORS? 1. Type 0: hardware-based solution via firmware.  IBM LPARs, Oracle LDOMs. 2. Type 1: OS-like software that provides virtualization.  VMware ESX, Joyent SmartOS, Citrix XenServer.  Also includes general-purpose OSs such as Windows with Hyper-V and Red Hat Linux with KVM. 3. Type 2: applications that run on standard OSs.  VMware Workstation, Fusion, Parallels Desktop, Oracle VirtualBox. WHAT ARE THE OTHER VARIATIONS OF HYPERVISORS? 1. Paravirtualization: the guest OS is modified to work with the VMM. 2. Programming-environment virtualization: VMMs do not virtualize real hardware but instead create an optimized virtual system (used by Oracle Java and Microsoft .NET). 3. Emulators: allow applications written for one hardware environment to run on different hardware. 4. Application containment: not true virtualization, but provides similar features by segregating applications, making them more secure and manageable.  Oracle Solaris Zones, BSD Jails, IBM AIX WPARs. • Much variation exists because of the breadth, depth, and importance of virtualization. II. Benefits and Features Main Points • Templating: create an OS + application VM once and reuse it. • Live migration: move a running VM from one host to another (no interruption of access). • Cloud computing: templating + live migration.  Using APIs, programs tell the cloud infrastructure to create new guests, VMs, and virtual desktops. VMM: Virtual Machine Manager (the hypervisor).
  • 86. 3 | P a g e www.MoussaAcademy.com 00201007153601 III. Types of Virtual Machines and Implementations VM Life Cycle WHAT IS THE LIFE CYCLE OF A VM? 1. Created by the VMM. 2. Resources are assigned to it (number of cores, memory, networking details, storage details).  In Type 0, resources are usually dedicated.  In other types, resources are shared, or a mix. Type 0 Hypervisor • Implemented in firmware. • Smaller feature set than the other types. • Each guest has dedicated hardware. • I/O is a challenge, as it is difficult to have enough devices and controllers for each guest. • The VMM implements a control partition running daemons that is used for shared I/O. • Can provide virtualization-within-virtualization. Type 1 Hypervisor • Found in company datacenters (data-center OSs).  Move guests between systems to balance performance.  Snapshots and cloning. • Special-purpose operating systems that run natively on hardware.  Rather than providing a system-call interface, they create, run, and manage guest OSs.  Can run on Type 0 hypervisors but not on other Type 1s.  Guests generally don't know they are running in a VM.  Implement device drivers for the host hardware.  Provide traditional OS services such as CPU and memory management. • Another variation is a general-purpose OS that also provides VMM functionality.  Red Hat Enterprise Linux with KVM, Windows with Hyper-V, Oracle Solaris.  Perform normal duties as well as VMM duties.  Typically fewer features than dedicated Type 1 hypervisors. • Treat guest OSs as just another process. Type 2 Hypervisor • Very little OS involvement in virtualization. • The VMM is simply another process, run and managed by the host, which requires no changes to the host OS. • The host doesn't know the process is a VMM running guests. • Poor overall performance, as it can't take advantage of some hardware features. Its simplicity can lead to virtual-machine sprawl.
  • 87. 4 | P a g e www.MoussaAcademy.com 00201007153601 Paravirtualization • Does not fit the strict definition of virtualization. • Less needed as hardware support for VMs grows. • Xen was the leader in the paravirtualization space. • Paravirtualization allowed virtualization on older CPUs without binary translation. • The guest had to be modified to run on a paravirtualized VMM. Programming Environment Virtualization • Not really virtualization. • Similar to interpreted languages. • A programming environment designed to run on a VM (e.g., the Java Virtual Machine (JVM)). Emulation • Ordinary virtualization needs the VM CPU to be the same as the host CPU. • Emulation can run guests built for a different CPU. • Translates instructions from the guest CPU to the native CPU. (A toy fetch-decode-execute sketch follows this slide.) • Useful when the host system has one architecture and the guest was compiled for another. • Slower than native code. • Very popular, especially in gaming. Application Containment • One of the goals of virtualization is segregation of applications.  This can be done without full virtualization if the application is compiled for the host OS. • Oracle containers / zones  One kernel running (the host OS).  Each zone has its own resources such as addresses, ports, networking stacks, and user accounts.  CPU and memory are divided between zones.
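A minimal sketch of the core idea behind emulation: a loop that fetches each guest "instruction" and carries it out with native code. The three-instruction guest ISA (LOAD, ADD, PRINT) is entirely made up for illustration; a real emulator decodes a binary instruction format and emulates a full register set.

```c
/* Minimal sketch: fetch-decode-execute loop for a toy guest ISA. */
#include <stdio.h>

enum opcode { LOAD, ADD, PRINT, HALT };
struct instr { enum opcode op; int operand; };

int main(void) {
    /* a tiny "guest program": acc = 5; acc += 7; print acc */
    struct instr program[] = { {LOAD, 5}, {ADD, 7}, {PRINT, 0}, {HALT, 0} };

    int acc = 0;                       /* emulated guest register */
    for (int pc = 0; ; pc++) {         /* fetch-decode-execute loop */
        struct instr i = program[pc];
        switch (i.op) {
        case LOAD:  acc = i.operand;  break;   /* each guest op is carried out by native code */
        case ADD:   acc += i.operand; break;
        case PRINT: printf("acc = %d\n", acc); break;
        case HALT:  return 0;
        }
    }
}
```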
  • 88. 5 | P a g e www.MoussaAcademy.com 00201007153601 IV. Operating System Components CPU Scheduling • When virtualized, a single-CPU system acts like a multiprocessor one. • If there are not enough physical CPUs (more virtual CPUs than physical ones), the result is CPU overcommitment. • VMM cycle stealing: guests do not get the CPU cycles they expect. • Some VMMs provide an application to run in each guest to fix the time of day and provide other integration features. I/O • I/O has a lot of variation, which makes it easier for VMMs to integrate with guests, but it is also complicated. • Networking is complicated, as both host and guest need internet access.  The VMM can bridge the guest onto the network.  The VMM can provide Network Address Translation (NAT). Storage Management • Both boot-disk and general disk access need to be provided. • In Type 1, storage is provided by the VMM as a disk image. • In Type 2, storage is kept as files in the host OS. • Physical-to-virtual (P-to-V): convert a native disk into the VMM format. • Virtual-to-physical (V-to-P): convert the VMM format into a native disk format. Live Migration • Moving guests between systems without interrupting access. WHAT ARE THE STEPS OF LIVE MIGRATION? 1. The source VMM establishes a connection with the target VMM. 2. The target creates a new guest (e.g., by creating a new VCPU). 3. The source VMM sends the guest's read-only memory pages to the target VMM. 4. The source VMM sends the guest's read-write memory pages to the target VMM. 5. Repeat step 4 until done, because pages modified during the copy (dirtied pages) must be sent again. 6. Once the remaining set of pages becomes very small, the source VMM freezes the guest and sends the remaining state. 7. The target starts running the now-unfrozen guest. (A toy sketch of this iterative pre-copy loop follows this slide.)
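A minimal sketch of the iterative pre-copy idea behind live migration: keep resending the pages that were dirtied during the previous round until the remaining set is small, then freeze the guest and send the rest. The page counts and the re-dirtying rate are simulated numbers, not the behavior of any real VMM.

```c
/* Minimal sketch: iterative pre-copy loop used in live migration (simulated). */
#include <stdio.h>

int main(void) {
    int dirty = 4096;                 /* read-write pages still to send */
    const int threshold = 16;         /* "very small" remaining set */

    while (dirty > threshold) {
        printf("pre-copy round: sending %d pages while the guest keeps running\n", dirty);
        dirty = dirty / 8;            /* pretend 1/8 of the sent pages were re-dirtied */
    }

    printf("freeze guest, send final %d pages and CPU state\n", dirty);
    printf("target VMM resumes the guest\n");
    return 0;
}
```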
  • 89. 6 | P a g e www.MoussaAcademy.com 00201007153601 V. Distributed Systems Overview • Distributed system: a collection of loosely coupled nodes. • Site: the location of a machine. • Nodes can be processors, computers, machines, or hosts. • Nodes may be arranged as client-server, peer-to-peer, or hybrid.  Client-server: the server has resources that a client wants to use.  Peer-to-peer: each node shares equal responsibilities. • Communication over the network is done by message passing. Reasons for Distributed Systems WHAT ARE THE REASONS FOR DISTRIBUTED SYSTEMS? 1. Resource sharing  Sharing files, information, printing.  Using remote GPUs. 2. Computation speedup  Distribute the processing needed across multiple computers.  Load balancing: moving jobs to more lightly loaded sites. 3. Reliability: detect and recover from failures. Design Issues of Distributed Systems 1. Robustness: making a highly fault-tolerant system.  Failure detection: detecting hardware failure is difficult, so a heartbeat protocol is used (a sketch follows this slide).  Reconfiguration and recovery: when a link becomes available again, the information that was not broadcast must be sent again. 2. Transparency: the system should appear as a conventional (centralized) system. 3. Scalability: the system should easily accept new resources.  React gracefully to increased load.  Adding more resources may itself generate indirect load.  Data compression and deduplication cut down the storage and network capacity used.
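A minimal sketch of heartbeat-based failure detection: a node is suspected to have failed when no heartbeat has been received within a timeout. The node count, timestamps, and timeout value are simulated values chosen only for illustration.

```c
/* Minimal sketch: heartbeat failure detection with a fixed timeout. */
#include <stdio.h>

#define NODES 3
#define TIMEOUT 5          /* seconds without a heartbeat before suspecting failure */

int main(void) {
    int now = 100;                                  /* pretend current time (seconds) */
    int last_heartbeat[NODES] = { 98, 91, 99 };     /* last time each node was heard from */

    for (int n = 0; n < NODES; n++) {
        if (now - last_heartbeat[n] > TIMEOUT)
            printf("node %d: suspected failed (no heartbeat for %d s)\n",
                   n, now - last_heartbeat[n]);
        else
            printf("node %d: alive\n", n);
    }
    return 0;
}
```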
  • 90. 7 | P a g e www.MoussaAcademy.com 00201007153601 VI. Distributed File Systems Definitions • Distributed File System (DFS): a file system whose clients, servers, and storage are distributed among machines. • Service: an entity running on one or more machines supplying possibly unknown clients. • Server: the service software running on a single machine. • Client: a process that can invoke a service. Main Points • A low-level inter-machine interface is used for cross-machine interaction. WHAT ARE THE WIDELY USED ARCHITECTURES? 1. Client-server model. 2. Cluster-based model. WHAT ARE THE CHALLENGES? 1. Naming and transparency. 2. Remote file access. 3. Caching and cache consistency. Client-Server Model • The server stores files and metadata on its storage. • Clients contact the server to request files. • Design problems  If the server crashes, the whole service fails.  The server is a bottleneck (which can cause problems with scalability and bandwidth). • Examples: NFS, OpenAFS. Cluster-Based Model • Built to be more fault-tolerant and scalable than a client-server DFS. • Clients connect to a master metadata server, while multiple data servers hold portions of files. • File chunks are replicated n times. • Examples: Google File System (GFS), Hadoop Distributed File System (HDFS). WHAT WAS GFS INFLUENCED BY? 1. Hardware failure should be expected routinely. 2. Most files are changed by appending new data (rather than overwriting existing data). 3. A modularized software layer, MapReduce, sits on top of GFS to carry out large-scale parallel computations. • The Hadoop framework is also stackable and modularized.
  • 91. 8 | P a g e www.MoussaAcademy.com 00201007153601 Naming and Transparency • Naming: the mapping between logical and physical objects. • Multi-level mapping: an abstraction of a file that hides the details of where it is stored. • Transparent DFS: hides where on the network the file is stored. • A file may be replicated multiple times, so the mapping returns the set of locations of the file's replicas. • Location transparency: the file name does not reveal the file's physical location. • Location independence: the file name does not have to be changed when the file's physical location changes. • Most DFSs use a static, location-transparent mapping for user-level names.  OpenAFS supports file migration.  Hadoop supports file migration but without POSIX standards, which hides information from clients.  Amazon S3 provides storage on demand via APIs, placing storage and moving data as necessary. WHAT ARE THE NAMING SCHEME APPROACHES? 1. File name = host name + local name (neither location transparent nor location independent). 2. Attach remote directories to local directories.  Gives the appearance of a coherent directory tree.  Only previously mounted remote directories can be accessed transparently. 3. Single global structure (spans all files).  If a server is unavailable, some directories on different machines also become unavailable. Remote File Access • Remote-service mechanism (one transfer approach).  A request for access is sent to the server and the results are forwarded back to the user.  RPC is the most common way of implementing remote service. • Network traffic is reduced by storing recently accessed blocks in a cache. • Cache-consistency problem: keeping the cached copies consistent with the master file.  Caching in a DFS can be called network virtual memory.
  • 92. 9 | P a g e www.MoussaAcademy.com 00201007153601 Caching and Caching Consistency WHAT ARE THE ADVANTAGES OF DISK CACHES? 1. Reliable. 2. Cached data kept on disk do not need to be fetched again during recovery. WHAT ARE THE ADVANTAGES OF MAIN-MEMORY CACHES? 1. Can make workstations diskless. 2. Quicker data access. 3. Performance speeds up as memories get bigger. 4. The server cache is in main memory regardless of where users are. 5. A single caching mechanism serves both servers and users. Cache Update Policy • Write-through: write data to the server as soon as they are placed in the cache.  Reliable, but poor performance. • Write-back (delayed-write): modifications are written later.  Write accesses complete quickly (if the same data are overwritten before the flush, only the last version is written to disk).  Unreliable, as unwritten data are lost on a crash.  Variation #1: scan the cache regularly and flush blocks that have changed since the last scan.  Variation #2: write-on-close, writing data back to the server when the file is closed. (A short sketch contrasting the two policies follows this slide.) Consistency • Client-initiated approach  The client begins a validity check.  The server checks whether the local data are consistent with the master copy. • Server-initiated approach  The server keeps records for each client. • In cluster-based DFSs  Cache consistency is more complicated because of the metadata server and the replicated data chunks.  GFS allows random writes with concurrent writers.  HDFS allows append-only write operations.
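A minimal sketch contrasting the two cache-update policies: write-through pushes every write to the server immediately, while write-back (delayed-write) only marks the cached block dirty and flushes later, e.g. on close. The "server" here is just a printf standing in for a remote or disk write; the structure and function names are illustrative.

```c
/* Minimal sketch: write-through vs. write-back cache update policies. */
#include <stdio.h>
#include <string.h>

struct cached_block { char data[32]; int dirty; };

static void server_write(const char *data) {        /* stand-in for the remote/disk write */
    printf("server receives: %s\n", data);
}

static void write_through(struct cached_block *b, const char *data) {
    strncpy(b->data, data, sizeof b->data - 1);      /* update the cache ... */
    server_write(b->data);                           /* ... and the server at once: reliable, slow */
}

static void write_back(struct cached_block *b, const char *data) {
    strncpy(b->data, data, sizeof b->data - 1);      /* update the cache only: fast, lost on crash */
    b->dirty = 1;
}

static void flush_on_close(struct cached_block *b) { /* delayed write, e.g. write-on-close */
    if (b->dirty) { server_write(b->data); b->dirty = 0; }
}

int main(void) {
    struct cached_block b = { "", 0 };
    write_through(&b, "v1");
    write_back(&b, "v2");      /* overwritten before the flush ... */
    write_back(&b, "v3");      /* ... so only the last version reaches the server */
    flush_on_close(&b);
    return 0;
}
```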