This document discusses processes and threads. It covers the process model and multiprogramming, process creation and termination, process hierarchies and states, thread models, usage, and implementations. It also covers interprocess communication, including race conditions, mutual exclusion, and solutions such as strict alternation, the TSL instruction, and Peterson's algorithm, and it ends with the priority inversion problem.
1. Chapter 2
Processes and Threads
2.1 Processes
2.2 Threads
2.3 Interprocess communication
2.4 Classical IPC problems
2.5 Scheduling
2. Processes
The Process Model
• Multiprogramming of four programs
• Conceptual model of 4 independent, sequential processes
• Only one program active at any instant
3. Process Creation
Principal events that cause process creation:
1. System initialization
2. Execution of a process-creation system call by a
running process (sketched below)
3. User request to create a new process
4. Initiation of a batch job
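The second event is the common case on UNIX: a running process issues fork(). A minimal sketch using the standard POSIX calls (this example is not from the slides):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: fork() returned 0 */
        printf("child: pid=%d\n", (int)getpid());
    } else {                         /* parent: fork() returned the child's PID */
        waitpid(pid, NULL, 0);       /* wait for the child to exit */
        printf("parent: child %d finished\n", (int)pid);
    }
    return 0;
}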
4. Process Termination
Conditions which terminate processes
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)
5. Process Hierarchies
• Parent creates a child process; child processes
can create their own child processes
• Forms a hierarchy
– UNIX calls this a "process group"
• Windows has no concept of a process hierarchy
– all processes are created equal
6. Process States (1)
• Process Transitions
• Possible process states
– running
– blocked
– ready
• Transitions between states shown
7. Process States (2)
• Lowest layer of process-structured OS
– handles interrupts, scheduling
• Above that layer are sequential processes
8. Implementation of Processes
The OS organizes the data about each process in a table naturally
called the process table. Each entry in this table is called
a process table entry or process control block (PCB).
Characteristics of the process table:
1. One entry per process.
2. The central data structure for process management.
3. A process state transition (e.g., moving from blocked to ready)
is reflected by a change in the value of one or more fields in the
PCB.
4. We have converted an active entity (process) into a data
structure (PCB). Finkel calls this the level principle: an active
entity becomes a data structure when looked at from a lower level.
9. Implementation of Processes
A process in an operating system is represented by a
data structure known as a Process Control Block (PCB)
or process descriptor.
The PCB contains important information about the
specific process, including:
1. The current state of the process, i.e., whether it is
ready, running, waiting, or whatever.
2. Unique identification of the process in order to track
"which is which" information.
3. A pointer to the parent process.
10. Implementation of Processes
4. Similarly, a pointer to the child process (if it exists).
5. The priority of the process (a part of CPU-scheduling
information).
6. Pointers to locate the memory of the process.
7. A register save area.
8. The processor it is running on.
The PCB is a central store that allows the operating
system to locate key information about a process.
Thus, the PCB is the data structure that defines a
process to the operating system (sketched below).
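As an illustration only, the fields listed on slides 9 and 10 can be pictured as a C structure. Every field name here is hypothetical, and a real kernel's PCB is far larger and more complex:

/* Illustrative sketch of a PCB; all names are hypothetical. */
typedef enum { READY, RUNNING, BLOCKED } proc_state_t;

struct pcb {
    int            pid;          /* unique process identifier (item 2) */
    proc_state_t   state;        /* current state (item 1)             */
    struct pcb    *parent;       /* pointer to parent process (item 3) */
    struct pcb    *child;        /* pointer to child, if any (item 4)  */
    int            priority;     /* CPU-scheduling priority (item 5)   */
    void          *memory_map;   /* locates the process's memory (6)   */
    unsigned long  regs[16];     /* register save area (item 7)        */
    int            cpu;          /* processor it is running on (8)     */
};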
28. Scheduler Activations
• Goal – mimic functionality of kernel threads
– gain performance of user space threads
• Avoids unnecessary user/kernel transitions
• Kernel assigns virtual processors to each process
– lets runtime system allocate threads to processors
• Problem:
Fundamental reliance on kernel (lower layer)
calling procedures in user space (higher layer)
29. Pop-Up Threads
• Creation of a new thread when message arrives
(a) before message arrives
(b) after message arrives
30. Making Single-Threaded Code Multithreaded (1)
Conflicts between threads over the use of a global variable
32. Interprocess Communication (IPC)
• Processes frequently need to communicate with
other processes (e.g., a shell pipeline).
• Interrupts are one way to achieve IPC.
• But we require a well-structured way to
achieve IPC.
33. Interprocess Communication (IPC)
• Issues to be considered:
1. How one process can pass information to
another process.
2. Making sure that two or more processes don't
get in each other's way when engaging their
critical regions.
3. Proper sequencing of processes when
dependencies are present.
Ex.: process A produces data and
process B has to print this data
(a pipe-based sketch of this follows).
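For the sequencing example in point 3, a minimal sketch using a UNIX pipe. The pipe mechanism is an assumption here; the slides only name the A-produces/B-prints example:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1)                 /* fd[0] = read end, fd[1] = write end */
        return 1;

    if (fork() == 0) {                  /* child acts as process B: the printer */
        char buf[64];
        close(fd[1]);                   /* B only reads */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("B prints: %s\n", buf);
        }
        close(fd[0]);
    } else {                            /* parent acts as process A: the producer */
        const char *data = "some data";
        close(fd[0]);                   /* A only writes */
        write(fd[1], data, strlen(data));
        close(fd[1]);                   /* closing = EOF for the reader */
        wait(NULL);
    }
    return 0;
}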
34. Interprocess Communication
Race Conditions
• In an OS, processes working together may
share resources (storage).
• Shared storage
1. may be in primary memory
2. may be a shared file.
35. IPC – Race conditions
1. A process that wants to print a
file enters the file name in a
special spooler directory
(shared).
2. Another process, the printer
daemon, periodically checks
whether there are any files to
be printed; if there are, it
prints them and then removes
their names from the directory.
[Figure: print spooler. Two processes want to access
shared memory at the same time.]
36. IPC – Race conditions
Here,
in: points to the next free
slot in the directory;
out: points to the next file
to be printed;
both are shared variables.
[Figure: print spooler]
37. IPC – Race conditions
Following might happen:
1. Process A reads in and stores
the value 7 in a local variable
called next_free_slot.
2. Just then a clock interrupt occurs
and the CPU decides that process
A has run long enough.
3. It switches to process B.
4. Process B also reads in and also
gets a 7.
5. It too stores 7 in its local
variable next_free_slot.
[Figure: print spooler]
38. IPC – Race conditions
6. Process B continues to run,
stores the name of its file
in slot 7, and updates in to be 8.
7. Now process B goes off and
does other things.
8. Eventually, process A runs
again, starting from the place
it left off.
9. It looks at next_free_slot.
10. It finds 7 there.
11. It writes its file name in slot 7,
erasing the name that
process B just put there.
[Figure: print spooler]
39. IPC – Race conditions
12. Then it computes next_free_slot
+ 1, which is 8.
13. Now it sets in to 8.
14. The spooler directory is now
internally consistent.
15. So the printer daemon
process will not notice
anything wrong.
16. But process B never gets its
job done.
17. A situation like this is known as
a RACE CONDITION.
[Figure: print spooler]
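The same lost-update race can be reproduced in a few lines. This sketch uses two threads instead of two processes, only because threads share memory without extra setup; next_free_slot plays the role of the spooler's in variable:

#include <pthread.h>
#include <stdio.h>

int next_free_slot = 0;              /* shared, like "in" in the spooler */

void *use_slots(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        int slot = next_free_slot;   /* read shared value into a local   */
        /* a context switch here lets the other thread read the same
         * slot number, exactly as in steps 1-5 above */
        next_free_slot = slot + 1;   /* write the updated value back     */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, use_slots, NULL);
    pthread_create(&b, NULL, use_slots, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* With no mutual exclusion the result is usually well below
     * 200000: updates are lost, like process B's spooler entry. */
    printf("next_free_slot = %d (expected 200000)\n", next_free_slot);
    return 0;
}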
40. Mutual exclusion & Critical Regions
• We must avoid race conditions by finding some
way to prohibit more than one process from reading
and writing the shared data at the same time.
• We can achieve this by enforcing
MUTUAL EXCLUSION.
41. Mutual exclusion & Critical Regions
• MUTUAL EXCLUSION: some way of making
sure that if one process is using a shared
variable or file, the other processes will be
excluded from doing the same thing.
• CRITICAL REGION: the part of the program
where the shared memory is accessed is called
the critical region.
42. Mutual exclusion & Critical Regions
Conditions required to avoid race conditions:
1. No two processes may be simultaneously inside their
critical regions.
2. No assumptions may be made about speeds or the
number of CPUs.
3. No process running outside its critical region may block
other processes.
4. No process should have to wait forever to enter its
critical region.
43. Mutual exclusion using critical regions
• CRITICAL REGION: the part of the program where the
shared memory is accessed is called the critical region.
44. Mutual Exclusion with Busy Waiting
BUSY WAITING: continually testing a variable until
some value appears.
Proposals for achieving mutual exclusion:
• Disabling interrupts
• Lock variables
• Strict alternation
• Peterson's solution
• The TSL instruction
45. Mutual Exclusion with Busy Waiting
Disabling Interrupts
• It is the simplest solution.
• Each process should disable all interrupts just after entering its
critical region
• and re-enable all interrupts just before leaving
its critical region.
• With interrupts disabled, no clock interrupts occur.
• The CPU can't switch from process to process without clock
interrupts.
Disadvantages:
• What happens if one user disables interrupts and then never
turns them on again?
• If the system is a multiprocessor system, disabling interrupts
affects only the CPU that executed the disable instruction.
46. Mutual Exclusion with Busy Waiting
LOCK VARIABLES
• It is the simplest software solution.
• We can have a single shared (lock) variable,
kept initially 0.
• When a process wants to enter its critical region, it first tests
the lock variable.
• If the lock is 0, the process sets it to 1 and enters the
critical region.
• If the lock is 1, the process just waits for it to become 0.
Disadvantage:
• Unfortunately, this idea contains exactly the same
problem that we saw in the spooler directory example
(sketched below).
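A sketch of the idea, and of exactly where it fails:

/* Sketch of the flawed lock-variable idea: an ordinary shared
 * integer, 0 = free, 1 = taken. */
int lock = 0;

void enter_region(void)
{
    while (lock != 0)    /* (1) test: wait until the lock is free */
        ;                /*     busy wait                         */
    lock = 1;            /* (2) set: claim the lock               */
    /* A context switch between (1) and (2) lets two processes
     * both see lock == 0 and both enter the critical region:
     * the spooler race all over again. */
}

void leave_region(void)
{
    lock = 0;            /* release the lock */
}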
47. Mutual Exclusion with Busy Waiting (1) Strict Alternation
Notice the semicolons terminating the while statements in
the figure (reconstructed below).
• Busy waiting: continuously testing a variable until some value
appears, using it as a lock.
• A lock that uses busy waiting is called a spin lock.
• It should usually be avoided, since it wastes CPU time.
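The figure is not reproduced in this transcription; a sketch of the strict-alternation loops as Tanenbaum presents them, where critical_region() and noncritical_region() are placeholders for the real work:

extern void critical_region(void);
extern void noncritical_region(void);

int turn = 0;            /* shared; whose turn is it? initially 0 */

void process_0(void)
{
    while (1) {
        while (turn != 0)    /* busy wait; note the semicolon:   */
            ;                /* the loop body is empty           */
        critical_region();
        turn = 1;            /* hand the turn to process 1       */
        noncritical_region();
    }
}

void process_1(void)
{
    while (1) {
        while (turn != 1)    /* busy wait                        */
            ;
        critical_region();
        turn = 0;            /* hand the turn back to process 0  */
        noncritical_region();
    }
}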
48. 1. The integer variable turn keeps track of whose turn it is
to enter the CR.
2. Initially, process 0 inspects turn, finds it to be 0, and
enters its CR.
3. Process 1 also finds it to be 0 and therefore sits in a tight
loop continually testing turn to see when it becomes 1.
4. When process 0 leaves the CR, it sets turn to 1, to allow
process 1 to enter its CR.
5. Suppose that process 1 finishes its CR quickly, so both
processes are in their non-CRs (with turn set to 0).
49. 6. Now process 0 executes its whole loop quickly, exiting its CR
and setting turn to 1.
7. At this point turn is 1 and both processes are executing in
their nonCRs.
8. Process 0 finishes its nonCR and goes back to the top of its
loop.
9. Unfortunately, it is not permitted to enter its CR: turn is 1
and process 1 is busy with its nonCR.
10. It hangs in its while loop until process 1 sets turn to 0.
11. This algorithm does avoid all races, but it violates condition 3:
process 1, running outside its critical region, blocks process 0.
50. Mutual Exclusion with Busy Waiting TSL Instruction
• Let's take some help from the hardware
• Many multiprocessor systems have an instruction:
TSL RX, LOCK (Test and Set Lock)
• It works as follows:
1. It reads the contents of the memory word LOCK into register RX
and then stores a nonzero value at the memory address LOCK
(it sets the lock)
2. No other processor can access the memory word until the
instruction is finished
3. In other words, the CPU executing the TSL instruction locks the
memory bus to prohibit other CPUs from accessing memory
until it is done
51. Mutual Exclusion with Busy Waiting TSL Instruction
1. To use the TSL instruction, we use a shared variable, LOCK, to
coordinate access to shared memory
2. When LOCK = 0, any process may claim it by setting it to 1
3. When LOCK = 1, processes must wait
Entering and leaving a critical region using the TSL instruction
can be written as in the sketch below.
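The referenced figure is missing here; the following C sketch expresses the same enter/leave pair, with a GCC-style atomic builtin standing in for the hardware TSL instruction:

int lock = 0;                    /* shared: 0 = free, 1 = taken */

void enter_region(void)
{
    /* Atomically fetch the old value of lock and store 1 into it,
       which is what TSL RX,LOCK does in one bus-locked step.
       Keep retrying until the old value was 0, i.e. we got the lock. */
    while (__sync_lock_test_and_set(&lock, 1) != 0)
        ;                        /* busy wait: this is a spin lock */
}

void leave_region(void)
{
    __sync_lock_release(&lock);  /* store 0: the lock is free again */
}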
52. Peterson's Solution to achieve Mutual Exclusion.
Peterson's algorithm is shown in Fig. 2-21, reproduced after this slide.
This algorithm consists of two procedures written in ANSI C.
Before using the shared variables (i.e., before entering its critical
region), each process calls enter_region with its own process
number, 0 or 1, as parameter.
This call will cause it to wait, if need be, until it is safe to enter.
After it has finished with the shared variables, the process calls
leave_region to indicate that it is done and to allow the other
process to enter, if it so desires.
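Fig. 2-21 is not reproduced in this text; the ANSI C code of Peterson's solution it shows is, in essence:

#define FALSE 0
#define TRUE  1
#define N     2                  /* number of processes */

int turn;                        /* whose turn is it? */
int interested[N];               /* all values initially FALSE */

void enter_region(int process)   /* process is 0 or 1 */
{
    int other = 1 - process;     /* number of the other process */
    interested[process] = TRUE;  /* show that you are interested */
    turn = process;              /* set flag */
    while (turn == process && interested[other] == TRUE)
        ;                        /* busy wait */
}

void leave_region(int process)   /* process: who is leaving */
{
    interested[process] = FALSE; /* departure from critical region */
}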
53. Peterson's Solution
Let us see how this solution works.
1.Initially neither process is in its critical region.
2.Now process 0 calls enter_region.
3. It indicates its interest by setting its array element
(interested[0] = TRUE) and setting turn to 0.
4.Since process 1 is not interested, enter_region returns
immediately.
5.If process 1 now calls enter_region, it will hang there until
interested[0] goes to FALSE, an event that only happens when
process 0 calls leave_region to exit the critical region.
54. Peterson's Solution
6. Now consider the case that both processes call enter_region
almost simultaneously.
7. Both will store their process number in turn.
8. Whichever store is done last is the one that counts; the first one
is overwritten and lost.
9. Suppose that process 1 stores last, so turn is 1.
10. When both processes come to the while statement, process 0
executes it zero times and enters its critical region.
11. Process 1 loops and does not enter its critical region until
process 0 exits its critical region.
55. Mutual Exclusion with Busy Waiting (2)
(Figure: Peterson's solution for achieving mutual exclusion; see the code above.)
56. PRIORITY INVERSION PROBLEM
1. In scheduling, priority inversion is the scenario where a
low-priority task holds a shared resource that is required
by a high-priority task.
2. This causes the execution of the high-priority task to be
blocked until the low-priority task has released the
resource, effectively "inverting" the relative priorities of
the two tasks.
3. If some other medium-priority task, one that does not
depend on the shared resource, attempts to run in the
interim, it will take precedence over both the low-priority
task and the high-priority task.
57. PRIORITY INVERSION PROBLEM
Priority inversion can:
1. Cause problems in real-time systems
2. Reduce the performance of the system
3. Reduce system responsiveness, which can lead to the
violation of response-time guarantees
58. PRIORITY INVERSION EXAMPLE
1. Consider three tasks A, B, C with priorities A > B > C.
2. Assume these tasks are served by a common server (sequentially).
3. Assume A and C share a critical resource.
4. Suppose C has the server and acquires the resource.
5. A requests the server, preempting C.
6. A then wants the resource.
7. Now C must get the server back, while A blocks waiting for C to
release the resource.
8. Meanwhile, B requests the server.
9. Since B > C, B can run arbitrarily long, all the while with A being
blocked.
10. But A > B, which is the anomaly (priority inversion).
59. Sleep & Wakeup
• Both Peterson's solution and the TSL solution have the
defect of requiring busy waiting
• So we can have problems like:
1. CPU time is wasted
2. The priority inversion problem
These problems can be solved by using the Sleep and
Wakeup primitives (system calls).
60. Sleep & Wakeup
• Sleep: a system call that causes the
caller to block, that is, to be suspended until
another process wakes it up
• Wakeup: a system call that awakens a blocked
process. It has one parameter: the process
to be awakened.
61. Producer – Consumer Problem
(Bounded Buffer Problem)
• It consists of two processes, Producer &
Consumer
• They share a common fixed size Buffer
• Producer puts information into Buffer
• Consumer takes information out of buffer
62. Producer – Consumer Problem
(Bounded Buffer Problem)
• Trouble:
When the producer wants to put information into the
buffer, but the buffer is already full
• Solution:
1. The producer goes to sleep
2. It is awakened when the consumer removes an item
or items from the buffer
63. Producer – Consumer Problem
(Bounded Buffer Problem)
• Trouble:
When the consumer wants to take information
from the buffer, but the buffer is empty
• Solution:
1. The consumer goes to sleep
2. It is awakened when the producer puts information
into the buffer
66. Sleep and Wakeup
Producer Module (figure): the producer half of a producer-consumer
solution with a fatal race condition. Reason: access to count is
unconstrained (see the example in the book).
67. Sleep and Wakeup
Consumer Module (figure): the matching consumer half, with the same
fatal race. Both modules are sketched below.
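The two figures are not reproduced here; a C-style sketch of the flawed code they show, where sleep() and wakeup() stand for the primitives of the earlier slides and the item routines are placeholders:

#define TRUE 1
#define N    100                  /* slots in the buffer */
int count = 0;                    /* items in the buffer (shared!) */

void producer(void)
{
    int item;
    while (TRUE) {
        item = produce_item();    /* generate the next item      */
        if (count == N) sleep();  /* buffer full: go to sleep    */
        insert_item(item);
        count = count + 1;        /* unconstrained access: race! */
        if (count == 1)           /* buffer was empty, so the    */
            wakeup(consumer);     /* consumer may be asleep      */
    }
}

void consumer(void)
{
    int item;
    while (TRUE) {
        if (count == 0) sleep();  /* buffer empty: go to sleep   */
        item = remove_item();
        count = count - 1;        /* unconstrained access: race! */
        if (count == N - 1)       /* buffer was full, so the     */
            wakeup(producer);     /* producer may be asleep      */
        consume_item(item);
    }
}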
68. Sleep and Wakeup
• Because count is accessed in an unconstrained
manner, a fatal race condition occurs here
• As a result, some wakeup calls are lost
• A wakeup waiting bit can be used to avoid this
• The wakeup bit is set when a wakeup is sent to a
process that is still awake
• Later, when the process tries to go to sleep, if the
wakeup bit is set, the bit is turned off and the
process stays awake
69. Problem With Sleep and Wakeup
The problem with this solution is that it contains a race
condition that can lead to a deadlock. Consider the following
scenario:
1. The consumer has just read the variable itemCount, noticed
it is zero, and is just about to move inside the if block.
2. Just before calling sleep, the consumer is interrupted and the
producer is resumed.
3. The producer creates an item, puts it into the buffer, and
increases itemCount.
70. Problem With Sleep and Wakeup
4. Because the buffer was empty prior to the last addition, the
producer tries to wake up the consumer.
5. Unfortunately, the consumer wasn't yet sleeping, and the
wakeup call is lost. When the consumer resumes, it goes to
sleep and will never be awakened again, because the
consumer is only awakened by the producer when itemCount
is equal to 1.
6. The producer will loop until the buffer is full, after which it
will also go to sleep.
7. Since both processes will sleep forever, we have run into a
deadlock. This solution is therefore unsatisfactory.
71. Semaphores
• A semaphore is an integer variable
• It is used to count the number of wakeups
saved for future use
• A semaphore can have:
• Value 0: no wakeups were saved
• A positive value: that many wakeups are pending
Semaphore operations:
1. Down
2. Up
72. Operations on Semaphores
• Down operation
1. It checks the value of the semaphore.
2. If it is greater than zero, it decrements the
value by 1 and just continues.
3. If it is zero, the process is put to sleep
without completing the Down for the moment.
4. Checking the value, changing it, and possibly going
to sleep are all done as a single, indivisible atomic action.
73. Operations on Semaphores
• Up operation
1. It increments the value of the semaphore addressed.
2. If one or more processes were sleeping on that semaphore,
unable to complete an earlier Down, one of them is chosen by
the system.
3. That process is allowed to complete its Down (decrementing
the semaphore by 1).
4. Thus, after an Up on a semaphore with processes sleeping
on it, the semaphore will still be 0.
5. But there will be one fewer process sleeping on it.
6. Like Down, the Up operation is indivisible.
7. No process ever blocks doing an Up. A sketch of both
operations follows.
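Written out as deliberately non-atomic C for clarity, the two operations behave roughly as follows; sleep_on, wake_one, and nobody_sleeping_on are assumed kernel helpers, and a real implementation performs each whole operation as one indivisible action:

void down(int *sem)
{
    if (*sem > 0)
        *sem = *sem - 1;     /* a saved wakeup: consume it      */
    else
        sleep_on(sem);       /* block until someone does an Up  */
}

void up(int *sem)
{
    if (nobody_sleeping_on(sem))
        *sem = *sem + 1;     /* save the wakeup for future use  */
    else
        wake_one(sem);       /* let one sleeper finish its Down */
}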
81. Producer – Consumer Problem using Semaphore
• This solution uses three semaphores:
(1) full, (2) empty, and (3) mutex
full: counts the number of slots that are full (initially 0)
empty: counts the number of slots that are empty (initially N)
mutex: makes sure that the producer and consumer do not
access the buffer at the same time (initially 1)
The semaphores are used here in two different ways:
1. For synchronization (full and empty)
2. To guarantee mutual exclusion (mutex)
A sketch of this solution follows.
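As a concrete rendering, here is a sketch of the three-semaphore solution using POSIX semaphores and threads (the buffer handling is simplified; initial values are empty = N, full = 0, mutex = 1):

#include <pthread.h>
#include <semaphore.h>

#define N 100                    /* slots in the buffer */

sem_t mutex;                     /* binary: guards the buffer       */
sem_t empty;                     /* counts empty slots, starts at N */
sem_t full;                      /* counts full slots, starts at 0  */
int buffer[N], in = 0, out = 0;

void *producer(void *arg)
{
    int item = 0;
    (void)arg;
    for (;;) {
        sem_wait(&empty);        /* down(empty): claim a free slot */
        sem_wait(&mutex);        /* down(mutex): enter the CR      */
        buffer[in] = item++;
        in = (in + 1) % N;
        sem_post(&mutex);        /* up(mutex): leave the CR        */
        sem_post(&full);         /* up(full): one more full slot   */
    }
}

void *consumer(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(&full);         /* down(full): wait for an item   */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
        (void)item;              /* consume the item here          */
    }
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full,  0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);       /* never reached in this sketch */
    return 0;
}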
85. Mutexes
• A mutex is a variable
• It can be in one of two states: unlocked or locked
• Only one bit is required to represent it
• In practice, an integer is often used, with 0
meaning unlocked and all other values meaning
locked
• When a process (or thread) needs access to a
critical region, it calls mutex_lock
• If the mutex is currently unlocked, the call succeeds
and the calling process (or thread) is free to enter
the critical region
86. Mutexes
• On the other hand, if the mutex is already locked,
the calling process (or thread) is blocked until
the process (or thread) in the critical region
finishes and calls mutex_unlock.
• Because mutexes are so simple, they can easily
be implemented in user space if a TSL
instruction is available.
87. Mutexes
The code for mutex_lock and mutex_unlock, for
use with a user-level threads package, is
sketched below.
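The original figure gives this pair in assembly language; an equivalent C sketch, with the GCC-style __sync builtins standing in for TSL and sched_yield() standing in for thread_yield:

#include <sched.h>

int mutex = 0;                    /* shared: 0 = unlocked, 1 = locked */

void mutex_lock(void)
{
    while (__sync_lock_test_and_set(&mutex, 1) != 0)
        sched_yield();            /* mutex busy: give up the CPU so
                                     another thread can run, instead
                                     of busy waiting */
}

void mutex_unlock(void)
{
    __sync_lock_release(&mutex);  /* store 0: the mutex is free */
}

The difference from the TSL spin lock shown earlier is the yield: a user-level thread that fails to get the mutex hands over the CPU rather than spinning, since no other thread of the same process can run (and release the mutex) while it spins.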
93. MONITORS
• The problem with semaphores:
• Suppose that the two Downs in the producer's code
were reversed in order...
• Both processes would stay blocked forever
• If resources are not tightly controlled, "chaos
will ensue":
- race conditions
• To make it easier to write correct programs, a
higher-level synchronization primitive called
a monitor was proposed.
94. The Solution
• Monitors provide control by allowing only one process
to access a critical resource at a time
• A monitor is a collection of procedures, variables and
data structures that are all grouped together in a
special kind of module or package.
• Processes may call the procedures in a monitor
whenever they want to, but they cannot directly access
the monitor's internal data structures from procedures
declared outside the monitor.
• Monitors have an important property that makes them
useful for achieving mutual exclusion: only one process
can be active in a monitor at any instant.
• A monitor's procedures may access only its local variables.
95. An Abstract Monitor
name : monitor
… some local declarations
… initialize local data
procedure name(…arguments)
… do some work
… other procedures
97. Monitors
• Outline of the producer-consumer problem with monitors
– only one monitor procedure is active at a time
– the buffer has N slots
A C-style emulation is sketched below.
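The outline itself is not reproduced here. C has no monitor construct, so as an illustration, a common emulation of the same idea uses a pthread mutex (only one thread active "inside" the monitor) plus condition variables; a sketch with the buffer contents elided:

#include <pthread.h>

#define N 100
static int count = 0;             /* items currently in the buffer */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void insert(int item)             /* a "monitor procedure" */
{
    pthread_mutex_lock(&m);       /* only one thread active inside */
    while (count == N)
        pthread_cond_wait(&not_full, &m);  /* wait: buffer is full */
    /* ... put item into the buffer ... */
    (void)item;
    count++;
    pthread_cond_signal(&not_empty);       /* an item is available */
    pthread_mutex_unlock(&m);
}

int remove_item(void)             /* a "monitor procedure" */
{
    pthread_mutex_lock(&m);
    while (count == 0)
        pthread_cond_wait(&not_empty, &m); /* wait: buffer is empty */
    /* ... take an item out of the buffer ... */
    count--;
    pthread_cond_signal(&not_full);        /* a slot is free */
    pthread_mutex_unlock(&m);
    return 0;                     /* the removed item, elided here */
}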
98. Things Needed to Enforce Monitor
• A solution lies in the introduction of condition
variables, along with two operations on them:
Wait and Signal
• "Wait" operation
– forces the running process to sleep
• "Signal" operation
– wakes up a sleeping process
• A condition (condition variable)
– stores who is waiting for a particular reason
– implemented as a queue
99. A Running Example – Kitchen
kitchen : monitor
occupied : Boolean; occupied := false;
nonOccupied : condition;

procedure enterKitchen
if occupied then nonOccupied.wait;
occupied := true;

procedure exitKitchen
occupied := false;
nonOccupied.signal;
100. Multiple Conditions
• Sometimes desirable to be able to wait on multiple
things
• Can be implemented with multiple conditions
• Example:
• Two reasons to enter kitchen
- cook (remove clean dishes)
- clean (add clean dishes)
• Two reasons to wait:
– going to cook, but no clean dishes
– going to clean, but no dirty dishes
101. Emerson's Kitchen
kitchen : monitor
cleanDishes, dirtyDishes : condition;
dishes, sink : stack;
dishes := stack of 10 dishes;
sink := stack of 0 dishes;

procedure cook
if dishes.isEmpty then cleanDishes.wait;
sink.push( dishes.pop );
dirtyDishes.signal;

procedure cleanDish
if sink.isEmpty then dirtyDishes.wait;
dishes.push( sink.pop );
cleanDishes.signal;

Note: the one-person-in-the-kitchen condition is relaxed here. Two
rules apply instead: there must be at least one dish in the sink to
clean a dish, and at least one clean dish in dishes to cook.
102. Condition Queue
• Checking if any process is waiting on a
condition:
– “condition.queue” returns true if a process is
waiting on condition
• Example: Doing dishes only if someone is
waiting for them
103. Summary
• Advantages
– Data access synchronization simplified (vs.
semaphores or locks)
– Better encapsulation
• Disadvantages:
– Deadlock still possible (in monitor code)
– Programmer can still botch use of monitors
– No provision for information exchange between
machines
104. Interprocess Communication (IPC)
Mechanism for processes to communicate and
synchronize their actions:
• via shared memory
• via a messaging system – processes communicate without
resorting to shared variables
Messaging systems and shared memory are not mutually
exclusive: they can be used simultaneously within a single OS
or a single process.
An IPC facility provides two operations:
• send(message) – the message size can be fixed or variable
• receive(message)
105. Producer-Consumer using IPC
Producer
repeat
…
produce an item in nextp;
…
send(consumer, nextp);
until false;
Consumer
repeat
receive(producer, nextc);
…
consume item from nextc;
…
until false;
106. IPC via Message Passing
If processes P and Q wish to communicate,
they need to:
establish a communication link between them
exchange messages via send/receive
Fixed vs. Variable size message
Fixed message size - straightforward physical
implementation, programming task is difficult due
to fragmentation
Variable message size - simpler programming,
more complex physical implementation.
107. Producer-Consumer using Message Passing
(The producer and consumer loops are identical to those on slide 105.)
108. Direct Communication
Sender and Receiver processes must name
each other explicitly:
send(P, message) - send a message to process P
receive(Q, message) - receive a message from
process Q
Properties of communication link:
Links are established automatically.
A link is associated with exactly one pair of
communicating processes.
Exactly one link between each pair.
A link may be unidirectional, but is usually bidirectional.
109. Indirect Communication
Messages are directed to and received from mailboxes
(also called ports)
Unique ID for every mailbox.
Processes can communicate only if they share a
mailbox.
Send(A, message) /* send message to mailbox A */
Receive(A, message) /* receive message from
mailbox A */
Properties of communication link
Link established only if processes share a common
mailbox.
Link can be associated with many processes.
A pair of processes may share several communication links.
111. Mailboxes (cont.)
Operations
create a new mailbox
send/receive messages through mailbox
destroy a mailbox
Issue: Mailbox sharing
P1, P2 and P3 share mailbox A.
P1 sends message, P2 and P3 receive… who
gets message??
Possible Solutions
disallow links between more than 2 processes
allow only one process at a time to execute receive
operation
allow the system to arbitrarily select the receiver
and then notify the sender
112. Barriers
This mechanism is used for groups of processes, rather than
two-process producer-consumer situations.
• Use of a barrier:
– processes approach the barrier
– all processes except the last are blocked at the barrier
– when the last process arrives, all are let through
A minimal sketch follows.
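As an illustration, on systems that provide POSIX barriers the mechanism looks like this (NTHREADS and the messages are illustrative):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
static pthread_barrier_t barrier;

void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld: before the barrier\n", id);
    pthread_barrier_wait(&barrier);   /* block here until all
                                         NTHREADS have arrived */
    printf("thread %ld: after the barrier\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;   /* no "after" line prints before every "before" line */
}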
113. Dining Philosophers (1)
• Philosophers alternately eat and think
• Eating needs two forks
• A philosopher picks up one fork at a time
• How can deadlock be prevented?
120. Scheduling
Introduction to Scheduling (1)
• Bursts of CPU usage alternate with periods of I/O wait
– A CPU-bound (compute-bound) process spends most of its time
computing; it has long CPU bursts and infrequent I/O waits
– An I/O-bound process spends most of its time waiting for
I/O; it has short CPU bursts and frequent I/O waits
121. Introduction to Scheduling
Types of Scheduling Algorithms
• Non-preemptive: a non-preemptive scheduling
algorithm picks a process to run and then just
lets it run until it blocks or until it voluntarily
releases the CPU; the process cannot be forcibly suspended
• Preemptive: a preemptive scheduling algorithm
picks a process and lets it run for at most
some fixed time. If it is still running at the end of
that time interval, it is suspended and the scheduler
picks another process to run.
124. Scheduling in Batch Systems
• The following methods are used:
1. First-Come, First-Served
2. Shortest Job First
3. Shortest Remaining Time Next
4. Three-Level Scheduling
125. Scheduling in Batch Systems
• First-Come, First-Served method:
1. The simplest non-preemptive algorithm
2. Processes are assigned the CPU in the order
they request it
3. Basically there is a single queue of ready
processes
4. It is very easy to understand and to program
5. A single linked list keeps track of all ready
processes
126. Scheduling in Batch Systems
FCFS – Example (with arrival at the same time):
Average turnaround time = (20 + 30 + 55 + 70 + 75)/5 = 250/5 = 50
(The job table is in a figure not reproduced here.)
127. FCFS – Example (with arrival at different times; figure not reproduced)
128. Scheduling in Batch Systems
• FCFS disadvantages:
Consider what happens with:
1. One compute-bound process that runs for one
second at a time and then does a disk read
2. Many I/O-bound processes that use little CPU
time but each have to perform 1000 disk reads
to complete
Under FCFS, each I/O-bound process gets only one disk read
done per cycle of the compute-bound process, so it takes 1000
seconds to finish while the compute-bound process monopolizes the CPU.
129. Scheduling in Batch Systems
Shortest Job First method
Working:
when several equally important jobs are sitting
in the input queue waiting to be started, the scheduler
picks the shortest job first.
Average turnaround time here:
(5 + 15 + 30 + 50 + 75)/5 = 175/5 = 35
(Figure: an example of shortest job first scheduling.)
130. Shortest Job First
Figure 2-40. An example of shortest job first scheduling.
(a) Running four jobs in the original order. (b) Running them
in shortest job first order.
Source: Tanenbaum, Modern Operating Systems, 3rd ed., © 2008 Prentice-Hall.
132. Scheduling in Batch Systems
• It is worth pointing out that shortest job first is
only optimal when all the jobs are available
simultaneously
• See the following example:

Process:       A  B  C  D  E
Run time:      2  4  1  1  1
Arrival time:  0  0  3  3  3

Here we can run the jobs in orders such as ABCDE or BCDEA.
Average turnaround time (ABCDE) = ((2-0)+(6-0)+(7-3)+(8-3)+(9-3))/5 = 23/5 = 4.6
Average turnaround time (BCDEA) = ? (worked answer below)
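Worked answer for BCDEA: B runs 0-4, C runs 4-5, D runs 5-6, E runs 6-7, A runs 7-9. The turnaround times are B = 4, C = 5-3 = 2, D = 6-3 = 3, E = 7-3 = 4, A = 9, so the average is (4+2+3+4+9)/5 = 22/5 = 4.4, better than 4.6. That is exactly the point: picking the shortest job available at time 0 was not optimal.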
133. Three-level scheduling in Batch Systems
• The admission scheduler decides which jobs to admit to the
system; it can be used to balance compute-bound and I/O-bound jobs.
• The memory scheduler decides which jobs are kept in memory
and which are swapped out, to handle the memory-space problem.
• The CPU scheduler decides which of the ready jobs in memory
is given the CPU first.
134. Scheduling in Interactive Systems (1)
1. Round Robin Scheduling
2. Priority Scheduling
• Round-Robin Scheduling
– (a) the list of runnable processes
– (b) the list of runnable processes after B uses up its quantum
135. Priority Scheduling
1. A priority number (integer) is associated with
each process
2. The CPU is allocated to the process with the
highest priority
Normally (smallest integer = highest priority)
It can be:
• Preemptive
• Non-preemptive
136. Priority Scheduling Example (same arrival time)

Process   Burst time   Priority   Arrival time
P1        10           3          0
P2        1            1          0
P3        2            4          0
P4        1            5          0
P5        5            2          0

Gantt chart: P2 | P5 | P1 | P3 | P4
boundaries:  0    1    6    16   18   19

The average waiting time:
= ((16-10) + (1-1) + (18-2) + (19-1) + (6-5))/5
= (6+0+16+18+1)/5 = 41/5 = 8.2
137. Priority Scheduling Example (different arrival times)

Process   Burst time   Priority   Arrival time
P1        10           3          0
P2        1            1          1
P3        2            4          2
P4        1            5          3
P5        5            2          4

The average waiting time:
= ((?) + (?) + (?) + (?) + (?))/5 = ?/5 = ?  (left as an exercise)
139. Round-Robin Scheduling
• The Round-Robin is designed especially for
time sharing systems.
• Similar to FCFS but adds preemption concept
• Each process gets a small unit of CPU time
(time quantum), usually 10-100 milliseconds
• After this time has elapsed, the process is
preempted and added to the end of the ready
queue.
140. Round-Robin Scheduling Example
Time quantum: 20 ms; arrival time: 0 (all processes arrive simultaneously)
The average waiting time:
= (134 + 37 + 162 + 121)/4 = 454/4 = 113.5
(The per-process burst times are in a figure not reproduced here.)
141. Round-Robin Scheduling Example
Time quantum here: 4 ms

Process   Arrival time   Service time
1         0              8
2         1              4
3         2              9
4         3              5

Gantt chart: P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3
boundaries:  0    4    8    12   16   20   24   25   26

The average turnaround time (completion minus arrival):
= ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25
(Subtracting the service times gives an average waiting time of
(12+3+15+17)/4 = 47/4 = 11.75.)
142. PRIORITY BASED SCHEDULING
• Assign each process a priority. Schedule highest priority first. All
processes within same priority are FCFS.
• Priority may be determined by user or by some default
mechanism. The system may determine the priority based on
memory requirements, time limits, or other resource usage.
• Starvation occurs if a low priority process never runs. Solution:
build aging into a variable priority.
• Delicate balance between giving favorable response for
interactive jobs, but not starving batch jobs.
143. ROUND ROBIN
• Use a timer to cause an interrupt after a predetermined
time; the task is preempted if it exceeds its quantum.
• Train of events:
1. Dispatch
2. A time slice occurs OR the process suspends on an event
3. Put the process on some queue and dispatch the next
• Use these numbers to find queueing and residence times
(use the quantum).
144. ROUND ROBIN
• Definitions:
– Context switch: changing the processor from
running one task (or process) to another; implies
saving and restoring state, including the memory map.
– Processor sharing: use of a quantum so small that
each of n processes appears to run at 1/n of the
processor's speed.
– Reschedule latency: how long it takes from when
a process requests to run until it finally gets
control of the CPU.
145. ROUND ROBIN
• Choosing a time quantum
– Too short: an inordinate fraction of the time is spent on context
switches.
– Too long: reschedule latency is too great. If many processes
want the CPU, it is a long time before a particular process
gets it, and the scheduler then behaves like FCFS.
– Adjust it so that most processes won't use up their whole slice.
As processors have become faster, this is less of an issue.
147. Multilevel Queue
• Ready Queue partitioned into separate queues
– Example: system processes, foreground
(interactive), background (batch), student
processes….
• Each queue has its own scheduling algorithm
– Example: foreground (RR), background(FCFS)
• Processes assigned to one queue permanently.
• Scheduling must be done between the queues
– Fixed priority - serve all from foreground, then
from background. Possibility of starvation.
– Time slice - Each queue gets some CPU time that it
schedules - e.g. 80% foreground(RR), 20%
background (FCFS)
149. MULTI-LEVEL QUEUES:
• Each queue has its scheduling algorithm.
• Then some other algorithm (perhaps priority based) arbitrates
between queues.
• Can use feedback to move between queues
• Method is complex but flexible.
• For example, could separate system processes, interactive,
batch, favored, unfavored processes
151. Scheduling in Real-Time Systems
Real Time Scheduling:
•Hard real-time systems – required to complete a
critical task within a guaranteed amount of time.
•Soft real-time computing – requires that critical
processes receive priority over less fortunate ones.
152. Scheduling in Real-Time Systems
Schedulable real-time system
• Given
– m periodic events
– event i occurs with period Pi and requires Ci seconds of CPU time
• Then the load can only be handled if

$\sum_{i=1}^{m} \frac{C_i}{P_i} \le 1$
153. Scheduling in Real-Time Systems
Example:

Event   Period (Pi)   CPU time (Ci)
1       100           50
2       200           30
3       500           100

Here $\sum_{i=1}^{m} C_i / P_i$ = 50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85.
The system is schedulable, because 0.85 ≤ 1.
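The test is easy to check mechanically; a small C sketch using the numbers from the example above:

#include <stdio.h>

int main(void)
{
    double period[] = {100, 200, 500};   /* Pi from the example */
    double cpu[]    = {50, 30, 100};     /* Ci from the example */
    double u = 0;
    for (int i = 0; i < 3; i++)
        u += cpu[i] / period[i];         /* sum of Ci / Pi      */
    printf("utilization = %.2f -> %s\n", u,
           u <= 1.0 ? "schedulable" : "not schedulable");
    return 0;                            /* prints 0.85 -> schedulable */
}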
154. Policy versus Mechanism
• Separate what is allowed to be done with
how it is done
– a process knows which of its children threads
are important and need priority
• Scheduling algorithm parameterized
– mechanism in the kernel
• Parameters filled in by user processes
– policy set by user process
155. Thread Scheduling (1)
Possible scheduling of user-level threads:
• 50-msec process quantum
• threads run 5 msec per CPU burst
156. Thread Scheduling (2)
Possible scheduling of kernel-level threads:
• 50-msec process quantum
• threads run 5 msec per CPU burst
157. FCFS
Process   Burst time
P1        3
P2        6
P3        4
P4        2

Order: P1, P2, P3, P4 (FCFS)

Process   Completion time
P1        3
P2        9
P3        13
P4        15

Average waiting time = ( )/4 = (left as an exercise; a worked answer follows)
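Worked answer, assuming all four processes arrive at time 0 (which the completion times above imply): waiting time = completion time minus burst time, so P1 = 0, P2 = 3, P3 = 9, P4 = 13, and the average waiting time = (0+3+9+13)/4 = 25/4 = 6.25.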
158. Shortest Job First
Process   Burst time
P1        3
P2        6
P3        4
P4        2

Process   Completion time
P4        2
P1        5
P3        9
P2        15

Average waiting time = ( )/4 = (left as an exercise; a worked answer follows)
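Worked answer (arrival at time 0): waiting time = completion minus burst, so P4 = 0, P1 = 2, P3 = 5, P2 = 9; the average waiting time = (0+2+5+9)/4 = 16/4 = 4.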
160. Round Robin Scheduling
Process   Burst time
P1        3
P2        6
P3        4
P4        2

Time quantum: 2 ms
Gantt chart: ?
Average waiting time = ? (a worked answer follows)
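Worked answer, assuming all processes arrive at time 0: the Gantt chart is P1 | P2 | P3 | P4 | P1 | P2 | P3 | P2 with boundaries at 0, 2, 4, 6, 8, 9, 11, 13, 15. Completion times: P1 = 9, P2 = 15, P3 = 13, P4 = 8. Waiting time = completion minus burst: P1 = 6, P2 = 9, P3 = 9, P4 = 6, so the average waiting time = (6+9+9+6)/4 = 30/4 = 7.5.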