The document discusses processes and process scheduling in operating systems. It defines a process as a program in execution that changes state as it runs. Process information is stored in a process control block. Processes move between ready, running, waiting, and terminated states. The operating system uses long-term and short-term schedulers to select which processes to move between queues like ready and device queues. Context switching occurs when the CPU switches between processes. Processes can cooperate through communication and synchronization using message passing between mailboxes. Client-server systems use sockets and remote procedure calls to enable remote communication.
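The mailbox-style message passing mentioned in the summary above can be sketched in a few lines. This is a minimal illustration using threads as stand-ins for cooperating processes and a `queue.Queue` as the mailbox; the names are illustrative, not from any particular OS API.

```python
# Sketch of mailbox-style message passing between two cooperating
# "processes" (modeled here as threads for simplicity; real OS
# mailboxes are kernel objects).
import queue
import threading

def producer(mailbox: queue.Queue) -> None:
    # Send three messages, then a sentinel to signal completion.
    for i in range(3):
        mailbox.put(f"msg-{i}")
    mailbox.put(None)

def consumer(mailbox: queue.Queue, received: list) -> None:
    # Block on the mailbox until the sentinel arrives.
    while True:
        msg = mailbox.get()
        if msg is None:
            break
        received.append(msg)

mailbox: queue.Queue = queue.Queue()
received: list = []
t1 = threading.Thread(target=producer, args=(mailbox,))
t2 = threading.Thread(target=consumer, args=(mailbox, received))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['msg-0', 'msg-1', 'msg-2']
```

The blocking `get()` is what gives the consumer its synchronization: it waits in a "waiting" state until a message arrives, much as a process blocks on an empty mailbox.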
The document discusses processes and processors in distributed systems. It covers threads, system models, processor allocation, scheduling, load balancing, and process migration. Threads are lightweight processes that share an address space and resources. There are advantages to using threads like handling signals and implementing producer-consumer problems. System models for distributed systems include workstations with local disks, diskless workstations, and a processor pool model. Processor allocation aims to maximize CPU utilization and minimize response times. Algorithms must consider overhead, complexity, and stability.
This document discusses process migration and allocation in distributed systems. It covers:
1) Process allocation is easier in multiprocessor systems where all processors share memory and resources, compared to multicomputer systems without shared memory.
2) Processes can either be non-migratory and run on one system, or migratory and move between systems to improve resource utilization. Ensuring transparency is important for migratory processes.
3) Different strategies for process migration include moving state, keeping state on the original system and using RPC, or ignoring state. Centralized, hierarchical, and distributed algorithms can be used to determine optimal or suboptimal migration.
Allocation of processors to processes in Distributed Systems. Strategies or algorithms for processor allocation. Design and Implementation Issues of Strategies.
The document discusses various concepts related to process management in operating systems, including process scheduling, CPU scheduling, and process synchronization. It defines a process as a program in execution and describes the different states a process can be in during its lifecycle. It also discusses process control blocks, which maintain information about each process, and various scheduling algorithms like first-come first-served, shortest job first, priority, and round robin scheduling.
The document discusses process management in operating systems. It covers control blocks, interrupts, process states, scheduling algorithms like FIFO, SJF, SRTF, Round Robin and priority scheduling. It also discusses queuing, multiprogramming vs time sharing and scheduling criteria like CPU utilization, throughput, turnaround time and waiting time. Scheduling can be long, medium or short term and algorithms include priority queues and multilevel feedback queues.
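The scheduling criteria named above (waiting time, turnaround time) are easy to compute by hand for the FIFO/FCFS policy. Below is a small sketch with an assumed example workload; burst times are in arbitrary time units.

```python
# Minimal FCFS (first-come, first-served) calculation of waiting and
# turnaround times for processes that all arrive at t=0.
def fcfs_metrics(burst_times):
    """Return (waiting_times, turnaround_times) in FCFS order."""
    waiting, turnaround = [], []
    elapsed = 0
    for burst in burst_times:
        waiting.append(elapsed)          # time spent queued before running
        elapsed += burst
        turnaround.append(elapsed)       # completion time from arrival at t=0
    return waiting, turnaround

w, t = fcfs_metrics([24, 3, 3])          # classic textbook workload
print(w, t)  # [0, 24, 27] [24, 27, 30]
```

Note how one long burst at the head of the queue inflates everyone else's waiting time; this "convoy effect" is the usual motivation for SJF and round robin.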
The document discusses processes and threads in an operating system. It defines a process as a program in execution that includes the program code, data, and process control block. A thread is the basic unit of execution within a process and includes the program counter, registers, and stack. The document outlines different process states like creation, termination, and suspension. It also describes different types of threads like user-level and kernel-level threads. Symmetric multiprocessing uses multiple identical processors that can run different threads simultaneously, improving performance. A microkernel is a small OS core that provides message passing between components like the file system or process servers through inter-process communication.
Process scheduling involves managing the CPU and selecting which process runs next based on scheduling strategies. The operating system maintains different queues for processes in various states like ready, blocked, and running. These include the job queue, ready queue, and device queues. Schedulers select processes and move them between queues. The long-term scheduler selects processes to load into memory while the short-term scheduler selects the next process to run on the CPU. The medium-term scheduler handles swapping processes in and out of memory. Context switching involves saving a process's state when it stops running and restoring another process's state when it starts running.
1) Virtual memory simplifies address translation by creating a virtual memory space that contains the operating system and application programs, even though it does not physically exist.
2) Virtual memory is divided into two components - the first occupies real memory and contains the operating system and page pool, while the second occupies external storage and holds application programs, with pages swapped between real memory and external storage.
3) Address translation under virtual memory uses segment tables and page tables to map virtual addresses to physical addresses.
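The page-table translation described in point 3 can be sketched as follows. The page size and table contents are assumed example values; a real MMU does this lookup in hardware, often through multiple table levels and a TLB.

```python
# Sketch of virtual-to-physical address translation with a
# single-level page table.
PAGE_SIZE = 4096  # 4 KiB pages

# page_table[virtual_page_number] -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr: int) -> int:
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        # A missing entry models a page fault: the OS would fetch
        # the page from external storage and retry.
        raise LookupError(f"page fault: virtual page {vpn} not resident")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # vpn=1, offset=0x234 -> frame 2 -> 0x2234
```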
The document discusses the system development life cycle (SDLC), which includes requirements, design, implementation, testing, deployment, operations, and maintenance. It describes the typical phases of the SDLC process - preliminary investigation, feasibility study, system analysis, system design, software development, system testing, implementation and evaluation, and maintenance. The waterfall model is presented as a common SDLC approach, with its sequential phases of requirement analysis, system design, implementation, testing, deployment, and maintenance.
1. Single partition allocation allocates memory such that the operating system resides in lower memory and user processes in higher memory. Limit and relocation registers protect processes. Multiple partition allocation divides memory into partitions, with each process allocated one partition.
2. A process control block (PCB) contains process state, scheduling information, registers, and memory allocation details. Useful PCB information includes the program counter, I/O status, process state like ready or running, accounting data, and CPU registers.
3. Preemptive scheduling interrupts running lower-priority processes to run higher-priority ones, as in round robin. Non-preemptive scheduling runs processes to completion without interruption, as in first-come, first-served.
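The PCB fields listed in point 2 map naturally onto a small record type. This is a toy sketch with illustrative field names; real PCB layouts are kernel-specific.

```python
# A toy process control block (PCB) mirroring the fields described
# above: state, program counter, registers, I/O status, accounting.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    io_status: list = field(default_factory=list)
    cpu_time_used: float = 0.0    # accounting data

pcb = PCB(pid=42)
pcb.state = "ready"               # admitted to the ready queue
pcb.state = "running"             # dispatched to the CPU
print(pcb.pid, pcb.state)
```

On a context switch, the kernel saves the CPU registers and program counter into the outgoing process's PCB and restores them from the incoming one.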
The document discusses operating systems and processes. It defines an operating system as an interface between the user and computer hardware that manages system resources efficiently. Processes are programs in execution that are represented in memory by a process control block containing information like state, registers, scheduling details. Processes go through various states like running, ready, waiting and terminated. The document also describes process creation, termination, and context switching between processes.
The document discusses operating systems and their key functions. It describes how an operating system acts as an intermediary between the user and computer hardware, managing resources like memory, processors, devices and information. It outlines important operating system functions such as memory management, processor management, device management, file management, security and job accounting. It also discusses different types of operating systems including batch, time-sharing, distributed and network operating systems.
The document discusses operating system interview questions and answers. It covers topics such as the definition of an operating system, basic functions of an OS, types of operating systems, kernel functions, process states, virtual memory, deadlocks, threads, synchronization, scheduling algorithms, and memory management. Some key points covered are that an OS acts as an intermediary between the user and hardware, the kernel provides basic services, processes can be in states like running, waiting, or ready, and virtual memory uses paging between main memory and disk to provide more memory than is physically available.
The document discusses various concepts related to process and thread scheduling in operating systems. It defines key terms like process, job, thread, context switching, and process states. It also explains different scheduling algorithms like round robin, shortest job first, priority scheduling, and multilevel feedback queue scheduling.
The document provides an introduction to operating systems, describing their main components and functions. It discusses different types of operating systems including mainframe systems, desktop systems, multiprocessor systems, distributed systems, clustered systems, real-time systems, and handheld systems. For each type, it highlights some of their key characteristics and how operating systems have evolved to support different computing environments.
This document discusses memory management techniques in operating systems including paging, segmentation, and virtual memory. It defines key concepts such as logical versus physical addresses, page tables, frames, and how memory management units map between these spaces. Advantages and disadvantages of different algorithms like FIFO, LRU and clock are presented. The goals of memory management are to allow for more efficient use of limited memory and enable running multiple processes simultaneously.
This document discusses memory management techniques in operating systems. It provides background on how programs must be brought into memory to execute and techniques for organizing memory like segmentation and paging. It describes the multistep process a user program goes through before execution including being placed in a process in memory. It also discusses logical versus physical addresses, the memory management unit that maps virtual to physical addresses, and dynamic loading and linking of code.
Operating system 06: operating system classification (Vaibhav Khanna)
Operating systems can be classified in several ways:
- Single-user, single-processor systems have one user and CPU. Examples include MS-DOS.
- Batch processing systems automatically execute jobs one after the other without user interference. A batch monitor controls the environment.
- Multiprogramming systems increase efficiency by allowing multiple jobs to reside in memory at once. The CPU switches between jobs during I/O waits.
- Time-sharing or multitasking systems further improve interaction by rapidly switching between jobs, giving the appearance that users interact with programs simultaneously.
A process is an instance of a program that is currently running on a computer. It has a current state, associated system resources, and executes a sequence of instructions. The operating system manages processes by creating, deleting, suspending, resuming, synchronizing, and allowing communication between processes to prevent deadlocks.
The document provides an introduction to operating systems, covering topics such as the need for operating systems, their evolution over different generations from batch to real-time systems, and the components of a computer system including hardware, operating system, application programs, and users. It then discusses operating system services from both the user and system point of view, and provides case studies of the Windows and Linux operating systems.
The document discusses processor management in operating systems. It describes how operating systems use process scheduling to manage multiple processes running simultaneously on the CPU. Processes have a lifecycle that involves different states like ready, running, waiting etc. The processor manager consists of a job scheduler and process scheduler. The job scheduler balances groups of processes to optimize resource usage while the process scheduler selects the next process to run on the CPU using different scheduling algorithms like FCFS, priority scheduling, round robin etc. Each process is associated with a process control block that stores its state and execution details.
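Of the process-scheduler algorithms named above, round robin is the easiest to simulate: each process gets a fixed quantum and is requeued if it has work left. The quantum and workload below are assumed example values.

```python
# A minimal round-robin scheduler simulation over remaining burst times.
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion order of process indices under round robin."""
    ready = deque(enumerate(bursts))     # (pid, remaining burst time)
    order = []
    while ready:
        pid, remaining = ready.popleft()
        if remaining > quantum:
            ready.append((pid, remaining - quantum))  # preempt and requeue
        else:
            order.append(pid)                         # process finishes
    return order

print(round_robin([5, 2, 8], quantum=3))  # [1, 0, 2]
```

Shorter jobs finish early without ever starving the long ones, which is why round robin is the default choice for time-sharing systems.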
This document contains a question bank on various topics in operating systems including:
1. Process synchronization questions focusing on critical section problems, semaphores, and solutions for two processes.
2. Memory management questions on paging and segmentation.
3. Deadlock questions on safe/unsafe states, the Banker's algorithm, prevention/avoidance strategies, and recovery from deadlocks.
4. CPU scheduling questions on algorithms like FCFS, RR, SJF and characteristics like short term, long term and medium term schedulers.
5. Process management questions on states, control blocks, creation/termination, and interprocess communication.
The questions provided are meant to serve as study notes for students.
The document provides an overview of operating systems, including processes, threads, interprocess communication, deadlocks, and scheduling. It discusses the evolution of operating systems from first to fourth generation. Key concepts covered include processes, files, system calls, command interpreters, and signals. Operating system structures like monolithic, layered, and client-server models are summarized. Common interprocess communication problems like the bounded buffer, readers-writers, and dining philosophers problems are also briefly outlined. Finally, it discusses process scheduling algorithms, deadlock conditions and strategies to handle deadlocks.
Advanced Operating System - Introduction (Debasis Das)
Introduction to advanced operating systems. Many universities run advanced/distributed operating system courses in their four-year engineering programs. This is based on the WBUT CS 704 D course but matches many such courses run by different universities. If you need to download this presentation, please send me an email at ddas15847@gmail.com
Process, Threads, Symmetric Multiprocessing and Microkernels in Operating System (LieYah Daliah)
The document discusses processes and process management in operating systems. It begins by defining a process as a program in execution along with its associated data and process control block (PCB). It then describes the different states a process can be in, including running, ready, blocked, and suspended. It also discusses process creation, termination, and scheduling. The document outlines the five-state process model and differences between single-threaded and multi-threaded processes. It concludes by comparing user-level threads and kernel-level threads.
This document discusses processes and threads in operating systems. It defines a process as a program under execution with its own virtual CPU and state. Processes are created through system initialization, forking, or by user request. Processes transition between running, ready, blocked, and terminated states. A process control block stores process information. Context switching involves saving one process's state and restoring another's. Threads are lightweight processes within a process that share the process's resources. Threads provide concurrency and efficient communication compared to processes.
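Because threads share their process's address space, two threads can update the same variable directly, something separate processes cannot do without IPC. The sketch below shows the standard consequence: shared state needs a lock.

```python
# Threads within one process share memory, so concurrent updates to
# shared state must be serialized with a lock.
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:               # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- deterministic because of the lock
```

Without the lock, the read-modify-write on `counter` can interleave across threads and lose updates, which is exactly the synchronization problem the summaries above keep returning to.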
The document discusses processes and process scheduling in an operating system. It defines a process as the basic unit of work in a system that progresses sequentially. A process control block (PCB) tracks information for each process like its ID and state. Processes can be created through system calls and terminate through exit calls. Scheduling algorithms like short-term, long-term, and medium-term are used to allocate CPU time to ready processes and swap processes in and out of memory. Preemptive and non-preemptive scheduling approaches differ in how they allocate CPU time to processes.
The document discusses processes, process states, and process scheduling. It defines a process as a program in execution that contains the program counter, CPU registers, and other information. Processes go through various states like new, running, waiting, ready, and terminated. The OS tracks information about each process using a process control block (PCB). Process scheduling involves long-term, medium-term, and short-term scheduling to manage processes in memory and allocate CPU time. Context switching refers to saving and restoring a process's state when switching between processes. Inter-process communication allows processes to share resources and data. Threads are lightweight processes that can be used for parallelism and responsiveness.
A process represents a program in execution and passes through at least four states: new, ready, running, and terminated. A thread is a path of execution within a process and provides parallelism by dividing a process into multiple threads. Threads share resources like memory and code with peer threads but have their own program counters and stacks. Threads offer better performance than processes because they have lower overhead and faster context switching.
This document discusses process management concepts including processes, threads, process scheduling, and inter-process communication. A process is defined as the fundamental unit of work in a system and requires resources like CPU time and memory. Key process concepts covered include process states, process layout in memory, and the process control block. Threads allow a process to execute multiple tasks simultaneously. Process scheduling and context switching are also summarized. Methods of inter-process communication like shared memory and message passing are described along with examples of client-server communication using sockets, remote procedure calls, and remote method invocation.
In computing, scheduling is the action .nathansel1
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows. The scheduling activity is carried out by a process called scheduler.
A process is the basic unit of execution in an operating system. It consists of a program in execution along with additional system resources and state. Key aspects of a process include its process control block (PCB) which stores process state and scheduling information, and the different states a process can be in such as running, ready, waiting, etc. Processes communicate and synchronize through interprocess communication which allows sharing data and coordinating work. The operating system performs process scheduling to allocate the CPU to processes and enable multitasking.
The document discusses process scheduling in operating systems. It defines process scheduling as the activity of selecting which process runs on the CPU. It describes the different queues operating systems use to manage processes, including ready, job, and device queues. It also discusses long-term, short-term, and medium-term schedulers and their roles in managing processes over different timescales. Context switching and cooperating processes are also summarized.
4 Module - Operating Systems Configuration and Use by Mark John LadoMark John Lado, MIT
4 Module - Operating Systems Configuration and Use
More on https://www.markjohn.cf/courses
This course will deliberate on the basics of an operating system, which may include Computer Memory, the Operating System, its Graphical User Interface, The Windows Operating System, and Desktop, Operating System Installation.
A thread is a flow of execution through a process's code that maintains its own program counter, registers, and stack. Threads allow for parallel execution within a process to improve performance. There are two types of threads: user-level threads managed by a thread library and kernel-level threads managed by the operating system kernel. User-level threads are faster to create but cannot take advantage of multiprocessing, while kernel-level threads can utilize multiple processors but are slower to create and manage.
This document provides an introduction to operating systems, including definitions, goals, and components. It describes different types of systems such as mainframe, time-sharing, desktop, parallel, distributed, and real-time systems. It also discusses processes, process scheduling, and interprocess communication.
Symmetric multiprocessing involves multiple processors that share common memory and operating system. All processors are treated equally and can execute any process. It allows for increased throughput but the operating system is more complex. Context switching involves saving and restoring a process's state when switching between processes. Scheduling algorithms like round robin determine which ready processes get CPU time. Concurrency enables multiple processes to run simultaneously through time-sharing and introduces challenges around resource sharing.
2. Reasons for needing cooperating processes
Information Sharing:
Sharing of information between multiple processes can be accomplished
using cooperating processes. This may include access to the same files. A
mechanism is required so that the processes can access the files in parallel
with each other.
Modularity:
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes, leading to
faster and more efficient completion of the required tasks.
Computation speedup:
Subtasks of a single task can be performed in parallel using cooperating
processes. This increases computation speedup, as the overall task can be
executed faster.
Convenience:
There are many tasks that a user needs to do such as compiling, printing,
editing etc. It is convenient if these tasks can be managed by cooperating
processes.
3. Problems of Cooperative Systems
Possible to have deadlock
– Each process waits for a message from the other process.
Possible to have starvation
– Two processes keep exchanging messages with each other while another
process waits indefinitely for a message.
Possible to damage the data
– A cooperative system may damage shared data, for example when the
subtasks introduced by modularity update the same data without coordination.
Information sharing
– In a cooperative system, information may be shared without the user's consent.
– It may also expose personal or sensitive information of the user which
the user does not want to share with others.
Data may be hacked
– Office data, e.g. a bank's client information, can be attacked through a
cooperative system, because the information is visible to the other
system.
– An attacker could even transfer money from one account to another.
5. Process Queues
Ready queue
▪ one of the many queues that a process may be added to
▪ CPU scheduling schedules from the ready queue.
Job queue
▪ set of all processes started in the system waiting for memory
Device queues
▪ set of processes waiting for an I/O device
▪ A process will wait in such a queue until I/O is finished or until the waited event happens
▪ Processes migrate among the various queues
6. Two-State Process Model
The two-state process model refers to the running and not-running states
described below.
State & Description
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the
queue is implemented using a linked list. The dispatcher works as follows:
when a running process is interrupted, it is transferred to the waiting queue;
if the process has completed or aborted, it is discarded. In either case, the
dispatcher then selects a process from the queue to execute.
7. Process Scheduling
• In a multiprogramming or time-sharing system, there
may be multiple processes ready to execute.
• We need to select one of them and give the CPU to it.
• This selection is the scheduling (decision).
• There are various criteria that can be used in the scheduling
decision.
• The scheduling mechanism (dispatcher) then assigns
the selected process to the CPU and starts execution of
it.
[Diagram: Select (scheduling algorithm) → Dispatch (mechanism)]
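The select/dispatch split on this slide can be sketched in a few lines of Python (an illustrative addition, not from the slides; the FIFO policy and the names `select_fifo` and `dispatch` are assumptions):

```python
from collections import deque

ready_queue = deque()              # processes waiting for the CPU

def select_fifo(queue):
    """Scheduling algorithm: pick the process at the head of the ready queue."""
    return queue.popleft() if queue else None

def dispatch(process):
    """Dispatcher mechanism: hand the CPU to the selected process."""
    return process()               # stand-in for a real context switch

# Two toy "processes" modeled as callables:
ready_queue.append(lambda: "P1 ran")
ready_queue.append(lambda: "P2 ran")

print(dispatch(select_fifo(ready_queue)))   # P1 ran
print(dispatch(select_fifo(ready_queue)))   # P2 ran
```

Real schedulers apply richer criteria (priorities, burst history) inside the selection step, but the separation between policy and mechanism is the same.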
8. Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
[Diagram: job queue → long-term scheduler → ready queue (main memory) → short-term scheduler → CPU]
9. Schedulers
• Short-term scheduler is invoked very frequently
(milliseconds) (must be fast)
• Long-term scheduler is invoked very infrequently
(seconds, minutes) (may be slow)
10. Addition of Medium-Term Scheduling
The medium-term scheduler swaps processes out of memory (and later back
in), reducing the load seen by the short-term (CPU) scheduler.
[Diagram: medium-term scheduler swapping processes in and out around the
short-term scheduler (CPU scheduler)]
12. Process Behavior
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
• CPU-bound process – spends more time doing
computations; few very long CPU bursts
• CPU burst: the execution of the program on the CPU between two
I/O requests. We may have short or long CPU bursts.
14. Process Creation
A parent process creates child processes, which, in turn
create other processes, forming a tree of processes
Generally, a process is identified and managed via a process
identifier (pid)
• Resource sharing alternatives:
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution alternatives:
• Parent and children execute concurrently
• Parent waits until children terminate
[Diagram: tree of processes]
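A minimal Python sketch of parent/child creation and the "parent waits" execution alternative (an illustration added here, assuming a Unix-like system where `os.fork` is available):

```python
import os

pid = os.fork()                    # create a child process
if pid == 0:
    # Child: fork() returns 0 in the child.
    print("child  pid:", os.getpid(), "parent:", os.getppid())
    os._exit(0)                    # child terminates immediately
else:
    # Parent: fork() returns the child's pid.
    finished, status = os.waitpid(pid, 0)   # parent waits until the child terminates
    print("parent reaped child", finished)
```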
15. Process Termination
Process executes last statement and asks the operating
system to delete it (can use exit system call)
Process resources are deallocated by operating system
Parent may terminate execution of children processes
(abort)
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• If parent is exiting
• Some operating systems do not allow a child to continue if its parent
terminates
• All children terminated – cascading termination
16. What is Thread
A thread is a path of execution within a process. A process can
contain multiple threads.
Basic unit of CPU utilization
Thread has its own
• Thread ID
• Program counter
• Registers
• stack
A thread is a flow of execution through the process code, with its
own program counter that keeps track of which instruction to
execute next, system registers which hold its current working
variables, and a stack which contains the execution history.
A thread shares some information with its peer threads, such as the code
segment, data segment and open files. When one thread alters a shared
memory item, all other threads see the change.
17. What is Thread
A thread is also called a lightweight process. Threads provide
a way to improve application performance through
parallelism, and represent a software approach to improving
operating-system performance by reducing the overhead of
full processes; in other respects a thread behaves like a
classical process.
Each thread belongs to exactly one process and no thread can
exist outside a process. Each thread represents a separate
flow of control. Threads have been successfully used in
implementing network servers and web servers. They also
provide a suitable foundation for parallel execution of
applications on shared-memory multiprocessors. The
following figure shows the working of a single-threaded and a
multithreaded process.
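The sharing described here - one process, several threads, one data segment - can be seen in a short Python sketch (an illustration added to the slides; the counter and thread count are arbitrary):

```python
import threading

shared = {"value": 0}              # lives in the process's shared data segment
lock = threading.Lock()

def worker(n):
    for _ in range(n):
        with lock:                 # serialize updates to the shared item
            shared["value"] += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["value"])             # 40000: every thread updated the same object
```

Without the lock, the threads would interleave their read-modify-write steps and the final count could fall short - the motivation for the synchronization slides later on.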
18. [Figure: single-threaded vs. multithreaded process]
19. Difference between Process and Thread

S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
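The resource-sharing rows of the table can be demonstrated directly: a peer thread mutates a list the main thread sees, while a child process only changes its own copy (a Python illustration added here; `bump` is a made-up helper):

```python
import threading
import multiprocessing as mp

def bump(seq):
    seq[0] += 1                    # mutate the first element in place

data = [0]

t = threading.Thread(target=bump, args=(data,))
t.start()
t.join()
print(data[0])                     # 1: the thread shares the parent's memory

if __name__ == "__main__":
    p = mp.Process(target=bump, args=(data,))
    p.start()
    p.join()
    print(data[0])                 # still 1: the child process changed only its own copy
```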
20. Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a
process.
• Efficient communication.
• It is more economical to create and context switch
threads.
• Threads allow utilization of multiprocessor
architectures to a greater scale and efficiency.
21. Types of Thread
Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed
threads acting on kernel, an operating system core.
22. User Level Threads
In this case, the thread management kernel is not
aware of the existence of threads. The thread library
contains code for creating and destroying threads,
for passing messages and data between threads, for
scheduling thread execution and for saving and
restoring thread contexts. The application starts with
a single thread.
23. User Level Threads
Advantages
• Thread switching does not require Kernel mode
privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level
thread.
• User level threads are fast to create and manage.
Disadvantages
• There is a lack of coordination between threads and
operating system kernel.
• Multithreaded application cannot take advantage of
multiprocessing.
24. Kernel Level Threads
In this case, thread management is done by the Kernel.
There is no thread management code in the application
area. Kernel threads are supported directly by the
operating system. Any application can be programmed to
be multithreaded. All of the threads within an application
are supported within a single process.
The Kernel maintains context information for the process
as a whole and for individual threads within the process.
Scheduling by the Kernel is done on a thread basis. The
Kernel performs thread creation, scheduling and
management in Kernel space. Kernel threads are generally
slower to create and manage than the user threads.
25. Kernel Level Threads
Advantages
• Kernel can simultaneously schedule multiple threads
from the same process on multiple processors.
• If one thread in a process is blocked, the Kernel can
schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and
manage than the user threads.
• Transfer of control from one thread to another within
the same process requires a mode switch to the Kernel.
26. Multithreading
Multithreading is the ability of an operating system process to manage its
use by more than one user at a time, and even to manage multiple requests
by the same user, without having to keep multiple copies of the program
running in the computer. Each user request for a program or system service
is kept track of as a thread with a separate identity. As programs work on
behalf of the initial request for that thread and are interrupted by other
requests, the status of work on behalf of that thread is tracked until the
work is completed.
Multithreading is a technique that allows a program or a process to execute
many tasks concurrently. It even allows a process to interleave its tasks
on a single-processor system.
Multithreading is a specialized form of multitasking, and multitasking
threads require less overhead than multitasking processes – one of the main
advantages of multithreading over process-based multitasking.
A process consists of the memory space allocated by the operating system
and can contain one or more threads. A thread cannot exist on its own; it
must be part of a process. A process remains running until all of its
threads are done executing.
Multithreading enables you to write very efficient programs that make
maximum use of the CPU, because idle time can be kept to a minimum.
27. Process Synchronization
Process synchronization means sharing system
resources between processes in such a way that
concurrent access to shared data is handled, thereby
minimizing the chance of inconsistent data.
Maintaining data consistency demands mechanisms
to ensure synchronized execution of cooperating
processes.
Process synchronization was introduced to handle
problems that arise when multiple processes execute
concurrently.
28. Synchronization
• How does the sender/receiver behave if it cannot
send/receive the message immediately?
• This depends on whether blocking or non-blocking communication is used
• Blocking is considered synchronous
• Sender blocks until the receiver or the kernel receives the message
• Receiver blocks until a message is available
• Non-blocking is considered asynchronous
• Sender sends the message if it can (or retries later), but always
returns immediately
• Receiver receives either a valid message or null, but always returns
immediately
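Blocking versus non-blocking receive can be tried out with Python's thread-safe `queue.Queue` standing in for the message link (an added illustration, not from the slides):

```python
import queue

mailbox = queue.Queue()            # the message link between sender and receiver

# Non-blocking receive: returns immediately with a message or "null" (None).
try:
    msg = mailbox.get_nowait()
except queue.Empty:
    msg = None
print(msg)                         # None: no message was available yet

mailbox.put("hello")               # send; never waits on an unbounded queue
print(mailbox.get())               # blocking receive: a message is available
```

A blocking receive on an empty queue would simply suspend the caller until a sender delivers a message (or a timeout expires).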
29. Buffering
Many situations make it necessary for operating systems to have buffers, or
temporary memory locations, to use. For example, imagine two different
processes: it can be tricky to transfer data between them, as the processes
may be in two different states at a given time.
Say process A is sending a bitmap to the printer driver so that the driver can
send it to the printer. Unfortunately, the driver is busy printing another page
at that time, so until the driver is ready the OS stores the data in a buffer.
The exact behavior also depends on the available buffer. The queue of
messages attached to the link is implemented in one of three ways:
1. Zero capacity – 0 messages
Sender must wait for the receiver
2. Bounded capacity – finite length of n messages
Sender must wait if the link is full
3. Unbounded capacity – infinite length
Sender never waits
31. Bounded Buffer Problem
This problem is generalized in terms of the Producer
Consumer problem, where a finite buffer pool is
used to exchange messages between producer and
consumer processes.
• Because the buffer pool has a maximum size, this
problem is often called the Bounded buffer
problem.
• The solution to this problem is to create two counting
semaphores, "full" and "empty", to keep track of the
current number of full and empty buffers
respectively.
32. What is the Problem Statement?
There is a buffer of n slots and each slot is capable of
storing one unit of data. There are two processes
running, namely, producer and consumer, which are
operating on the buffer.
A producer tries to insert data into an empty slot of the
buffer. A consumer tries to remove data from a filled slot
in the buffer. As you might have guessed by now, those
two processes won't produce the expected output if they
are being executed concurrently.
There needs to be a way to make the producer and
consumer work in an independent manner.
33. Here's a Solution
One solution of this problem is to use semaphores. The
semaphores which will be used here are:
m, a binary semaphore which is used to acquire and
release the lock.
empty, a counting semaphore whose initial value is the
number of slots in the buffer, since, initially all slots are
empty.
full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty represents the
number of empty slots in the buffer and full represents
the number of occupied slots in the buffer.
34. Here's a Solution
• The producer first waits until there is at least one empty slot.
• Then it decrements the empty semaphore because, there will now be one less
empty slot, since the producer is going to insert data in one of those slots.
• Then, it acquires lock on the buffer, so that the consumer cannot access the
buffer until producer completes its operation.
• After performing the insert operation, the lock is released and the value of full is
incremented because the producer has just filled a slot in the buffer.
• The consumer waits until there is at least one full slot in the buffer.
• Then it decrements the full semaphore because the number of occupied slots
will be decreased by one, after the consumer completes its operation.
• After that, the consumer acquires lock on the buffer.
• Following that, the consumer completes the removal operation so that the data
from one of the full slots is removed.
• Then, the consumer releases the lock.
• Finally, the empty semaphore is incremented by 1, because the consumer has
just removed data from an occupied slot, thus making it empty.
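The producer and consumer steps above can be sketched in Python with `threading.Semaphore`. This is a minimal sketch; the buffer size N, the item values, and the single producer/consumer pair are illustrative assumptions.

```python
import threading
from collections import deque

N = 5                                  # number of buffer slots (illustrative)
buffer = deque()                       # the shared bounded buffer
m = threading.Semaphore(1)             # binary semaphore: the buffer lock
empty = threading.Semaphore(N)         # counts empty slots, initially N
full = threading.Semaphore(0)          # counts full slots, initially 0

def producer(items):
    for item in items:
        empty.acquire()                # wait for an empty slot (decrements empty)
        m.acquire()                    # lock the buffer
        buffer.append(item)            # insert data into a free slot
        m.release()                    # release the lock
        full.release()                 # one more slot is now full

consumed = []
def consumer(n):
    for _ in range(n):
        full.acquire()                 # wait for a full slot (decrements full)
        m.acquire()                    # lock the buffer
        consumed.append(buffer.popleft())  # remove data from a filled slot
        m.release()                    # release the lock
        empty.release()                # one more slot is now empty

p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
```

Because there is a single producer and a single consumer and the buffer is FIFO, the items come out in the order they went in.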
35. Dining Philosophers Problem
The dining philosophers problem involves the
allocation of limited resources to a group of
processes in a deadlock-free and starvation-free
manner.
Five philosophers sit around a table with five
chopsticks/forks placed beside them and a bowl of
rice in the centre. When a philosopher wants to eat,
he uses two chopsticks - one from his left and one
from his right. When a philosopher wants to think,
he puts both chopsticks back in their original
places.
36. What is the Problem Statement?
Consider five philosophers sitting around a
circular dining table. The dining table has five
chopsticks and a bowl of rice in the middle.
At any instant, a philosopher is either eating or
thinking. When a philosopher wants to eat, he uses
two chopsticks - one from his left and one from
his right. When a philosopher wants to think, he
puts both chopsticks back in their original places.
37. Here's the Solution
From the problem statement, it is clear that a
philosopher can think for an indefinite amount of
time, but once a philosopher starts eating, he has to
stop at some point in time. Each philosopher is in an
endless cycle of thinking and eating.
We use an array of five semaphores, stick[5], one for
each of the five chopsticks.
38. Here's the Solution
When a philosopher wants to eat the rice, he waits for the
chopstick on his left and picks it up. Then he waits
for the right chopstick to become available and picks it up too.
After eating, he puts both chopsticks down.
But if all five philosophers become hungry simultaneously and
each of them picks up one chopstick, a deadlock
occurs, because each will be waiting for the other chopstick
forever. The possible solutions are:
A philosopher may pick up the chopsticks only
if both the left and right chopsticks are available.
Allow only four philosophers to sit at the table at a time. That
way, even if all four philosophers pick up one chopstick each, there will be
one chopstick left on the table. So at least one philosopher can start
eating and, eventually, two chopsticks will become available. In this
way, deadlock is avoided.
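The second remedy above (at most four philosophers at the table) can be sketched in Python with an array of chopstick semaphores plus a "room" semaphore initialized to four. This is a minimal sketch; the room semaphore, the meal counter, and the number of rounds are illustrative assumptions.

```python
import threading

N = 5
stick = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
room = threading.Semaphore(N - 1)      # at most four philosophers seated at once

meals = [0] * N                        # how many times each philosopher has eaten

def philosopher(i, rounds):
    for _ in range(rounds):
        room.acquire()                 # take a seat (at most N-1 seated: no deadlock)
        stick[i].acquire()             # pick up the left chopstick
        stick[(i + 1) % N].acquire()   # pick up the right chopstick
        meals[i] += 1                  # eat
        stick[(i + 1) % N].release()   # put down the right chopstick
        stick[i].release()             # put down the left chopstick
        room.release()                 # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

With only four philosophers seated, a circular wait on all five chopsticks is impossible, so every philosopher eventually eats.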
39. The Readers Writers Problem
In this problem there are some
processes (called readers) that only read the shared
data and never change it, and there are other
processes (called writers) that may change the data
in addition to, or instead of, reading it.
There are various types of readers-writers problems,
most centred on the relative priorities of readers and
writers.
40. The Problem Statement
There is a shared resource that should be accessed
by multiple processes. There are two types of
processes in this context: readers and writers.
Any number of readers can
read from the shared resource simultaneously, but
only one writer can write to it at a time.
While a writer is writing data to the resource, no
other process can access the resource.
A writer cannot write to the resource while a
nonzero number of readers are accessing it.
41. The Solution
From the above problem statement, it is evident that
readers have higher priority than writers: if a writer wants
to write to the resource, it must wait until there are no
readers currently accessing that resource.
Here, we use one mutex m and a semaphore w. An
integer variable read_count maintains the
number of readers currently accessing the resource; it
is initialized to 0. Both m and w are initially given
the value 1.
Instead of having every reader acquire a lock on the
shared resource itself, the mutex m makes each
process acquire and release a lock only while it is
updating the read_count variable.
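The scheme above (first reader locks out writers, last reader lets them back in) can be sketched in Python. This is a minimal sketch; the shared counter, the recorded reads, and the thread counts are illustrative assumptions.

```python
import threading

m = threading.Semaphore(1)             # mutex protecting read_count
w = threading.Semaphore(1)             # writer lock on the shared resource
read_count = 0                         # number of readers currently inside
shared = 0                             # the shared resource (illustrative)
reads = []                             # values observed by readers

def reader():
    global read_count
    m.acquire()
    read_count += 1
    if read_count == 1:                # first reader locks out all writers
        w.acquire()
    m.release()
    reads.append(shared)               # read the shared resource
    m.acquire()
    read_count -= 1
    if read_count == 0:                # last reader lets writers back in
        w.release()
    m.release()

def writer():
    global shared
    w.acquire()                        # exclusive access to the resource
    shared += 1
    w.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
```

Each reader sees some consistent intermediate value of `shared`, while writers never run concurrently with any reader or with each other.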
42. Sleeping Barber problem
Problem: The analogy is based on a hypothetical barber
shop with one barber. The shop has one
barber, one barber chair, and n chairs in the waiting
room for customers to sit on.
• If there are no customers, the barber sleeps in his
own chair.
• When a customer arrives, he must wake up the barber.
• If the barber is cutting a customer's hair and more
customers arrive, they either wait if there are empty
chairs in the waiting room, or leave if no chairs are
empty.
43. Solution:
The solution to this problem uses three semaphores. The first, customers, counts
the number of customers in the waiting room (the customer in the barber chair is
not included, because he is not waiting). The second, barber (0 or 1), tells whether
the barber is idle or working. The third, a mutex, provides the mutual exclusion
required for the processes to update shared state.
The solution also keeps a record of the number of customers in the waiting room;
if this number equals the number of chairs, an arriving customer leaves the
barbershop.
When the barber shows up in the morning, he executes the barber procedure,
causing him to block on the semaphore customers because it is initially 0. The
barber then goes to sleep until the first customer arrives.
When a customer arrives, he executes the customer procedure. The customer
acquires the mutex to enter the critical region; if another customer enters just
after, the second one will not be able to do anything until the first has released the
mutex. The customer then checks the chairs in the waiting room: if the number of
waiting customers is less than the number of chairs, he sits down; otherwise he
leaves and releases the mutex.
If a chair is available, the customer sits in the waiting room, increments the
waiting counter, and signals the customers semaphore, which wakes up
the barber if he is sleeping.
At this point, both the customer and the barber are awake, and the barber is ready
to give that person a haircut. When the haircut is over, the customer exits the
procedure, and if there are no customers in the waiting room, the barber sleeps again.
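The procedures above can be sketched in Python. This is a minimal sketch; the chair count, the number of arriving customers, and the `closing` flag (added here only so the barber thread can terminate) are illustrative assumptions not part of the classic statement.

```python
import threading

CHAIRS = 3                             # chairs in the waiting room (illustrative)
customers = threading.Semaphore(0)     # counts waiting customers; barber sleeps here
barber = threading.Semaphore(0)        # signals that the barber is ready (0 or 1)
mutex = threading.Semaphore(1)         # protects the waiting counter
waiting = 0                            # customers currently in the waiting room
haircuts = 0
turned_away = 0
closing = False                        # shop-closing flag (illustrative assumption)

def barber_proc():
    global waiting, haircuts
    while True:
        customers.acquire()            # sleep until a customer (or closing) signals
        mutex.acquire()
        if closing and waiting == 0:
            mutex.release()
            break                      # shop closed and no one is waiting
        waiting -= 1                   # take one customer out of the waiting room
        mutex.release()
        barber.release()               # call that customer to the chair
        haircuts += 1                  # give the haircut

def customer_proc():
    global waiting, turned_away
    mutex.acquire()
    if waiting < CHAIRS:
        waiting += 1                   # sit on a free chair
        mutex.release()
        customers.release()            # wake the barber if he is asleep
        barber.acquire()               # wait to be called, then get the haircut
    else:
        turned_away += 1               # waiting room full: leave
        mutex.release()

b = threading.Thread(target=barber_proc)
b.start()
arrivals = [threading.Thread(target=customer_proc) for _ in range(8)]
for t in arrivals: t.start()
for t in arrivals: t.join()
closing = True
customers.release()                    # final signal so the barber thread can exit
b.join()
```

Every customer either gets a haircut or is turned away, so the two counters always sum to the number of arrivals; how they split depends on thread timing.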