This operating-systems presentation covers:
Process concept; process control block; process scheduling; CPU scheduling basic concepts; scheduling algorithms: FIFO, RR, SJF, multi-level, and multi-level feedback. Process synchronization and deadlocks: the critical-section problem; synchronization hardware; semaphores; classical problems; deadlock: system model, characterization, prevention, avoidance, and detection.
This document discusses process management concepts including processes, threads, process scheduling, and inter-process communication. A process is defined as the fundamental unit of work in a system and requires resources like CPU time and memory. Key process concepts covered include process states, process layout in memory, and the process control block. Threads allow a process to execute multiple tasks simultaneously. Process scheduling and context switching are also summarized. Methods of inter-process communication like shared memory and message passing are described along with examples of client-server communication using sockets, remote procedure calls, and remote method invocation.
This document discusses process management in operating systems. It covers key topics like process control blocks, scheduling queues, types of schedulers (long-term, short-term, medium-term), context switching, multithreading models (many-to-one, one-to-one, many-to-many), and scheduling algorithms. The document provides details on how operating systems manage processes and computer resources to ensure efficient execution of programs.
A process represents a program in execution. It progresses sequentially through different states from start to termination. A process has sections for stack, heap, text, and data in memory. Shortest Job First (SJF) scheduling allocates the CPU to the process with the shortest estimated run time remaining. It aims to minimize average waiting times but requires knowing future process durations. First Come First Served (FCFS) scheduling handles processes in the order they arrive without preemption, but is prone to the convoy effect where short jobs wait behind long ones.
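The FCFS-versus-SJF trade-off described above can be sketched in a few lines of Python. The burst times are illustrative (the classic convoy-effect setup: one long job ahead of two short ones), and all processes are assumed to arrive at t=0:

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run back-to-back in the given order."""
    waiting, elapsed = 0, 0
    for burst in bursts:
        waiting += elapsed   # this job waited for everything scheduled before it
        elapsed += burst
    return waiting / len(bursts)

# Illustrative burst times (ms), all arriving at t=0.
bursts = [24, 3, 3]
print(avg_waiting_time(bursts))          # FCFS order: (0 + 24 + 27) / 3 = 17.0
print(avg_waiting_time(sorted(bursts)))  # SJF order:  (0 + 3 + 6)  / 3 = 3.0
```

Sorting by burst time is exactly what SJF does when all jobs are present at once; the gap in average waiting time is the convoy effect in miniature.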
A process is the basic unit of execution in an operating system. It consists of a program in execution along with additional system resources and state. Key aspects of a process include its process control block (PCB) which stores process state and scheduling information, and the different states a process can be in such as running, ready, waiting, etc. Processes communicate and synchronize through interprocess communication which allows sharing data and coordinating work. The operating system performs process scheduling to allocate the CPU to processes and enable multitasking.
Inter-process communication (IPC) allows processes to communicate and synchronize actions. There are two main models - shared memory, where processes directly read/write shared memory, and message passing, where processes communicate by sending and receiving messages. Critical sections are parts of code that access shared resources and must be mutually exclusive to avoid race conditions. Semaphores can be used to achieve mutual exclusion, with operations P() and V() that decrement or increment the semaphore value to control access to the critical section. For example, in the producer-consumer problem semaphores can suspend producers if the buffer is full and consumers if empty, allowing only one process at a time in the critical section.
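The producer-consumer scheme described above can be sketched with Python's `threading.Semaphore`, whose `acquire`/`release` correspond to P()/V(). Buffer size, item count, and variable names are illustrative:

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # free slots; P() here suspends a producer when full
full = threading.Semaphore(0)             # filled slots; P() here suspends a consumer when empty
mutex = threading.Semaphore(1)            # binary semaphore guarding the critical section

def producer(items):
    for item in items:
        empty.acquire()      # P(empty): wait if the buffer is full
        mutex.acquire()      # P(mutex): enter critical section
        buffer.append(item)
        mutex.release()      # V(mutex): leave critical section
        full.release()       # V(full): wake one waiting consumer

consumed = []
def consumer(n):
    for _ in range(n):
        full.acquire()       # P(full): wait if the buffer is empty
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With a single producer and consumer and a FIFO buffer, items arrive in order; the semaphores only throttle the two threads so neither overruns the buffer.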
Scheduling: Definition, Objectives and Types, by Maitree Patel
Scheduling is the process of determining which process will use the CPU when multiple processes are ready to execute. The objectives of scheduling are to maximize CPU utilization, throughput, and fairness while minimizing response time, turnaround time, and waiting time. There are three main types of schedulers: long-term schedulers manage process admission to the system; short-term or CPU schedulers select the next process to run on the CPU; and medium-term schedulers swap suspended processes out of memory and back in.
The document discusses various aspects of process scheduling and CPU scheduling. It describes the different queues that an operating system maintains for processes in different states. These include ready queues for processes ready to execute, and device queues for processes waiting on I/O. It also covers schedulers for long term, short term, and medium term scheduling and different scheduling algorithms like FCFS, priority scheduling, and round robin scheduling. Context switching is described as the mechanism to store and restore process states to enable time sharing of the CPU between processes.
In computing, scheduling is the action..., by nathansel1
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows. The scheduling activity is carried out by a process called scheduler.
CPU Scheduling Criteria (1).pptx, by TSha7
The document discusses key concepts related to CPU scheduling in operating systems. It defines CPU scheduling and its purpose of allowing concurrent process execution. It describes the criteria used for scheduling algorithms and their evaluation. It also explains the different states a process can be in, including new, ready, running, blocked/wait, and terminated. The types of schedulers - long term, short term, and medium term - and their different objectives and functions are outlined as well.
The document discusses processes and process management in an operating system. A process is an instance of a computer program being executed and contains the program code and current activity. Processes go through various states like ready, running, waiting, and terminated. The operating system uses a process control block (PCB) to maintain information about each process like its state, program counter, memory allocation, and other details. Key process operations include creation, termination, and context switching between processes using the PCB.
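A PCB of the kind described above can be sketched as a small Python dataclass. The fields mirror the ones the summary lists (state, program counter, memory allocation); real kernels keep far more, and all field names here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block; real kernels (e.g. Linux's task_struct) hold many more fields."""
    pid: int
    state: str = "new"          # new / ready / running / waiting / terminated
    program_counter: int = 0    # address of the next instruction to execute
    registers: dict = field(default_factory=dict)  # saved CPU context for context switches
    memory_base: int = 0        # simplistic stand-in for memory-management info
    memory_limit: int = 0

pcb = PCB(pid=42)
pcb.state = "ready"            # the OS updates the PCB as the process changes state
print(pcb.state)               # ready
```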
The document discusses processes and process management in operating systems. It begins with an analogy comparing workers to programs and processes. It then defines a process as a program in execution that requires resources like memory and CPU. The document outlines the lifecycle of a process through various states like ready, running, waiting etc. It describes process creation, termination, and scheduling. Process control blocks containing process information are discussed. The need for process management and operations like context switching and process synchronization are also covered.
The document discusses processes and process management in operating systems. It begins with an analogy comparing workers to programs and processes. It then defines a process as a program in execution that uses system resources like memory and CPU. The document outlines the different states a process can be in, like ready, running, waiting, and describes how processes transition between these states. It discusses the concept of a process control block that contains information about each process like its state, registers, and scheduling information. The document also covers topics like process creation, changing process states, suspending processes, and interprocess communication.
Operating Systems chap 2_updated2 (1).pptx, by Amanuelmergia
The document discusses processes and process management in operating systems. It begins with an analogy comparing workers to programs and processes. It then defines a process as a program in execution that requires resources like memory and CPU. The document outlines the lifecycle of a process through various states like ready, running, waiting etc. It describes process creation, termination, and scheduling. Process control blocks containing process information are discussed. The need for process management and operations like context switching and process synchronization are also summarized.
This tutorial explains how an operating system works. The main topics are listed below; further sub-topics are discussed in detail.
1. Kernel architecture.
2. Initialization of the operating system.
3. Processes in the operating system.
4. Management in the operating system.
5. File system.
6. Security in the operating system.
7. Interface in the operating system.
The document discusses various concepts related to process management in operating systems including process scheduling, CPU scheduling, and process synchronization. It defines a process as a program in execution and describes the different states a process can be in during its lifecycle. It also discusses process control blocks which maintain information about each process, and various scheduling algorithms like first come first serve, shortest job first, priority and round robin scheduling.
The process scheduler is responsible for microscopic scheduling by allocating processors to processes and deciding which process runs, for how long, and when. It keeps track of process states using a process control block for each process. Scheduling policies determine which ready process is assigned the processor and for how long based on factors like priority, time quantum elapsed, or an I/O request. Without synchronization, race conditions and deadly embraces can occur when processes share resources.
The document discusses processes, CPU scheduling, and process synchronization. It covers:
- Process concepts including states like running, ready, waiting, and terminated.
- CPU scheduling algorithms like first come first serve, round robin, shortest job first, and priority scheduling. Scheduling objectives are maximizing CPU utilization and minimizing wait time.
- Process synchronization is needed when multiple processes access shared resources. The critical section problem arises when processes need exclusive access to a critical section of code. Solutions ensure mutual exclusion, progress, and bounded waiting.
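The round-robin algorithm mentioned in the list above can be sketched as a small simulation. Burst times and the time quantum are illustrative, and all processes are assumed ready at t=0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin over burst times (all arriving at t=0); return {pid: completion_time}."""
    queue = deque(enumerate(bursts))   # ready queue of (pid, remaining burst)
    time, completion = 0, {}
    while queue:
        pid, left = queue.popleft()
        run = min(quantum, left)
        time += run
        if left > run:
            queue.append((pid, left - run))  # quantum expired: preempt, go to back of queue
        else:
            completion[pid] = time           # finished within this time slice
    return completion

print(round_robin([5, 3, 8], quantum=2))  # {1: 9, 0: 12, 2: 16}
```

Note how the short job (pid 1) finishes early even though it arrived behind pid 0, which is the responsiveness round robin trades throughput for.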
Operating System 28: Fundamentals of Scheduling, by Vaibhav Khanna
The objective of multiprogramming is to have some process running at all times to maximize CPU utilization.
The objective of time-sharing system is to switch the CPU among processes so frequently that users can interact with each program while it is running.
For a uniprocessor system, there will never be more than one running process.
If there are more processes, the rest must wait until the CPU is free and can be rescheduled.
Concepts of processes, process scheduling, operations on processes, inter-process communication, communication in client-server systems, and an overview of the benefits of threads.
This presentation explains Chapter 3 of Andrew S. Tanenbaum's distributed operating systems book, along with other topics related to synchronization in distributed operating systems.
This document discusses different approaches to CPU scheduling. It describes three levels of scheduling: long-term, medium-term, and short-term. For short-term scheduling, which determines the next ready process to execute, it covers scheduling algorithms like first-come first-served (FCFS), shortest job first (SJF), shortest remaining time (SRT), and round-robin. It analyzes the advantages and disadvantages of each approach against criteria like CPU utilization, waiting time, throughput, response time, and turnaround time.
Process management: This ppt contains all required information regarding oper..., by ApurvaLaddha
The document discusses processes and process management. It defines a process as an active program in execution. Processes fall into two categories - system processes started by the OS and user processes started by the user. Each process runs independently and has its own memory space. Process management allows controlling processes by starting, ending, and setting priorities. Processes pass through states like new, ready, running, wait, and termination. The OS performs operations on processes like creation, scheduling, execution, and deletion. Schedulers like long-term, short-term, and medium-term manage processes. A process control block tracks process information.
Symmetric multiprocessing involves multiple processors that share common memory and operating system. All processors are treated equally and can execute any process. It allows for increased throughput but the operating system is more complex. Context switching involves saving and restoring a process's state when switching between processes. Scheduling algorithms like round robin determine which ready processes get CPU time. Concurrency enables multiple processes to run simultaneously through time-sharing and introduces challenges around resource sharing.
This document provides an overview of processes and operating systems. It discusses key concepts like processes, multitasking, multiprocessing, and multithreading. It also covers scheduling algorithms like round robin, priority scheduling, and rate monotonic scheduling. Real-time operating systems prioritize tasks based on timing constraints to ensure deadlines are met. Preemption and priorities are important methods for scheduling in real-time systems.
The document discusses process scheduling in operating systems. It defines process scheduling as the activity of selecting which process runs on the CPU. It describes the different queues operating systems use to manage processes, including ready, job, and device queues. It also discusses long-term, short-term, and medium-term schedulers and their roles in managing processes over different timescales. Context switching and cooperating processes are also summarized.
The document discusses different types of schedulers and scheduling algorithms used in operating systems. It describes:
1) Primary and secondary schedulers that prioritize threads and increase priority of non-executing threads.
2) Four states a process can be in: new, ready, running, waiting, terminated. This helps the scheduler respond to each process.
3) Scheduling algorithms like FCFS, SJF, SRTN, priority, and round robin - discussing their approach, advantages, disadvantages.
4) Concepts like arrival time, burst time, completion time, turnaround time, waiting time used in CPU scheduling.
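The timing concepts in point 4 are related by two standard formulas: turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time. A minimal sketch with illustrative numbers:

```python
def metrics(arrival, burst, completion):
    """Standard per-process CPU-scheduling metrics."""
    turnaround = completion - arrival  # total time the process spent in the system
    waiting = turnaround - burst       # time spent ready but not running
    return turnaround, waiting

# e.g. a process arriving at t=2 with a 4-unit burst that finishes at t=10
print(metrics(arrival=2, burst=4, completion=10))  # (8, 4)
```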
Lecture 5- Process Synchonization_revised.pdf, by Amanuelmergia
This document discusses process synchronization and the critical section problem in operating systems. It provides background on how concurrent processes may need to access shared resources and data in a controlled manner to maintain consistency. The classic critical section problem is defined, where multiple processes need exclusive access to a critical section of code that manipulates shared data. Semaphores are introduced as a solution, where wait and signal operations on a semaphore can be used to control access to critical sections and ensure only one process is in its critical section at a time.
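The wait/signal discipline described above can be sketched with a binary semaphore guarding a shared counter. Thread and iteration counts are illustrative; without the semaphore, the unprotected read-modify-write could lose updates:

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore: 1 means the critical section is free

def worker(n):
    global counter
    for _ in range(n):
        sem.acquire()          # wait(S): blocks while another thread is inside
        counter += 1           # critical section: read-modify-write on shared data
        sem.release()          # signal(S): lets one waiting thread proceed

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400000
```

Because only one thread can hold the semaphore at a time, every increment is applied, satisfying mutual exclusion for the critical section.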
2. Process
• A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
• To put it in simple terms, we write our computer programs in a text file, and when we execute the program it becomes a process that performs all the tasks mentioned in the program.
• When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text and data. The following image shows a simplified layout of a process inside main memory.
3. • Stack
The process stack contains temporary data such as method/function parameters, return addresses and local variables.
• Heap
This is memory that is dynamically allocated to the process during its run time.
• Text
This section contains the compiled program code, i.e. the executable instructions loaded when the program is launched.
• Data
This section contains the global and static variables.
4. Process Life Cycle (Process State)
When a process executes, it passes through different states.
5. • Start
This is the initial state when a process is first started/created.
• Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or while running, if it is interrupted by the scheduler so the
CPU can be assigned to some other process.
• Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set
to running and the processor executes its instructions.
• Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.
• Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved
to the terminated state where it waits to be removed from main memory.
6. Context Switching
• A context switch is the mechanism for storing and restoring the state, or context, of a CPU in the Process Control Block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
• Process Control Block: when the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block.
7. CPU Scheduling Algorithms
A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. Popular process scheduling algorithms, discussed in this chapter, include:
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-First (SJF) Scheduling
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling
• Multi-level Feedback Scheduling
8. Algorithms are either non-preemptive or preemptive.
• Non-preemptive algorithms are designed so that once a process
enters the running state, it cannot be preempted until it terminates
or voluntarily moves to the waiting state.
• Preemptive scheduling is based on priority where a scheduler
may preempt a low priority running process anytime when a
high priority process enters into a ready state.
9. CPU Scheduling in Operating Systems
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which the process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Difference between completion time and arrival time.
10. Important formulas
Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T): Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
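As a quick check of these formulas, here is a minimal sketch (assuming a non-preemptive FCFS schedule and made-up arrival/burst times) that computes turnaround and waiting time per process:

```python
def fcfs_times(processes):
    """Return {name: (turnaround, waiting)} for a non-preemptive FCFS schedule.

    processes: list of (name, arrival_time, burst_time), sorted by arrival.
    """
    clock, result = 0, {}
    for name, arrival, burst in processes:
        clock = max(clock, arrival) + burst      # completion time of this process
        turnaround = clock - arrival             # Turn Around Time = Completion - Arrival
        result[name] = (turnaround, turnaround - burst)  # Waiting = Turnaround - Burst
    return result

# Hypothetical workload: P1 arrives at 0 (burst 5), P2 at 1 (burst 3), P3 at 2 (burst 8)
times = fcfs_times([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
```

Here P2 waits 4 time units behind P1 even though its own burst is short, which is exactly the convoy effect FCFS is prone to.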
11. Objectives of Process Scheduling
Algorithm
• Max CPU utilization [Keep CPU as busy as possible]
• Fair allocation of CPU.
• Max throughput [Number of processes that complete their
execution per time unit]
• Min turnaround time [Time taken by a process to finish
execution]
• Min waiting time [Time a process waits in ready queue]
• Min response time [Time when a process produces first
response]
12. Process Synchronization
There are two ways any process can execute:
• Concurrent execution – the CPU scheduler switches rapidly between processes. A process can be stopped at any point and the processor assigned to another process's instructions; only one instruction executes at a time.
• Parallel execution – two or more instructions of different processes execute simultaneously on different processing cores.
13. Why is Process Synchronization Important?
When several processes share data while running in parallel on different cores, changes made by one process may overwrite changes made by another, resulting in inconsistent data. Processes therefore need to be synchronized; managing system resources and processes to avoid such situations is known as Process Synchronization.
14. Classic Banking Example –
• Consider your bank account has 5000$.
• You try to withdraw 4000$ using net banking and simultaneously try to withdraw
via ATM too.
• For net banking, at time t = 0 ms the bank checks that you have 5000$ as balance; you are trying to withdraw 4000$, which is less than your available balance, so it lets you proceed, and at time t = 1 ms it connects you to the server to transfer the amount.
• Meanwhile, for the ATM, at time t = 0.5 ms the bank checks your available balance, which is still 5000$, and thus lets you enter your ATM password and the withdrawal amount.
• At time t = 1.5 ms the ATM dispenses 4000$ in cash, and at time t = 2 ms the net banking transfer of 4000$ completes.
15. • Now, due to concurrent access and the processing time the computer takes on both paths, you were able to withdraw 3000$ more than your balance: in total 8000$ was taken out while the balance was only 5000$.
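The root cause is that the balance check and the debit are two separate steps that can interleave. A minimal sketch in Python (hypothetical account; a lock makes the check-and-debit pair atomic):

```python
import threading

balance = 5000
lock = threading.Lock()          # serializes access to the shared balance

def withdraw(amount):
    global balance
    with lock:                   # check and debit now happen as one atomic step
        if balance >= amount:
            balance -= amount

# Two simultaneous 4000$ withdrawals, as in the net banking / ATM story
t1 = threading.Thread(target=withdraw, args=(4000,))
t2 = threading.Thread(target=withdraw, args=(4000,))
t1.start(); t2.start()
t1.join(); t2.join()
# Exactly one withdrawal succeeds; the balance ends at 1000$, never negative.
```

Without the lock, both threads could pass the balance check before either debit runs, reproducing the 8000$ over-withdrawal.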
16. How to Solve this Situation
• To avoid such situations, process synchronization is used: a concurrent process P2 is made aware of an existing concurrent process P1 and is not allowed to proceed; P2's execution is allowed only once P1 completes.
• Process Synchronization also prevents the race condition: the situation in which several processes access and manipulate the same data, so that the outcome of the execution depends upon the particular order in which the accesses take place.
17. What is a Race Condition?
• A race condition occurs when more than one process tries to access and modify the same shared data or resources. Because many processes modify the shared data concurrently, there is a high chance that a process ends up with the wrong result or data; every process "races" to have its update be the one that sticks, which is why this is called a race condition.
• The value of the shared data depends on the execution order of the processes, since many of them try to modify the data at the same time. The race condition is associated with the critical section. To handle it, we can ensure that only one process at a time can access the critical section; such a section is called an atomic section.
18. Types of Process Synchronization
• On the basis of synchronization, processes are categorized as one of the following
two types:
• Independent Process: Execution of one process does not affect the execution of
other processes.
• Cooperative Process: Execution of one process affects the execution of other
processes.
19. Elements of the Process
• Entry Section – To enter the critical section, a process must request permission; the entry section code implements this request.
• Critical Section – The segment of code where the process changes common variables, updates a table, writes to a file and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
• Exit Section – The code that follows the critical section and marks its end.
• Remainder Section – The remaining code of the process is known as the remainder section.
20. Critical Section Problem
A solution to the critical section problem must satisfy the following three
conditions:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those processes
that are not executing in their remainder section can participate in deciding
which will enter in the critical section next, and the selection cannot be
postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
22. Semaphores
• A semaphore is simply a non-negative variable shared between threads. It is another solution to the critical section problem, and it is a signalling mechanism: a thread that is waiting on a semaphore can be signalled by another thread.
• It uses two atomic operations for process synchronization: 1) wait and 2) signal.
Example
WAIT(S):
    while (S <= 0)
        ; // busy wait until S becomes positive
    S = S - 1;
SIGNAL(S):
    S = S + 1;
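In practice, libraries provide blocking (non-busy-wait) semaphores with the same wait/signal semantics. A minimal sketch using Python's threading.Semaphore, where acquire plays the role of wait and release the role of signal, to protect a shared counter:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: initial value 1
counter = 0

def critical_work():
    global counter
    for _ in range(100000):
        sem.acquire()          # wait(S): blocks until S > 0, then decrements S
        counter += 1           # critical section
        sem.release()          # signal(S): increments S, waking one waiter

threads = [threading.Thread(target=critical_work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The binary semaphore guarantees only one thread is inside the increment at a time, so the final count is exactly 200000.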
23. Classical Problems of Synchronization
• The Bounded Buffer Problem (also called the Producer-Consumer Problem)
• The Readers-Writers Problem
• The Dining Philosophers Problem
These problems are used to test nearly every newly proposed synchronization
scheme or primitive.
24. The Bounded Buffer Problem (Producer-Consumer Problem)
Consider:
• a buffer which can store n items
• a producer process which creates the items (1 at a time)
• a consumer process which processes them (1 at a time)
• A producer cannot produce unless there is an empty buffer slot to fill.
• A consumer cannot consume unless there is at least one produced item.
25. Semaphore empty = N, full = 0, mutex = 1;

process producer {
    while (true) {
        empty.acquire();
        mutex.acquire();
        // produce: add an item to the buffer
        mutex.release();
        full.release();
    }
}

process consumer {
    while (true) {
        full.acquire();
        mutex.acquire();
        // consume: remove an item from the buffer
        mutex.release();
        empty.release();
    }
}

The semaphore mutex provides mutual exclusion for access to the buffer; empty counts free slots and full counts produced items.
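A runnable version of the same scheme (a sketch, assuming one producer, one consumer and a hypothetical buffer of 5 slots):

```python
import threading
from collections import deque

N = 5                             # assumed buffer capacity
buffer = deque()
empty = threading.Semaphore(N)    # counts empty slots, starts at N
full = threading.Semaphore(0)     # counts produced items, starts at 0
mutex = threading.Semaphore(1)    # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()           # wait for a free slot
        with mutex:
            buffer.append(item)   # produce
        full.release()            # one more item available

def consumer(count):
    for _ in range(count):
        full.acquire()            # wait for an item
        with mutex:
            consumed.append(buffer.popleft())  # consume
        empty.release()           # one more slot free

p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
```

empty.acquire() blocks the producer when the buffer is full, and full.acquire() blocks the consumer when it is empty, exactly as the pseudocode above intends.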
26. Readers-Writers Problem
• A data item such as a file is shared among several processes.
• Each process is classified as either a reader or writer.
• Multiple readers may access the file simultaneously.
• A writer must have exclusive access (i.e., cannot share with either a reader or
another writer).
27. A solution gives priority to either readers or writers.
• readers' priority: no reader is kept waiting unless a writer has already obtained
permission to access the database
• writers' priority: if a writer is waiting to access the database, no new readers can
start reading
• A solution to either version may cause starvation
• in the readers' priority version, writers may starve
• in the writers' priority version, readers may starve
28. • A semaphore solution to the readers' priority version (without addressing starvation):

Semaphore mutex = 1;
Semaphore db = 1;
int readerCount = 0;

process writer {
    db.acquire();
    // write
    db.release();
}

process reader {
    // protect readerCount
    mutex.acquire();
    ++readerCount;
    if (readerCount == 1)
        db.acquire();    // first reader locks out writers
    mutex.release();
    // read
    // protect readerCount
    mutex.acquire();
    --readerCount;
    if (readerCount == 0)
        db.release();    // last reader lets writers back in
    mutex.release();
}
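The same solution can be sketched in runnable form with Python semaphores (a hypothetical mix of two readers and one writer; the order of entries in the log may vary between runs, only the mutual exclusion is guaranteed):

```python
import threading

mutex = threading.Semaphore(1)   # protects readerCount, as in the pseudocode
db = threading.Semaphore(1)      # exclusive access to the shared data
readerCount = 0
log = []                         # records who accessed the data

def reader(name):
    global readerCount
    with mutex:
        readerCount += 1
        if readerCount == 1:     # first reader locks out writers
            db.acquire()
    log.append(f"{name} reading")
    with mutex:
        readerCount -= 1
        if readerCount == 0:     # last reader lets writers back in
            db.release()

def writer(name):
    with db:                     # a writer needs exclusive access
        log.append(f"{name} writing")

threads = [threading.Thread(target=reader, args=("R1",)),
           threading.Thread(target=writer, args=("W1",)),
           threading.Thread(target=reader, args=("R2",))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The first reader to arrive acquires db on behalf of all readers, so readers can overlap each other while any writer is held out.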
29. Dining Philosophers Problem
• The dining philosophers problem states that there are 5 philosophers sharing a circular table, and they alternately eat and think. There is a bowl of rice for each of the philosophers and 5 chopsticks. A philosopher needs both their left and right chopstick to eat; a hungry philosopher may only eat if both chopsticks are available. Otherwise the philosopher puts down the chopstick they hold and begins thinking again.
30. Solution of Dining Philosophers Problem
• A solution to the Dining Philosophers Problem is to use a semaphore to represent a chopstick. A chopstick can be picked up by executing a wait operation on the semaphore and released by executing a signal operation.
• The chopsticks are declared as:
semaphore chopstick[5];
• Initially all elements of chopstick are initialized to 1, as the chopsticks are on the table and not picked up by any philosopher.
31. The structure of an arbitrary philosopher i is given as follows:

do {
    wait( chopstick[i] );
    wait( chopstick[(i + 1) % 5] );
    // EATING THE RICE
    signal( chopstick[i] );
    signal( chopstick[(i + 1) % 5] );
    // THINKING
} while (1);

• In the structure above, wait operations are first performed on chopstick[i] and chopstick[(i + 1) % 5], meaning philosopher i has picked up the chopsticks on both sides; then the eating is performed.
• After that, signal operations are performed on chopstick[i] and chopstick[(i + 1) % 5], meaning philosopher i has finished eating and put down both chopsticks, and the philosopher goes back to thinking.
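One caveat the pseudocode leaves open: if all five philosophers pick up their left chopstick at the same moment, each waits forever for the right one, which is itself a deadlock (a circular wait). A common fix is to impose a global ordering on the chopsticks; the sketch below does this in Python (10 eating rounds per philosopher is an arbitrary, illustrative choice):

```python
import threading

N = 5
# One binary semaphore per chopstick, initialized to 1 (on the table)
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=10):
    for _ in range(rounds):
        # Resource ordering: always pick up the lower-numbered chopstick first,
        # so the circular wait of the naive solution cannot form.
        first, second = sorted((i, (i + 1) % N))
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # EATING THE RICE
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # THINKING

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every philosopher acquires the lower-numbered chopstick first, the run always terminates with every philosopher having eaten all their rounds.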
32. Deadlock
• A deadlock happens in an operating system when two or more processes each need, to complete their execution, a resource that is held by another of the processes.
33. Coffman Conditions
A deadlock occurs if the four Coffman conditions hold true.
• Mutual Exclusion
• Hold and wait
• No Preemption
• Circular wait
34. • Mutual Exclusion
• There should be a resource that can only be held by one process at a time. In the
diagram below, there is a single instance of Resource 1 and it is held by Process 1
only.
35. • Hold and Wait - A process can hold multiple resources and still request more resources that are held by other processes. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.
36. • No Preemption - A resource cannot be forcibly taken from a process; a process can only release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1; it will only be released when Process 1 relinquishes it voluntarily after its execution is complete.
37. • Circular Wait - A process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and requests Resource 1, while Process 2 is allocated Resource 1 and requests Resource 2; this forms a circular wait loop.
38. Deadlock Prevention
• Eliminate Mutual Exclusion
• Eliminate Hold and wait
• Eliminate No Preemption
• Eliminate Circular Wait
39. • Eliminate Mutual Exclusion
It is not possible to violate mutual exclusion for every resource, because some resources, such as tape drives and printers, are inherently non-shareable.
• Eliminate No Preemption
Preempt resources from a process when those resources are required by other, higher-priority processes.
40. • Eliminate Hold and Wait
• Allocate all required resources to a process before the start of its execution. This eliminates the hold-and-wait condition but leads to low device utilization: for example, if a process requires a printer only at a later stage, and we allocate the printer before its execution starts, the printer remains blocked until the process has completed.
• Alternatively, a process must release its current set of resources before requesting a new set. This solution may lead to starvation.
41. • Eliminate Circular Wait
Each resource is assigned a numerical identifier, and a process may request resources only in increasing order of numbering. For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
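This ordering rule is easy to express as a check; a minimal sketch (hypothetical helper, working on resource numbers only):

```python
def may_request(held, requested):
    """Under resource ordering, a request is granted only if the requested
    resource's number is higher than every resource the process holds."""
    return all(requested > r for r in held)

# P1 holds R5: a request for R4 is refused, a request for R6 is allowed.
```

Because every process acquires resources in strictly increasing order, no cycle of waits can ever form.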
42. Deadlock Avoidance
Deadlock avoidance can be done with the Banker's Algorithm.
• Banker's Algorithm
The Banker's Algorithm is a resource allocation and deadlock avoidance algorithm that tests every resource request made by a process: it checks whether the system would remain in a safe state after granting the request. If so, the request is allowed; if no safe state would result, the request is denied.
43. Inputs to the Banker's Algorithm:
• Maximum need of resources of each process.
• Resources currently allocated to each process.
• Maximum free resources available in the system.
44. A request will only be granted if both conditions hold:
• The request made by the process is less than or equal to the maximum need of that process.
• The request made by the process is less than or equal to the freely available resources in the system.
45. The Banker's Algorithm is the combination of the safety algorithm and the resource
request algorithm to control the processes and avoid deadlock in a system:
• Safety algorithm
• Resource request algorithm
46. Safety Algorithm
The safety algorithm checks whether or not a system is in a safe state, i.e. whether a safe sequence exists:
• Step 1: There are two vectors, Work and Finish, of length m and n in the safety algorithm.
Initialize: Work = Available
Finish[i] = false for i = 0, 1, 2, …, n - 1.
• Step 2: Find an index i such that both:
Need[i] <= Work
Finish[i] == false
If no such i exists, go to step 4.
• Step 3: Work = Work + Allocation[i] // process i can finish and return its resources
Finish[i] = true
Go to step 2 to check the resource availability for the next process.
• Step 4: If Finish[i] == true for all i, the system is in a safe state.
47. Resource Request Algorithm
• The resource request algorithm checks how the system will behave when a process makes a request for each type of resource, given as a request vector Request(i):
Step 1: If Request(i) <= Need(i), go to step 2. Otherwise the condition fails, which means process P[i] has exceeded its maximum claim for the resource.
Step 2: If Request(i) <= Available, go to step 3. Otherwise process P[i] must wait, since the resources are not available.
48. Step 3: Pretend the requested resources are allocated to the process by changing the state:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)
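Steps 1 and 2 of the request check can be sketched as a small helper (hypothetical function; vectors are plain lists):

```python
def check_request(request, need, available):
    """Steps 1-2 of the resource request algorithm.

    Raises if the process exceeds its maximum claim; returns True when the
    request can be provisionally granted, False when the process must wait.
    """
    if any(r > n for r, n in zip(request, need)):
        raise ValueError("process exceeded its maximum claim")
    return all(r <= a for r, a in zip(request, available))

granted = check_request([1, 0, 2], [1, 2, 2], [3, 3, 2])   # both checks pass
```

After a True result, the state is updated as in step 3 and the safety algorithm is re-run before the grant becomes final.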
49. • Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C, where resource type A has 10 instances, B has 5 and C has 7. The snapshot on the slide gives:
Allocation: P1 = (0, 1, 0), P2 = (2, 0, 0), P3 = (3, 0, 2), P4 = (2, 1, 1), P5 = (0, 0, 2)
Max: P1 = (7, 5, 3), P2 = (3, 2, 2), P3 = (9, 0, 2), P4 = (2, 2, 2), P5 = (4, 3, 3)
Available: (3, 3, 2)
50. Answer the following questions using the Banker's Algorithm:
1. What is the content of the need matrix?
2. Determine whether the system is safe or not.
3. If process P1 makes the resource request (1, 0, 2), can the system accept this request immediately?
51. Ans. 1:
The content of the need matrix is as follows:
• Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = (7, 4, 3)
Need for P2: (3, 2, 2) - (2, 0, 0) = (1, 2, 2)
Need for P3: (9, 0, 2) - (3, 0, 2) = (6, 0, 0)
Need for P4: (2, 2, 2) - (2, 1, 1) = (0, 1, 1)
Need for P5: (4, 3, 3) - (0, 0, 2) = (4, 3, 1)
52. • Ans. 2: Apply the Banker's Algorithm.
Available resources of A, B and C are 3, 3 and 2.
Now we check whether each process's need can be met from the available resources.
Step 1: For process P1:
Need <= Available
(7, 4, 3) <= (3, 3, 2): condition is false, so we examine another process, P2.
Step 2: For process P2:
Need <= Available
(1, 2, 2) <= (3, 3, 2): condition is true.
New Available = Available + Allocation = (3, 3, 2) + (2, 0, 0) => (5, 3, 2)
53. Step 3: For process P3:
Need <= Available
(6, 0, 0) <= (5, 3, 2): condition is false, so we examine another process, P4.
Step 4: For process P4:
Need <= Available
(0, 1, 1) <= (5, 3, 2): condition is true.
New Available = Available + Allocation = (5, 3, 2) + (2, 1, 1) => (7, 4, 3)
Similarly, we examine process P5.
54. Step 5: For process P5:
Need <= Available
(4, 3, 1) <= (7, 4, 3): condition is true.
New Available = Available + Allocation = (7, 4, 3) + (0, 0, 2) => (7, 4, 5)
Now we again examine the skipped processes, P1 and P3.
Step 6: For process P1:
Need <= Available
(7, 4, 3) <= (7, 4, 5): condition is true.
New Available = Available + Allocation = (7, 4, 5) + (0, 1, 0) => (7, 5, 5)
Next we examine process P3.
55. Step 7: For process P3:
Need <= Available
(6, 0, 0) <= (7, 5, 5): condition is true.
New Available = Available + Allocation = (7, 5, 5) + (3, 0, 2) => (10, 5, 7)
All processes can finish, so the system is in a safe state with safe sequence P2, P4, P5, P1, P3.
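The safety check above can be sketched as a small function over the example's matrices. Note that a system can have several safe sequences; this sketch scans processes in index order, so it may report a different (equally valid) sequence than the hand trace:

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm: return (is the state safe?, finishing order of indices)."""
    n = len(allocation)
    work = list(available)                       # Work = Available
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]                   # Need = Max - Allocation
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Process i can finish; it returns its allocation to Work.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, sequence               # no runnable process: unsafe
    return True, sequence

# Snapshot from the slides: P1..P5 are indices 0..4, resources A, B, C
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe, seq = is_safe([3, 3, 2], max_need, allocation)
```

For this snapshot the function confirms the state is safe, starting with P2 and P4 just as in the hand trace.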
56. • Ans. 3:
For the request (1, 0, 2) by P1 we first check Request <= Need, i.e. (1, 0, 2) <= (7, 4, 3), and Request <= Available, i.e. (1, 0, 2) <= (3, 3, 2); both conditions are true. However, before granting, the Banker's Algorithm pretends the allocation and re-runs the safety check: the new Available would be (2, 3, 0), and with 0 instances of C free no process's remaining need can be satisfied. The resulting state is unsafe, so the request cannot be accepted immediately; P1 must wait.