The document discusses process management concepts including processes, threads, process scheduling, and interprocess communication. It provides definitions and explanations of key terms:
- A process is a program in execution that passes through various states like ready, running, waiting, and terminated.
- Threads allow concurrency within a process and are more lightweight than processes.
- Process scheduling algorithms like FCFS, SJN, priority, and round robin are used to allocate the CPU to ready processes.
- Interprocess communication and synchronization techniques like semaphores allow processes to share resources and data in a synchronized manner to prevent inconsistencies.
The objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization.
The objective of a time-sharing system is to switch the CPU among processes so frequently that users can interact with each program while it is running.
On a uniprocessor system there is never more than one running process; if there are more processes, the rest must wait until the CPU is free and can be rescheduled.
3. Introduction
Prof. K. Adisesha (Ph. D)
Introduction to Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
➢ When a program is loaded into the main memory it becomes a process.
➢ It can be divided into four sections ─
❖ Stack: The process Stack contains the temporary data such as
method/function parameters, return address and local variables
❖ Heap: This is dynamically allocated memory to a process during its run
time
❖ Text: This contains the compiled program code; the current activity is
represented by the value of the Program Counter and the contents of the processor's registers.
❖ Data: This section contains the global and static variables.
4. Introduction
Process Life Cycle:
When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
➢ A process can have one of the following five states at a time.
❖ Start
❖ Ready
❖ Running
❖ Waiting
❖ Terminated or Exit
5. Introduction
Process Life Cycle:
➢ Start: This is the initial state when a process is first started/created.
➢ Ready: The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the OS so that they can run.
➢ Running: Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
➢ Waiting: Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
➢ Terminated or Exit: Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from
main memory.
6. Introduction
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for
every process.
➢ The PCB is identified by an integer process ID (PID).
➢ A PCB keeps all the information needed to keep track of
a process.
➢ The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.
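The fields a PCB typically holds can be sketched as a toy data structure. This is an illustrative sketch only — the field names follow common textbook descriptions (state, program counter, registers, open files), not any particular operating system's layout.

```python
# Hypothetical PCB sketch; field names are textbook-style, not from a real OS.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                            # integer process ID
    state: str = "start"                # start/ready/running/waiting/terminated
    program_counter: int = 0            # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    open_files: list = field(default_factory=list)  # file descriptors in use

pcb = PCB(pid=42)
pcb.state = "ready"                     # process admitted to the ready queue
```

The OS would update these fields on every context switch: the running process's registers and program counter are saved into its PCB, and the next process's values are restored from its own.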
7. Threading
Thread:
A thread is also called a lightweight process. A thread is a flow of execution through
the process code.
➢ A thread shares with its peer threads some information, such as the code segment, data
segment and open files.
➢ Threads provide a way to improve application performance through parallelism.
➢ All threads of a process can share the same set of open files and child processes.
➢ Multithreaded processes use fewer resources than equivalent collections of processes.
➢ Threads are implemented in following two ways −
❖ User Level Threads − User managed threads.
❖ Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
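The sharing described above can be seen in a few lines: threads created inside one process all see the same global data, with no IPC needed. This is a minimal sketch; the worker function and counts are illustrative.

```python
# Threads within one process share the data segment (here, a global list);
# illustrative sketch using Python's standard threading module.
import threading

shared = []                         # lives in the process's shared data segment
lock = threading.Lock()

def worker(n):
    with lock:                      # synchronized access to the shared list
        shared.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all four threads appended to the same list without any message passing
```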
8. Threading
Multithreading Models:
Some operating system provide a combined user level thread and Kernel level thread
facility.
➢ In a combined system, multiple threads within the same application can run in parallel on
multiple processors and a blocking system call need not block the entire process.
➢ There are three multithreading models:
❖ Many to many relationship
❖ Many to one relationship
❖ One to one relationship
9. Threading
Thread:
Difference between User-Level & Kernel-Level Threads:
❖ User-level threads are faster to create and manage; kernel-level threads are slower to
create and manage.
❖ User-level threads are implemented by a thread library at the user level; kernel-level
threads are created and supported by the operating system.
❖ A user-level thread is generic and can run on any operating system; a kernel-level
thread is specific to the operating system.
❖ With pure user-level threads, a multithreaded application cannot take advantage of
multiprocessing; with kernel-level threads, kernel routines themselves can be multithreaded.
10. Threading
Advantages of Thread:
The various advantages of using Thread are:
➢ Threads minimize the context switching time.
➢ Use of threads provides concurrency within a process.
➢ Efficient communication.
➢ It is more economical to create and context switch threads.
➢ Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
11. Swapping
Swapping:
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to secondary storage and make that memory available to other processes.
➢ Swapping is also known as a technique for memory compaction.
➢ The total time taken by the swapping process includes the time it takes to move the
entire process to secondary storage and then copy it back into main memory.
12. Process Scheduling
Process Scheduling:
Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU, and the selection of the next process, on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems.
➢ The OS maintains all PCBs in Process Scheduling Queues.
❖ Job queue: This queue keeps all the processes in the
system.
❖ Ready queue: This queue keeps a set of all processes
residing in main memory, ready and waiting to
execute.
❖ Device queues: The processes which are blocked due
to unavailability of an I/O device constitute this queue.
13. Process Scheduling
Schedulers:
Schedulers are special system software which handle process scheduling in various ways.
➢ The main task is to select the jobs to be submitted into the system and to decide which process to run.
➢ Schedulers are of three types:
❖ Long-Term Scheduler:
▪ It is also called a job scheduler.
▪ A long-term scheduler determines which programs are admitted to the system for processing.
❖ Short-Term Scheduler:
▪ It is also called the CPU scheduler.
▪ Its main objective is to increase system performance in accordance with the chosen set of criteria.
❖ Medium-Term Scheduler:
▪ Medium-term scheduling is a part of swapping.
▪ It removes the processes from the memory. It reduces the degree of multiprogramming.
14. Process Scheduling
Scheduling Criteria:
The scheduler selects from among the processes in memory that are ready to execute, and
allocates the CPU based on certain scheduling criteria.
➢ Scheduling Criteria are based on:
❖ CPU utilization – keep the CPU as busy as possible
❖ Throughput – No. of processes that complete their execution per time unit
❖ Turnaround time – amount of time to execute a particular process
❖ Waiting time – amount of time a process has been waiting in the ready queue
❖ Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
15. Process Scheduling
Scheduling algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.
➢ These algorithms are either non-preemptive or preemptive.
➢ Popular process scheduling algorithms include:
❖ First-Come, First-Served (FCFS) Scheduling
❖ Shortest-Job-Next (SJN) Scheduling
❖ Priority Scheduling
❖ Round Robin(RR) Scheduling
❖ Multiple-Level Queues Scheduling.
16. Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, Jobs are executed on first come, first
serve basis.
➢ It is a non-preemptive scheduling algorithm.
➢ Easy to understand and implement.
➢ Its implementation is based on FIFO queue.
➢ Poor in performance, as the average wait time is high.
17. Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, Jobs are executed on first come, first
serve basis.
Example: FCFS Scheduling
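The slide's worked example is a figure that is not reproduced here; FCFS can instead be sketched in a few lines. The burst times below are the classic textbook set (24, 3, 3) and are illustrative, not taken from the missing figure.

```python
# Minimal FCFS sketch: all processes arrive at t=0, run in arrival order.
def fcfs(bursts):
    """Return (waiting_times, turnaround_times) per process, in arrival order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)       # time spent queued before first run
        clock += burst              # non-preemptive: runs to completion
        turnaround.append(clock)    # completion time (arrival was t=0)
    return waiting, turnaround

w, t = fcfs([24, 3, 3])             # illustrative burst times
avg_wait = sum(w) / len(w)          # (0 + 24 + 27) / 3 = 17.0
```

Note how a single long job at the head of the queue (the "convoy effect") inflates everyone else's waiting time — the reason FCFS's average wait is poor.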
18. Scheduling Algorithms
Shortest Job Next (SJN):
This is also known as shortest job first, associate with each process the length of its
next CPU burst. Use these lengths to schedule the process with the shortest time.
➢ This is a non-preemptive scheduling algorithm.
➢ Best approach to minimize waiting time.
➢ Easy to implement in Batch systems where required CPU time is known in advance.
➢ Impossible to implement in interactive systems where the required CPU time is not
known in advance.
➢ The processor must know in advance how much time the process will take.
19. Scheduling Algorithms
Shortest Job Next (SJN):
This is also known as shortest job first, associate with each process the length of its
next CPU burst. Use these lengths to schedule the process with the shortest time.
➢ Example:
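The example figure is not reproduced here; a minimal SJN sketch follows. Burst times are illustrative and chosen so the effect on average waiting time is easy to check by hand.

```python
# Non-preemptive shortest-job-next: all jobs arrive at t=0,
# and the job with the shortest burst runs first.
def sjn(bursts):
    """Return waiting time per job, indexed like the input list."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting, clock = [0] * len(bursts), 0
    for i in order:                 # run jobs shortest-first
        waiting[i] = clock
        clock += bursts[i]
    return waiting

w = sjn([6, 8, 7, 3])               # shortest (3) runs first, then 6, 7, 8
avg = sum(w) / len(w)               # waits [3, 16, 9, 0] → average 7.0
```

FCFS on the same bursts would give an average wait of (0 + 6 + 14 + 21) / 4 = 10.25, which is why SJN is described as the best approach for minimizing waiting time.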
20. Scheduling Algorithms
Priority Scheduling:
Priority scheduling is a priority based algorithm and one of the most common
scheduling algorithms in batch systems.
➢ Each process is assigned a priority. The process with the highest priority is executed
first, and so on.
➢ Processes with same priority are executed on first come first served basis.
➢ Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
21. Scheduling Algorithms
Priority Based Scheduling:
Priority scheduling may be preemptive or non-preemptive; it is one of the most common
scheduling algorithms in batch systems.
➢ Example:
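The example figure is not reproduced here; the non-preemptive variant can be sketched as below. Process names, bursts, and priority values are illustrative (lower number = higher priority, ties broken first-come-first-served).

```python
# Non-preemptive priority scheduling: all jobs arrive at t=0,
# lower priority number = higher priority, ties broken in arrival order.
def priority_schedule(jobs):
    """jobs = [(name, burst, priority)]; returns waiting time per name."""
    # sorted() is stable, so equal priorities keep their arrival (list) order
    order = sorted(jobs, key=lambda j: j[2])
    waiting, clock = {}, 0
    for name, burst, _ in order:
        waiting[name] = clock
        clock += burst
    return waiting

w = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3)])
# P2 (priority 1) runs first, then P1 and P3 (both priority 3) in FCFS order
```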
22. Scheduling Algorithms
Round Robin Scheduling:
Each process gets a small unit of CPU time (time quantum), after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Round Robin is the preemptive process scheduling algorithm.
➢ Each process is provided a fixed time to execute, called a quantum.
➢ Once a process is executed for a given time period, it is preempted and other process
executes for a given time period.
➢ Context switching is used to save states of preempted processes.
23. Scheduling Algorithms
Round Robin Scheduling:
Each process gets a small unit of CPU time (time quantum), after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Example:
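The example figure is not reproduced here; round robin can be sketched with a FIFO ready queue. Burst times and the quantum below are illustrative.

```python
# Round-robin sketch: all processes arrive at t=0; a preempted process
# is requeued at the tail of the ready queue.
from collections import deque

def round_robin(bursts, quantum):
    """Return completion time per process, indexed like the input list."""
    ready = deque(enumerate(bursts))         # (index, remaining burst)
    done, clock = [0] * len(bursts), 0
    while ready:
        i, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run                         # process runs for one time slice
        if remaining > run:
            ready.append((i, remaining - run))  # preempted, back of the queue
        else:
            done[i] = clock                  # finished within this slice
    return done

finish = round_robin([24, 3, 3], quantum=4)  # short jobs finish early
```

With a quantum of 4, the two short jobs complete at t=7 and t=10 instead of waiting behind the 24-unit job as they would under FCFS.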
24. Scheduling Algorithms
Multiple-Level Queues Scheduling:
Multiple-level queues are not an independent scheduling algorithm.
➢ They make use of other existing algorithms to group and schedule jobs with common
characteristics.
❖ Multiple queues are maintained for processes with common characteristics.
❖ Each queue can have its own scheduling algorithms.
❖ Priorities are assigned to each queue.
25. Interprocess Communication
Interprocess Communication (IPC):
Interprocess communication is a mechanism which allows processes to communicate
with each other and synchronize their actions.
➢ The communication between these processes can be seen as a method of co-operation
between them.
➢ Some of the methods to provide IPC:
❖ Message Queue.
❖ Shared Memory.
❖ Signal.
❖ Shared Files and Pipe
❖ Socket
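One of the listed methods, the pipe, can be sketched with standard POSIX calls. This is an illustrative sketch under the assumption of a POSIX system (the slides name no specific API); the message text is hypothetical.

```python
# Pipe IPC sketch (POSIX-only: uses os.fork): a child writes into one end
# of a kernel pipe and the parent reads from the other.
import os

r, w = os.pipe()                    # kernel pipe: (read fd, write fd)
pid = os.fork()                     # assumption: a POSIX system
if pid == 0:                        # child process
    os.close(r)                     # child only writes
    os.write(w, b"hello from child")
    os._exit(0)
else:                               # parent process
    os.close(w)                     # parent only reads
    msg = os.read(r, 1024)          # blocks until the child writes
    os.waitpid(pid, 0)              # reap the child
```

Unlike shared memory, the kernel mediates every transfer through the pipe, so no explicit synchronization between the two processes is needed for this one-way message.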
26. Process Synchronization
Process Synchronization:
Process Synchronization means sharing system resources by processes in such a way
that concurrent access to shared data is handled correctly, minimizing the chance of
inconsistent data.
➢ Process Synchronization ensures a perfect co-ordination among the process.
➢ Maintaining data consistency demands mechanisms to ensure synchronized execution
of cooperating processes.
➢ Process Synchronization can be provided by using several different tools like:
❖ Semaphores
❖ Mutual Exclusion or Mutex
❖ Monitor
27. Process Synchronization
Process Synchronization:
Process Synchronization means sharing system resources by processes in such a way
that concurrent access to shared data is handled correctly, minimizing the chance of
inconsistent data.
➢ Synchronization problems arise in the case of cooperative processes because
resources are shared among them.
➢ On the basis of synchronization, processes are categorized as one of the following two types:
❖ Independent Process: Execution of one process does not affect the execution of
other processes.
❖ Cooperative Process: Execution of one process affects the execution of other
processes.
28. Process Synchronization
Process Synchronization:
Race Condition:
➢ When several processes access and manipulate the same data at the same time, they may enter a race
condition.
➢ Race conditions occur among processes that share common storage for reads and writes.
➢ Race conditions occur due to improper synchronization of shared memory access.
Critical section problem:
➢ Critical section is a code segment that can be accessed by only one process at a time.
➢ Critical section contains shared variables which need to be synchronized to maintain consistency of data
variables.
➢ Any solution to the critical section problem must satisfy three requirements:
❖ Mutual Exclusion
❖ Progress
❖ Bounded Waiting.
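The critical-section idea can be sketched concretely: the read-modify-write on a shared counter is the critical section, and a mutex provides the mutual exclusion requirement. The thread count and iteration count below are illustrative.

```python
# Critical-section sketch: the increment is a read-modify-write on shared
# data; the lock ensures only one thread is inside it at a time.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:                  # entry section: acquire mutual exclusion
            counter += 1            # critical section: must not interleave
                                    # exit section: lock released by `with`

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the lock held around each update, no increment is lost
```

Without the lock, two threads could both read the same old value of `counter` and one update would overwrite the other — exactly the race condition described above.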
29. Process Synchronization
Semaphores:
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore
can be signaled by another thread.
➢ A semaphore uses two atomic operations, wait and signal for process synchronization.
➢ Classical problems of Synchronization with Semaphore Solution:
❖ Bounded-buffer (or Producer-Consumer) Problem
❖ Dining- Philosophers Problem
❖ Readers and Writers Problem
❖ Sleeping Barber Problem
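The wait and signal operations map directly onto `acquire()` and `release()` in Python's standard `threading.Semaphore`; the sketch below (with illustrative names) shows one thread blocking on wait until another signals.

```python
# wait (P) / signal (V) sketch over threading.Semaphore:
# acquire() blocks while the count is zero; release() increments it.
import threading

sem = threading.Semaphore(0)        # count starts at 0: waiters must block
results = []

def waiter():
    sem.acquire()                   # wait(): blocks until signalled
    results.append("proceeded")     # runs only after the signal

t = threading.Thread(target=waiter)
t.start()
sem.release()                       # signal(): wakes the blocked thread
t.join()
```

Both operations are atomic, which is what makes semaphores usable as a synchronization primitive in the classical problems listed above.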
30. Process Synchronization
Bounded-buffer (or Producer-Consumer) Problem:
The Bounded Buffer problem is also called the producer-consumer problem.
➢ The solution is to create two counting semaphores, “full” and “empty”, to
keep track of the current number of full and empty buffer slots respectively.
➢ Producers produce items and consumers consume them, but each uses exactly
one buffer slot at a time.
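The two-counting-semaphore solution can be sketched directly; the buffer size and item counts below are illustrative, and a mutex guards the buffer itself.

```python
# Bounded buffer with the two counting semaphores named on the slide:
# `empty` counts free slots, `full` counts filled slots.
import threading
from collections import deque

SIZE = 3
buffer = deque()
empty = threading.Semaphore(SIZE)   # initially all SIZE slots are empty
full = threading.Semaphore(0)       # initially nothing to consume
mutex = threading.Lock()            # guards the buffer structure itself
consumed = []

def producer(items):
    for item in items:
        empty.acquire()             # wait(empty): block if no free slot
        with mutex:
            buffer.append(item)
        full.release()              # signal(full): one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()              # wait(full): block if nothing to take
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()             # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
```

The producer blocks when the buffer is full and the consumer blocks when it is empty, so neither busy-waits and no slot is ever over- or under-used.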
31. Process Synchronization
Dining- Philosophers Problem:
In the Dining Philosophers Problem, K philosophers are seated around a circular
table with one chopstick between each pair of philosophers.
➢ There is one chopstick between each pair of philosophers.
➢ A philosopher may eat only after picking up the two chopsticks
adjacent to him.
➢ Each chopstick may be picked up by either of its two adjacent
philosophers, but not by both at once.
➢ This problem involves the allocation of limited resources to a
group of processes in a deadlock-free and starvation-free
manner.
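One well-known deadlock-free solution, sketched below with illustrative values of K and the meal count, is resource ordering: every philosopher picks up the lower-numbered chopstick first, so a circular wait can never form. (This is one standard technique; the slide does not prescribe a particular solution.)

```python
# Dining philosophers via resource ordering: acquiring chopsticks in a
# single global order breaks the circular-wait condition for deadlock.
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    first, second = min(left, right), max(left, right)  # global order
    for _ in range(10):
        with chopsticks[first]:         # always lower-numbered chopstick first
            with chopsticks[second]:
                meals[i] += 1           # eating: holds both adjacent chopsticks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the ordering (everyone picking up the left chopstick first), all K philosophers could each hold one chopstick and wait forever for the other — the deadlock this problem is designed to expose.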
32. Process Synchronization
Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes, some of
which only read the database while others update it. We refer to the former as readers
and to the latter as writers.
➢ In OS terms this situation is called the readers-writers problem.
➢ Problem parameters:
❖ One set of data is shared among a number of processes.
❖ Once a writer is ready, it performs its write. Only one writer may write at a time.
❖ If a process is writing, no other process may read.
❖ If at least one reader is reading, no other process may write.
❖ Readers only read; they may not write.
33. Process Synchronization
Sleeping Barber Problem:
The Sleeping Barber Problem models a barber shop with one barber, one barber chair,
and N chairs for customers to wait in.
➢ When there are no customers, the barber goes to sleep in the barber
chair and must be woken when a customer comes in.
➢ While the barber is cutting hair, new customers take empty
seats to wait, or leave if there is no vacancy.
34. Deadlocks
Deadlock:
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
➢ In operating systems, deadlock arises when two or more processes each hold some
resources and wait for resources held by the other(s).
➢ Example:
❖ Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting
for resource 1.
35. Deadlocks
Deadlock:
Methods for handling deadlock:
➢ There are three ways to handle deadlock:
❖ Deadlock prevention or avoidance: The idea is to never let the system enter a
deadlock state; avoidance additionally requires advance knowledge of the resources each process will request.
❖ Deadlock detection and recovery: Let deadlock occur, then do preemption to
handle it once occurred.
❖ Ignore the problem altogether: If deadlock is very rare, then let it happen and
reboot the system. This is the approach that both Windows and UNIX take.
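One concrete prevention technique, sketched below with hypothetical names, attacks the circular-wait condition from the example above: process 1 took R1 then R2 while process 2 took R2 then R1; imposing a single global acquisition order makes that cycle impossible.

```python
# Deadlock prevention by lock ordering: both tasks acquire the two
# resources in the same global order (r1 before r2), so no circular wait.
import threading

r1, r2 = threading.Lock(), threading.Lock()   # the two shared resources
log = []

def task(name):
    with r1:                        # every task takes r1 first...
        with r2:                    # ...then r2: no cycle can form
            log.append(name)        # critical work needing both resources

t1 = threading.Thread(target=task, args=("P1",))
t2 = threading.Thread(target=task, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

If `task` for P2 instead acquired r2 before r1, the two threads could each hold one lock and block forever on the other — the exact deadlock the slide's example describes.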