This document contains information about processes in an operating system. It discusses what a process is, the different states a process can be in (new, ready, running, waiting, terminated), and how processes are created and terminated. It also describes process scheduling queues, interprocess communication, and advantages of using processes like information sharing, faster computation through parallel processing, modularity, and convenience.
Unit I (8 Hrs)
Introduction to System Software, Overview of system software: Operating system,
I/O manager, Assembler, Compiler, Linker, Loader.
Introductory Concepts: Operating system functions and characteristics, historical evolution
of operating systems, Real time systems, Distributed systems.
Unit II (8 Hrs)
Operating Systems: Methodologies for implementation of O/S services: system calls,
system programs, interrupt mechanisms.
Process - Concept of process and threads, Process states, Process management, Context
switching
Interaction between processes and OS, Multithreading, Process Control, Job schedulers,
Job Scheduling, scheduling criteria, scheduling algorithms
Unit III (8 Hrs)
Concurrency Control: Concurrency and Race Conditions, Mutual exclusion requirements,
Software and hardware solutions, Semaphores, Monitors, Classical IPC problems and
solutions.
Deadlock : Characterization, Detection, Recovery, Avoidance and Prevention.
Unit IV (8 Hrs)
Memory management: Contiguous and non-contiguous, Swapping, Paging, Segmentation
and demand Paging, Virtual Memory, Management of Virtual memory: allocation, fetch and
replacement
Unit V (8 Hrs)
File Management: Concept, Access methods, Directory Structure, Protection, File System
implementation, Directory Implementation, Allocation methods, Free Space management,
efficiency and performance
IO systems: disk structure, disk scheduling, disk management.
Unit VI (8 Hrs)
Case Study of Linux: Structure of Linux, design principles, kernel, process management and
scheduling, file systems, installation requirements, basic architecture of the UNIX/Linux system, kernel,
shell commands for files and directories (cd, cp, mv, rm, mkdir, more, less), creating and viewing
files using cat, file comparisons, viewing files, disk-related commands, checking disk free space,
essential Linux commands.
Understanding shells, Processes in Linux – process fundamentals, connecting processes with pipes,
redirecting input/output, manual help, background processing, managing multiple processes,
changing process priority, scheduling of processes with the at and batch commands, kill, ps, who,
sleep, printing commands, grep, fgrep, find, sort, cal, banner, touch, file; file-related commands – ws, sat, cut, grep, dd, etc.; mathematical commands – bc, expr, factor, units; the vi, joe and vim editors
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentations for upcoming exams.
We connect students who have an understanding of course material with students who need help.
Benefits:
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer-developed notes that break down lecture and study material in a way that they can understand.
# Students can earn better grades, save time and study effectively.
Our Vision & Mission – Simplifying Students' Lives
Our Belief – “The great breakthrough in your life comes when you realize that you can learn anything you need to learn to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
CHAPTER READING TASK OPERATING SYSTEM
UNIVERSITI TEKNIKAL MALAYSIA MELAKA
SEMESTER 1 YEAR 2
BITS 1213 - OPERATING SYSTEM
LIST OF MEMBERS:
NO NAME MATRIC NO.
1 NUR ATIQAH BT MOHD ROSLI B031210097
2 SU’AIDAH BT MOKHTAR B031210193
3 SITI NADIRAH BT MINHAT B0131210037
4 NOR HADHIRAH BT SHERIFF B031210041
5 NURUL NAJEEHA BT ANNUAR B031210270
CONTENT
PROCESS
What is a process
Process states
Process creation
Process termination
THREAD
Introduction
How it works
Advantages and disadvantages
Threading issues
Types of threads
SYMMETRIC MULTIPROCESSING
Description and process
How it works
Diagram of multiprocessing
MICROKERNEL
Introduction
Descriptions
Features
Advantages and disadvantages
Diagram of microkernel
PROCESS IN OPERATING SYSTEM
I. WHAT IS A PROCESS?
Process – a program in execution; process execution must progress in sequential fashion.
A task is the execution of an individual program.
A process includes:
– program counter – specifying the next instruction to be executed.
– stack – containing temporary data such as return addresses.
– data section – containing global variables.
Figure 3.1
Process memory is divided into four sections, as shown in Figure 3.1.
Text - comprises the compiled program code, read in from non-volatile storage when the program is launched.
Data - stores global and static variables, allocated and initialized prior to executing main.
Heap - used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
Stack - holds local variables. Space on the stack is reserved for local variables when they are declared (at function entrance or elsewhere, depending on the language), and the space is freed up when the variables go out of scope. Note that the stack is also used for function return values, and the exact mechanisms of stack management may be language specific.
Note that the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a stack overflow error will occur, or else a call to new or malloc will fail due to insufficient memory being available.
When processes are swapped out of memory and later restored, additional information must also be stored and restored. Key among them are the program counter and the value of all program registers.
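The four sections can be seen in a small C sketch (illustrative only; the variable and function names are ours, not from any particular system):

```c
#include <stdlib.h>

/* Data section: global and static variables, allocated before main runs. */
int global_counter = 0;          /* initialized data */
static int file_scope_total;     /* zero-initialized data (BSS) */

/* Returns the sum 1..n, showing where each kind of variable lives. */
int sum_up_to(int n) {
    int i, sum = 0;              /* stack: locals, freed when the call returns */
    int *scratch = malloc(n * sizeof(int));   /* heap: dynamic allocation */
    if (scratch == NULL)
        return -1;               /* malloc fails when free memory is exhausted */
    for (i = 0; i < n; i++)
        scratch[i] = i + 1;
    for (i = 0; i < n; i++)
        sum += scratch[i];
    free(scratch);               /* heap blocks must be released explicitly */
    global_counter++;            /* data-section variable persists across calls */
    return sum;
}
```

The compiled body of sum_up_to itself lives in the text section.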
II. PROCESS STATE
There are five states a process can be in, as shown in Figure 3.2 below (a system may have other states besides the ones listed):
New - The process is in the stage of being created.
Ready - The process has all the resources available that it needs to run, but the CPU is not currently working on this process's instructions.
Running - The CPU is working on this process's instructions.
Waiting - The process cannot run at the moment, because it is waiting for some resource to become available or for some event to occur. For example, the process may be waiting for keyboard input, a disk access request, an inter-process message, a timer to go off, or a child process to finish.
Terminated - The process has completed.
Figure 3.2
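The five states and the legal transitions between them can be sketched in C (a teaching sketch; real kernels define their own state constants and transitions):

```c
#include <stdbool.h>

/* The five process states of Figure 3.2 as a C enum. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns true if the transition appears in the five-state diagram:
 * new->ready (admit), ready->running (dispatch), running->ready (interrupt),
 * running->waiting (wait for I/O or an event), waiting->ready (event occurs),
 * running->terminated (exit). */
bool valid_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;
    case READY:   return to == RUNNING;
    case RUNNING: return to == READY || to == WAITING || to == TERMINATED;
    case WAITING: return to == READY;
    default:      return false;   /* nothing leaves TERMINATED */
    }
}
```

Note that a waiting process never goes straight back to running: it must re-enter the ready queue and be dispatched again.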
Process Control Block (PCB)
Figure 3.3
A PCB contains the following information:
– Process state: new, ready, …
– Program counter: indicates the address of the next instruction to be executed for this program.
– CPU registers: includes accumulators, stack pointers, …
– CPU scheduling information: includes process priority, pointers to scheduling queues.
– Memory-management information: includes the value of base and limit registers (protection) …
– Accounting information: includes amount of CPU and real time used, account numbers, process numbers, …
– I/O status information: includes a list of I/O devices allocated to this process, a list of open files, …
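The fields above map naturally onto a struct. The following is a toy PCB (our own sketch; a real PCB such as Linux's task_struct holds far more, and the field sizes here are arbitrary):

```c
#include <string.h>

#define MAX_OPEN_FILES 16

/* A toy Process Control Block holding one field per category listed above. */
struct pcb {
    int   pid;                        /* process number (accounting) */
    int   state;                      /* 0=new, 1=ready, 2=running, ... */
    unsigned long program_counter;    /* address of next instruction */
    unsigned long registers[8];       /* saved CPU registers */
    int   priority;                   /* CPU-scheduling information */
    unsigned long base, limit;        /* memory-management (protection) registers */
    long  cpu_time_used;              /* accounting information */
    int   open_files[MAX_OPEN_FILES]; /* I/O status: open file descriptors */
};

/* Initialize a PCB for a newly created process. */
void pcb_init(struct pcb *p, int pid) {
    memset(p, 0, sizeof(*p));         /* zero all saved context and counters */
    p->pid = pid;
    p->state = 0;                     /* 0 = new */
    p->priority = 10;                 /* an assumed default priority */
}
```

On a context switch the kernel saves the running process's registers and program counter into its PCB, then reloads them from the PCB of the process being dispatched (Figure 3.4).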
CPU Switch From Process to Process
Figure 3.4
Process Scheduling Queue
The two main objectives of the process scheduling system are to keep the CPU busy
at all times and to deliver "acceptable" response times for all programs, particularly
for interactive ones.
The process scheduler must meet these objectives by implementing suitable policies
for swapping processes in and out of the CPU.
( Note that these objectives can be conflicting. In particular, every time the system
steps in to swap processes it takes up time on the CPU to do so, which is thereby "lost"
from doing any useful productive work. ).
Figure 3.5 - Ready Queue And Various I/O Device Queues
All processes are stored in the job queue.
Processes in the Ready state are placed in the ready queue.
Processes waiting for a device to become available or to deliver data are placed in device queues. There is generally a separate device queue for each device.
Schedulers
A long-term scheduler is typical of a batch system or a very heavily loaded system. It runs infrequently (such as when one process ends and another must be selected to load in from disk in its place), and can afford to take the time to implement intelligent and advanced scheduling algorithms.
The short-term scheduler, or CPU scheduler, runs very frequently, on the order of every 100 milliseconds, and must very quickly swap one process out of the CPU and swap in another one.
Some systems also employ a medium-term scheduler. When system loads get high, this scheduler will swap one or more processes out of the ready queue for a few seconds, in order to allow smaller, faster jobs to finish up quickly and clear the system. See the differences in Figures 3.7 and 3.8 below.
An efficient scheduling system will select a good mix of CPU-bound processes and I/O-bound processes.
Figure 3.6 - Queueing-diagram representation of process scheduling
III. PROCESS CREATION
• A process may create several new processes, via a create-process system call, during
execution.
• Parent process creates children processes, which, in turn create other processes,
forming a tree of processes.
• Resource sharing, such as CPU time, memory, files, I/O devices …
– Parent and children share all resources.
– Children share subset of parent’s resources.
– Parent and child share no resources.
• When a process creates a new process, two possibilities exist in terms of execution:
– Parent and children execute concurrently.
– Parent waits until children terminate.
• There are also two possibilities in terms of the address space of the new process:
– Child duplicate of parent.
– Child has a program loaded into it.
• UNIX examples:
– fork system call creates new process
– execve system call used after a fork to replace the process' memory space with a
new program.
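The fork/execve pattern above can be sketched in C on a POSIX system. This is an illustrative sketch: the helper name run_child_echo and the choice of /bin/echo as the new program are assumptions, not from the text.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Sketch: the parent forks a child; the child replaces its memory
 * image with /bin/echo via execve; the parent waits for it. */
int run_child_echo(void) {
    pid_t pid = fork();                  /* create-process system call */
    if (pid < 0) {                       /* fork failed */
        perror("fork");
        return -1;
    }
    if (pid == 0) {                      /* child: duplicate of the parent */
        char *argv[] = { "echo", "hello from child", NULL };
        char *envp[] = { NULL };
        execve("/bin/echo", argv, envp); /* replaces the child's memory space */
        _exit(127);                      /* reached only if execve fails */
    }
    int status;                          /* parent: wait until the child terminates */
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent and child execute concurrently until the parent blocks in waitpid, matching the "parent waits until children terminate" case above.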
IV. PROCESS TERMINATION
• A process executes its last statement and asks the operating system to delete it by using
the exit system call.
– The child's exit status is returned to the parent via the wait system call.
– The process' resources are deallocated by the operating system.
• A parent may terminate the execution of child processes via the abort system call for a
variety of reasons, such as:
– Child has exceeded allocated resources.
– Task assigned to child is no longer required.
– Parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.
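The exit/wait handshake described above can be sketched as follows, assuming a POSIX system; the helper name and the exit code passed in are invented for illustration.

```c
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Sketch: the child terminates via exit; the parent collects the exit
 * status with wait, after which the OS deallocates the child's resources. */
int collect_child_status(int child_code) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
        _exit(child_code);        /* the child's "last statement" */
    int status;
    wait(&status);                /* parent blocks until the child terminates */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```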
Interprocess Communications (IPC)
Mechanism for processes to communicate and to synchronize their actions.
• IPC is best provided by message-passing systems.
• IPC facility provides two operations:
– send (message) – message size fixed or variable
– receive (message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Processes can communicate in two ways:
– Direct communication
– Indirect communication.
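One concrete way to establish a communication link is a pipe. The sketch below models the send/receive operations with write/read; the function name and the "ping" message are illustrative assumptions.

```c
#include <string.h>
#include <unistd.h>

/* Sketch of message passing: a pipe acts as the communication link;
 * write plays the role of send(message), read plays receive(message). */
ssize_t ipc_roundtrip(char *buf, size_t bufsz) {
    int link[2];
    if (pipe(link) < 0)                    /* establish the communication link */
        return -1;
    const char *msg = "ping";              /* a variable-sized message */
    write(link[1], msg, strlen(msg));      /* send(message) */
    ssize_t n = read(link[0], buf, bufsz); /* receive(message) */
    close(link[0]);
    close(link[1]);
    return n;                              /* number of bytes received */
}
```

Between two processes the two pipe ends would be held by the sender and receiver respectively; here both ends live in one process to keep the sketch self-contained.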
Advantages of processes
Information sharing – such as shared files.
Computation speed-up – to run a task faster, we can break it into subtasks, each of
which executes in parallel. This speed-up can be achieved only if the
computer has multiple processing elements (such as CPUs or I/O channels).
Modularity – construct a system in a modular fashion (i.e., dividing the system
functions into separate processes).
Convenience – one user may have many tasks to work on at one time. For example, a
user may be editing, printing, and compiling in parallel.
Disadvantages of processes
Not always convenient for the user, and performance can be poor.
Added complexity in the OS.
Processes can misbehave
– By avoiding all traps and performing no I/O, a process can take over the entire
machine.
– Only solution: reboot!
It is difficult to set up a process correctly and to express all possible options
– Process permissions, where to direct I/O, environment variables.
– Example: Windows NT has a process-creation call with 10 arguments.
THREADS
INTRODUCTION
What is a thread in an operating system?
A thread is a flow of execution through the process code, with its own program
counter, system registers and stack. A thread is also called a lightweight
process. Threads provide a way to improve application performance through
parallelism. Threads represent a software approach to improving operating-system
performance by reducing overhead; a thread behaves much like a classical
process.
A thread is a single sequential stream within a process. Because threads have
some of the properties of processes, they are sometimes called lightweight
processes. Within a process, threads allow multiple streams of execution. In many
respects, threads are a popular way to improve applications through parallelism.
The CPU switches rapidly back and forth among the threads, giving the illusion that
the threads are running in parallel. Like a traditional process (i.e., a process with
one thread), a thread can be in any of several states (Running, Blocked, Ready or
Terminated). Each thread has its own stack.
DESCRIPTION OF TOPIC
Each thread belongs to exactly one process, and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been
used successfully in implementing network servers and web servers. They also
provide a suitable foundation for parallel execution of applications on shared-
memory multiprocessors. The following figure shows the working of single-
threaded and multithreaded processes.
HOW IT WORKS
Each process has its own memory space. When Process 1 accesses some given
memory location, say 0x8000, that address will be mapped to some physical
memory address. But from Process 2, location 0x8000 will generally refer to a
completely different portion of physical memory. A thread is a subdivision that
shares the memory space of its parent process. So when either Thread 1 or
Thread 2 of Process 1 accesses "memory address 0x8000", they will be referring
to the same physical address. Threads belonging to a process usually share a few
other key resources as well, such as their working directory, environment
variables, file handles etc.
On the other hand, each thread has its own private stack and registers, including
program counter. These are essentially the things that threads need in order to be
independent. Depending on the OS, threads may have some other private
resources too, such as thread-local storage (effectively, a way of referring to
"variable number X", where each thread has its own private value of X). The OS
will generally attach a bit of "housekeeping" information to each thread, such as
its priority and state (running, waiting for I/O etc).
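The shared address space described above can be demonstrated with POSIX threads. This is a sketch only: the counter, loop bound and helper names are illustrative choices.

```c
#include <pthread.h>

/* Sketch: two threads of one process increment the same global variable,
 * showing that they refer to the same physical memory location. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);    /* same variable, same address, both threads */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int run_two_threads(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;            /* increments from both threads are visible */
}
```

The mutex is needed precisely because the memory is shared; two processes incrementing their own private copies of a variable would not interfere this way.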
ADVANTAGES AND DISADVANTAGES

1. Advantage: Responsiveness - one thread may provide a rapid response while other
   threads are blocked or slowed down doing intensive calculations.
   Disadvantage: Global variables are shared between threads; inadvertent
   modification of shared variables can be disastrous.

2. Advantage: Resource sharing - by default threads share common code, data,
   and other resources, which allows multiple tasks to be performed
   simultaneously in a single address space.
   Disadvantage: Many library functions are not thread safe.

3. Advantage: Economy - creating and managing threads is much faster than
   performing the same tasks for processes.
   Disadvantage: If one thread crashes, the whole application crashes.

4. Advantage: Scalability (utilization of multiprocessor architectures) - a
   single-threaded process can only run on one CPU, no matter how many are
   available, whereas the execution of a multi-threaded application may be
   split among the available processors.
   Disadvantage: A memory crash in one thread kills the other threads sharing
   the same memory, unlike with separate processes.

5. Advantage: User-level threads are fast to create and manage.
   Disadvantage: In a typical operating system, most system calls are blocking.

6. Advantage: User-level threads can run on any operating system.
   Disadvantage: Transfer of control from one thread to another within the same
   process requires a mode switch to the kernel.
THREADING ISSUES
1- The Semantics of fork() and exec() system calls
It is system dependent. If the new process calls exec right away, there is no
need to copy all the other threads; if it doesn't, then the entire process
should be copied. Many versions of UNIX provide multiple versions of
the fork call for this purpose.
2- Signal Handling
i- When a multi-threaded process receives a signal, there are four
major options for which thread(s) the signal will be delivered
to:
Deliver the signal to the thread to which the signal applies.
Deliver the signal to every thread in the process.
Deliver the signal to certain threads in the process.
Assign a specific thread to receive all signals in a process.
ii- The best choice may depend on which specific signal is involved.
iii- Windows does not support signals, but they can be emulated using
Asynchronous Procedure Calls ( APCs ). APCs are delivered to
specific threads, not processes.
iv- Signals may be synchronous (delivered to the thread that caused
them, e.g. an illegal memory access) or asynchronous (generated
outside the process, e.g. Ctrl-C).
3- Thread Cancellation - terminating a target thread before it has
completed. Threads that are no longer needed may be cancelled by
another thread in two ways:
Asynchronous Cancellation cancels the thread immediately.
Deferred Cancellation sets a flag indicating the thread should
cancel itself when it is convenient. It is then up to the cancelled
thread to check this flag periodically and exit nicely when it sees
the flag set.
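Deferred cancellation can be sketched with an atomic flag that the worker checks at convenient points. This models the idea directly rather than using pthread_cancel's deferred mode; all names here are illustrative.

```c
#include <pthread.h>
#include <stdatomic.h>

/* Sketch of deferred cancellation: the canceller only sets a flag; the
 * worker checks it periodically and exits "nicely" when it sees it set. */
static atomic_int cancel_requested = 0;
static atomic_int iterations_done = 0;

static void *worker(void *arg) {
    while (!atomic_load(&cancel_requested)) {   /* a convenient check point */
        atomic_fetch_add(&iterations_done, 1);  /* one unit of work */
    }
    return NULL;                                /* clean, voluntary exit */
}

int run_deferred_cancel(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    atomic_store(&cancel_requested, 1);         /* request cancellation */
    pthread_join(t, NULL);                      /* worker exits on its own */
    return 1;                                   /* join returned: thread is gone */
}
```

Unlike asynchronous cancellation, the worker is never interrupted mid-operation, so it can release any resources it holds before exiting.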
4- Thread-Local Storage
i- Most data is shared among threads, and this is one of the major
benefits of using threads in the first place.
ii- Sometimes threads need thread-specific data also.
iii- Most major thread libraries ( pThreads, Win32, Java ) provide
support for thread-specific data, known as thread-local storage or
TLS.
iv- Note that this is more like static data than local variables, because it
does not cease to exist when the function ends.
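A minimal TLS sketch using the widely supported (GCC/Clang, non-standard) __thread storage class; the variable and helper names are illustrative assumptions.

```c
#include <pthread.h>

/* Sketch of thread-local storage: each thread gets its own copy of
 * tls_value, even though the name is the same in every thread. */
static __thread int tls_value = 0;   /* one private instance per thread */

static void *set_and_read(void *arg) {
    tls_value = *(int *)arg;         /* writes only this thread's copy */
    *(int *)arg = tls_value;         /* reads back the same private copy */
    return NULL;
}

int tls_demo(void) {
    int a = 1, b = 2;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, set_and_read, &a);
    pthread_create(&t2, NULL, set_and_read, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    tls_value = 99;                  /* main thread's copy, invisible to t1/t2 */
    return a + b;                    /* each thread kept its own value: 1 + 2 */
}
```

If tls_value were an ordinary global, the two threads would race on a single shared copy; with __thread the result is deterministic. pthread_key_create provides the same facility portably.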
5- Scheduler Activations
i- Many implementations of threads provide a virtual processor as an
interface between the user thread and the kernel thread,
particularly for the many-to-many or two-tier models.
ii- This virtual processor is known as a "Lightweight Process", or LWP.
iii- There is a one-to-one correspondence between LWPs and kernel
threads.
iv- The number of kernel threads available, and hence the number of
LWPs may change dynamically.
v- The application (user level thread library) maps user threads onto
available LWPs.
vi- Kernel threads are scheduled onto the real processors by the OS.
vii- The kernel communicates to the user-level thread library when
certain events occur (such as a thread about to block) via an upcall,
which is handled in the thread library by an upcall handler. The
upcall also provides a new LWP for the upcall handler to run on,
which it can then use to reschedule the user thread that is about to
become blocked. The OS will also issue upcalls when a thread
becomes unblocked, so the thread library can make appropriate
adjustments.
viii- If the kernel thread blocks, then the LWP blocks, which blocks the
user thread.
ix- Ideally there should be at least as many LWPs available as there
could be concurrently blocked kernel threads. Otherwise if all
LWPs are blocked, then user threads will have to wait for one to
become available.
TYPES OF THREAD
Threads are implemented in the following two ways:
User Level Threads
In this case, the application performs thread management; the kernel is not
aware of the existence of threads. The thread library contains code for creating
and destroying threads, for passing messages and data between threads, for
scheduling thread execution, and for saving and restoring thread contexts.
The application begins with a single thread and begins running in that
thread.
Kernel Level Threads
In this case, thread management is done by the kernel. There is no thread-
management code in the application area. Kernel threads are supported
directly by the operating system. Any application can be programmed to
be multithreaded. All of the threads within an application are supported
within a single process.
The kernel maintains context information for the process as a whole and
for individual threads within the process. Scheduling by the kernel is
done on a thread basis. The kernel performs thread creation, scheduling
and management in kernel space. Kernel threads are generally slower to
create and manage than user threads.
Advantages
The kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
If one thread in a process is blocked, the kernel can schedule
another thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than
user threads.
Transfer of control from one thread to another within the same
process requires a mode switch to the kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level
thread facility. Solaris is a good example of this combined approach. In a
combined system, multiple threads within the same application can run in
parallel on multiple processors, and a blocking system call need not block the
entire process. There are three multithreading models:
Many to Many Model
Many user-level threads are multiplexed onto a smaller or equal number of
kernel threads. The number of kernel threads may be specific to either a particular
application or a particular machine.
The following diagram shows the many-to-many model. In this model, developers
can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space. When a thread makes a blocking
system call, the entire process is blocked. Only one thread can access the
kernel at a time, so multiple threads are unable to run in parallel on
multiprocessors.
If the user-level thread libraries are implemented in the operating system in such
a way that the system does not support them, then the kernel threads use the
many-to-one relationship mode.
One to One Model
There is a one-to-one relationship of user-level threads to kernel-level
threads. This model provides more concurrency than the many-to-one model. It
also allows another thread to run when a thread makes a blocking system call. It
supports multiple threads executing in parallel on multiprocessors.
A disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the
one-to-one relationship model.
Difference between User-Level & Kernel-Level Threads

1. User-level threads are faster to create and manage; kernel-level threads are
   slower to create and manage.
2. User-level threads are implemented by a thread library at the user level;
   kernel-level threads are created with operating-system support.
3. A user-level thread is generic and can run on any operating system; a
   kernel-level thread is specific to the operating system.
4. A multi-threaded application using only user-level threads cannot take
   advantage of multiprocessing; kernel routines themselves can be multithreaded.
SYMMETRIC MULTIPROCESSING
SMP systems allow any processor to work on any task no matter where
the data for that task are located in memory, provided that each task in the
system is not in execution on two or more processors at the same time.
With proper operating system support, SMP systems can easily move
tasks between processors to balance the workload efficiently.
SMP systems are tightly coupled multiprocessor systems with a pool of
homogeneous processors running independently, each processor
executing different programs and working on different data and with
capability of sharing common resources (memory, I/O device, interrupt
system and so on) and connected using a system bus or a crossbar.
Uniprocessor and SMP systems require different programming methods to
achieve maximum performance. Programs running on SMP systems may
experience a performance increase even when they have been written for
uniprocessor systems. This is because hardware interrupts that usually
suspend program execution while the kernel handles them can execute on
an idle processor instead.
Process of Symmetric Multiprocessor
Microkernel
Introduction of Microkernel
Early operating system kernels were rather small, partly because computer memory was
limited. As the capability of computers grew, the number of devices the kernel had to control
also grew. Through the early history of Unix, kernels were generally small, even though those
kernels contained device drivers and file system managers. When address spaces increased
from 16 to 32 bits, kernel design was no longer cramped by the hardware architecture, and
kernels began to grow.
The Berkeley Software Distribution (BSD) of Unix began the era of big kernels. In addition to
operating a basic system consisting of the CPU, disks and printers, BSD started adding
additional file systems, a complete TCP/IP networking system, and a number of "virtual"
devices that allowed the existing programs to work invisibly over the network. This growth
continued for many years, resulting in kernels with millions of lines of source code. As a
result of this growth, kernels were more prone to bugs and became increasingly difficult to
maintain.
The microkernel was designed to address the increasing growth of kernels and the difficulties
that came with them. In theory, the microkernel design allows for easier management of code
due to its division into user space services. This also allows for increased security and
stability resulting from the reduced amount of code running in kernel mode. For example, if a
networking service crashed due to buffer overflow, only the networking service's memory
would be corrupted, leaving the rest of the system still functional.
Descriptions of Microkernel
A Microkernel is a highly spartan, modular subsystem composed of OS-neutral abstractions,
providing only essential services such as process abstractions, threads, IPC, and memory-
management primitives. All device drivers, etc., which are normally part of an OS kernel, run
on the microkernel as just another user process.
• Multiple operating systems can then be layered on top of these abstractions, and are thus
viewed as simply another application.
• This focus on modularity allows for scalability, extensibility and portability not found in
monolithic operating systems (Unix, Linux, DOS, etc.)
Features
• Because the microkernel provides only rudimentary core facilities, different OS personalities
(such as BSD Unix, Linux, NT, etc.) can be hosted on the microkernel.
• Because of its highly modular nature, many of the services commonly found in "kernel space"
are found in "user space" on a microkernel.
• Flexibility (modules can be restarted without rebooting the OS)
• Lower fixed memory demand: the L4 microkernel, for example, takes up only about
32 kilobytes of memory.
• However, a microkernel + regular OS will probably take up more memory than a simple OS
would, because of the additional memory required by the microkernel itself.
• SMP delivery is easier
The advantages and disadvantages of Microkernels
Advantages
Extensible: add a new server to add new OS functionality.
The kernel does not determine the operating system environment.
• Allows support for multiple OS personalities
• Need an emulation server for each system (e.g. Mac, Windows, Unix)
• All applications run on the same microkernel
• Applications can use a customized OS (e.g. for databases)
Mostly hardware agnostic
Threads, IPC, and user-level servers don't need to worry about the
underlying hardware.
Strong protection
Even of the OS against itself (i.e., the parts of the OS that are implemented as
servers)
Easy extension to multiprocessor and distributed systems.
Simplicity of the kernel (it is small)
Flexibility
We can have, for example, both a file server and a database server.
Disadvantages
Performance
A system call can require many protection-mode changes.
Expensive to reimplement everything with a new model.
OS personalities are easier to port to new hardware after porting to the microkernel,
but porting to the microkernel may be harder than porting to new hardware.
More overhead
Cost of system calls and context switches
Examples: Mach, L4, AmigaOS, MINIX, K42.