The document discusses the structure and management of processes in an operating system. It defines a process as a "thread of control" with its own private memory area, and describes the states a process can be in, such as running, ready, sleeping, and zombie, along with the transitions between them. It outlines the kernel data structures used to manage processes, including the process table, u area, process groups, and sessions, and explains the layout of a process in memory (text, data, and stack regions). It also covers how the kernel saves and restores a process's context during context switches, interrupts, and system calls, and how the kernel manipulates a process's address space through operations on regions such as allocating, attaching, resizing, loading, duplicating, and freeing them.
Unit 3.1 The Structure of Process, Process Control, Process Scheduling.ppt
1. Sanjivani Rural Education Society’s
Sanjivani College of Engineering, Kopargaon-423 603
(An Autonomous Institute, Affiliated to Savitribai Phule Pune University, Pune)
NAAC ‘A’ Grade Accredited, ISO 9001:2015 Certified
Department of Computer Engineering
(NBA Accredited)
Prof. A. V. Brahmane
Assistant Professor
E-mail : brahmaneanilkumarcomp@sanjivani.org.in
Contact No: 91301 91301 Ext :145, 9922827812
Subject- Operating System and Administration (CO2013)
Unit III- The Structure of Process, Process Control and
Process Scheduling
2. Content
• Process state and transitions,
• Layout of the system memory,
• Context of the process,
• Saving the context of the process,
• Manipulating the process address space,
• Sleep,
• Process creation,
• Signal,
• Process termination, Awaiting the process termination,
• Invoking other program, Process Scheduling
• Case Study - Access Control, Rootly Powers and Controlling Processes
DEPARTMENT OF COMPUTER ENGINEERING, Sanjivani COE, Kopargaon
3. Process State and transition
• The complete set of process states:
• Executing in user mode.
• Executing in kernel mode.
• Ready to run.
• Sleeping in memory.
• Ready to run, but in swap space (covered later).
• Sleeping in swap space.
• Preempted. (the process is returning from kernel to user mode, but the kernel preempts it and
does a context switch to schedule another process. Very similar to state 3)
• Newly created. Not ready to run, nor sleeping. This is the start state for all processes except process 0.
• The process executed exit system call and is in the zombie state. The process no longer exists, but
it leaves a record containing an exit code and some timing statistics for its parent process to
collect. The zombie state is the final state of a process.
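The zombie state can be observed with a short POSIX sketch (using Python's os module; the helper name spawn_and_reap is invented for illustration). The child exits and remains a zombie until the parent collects its exit code with waitpid:

```python
import os

# Hedged sketch (POSIX-only): a child calls _exit() and becomes a
# zombie until the parent collects the record the kernel kept for it.
def spawn_and_reap(code):
    pid = os.fork()
    if pid == 0:
        os._exit(code)                    # child: enter the zombie state
    reaped, status = os.waitpid(pid, 0)   # parent: collect exit record
    return reaped == pid, os.WEXITSTATUS(status)

ok, exit_code = spawn_and_reap(7)
```

Here the kernel keeps the child's exit code and timing statistics in its process table entry until the waitpid call removes it.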
4. Process State and its Transitions
5. Kernel Uses Two Tables: Process Table and U-Area Table
• The fields in the process table are the following:
• State of the process
• Fields that allow the kernel to locate the process and its u-area in main memory or in
secondary storage.
• Several process identifiers (PIDs) specify the relationship of processes to each
other.
• Event descriptor when the process is sleeping.
• Scheduling parameters allow the kernel to determine the order in which processes move
to the states kernel running and user running.
• A signal field enumerates the signals sent to a process but not yet handled.
• Various timers give process execution time and kernel resource utilization. These are
used to calculate the process scheduling priority. One field is a user-set timer used to
send an alarm signal to a process.
6. U- Area Table
• The u-area contains these fields :
• A pointer to the process table identifies the entry that corresponds to the u-area.
• The real and effective user IDs determine various privileges allowed the process, such as file access rights.
• Timer fields record the time the process spent executing in user mode and in kernel mode.
• An array indicates how the process wishes to react to signals.
• The control terminal field identifies the "login terminal" associated with the process, if one exists.
• An error field records errors encountered during a system call.
• A return value field contains the result of system calls.
• I/O parameters describe the amount of data to transfer, the address of the source (or target) data array in
user space, file offsets for I/O, and so on.
• The current directory and current root describe the file system environment of the process.
• The user file descriptor table records the files the process has open.
• Limit fields restrict the size of a process and the size of a file it can write.
• A permission modes field masks mode settings on files the process creates.
7. Layout of System Memory
• Physical memory is addressable from offset 0 up to the amount of physical
memory installed. A process in UNIX contains three sections: text, data, and stack.
The text section contains the instructions, and those instructions can refer to other
addresses, for example, addresses of subroutines, addresses of global variables in
the data section, or addresses of local data structures on the stack. If the addresses
generated by the compiler were treated as physical addresses, it would be
impossible to run more than one process at a time, because the addresses could
overlap. Even if the compiler tried to avoid this problem with heuristics, it would
be difficult and impractical.
• To solve this problem, the kernel treats the addresses given by the compiler as
virtual addresses. When the program starts executing, the memory management
unit translates the virtual addresses to physical addresses. The compiler doesn't
need to know which physical addresses the process will get. For example, two
instances of the same program could be executing in memory using the same
virtual addresses but different physical addresses. The subsystems of the kernel
and the hardware that cooperate to translate virtual to physical addresses
comprise the memory management subsystem.
8. Regions
• The UNIX system divides its virtual address space into logically separated regions.
A region is a contiguous area of virtual address space, and a logically distinct
object that can be shared. The text, data, and stack are usually separate regions.
It is common to share the text region among instances of the same program.
• The region table entries contain the physical locations at which the region is
spread. Each process contains a private per-process region table, called
a pregion. A pregion entry contains a pointer to an entry in the region table,
the starting virtual address of the region, and the access permissions: read-only,
read-write, or read-execute. Pregion entries are stored in the process table, the
u-area, or a separately allocated memory space, depending on the
implementation. The pregion and region structures are analogous to the file
table and the in-core inode table: since pregions are specific to a process, the
pregion table is private to a process, whereas the file table is global.
Regions can be shared amongst processes.
9. An example of regions:
10. Pages and Page Tables
• In a memory model based on pages, the physical memory is divided into equal-
sized blocks called pages. Page sizes are usually between 512 bytes and 4K bytes,
and are defined by the hardware. Every memory location can be addressed by a
"page number" and a "byte offset" within the page. For example, if a machine with
2^32 bytes of memory has pages of size 1K bytes (2^10), it will have 2^22 pages.
• When the kernel assigns physical pages of memory to a region, it need not assign
the pages contiguously or in any particular order, just as disk blocks are not
assigned contiguously, to avoid fragmentation.
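The page-number/byte-offset split described above can be sketched in a few lines, assuming the 1K-byte (2^10) pages from the example (PAGE_SIZE and split_address are invented names for illustration):

```python
PAGE_SIZE = 1024                     # 2**10 bytes per page
OFFSET_BITS = 10                     # log2(PAGE_SIZE)

def split_address(vaddr):
    """Return (page number, byte offset) for a virtual address."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

# A machine with 2**32 bytes of memory then has 2**22 pages:
num_pages = 2**32 // PAGE_SIZE
```

For instance, virtual address 2148 falls in page 2 at offset 100, since 2148 = 2 * 1024 + 100.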
11. • The kernel maintains a mapping of logical to physical page numbers in
a table which looks like this:
12. • These tables are called page tables. A region table entry has pointers to its page
tables. Since the logical address space is contiguous, the page table is just an array
of physical page numbers indexed by logical page number. The page tables also
contain hardware-dependent information such as permissions for pages. Modern
machines have special hardware for address translation, because a software
implementation of such translation would be too slow. Hence, when a process
starts executing, the kernel tells the hardware where its page tables reside.
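A toy simulation makes the lookup concrete: the logical page number indexes an array of physical page numbers (the page numbers below are invented for illustration):

```python
PAGE_SIZE = 1024

# Toy page table for one region: index = logical page number,
# value = physical page number.
page_table = [177, 54, 209, 17]

def translate(vaddr):
    """Map a region-relative virtual address to a physical address."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page >= len(page_table):
        # models the exception raised for an out-of-range access
        raise MemoryError("address outside the mapped address space")
    return page_table[page] * PAGE_SIZE + offset
```

Virtual address 1029 (logical page 1, offset 5) maps to physical page 54, offset 5; an address beyond the mapped pages raises the exception, as a real access outside the address space would.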
14. • To remain hardware independent, let us assume that the hardware has register
triples (in abundance) which the kernel uses for memory management. The first
register in the triple contains the address of the page table, the second register
contains the first virtual address mapped by the page table, and the third register
contains control information such as the number of pages in the page table and
page access permissions. When executing a process, the kernel loads such register
triples with the data in the pregion entries.
• If a process accesses an address outside its virtual address space, an exception is
generated. Suppose a process has 0 to 16K bytes of address space and it accesses
the virtual address 20K; an exception will be generated, and it is caught by the
operating system. Similarly, if a process tries to access a page without sufficient
permission, an exception will be generated. In such cases, the process normally
exits.
15. Layout of the Kernel
• Even though the kernel executes in the context of a process, its virtual
address space is independent of processes. When the system boots
up, the kernel is loaded into memory, and the necessary page tables and
registers are loaded with appropriate data. Many hardware systems
slice a process' address space into many sections, user and kernel
being two of them. When in user mode, access to kernel page tables
is prohibited; only when the process switches to kernel mode can it
access kernel page tables. Some system implementations try to
allocate the same physical pages to the kernel, keeping the translation
function an identity function.
16. Changing Mode from User to Kernel
17. The U area
• Even though every process has a u-area, the kernel accesses them through
its u variable, and it needs to access only one u-area at a time: that of the
currently executing process. The kernel knows where the page table entry of
the u-area is located; therefore, when a process is scheduled, the physical
address of its u-area is loaded into the kernel page tables.
18. Memory Map of U area in the Kernel
19. The Context of a Process
• The context of a process consists of:
• Contents of its (user) address space, called the user-level context
• Contents of hardware registers, called the register context
• Kernel data structures that relate to the process, called the system context
• The user-level context consists of the process text, data, user stack, and shared
memory that is in the virtual address space of the process. The part that resides
on swap space is also part of the user-level context.
20. The register context consists of the following components:
• The program counter specifies the next instruction to be executed. It is an address in
kernel or in user address space.
• The processor status register (PS) specifies hardware status relating to the process.
It has subfields which specify whether the last instruction overflowed, or resulted in a
zero, positive, or negative value, etc. It also specifies the current processor execution
level and the current and most recent modes of execution (such as kernel, user).
• The stack pointer points to the current address of the next entry in the kernel or
user stack. Whether it points to the next free entry or the last used entry depends on
the machine architecture, as does the direction of the growth of the stack (toward
numerically higher or lower addresses).
• The general purpose registers contain data generated by the process during its
execution.
21. The System Level
• The system-level context has a "static part" and a "dynamic part". A
process has one static part throughout its lifetime, but it can have a
variable number of dynamic parts.
22. • The static part consists of the following components:
• The process table entry
• The u-area
• Pregion entries, region tables and page tables.
• The dynamic part consists of the following components:
• The kernel stack contains the stack frames of the kernel functions. Even though all
processes share the kernel text and data, the kernel stack needs to be different for each
process, as every process might be in a different state depending on the system calls it
executes. The pointer to the kernel stack is usually stored in the u-area, but this differs
across system implementations. The kernel stack is empty when the process executes in
user mode.
• The dynamic part of the system level context consists of a set of layers, visualized as a
last-in-first-out stack. Each system-level context layer contains information necessary to
recover the previous layer, including register context of the previous layer.
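The last-in-first-out behavior of these context layers can be sketched as a simple stack; the layer fields below are simplified placeholders, not the real kernel structures:

```python
# Sketch: the dynamic part of the system-level context as a LIFO
# stack of layers; each layer stores what is needed to recover the
# previous one (here just saved pc and sp values).
context_layers = []

def push_layer(saved_pc, saved_sp):
    context_layers.append({"pc": saved_pc, "sp": saved_sp})

def pop_layer():
    # restoring the saved register context recovers the previous layer
    return context_layers.pop()

push_layer(0x1000, 0x7FFF)   # e.g. on entering a system call
push_layer(0x2000, 0x7F00)   # e.g. on an interrupt during the call
top = pop_layer()            # the interrupt layer comes off first
```

Popping always returns the most recently pushed layer, which is exactly why the kernel can unwind nested interrupts and system calls in order.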
23. Components of the Context of a Process
24. Saving the Context of a Process
• Interrupts and Exceptions
• The system is responsible for handling interrupts and exceptions. If the system is
executing at a lower processor execution level when an interrupt occurs, the
kernel accepts the interrupt before decoding the next instruction and then raises
the processor execution level to block other interrupts of that or a lower level. It
handles the interrupt by performing the following sequence of operations:
• It saves the current register context of the executing process and creates (pushes)
a new context layer.
• The kernel determines the source (cause) of the interrupt, and if applicable, the unit
number (such as which drive caused the interrupt). When the system receives an
interrupt, it gets a number, which it uses as an index into the interrupt
vector, which stores the actions to be taken (interrupt handlers) when interrupts
occur. Example of interrupt vector:
25. • The kernel invokes the interrupt handler. The kernel stack of the new context
layer is logically distinct from the kernel stack of the previous context layer. Some
implementations use the process's kernel stack to store the stack frame of an
interrupt handler, while some implementations use a global interrupt stack for
interrupt handlers that are guaranteed to return without a context switch.
• The kernel returns from the interrupt handler and executes a set of hardware
instructions which restore the previous context. The interrupt handler may affect
the behavior of the process, as it might modify kernel data structures. But
usually, the process resumes execution as if the interrupt never occurred.
27. Algorithm for Handling Interrupts
• The algorithm for interrupt handling is given below:
/* Algorithm: inthand
 * Input: none
 * Output: none
 */
{
    save (push) current context layer;
    determine interrupt source;
    find interrupt vector;
    call interrupt handler;
    restore (pop) previous context layer;
}
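The inthand steps can be simulated with a dispatch table standing in for the interrupt vector; the handler names and interrupt numbers below are invented for illustration:

```python
# Simulation of inthand: the interrupt number indexes a vector of
# handlers; a list models the stack of saved context layers.
def clock_handler():
    return "clock tick handled"

def disk_handler():
    return "disk transfer handled"

interrupt_vector = {0: clock_handler, 1: disk_handler}
context_layers = []

def inthand(interrupt_number):
    context_layers.append("saved register context")  # push current layer
    handler = interrupt_vector[interrupt_number]     # find vector entry
    result = handler()                               # call the handler
    context_layers.pop()                             # pop previous layer
    return result
```

After the handler returns and the previous layer is popped, the interrupted process continues as if the interrupt never occurred.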
28. System Call Interface
• The library functions such as open, read, etc. in the standard C library
are not actually system calls. They are normal functions, and normal
functions cannot change the mode of execution of the system. Instead,
these functions invoke a special instruction which makes the system
change its execution mode to kernel mode and start executing the
system call code. This instruction is called the operating system trap.
The system calls are a special case of interrupt handling. The library
routines pass a number, unique to each system call, as the parameter
to the operating system trap through a specific register or on the stack.
Through that number, the kernel determines which system call to
execute.
29. Algorithm Syscall
/* Algorithm: syscall
 * Input: system call number
 * Output: result of system call
 */
{
    find entry in the system call table corresponding to the system call number;
    determine number of parameters to the system call;
    copy parameters from the user address space to u-area;
    save current context for abortive return; // studied later
    invoke system call code in kernel;
    if (error during execution of system call)
    {
        set register 0 in user saved register context to error number;
        turn on carry bit in PS register in user saved register context;
    }
    else
        set register 0, 1 in user saved register context to return values from system call;
}
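The syscall algorithm can be simulated with a dispatch table: the call number indexes a table of (handler, parameter count) pairs, and register 0 plus the carry bit of the saved user context are modeled as a dict. All numbers and handler names below are invented for illustration:

```python
import errno

def sys_getpid():
    return 42                 # invented pid for the sketch

def sys_close(fd):
    if fd < 0:
        raise OSError(errno.EBADF, "bad file descriptor")
    return 0

# system call table: number -> (handler, number of parameters)
syscall_table = {11: (sys_getpid, 0), 12: (sys_close, 1)}

def syscall(number, *user_params):
    handler, nparams = syscall_table[number]   # find table entry
    params = user_params[:nparams]             # copy params to "u-area"
    try:
        result = handler(*params)              # invoke kernel code
        return {"r0": result, "carry": 0}      # success: carry bit off
    except OSError as e:
        return {"r0": e.errno, "carry": 1}     # error: carry bit on
```

On success, register 0 carries the return value with the carry bit clear; on error, register 0 carries the error number with the carry bit set, which is how the C library decides whether to set errno and return -1.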
30. Context Switch
• As seen previously, the kernel permits a context switch in four situations:
• When a process sleeps
• When a process exits
• When a process returns from a system call to user mode but is not the most
eligible process to run.
• When a process returns from an interrupt handler to user mode but is not the
most eligible process to run.
31. Manipulation of the Process Address Space
• The region table entry contains the following information:
• The inode of the file from which the region was initially loaded.
• The type of the region (text, shared memory, private data, or stack).
• The size of the region.
• The location of the region in physical memory.
• The state of the region:
• Locked, in demand, in the process of being loaded into memory, valid (loaded into memory)
• The reference count, giving the number of processes that reference the region
• The operations that manipulate regions are:
• lock a region
• unlock a region
• allocate a region
• attach a region to the memory space of a process
• change the size of a region
• load a region from a file into the memory space of a process
• free a region
• detach a region from the memory space of a process, and duplicate the contents of a region
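The attach and detach operations listed above can be sketched with the reference count from the region table entry; the class and function names are invented for illustration:

```python
# Sketch: a shared region with a reference count; attaching adds the
# region to a process's pregion table, detaching removes it, and the
# region can be freed once no process references it.
class Region:
    def __init__(self, size):
        self.size = size
        self.refcount = 0     # processes referencing the region

def attachreg(region, pregion_table, vaddr):
    region.refcount += 1
    pregion_table.append((vaddr, region))

def detachreg(region, pregion_table, vaddr):
    pregion_table.remove((vaddr, region))
    region.refcount -= 1
    return region.refcount == 0   # True: region can now be freed

text = Region(size=8192)
p1, p2 = [], []                   # per-process pregion tables
attachreg(text, p1, 0x0000)       # shared text region attached by
attachreg(text, p2, 0x0000)       # two processes at the same vaddr
```

Only when the last process detaches does the count reach zero and the region become eligible for freeing, which is how a shared text region survives while any instance of the program is running.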
32. Sleep
• Processes sleep inside system calls while awaiting a particular resource, or when
a page fault occurs. In such cases, they push a context layer and do a context
switch. The context layers of a sleeping process are shown below: