The document is a lecture presentation on inter-process communication and memory in Linux. It discusses signals and semaphores that allow processes to communicate events. It describes how processes can specify actions in response to signals. The document also covers zombie processes, memory management including virtual memory and demand paging, and provides an abstract model of how virtual memory is translated to physical addresses.
Operating Systems 1 (6/12) - Processes, by Peter Tröger
The document discusses operating system concepts related to processes. It covers:
1) The core operating system task of managing multiple running processes through techniques like multi-processing and multi-tasking.
2) A process is the unit of execution and is represented by a process control block containing its state and resource usage information.
3) Processes go through various states in their lifecycle including running, ready, blocked, and terminated as they are scheduled by the operating system dispatcher.
The document discusses various topics related to memory management in operating systems including swapping, contiguous memory allocation, paging, segmentation, virtual memory concepts like demand paging, page replacement, and thrashing. It provides details on page tables, segmentation hardware, logical to physical address translation, and performance aspects of demand paging. The key aspects covered are memory management techniques to overcome fragmentation and enable efficient use of limited main memory.
The document discusses processes in an operating system. It defines a process as a "thread of control" with its own private memory area. It describes process states like running, ready, waiting, and zombie. It outlines the kernel data structures used to manage processes, including the process table, u area, process groups, and sessions. It also covers memory layout, address spaces, context switching, and manipulating processes and shared memory regions.
The document discusses processes in Linux operating systems. It defines a process as an active program instance, as opposed to a passive program file. It describes how processes are generated from programs using the fork() system call, and how the exec() system call replaces the program running in a process. It also covers process types, lifecycles, and tools for monitoring and managing processes. Key points covered include the differences between processes and programs, process IDs, parent/child relationships, and how fork() and exec() are used to generate and execute new processes.
In the given presentation, a process overview, process management, scheduling types, and some more basic concepts are explained. Kindly refer to the presentation.
The document discusses memory management techniques in operating systems. It describes fixed and dynamic partitioning of main memory between programs. Fixed partitioning can lead to internal fragmentation while dynamic partitioning can result in external fragmentation. The memory manager must track busy and free memory areas using data structures like partition tables. It also must decide where to allocate new programs using first-fit or best-fit placement algorithms.
Memory management is the method by which an operating system handles and allocates primary memory. It tracks the status of memory locations as allocated or free, and determines how memory is distributed among competing processes. Memory can be allocated contiguously or non-contiguously. Contiguous allocation assigns consecutive blocks of memory to a process, while non-contiguous allocation allows a process's memory blocks to be scattered across different areas using techniques like paging or segmentation. Paging divides processes and memory into fixed-size pages and frames to allow non-contiguous allocation while reducing fragmentation.
The document discusses how operating systems use processes and scheduling to allow multiple programs to run simultaneously. It explains that a process is the running instance of a program and contains the program counter, registers, memory allocation, and other state information. The operating system uses process scheduling and a process control block (PCB) for each process to track status, allocate CPU time, and handle interrupts and blocking for I/O. It outlines common scheduling algorithms like first-come first-served, shortest job next, priority, and round robin.
The document discusses various memory management techniques used by operating systems including memory allocation, paging, segmentation, virtual memory and page replacement algorithms. It provides details on how operating systems manage processes in memory using techniques like memory mapping, context switching, swapping and fragmentation handling. Various address translation mechanisms like logical, physical and virtual addresses are also summarized along with common page replacement algorithms like FIFO, LRU, OPT and their working.
Operating system - Process and its concepts, by Karan Thakkar
This presentation gives an overview of process concepts in operating systems. It aims to alleviate most of the overhead in understanding the process concept, and this tailor-made presentation will help individuals understand the overall meaning of a process and its underlying concepts in an operating system.
This document discusses different memory management techniques used in operating systems. It begins by describing the basic components and functions of memory. It then explains various memory management algorithms like overlays, swapping, paging and segmentation. Overlays divide a program into instruction sets that are loaded and unloaded as needed. Swapping loads entire processes into memory for execution then writes them back to disk. Paging and segmentation are used to map logical addresses to physical addresses through page tables and segment tables respectively. The document compares advantages and limitations of these approaches.
The document discusses various concepts related to process management in operating systems including process scheduling, CPU scheduling, and process synchronization. It defines a process as a program in execution and describes the different states a process can be in during its lifecycle. It also discusses process control blocks which maintain information about each process, and various scheduling algorithms like first come first serve, shortest job first, priority and round robin scheduling.
Virtual memory allows processes to execute even if they are larger than physical memory by storing portions of processes on disk. When a process attempts to access memory that is not currently loaded, a page fault occurs which brings the required page into memory from disk. This is known as demand paging and allows the operating system to efficiently load only those portions of a process needed for execution, reducing memory usage and improving performance compared to loading the entire process at once.
The document discusses address translation in computer systems using paging. It explains that with paging, the virtual address space is divided into pages, and physical memory is divided into page frames. A page table maps pages to frames. The page table can be stored in memory and pointed to by a page table base register. Hardware support for fast translation includes a translation lookaside buffer (TLB) that caches recent page-frame mappings. Paging allows sharing of pages between processes and provides memory protection between processes.
The document provides an overview of operating system functions and components. It discusses how operating systems provide convenience, efficiency, and manage computer resources through services like process creation and execution, I/O device access, file access control, and error handling. It describes the evolution of operating systems from early single-user batch systems to today's interactive, multi-tasking systems using techniques like multi-programming, time-sharing, virtual memory, and paging.
This document discusses memory management and paging in operating systems. It explains that memory management allocates space for application routines and prevents interference between programs. The memory hierarchy includes main memory, cache memory, and secondary storage. Paging is a memory management technique that divides processes and main memory into equal pages. It allows processes to be non-contiguous in memory. The operating system uses page tables to map logical addresses to physical addresses stored across different pages and frames. Paging reduces external fragmentation but can cause internal fragmentation.
The document discusses processes in an operating system. It defines a process as an active program in execution that has its own memory space and resources. It describes the different states a process can be in like ready, running, waiting etc. It also discusses process control blocks that contain information about processes and scheduling queues that processes move between. Inter-process communication using shared memory and message passing is also summarized.
The document discusses processes and process management in operating systems. It defines a process as the unit of execution, scheduling, and ownership. A process consists of code, data, a stack, registers, and other components needed to run a program. Processes can be in different states like ready, running, waiting. The OS uses data structures called process control blocks (PCBs) to manage process states and execution contexts. It also maintains scheduling queues to organize processes in different states. Process creation, termination, and interprocess communication (IPC) allow processes to work together in a system.
The following document outlines what we believe are the top 20 Windows tools every system administrator should know or be familiar with. Some you will most likely already know about, but we hope you'll find plenty of information here that you didn't know.
Everyone that deals with Windows in a system administrator capacity has to know about Task Manager. The nice thing is it keeps getting better with each new version of Windows.
This document discusses C++ memory management. It describes the different memory segments used in a C++ program including the code segment, BSS segment, data segment, heap, and stack. The stack and heap are discussed in more detail. The stack stores function parameters and local variables and uses a last-in, first-out approach. The heap stores dynamically allocated memory using new until explicitly freed. The document also provides details on how the call stack works, including pushing and popping stack frames when functions are called and returned from.
This document discusses memory management techniques in operating systems including paging, segmentation, and virtual memory. It defines key concepts such as logical versus physical addresses, page tables, frames, and how memory management units map between these spaces. Advantages and disadvantages of different algorithms like FIFO, LRU and clock are presented. The goals of memory management are to allow for more efficient use of limited memory and enable running multiple processes simultaneously.
This document discusses memory management techniques used in operating systems. It describes how memory management allocates main memory efficiently between multiple processes. Early techniques included fixed and dynamic partitioning, as well as the buddy system. Address translation allows logical addresses used by programs to be translated to physical addresses when processes are loaded into memory. Relocation and protection are key requirements for memory management.
The document discusses several topics related to process scheduling and memory management in operating systems, including:
- Process scheduling using a round-robin algorithm with prioritization of processes into different queues.
- System calls for getting and setting the time, including for scheduling alarms for individual processes.
- The clock interrupt handler which performs functions like process scheduling, profiling, and keeping track of time.
- Memory management policies including swapping entire processes in and out of memory and demand paging which swaps individual pages in and out on demand. The System V kernel uses both techniques together to avoid thrashing.
Operating System
Topic: Memory Management
for BTech/BSc (C.S.)/BCA...
Memory management is the functionality of an operating system that handles or manages primary memory. It keeps track of each and every memory location, whether it is allocated to some process or free. It checks how much memory is to be allocated to each process and decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status accordingly.
The document discusses various topics related to process management in Unix/Linux operating systems. It describes the different states a process can be in, such as ready, running, waiting, zombie, etc. It also explains process creation using fork() and program execution using exec(). Other topics covered include process termination with exit(), waiting for child processes using wait(), process relationships like parent-child, process identifiers, signals, and context switching.
The document discusses processes and operations on processes in an operating system. It defines a process as a program in execution that includes a program counter, stack, data section, and heap. A process goes through various states like running, ready, waiting, and terminated. The operating system uses process control blocks to store process information and schedules processes using queues and schedulers. It also covers process creation, where a parent process spawns child processes, and process termination when a process exits.
Introduction to Operating System (Important Notes), by Gaurav Kakade
The document provides definitions and explanations of various operating system concepts. It discusses context switching, rollbacks, system calls, the definition of an operating system, swapping, semaphores, Belady's anomaly, waiting time, claim edges in resource allocation graphs, whether round robin is preemptive, the definition of an editor, fragmentation, files and their attributes, page faults, turnaround time, multiprogramming, overlays in memory management, processes, compile time, deadlocks, basic file operations, dispatchers, classic synchronization problems, and pages and frames in memory management. It also explains deadlock prevention strategies such as mutual exclusion, hold and wait, and no preemption.
The document discusses operating systems and processes. It defines an operating system as software that controls hardware and manages resources. A process is a program in execution that has a unique ID and state. Processes go through various states like running, ready, blocked/waiting, and terminated. Threads are lightweight processes that can be scheduled independently and share resources within a process. User-level threads are managed in libraries while kernel-level threads are managed by the operating system kernel.
This document summarizes key aspects of memory management using paging. It discusses how paging divides both physical memory and logical addresses into fixed-sized pages and frames. A page table maps page numbers to physical frame numbers, translating logical addresses to physical addresses. Paging avoids fragmentation and allows non-contiguous allocation of physical memory to processes. The operating system maintains page tables and a frame table to manage physical memory allocation and address translation for processes.
OS | Functions of OS | Operations of OS | Operations of a process | Scheduling algorithms | FCFS scheduling | SJF scheduling | RR scheduling | Paging | File system implementation | Cryptography as a security tool
The document discusses processes and process management in an operating system. A process is an instance of a computer program being executed and contains the program code and current activity. Processes go through various states like ready, running, waiting, and terminated. The operating system uses a process control block (PCB) to maintain information about each process like its state, program counter, memory allocation, and other details. Key process operations include creation, termination, and context switching between processes using the PCB.
This document discusses processes and threads in operating systems. It defines a process as a program under execution with its own virtual CPU and state. Processes are created through system initialization, forking, or by user request. Processes transition between running, ready, blocked, and terminated states. A process control block stores process information. Context switching involves saving one process's state and restoring another's. Threads are lightweight processes within a process that share the process's resources. Threads provide concurrency and efficient communication compared to processes.
This document provides information about operating systems and computer hardware concepts for a second year computer science course. It includes definitions of terms like 32-bit microprocessor, opcode, data bus, and cache memory. It also lists some common operating systems and describes concepts like multicore processors, virtualization, virtual memory, and the different states a process can be in.
Virtual memory is a technique that allows processes to exceed the size of physical memory. It divides programs into pages stored on disk until needed. When a page is accessed, it is copied into RAM. Addresses are translated between virtual and physical addresses by an MMU. Pages are replaced using policies like FIFO. Thrashing occurs when too many page faults slow processing. Demand paging loads pages on first access, while segmentation divides programs into variable blocks. Combined systems use both paging and segmentation.
This document summarizes key concepts from lecture notes on operating systems. It discusses the role of an operating system as an intermediary between the user and computer hardware. It describes the main components of an operating system including process management, memory management, file management, I/O management, and networking. It also covers process states, scheduling algorithms like FCFS, SJF, priority and round robin scheduling, and the goals of utilizing CPU resources efficiently and providing a user-friendly interface.
Main memory is where programs are loaded to run on the CPU. There are several techniques for managing memory allocation and binding programs to addresses in memory, including compile-time, load-time, and execution-time binding. Memory management is needed to map logical addresses used by programs to physical addresses in memory. Paging is a memory management technique that divides memory into pages to allow non-contiguous allocation and reduce fragmentation.
This document contains lecture notes on operating systems. It covers topics like the definition and goals of operating systems, system components, processes and process states, CPU scheduling algorithms, synchronization between processes, deadlocks, memory management, and virtual memory. The key points are:
- An operating system acts as an intermediary between the user and computer hardware to provide an environment for running programs and efficiently using computer resources.
- System components include process management, memory management, file management, I/O management, networking, and protection.
- CPU scheduling algorithms like FCFS, SJF, priority, round-robin, and multilevel queue aim to make efficient use of CPU time between processes.
This document provides a summary of key topics covered in lecture materials on operating systems. It discusses the basic functions and components of operating systems including process management, memory management, CPU scheduling, synchronization, deadlocks, and virtual memory. Specific scheduling algorithms like first-come first-served, shortest job first and round-robin are explained. The document also covers operating system services, system calls, protection and various historical generations of operating systems.
Topic covers:
what is operating system?
need of operating system
Loading of operating system
types of operating system?
Functions of operating system?
System Security Plan?
Hardening of operating system
Operating Systems chap 2_updated2 (1).pptxAmanuelmergia
The document discusses processes and process management in operating systems. It begins with an analogy comparing workers to programs and processes. It then defines a process as a program in execution that requires resources like memory and CPU. The document outlines the lifecycle of a process through various states like ready, running, waiting etc. It describes process creation, termination, and scheduling. Process control blocks containing process information are discussed. The need for process management and operations like context switching and process synchronization are also summarized.
The operating system is a collection of programs that manage computer hardware resources and provide common services for computer programs. It acts as an interface between the user and computer hardware, and controls the computer system by managing all hardware and software. The primary functions of an operating system are to execute user programs, make the computer system convenient to use, and efficiently use computer hardware resources.
The document discusses processes and process management in operating systems. It begins with an analogy comparing workers to programs and processes. It then defines a process as a program in execution that requires resources like memory and CPU. The document outlines the lifecycle of a process through various states like ready, running, waiting etc. It describes process creation, termination, and scheduling. Process control blocks containing process information are discussed. The need for process management and operations like context switching and process synchronization are also covered.
Globalwebtutors.com is an online tutoring platform that provides homework help, dissertation editing, assignment help, and question help. Users can send requirements to Support@globalwebtutors.com or connect via live chat. The document then discusses various aspects of memory management techniques used in operating systems like paging, segmentation, and virtual memory management. It describes processes like swapping, different address types, internal and external fragmentation, and more. More information is available at the provided link.
1) Virtual memory simplifies address translation by creating a virtual memory space that contains the operating system and application programs, even though it does not physically exist.
2) Virtual memory is divided into two components - the first occupies real memory and contains the operating system and page pool, while the second occupies external storage and holds application programs, with pages swapped between real memory and external storage.
3) Address translation under virtual memory uses segment tables and page tables to map virtual addresses to physical addresses.
Similar to Inter Process communication and Memory Management (20)
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
Elevate Your Nonprofit's Online Presence_ A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
A Free 200-Page eBook ~ Brain and Mind Exercise.pptxOH TEIK BIN
(A Free eBook comprising 3 Sets of Presentation of a selection of Puzzles, Brain Teasers and Thinking Problems to exercise both the mind and the Right and Left Brain. To help keep the mind and brain fit and healthy. Good for both the young and old alike.
Answers are given for all the puzzles and problems.)
With Metta,
Bro. Oh Teik Bin 🙏🤓🤔🥰
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Information and Communication Technology in EducationMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 2)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐈𝐂𝐓 𝐢𝐧 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧:
Students will be able to explain the role and impact of Information and Communication Technology (ICT) in education. They will understand how ICT tools, such as computers, the internet, and educational software, enhance learning and teaching processes. By exploring various ICT applications, students will recognize how these technologies facilitate access to information, improve communication, support collaboration, and enable personalized learning experiences.
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐫𝐞𝐥𝐢𝐚𝐛𝐥𝐞 𝐬𝐨𝐮𝐫𝐜𝐞𝐬 𝐨𝐧 𝐭𝐡𝐞 𝐢𝐧𝐭𝐞𝐫𝐧𝐞𝐭:
-Students will be able to discuss what constitutes reliable sources on the internet. They will learn to identify key characteristics of trustworthy information, such as credibility, accuracy, and authority. By examining different types of online sources, students will develop skills to evaluate the reliability of websites and content, ensuring they can distinguish between reputable information and misinformation.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
CapTechTalks Webinar Slides June 2024 Donovan Wright.pptxCapitolTechU
Slides from a Capitol Technology University webinar held June 20, 2024. The webinar featured Dr. Donovan Wright, presenting on the Department of Defense Digital Transformation.
How to Download & Install Module From the Odoo App Store in Odoo 17Celine George
Custom modules offer the flexibility to extend Odoo's capabilities, address unique requirements, and optimize workflows to align seamlessly with your organization's processes. By leveraging custom modules, businesses can unlock greater efficiency, productivity, and innovation, empowering them to stay competitive in today's dynamic market landscape. In this tutorial, we'll guide you step by step on how to easily download and install modules from the Odoo App Store.
220711130097 Tulip Samanta Concept of Information and Communication Technology
Inter Process communication and Memory Management
1. LECTURE PPT
FOR
LINUX
M.SC. COMPUTER SCIENCE III SEMESTER
TOPIC: INTER PROCESS COMMUNICATION AND MEMORY
By: Poonam Yadav
Assistant Professor, CS
Shri Shankaracharya Mahavidyalaya, Junwani, Bhilai
2. SIGNALS AND SEMAPHORES
• A signal is a notification sent to a process that an event has occurred. These
events may include hardware or software faults, terminal activity, timer
expiration, changes in the status of a child process, changes in window
size, and so forth. A list of the 30 most common signals is given in the table.
• Each process may specify an action to be taken in response to any signal
other than the kill signal. The action can be any of these:
• Take the default action for the signal: Exit means the receiving process is
terminated. Core means the receiving process is terminated and leaves a
"core image". Stop means the receiving process stops.
• Ignore the signal.
• On receiving the signal, execute a signal-handling function defined for this
process.
3. • Many of the signals are used to notify processes of special events that
may not be of interest to a user. Although most of them can have user
impact (e.g., power failures and hardware errors), there is not much a
user can do in response.
• A notable exception for users and for shell programmers is HUP, the
hang-up signal. You can control what will happen after you hang up or
log off.
4. THE NOHUP COMMAND
• The nohup Command When your terminal is disconnected, the kernel sends the signal SIGHUP (signal 01) to all
processes that were attached to your terminal, as long as your shell does not have job control.
• The purpose of this signal is to have all other processes terminate. Times frequently arise, however, when you want to
have a command continue execution after you hang up.
• For example, you will want to continue a troff process that is formatting a memorandum without having to stay
logged in.
• To ensure that a process stays alive after you log off, use the nohup command as follows:
• $ nohup command
• The nohup command is a built-in shell command in sh, ksh, and csh.
• In some earlier versions of UNIX the nohup command was a shell script that trapped and ignored the hangup signal. It
basically acted like this:
• $ (trap '' 1; command) &
• nohup refers only to the command that immediately follows it. If you issue a command such as
5. • $ nohup cat file | sort | lp
• or if you issue a command such as
• $ nohup date; who; ps -ef
• only the first command will ignore the hangup signal; the remaining commands on the line will die when you hang up.
• To use nohup with multiple commands, either precede each command with nohup or, preferably, place all the commands
in a file and use nohup to protect the shell that is executing the commands:
• $ cat file
• date
• who
• ps -ef
• $ nohup sh file
6. ZOMBIE PROCESSES
• Normally, UNIX system processes terminate by using the exit system call.
• The call exit (status) returns the value of status to the parent process. When a process issues the exit call, the
kernel disables all signal handling associated with the process, closes all files, releases all resources, frees any
memory, assigns any children of the exiting process to be adopted by init, sends the death of a child signal to
the parent process, and converts the process into the zombie state.
• A process in the zombie state is not alive; it does not use any resources or accomplish any work. But it is not
allowed to die until the exit is acknowledged by the parent process.
• If the parent does not acknowledge the death of the child (because the parent is ignoring the signal, or
because the parent itself is hung), the child stays around as a zombie process. These zombie processes appear
in the ps -f listing with <defunct> in place of the command:
• UID PID PPID C STIME TTY TIME COMD
• root 21671 21577 0 0:00 <defunct>
• root 21651 21577 0 0:00 <defunct>
• Because a zombie process consumes no CPU time and is attached to no terminal, the STIME and TTY fields
are blank. In earlier versions of the UNIX System, the number of these zombie processes could increase and
clutter up the process table.
• In more recent versions of the UNIX System, the kernel automatically releases the zombie processes.
7.
8. MEMORY MANAGEMENT
The memory management subsystem is one of the most important parts of the operating system. Since the early days of computing, there has been
a need for more memory than exists physically in a system. Strategies have been developed to overcome this limitation and the most successful of
these is virtual memory. Virtual memory makes the system appear to have more memory than it actually has by sharing it between competing
processes as they need it.
Virtual memory does more than just make your computer's memory go further. The memory management subsystem provides:
Large Address Spaces
The operating system makes the system appear as if it has a larger amount of memory than it actually has. The virtual memory can be many times
larger than the physical memory in the system.
Protection
Each process in the system has its own virtual address space. These virtual address spaces are completely separate from each other and so a process
running one application cannot affect another. Also, the hardware virtual memory mechanisms allow areas of memory to be protected against
writing. This protects code and data from being overwritten by rogue applications.
Memory Mapping
Memory mapping is used to map image and data files into a process's address space. In memory mapping, the contents of a file are linked directly
into the virtual address space of a process.
9. MEMORY MANAGEMENT
Fair Physical Memory Allocation
The memory management subsystem allows each running process in the system a fair share of the
physical memory of the system.
Shared Virtual Memory
Although virtual memory allows processes to have separate (virtual) address spaces, there are times when
you need processes to share memory. For example, there could be several processes in the system
running the bash command shell. Rather than have several copies of bash, one in each process's
virtual address space, it is better to have only one copy in physical memory and all of the processes
running bash share it. Dynamic libraries are another common example of executing code shared
between several processes.
Shared memory can also be used as an Inter Process Communication (IPC) mechanism, with two or more
processes exchanging information via memory common to all of them. Linux supports the
UNIX System V shared memory IPC.
11. • The figure shows the virtual address spaces of two processes, process X and process Y,
each with their own page tables. These page tables map each process's
virtual pages into physical pages in memory. This shows that
process X's virtual page frame number 0 is mapped into memory in
physical page frame number 1 and that process Y's virtual page frame
number 1 is mapped into physical page frame number 4. Each entry in
the theoretical page table contains the following information:
• Valid flag. This indicates if this page table entry is valid,
• The physical page frame number that this entry is describing,
• Access control information. This describes how the page may be used.
12. • The page table is accessed using the virtual page frame number as an
offset. Virtual page frame 5 would be the 6th element of the table (0 is
the first element).
• To translate a virtual address into a physical one, the processor must first
work out the virtual address's page frame number and the offset within
that virtual page. By making the page size a power of 2, this can be easily
done by masking and shifting.
13. • The processor uses the virtual page frame number as an index into the process's page table to retrieve its page table
entry. If the page table entry at that offset is valid, the processor takes the physical page frame number from this
entry. If the entry is invalid, the process has accessed a non-existent area of its virtual memory. In this case, the
processor cannot resolve the address and must pass control to the operating system so that it can fix things up.
• Just how the processor notifies the operating system that the current process has attempted to access a virtual
address for which there is no valid translation is specific to the processor. However the processor delivers it, this is
known as a page fault, and the operating system is notified of the faulting virtual address and the reason for the page
fault.
• Assuming that this is a valid page table entry, the processor takes that physical page frame number and multiplies it
by the page size to get the address of the base of the page in physical memory. Finally, the processor adds in the
offset to the instruction or data that it needs.
• Using the above example again, process Y's virtual page frame number 1 is mapped to physical page frame number 4
which starts at 0x8000 (4 x 0x2000). Adding in the 0x194 byte offset gives us a final physical address of 0x8194.
• By mapping virtual to physical addresses this way, the virtual memory can be mapped into the system's physical pages
in any order.
14. DEMAND PAGING
• As there is much less physical memory than virtual memory the operating system
must be careful that it does not use the physical memory inefficiently. One way to
save physical memory is to only load virtual pages that are currently being used by
the executing program. For example, a database program may be run to query a
database. In this case not all of the database needs to be loaded into memory, just
those data records that are being examined. If the database query is a search query
then it does not make sense to load the code from the database program that deals
with adding new records. This technique of only loading virtual pages into memory
as they are accessed is known as demand paging.
• When a process attempts to access a virtual address that is not currently in memory
the processor cannot find a page table entry for the virtual page referenced.
• If the faulting virtual address is invalid this means that the process has attempted to
access a virtual address that it should not have. Maybe the application has gone
wrong in some way, for example writing to random addresses in memory. In this case
the operating system will terminate it, protecting the other processes in the system
from this rogue process.
15. • If the faulting virtual address was valid but the page that it refers to is not currently in
memory, the operating system must bring the appropriate page into memory from the
image on disk. Disk access takes a long time, relatively speaking, and so the process must
wait quite a while until the page has been fetched. If there are other processes that could run
then the operating system will select one of them to run. The fetched page is written into a
free physical page frame and an entry for the virtual page frame number is added to the
process's page table. The process is then restarted at the machine instruction where the
memory fault occurred. This time the virtual memory access is made, the processor can make
the virtual to physical address translation, and so the process continues to run.
• Linux uses demand paging to load executable images into a process's virtual memory.
Whenever a command is executed, the file containing it is opened and its contents are
mapped into the process's virtual memory. This is done by modifying the data structures
describing this process's memory map and is known as memory mapping. However, only the
first part of the image is actually brought into physical memory. The rest of the image is left
on disk. As the image executes, it generates page faults and Linux uses the process's
memory map in order to determine which parts of the image to bring into memory for
execution.
16. SWAPPING
• If a process needs to bring a virtual page into physical memory and there are no free
physical pages available, the operating system must make room for this page by
discarding another page from physical memory.
• If the page to be discarded from physical memory came from an image or data file
and has not been written to then the page does not need to be saved. Instead it can
be discarded and if the process needs that page again it can be brought back into
memory from the image or data file.
• However, if the page has been modified, the operating system must preserve the
contents of that page so that it can be accessed at a later time. This type of page is
known as a dirty page and when it is removed from memory it is saved in a special
sort of file called the swap file. Accesses to the swap file are very slow relative to the
speed of the processor and physical memory, and the operating system must juggle
the need to write pages to disk with the need to retain them in memory to be used
again.
17. • Linux uses a Least Recently Used (LRU) page aging technique to fairly
choose pages which might be removed from the system. This scheme
involves every page in the system having an age which changes as the
page is accessed. The more that a page is accessed, the younger it is; the
less that it is accessed the older and more stale it becomes. Old pages
are good candidates for swapping.
18. SHARED VIRTUAL MEMORY
• Virtual memory makes it easy for several processes to share memory.
All memory accesses are made via page tables and each process has its
own separate page table. For two processes sharing a physical page of
memory, its physical page frame number must appear in a page table
entry in both of their page tables.
• The shared physical page does not have to exist at the same place in
virtual memory for any or all of the processes sharing it.
19. PHYSICAL AND VIRTUAL ADDRESSING
MODES
• It does not make much sense for the operating system itself to run in virtual memory.
This would be a nightmare situation where the operating system must maintain page
tables for itself. Most multi-purpose processors support the notion of a physical
address mode as well as a virtual address mode. Physical addressing mode requires
no page tables and the processor does not attempt to perform any address
translations in this mode. The Linux kernel is linked to run in physical address space.
• The Alpha AXP processor does not have a special physical addressing mode. Instead,
it divides up the memory space into several areas and designates two of them as
physically mapped addresses. This kernel address space is known as KSEG address
space and it encompasses all addresses upwards from 0xfffffc0000000000. In order
to execute from code linked in KSEG (by definition, kernel code) or access data there,
the code must be executing in kernel mode. The Linux kernel on Alpha is linked to
execute from address 0xfffffc0000310000.
20. ACCESS CONTROL
• The page table entries also contain access control information. As the
processor is already using the page table entry to map a process's virtual
address to a physical one, it can easily use the access control information to
check that the process is not accessing memory in a way that it should not.
• There are many reasons why you would want to restrict access to areas of
memory. Some memory, such as that containing executable code, is
naturally read-only; the operating system should not allow a
process to write data over its executable code. By contrast, pages containing
data can be written to, but attempts to execute that memory as instructions
should fail. Most processors have at least two modes of execution: kernel
and user. You would not want kernel code to be executable by a user, or kernel data
structures to be accessible except when the processor is running in kernel
mode.
22. The access control information is held in the PTE and is processor specific; the PTE for the Alpha AXP is shown here as an example.
The bit fields have the following meanings:
V
Valid, if set this PTE is valid,
FOE
"Fault on Execute". Whenever an attempt to execute instructions in this page occurs, the processor
reports a page fault and passes control to the operating system,
FOW
"Fault on Write", as above but page fault on an attempt to write to this page,
FOR
"Fault on Read", as above but page fault on an attempt to read from this page,
ASM
Address Space Match. This is used when the operating system wishes to clear only some of the entries
from the Translation Buffer,
KRE
Code running in kernel mode can read this page,
23. URE
Code running in user mode can read this page,
GH
Granularity hint used when mapping an entire block with a single Translation Buffer entry
rather than many,
KWE
Code running in kernel mode can write to this page,
UWE
Code running in user mode can write to this page,
page frame number
For PTEs with the V bit set, this field contains the physical Page Frame Number (page
frame number) for this PTE. For invalid PTEs, if this field is not zero, it contains
information about where the page is in the swap file.
The following two bits are defined and used by Linux:
_PAGE_DIRTY
if set, the page needs to be written out to the swap file,
_PAGE_ACCESSED
Used by Linux to mark a page as having been accessed.