The document discusses different memory management strategies:
- Swapping allows processes to be swapped temporarily out of memory to disk, then back into memory for continued execution. This improves memory utilization but incurs long swap times.
- Contiguous memory allocation allocates processes into contiguous regions of physical memory using techniques like memory mapping and dynamic storage allocation with first-fit or best-fit. This can cause external and internal fragmentation over time.
- Paging permits the physical memory used by a process to be noncontiguous by dividing memory into pages and mapping virtual addresses to physical frames, allowing more efficient use of memory but requiring page tables for translation.
This presentation covers memory management in operating systems (OS): the basic need for memory management and its main techniques, including swapping, fragmentation, paging, and segmentation.
The Objectives of these slides are:
- To provide a detailed description of various ways of organizing memory hardware
- To discuss various memory-management techniques, including paging and segmentation
- To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging
1. Memory Management Strategies
Background
Swapping
Contiguous Memory Allocation
Paging
Structure of the Page Table
Segmentation
2. 1. Background
1.1 Basic Hardware
• Main memory and registers are the only storage the CPU can access directly
• Register access takes one CPU clock (or less), but main memory can take many cycles
• Cache sits between main memory and CPU registers
• Protection of memory is required to ensure correct operation
• A pair of base and limit registers define the logical address space
Loganathan R, CSE, HKBKCE 2
3. 1. Background Contd…
• CPU hardware compares every address generated in user mode with the base and limit registers
• An attempt by a program executing in user mode to access OS memory or other users' memory results in a trap to the OS, which treats the attempt as a fatal error
• This prevents a user program from accidentally or deliberately modifying the code or data structures of either the OS or other users
Hardware address protection with base and limit registers: an address is legal only if base ≤ address < base + limit; otherwise the hardware traps to the OS monitor with an addressing error
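The base–limit check described above can be sketched in a few lines of Python. The register values here are purely illustrative, and the exception stands in for the hardware trap to the OS:

```python
# Sketch of the hardware address check: every user-mode address must
# satisfy base <= address < base + limit, otherwise the hardware traps
# to the OS. The register values below are illustrative only.

BASE = 300040    # example base register value
LIMIT = 120900   # example limit register value

def check_address(address, base=BASE, limit=LIMIT):
    """Return the address if legal; otherwise raise (the 'trap to OS')."""
    if base <= address < base + limit:
        return address
    raise MemoryError("trap to OS monitor - addressing error")

print(check_address(300040))   # lowest legal address
print(check_address(420939))   # highest legal address (base + limit - 1)
```

In real hardware both comparisons happen in parallel on every memory reference; the point of the sketch is only the legality condition itself.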
4. 1. Background Contd…
1.2 Address Binding
• A user program goes through several steps before being executed
• Address binding of instructions and data to memory addresses can happen at three different stages:
– Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes (e.g. DOS .com programs)
– Load time: relocatable code must be generated if the memory location is not known at compile time
– Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; needs hardware support for address maps (e.g. base and limit registers)
5. 1. Background Contd…
1.3 Logical vs. Physical Address Space
• Logical address – generated by the CPU; also referred to as a virtual address
• Physical address – the address seen by the memory unit, i.e. the one loaded into the memory-address register
• Logical and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme
• The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU)
• In the MMU, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory (dynamic relocation using a relocation register)
• The user program deals with logical addresses; it never sees the real physical addresses
6. 1. Background Contd…
1.4 Dynamic Loading
• All routines are kept on disk in a relocatable load format; the main program is loaded into memory and executed
• A routine is not loaded until it is called
• The relocatable linking loader is called to load the desired routine
• Better memory-space utilization, since an unused routine is never loaded
• Useful when large amounts of code are needed to handle infrequently occurring cases
• No special support from the operating system is required; dynamic loading is implemented through program design
1.5 Dynamic Linking and Shared Libraries
• Linking is postponed until execution time
• A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
• The stub replaces itself with the address of the routine and executes the routine
• The operating system is needed to check whether the routine is in the process's memory address space
• Dynamic linking is particularly useful for libraries
• Such a system is also known as shared libraries
7. 2. Swapping
• A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
• Similar to the round-robin CPU-scheduling algorithm: when a quantum expires, the memory manager swaps out that process so that another process can be swapped into the memory space that has been freed
Swapping of two processes using a disk as a backing store
8. 2. Swapping Contd…
• Backing store – a fast disk large enough to accommodate copies of all memory images for all users, providing direct access to these memory images
• Roll out, roll in – a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
• A swapped-out process is swapped back into the same memory space it occupied previously when address binding is done at assembly or load time, because of the restriction imposed by that binding method
• A process can be swapped into a different memory space if execution-time binding is used, since physical addresses are computed during execution
• The system maintains a ready queue of ready-to-run processes whose memory images are on disk
• The dispatcher swaps out a process in memory if there is no free memory region, and swaps in the desired process from the ready queue
• The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
• Example: a user process is 10 MB; the backing store is a hard disk with a transfer rate of 40 MB per second
Transfer time = 10 MB ÷ 40 MB per sec = 250 milliseconds
Swap time = transfer time + seek time (latency, 8 ms) = 258 milliseconds
Total swap time = swap out + swap in = 516 milliseconds
• Modified versions of swapping are found on many systems (e.g. UNIX, Linux, and Windows)
Loganathan R, CSE, HKBKCE 8
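The swap-time arithmetic in the example above can be written as a tiny calculation (the 8 ms latency is the example's seek/latency figure, not a general constant):

```python
# Swap-time calculation from the example: a 10 MB process, a 40 MB/s
# backing store, and 8 ms of seek latency per one-way transfer.

def swap_time_ms(process_mb, transfer_mb_per_s, latency_ms=8):
    transfer_ms = process_mb / transfer_mb_per_s * 1000  # pure transfer time
    return transfer_ms + latency_ms                      # one-way swap time

one_way = swap_time_ms(10, 40)   # 250 ms transfer + 8 ms latency = 258 ms
total = 2 * one_way              # swap out + swap in = 516 ms
print(one_way, total)
```

The doubling at the end reflects the slide's point that a full swap is one swap-out plus one swap-in.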
9. 3. Contiguous Memory Allocation
• Main memory is usually divided into two partitions:
– The resident operating system, usually held in low memory with the interrupt vector
– User processes, held in high memory
3.1 Memory Mapping and Protection
• Relocation registers are used to protect user processes from each other, and from changing operating-system code and data
– The base register contains the value of the smallest physical address
– The limit register contains the range of logical addresses – each logical address must be less than the limit register
• The MMU maps each logical address dynamically by adding the value in the relocation register
Hardware support for relocation and limit registers
10. 3. Contiguous Memory Allocation Contd…
3.2 Memory Allocation
• The simplest method is to divide memory into several fixed-sized partitions: when a partition is free, a process is loaded into it, and when the process terminates, the partition becomes available for another process
• Multiple-partition allocation (a generalization of the fixed-partition scheme):
– Hole – a block of available memory; holes of various sizes are scattered throughout memory
– The OS maintains information about the allocated partitions and the free partitions (holes)
– When a process arrives, it is allocated memory from a hole large enough to accommodate it
– If the hole is too large, it is split into two parts: one part is allocated to the arriving process, the other is returned to the set of holes
– If the new hole is adjacent to other holes, the adjacent holes are merged to form one larger hole
• The dynamic storage-allocation problem concerns how to satisfy a request of size n from a list of free holes; common solutions are:
• First fit: allocate the first hole that is big enough; search from the beginning, or from where the previous first-fit search ended
• Best fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size; produces the smallest leftover hole
• Worst fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole
First fit and best fit are better than worst fit in terms of speed and storage utilization
Loganathan R, CSE, HKBKCE 10
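The three hole-selection policies above can be sketched as follows. The free list is a list of (start, size) pairs, and each function returns the index of the chosen hole; the hole sizes in the demo are made up:

```python
# Minimal sketch of first fit, best fit, and worst fit over a free-hole
# list of (start, size) pairs. Returns the chosen hole's index, or None.

def first_fit(holes, n):
    for i, (_, size) in enumerate(holes):
        if size >= n:
            return i          # first hole big enough
    return None

def best_fit(holes, n):
    fits = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, n):
    fits = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None   # largest hole

holes = [(0, 100), (200, 500), (900, 200), (1500, 300)]  # illustrative
print(first_fit(holes, 150))  # index 1: first hole with size >= 150
print(best_fit(holes, 150))   # index 2: size 200 is the smallest that fits
print(worst_fit(holes, 150))  # index 1: size 500 is the largest
```

A real allocator would also split the chosen hole and merge adjacent free holes, as the bullets above describe; the sketch covers only the selection step.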
11. 3. Contiguous Memory Allocation Contd…
3.3 Fragmentation
• External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes
• Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used
• External fragmentation can be reduced by:
1 Compaction
– Shuffle memory contents to place all free memory together in one large block
– Compaction is possible only if relocation is dynamic and is done at execution time
2 Permitting the logical address space of processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever it is available
12. 4. Paging
• Permits the physical address space of a process to be noncontiguous
4.1 Basic Method
• Divide physical memory into fixed-sized blocks called frames (the size is a power of 2)
• Divide logical memory into blocks of the same size called pages
• The backing store is divided into fixed-sized blocks of the same size as the frames
• Hardware support for paging: a page table translates logical addresses to physical addresses
13. 4. Paging Contd…
• An address generated by the CPU is divided into:
– Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
– Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit
• For a logical address space of size 2^m and page size 2^n, the m − n high-order bits of a logical address designate the page number and the n low-order bits designate the page offset:
| page number: p (m − n bits) | page offset: d (n bits) |
where p is an index into the page table and d is the displacement within the page
Paging model of logical and physical memory
14. 4. Paging Contd…
• Paging example for a 32-byte (2^5) memory with a 16-byte (2^4) logical address space and 4-byte (2^2) pages, i.e. m = 4 and n = 2
• Logical address 0 is page 0, offset 0; indexing into the page table, page 0 is in frame 5, so logical address 0 maps to physical address 5 (frame number) × 4 (page size) + 0 (offset) = 20
• Logical address 3 (page 0, offset 3) maps to physical address 5 × 4 + 3 = 23
• Logical address 13 (page 3, offset 1): indexing to page 3 finds frame number 2, which maps to physical address 2 × 4 + 1 = 9
32-byte memory and 4-byte pages
Loganathan R, CSE, HKBKCE 14
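The translation worked through in this example can be replayed in code. Frames 5 (for page 0) and 2 (for page 3) come from the slide; the entries for pages 1 and 2 are assumed for the demo:

```python
# Paging translation for the example above: m = 4, n = 2, so the top
# m - n = 2 bits of a logical address are the page number and the low
# 2 bits are the offset. Page-table entries for pages 1 and 2 are assumed.

PAGE_SIZE = 4                            # 2^n bytes, n = 2
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page -> frame

def translate(logical):
    page = logical // PAGE_SIZE          # high-order m - n bits
    offset = logical % PAGE_SIZE         # low-order n bits
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))    # page 0 -> frame 5: 5*4 + 0 = 20
print(translate(3))    # page 0, offset 3: 5*4 + 3 = 23
print(translate(13))   # page 3 -> frame 2: 2*4 + 1 = 9
```

Because the page size is a power of two, the divide and modulo are just a shift and a mask in hardware.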
15. 4. Paging Contd…
• Paging has no external fragmentation: any free frame can be allocated to a process that needs it; however, there may be some internal fragmentation, since the last frame allocated may not be completely full
• If a process requires n pages and n frames are available, they are allocated
• The first page of the process is loaded into one of the allocated frames, and the frame number is put in the page table for this process; the next page is loaded into another frame, its frame number is put into the page table, and so on
• The OS keeps track of which frames are allocated, which frames are available, how many total frames there are, and so on, in a frame table
Free frames before and after allocation
16. 4. Paging Contd…
4.2 Hardware Support
• In the simplest case, the page table is implemented as a set of high-speed dedicated registers
• Alternatively, the page table is kept in main memory:
– A page-table base register (PTBR) points to the page table
– A page-table length register (PTLR) indicates the size of the page table
• The CPU dispatcher reloads these registers; instructions to load or modify the page-table registers are privileged
• In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
• The two-memory-access problem can be solved by a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)
• A TLB entry consists of a key (or tag, the page number) and a value (the frame number); when the TLB is presented with an item, the item is compared with all keys simultaneously
17. 4. Paging Contd…
• When the page number from the CPU address is presented to the TLB, if the page number is found, its frame number is immediately available and is used to access memory
• If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made, and the page number and frame number are added to the TLB
• If the TLB is full, the OS selects one entry for replacement; replacement policies range from LRU to random
• TLBs allow entries (for kernel code) to be wired down, so that they cannot be removed from the TLB
Paging hardware with TLB
Loganathan R, CSE, HKBKCE 17
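The hit/miss/replace path just described can be sketched in software. This is only a model of the behaviour (a real TLB compares all keys in parallel in hardware); the TLB size, page table, and random replacement policy here are illustrative:

```python
# Sketch of the TLB lookup path: hit -> frame immediately; miss -> extra
# memory reference to the page table, then install the entry, evicting a
# random victim when the TLB is full. All sizes/tables are illustrative.

import random

TLB_SIZE = 2
tlb = {}                                  # page -> frame (associative memory)
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # full page table in main memory

def lookup(page):
    if page in tlb:                        # TLB hit
        return tlb[page], "hit"
    frame = page_table[page]               # TLB miss: walk the page table
    if len(tlb) >= TLB_SIZE:
        del tlb[random.choice(list(tlb))]  # random replacement policy
    tlb[page] = frame                      # install for future references
    return frame, "miss"

print(lookup(0))   # (5, 'miss') on a cold TLB
print(lookup(0))   # (5, 'hit') the second time
```

The hit ratio mentioned on the next slide is simply the fraction of lookups that take the first branch.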
18. 4. Paging Contd…
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID identifies each process and is used to provide address-space protection for that process
• Without ASIDs, the TLB must be flushed (or erased) on a context switch to ensure that the next executing process does not use the wrong translation information
• The percentage of times that a particular page number is found in the TLB is called the hit ratio
4.3 Protection
• Memory protection is implemented by associating protection bits with each frame; a valid–invalid bit is attached to each entry in the page table:
– "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page
– "invalid" indicates that the page is not in the process's logical address space
19. 4. Paging Contd…
Valid (v) or Invalid (i) Bit In A Page Table
20. 4. Paging Contd…
4.4 Shared Pages
• An advantage of paging is the possibility of sharing common code
• If the code is reentrant code (or pure code), it can be shared
• One copy of read-only (reentrant) code can be shared among processes (e.g. text editors, compilers, window systems)
• Shared code must appear in the same location in the logical address space of all processes
• Each process keeps a separate copy of its private code and data
• The pages for the private code and data can appear anywhere in the logical address space
Sharing of code in a paging environment
21. 5. Structure of the Page Table
Techniques for structuring the page table: 1. Hierarchical paging 2. Hashed page tables 3. Inverted page tables
5.1 Hierarchical Paging
• Break up the logical address space into multiple page tables
• A simple technique is a two-level page table
• A logical address (on a 32-bit machine with a 1K page size) is divided into a page number of 22 bits and a page offset of 10 bits
• Since the page table itself is paged, the page number is further divided into a 12-bit outer page number and a 10-bit inner page number:
| p1 (12 bits) | p2 (10 bits) | d (10 bits) |
where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
Two-level page-table scheme
Loganathan R, CSE, HKBKCE 21
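The 32-bit, 1K-page address split above (12-bit p1, 10-bit p2, 10-bit d) amounts to two shifts and two masks; the sample address is arbitrary:

```python
# Splitting a 32-bit logical address for the two-level scheme above:
# top 12 bits -> outer index p1, next 10 bits -> inner index p2,
# low 10 bits -> page offset d (1 KB pages).

def split(addr32):
    d = addr32 & 0x3FF            # low 10 bits: page offset
    p2 = (addr32 >> 10) & 0x3FF   # next 10 bits: inner page-table index
    p1 = addr32 >> 20             # top 12 bits: outer page-table index
    return p1, p2, d

# 0x00401003 -> p1 = 4, p2 = 4, d = 3
print(split(0x00401003))
```

Translation then uses p1 to find the inner page table, p2 to find the frame, and d as the offset within the frame, exactly as the two-level scheme describes.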
22. 5. Structure of the Page Table Contd…
• Address-translation scheme for a two-level 32-bit paging architecture
• Three-level paging scheme
• Example: a 64-bit logical address, using a two-level paging scheme and using a three-level paging scheme
23. 5. Structure of the Page Table Contd…
5.2 Hashed Page Tables
• Common for address spaces larger than 32 bits
• The virtual page number is hashed into a page table; each entry in this table contains a chain of elements that hash to the same location
• Virtual page numbers in the chain are compared, searching for a match; if a match is found, the corresponding physical frame is extracted
Hashed page table (p and q are page numbers; s and r are frame numbers)
Loganathan R, CSE, HKBKCE 23
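A hashed page table of the kind described above can be sketched with a bucket array of chains. The bucket count, hash function (page mod buckets), and table contents are all illustrative:

```python
# Sketch of a hashed page table: each bucket holds a chain of
# (virtual_page, frame) pairs; lookup hashes the page number and walks
# the chain for a match. Sizes and contents are illustrative.

NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(page, frame):
    buckets[page % NUM_BUCKETS].append((page, frame))  # simple hash

def lookup(page):
    for p, frame in buckets[page % NUM_BUCKETS]:       # walk the chain
        if p == page:
            return frame
    return None                                        # page not mapped

insert(3, 7)       # like q -> s in the figure
insert(11, 2)      # 11 mod 8 == 3: collides into the same chain
print(lookup(11))  # found after one chain step: 2
print(lookup(3))   # 7
```

The collision between pages 3 and 11 shows why each entry is a chain rather than a single slot.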
24. 5. Structure of the Page Table Contd…
5.3 Inverted Page Table
• One entry for each real page (frame) of memory
• Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page
• Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs
• A hash table can be used to limit the search to one – or at most a few – page-table entries
Inverted page table architecture
25. 6. Segmentation
• A memory-management scheme that supports the user view of memory, i.e. a collection of variable-sized segments, with no necessary ordering among segments
User's view of a program
6.1 Basic Method
• Segments are numbered and are referred to by a segment number, so a logical address consists of a two-tuple:
< segment-number, offset >
• A program is a collection of segments; the compiler constructs separate segments for:
• The code
• Global variables
• The heap, from which memory is allocated
• The stacks used by each thread
• The standard C library
26. 6. Segmentation Contd…
6.2 Hardware
• Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
– base – contains the starting physical address where the segment resides in memory
– limit – specifies the length of the segment
• The segment-table base register (STBR) points to the segment table's location in memory
• The segment-table length register (STLR) indicates the number of segments used by a program; a segment number s is legal if s < STLR
Segmentation hardware
27. 6. Segmentation Contd…
• Segment 2 is 400 bytes long and begins at location 4300; thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353
• Segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 = 4052
• A reference to byte 1222 of segment 0 would result in a trap to the operating system, as this segment is only 1,000 bytes long
Example of segmentation
Loganathan R, CSE, HKBKCE 27
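The example above can be replayed as a small sketch. The bases and limits for segments 2 and 0 follow the slide (segment 2: base 4300, limit 400; segment 0: limit 1000); segment 0's base and segment 3's limit are assumed for the demo:

```python
# Segmentation translation sketch: segment table of (base, limit) pairs,
# with the legality check offset < limit before adding the base.
# Segment 0's base (1400) and segment 3's limit (1100) are assumed.

segment_table = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap to OS - offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(3, 852))   # 3200 + 852 = 4052
# translate(0, 1222) would trap: segment 0 is only 1,000 bytes long
```

Note the contrast with paging: the offset here is checked against a per-segment limit, whereas in paging every offset within a page is automatically legal.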