This document discusses virtual memory and how it works. Virtual memory allows multiple processes to each have their own virtual address space even when there is not enough physical memory to give every process a full address space. It does this by dividing physical memory into blocks that are allocated to different processes, and a protection scheme restricts each process to accessing only the blocks belonging to it. Virtual memory also reduces the time needed to start a program, since not all code and data must be loaded into physical memory first. Memory is mapped: virtual addresses are translated into physical addresses that access main memory.
2. INTRODUCTION
• Each process needs its own
address space.
• It would be too expensive to allocate memory for a
full address space to every process.
3. • Virtual memory:
– Divides physical memory into blocks
– and allocates them to different processes.
• Protection scheme:
– Restricts each process to the blocks belonging to
that process.
• Virtual memory reduces the time to start a
program:
– Not all code and data need to be in physical
memory.
4. • If a program became too large for physical memory:
– It was the programmer's job to make it fit (e.g. with overlays).
– Virtual memory was invented to relieve the
programmer of this burden.
• Virtual memory also supports a relocation
mechanism:
– The same program can run at any location in
physical memory.
6. • Page and segment correspond to a cache block.
• Page fault and address fault correspond to a cache miss.
• Memory mapping (address translation):
– The processor produces a virtual address.
– The virtual address is translated to the physical
address that accesses main memory.
7. Differences between cache and VM
• Replacement on a cache miss is controlled by
hardware;
VM replacement is controlled by the OS.
• The size of the processor address determines the
size of VM;
cache size is independent of the processor
address.
8. • VM systems can be categorized into two classes:
– Paged: fixed-size blocks (pages).
– Segmented: variable-size blocks (segments).
10. PAGED ADDRESSING
• A single fixed-size address,
• divided into a page number and
• an offset within the page.
SEGMENTED ADDRESSING
• A variable size is required:
• one word for the segment number,
• one word for the offset within the segment,
• two words in total.
12. • Because of the replacement problem, few
computers use pure segmentation.
• Hybrid approach:
– Called paged segments.
– A segment is an integral number of pages.
– Segment memory need not be contiguous, and
– a full segment need not be in memory.
13. PAGE TABLE
• Paging and segmentation rely on a data
structure,
• indexed by the page or segment number.
• The data structure contains the physical address of
the block.
• Segmentation:
– The offset is added to the segment's physical address
to obtain the final physical address.
14. • Paging:
– The offset is simply concatenated to the physical page
address.
15. • The page table is indexed by the virtual page number.
• Size of the table = number of pages in the virtual address space.
16. REPLACEMENT ON VIRTUAL MEMORY MISS
• An LRU scheme is used for replacement.
• Processors provide a use bit or reference bit,
• which is set whenever the page is accessed.
17. Techniques for fast address translation
• Paging implies two memory accesses:
– one memory access to obtain the physical address,
– one access to get the data.
• Address translations can be kept in a separate
cache, so most references skip the extra page-table access.
• This special address-translation cache is called the
Translation Lookaside Buffer (TLB) or
Translation Buffer (TB).
18. TLB
• A TLB entry is like a cache entry.
• The tag holds a portion of the virtual address.
• The data portion holds:
– the physical page frame number,
– a protection field,
– a valid bit,
– use and dirty bits.
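The entry layout above might be sketched as a C bit-field struct; the field widths here are illustrative, not taken from any particular processor:

```c
#include <stdint.h>

/* Sketch of one TLB entry, per the fields listed above.
 * Field widths are assumptions for illustration only. */
struct tlb_entry {
    uint64_t tag   : 28;  /* portion of the virtual page number */
    uint64_t pfn   : 24;  /* physical page frame number */
    uint64_t prot  : 2;   /* protection field (e.g. read/write) */
    uint64_t valid : 1;   /* entry holds a usable translation */
    uint64_t use   : 1;   /* set on reference, for replacement */
    uint64_t dirty : 1;   /* page has been written */
};
```

With these widths the whole entry packs into a single 64-bit word, which mirrors how real TLBs keep each entry small enough to compare in parallel.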
20. • Steps 1 and 2:
– Translation begins by sending the virtual address to all
tags.
– A tag must be marked valid to allow a match.
• Step 3:
– The matching tag sends the corresponding physical
address through a 40:1 multiplexer.
• Step 4:
– The page offset is combined with the physical page frame
to form the full physical address.
21. SELECTING A PAGE SIZE
• LARGE PAGE SIZE:
1) The size of the page table is inversely proportional to
the page size, so memory can be saved by making the
pages bigger.
2) A larger page size can allow larger caches with fast
cache hit times.
3) Transferring larger pages to or from secondary storage,
possibly over a network, is more efficient than
transferring smaller pages.
4) The number of TLB entries is restricted, so a larger
page size means that more memory can be mapped
efficiently, thereby reducing the number of TLB misses.
22. • SMALL PAGE SIZE:
1) Conserving storage:
• A small page size results in less wasted
storage,
• i.e. it reduces internal fragmentation.
2) Many processes are small, so a large page size would
increase the time to invoke a process.