1
Memory Management
Learning Outcomes
• Appreciate the need for memory management in
operating systems, understand the limits of fixed
memory allocation schemes.
• Understand fragmentation in dynamic memory
allocation, and understand dynamic allocation
approaches.
• Understand how program memory addresses relate to
physical memory addresses, memory management in
base-limit machines, and swapping
• An overview of virtual memory management, including
paging and segmentation.
2
3
Process
• One or more threads of execution
• Resources required for execution
– Memory (RAM)
• Program code (“text”)
• Data (initialised, uninitialised, stack)
• Buffers held in the kernel on behalf of the process
– Others
• CPU time
• Files, disk space, printers, etc.
4
OS Memory Management
• Keeps track of what memory is in use and
what memory is free
• Allocates free memory to processes when
needed
– And deallocates it when no longer needed
• Manages the transfer of memory between
RAM and disk.
5
Memory Hierarchy
• Ideally, programmers
want memory that is
– Fast
– Large
– Nonvolatile
• Not possible
• Memory
management
coordinates how
memory hierarchy is
used.
– Focus usually on
RAM ⇔ Disk
6
OS Memory Management
• Two broad classes of memory
management systems
– Those that transfer processes to and from
external storage during execution.
• Called swapping or paging
– Those that don’t
• Simple
• Might find this scheme in an embedded device,
dumb phone, or smartcard.
7
Basic Memory Management
Monoprogramming without Swapping or Paging
Three simple ways of organizing memory
- an operating system with one user process
8
Monoprogramming
• Okay if
– Only have one thing to do
– Memory available approximately equates to
memory required
• Otherwise,
– Poor CPU utilisation in the presence of I/O
waiting
– Poor memory utilisation with a varied job mix
9
Idea
• Recall, an OS aims to
– Maximise memory utilisation
– Maximise CPU utilisation
• (ignore battery/power-management issues)
• Subdivide memory and run more than one
process at once!!!!
– Multiprogramming, Multitasking
10
Modeling Multiprogramming
[Figure: CPU utilisation as a function of the degree of multiprogramming (number of processes in memory)]
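The curve on this slide is usually drawn from a simple probabilistic model (an assumption here, since the slide only shows the graph): if each process spends a fraction p of its time waiting for I/O, and the n processes in memory wait independently, the CPU is idle only when all n are waiting at once, so

  CPU utilisation ≈ 1 - p^n

For example, with p = 0.8 (80% I/O wait), one process gives about 20% utilisation, while five processes give roughly 1 - 0.8^5 ≈ 67%.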
11
General problem: How to divide
memory between processes?
• Given a workload, how do we
– Keep track of free memory?
– Locate free memory for a new process?
• Overview of evolution of simple memory
management
– Static (fixed partitioning) approaches
• Simple, predictable workloads of early
computing
– Dynamic (partitioning) approaches
• More flexible computing as compute power and
complexity increased.
• Introduce virtual memory
– Segmentation and paging
12
Problem: How to divide memory
• One approach
– divide memory into fixed
equal-sized partitions
– Any process <= partition size
can be loaded into any
partition
– Partitions are free or busy
[Figure: memory divided into fixed, equal-sized partitions holding processes A–D]
13
Simple MM: Fixed, equal-sized
partitions
• Any unused space in the
partition is wasted
– Called internal fragmentation
• Processes smaller than main
memory, but larger than a
partition cannot run.
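To make the scheme concrete, here is a minimal C sketch of fixed, equal-sized partition allocation (the partition count and size are assumptions for illustration, not values from the slides):

#include <stdbool.h>
#include <stddef.h>

#define NUM_PARTITIONS 4
#define PARTITION_SIZE (64 * 1024)   /* assumed 64 KB partitions */

static bool partition_busy[NUM_PARTITIONS];

/* Returns a partition index, or -1 if the process cannot be placed.
 * Any free partition will do, because all partitions are the same size;
 * the unused tail of the chosen partition is internal fragmentation. */
int allocate_partition(size_t process_size)
{
    if (process_size > PARTITION_SIZE)
        return -1;                   /* larger than a partition: cannot run */
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (!partition_busy[i]) {
            partition_busy[i] = true;
            return i;
        }
    }
    return -1;                       /* all partitions busy */
}

void free_partition(int i)
{
    partition_busy[i] = false;
}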
14
Simple MM: Fixed, variable-sized
partitions
• Divide memory at boot time into a
selection of different sized
partitions
– Can base sizes on expected
workload
• Each partition has queue:
– Place process in queue for smallest
partition that it fits in.
– Processes wait until their assigned partition
is free before starting
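A sketch of the "smallest partition that fits" queue-selection rule in C; the partition sizes are invented for illustration:

#include <stddef.h>

/* Partition sizes are assumed for illustration and sorted ascending. */
static const size_t partition_size[] = { 16*1024, 64*1024, 256*1024, 1024*1024 };
#define NUM_PARTITIONS (sizeof partition_size / sizeof partition_size[0])

/* Pick the queue of the smallest partition the process fits in,
 * or -1 if it is larger than every partition. */
int choose_queue(size_t process_size)
{
    for (size_t i = 0; i < NUM_PARTITIONS; i++)
        if (process_size <= partition_size[i])
            return (int)i;
    return -1;
}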
15
• Issue
– Some partitions may
be idle
• Small jobs available,
but only large partition
free
• Workload could be
unpredictable
16
Alternative queue strategy
• Single queue, search
for any jobs that fit
• Small jobs in large
partition if necessary
– Increases internal
memory fragmentation
17
Fixed Partition Summary
• Simple
• Easy to implement
• Can result in poor memory utilisation
– Due to internal fragmentation
• Used on IBM System 360 operating system
(OS/MFT)
– Announced 6 April, 1964
• Still applicable for simple embedded systems
– Static workload known in advance
18
Dynamic Partitioning
• Partitions are of variable length
– Allocated on-demand from ranges of free
memory
• Process is allocated exactly what it needs
– Assumes a process knows what it needs
21
Dynamic Partitioning
• In previous diagram
– We have 16 MB free in total, but it cannot be used to
run any more processes requiring more than 6 MB, as the
free memory is fragmented
– Called external fragmentation
• We end up with unusable holes
22
Recap: Fragmentation
• External Fragmentation:
– The space wasted external to the allocated memory
regions.
– Memory space exists to satisfy a request, but it is
unusable as it is not contiguous.
• Internal Fragmentation:
– The space wasted internal to the allocated memory
regions.
– allocated memory may be slightly larger than
requested memory; this size difference is wasted
memory internal to a partition.
23
Dynamic Partition Allocation
Algorithms
• Also applicable to malloc()-like in-application
allocators
• Given a region of memory, basic requirements
are:
– Quickly locate a free partition satisfying the request
• Minimise CPU time spent searching
– Minimise external fragmentation
– Minimise memory overhead of bookkeeping
– Efficiently support merging two adjacent free
partitions into a larger partition
24
Classic Approach
• Represent available memory as a linked list of
available “holes”.
– Base, size
– Kept in order of increasing address
• Simplifies merging of adjacent holes into larger holes.
– Can be stored in the “holes” themselves
[Figure: free list of four holes, each node recording the hole's address, its size, and a link to the next hole]
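One plausible C representation of this hole list (the field names are assumptions, not taken from any particular kernel):

#include <stddef.h>

/* A free hole. Nodes are kept sorted by increasing start address,
 * which is what makes merging neighbouring holes cheap (next slide). */
struct hole {
    size_t       start;   /* base address of the free region */
    size_t       size;    /* length of the free region in bytes */
    struct hole *next;    /* next hole, at a higher address */
};

struct hole *free_list;   /* head of the address-ordered hole list */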
25
Coalescing Free Partitions with Linked
Lists
Four neighbor combinations for the terminating
process X
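A sketch of that coalescing step in C, reusing struct hole and free_list from the sketch above; it links the freed region into the address-ordered list and merges it with a free neighbour on either side:

#include <stdlib.h>

void release_region(size_t start, size_t size)
{
    struct hole *prev = NULL, *cur = free_list;

    /* 1. Walk to the insertion point that keeps the list address-ordered. */
    while (cur && cur->start < start) {
        prev = cur;
        cur  = cur->next;
    }

    /* 2. Insert the freed region as a new hole between prev and cur. */
    struct hole *h = malloc(sizeof *h);
    h->start = start;
    h->size  = size;
    h->next  = cur;
    if (prev) prev->next = h; else free_list = h;

    /* 3. Merge with the following hole if the two are adjacent. */
    if (cur && h->start + h->size == cur->start) {
        h->size += cur->size;
        h->next  = cur->next;
        free(cur);
    }

    /* 4. Merge with the preceding hole if the two are adjacent. */
    if (prev && prev->start + prev->size == h->start) {
        prev->size += h->size;
        prev->next  = h->next;
        free(h);
    }
}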
26
Dynamic Partitioning Placement
Algorithm
• First-fit algorithm
– Scan the list for the first entry that fits
• If greater in size, break it into an allocated and free part
• Intent: Minimise amount of searching performed
– Aims to find a match quickly
– Generally can result in smaller holes at the front end
of memory that must be searched over when trying to
find a free block.
– May have lots of unusable holes at the beginning.
• External fragmentation
– Tends to preserve larger blocks at the end of memory
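A first-fit sketch in C over the same address-ordered hole list (again reusing struct hole and free_list from the earlier sketch; carving the request off the front of the hole is one reasonable splitting policy, not the only one):

#include <stdlib.h>

/* First fit: take the first hole large enough, carving the request
 * off its front and leaving the remainder (if any) as a smaller hole. */
size_t alloc_first_fit(size_t request)
{
    struct hole *prev = NULL;
    for (struct hole *h = free_list; h; prev = h, h = h->next) {
        if (h->size < request)
            continue;                    /* too small, keep scanning */
        size_t addr = h->start;
        if (h->size > request) {         /* split: shrink the hole */
            h->start += request;
            h->size  -= request;
        } else {                         /* exact fit: unlink the hole */
            if (prev) prev->next = h->next; else free_list = h->next;
            free(h);
        }
        return addr;
    }
    return (size_t)-1;                   /* no hole big enough */
}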
27
Dynamic Partitioning Placement
Algorithm
• Next-fit
– Like first-fit, except it begins its search from the point
in list where the last request succeeded instead of at
the beginning.
• Spread allocation more uniformly over entire memory
– More often allocates a block of memory at the end of memory
where the largest block is found
• The largest block of memory is broken up into smaller blocks
– May not be able to service larger requests as well as first-fit.
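Next-fit is the same scan with a roving starting point; a simplified sketch of just that difference, building on the structures above:

/* Next fit: remember where the last search ended and resume there,
 * wrapping around to the head of the list once. */
static struct hole *rover;               /* where the previous search stopped */

size_t alloc_next_fit(size_t request)
{
    if (!rover)
        rover = free_list;
    struct hole *start = rover;
    do {
        if (rover && rover->size >= request) {
            size_t addr = rover->start;
            rover->start += request;     /* simplified: never unlinks exact fits */
            rover->size  -= request;
            return addr;
        }
        rover = (rover && rover->next) ? rover->next : free_list;
    } while (rover != start);
    return (size_t)-1;
}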
28
Dynamic Partitioning Placement
Algorithm
• Best-fit algorithm
– Chooses the block that is closest in size to the
request
– Poor performer
• Has to search complete list
– does more work than first- or next-fit
• Since smallest block is chosen for a process, the smallest
amount of external fragmentation is left
– Creates lots of unusable holes
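Best-fit as a C sketch: a full scan of the list that remembers the tightest adequate hole (same assumed free-list structures as above; note the full scan, which is the extra work referred to here):

#include <stdlib.h>

size_t alloc_best_fit(size_t request)
{
    struct hole *best = NULL, *best_prev = NULL, *prev = NULL;
    for (struct hole *h = free_list; h; prev = h, h = h->next) {
        if (h->size >= request && (!best || h->size < best->size)) {
            best = h;                    /* tightest hole seen so far */
            best_prev = prev;
        }
    }
    if (!best)
        return (size_t)-1;               /* no hole big enough */
    size_t addr = best->start;
    if (best->size > request) {          /* leave the (tiny) remainder as a hole */
        best->start += request;
        best->size  -= request;
    } else {                             /* exact fit: unlink the hole */
        if (best_prev) best_prev->next = best->next; else free_list = best->next;
        free(best);
    }
    return addr;
}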
29
Dynamic Partitioning Placement
Algorithm
• Worst-fit algorithm
– Chooses the block that is largest in size (worst-fit)
• The (whimsical) idea is to leave a usable fragment behind
– Poor performer
• Has to do more work (like best fit) to search complete list
• Does not result in significantly less fragmentation
31
Dynamic Partition Allocation
Algorithm
• Summary
– First-fit and next-fit are generally better than the others and
easiest to implement
• You should be aware of them
– They are simple solutions to a function that OSes and
applications still need – memory allocation.
• Note: Largely have been superseded by more complex
and specific allocation strategies
– Typical in-kernel allocators are the lazy buddy and slab
allocators
32
Compaction
• We can reduce
external fragmentation
by compaction
– Shuffle memory contents
to place all free memory
together in one large
block.
– Only possible if we can relocate
running programs
• What happens to pointers into moved memory?
– Generally requires
hardware support
[Figure: memory before and after compaction – processes A–D are shuffled together and the free space is combined into one large block]
33
Some Remaining Issues with Dynamic
Partitioning
• We have ignored
– Relocation
• How does a process run in
different locations in memory?
– Protection
• How do we prevent processes from
interfering with each other?
34
Example Logical Address-Space
Layout
• Logical
addresses refer
to specific
locations within
the program
• Once running,
these addresses
must refer to real
physical memory
• When are logical
addresses bound
to physical?
[Figure: a logical address space running from 0x0000 to 0xFFFF]
35
When are memory
addresses bound?
• Compile/link time
– Compiler/Linker binds the
addresses
– Must know “run” location at
compile time
– Recompile if location changes
• Load time
– Compiler generates relocatable
code
– Loader binds the addresses at
load time
• Run time
– Logical compile-time addresses
translated to physical addresses
by special hardware.
36
Hardware Support for Runtime Binding
and Protection
• For process B to run using logical
addresses
– Process B expects to access addresses
from zero to some limit of memory size
[Figure: process B's logical address space, from 0x0000 up to its limit]
37
Hardware Support for Runtime Binding
and Protection
• For process B to run using logical
addresses
– Need to add an appropriate offset to its
logical addresses
• Achieve relocation
• Protect memory “lower” than B
– Must limit the maximum logical address B
can generate
• Protect memory “higher” than B
[Figure: process B placed in physical memory (0x0000–0xFFFF) between its base and base + limit]
38
Hardware Support for Relocation and
Limit Registers
39
Base and Limit Registers
• Also called
– Base and bound registers
– Relocation and limit registers
• Base and limit registers
– Restrict and relocate the currently
active process
– Base and limit registers must be
changed at
• Load time
• Relocation (compaction time)
• On a context switch
[Figure: physical memory 0x0000–0xFFFF holding processes A–D; process C occupies 0x8000–0x9FFF, giving base = 0x8000 and limit = 0x2000, so its logical addresses run 0x0000–0x1FFF]
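What the relocation and limit hardware does on every memory reference, sketched in C using the numbers from this slide (base = 0x8000, limit = 0x2000); a real MMU does this check in hardware and raises an exception rather than returning an error code:

#include <stdint.h>
#include <stdio.h>

/* Every logical address is first checked against the limit register,
 * then relocated by adding the base register. */
int translate(uint32_t logical, uint32_t base, uint32_t limit, uint32_t *physical)
{
    if (logical >= limit)
        return -1;                 /* protection fault: address out of range */
    *physical = base + logical;
    return 0;
}

int main(void)
{
    uint32_t phys;
    /* Process C from the slide: base = 0x8000, limit = 0x2000. */
    if (translate(0x1ABC, 0x8000, 0x2000, &phys) == 0)
        printf("logical 0x1ABC -> physical 0x%X\n", (unsigned)phys);   /* 0x9ABC */
    if (translate(0x2100, 0x8000, 0x2000, &phys) != 0)
        printf("logical 0x2100 -> protection fault\n");
    return 0;
}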
40
Base and Limit Registers
• Also called
– Base and bound registers
– Relocation and limit registers
• Base and limit registers
– Restrict and relocate the currently
active process
– Base and limit registers must be
changed at
• Load time
• Relocation (compaction time)
• On a context switch
[Figure: a second example – the active process occupies physical memory 0x4000–0x6FFF, giving base = 0x4000 and limit = 0x3000, so its logical addresses run 0x0000–0x2FFF]
41
Base and Limit Registers
• Pro
– Supports protected multi-processing (-tasking)
• Cons
– Physical memory allocation must still be
contiguous
– The entire process must be in memory
– Do not support partial sharing of address
spaces
• No shared code, libraries, or data structures
between processes
42
Timesharing
• Thus far, we have a system suitable for
a batch system
– Limited number of dynamically allocated
processes
• Enough to keep CPU utilised
– Relocated at runtime
– Protected from each other
• But what about timesharing?
– We need more than just a small number of
processes running at once
– Need to support a mix of active and inactive
processes, of varying longevity
43
Swapping
• A process can be swapped temporarily out of memory to
a backing store, and then brought back into memory for
continued execution.
• Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images.
• Can prioritize – lower-priority process is swapped out so
higher-priority process can be loaded and executed.
• Major part of swap time is transfer time; total transfer
time is directly proportional to the amount of memory
swapped.
– slow
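Rough illustration with assumed numbers (not from the slide): swapping a 100 MB process out over a disk that sustains 100 MB/s takes about 1 second, and swapping another 100 MB process in takes about another second, so one exchange already costs on the order of seconds – orders of magnitude longer than a typical scheduling time slice.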
44
Schematic View of Swapping
45
So far we have assumed a
process is smaller than memory
• What can we do if a process is larger than
main memory?
46
Overlays
• Keep in memory only those instructions
and data that are needed at any given
time.
• Implemented by user, no special support
needed from operating system
• Programming design of overlay structure
is complex
47
Overlays for a Two-Pass Assembler
48
Virtual Memory
• Developed to address the issues identified with
the simple schemes covered thus far.
• Two classic variants
– Paging
– Segmentation
• Paging is now the dominant one of the two
• Some architectures support hybrids of the two
schemes
– E.g. Intel IA-32 (32-bit x86)
49
Virtual Memory - Paging
• Partition physical memory into small
equal sized chunks
– Called frames
• Divide each process’s virtual (logical)
address space into same size chunks
– Called pages
– Virtual memory addresses consist of a
page number and offset within the page
• OS maintains a page table
– contains the frame location for each page
– Used to translate each virtual address
to a physical address
– The relation between virtual addresses and
physical memory addresses is given by the page table
• Process’s physical memory does not
have to be contiguous
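A C sketch of that translation, assuming (purely for illustration) 4 KB pages and a small, flat, one-level page table:

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u                 /* assumed 4 KB pages */
#define PAGE_SHIFT  12                    /* log2(PAGE_SIZE) */
#define NUM_PAGES   16                    /* assumed small address space */

struct pte {
    bool     present;                     /* is the page mapped to a frame? */
    uint32_t frame;                       /* frame number in physical memory */
};

static struct pte page_table[NUM_PAGES];

/* Translate a virtual address: split it into page number and offset,
 * look the page up in the page table, and recombine with the frame. */
int translate_page(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    if (page >= NUM_PAGES || !page_table[page].present)
        return -1;                        /* page fault / invalid reference */
    *paddr = (page_table[page].frame << PAGE_SHIFT) | offset;
    return 0;
}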
52
Paging
• No external fragmentation
• Small internal fragmentation (in last page)
• Allows sharing by mapping several pages
to the same frame
• Abstracts physical organisation
– Programmers deal only with virtual addresses
• Minimal support for logical organisation
– Each unit is one or more pages
53
Memory Management Unit (MMU)
(the Translation Look-aside Buffer – TLB – is a translation cache inside the MMU)
The position and function of the MMU
54
MMU Operation
Internal operation of a simplified MMU with sixteen 4 KB pages
Assume for now that
the page table is
contained wholly in
registers within the
MMU – in practice it
is not
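A worked example under that assumption, with an invented mapping: sixteen 4 KB pages give a 16-bit virtual address split into a 4-bit page number and a 12-bit offset. If the page table maps page 2 to frame 6, then virtual address 0x2004 (page 2, offset 0x004) is output as physical address 0x6004; if page 2 is marked not-present, the MMU raises a page fault instead.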
55
Virtual Memory - Segmentation
• Memory-management scheme
that supports user’s view of
memory.
• A program is a collection of
segments. A segment is a
logical unit such as:
– main program, procedure,
function, method, object, local
variables, global variables,
common block, stack, symbol
table, arrays
56
Logical View of Segmentation
[Figure: segments 1–4 in the user's logical view are mapped to non-contiguous regions of physical memory, in a different order]
57
Segmentation Architecture
• A logical address consists of a two-tuple:
<segment-number, offset>
– Each address identifies a segment and an offset
within that segment
• Segment table – each table entry has:
– base – contains the starting physical address where the
segment resides in memory.
– limit – specifies the length of the segment.
• Segment-table base register (STBR) points to the
segment table’s location in memory.
• Segment-table length register (STLR) indicates number
of segments used by a program;
segment number s is legal if s < STLR.
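A C sketch of the lookup just described, with the segment table held in an ordinary array; in hardware the table is reached via the STBR and a failed check raises an exception rather than returning an error code:

#include <stdint.h>
#include <stddef.h>

struct segment_entry {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment in bytes */
};

static struct segment_entry *stbr;   /* segment-table base register */
static size_t                stlr;   /* segment-table length register */

/* Translate <segment, offset>: the segment number must be below the STLR
 * and the offset must be below that segment's limit. */
int translate_segment(uint32_t seg, uint32_t offset, uint32_t *paddr)
{
    if (seg >= stlr)
        return -1;                         /* invalid segment number */
    if (offset >= stbr[seg].limit)
        return -1;                         /* offset outside the segment */
    *paddr = stbr[seg].base + offset;
    return 0;
}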
58
Segmentation Hardware
59
Example of Segmentation
60
Segmentation Architecture
• Protection. With each entry in segment table
associate:
– validation bit = 0 ⇒ illegal segment
– read/write/execute privileges
• Protection bits associated with segments; code
sharing occurs at segment level.
• Since segments vary in length, memory
allocation is a dynamic partition-allocation
problem.
• A segmentation example is shown in the
following diagram
61
Sharing of Segments
62
Segmentation Architecture
• Relocation.
– dynamic
⇒ by segment table
• Sharing.
– shared segments
⇒ the same physical storage backs multiple segments
⇒ ideally, using the same segment number in each process
• Allocation.
– First/next/best fit
⇒ external fragmentation
63
Comparison
Comparison of paging and segmentation
