This document discusses memory management techniques in operating systems. It begins by defining memory management and discussing the concepts of logical and physical address spaces. It then explains different memory management schemes including swapping, multiple partitions with fixed and variable sizes, and paging and segmentation. The key points are that the OS must facilitate sharing of main memory among multiple processes and that memory management schemes determine how logical addresses are mapped to physical addresses.
Operating System
Topic Memory Management
for Btech/Bsc (C.S)/BCA...
Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free. It determines how much memory is to be allocated to each process and decides which process will get memory at what time. It tracks whenever some memory is freed or unallocated and updates the status accordingly.
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.
This presentation is about memory management in operating systems (OS). It describes the basic need for memory management and its various techniques, such as swapping, fragmentation, paging, and segmentation.
These slides also discuss memory allocation at a basic level, including static and dynamic memory allocation.
The objectives of these slides are:
- To provide a detailed description of various ways of organizing memory hardware
- To discuss various memory-management techniques, including paging and segmentation
- To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging
How to utilize memory optimally by manipulating objects in memory is referred to as memory management. Some basic facts:
- A program must be brought (from disk) into memory and placed within a process for it to run.
- Main memory and registers are the only storage the CPU can access directly.
- The memory unit sees only a stream of addresses plus read requests, or addresses plus data and write requests.
- Register access takes one CPU clock cycle (or less).
- Main memory access can take many cycles, causing a stall.
- A cache sits between main memory and the CPU registers.
- Protection of memory is required to ensure correct operation.
2. Objectives
At the end of the course, the student should be
able to:
•Define memory management;
•Discuss the concept of address binding;
•Define logical and physical address space;
•Discuss swapping, multiple partitions, paging and
segmentation.
3. Memory Management
• The sharing of the CPU by several processes
requires that the operating system keeps
several processes (including the OS itself) in
main memory at the same time.
• The operating system should therefore have
algorithms for facilitating the sharing of main
memory among these processes (memory
management).
4. Memory Management
The Concept of Address Binding
• Usually, a program resides on a disk as a
binary executable file. The program must then
be brought into main memory before the CPU
can execute it.
• Depending on the memory management
scheme, the process may be moved between
disk and memory during its execution. The
collection of processes on the disk that are
waiting to be brought into memory for execution
forms the job queue or input queue.
5. Memory Management
The Concept of Address Binding
• The normal procedure is to select one of the
processes in the input queue and to load the
process into memory.
• A user process may reside in any part of the
physical memory. Thus, although the address
space of the computer starts at 00000, the
first address of the user process does not
need to be 00000.
6. Memory Management
Multistep processing of a user program
[Figure: multistep processing of a user program. The compiler or assembler translates the source program into an object module (compile time). The linkage editor combines it with other object modules into a load module. The loader, together with the system library, produces the in-memory binary memory image (load time). Dynamically loaded system libraries are linked at execution time (run time, dynamic linking).]
7. Memory Management
The Concept of Address Binding
• Addresses in a source program are generally
symbolic (such as LOC or ALPHA). A
compiler will typically bind these symbolic
addresses to relocatable addresses (such as
14 bytes from the beginning of a certain
module). The linker editor or loader will
then bind the relocatable addresses to
absolute addresses (such as 18000H). Each
binding is a mapping from one address space
to another.
8. Memory Management
The binding of instructions and data to memory
address may be done at any step along the way:
1. Compile Time. If it is known at compile time
where the process will reside in memory, then
absolute code can be generated.
• For example, if it is known that a user process
resides starting at location R, then the generated
compiler code will start at that location and
extend up from there.
• If, at some later time, the starting location
changes, then it will be necessary to recompile
the code.
9. Memory Management
2. Load Time. If it is not known at compile
time where the process will reside in memory,
then the compiler must generate relocatable
code. In this case, final binding is delayed until
load time. If the starting address changes, then
the OS must reload the user code to incorporate
this changed value.
10. Memory Management
3. Execution Time. If the process can be
moved during its execution from one memory
segment to another, then binding must be
delayed until run time. Special hardware must
be available for this scheme to work. Most
general-purpose operating systems use this
method.
11. Memory Management
The Concept of Address Binding
• To obtain better memory-space utilization,
dynamic loading is often used.
• With dynamic loading, a routine is not loaded
until it is called. All routines are kept on disk
in a relocatable load format.
• Whenever a routine is called, the relocatable
linking loader is called to load the desired
routine into memory and to update the
program’s address tables to reflect this
change.
12. Memory Management
The Concept of Address Binding
• The advantage of dynamic loading is that an
unused routine is never loaded. This scheme
is particularly useful when large amounts of
code are needed to handle infrequently
occurring cases, such as error routines. In
this case, although the total program size may
be large, the portion that is actually used (and
hence actually loaded) may be much smaller.
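The dynamic-loading scheme above can be sketched in a few lines of Python. This is only an illustrative analogy, not an OS API: the `DynamicLoader` class, the routine names, and the factory functions are all hypothetical, standing in for relocatable load modules on disk and the relocatable linking loader.

```python
# Sketch of dynamic loading: a routine is "loaded" only on its first
# call. The on_disk dict of factories stands in for relocatable load
# modules kept on disk; self.loaded plays the program's address table.

class DynamicLoader:
    def __init__(self, on_disk):
        self.on_disk = on_disk      # routine name -> factory ("disk image")
        self.loaded = {}            # routines currently in memory

    def call(self, name, *args):
        if name not in self.loaded:                   # not yet in memory?
            self.loaded[name] = self.on_disk[name]()  # load it now
        return self.loaded[name](*args)

loader = DynamicLoader({
    "sqrt": lambda: (lambda x: x ** 0.5),
    "error_handler": lambda: (lambda msg: "error: " + msg),
})

print(loader.call("sqrt", 16.0))          # -> 4.0 (loaded on first use)
print("error_handler" in loader.loaded)   # -> False (never called, never loaded)
```

Because the error handler is never invoked, it is never loaded, mirroring the slide's point that infrequently used routines cost nothing until they are needed.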
13. Memory Management
Logical and Physical Address Space
• An address generated by the CPU is
commonly referred to as a logical address.
• An address seen by the memory unit is
commonly referred to as a physical address.
• The compile-time and load-time address
binding schemes result in an environment
where the logical and physical addresses are
the same.
14. Memory Management
Logical and Physical Address Space
• However, the execution-time address-binding
scheme results in an environment where the
logical and physical addresses differ.
• The run-time mapping from logical to physical
addresses is done by the memory
management unit (MMU), which is a
hardware device.
15. Memory Management
Logical and Physical Address Space
• The hardware support necessary for this
scheme is similar to the ones discussed
earlier. The base register is now the
relocation register. The value in the
relocation register is added to every address
generated by the user process at the time it is
sent to memory.
16. Memory Management
For example, if the base is at 14000, then an attempt by
the user to address location 0 is dynamically relocated to
location 14000; an access to location 346
is mapped to location 14346.
[Figure: the CPU issues logical address 346; the MMU adds the relocation register value 14000; physical address 14346 is sent to memory.]
17. Memory Management
Logical and Physical Address Space
• Notice that the user program never sees the
real physical addresses. The program can
create a pointer to location 346, store it in
memory, manipulate it, compare it to other
addresses – all as the number 346. Only
when it is used as a memory address is it
relocated relative to the base register.
18. Memory Management
Logical and Physical Address Space
• There are now two different types of
addresses: logical addresses (in the range 0
to max) and physical addresses (in the range
R + 0 to R + max for a base value of R). The
user generates only logical addresses and
thinks that the process runs in locations 0 to
max. The user program supplies the logical
addresses; these must be mapped to physical
addresses before they are used.
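The relocation-register mapping can be made concrete with a short Python sketch. The base value 14000 comes from the example above; the limit value is a hypothetical process size added here only to show the protection check.

```python
RELOCATION_REGISTER = 14000   # base value R from the example above
LIMIT = 12000                 # hypothetical process size (logical range 0..max)

def mmu(logical_address):
    """Map a logical address to a physical one, as the MMU would."""
    if not (0 <= logical_address < LIMIT):
        raise MemoryError("addressing error: trap to the OS")
    return RELOCATION_REGISTER + logical_address

print(mmu(0))    # -> 14000
print(mmu(346))  # -> 14346
```

The user program only ever manipulates the number 346; the addition happens at the moment the address is sent to memory.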
19. Memory Management
SWAPPING
• To facilitate multiprogramming, the OS can
temporarily swap a process out of memory to
fast secondary storage (a fixed disk) and then
bring it back into memory for continued
execution.
21. Memory Management
SWAPPING
• For example, assume a multiprogrammed
environment with a round-robin CPU-
scheduling algorithm. When a quantum
expires, the memory manager will start to
swap out the process that just finished, and to
swap in another process to the memory
space that has been freed.
22. Memory Management
SWAPPING
• Take note that the quantum must be
sufficiently large that reasonable amounts of
computing are done between swaps.
• The context-switch time in swapping is fairly
high.
23. Memory Management
Example:
Size of User Process = 1 MB
= 1,048,576 bytes
Transfer Rate of
Secondary Storage = 5 MB/sec
= 5,242,880 bytes/sec
The actual transfer of the 1 MB process to
or from memory takes:
1,048,576 / 5,242,880 = 0.2 sec = 200 ms
24. Memory Management
SWAPPING
• Assuming that no head seeks are necessary
and an average latency of 8 ms, the swap
time takes 208 ms. Since it is necessary to
swap out and swap in, the total swap time is
then about 416 ms.
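The swap-time arithmetic on the last two slides can be checked with a few lines of Python, using exactly the numbers given (1 MB process, 5 MB/sec transfer rate, 8 ms average latency, no head seeks):

```python
PROCESS_SIZE = 1_048_576    # 1 MB user process, in bytes
TRANSFER_RATE = 5_242_880   # 5 MB/sec, in bytes/sec
LATENCY = 0.008             # 8 ms average latency, no head seeks

transfer = PROCESS_SIZE / TRANSFER_RATE   # 0.2 s   = 200 ms
one_way = transfer + LATENCY              # 0.208 s = 208 ms
total_swap = 2 * one_way                  # swap out + swap in

print(round(total_swap * 1000))           # -> 416 (ms)
```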
25. Memory Management
SWAPPING
• For efficient CPU utilization, the execution
time for each process must be long
relative to the swap time. Thus, in a round-
robin CPU-scheduling algorithm, for example,
the time quantum should be substantially
larger than 416 ms.
• Swapping is constrained by other factors as
well. A process to be swapped out must be
completely idle.
26. Memory Management
SWAPPING
• Of particular concern is any pending I/O. A
process may be waiting for an I/O operation when
it is desired to swap that process to free up its
memory. However, if the I/O is asynchronously
accessing the user memory for I/O buffers, then
the process cannot be swapped.
• Assume that the I/O operation of process P1 was
queued because the device was busy. Then if P1
was swapped out and process P2 was swapped
in, the I/O operation might attempt to use memory
that now belongs to P2.
27. Memory Management
SWAPPING
• Normally a process that is swapped out will
be swapped back into the same memory
space that it occupied previously. If binding is
done at assembly or load time, then the
process cannot be moved to different
locations. If execution-time binding is being
used, then it is possible to swap a process
into a different memory space, because the
physical addresses are computed during
execution time.
28. Memory Management
MULTIPLE PARTITIONS
• In an actual multiprogrammed environment,
many different processes reside in memory,
and the CPU switches rapidly back and forth
among these processes.
• Recall that the collection of processes on a
disk that are waiting to be brought into
memory for execution form the input or job
queue.
29. Memory Management
MULTIPLE PARTITIONS
• Since the size of a typical process is much
smaller than that of main memory, the
operating system divides main memory into a
number of partitions wherein each partition
may contain exactly one process.
• The degree of multiprogramming is bounded
by the number of partitions.
30. Memory Management
MULTIPLE PARTITIONS
• When a partition is free, the operating system
selects a process from the input queue and
loads it into the free partition. When the
process terminates, the partition becomes
available for another process.
31. Memory Management
MULTIPLE PARTITIONS
• There are two major memory management
schemes possible in handling multiple
partitions:
1. Multiple Contiguous Fixed Partition
Allocation
Example:
MFT Technique (Multiprogramming with a
Fixed number of Tasks) originally used by the
IBM OS/360 operating system.
32. Memory Management
MULTIPLE PARTITIONS
• There are two major memory management
schemes possible in handling multiple
partitions:
2. Multiple Contiguous Variable Partition
Allocation
Example:
MVT Technique (Multiprogramming with a
Variable number of Tasks)
33. Memory Management
MULTIPLE PARTITIONS
• Fixed Regions (MFT)
• In MFT, the region sizes are fixed, and do not
change as the system runs.
• As jobs enter the system, they are put into a
job queue. The job scheduler takes into
account the memory requirements of each job
and the available regions in determining
which jobs are allocated memory.
34. Memory Management
Example: Assume a 32K main memory divided
into the following partitions:
12K for the operating system
2K for very small processes
6K for average processes
12K for large jobs
[Figure: memory map from 0 to 32K. Operating System (12K), User Partition 1 (2K), User Partition 2 (6K), User Partition 3 (12K).]
35. Memory Management
MULTIPLE PARTITIONS
• The operating system places jobs or
processes entering memory in a job queue in
a predetermined manner (such as first-come,
first-served).
• The job scheduler then selects a job to place
in memory depending on the memory
available.
37. Memory Management
A typical memory management algorithm would:
1. Assign Job 1 to User Partition 2
2. Assign Job 2 to User Partition 1
3. Job 3 (3K) best fits User Partition 2 (6K), since it
is too small to justify User Partition 3 (12K). Since Job 1 is
still using this partition, Job 3 should wait for its turn.
4. Job 4 cannot use User Partition 3 since it will
go ahead of Job 3 thus breaking the FCFS rule. So
it will also have to wait for its turn even though User
Partition 3 is free.
This algorithm is known as the best-fit only
algorithm.
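The best-fit only rule under FCFS can be sketched as a small Python function. The partition sizes (2K, 6K, 12K) follow the earlier example; the job sizes, apart from Job 3's 3K, are hypothetical values chosen so the queue behaves as described above.

```python
# Best-fit only under FCFS: each job is bound to the smallest partition
# that can hold it; if that partition is busy, the job waits, and no
# later job may jump ahead of it.

def best_fit_only(partitions, jobs):
    """Return {job: partition index or 'waits'} after one scheduling pass."""
    free = set(range(len(partitions)))
    placement = {}
    blocked = False
    for name, size in jobs:
        # the smallest partition that can hold this job, busy or not
        fit = min((p for p, cap in enumerate(partitions) if cap >= size),
                  key=lambda p: partitions[p])
        if not blocked and fit in free:
            free.discard(fit)
            placement[name] = fit
        else:
            placement[name] = "waits"   # FCFS: everyone behind waits too
            blocked = True
    return placement

parts = [2, 6, 12]   # user partition sizes in K, as in the earlier example
queue = [("Job1", 5), ("Job2", 2), ("Job3", 3), ("Job4", 10)]
print(best_fit_only(parts, queue))
# -> {'Job1': 1, 'Job2': 0, 'Job3': 'waits', 'Job4': 'waits'}
```

Note that Job 4 waits even though the 12K partition is free, which is exactly the flaw of this algorithm.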
38. Memory Management
MULTIPLE PARTITIONS
• One flaw of the best-fit only algorithm is that it
forces other jobs (particularly those at the
latter part of the queue) to wait even though
there are some free memory partitions.
• An alternative to this algorithm is the best-fit
available algorithm. This algorithm allows
small jobs to use a much larger memory
partition if it is the only partition left. However,
the algorithm still wastes some valuable
memory space.
39. Memory Management
MULTIPLE PARTITIONS
• Another option is to allow jobs that are near
the rear of the queue to go ahead of other
jobs that cannot proceed due to any
mismatch in size. However, this will break the
FCFS rule.
40. Memory Management
MULTIPLE PARTITIONS
Other problems with MFT:
•1. What if a process requests more
memory?
Possible Solutions:
A] kill the process
B] return control to the user program
with an “out of memory” message
C] reswap the process to a bigger
partition, if the system allows dynamic
relocation
41. Memory Management
MULTIPLE PARTITIONS
• 2. How does the system determine the sizes
of the partitions?
• 3. MFT results in internal and external
fragmentation which are both sources of
memory waste.
42. Memory Management
MULTIPLE PARTITIONS
• Internal fragmentation occurs when a process
requiring m memory locations resides in a
partition with n memory locations where m <
n. The difference between n and m (n - m) is
the amount of internal fragmentation.
• External fragmentation occurs when a
partition is available, but is too small for any
waiting job.
43. Memory Management
MULTIPLE PARTITIONS
• Partition size selection affects internal and
external fragmentation since if a partition is
too big for a process, then internal
fragmentation results. If the partition is too
small, then external fragmentation occurs.
Unfortunately, with a dynamic set of jobs to
run, there is probably no one right partition for
memory.
45. Memory Management
MULTIPLE PARTITIONS
• Only Jobs 1 and 2 can enter memory (at
partitions 1 and 2). During this time:
I.F. = (10 K - 7 K) + (4 K - 3 K)
= 4 K
E.F. = 8 K
• Therefore:
Memory Utilization= 10/22 x 100
= 45.5%
• What if the system partitions memory as 10:8:4 or
7:3:6:6?
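The arithmetic above can be sketched as a small helper. This assumes, as in the example, partitions of 10K and 4K holding jobs of 7K and 3K, with an 8K partition left empty.

```python
def mft_stats(allocations, empty_partitions):
    """allocations: list of (partition_size, job_size) pairs, sizes in K.
    empty_partitions: sizes of partitions no waiting job can use.
    Returns (internal_frag, external_frag, utilization_percent)."""
    internal = sum(p - j for p, j in allocations)     # wasted inside partitions
    external = sum(empty_partitions)                  # unusable free partitions
    used = sum(j for _, j in allocations)
    total = sum(p for p, _ in allocations) + external
    return internal, external, round(100 * used / total, 1)

print(mft_stats([(10, 7), (4, 3)], [8]))   # (4, 8, 45.5)
```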
46. Memory Management
Variable Partitions (MVT)
• In MVT, the system allows the region sizes to
vary dynamically. It is therefore possible to have
a variable number of tasks in memory
simultaneously.
• Initially, the operating system views memory as
one large block of available memory called a
hole. When a job arrives and needs memory, the
system searches for a hole large enough for this
job. If one exists, the OS allocates only as much
as is needed, keeping the rest available to satisfy
future requests.
47. Memory Management
Example:
Assume that memory has 256 K locations with
the operating system residing at the first 40 K
locations. Assume further that the following jobs are
in the job queue:
The system again follows the FCFS algorithm in
scheduling processes.
JOB MEMORY COMPUTE TIME
1 60K 10 units
2 100K 5 units
3 30K 20 units
4 70K 8 units
5 50K 15 units
48. Memory Management
Example Memory Allocation and Job Scheduling for MVT
[Figure: three snapshots of memory (0 to 256K, with the
OS in 0K-40K):
1. Initially: Job 1 in 40K-100K, Job 2 in 100K-200K,
Job 3 in 200K-230K; 230K-256K free.
2. Job 2 out, Job 4 in after 5 time units: Job 4 in
100K-170K; 170K-200K and 230K-256K free.
3. Job 1 out, Job 5 in after the next 5 time units:
Job 5 in 40K-90K; 90K-100K, 170K-200K and
230K-256K free.]
49. Memory Management
Variable Partitions (MVT)
• This example illustrates several points about
MVT:
• 1. In general, there is at any time a set of
holes, of various sizes, scattered throughout
memory.
• 2. When a job arrives, the operating system
searches this set for a hole large enough for
the job (using the first-fit, best-fit, or worst
fit algorithm).
50. Memory Management
Variable Partitions (MVT)
First Fit
• Allocate the first hole that is large enough.
This algorithm is generally faster and
empty spaces tend to migrate toward
higher memory. However, it tends to
exhibit external fragmentation.
51. Memory Management
Variable Partitions (MVT)
Best Fit
• Allocate the smallest hole that is large
enough. This algorithm produces the
smallest leftover hole. However, it may
leave many holes that are too small to be
useful.
52. Memory Management
Variable Partitions (MVT)
Worst Fit
• Allocate the largest hole. This algorithm
produces the largest leftover hole.
However, it tends to scatter the unused
portions over non-contiguous areas of
memory.
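The three placement policies above can be contrasted on the same hole list. The hole sizes below are made up for illustration; holes are kept in address order, so the lowest index is the first fit.

```python
def pick_hole(holes, request, policy):
    """Return the index of the hole chosen for `request`, or None.
    holes: list of hole sizes in address order."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if policy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # lowest address
    if policy == "best":
        return min(candidates)[1]                       # smallest adequate hole
    return max(candidates)[1]                           # worst: largest hole

holes = [12, 5, 20, 8]                   # sizes in K, in address order
print(pick_hole(holes, 7, "first"))      # 0
print(pick_hole(holes, 7, "best"))       # 3
print(pick_hole(holes, 7, "worst"))      # 2
```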
53. Memory Management
Variable Partitions (MVT)
• 3. If the hole is too large for a job, the system
splits it into two: the operating system gives one
part to the arriving job and returns the other to
the set of holes.
• 4. When a job terminates, it releases its block of
memory and the operating system returns it to the
set of holes.
• 5. If the new hole is adjacent to other holes, the
system merges these adjacent holes to form one
larger hole.
54. Memory Management
Variable Partitions (MVT)
• It is important for the operating system to
keep track of the unused parts of user
memory or holes by maintaining a linked list.
A node in this list will have the following fields:
1. the base address of the hole
2. the size of the hole
3. a pointer to the next node in the list
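A node with exactly those three fields might look like the sketch below; allocation walks the list first-fit and shrinks the chosen hole in place (splitting it). The hole addresses are taken from the MVT snapshot above, in K.

```python
class Hole:
    """One node of the hole list: base address, size, next pointer."""
    def __init__(self, base, size, nxt=None):
        self.base, self.size, self.next = base, size, nxt

def allocate(head, request):
    """First-fit walk over the hole list; returns the allocated base
    address, or None if no hole is large enough."""
    node = head
    while node is not None:
        if node.size >= request:
            base = node.base
            node.base += request       # split: give the low part to the job,
            node.size -= request       # keep the remainder as a smaller hole
            return base
        node = node.next
    return None

holes = Hole(90, 10, Hole(170, 30, Hole(230, 26)))
print(allocate(holes, 20))   # 170 - the first hole big enough
```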
55. Memory Management
Variable Partitions (MVT)
• Internal fragmentation does not exist in MVT
but external fragmentation is still a problem.
It is possible to have several holes with sizes
that are too small for any pending job.
• The solution to this problem is compaction.
The goal is to shuffle the memory contents to
place all free memory together in one large
block.
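Compaction can be sketched as sliding every allocated block toward low memory so the free space coalesces into one hole at the top. The block layout below matches the compaction example that follows; addresses are in K.

```python
def compact(blocks, os_end, mem_size):
    """blocks: list of (name, base, size) tuples.
    Returns the relocated blocks and the single remaining hole
    as (hole_base, hole_size)."""
    relocated, cursor = [], os_end
    for name, _base, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, cursor, size))   # slide block down to cursor
        cursor += size
    return relocated, (cursor, mem_size - cursor)

blocks = [("Job 5", 40, 50), ("Job 4", 100, 70), ("Job 3", 200, 30)]
print(compact(blocks, 40, 256))
# ([('Job 5', 40, 50), ('Job 4', 90, 70), ('Job 3', 160, 30)], (190, 66))
```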
56. Memory Management
Example:
[Figure: Before compaction - OS in 0K-40K, Job 5 in
40K-90K, a 10K hole at 90K-100K, Job 4 in 100K-170K,
a 30K hole at 170K-200K, Job 3 in 200K-230K, and a
26K hole at 230K-256K.
After compaction - OS in 0K-40K, Job 5 in 40K-90K,
Job 4 in 90K-160K, Job 3 in 160K-190K, and a single
66K hole at 190K-256K.]
• Compaction is possible only if relocation is
dynamic, and is done at execution time.
57. Memory Management
PAGING
• MVT still suffers from external fragmentation
when available memory is not contiguous, but
fragmented into many scattered blocks.
• Aside from compaction, paging can minimize
external fragmentation. Paging permits a
program’s memory to be non-contiguous, thus
allowing the operating system to allocate a
program physical memory whenever possible.
58. Memory Management
PAGING
• In paging, the operating system divides main
memory into fixed-sized blocks called frames.
The system also breaks a process into blocks
called pages where the size of a memory
frame is equal to the size of a process page.
The pages of a process may reside in
different frames in main memory.
59. Memory Management
PAGING
• Every address generated by the CPU is a
logical address. A logical address has two
parts:
1. The page number (p) indicates what
page the word resides.
2. The page offset (d) selects the word
within the page.
60. Memory Management
PAGING
• The operating system translates this logical
address into a physical address in main
memory where the word actually resides.
This translation process is possible through
the use of a page table.
61. Memory Management
PAGING
• The page number is used as an index into the
page table. The page table contains the base
address of each page in physical memory.
[Figure: paging hardware - the CPU issues a logical
address (p, d); the page number p indexes the page
table to obtain the frame number f; the physical
address (f, d) is sent to main memory.]
62. Memory Management
• This base address is combined with the page
offset to define the physical memory address that
is sent to the memory unit.
[Figure: paging example - logical memory consists of
pages 0 to 3; the page table maps page 0 to frame 1,
page 1 to frame 4, page 2 to frame 3, and page 3 to
frame 7; physical memory consists of frames 0 to 7
holding the pages in those frames.]
63. Memory Management
• The page size (like the frame size) is defined by
the hardware. The size of a page is typically a
power of 2 varying between 512 bytes and 16 MB
per page, depending on the computer
architecture.
• If the size of the logical address space is 2^m, and
the page size is 2^n addressing units (bytes or
words), then the high-order m - n bits of a logical
address designate the page number, and the n
low-order bits designate the page offset. Thus, the
logical address is as follows:
page number (p): m - n bits | page offset (d): n bits
64. Memory Management
• Example:
Main Memory Size = 32 bytes
Process Size = 16 bytes
Page or Frame Size = 4 bytes
No. of Process Pages = 4 pages
No. of MM Frames = 8 frames
66. Memory Management
PAGING
• Logical address 0 is page 0, offset 0. Indexing
into the page table, it is seen that page 0 is in
frame 5. Thus, logical address 0 maps to
physical address 20 (5 x 4 + 0). Logical address
3 (page 0, offset 3) maps to physical address 23
(5 x 4 + 3). Logical address 4 is page 1, offset 0;
according to the page table, page 1 is mapped to
frame 6. Thus logical address 4 maps to physical
address 24 (6 x 4 + 0). Logical address 13 maps
to physical address 9.
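The mappings worked out above can be checked with a short sketch. The entries for pages 0, 1, and 3 follow directly from the text (13 mapping to 9 implies page 3 is in frame 2); page 2's frame is assumed for completeness.

```python
PAGE_SIZE = 4                             # bytes per page, as in the example
page_table = {0: 5, 1: 6, 2: 1, 3: 2}     # page -> frame (page 2 assumed)

def translate(logical):
    """Split a logical address into (page, offset) and map it through
    the page table: physical = frame * PAGE_SIZE + offset."""
    p, d = divmod(logical, PAGE_SIZE)
    return page_table[p] * PAGE_SIZE + d

for la in (0, 3, 4, 13):
    print(la, "->", translate(la))        # 0->20, 3->23, 4->24, 13->9
```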
71. Memory Management
PAGING
• There is no external fragmentation in paging
since the operating system can allocate any
free frame to a process that needs it.
However, it is possible to have internal
fragmentation if the memory requirements of
a process do not happen to fall on page
boundaries. In other words, the last page
may not completely fill up a frame.
72. Memory Management
Example:
Page Size = 2,048 bytes
Process Size = 72,766 bytes
No. of Pages = 36 pages
(35 pages plus 1,086 bytes)
Internal Fragmentation is 2,048 - 1,086 = 962
•In the worst case, a process would need n pages
plus one byte. It would be allocated n + 1 frames,
resulting in an internal fragmentation of almost an
entire frame.
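The page-count and internal-fragmentation arithmetic above amounts to one ceiling division:

```python
import math

def paging_internal_frag(process_size, page_size):
    """Returns (number_of_pages, internal_fragmentation_in_bytes)."""
    pages = math.ceil(process_size / page_size)
    used_in_last_page = process_size - (pages - 1) * page_size
    return pages, page_size - used_in_last_page

print(paging_internal_frag(72766, 2048))   # (36, 962)
```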
73. Memory Management
PAGING
• If process size is independent of page size,
internal fragmentation can be expected to
average one-half page per process. This
consideration suggests that small page sizes
are desirable. However, overhead is involved
in each page-table entry, and this overhead is
reduced as the size of the pages increases.
Also, disk I/O is more efficient when the
amount of data being transferred is larger.
74. Memory Management
PAGING
• Each operating system has its own methods
for storing page tables. Most allocate a page
table for each process. A pointer to the page
table is stored with the other register values
(like the program counter) in the PCB. When
the dispatcher is told to start a process, it
must reload the user registers and define the
correct hardware page-table values from the
stored user page table.
75. Memory Management
• The options in implementing page tables are:
1. Page Table Registers
In the simplest case, the page table is
implemented as a set of dedicated registers.
These registers should be built with high-speed
logic to make page-address translation efficient.
The advantage of using registers in implementing
page tables is fast mapping. Its main
disadvantage is that it becomes expensive for
large logical address spaces (too many pages).
76. Memory Management
2. Page Table in Main Memory
The page table is kept in memory and a
Page Table Base Register (PTBR) points to the
page table. The advantage of this approach is
that changing page tables requires changing
only this register, substantially reducing context
switch time. However, two memory accesses
are needed to access a word.
77. Memory Management
3. Associative Registers
The standard solution is to use a special,
small, fast-lookup hardware cache, variously
called Associative Registers or Translation
Look-aside Buffers (TLB).
78. Memory Management
• The associative registers contain only a few
of the page-table entries. When a logical
address is generated by the CPU, its page
number is presented to a set of associative
registers that contain page numbers and their
corresponding frame numbers. If the page
number is found in the associative registers,
its frame number is immediately available and
is used to access memory.
79. Memory Management
• If the page number is not in the associative
registers, a memory reference to the page
table must be made. When the frame number
is obtained, it can be used to access memory
(as desired). In addition, the page number
and frame number are added to the associative
registers, so that they can be found quickly on
the next reference.
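The hit/miss behavior described above can be sketched with a dictionary standing in for the associative registers (real TLBs are hardware and have a bounded size and a replacement policy, which this sketch omits; the page table is the assumed mapping from the earlier example).

```python
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # in-memory page table (assumed)
tlb = {}                                 # associative registers: page -> frame

def lookup(page):
    """Return (frame, 'hit'/'miss') for a page reference."""
    if page in tlb:                      # TLB hit: frame immediately available
        return tlb[page], "hit"
    frame = page_table[page]             # miss: extra memory reference
    tlb[page] = frame                    # cache the entry for next time
    return frame, "miss"

print(lookup(2))   # (1, 'miss')
print(lookup(2))   # (1, 'hit')
```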
80. Memory Management
• Another advantage in paging is that processes
can share pages therefore reducing overall
memory consumption.
Example:
Consider a system that supports 40 users, each of
whom executes a text editor. If the text editor
consists of 150K of code and 50K of data space, then
the system would need 200K x 40 = 8,000K to
support the 40 users.
However, if the text editor code is reentrant (pure
code that is non-self-modifying), then all 40 users can
share this code. The total memory consumption is
therefore 150K + 50K x 40 = 2,150K only.
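The savings from sharing the reentrant editor code work out as follows (sizes in K, from the example):

```python
USERS, CODE, DATA = 40, 150, 50

unshared = USERS * (CODE + DATA)   # every user gets private code and data
shared = CODE + USERS * DATA       # one shared code copy, private data each
print(unshared, shared)            # 8000 2150
```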
81. Memory Management
PAGING
• It is important to remember that in order to
share a program, it has to be reentrant, which
implies that it never changes during
execution.
82. Memory Management
SEGMENTATION
• Because of paging, there are now two ways
of viewing memory. These are the user’s
view (logical memory) and the actual
physical memory. There is a necessity of
mapping logical addresses into physical
addresses.
83. Memory Management
SEGMENTATION
• Logical Memory
• A user or programmer views memory as a
collection of variable-sized segments, with
no necessary ordering among the
segments.
84. Memory Management
• Therefore, a program is simply a set of
subroutines, procedures, functions, or modules.
[Figure: the logical address space as a collection of
segments - subroutine, stack, Sqrt, main program,
and symbol table.]
85. Memory Management
SEGMENTATION
• Each of these segments is of variable-length;
the size is intrinsically defined by the purpose
of the segment in the program. The user is
not concerned whether a particular segment
is stored before or after another segment.
The OS identifies elements within a segment
by their offset from the beginning of the
segment.
87. Memory Management
SEGMENTATION
• Segmentation is the memory-management
scheme that supports this user’s view of
memory. A logical address space is a
collection of segments. Each segment has a
name and a length. Addresses specify the
name of the segment or its base address
and the offset within the segment.
88. Memory Management
SEGMENTATION
Example:
To access an instruction in the Code
Segment of the 8086/88 processor, a program
must specify the base address (the CS register)
and the offset within the segment (the IP
register).
89. Memory Management
The mapping of logical address into physical
address is possible through the use of a segment
table.
[Figure: segmentation hardware - the CPU issues a
logical address (s, d); s indexes the segment table to
obtain the segment's base and limit; if d < limit, the
physical address base + d is sent to main memory;
otherwise the hardware traps to the operating system
with an addressing error.]
90. Memory Management
SEGMENTATION
• A logical address consists of two parts: a
segment number s, and an offset into the
segment, d. The segment number is an index
into the segment table. Each entry of the
segment table has a segment base and a
segment limit. The offset d must be between
0 and limit.
92. Memory Management
SEGMENTATION
• A reference to segment 3, byte 852, is
mapped to 3200 (the base of segment 3) +
852 = 4052. A reference to byte 1222 of
segment 0 would result in a trap to the
operating system since this segment is only
1000 bytes long.
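The translation and limit check above can be sketched as follows. Only segments 0 and 3 appear, with the limits and base given in the text; segment 0's base of 1400 is assumed for illustration.

```python
segment_table = {0: (1400, 1000), 3: (3200, 1100)}   # seg -> (base, limit)

def translate(s, d):
    """Map logical address (s, d) to a physical address, trapping
    (raising) if the offset is outside the segment."""
    base, limit = segment_table[s]
    if d >= limit:
        raise ValueError("trap: addressing error (offset beyond limit)")
    return base + d

print(translate(3, 852))   # 4052
# translate(0, 1222) would trap: segment 0 is only 1000 bytes long
```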