The document discusses memory management techniques used by operating systems. It explains that memory management involves allocating and swapping blocks of data between main memory and secondary storage, because memory I/O is slow compared to the CPU. The key requirements for memory management are relocation, protection, and sharing of memory. Early techniques such as fixed and dynamic partitioning had limitations that were addressed by paging and segmentation, which divide memory into small fixed-size chunks (pages) or variable-length pieces (segments). Virtual memory management allows processes to be larger than physical memory by swapping pages or segments between main and secondary storage on demand.
This presentation describes the main memory allocation methods in memory management (first fit, best fit, and worst fit), the fragmentation problem, and solutions to it.
Operating System
Topic Memory Management
for Btech/Bsc (C.S)/BCA...
Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of every memory location, whether it is allocated to some process or free. It determines how much memory is to be allocated to each process and decides which process gets memory at what time. It tracks whenever some memory is freed or deallocated and updates the status accordingly.
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process may be underway at any time.
The Objectives of these slides are:
- To provide a detailed description of various ways of organizing memory hardware
- To discuss various memory-management techniques, including paging and segmentation
- To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging
2. The need for memory
management
• Memory is cheap today, and getting
cheaper
– But applications are demanding more and
more memory, there is never enough!
• Memory management involves allocating
and swapping blocks of data to/from
secondary storage.
• Memory I/O is slow compared to a CPU
– The OS must cleverly time the swapping to
maximise the CPU’s efficiency
5. Requirements: Relocation
• The programmer does not know where the
program will be placed in memory when it
is executed,
– it may be swapped to disk and return to main
memory at a different location (relocated)
• Memory references must be translated to
the actual physical memory address
6. Requirements: Protection
• Processes should not be able to reference
memory locations in another process
without permission
• Impossible to check absolute addresses at
compile time
• Must be checked at run time
7. Requirements: Sharing
• Allow several processes to access the
same portion of memory
• Better to allow each process access to the
same copy of the program rather than
have their own separate copy
8. Partitioning
• An early method of managing memory
– Pre-virtual memory
– Not used much now
• But, it will clarify the later discussion of
virtual memory if we look first at
partitioning
– Virtual Memory has evolved from the
partitioning methods
10. Fixed Partitioning
• Equal-size partitions
– Any process whose size is less than
or equal to the partition size can be
loaded into an available partition
• The operating system can swap a
process out of a partition
11. Fixed Partitioning Problems
• A program may not fit in a partition.
– The programmer must design the program
with overlays
• Main memory use is inefficient.
– Any program, no matter how small, occupies
an entire partition.
– This results in internal fragmentation.
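The internal-fragmentation point above can be made concrete with a quick calculation. This is an illustrative sketch; the 8 MB partition size and the 100 KB process size are made-up example values:

```python
# Internal fragmentation with equal-size fixed partitions.
# PARTITION_SIZE and the example process size are assumed values.
PARTITION_SIZE = 8 * 1024 * 1024  # 8 MB partitions

def internal_fragmentation(process_size: int) -> int:
    """Bytes wasted when a process occupies one whole fixed partition."""
    if process_size > PARTITION_SIZE:
        raise ValueError("process does not fit; overlays would be needed")
    return PARTITION_SIZE - process_size

# A tiny 100 KB program still claims an entire 8 MB partition:
print(internal_fragmentation(100 * 1024))  # 8286208 bytes wasted
```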
12. Solution – Unequal Size
Partitions
• Lessens both problems
– but doesn’t solve completely
13. Placement Algorithm
• Equal-size
– Placement is trivial (no options)
• Unequal-size
– Can assign each process to the smallest
partition within which it will fit
– Queue for each partition
– Processes are assigned in such a way as to
minimize wasted memory within a partition
15. Remaining Problems with
Fixed Partitions
• The number of active processes is limited
by the system
– i.e. limited by the predetermined number of
partitions
• A large number of very small processes will
not use the space efficiently
– In either fixed or variable length partition
methods
17. Dynamic Partitioning
Example
• External Fragmentation
• Memory external to all
processes is fragmented
• Can resolve using
compaction
– OS moves processes so
that they are contiguous
– Time consuming and
wastes CPU
time(Relocation)
[Figure 7.4: successive snapshots of a 64M memory: OS (8M) plus P1 (20M), P2 (14M), P3 (18M) loaded into the initially empty (56M) space, then 4M and 6M holes appearing as P2 is swapped out, P4 (8M) is loaded, and P2 is swapped back in]
Refer to Figure 7.4
18. Dynamic Partitioning
• Operating system must decide which free
block to allocate to a process
• Best-fit algorithm
– Chooses the block that is closest in size to the
request
• Worst performer overall
• Since smallest block is found for process, the
smallest amount of fragmentation is left
– Memory compaction must be done more often
19. Dynamic Partitioning
• First-fit algorithm
– Scans memory from the beginning and
chooses the first available block that is large
enough
– Fastest
20. Dynamic Partitioning
• Next-fit
– Scans memory from the location of the last
placement
– More often allocates a block of memory at the
end of memory where the largest block is
found
– The largest block of memory is broken up into
smaller blocks
– Compaction is required to obtain a large block
at the end of memory
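The three placement algorithms above can be sketched over a simple free-block list. This is a minimal sketch: the block sizes are a made-up free list chosen so a 16M request gives the same fragments as the Figure 7.5 discussion in the notes (best-fit leaves 2M, first-fit 6M, next-fit 20M), and the next-fit starting index is an assumption:

```python
# First-, best- and next-fit placement over a list of free blocks (sizes in MB).
def first_fit(blocks, request):
    """Return the index of the first block large enough, or None."""
    for i, size in enumerate(blocks):
        if size >= request:
            return i
    return None

def best_fit(blocks, request):
    """Return the index of the smallest block large enough, or None."""
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return min(candidates)[1] if candidates else None

def next_fit(blocks, request, last):
    """Scan circularly from just after the last placement."""
    n = len(blocks)
    for k in range(n):
        i = (last + 1 + k) % n
        if blocks[i] >= request:
            return i
    return None

free = [8, 12, 22, 18, 6, 14, 36]   # assumed free list (MB)
print(first_fit(free, 16))    # 2 -> 22M block, leaves a 6M fragment
print(best_fit(free, 16))     # 3 -> 18M block, leaves a 2M fragment
print(next_fit(free, 16, 5))  # 6 -> 36M block, leaves a 20M fragment
```

Note how best-fit wins locally (smallest leftover) but, as the slide says, litters memory with fragments too small to reuse, forcing more frequent compaction.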
22. Buddy System
• Entire space available is treated as a
single block of size 2^U
• If a request of size s where 2^(U-1) < s <= 2^U
– entire block is allocated
• Otherwise block is split into two equal
buddies
– Process continues until smallest block greater
than or equal to s is generated
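The buddy system's rounding rule can be sketched as follows. Illustrative only; U = 20 (a 1 MB total space) is chosen to match the Figure 7.6 example in the notes:

```python
# Buddy-system block sizing: round a request up to the smallest power of
# two that is >= s, within a 2**U total space (U = 20, i.e. 1 MB, assumed).
def buddy_block_size(s: int, U: int = 20) -> int:
    if s > 2 ** U:
        raise ValueError("request larger than total space")
    k = 0
    while 2 ** k < s:
        k += 1
    return 2 ** k

# A 100 KB request is served from a 128 KB buddy block:
print(buddy_block_size(100 * 1024))  # 131072
```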
24. Relocation
• When program loaded into memory the
actual (absolute) memory locations are
determined
• A process may occupy different partitions
which means different absolute memory
locations during execution
– Swapping
– Compaction
25. Addresses
• Logical
– Reference to a memory location independent
of the current assignment of data to memory.
• Relative
– Address expressed as a location relative to
some known point.
• Physical or Absolute
– The absolute address or actual location in
main memory.
27. Registers Used during
Execution
• Base register
– Starting address for the process
• Bounds (limit) register
– Ending location of the process
• These values are set when the process is
loaded or when the process is swapped in
28. Registers Used during
Execution
• The value of the base register is added to
a relative address to produce an absolute
address
• The resulting address is compared with
the value in the bounds register
• If the address is not within bounds, an
interrupt is generated to the operating
system
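The base/bounds check can be sketched in a few lines. The register values here are made-up; real hardware performs this check on every memory reference:

```python
# Base/bounds address translation (sketch; register values are assumed).
BASE = 0x40000    # starting physical address of the process
BOUNDS = 0x50000  # ending physical address (limit)

def translate(relative_addr: int) -> int:
    """Add the base register, then check against the bounds register."""
    physical = BASE + relative_addr
    if physical > BOUNDS:
        # In hardware this raises an interrupt to the operating system.
        raise MemoryError("bounds violation")
    return physical

print(hex(translate(0x1000)))  # 0x41000
```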
29. Paging
• Partition memory into small equal fixed-
size chunks and divide each process into
the same size chunks
• The chunks of a process are called pages
• The chunks of memory are called frames
30. Paging
• Operating system maintains a page table
for each process
– Contains the frame location for each page in
the process
– Memory address consists of a page number
and offset within the page
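A minimal sketch of the page-number/offset split described above, assuming 1 KB pages and a made-up page table:

```python
# Logical-to-physical translation via a page table (sketch).
# PAGE_SIZE and the page_table contents are assumed example values.
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def paged_translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)  # split the address
    frame = page_table[page]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(paged_translate(1500))  # page 1, offset 476 -> frame 2 -> 2524
```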
33. Segmentation
• A program can be subdivided into
segments
– Segments may vary in length
– There is a maximum segment length
• Addressing consists of two parts
– a segment number and
– an offset
• Segmentation is similar to dynamic
partitioning
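The two-part segment address above can be sketched as a table lookup plus a length check (the segment bases and lengths are made-up values):

```python
# Segment-table translation with a length check (sketch).
# Each entry: (base physical address, segment length) -- assumed values.
segment_table = [(0x1000, 0x400), (0x6000, 0x200)]

def seg_translate(segment: int, offset: int) -> int:
    base, length = segment_table[segment]
    if offset >= length:
        # Unlike paging, segments vary in length, so the offset must be
        # checked against the segment's length, not a fixed page size.
        raise MemoryError("offset beyond segment length")
    return base + offset

print(hex(seg_translate(1, 0x10)))  # 0x6010
```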
35. Key points in
Memory Management
1) A process may be broken up into pieces
that need not be located contiguously
in main memory
2) Memory references are logical addresses
dynamically translated into physical
addresses at run time
– A process may be swapped in and out of main
memory occupying different regions at
different times during execution
36. Breakthrough in
Memory Management
• If both of those two characteristics are
present,
– then it is not necessary that all of the pages
or all of the segments of a process be in main
memory during execution.
• If the next instruction, and the next data
location are in memory then execution can
proceed
– at least for a time
37. Implications of
this new strategy
• More processes may be maintained in
main memory
– Only load in some of the pieces of each
process
– With so many processes in main memory, it is
very likely a process will be in the Ready state
at any particular time
• A process may be larger than all of main
memory
38. Support Needed for
Virtual Memory
• Hardware must support paging and
segmentation
• Operating system must be able to manage
the movement of pages and/or segments
between secondary memory and main
memory
39. Paging with virtual memory
• Solve the external fragmentation problem by using fixed-size units in
both physical and virtual memory
[Figure: frames 0 … Y of the physical address space holding pages 0 … X of the virtual address space]
Editor's Notes
In a uniprogramming system, main memory is divided into two parts: one part for the operating system (resident monitor, kernel) and one part for the program currently being executed. In a multiprogramming system, the “user” part of memory must be further subdivided to accommodate multiple processes. Emphasise that memory management is vital in a multiprogramming system. If only a few processes are in memory, then for much of the time all of the processes will be waiting for I/O and the processor will be idle. Thus memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.
This figure depicts a process image. Talk the students through this diagram. Assume that the process image occupies a contiguous region of main memory. The OS needs to know the location of: the process control information, the execution stack, and the entry point to begin execution of the program for this process. The operating system knows this information because it is managing memory and is responsible for bringing this process into main memory. However, the processor must deal with memory references within the program. Branch instructions contain an address to reference the instruction to be executed next. Data reference instructions contain the address of the byte or word of data referenced. Somehow, the processor hardware and operating system software must be able to translate the memory references found in the code of the program into actual physical memory addresses, reflecting the current location of the program in main memory.
The following slides expand on these topics.
Normally, a user process cannot access any portion of the operating system, neither program nor data. Usually a program in one process cannot branch to an instruction in another process or access the data area of another process. The processor must be able to abort such instructions at the point of execution. Note that the memory protection requirement must be satisfied by the processor (hardware) rather than the operating system (software), because the OS cannot anticipate all of the memory references that a program will make. It is only possible to assess the permissibility of a memory reference at the time of execution. Why is it a bad thing for one process to be able to read, or even write, memory occupied by a different process? Why is it impossible to check absolute addresses at compile time (hint: see relocation)?
Any protection mechanism must have the flexibility to allow several processes to access the same portion of main memory. Processes that are cooperating on some task may need to share access to the same data structure.
Don’t dwell on this slide – it is just an indication on the various approaches which will be covered in further detail in other slides
Finish by mentioning that fixed partitioning is almost unknown today.
Animated slide. Imagine a system with 64M RAM. Initially, main memory is empty, except for the operating system. Three processes are loaded in, leaving a ‘hole’ too small for any further process. At some point, none of the processes in memory is ready. The operating system swaps out process 2, which leaves sufficient room to load a new process, process 4, but that creates another hole. Later, a point is reached at which none of the processes in main memory is ready, but process 2, in the Ready-Suspend state, is available. Because there is insufficient room in memory for process 2, the operating system swaps process 1 out and swaps process 2 back in, leaving yet another hole. Explain external fragmentation and compaction; mention that compaction implies the capability of dynamic relocation.
Slide shows Fig 7.5, an example memory configuration after a number of placement and swapping-out operations. The last block that was used was a 22-Mbyte block from which a 14-Mbyte partition was created. Figure 7.5b shows the difference between the best-, first-, and next-fit placement algorithms in satisfying a 16-Mbyte allocation request. Best-fit will search the entire list of available blocks and make use of the 18-Mbyte block, leaving a 2-Mbyte fragment. First-fit results in a 6-Mbyte fragment, and next-fit results in a 20-Mbyte fragment.
A fixed partitioning scheme limits the number of active processes and may use space inefficiently if there is a poor match between available partition sizes and process sizes. A dynamic partitioning scheme is more complex to maintain and includes the overhead of compaction. An interesting compromise is the buddy system.
Figure 7.6 gives an example using a 1-Mbyte initial block. The first request, A, is for 100 Kbytes, for which a 128K block is needed. The initial block is divided into two 512K buddies. The first of these is divided into two 256K buddies, and the first of these is divided into two 128K buddies, one of which is allocated to A. The next request, B, requires a 256K block. Such a block is already available and is allocated. The process continues with splitting and coalescing occurring as needed. Note that when E is released, two 128K buddies are coalesced into a 256K block, which is immediately coalesced with its buddy.
Relocation means that the address the program sees is different from the physical address of the data being stored in RAM. A translation must be made from both logical and relative addresses to arrive at the absolute address.
Animated slide. System with a number of frames allocated. Process A, stored on disk, consists of four pages. When it comes time to load this process, the operating system finds four free frames and loads the four pages of process A into the four frames. Process B, consisting of three pages, and process C, consisting of four pages, are subsequently loaded. Then process B is suspended and is swapped out of main memory. Later, all of the processes in main memory are blocked, and the operating system needs to bring in a new process, process D, which consists of five pages. The operating system loads the pages into the available frames and updates the page table.
The difference with dynamic partitioning is that with segmentation a program may occupy more than one partition, and these partitions need not be contiguous. Segmentation eliminates internal fragmentation but suffers from external fragmentation (as does dynamic partitioning). However, because a process is broken up into a number of smaller pieces, the external fragmentation should be less. A consequence of unequal-size segments is that there is no simple relationship between logical addresses and physical addresses. Analogous to paging, a simple segmentation scheme would make use of a segment table for each process and a list of free blocks of main memory. Each segment table entry would have to give the starting address in main memory of the corresponding segment and the length of the segment, to ensure that invalid addresses are not used. When a process enters the Running state, the address of its segment table is loaded into a special register used by the memory management hardware.
Two characteristics of paging and segmentation are the keys to this breakthrough in memory management: 1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time, so a process may be swapped in and out of main memory, occupying different regions of main memory at different times during the course of execution. 2. A process may be broken up into a number of pieces (pages or segments), and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
If the piece (segment or page) that holds the next instruction to be fetched and the piece that holds the next data location to be accessed are in main memory, then execution may proceed (at least for a time).
1. More processes may be maintained in main memory. Because we are only going to load some of the pieces of any particular process, there is room for more processes. This leads to more efficient utilization of the processor because it is more likely that at least one of the more numerous processes will be in a Ready state at any particular time. 2. A process may be larger than all of main memory. Without this scheme a programmer must be acutely aware of how much memory is available. If the program being written is too large, the programmer must devise ways to structure the program into pieces that can be loaded separately in some sort of overlay strategy. With virtual memory based on paging or segmentation, that job is left to the operating system and the hardware. As far as the programmer is concerned, he or she is dealing with a huge memory, the size associated with disk storage. The operating system automatically loads pieces of a process into main memory as required.
For virtual memory to be practical and effective, two ingredients are needed. First, there must be hardware support for the paging and/or segmentation scheme to be employed. Second, the operating system must include software for managing the movement of pages and/or segments between secondary memory and main memory.