Chorus - Distributed Operating System [case study]
Akhil Nadh PC
ChorusOS is a microkernel real-time operating system built around a message-based computational model. ChorusOS began as the Chorus distributed real-time operating system research project at Institut National de Recherche en Informatique et Automatique (INRIA) in France in 1979. During the 1980s, Chorus was one of the two earliest microkernels (the other being Mach) and was developed commercially by Chorus Systèmes. Over time, development effort shifted away from distribution toward real-time support for embedded systems.
Virtual Memory
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
• Operating-System Examples
Background
Page Table When Some Pages Are Not in Main Memory
Steps in Handling a Page Fault
The Deadlock Problem
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
2. Memory management is the functionality of an operating system that handles or
manages primary memory and moves processes back and forth between main
memory and disk during execution.
Memory management keeps track of each and every memory location, regardless
of whether it is allocated to a process or free.
It checks how much memory is to be allocated to each process.
It decides which process will get memory at what time.
It tracks whenever memory gets freed or unallocated and updates the status
accordingly.
3. A logical address is generated by the CPU while a program is running.
Because it does not exist physically, the logical address is also known as a
virtual address.
The CPU uses this address as a reference to access the physical memory
location.
A hardware device called the memory-management unit (MMU) maps each
logical address to its corresponding physical address.
4. A physical address identifies the physical location of the required data in memory.
The user never deals with physical addresses directly but can access one through
its corresponding logical address.
The user program generates logical addresses and behaves as if it were running
in that logical address space, but the program needs physical memory for its
execution, so logical addresses must be mapped to physical addresses by the
MMU before they are used.
The term physical address space refers to the set of all physical addresses
corresponding to the logical addresses in a logical address space.
6. Parameter       Logical Address                           Physical Address
   Basic           Generated by the CPU                      Location in a memory unit
   Address space   The set of all logical addresses          The set of all physical addresses
                   generated by the CPU for a program        mapped to those logical addresses
   Visibility      The user can view the logical             The user can never view the
                   address of a program                      physical address of a program
   Generation      Generated by the CPU                      Computed by the MMU
   Access          The user can use the logical address      The user can access the physical
                   to access the physical address            address indirectly, never directly
7. Swapping is a mechanism in which a process can be moved temporarily out of
main memory to secondary storage (disk), making that memory available to
other processes. At some later time, the system swaps the process back from
secondary storage to main memory.
Although swapping usually affects performance, it helps run multiple large
processes in parallel, which is why swapping is also known as a technique for
memory compaction.
9. As processes are loaded and removed from memory, the free memory space is
broken into little pieces. Over time, processes cannot be allocated to these
memory blocks because of their small size, and the blocks remain unused. This
problem is known as fragmentation.
Fragmentation is of two types:
1. External fragmentation: the total free memory space is enough to satisfy a
request or hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: the memory block assigned to a process is bigger than
requested; the leftover portion is unused, as it cannot be used by another
process.
10. In operating systems, paging is a storage mechanism used to retrieve processes from
secondary storage into main memory in the form of pages.
The main idea behind paging is to divide each process into pages; main memory
is likewise divided into frames.
One page of a process is stored in one of the frames of memory. The pages
can be stored at different locations in memory, but the priority is always to find
contiguous frames or holes.
Pages of a process are brought into main memory only when they are required;
otherwise they reside in secondary storage.
Different operating systems define different frame sizes, but all frames must
be equal in size. Since pages are mapped to frames in paging, the page size
must be the same as the frame size.
12. Let us consider a main memory of 16 KB with a frame size of 1 KB; the
main memory is therefore divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3, and P4, of 4 KB each. Each
process is divided into pages of 1 KB so that one page can be stored in one
frame.
Initially, all the frames are empty, so the pages of the processes are stored
contiguously.
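The scenario above can be sketched in a few lines of Python. This is a hypothetical illustration; the process names and sizes come from the example, and the first-free-frame placement is an assumption matching the "initially all frames are empty" case:

```python
# Sketch of the paging example: 16 frames of 1 KB, four 4 KB processes
# (P1..P4), each split into 1 KB pages placed into the next free frame.
FRAME_COUNT = 16

def allocate(processes, frame_count=FRAME_COUNT):
    """Assign each page of each process to the next free frame.
    Returns a per-process page table: page number -> frame number."""
    frames = [None] * frame_count          # frame -> (process, page) or None
    page_tables = {}
    next_free = 0
    for name, pages in processes:
        table = {}
        for page in range(pages):
            if next_free >= frame_count:
                raise MemoryError("out of frames")
            frames[next_free] = (name, page)
            table[page] = next_free
            next_free += 1
        page_tables[name] = table
    return page_tables

tables = allocate([("P1", 4), ("P2", 4), ("P3", 4), ("P4", 4)])
print(tables["P2"])   # {0: 4, 1: 5, 2: 6, 3: 7}
```

Since all frames start free, P1 occupies frames 0 to 3, P2 frames 4 to 7, and so on, exactly as in the contiguous layout the slide describes.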
15. A page address is called a logical address and is represented by a page number
and an offset.
A frame address is called a physical address and is represented by a frame number
and an offset.
Logical Address = page number + page offset
Physical Address = frame number + page offset
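As a rough sketch of this translation step, assuming a 1 KB page size and a small hand-made page table (both assumed values, not from the slides):

```python
# Hypothetical sketch of logical-to-physical translation: split the
# logical address into (page number, offset), look the page up in the
# page table, and rebuild the address from the frame number.
PAGE_SIZE = 1024   # assumed 1 KB pages

def translate(logical_addr, page_table):
    """Return the physical address for a logical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}   # page -> frame (assumed mapping)
# page 1, offset 100 -> frame 2, offset 100 = 2*1024 + 100 = 2148
print(translate(1 * PAGE_SIZE + 100, page_table))   # 2148
```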
16. Here is a list of advantages and disadvantages of paging:
• Paging reduces external fragmentation, but still suffers from internal
fragmentation.
• Paging is simple to implement and is regarded as an efficient memory
management technique.
• Due to the equal size of pages and frames, swapping becomes very easy.
• The page table requires extra memory space, so paging may not be good for
a system with small RAM.
17. Segmentation is a memory management technique in which each job is divided into
several segments of different sizes, one for each module containing pieces that
perform related functions. Each segment is actually a different logical address space of
the program.
When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of
available memory.
Segmentation works very much like paging, but segments are of variable length
whereas pages are of fixed size.
A program segment contains the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment numbers, their sizes, and
their corresponding memory locations in main memory. For each segment, the table
stores the starting address and the length of the segment. A reference to a memory
location includes a value that identifies a segment and an offset.
19. A computer can address more memory than the amount physically installed on the
system. This extra memory is called virtual memory, and it is a section of a
hard disk set up to emulate the computer's RAM.
The following are situations in which the entire program is not required to be loaded
fully into main memory:
• User-written error-handling routines are used only when an error occurs in the
data or computation.
• Certain options and features of a program may be used rarely.
• Many tables are assigned a fixed amount of address space even though only a
small part of each table is actually used.
The ability to execute a program that is only partially in memory would confer many
benefits:
• Fewer I/O operations would be needed to load or swap each user program into
memory.
• A program would no longer be constrained by the amount of physical memory
that is available.
• Each user program would take less physical memory, so more programs could
run at the same time, with a corresponding increase in CPU utilization and
throughput.
20. In modern microprocessors intended for general-purpose use, a memory
management unit, or MMU, is built into the hardware. The MMU's job is to
translate virtual addresses into physical addresses. A basic example is given
below −
21. A demand paging system is quite similar to a paging system with swapping,
where processes reside in secondary memory and pages are loaded only on
demand, not in advance.
22. Advantages
• Large virtual memory.
• More efficient use of memory.
• No limit on the degree of multiprogramming.
Disadvantages
• The number of tables and the processor overhead for handling page
interrupts are greater than with simple paged management techniques.
23. A page fault occurs when a program attempts to access a block of memory that is
not stored in the physical memory, or RAM. The fault notifies the operating
system that it must locate the data in virtual memory, then transfer it from the
storage device, such as an HDD or SSD, to the system RAM.
24. First In First Out (FIFO) algorithm
The oldest page in main memory is the one selected for replacement.
Easy to implement: keep a list, replace pages from the tail, and add new
pages at the head.
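A minimal FIFO sketch in Python, counting page faults for a reference string (the reference string and frame count are assumed example values):

```python
from collections import deque

def fifo_faults(references, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()            # oldest page at the left
    in_memory = set()
    faults = 0
    for page in references:
        if page in in_memory:
            continue            # hit: nothing to do
        faults += 1
        if len(frames) == frame_count:
            in_memory.discard(frames.popleft())  # evict the oldest page
        frames.append(page)
        in_memory.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))   # 10
```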
25. An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. Such an algorithm exists and has been called OPT or MIN.
Replace the page that will not be used for the longest period of time; this
requires knowing, for each page, the time at which it will next be used.
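OPT can be sketched by scanning the remaining reference string for each resident page's next use (reference string and frame count are assumed example values; a real system cannot know the future, which is why OPT serves only as a benchmark):

```python
def opt_faults(references, frame_count):
    """Count page faults under OPT/MIN: evict the resident page whose
    next use lies farthest in the future."""
    frames = set()
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) == frame_count:
            def next_use(p):
                # Pages never used again get infinity and are evicted first.
                future = references[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(opt_faults(refs, 3))   # 7
```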
26. Least Recently Used (LRU) algorithm
The page which has not been used for the longest time in main memory is
the one selected for replacement.
Easy to implement: keep a list and replace pages by looking back in time.
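The "list ordered by last use" can be sketched with an ordered dictionary (reference string and frame count are assumed example values):

```python
from collections import OrderedDict

def lru_faults(references, frame_count):
    """Count page faults under LRU replacement."""
    recency = OrderedDict()     # least recently used page first
    faults = 0
    for page in references:
        if page in recency:
            recency.move_to_end(page)       # refresh on a hit
            continue
        faults += 1
        if len(recency) == frame_count:
            recency.popitem(last=False)     # evict least recently used
        recency[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 3))   # 9
```

On this reference string LRU (9 faults) lands between FIFO and the unattainable OPT, which is the usual pattern.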
27. Least Frequently Used (LFU) algorithm
The page with the smallest reference count is the one selected for
replacement.
This algorithm suffers when a page is used heavily during the initial phase
of a process but is never used again.
28. Most Frequently Used (MFU) algorithm
This algorithm is based on the argument that the page with the smallest count
was probably just brought in and has yet to be used, so the page with the
largest count is replaced instead.
29. Performance of Demand Paging
The performance of demand paging is often measured in terms of the effective access
time: the amount of time it takes to access memory when the cost of page faults is
amortized over all memory accesses. In some sense it is an average or expected
access time.
ea = (1 - p) * ma + p * pft
where
ea = effective access time
ma = physical memory (core) access time
pft = page fault time
p = probability of a page fault occurring
(1 - p) = probability of accessing memory in an available frame
The page fault time is the sum of the additional overhead associated with accessing a
page in the backing store. This includes the extra context switches, the disk latency
and transfer time of page-in and page-out operations, and the overhead of executing
an operating system trap.
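Plugging assumed illustrative values into the formula (200 ns memory access, 8 ms page-fault service time; these numbers are not from the slides) shows how even a small fault probability dominates the effective access time:

```python
def effective_access_time(p, ma_ns, pft_ns):
    """ea = (1 - p) * ma + p * pft, all times in nanoseconds."""
    return (1 - p) * ma_ns + p * pft_ns

ma = 200                 # assumed memory access time (ns)
pft = 8_000_000          # assumed page fault time: 8 ms in ns
# one fault per 1000 accesses -> 0.999*200 + 0.001*8e6 = 8199.8 ns
print(effective_access_time(0.001, ma, pft))
```

With p = 0.001 the effective access time is roughly 40 times the raw memory access time, which is why keeping the page-fault rate low matters so much.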
30. There are various constraints on frame-allocation strategies:
• You cannot allocate more than the total number of available frames.
• At least a minimum number of frames must be allocated to each process, for
two reasons. First, when fewer frames are allocated, the page-fault ratio
increases, degrading the performance of the process's execution. Second,
there must be enough frames to hold all the different pages that any single
instruction can reference.
31. Equal allocation:
In a system with x frames and y processes, each process gets an equal number
of frames, x/y.
For instance, if the system has 48 frames and 9 processes, each process gets 5
frames; the three frames not allocated to any process can be used as a free-
frame buffer pool.
Disadvantage: in systems with processes of varying sizes, it does not make much
sense to give each process equal frames. Allocating a large number of frames to
a small process eventually wastes many allocated but unused frames.
32. Proportional allocation: frames are allocated to each process according to the
process size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m,
where S is the sum of the sizes of all the processes and m is the number of
frames in the system.
For instance, in a system with 62 frames, a 10 KB process and a 127 KB process
are allocated (10/137)*62 = 4 frames and (127/137)*62 = 57 frames respectively.
Advantage: processes share the available frames according to their needs,
rather than equally.
33. Local replacement: when a process needs a page that is not in memory, it brings
in the new page and allocates it a frame from its own set of allocated frames only.
Advantage: the set of pages in memory for a process, and its page-fault ratio, are
affected only by that process's own paging behavior.
Disadvantage: a low-priority process may hinder a high-priority process by not
making its frames available to it.
Global replacement: when a process needs a page that is not in memory, it can
bring in the new page and allocate it a frame from the set of all frames, even if
that frame is currently allocated to some other process; that is, one process can
take a frame from another.
Advantage: does not hinder the performance of other processes, and hence
results in greater system throughput.
Disadvantage: the page-fault ratio of a process cannot be controlled solely by the
process itself; the pages in memory for a process depend on the paging behavior
of other processes as well.
34. If page faults and swapping happen very frequently, the operating system has to
spend more of its time swapping pages than executing processes. This state is
called thrashing, and because of it CPU utilization is reduced.
35. Working Set Model
This model is based on locality: a page used recently is likely to be used again,
and the pages near it are also likely to be used. The working set is the set of
pages referenced in the most recent D time units; a page that has not been
referenced within the last D units is automatically dropped from the set. The
accuracy of the working set therefore depends on the chosen D. The working-set
model avoids thrashing while keeping the degree of multiprogramming as high
as possible.
36. Page Fault Frequency
This is a more direct approach than the working-set model. When a process is
thrashing, we know it has too few frames; when it is not faulting at all, it may
have too many. Based on this property, we set an upper and a lower bound on
the desired page-fault rate and allocate or remove frames accordingly. If a
process's page-fault rate falls below the lower limit, frames can be removed
from it; if the rate rises above the upper limit, more frames are allocated to it.
If no frames are available because of a high page-fault rate, we suspend the
process and restart it when frames become available.
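The policy above can be sketched as a small control step run once per measurement interval. The fault-rate bounds are assumed illustrative values, not from the slides:

```python
LOWER, UPPER = 0.02, 0.10   # assumed bounds on the desired fault rate

def adjust_frames(frames, fault_rate, free_frames):
    """One page-fault-frequency control step.
    Returns (new_frame_count, suspend?)."""
    if fault_rate > UPPER:
        if free_frames == 0:
            return frames, True          # no frames left: suspend process
        return frames + 1, False         # thrashing: grant another frame
    if fault_rate < LOWER and frames > 1:
        return frames - 1, False         # too many frames: reclaim one
    return frames, False                 # within bounds: leave as is

print(adjust_frames(8, 0.15, free_frames=3))   # (9, False)
print(adjust_frames(8, 0.15, free_frames=0))   # (8, True)
print(adjust_frames(8, 0.01, free_frames=3))   # (7, False)
```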