Vasundhara Rathod, Monali Chim, Pramila Chawan / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 3, May-Jun 2013, pp. 1397-1401
A Survey Of Page Replacement Algorithms In Linux
Vasundhara Rathod*, Monali Chim**, Pramila Chawan***
(Department of Computer Engineering, Veermata Jijabai Technological Institute, Mumbai)
ABSTRACT
The speed and efficiency of a processor depend inversely on the time it takes to handle instructions. The speed of a processing device depends not only on architectural features such as the chipset, transport buses, and cache, but also on the memory mapping scheme. A virtual memory system needs a page replacement algorithm to decide which pages should be evicted from memory when a page fault occurs. Many page replacement algorithms have been proposed, each attempting to minimize the page fault rate while incurring the least amount of overhead. This paper summarizes traditional algorithms such as LRU and CLOCK, and also surveys recent approaches such as LIRS, CLOCK-Pro, ARC, and CAR, along with their advantages and disadvantages.

Keywords - CAR, CLOCK, Linux, LRU, Page Replacement.
I. INTRODUCTION
Memory management is one of the most important parts of the operating system. To realize the full potential of multiprogramming systems, it is indispensable to interleave the execution of more programs than can be physically accommodated in main memory. For this reason we use a two-level memory hierarchy consisting of a faster but costlier main memory and a slower but cheaper secondary memory.

Virtual memory systems use this hierarchy to bring parts of a program into main memory from secondary memory in units called pages [1]. Pages are brought into main memory only when the executing process demands them; this is known as demand paging.

A page fault occurs when a requested page is not in main memory and must be brought in from secondary memory. In this case an existing page needs to be discarded. The selection of which page to discard is performed by page replacement algorithms, which try to minimize the page fault rate at the least overhead.

This paper outlines the memory management subsystem and some advanced page replacement algorithms in chronological order, along with their relative advantages and disadvantages, starting with basic algorithms such as Belady's MIN, LRU, and CLOCK, and moving on to the more advanced Dueling CLOCK, LRU-K, LIRS, CLOCK-Pro, ARC, and CAR algorithms.
II. MEMORY MANAGEMENT SUBSYSTEM
Virtual memory is essential for an operating system because it makes several things possible. First, it helps to isolate tasks from each other by encapsulating them in their private address spaces. Second, virtual memory can give tasks the impression that more memory is available than actually exists. And third, by using virtual memory, there can be multiple copies of the same program, linked to the same addresses, running in the system. There are at least two known mechanisms to implement virtual memory: segmentation and paging.
The memory management subsystem provides:

Large Address Spaces: The operating system makes the system appear to have a larger amount of memory than actually exists. The virtual memory can be many times greater than the physical memory in the system [1, 2].

Protection: Each process in the system has its own virtual address space. These virtual address spaces are completely separate from each other, so a process running one application cannot affect another. Moreover, the hardware virtual memory mechanisms allow areas of memory to be protected against writing. This protects code and data from being overwritten by rogue applications.

Memory Mapping: Memory mapping is used to map files into a process's address space. The contents of a file are linked directly into the virtual address space of the process.

Unbiased Physical Memory Allocation: The memory management subsystem allows each running process in the system an equal share of physical memory.

Shared Virtual Memory: Although virtual memory allows processes to have separate (virtual) address spaces, some cases require processes to share memory. For example, there can be several processes in the system running the bash command shell. Rather than having several copies of bash, one in each process's virtual address space, it is better to have only one copy in physical memory, shared by all of the processes running bash. Dynamic libraries are another common example of executable code shared between several processes. Shared memory can also be used as an Inter Process Communication (IPC) [2, 4] mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the Unix System V shared memory IPC.
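As a small illustration of the memory mapping idea above, the following Python sketch (using a temporary file as a stand-in for any data file) maps a file directly into the process's address space, so that ordinary memory reads and writes operate on the file's contents:

```python
import mmap
import os
import tempfile

# Create a small file to map (a stand-in for any data file).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello mapped memory")
os.close(fd)

# Map the file into this process's address space: after this, reads and
# writes go through ordinary memory accesses rather than read()/write().
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        first_word = mm[:5].decode()  # read through the mapping
        mm[0:5] = b"HELLO"            # write through the mapping

# The write above landed in the file itself.
with open(path, "rb") as f:
    updated = f.read()
os.remove(path)
print(first_word, updated)
```

The same `mmap` call with a shared mapping is also the basis for the shared-memory IPC described above, since two processes mapping the same file see the same physical pages.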
III. PAGE REPLACEMENT ALGORITHMS
1. Belady's MIN
This is the optimal algorithm, which guarantees the best possible performance. It proposes removing from memory the page that will not be accessed for the longest time. This optimal result is referred to as Belady's MIN algorithm or the clairvoyant algorithm. The replacement decisions rely on knowledge of the future page sequence, but it is impossible to predict how far in the future pages will be needed, which makes the algorithm impractical for real systems. It can, however, be used to compare the effectiveness of other replacement algorithms, making it useful in simulation studies, since it provides a lower bound on page fault rates under various operating conditions.
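Although MIN cannot be implemented online, it is easy to simulate offline over a recorded reference string, which is how it is used as a lower bound in simulation studies. A minimal sketch:

```python
def min_faults(references, frames):
    """Simulate Belady's MIN: on a fault with a full buffer, evict the
    resident page whose next use lies farthest in the future (or never)."""
    memory, faults = set(), 0
    for i, page in enumerate(references):
        if page in memory:
            continue                      # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.add(page)              # free frame available
            continue

        def next_use(p):
            # Distance to the next reference of p; infinity if never used again.
            for j in range(i + 1, len(references)):
                if references[j] == p:
                    return j
            return float("inf")

        memory.remove(max(memory, key=next_use))
        memory.add(page)
    return faults

print(min_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # prints 7
```

Running any practical algorithm over the same trace and comparing its fault count against this value shows how far it is from optimal.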
2. Least Recently Used (LRU)
The LRU policy is based on the principle of locality, which states that program and data references within a process tend to cluster together. The Least Recently Used replacement policy selects for replacement the page that has not been referenced for the longest time. LRU was long thought to be the optimum practical policy. The problem with this approach is the difficulty of implementation. One approach would be to label each page with the time of its last reference; this would have to be done at every memory reference, for both instructions and data. LRU performs nearly as well as the optimal policy, but it is difficult to implement and imposes a significant overhead.
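The stack-of-references bookkeeping described above can be sketched with an ordered dictionary that keeps resident pages in recency order (a toy model, not an OS implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU page buffer: the most recently used page sits at the end."""

    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()
        self.faults = 0

    def reference(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # hit: refresh recency
        else:
            self.faults += 1
            if len(self.pages) >= self.frames:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[page] = True

cache = LRUCache(3)
for p in [1, 2, 3, 1, 4]:   # referencing 4 evicts page 2, the LRU page
    cache.reference(p)
print(sorted(cache.pages), cache.faults)   # prints [1, 3, 4] 4
```

The per-reference `move_to_end` is exactly the bookkeeping cost that makes true LRU expensive in hardware, and what CLOCK, below, approximates cheaply.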
3. CLOCK
In CLOCK, all page frames are visualized as arranged in a circular list that resembles a clock. The hand of the clock points to the oldest page in the buffer. Each page has an associated reference bit that is set whenever that page is referenced. The replacement policy is invoked on a page miss, in which case the page pointed to by the hand, i.e. the oldest page, is inspected. If the reference bit of that page is set, the bit is reset to zero and the hand advances to the next oldest page. This process continues until a page with reference bit zero is found. That page is removed from the buffer and the new page is brought in its place with the reference bit set to zero. It has been observed that the CLOCK algorithm approximates LRU very closely with a minimal amount of overhead.
4. Dueling CLOCK
Dueling CLOCK is an adaptive
replacement policy that is based on the CLOCK
algorithm. A major disadvantage of the regular
CLOCK algorithm is that it is not scan resistant,
i.e., a sequential scan can push frequently used
pages out of main memory [8]. To combat this
drawback, a scan resistant CLOCK algorithm is
presented and a set dueling approach is used to
adaptively choose either the CLOCK or the scan
resistant CLOCK as the dominant algorithm for
page replacement.
In order to make clock scan resistant, the only
change required is that the hand of the clock should
now point to the newest page in the buffer rather
than the oldest page. Now, during a sequential scan,
the same page frame is inspected first every time,
and only a single page frame is utilized for all pages
in the scan keeping the rest of the frequently
accessed pages secure in the buffer.
The Dueling CLOCK algorithm suggests a
method to adaptively use the above two algorithms,
namely, CLOCK and scan resistant CLOCK. The
cache is divided into three groups, G1, G2, and G3.
Small group G1 always uses CLOCK algorithm for
replacement while small group G2 always uses the
scan resistant CLOCK policy. The larger group G3
can use either CLOCK or scan resistant CLOCK
policies depending upon the relative performance of
groups G1 and G2. In order to determine the
replacement policy for G3, a 10-bit policy select
counter (PSEL) is used. The PSEL counter is
decremented whenever a cache miss occurs on a
cache set in group G1 and incremented whenever a
cache miss occurs on a cache set in group G2.
Group G3 then adopts the CLOCK replacement
policy when the MSB of PSEL is 1; otherwise, G3
uses the scan-resistant CLOCK policy.
In practice, the Dueling CLOCK algorithm has been
shown to provide considerable performance
improvement over LRU when applied to the
problem of L2 cache management.
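The set-dueling selection described above can be sketched as a saturating counter whose most significant bit arbitrates between the two policies (a simplified sketch; the class name is ours and the counter width follows the 10-bit figure in the text):

```python
PSEL_BITS = 10
PSEL_MAX = (1 << PSEL_BITS) - 1

class PolicySelector:
    """Saturating counter arbitrating between CLOCK and scan-resistant CLOCK."""

    def __init__(self):
        self.psel = PSEL_MAX // 2  # start in the middle of the range

    def miss_in_g1(self):
        # a miss in the CLOCK-only group G1 argues against CLOCK
        self.psel = max(0, self.psel - 1)

    def miss_in_g2(self):
        # a miss in the scan-resistant group G2 argues against that policy
        self.psel = min(PSEL_MAX, self.psel + 1)

    def policy_for_g3(self):
        # MSB of PSEL == 1 -> follower group G3 uses CLOCK
        msb = (self.psel >> (PSEL_BITS - 1)) & 1
        return "CLOCK" if msb == 1 else "scan-resistant CLOCK"
```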
5. LRU-K
The LRU policy takes into account the
recency information while evicting pages, without
considering the frequency. To consider the
frequency information, LRU-K was proposed which
evicts pages with the largest backward K-distance.
The backward K-distance of a page p is the distance
backward from the current time to the Kth most
recent reference to the page p. Since this policy
considers the Kth most recent reference to a page, it
favours pages which are accessed frequently within
a short time.
Experimental results indicate that LRU-K
performs better than LRU [6]; however, larger
values of K do not yield an appreciable performance
gain while adding implementation overhead.
6. Low Inter-reference Recency Set (LIRS)
Vasundhara Rathod, Monali Chim, Pramila Chawan / International Journal of Engineering
Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com,
Vol. 3, Issue 3, May-Jun 2013, pp. 1397-1401
The Low Inter-reference Recency Set
algorithm takes into consideration the Inter-
Reference Recency of pages as the dominant factor
for eviction [14]. The Inter-Reference Recency
(IRR) of a page refers to the number of other pages
accessed between two consecutive references to that
page. It is assumed that if the current IRR of a page
is large, then its next IRR is likely to be large again,
and hence the page is a suitable victim for eviction,
in the spirit of Belady's MIN. Note that a page with
high IRR selected for eviction may still have been
recently used.
The algorithm distinguishes between pages with
High IRR (HIR) and those with Low IRR (LIR).
The number of LIR and HIR pages is chosen such
that all LIR pages and only a small percentage of
HIR pages are kept in cache. Now, in case of a
cache miss, the resident HIR page with highest
recency is removed from the cache and the
requested page is brought in. Now, if the new IRR
of the requested page is smaller than the recency of
some LIR page, then their LIR/ HIR statuses are
interchanged. Usually only around 1% of the cache
is used to store HIR pages while 99% of the cache is
reserved for LIR pages.
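The IRR of a page can be computed from a reference trace by counting the distinct other pages accessed between its two most recent references; the helper below is an illustrative sketch of that definition only, not the full LIRS stack machinery:

```python
def inter_reference_recency(trace, page):
    """Number of distinct other pages accessed between the last two
    consecutive references to 'page'; None if referenced fewer than twice."""
    positions = [i for i, p in enumerate(trace) if p == page]
    if len(positions) < 2:
        return None
    start, end = positions[-2], positions[-1]
    # count distinct pages other than 'page' seen between the two references
    return len({p for p in trace[start + 1:end] if p != page})
```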
7. CLOCK-Pro
The CLOCK-Pro algorithm tries to
approximate LIRS using CLOCK. Reuse distance,
which is analogous to IRR of LIRS, is an important
parameter for page replacement decision in
CLOCK-Pro [4]. When a page is accessed, its reuse
distance is the number of other distinct pages
accessed since its last access.
A page is categorized as a cold page if it has a large
reuse distance or as a hot page if it has a small reuse
distance.
Fig. 1 Working of CLOCK-Pro
Let the size of main memory be m. It is
divided into hot pages having size mh and cold pages
having size mc. Apart from these, at most m non-
resident pages have their history access information
cached. Hence a total of 2m meta-data entries are
present for keeping track of page access history. A
single list is maintained to place all the accessed
pages (either hot or cold) in the order of page
access. Naturally, the pages with small recency are
at the list head and the pages with large recency are
at the list tail.
After a cold page is accepted into the list, a test
period is granted to that page. This is done to give
the cold pages a chance to compete with the hot
pages. If the page is re-accessed during the test
period, it is turned into a hot page. On the other
hand, if the page is not re-accessed, then it is
removed from the list. A cold page in its test period
can be removed out of memory; however, its page
meta-data is kept in the list for the test purpose until
the end of its test period. The test period is set as the
largest recency of the hot pages. The page entries
are maintained in a circular list. For each page, there
are three status bits; a cold/hot indicator, a reference
bit and for each cold page an indicator to determine
if the page is in test period.
In CLOCK-Pro, there are three hands. HANDcold
points to the last resident cold page i.e., the cold
page to be next replaced. During page replacement,
if the reference bit of the page pointed by HANDcold
is 0, the page is evicted. If the page is in the test
period, its meta-data will be saved in the list. If the
reference bit is 1 and the page is in its test period, it
is turned into a hot page and the reference bit is reset.
HANDcold is analogous to the hand in the clock
algorithm. HANDhot points to the hot page with the
largest recency. If reference bit of the page pointed
by HANDhot is 0, the page is turned to cold page. If
the reference bit is 1, then it is reset and the hand
moves clockwise by one page. If the page is a non-
resident cold page and its test period is terminated,
the page is removed from the list. At the end,
HANDhot stops at a hot page. At any point, if the
number of non-resident cold pages exceeds m, the
test period of the cold page pointed by HANDtest is
terminated and the page is removed from the list.
This means that the cold page was not re-accessed
during its test period and hence should be removed.
HANDtest stops at the next cold page.
When a page fault occurs, the page is checked in
the memory. If the faulted page is in the list as a
cold page, it is turned into a hot page and is placed
at the head of the list of hot pages. Also HANDhot is
run to change a hot page with largest recency to a
cold page. This is done to balance the number of hot
and cold pages in the memory. If the faulted page is
not in the list, it is brought in the memory and set as
a cold page. This page is placed at the head of the
list of cold pages and its test period is started. At
any point, if the number of cold pages exceeds
(mc + m), HANDtest is run to evict non-referenced
cold pages.
Like LIRS, CLOCK-Pro is adaptive to access
patterns with strong and weak locality. Previous
simulation studies have indicated that LIRS and
CLOCK-Pro provide better performance than LRU
for a variety of memory access patterns.
8. Adaptive Replacement Cache (ARC)
The Adaptive Replacement Cache (ARC)
[11] algorithm keeps a track of both frequently used
and recently used pages, along with some history
data regarding eviction for both.
ARC maintains two LRU lists: L1 and L2. The
list L1 contains all the pages that have been
accessed exactly once recently, while the list L2
contains the pages that have been accessed at least
twice recently. Thus L1 can be thought of as
capturing short-term utility (recency) and L2 can be
thought of as capturing long term utility
(frequency). Each of these lists is split into top
cache entries and bottom ghost entries.
That is, L1 is split into T1 and B1, and L2 is split
into T2 and B2. The entries in T1 union T2
constitute the cache, while B1 and B2 are ghost lists.
These ghost lists keep a track of recently evicted
cache entries and help in adapting the behavior of
the algorithm. In addition, the ghost lists contain
only the meta-data and not the actual pages. The
cache directory is thus organized into four LRU
lists:
1. T1, for recent cache entries
2. T2, for frequent entries, referenced at least twice
3. B1, ghost entries recently evicted from the T1
cache, but are still tracked
4. B2, similar ghost entries, but evicted from T2
If the cache size is c, then |T1| + |T2| = c. Suppose
|T1| = p; then |T2| = c - p. The ARC algorithm
continually adapts the value of parameter p
depending on whether the current workload favors
recency or frequency. If recency is more prominent
in the current workload, p increases; while if
frequency is more prominent, p decreases (c - p
increases).
Also, the size of the cache directory, |L1| + |L2| =
2c. For a fixed p, the replacement algorithm is as
follows:
1. If |T1| > p, replace the LRU page in T1
2. If |T1| < p, replace the LRU page in T2
3. If |T1| = p and the missed page is in B1, replace
the LRU page in T2
4. If |T1| = p and the missed page is in B2, replace
the LRU page in T1.
The adaptation of the value of p is based on the
following idea: a hit in B1 means that recently-seen
one-time data would have been useful to keep, so
more space should be given to recency. Thus the
size of T1 should grow, for which the value of p
should increase.
Similarly, a hit in B2 means that frequently
used data was evicted too early, so more space
should be allotted to T2. Thus, the value of p
should decrease. The magnitude of the adjustment
is determined by the relative sizes of B1 and B2.
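In the original ARC formulation, the adaptation step scales with the ratio of the ghost-list lengths, clamped to at least 1; the sketch below illustrates that update (function name and integer-division step are our simplification of the paper's rule):

```python
def adapt_p(p, c, hit_in, b1_len, b2_len):
    """Adjust ARC's target size p for T1 after a ghost-list hit.

    A hit in B1 favours recency and grows p; a hit in B2 favours
    frequency and shrinks p. The step is the ratio of the ghost-list
    lengths, but at least 1. p stays within [0, c].
    """
    if hit_in == "B1":
        delta = max(1, b2_len // b1_len) if b1_len else 1
        return min(c, p + delta)
    if hit_in == "B2":
        delta = max(1, b1_len // b2_len) if b2_len else 1
        return max(0, p - delta)
    return p  # ordinary hits and misses leave p unchanged
```

The asymmetric step sizes mean that the shorter a ghost list is, the more strongly a hit in it pulls p in its direction, which is what makes the adaptation responsive to workload shifts.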
9. CLOCK with Adaptive Replacement (CAR)
CAR attempts to merge the adaptive policy
of ARC with the implementation efficiency of
CLOCK [7]. The algorithm maintains four doubly
linked lists T1, T2, B1, and B2. T1 and T2 are
CLOCKs while B1 and B2 are simple LRU lists.
The concept behind these lists is same as that for
ARC. In addition, the pages in the lists T1 and T2,
i.e. the pages in the cache, carry a reference bit that
can be set or reset. Writing T1⁰ and T1¹ for the
pages of T1 with reference bit 0 and 1 respectively,
intuitively T1⁰ ∪ B1 indicates "recency" (Fig. 2(a))
and T1¹ ∪ T2 ∪ B2 indicates "frequency" (Fig.
2(b)).
Fig. 2 A visual description of CAR
The precise definition of the four lists is as
follows:
1. T1⁰ and B1 contain those pages that have been
referenced exactly once since their most recent
eviction from T1 ∪ T2 ∪ B1 ∪ B2, or that have
never been referenced before.
2. T1¹, T2 and B2 contain those pages that have
been referenced more than once since their most
recent eviction from T1 ∪ T2 ∪ B1 ∪ B2.
The two important constraints on the sizes of T1,
T2, B1 and B2 are:
1. 0 ≤ |T1| + |B1| ≤ c. By definition, T1 ∪ B1
captures recency. This constraint prevents pages
that are accessed only once from taking over the
entire cache directory of size 2c: a growing
T1 ∪ B1 indicates that recently referenced pages
are not being referenced again, i.e. the stored
recency information is not helping, and only
frequently used pages are being re-referenced or
new pages are being referenced.
2. 0 ≤ |T2| + |B2| ≤ 2c. In the extreme case where
only a fixed set of pages is accessed frequently and
no new pages appear, the cache directory may hold
frequency information only.
The idea behind the algorithm is as follows. In
case of a cache hit, the requested page is delivered
from the cache and its reference bit is set. Instead, if
the page is not in the cache but is present in list B1,
this indicates the page had been used once recently
before being evicted from the cache. This page is
then moved to the head of T2 and its reference bit is
set to 0. Also, a hit in B1 indicates that pages used
once recently are being required again, implying a
recency-favouring workload. Hence the value of p
is increased, resulting in an increase in the size of T1.
Accordingly, if the page is not in the cache but in
B2, this indicates that the page had been used
frequently before being evicted from the cache. This
page is then moved to the head of T2 and its
reference bit is set to 0. Also, a hit in B2 indicates
that frequently used pages are being required again,
implying a frequency-favouring workload. Hence
the value of p is decreased, resulting in an increase
in the size of T2.
Finally if the page is not found in B1 U B2, then
the page is added to the MRU position in T1. In any
of the above cases if the cache is full (|T1| + |T2| =
c), then the CLOCK policy is applied on either T1
or T2 depending on the parameter p.
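The four cases above (cache hit, B1 hit, B2 hit, complete miss) can be summarized in a dispatch sketch. This is an illustrative simplification with hypothetical names: the CLOCK-based replace() step and the directory maintenance are deliberately omitted, and the p update mirrors the ARC-style adaptation described earlier:

```python
def car_request(page, t1, t2, b1, b2, p, c):
    """Classify a page request under CAR and return (action, new_p).

    t1/t2 map cached pages to their reference bits; b1/b2 are ghost
    lists of evicted-page metadata. Only the case analysis is shown.
    """
    if page in t1:
        t1[page] = 1  # cache hit in T1: set the reference bit
        return "hit", p
    if page in t2:
        t2[page] = 1  # cache hit in T2: set the reference bit
        return "hit", p
    if page in b1:
        # once-used page needed again: favour recency, enlarge T1's target
        p = min(c, p + max(1, len(b2) // max(1, len(b1))))
        return "promote-from-B1", p  # page moves to T2 with reference bit 0
    if page in b2:
        # frequently-used page needed again: favour frequency
        p = max(0, p - max(1, len(b1) // max(1, len(b2))))
        return "promote-from-B2", p  # page moves to T2 with reference bit 0
    return "miss", p  # brand-new page: insert at the MRU position of T1
```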
IV. CONCLUSION
CAR solves the cache hit serialization
problem of LRU and ARC. CAR attempts to
amalgamate the adaptive policy of ARC with the
implementation efficiency of CLOCK. The self-
tuning nature of CAR makes it attractive for
environments where no a priori knowledge of the
workload is available. CAR is also scan-resistant.
CAR can be further enhanced by CART
(CLOCK with Adaptive Replacement and
Temporal filtering), which retains all the features
and advantages of CAR and, in addition, employs
a stricter and more precise criterion to distinguish
pages with short-term utility from those with
long-term utility.
REFERENCES
Journal Papers and Books:
[1] L. A. Belady, "A study of replacement algorithms
for virtual storage computers," IBM Systems
Journal, vol. 5, no. 2, pp. 78-101, 1966.
[2] M. J. Bach, The Design of the UNIX Operating
System. Englewood Cliffs, NJ: Prentice-Hall,
1986.
[3] A. S. Tanenbaum and A. S. Woodhull, Operating
Systems: Design and Implementation. Prentice-
Hall, 1997.
[4] A. Silberschatz and P. B. Galvin, Operating
System Concepts. Reading, MA: Addison-Wesley,
1995.
[5] E. G. Coffman, Jr. and P. J. Denning, Operating
Systems Theory. Englewood Cliffs, NJ: Prentice-
Hall, 1973.
[6] M. K. McKusick, K. Bostic, M. J. Karels, and
J. S. Quarterman, The Design and Implementation
of the 4.4BSD Operating System. Addison-Wesley,
1996.
[7] S. Bansal and D. Modha, "CAR: Clock with
Adaptive Replacement," in Proc. 3rd USENIX
Conference on File and Storage Technologies
(FAST '04), pp. 187-200, 2004.
[8] A. Janapsatya, A. Ignjatovic, J. Peddersen, and
S. Parameswaran, "Dueling CLOCK: Adaptive
cache replacement policy based on the CLOCK
algorithm," in Proc. Design, Automation and Test
in Europe (DATE), pp. 920-925, 2010.
[9] S. Jiang and X. Zhang, "LIRS: An efficient
policy to improve buffer cache performance,"
IEEE Transactions on Computers, pp. 939-952,
2005.
[10] S. Jiang, X. Zhang, and F. Chen, "CLOCK-Pro:
An effective improvement of the CLOCK
replacement," in Proc. USENIX Annual Technical
Conference (ATEC '05), p. 35, 2005.
[11] N. Megiddo and D. S. Modha, "ARC: A self-
tuning, low overhead replacement cache," IEEE
Transactions on Computers, pp. 58-65, 2004.
[12] E. J. O'Neil, P. E. O'Neil, and G. Weikum, "An
optimality proof of the LRU-K page replacement
algorithm," Journal of the ACM, pp. 92-112,
1999.
[13] A. S. Sumant and P. M. Chawan, "Virtual
memory management techniques in the 2.6 Linux
kernel and challenges," IACSIT International
Journal of Engineering and Technology,
pp. 157-160, 2010.
[14] S. Jiang and X. Zhang, "LIRS: An efficient low
inter-reference recency set replacement policy to
improve buffer cache performance," in Proc.
ACM SIGMETRICS Conf., 2002.