The document discusses cache memory and provides information on various aspects of cache memory including:
- Introduction to cache memory including its purpose and levels.
- Cache structure and organization including cache row entries, cache blocks, and mapping techniques.
- Performance of cache memory including factors like cycle count and hit ratio.
- Cache coherence in multiprocessor systems and coherence protocols.
- Synchronization mechanisms used in multiprocessor systems for cache coherence.
- Paging techniques used in cache memory including address translation using page tables and TLBs.
- Replacement algorithms used to determine which cache blocks to replace when the cache is full.
About Cache Memory
working of cache memory
levels of cache memory
mapping techniques for cache memory
1. direct mapping techniques
2. Fully associative mapping techniques
3. set associative mapping techniques
Cache memory organization
cache coherency
everything in detail
Explains cache memory with a diagram and demonstrates hit ratio and miss penalty with an example. Discusses the different types of cache mapping: direct mapping, fully associative mapping and set-associative mapping. Discusses temporal and spatial locality of reference in cache memory. Explains the cache write policies: write-through and write-back. Shows the differences between a unified cache and a split cache.
Cache Memory
1. CACHE MEMORY
ONLINE PRESENTATION
BY SUBID BISWAS
MAIL: subidbiswas2061@gmail.com
CS401 - Computer Architecture & Organisation
2. GROUP MEMBERS
1. A - Introduction, working and levels of cache memory
2. B - Cache structure and organization, conclusion
3. C - Performance of cache memory
4. D - Cache coherence
5. E - Synchronization mechanism
6. F - Mapping techniques of cache memory
7. G - Paging techniques in cache memory
8. H - Replacement algorithm
9. I - Cache write policies
10. J - Cache memory hierarchy
CSE
4. What is Cache Memory?
Cache memory is a small, high speed RAM buffer located between
the CPU and Main Memory.
• Cache memory holds a copy of the instructions or data currently being used by the CPU.
• The main purpose of a cache memory is to accelerate the computer's speed while keeping its price low.
• It stores and retains data only while the computer is powered on.
• The cache memory stores copies of the data from frequently used main memory locations.
Cache memory is faster than RAM, and because it is located closer to the CPU, the CPU can fetch and start processing the instructions and data much more quickly.
CACHE MEMORY
A buffer contains data that is stored for a short amount of time, typically in the computer's memory (RAM).
The purpose of a buffer is to hold data right before it is used.
5. • Why is Cache needed?
The cache memory is required to balance the speed mismatch between the main memory and the CPU. The processor clock is very fast, while the main memory access time is comparatively slower. Hence, without a cache, the effective processing speed is limited by the speed of the main memory.
• How it differs from RAM?
Cache memory is a type of super-fast RAM which is designed to make a computer or device run
more efficiently. By itself, this may not be particularly useful, but cache memory plays a key role in
computing when used with other parts of memory.
ADVANTAGES & DISADVANTAGES
Advantages of cache memory
• It is faster than main memory.
• It provides a path for fast data transfers, so it needs less access time compared to main memory.
• It stores frequently accessed data that can be retrieved within a short period of time.
Disadvantages of cache memory
• It has a limited capacity.
• It is very expensive compared to main memory (RAM) and hard disk.
8. LEVELS OF CACHE MEMORY
Levels of memory:
Level 1 or Registers -
This is the type of memory in which data is stored and accepted immediately by the CPU. The most commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache memory -
After the registers, it is the fastest memory, with a shorter access time than main memory, in which data is temporarily stored for faster access.
Level 3 or Main Memory -
It is the memory on which the computer currently works. It is small in size, and once power is off, data no longer stays in this memory.
Level 4 or Secondary Memory -
It is external memory which is not as fast as main memory, but data stays permanently in this memory.
10. HOW IT WORKS?
• How Cache Memory works?
MEMORY ORGANIZATION
CPU - the Central Processing Unit is like the brain of a computer; it performs the arithmetic and logical operations of the system by carrying out the instructions in the code.
The memory organization of a system is shown below:
1. At the core is the CPU,
2. Cache,
3. RAM,
4. Storage devices.
12. Cache Structure:
Cache row entries usually have the following structure:
CACHE STRUCTURE AND ORGANIZATION
Tag | Data Block | Flag Bits
An effective memory address which goes along with the cache line (memory block) is split into the tag,
the index and the block offset.
Tag | Index | Block Offset
The data block (cache line) contains the actual data fetched from the main memory.
The tag contains (part of) the address of the actual data fetched from the main memory.
An instruction cache requires only one flag bit per cache row entry: a valid bit.
The index identifies the cache set in which the data has been placed.
The block offset specifies the desired word within the data block stored in the cache row.
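As a quick illustration of how these three fields are obtained, the sketch below computes the tag, index and block-offset widths for a hypothetical direct-mapped cache; the parameters (32-bit address, 64 KB cache, 64-byte blocks) are assumptions for the example only.

```python
# Minimal sketch: deriving tag / index / block-offset widths for a direct-mapped
# cache. All parameters below are assumed example values.
ADDRESS_BITS = 32
CACHE_SIZE = 64 * 1024      # bytes
BLOCK_SIZE = 64             # bytes per cache line

num_lines = CACHE_SIZE // BLOCK_SIZE            # one block per cache line
offset_bits = BLOCK_SIZE.bit_length() - 1       # log2(block size)
index_bits = num_lines.bit_length() - 1         # log2(number of lines)
tag_bits = ADDRESS_BITS - index_bits - offset_bits

print(f"offset = {offset_bits} bits, index = {index_bits} bits, tag = {tag_bits} bits")
# offset = 6 bits, index = 10 bits, tag = 16 bits
```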
13. Cache Organization:
The cache organization is about mapping data in memory to a location in the cache.
One way to go about this mapping is to take the last few bits of the long memory address as a small cache address, and place the data at that address.
The problem with this approach is that we lose the information about the high-order bits and have no way to find out which higher-order bits the stored lower-order bits belong to.
CACHE STRUCTURE AND ORGANIZATION
14. • To handle the above problem, more information is stored in the cache to tell which block of memory is stored in the cache. This additional information is stored as a Tag.
CACHE STRUCTURE AND ORGANIZATION
• What is a Cache Block?
Since programs have spatial locality (once a location is retrieved, it is highly probable that nearby locations will be retrieved in the near future), a cache is organized in the form of blocks. Typical cache block sizes are 32 bytes or 64 bytes.
16. Issues of Cache Memory Performance:
The performance of cache memory mainly concerns two aspects -
1. Cycle Count
2. Hit Ratio
The Cycle Count: the cache speed is affected by the underlying static or dynamic RAM technology, the cache organization and the cache hit ratios.
Hit Ratio: it is affected by the cache size and by the block size.
Effect of block size: with a fixed cache size, cache performance is rather sensitive to block size.
Effect of set number: for a fixed cache capacity, the hit ratio may increase as the number of sets increases.
PERFORMANCE OF CACHE MEMORY
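Both aspects come together in the average memory access time. The snippet below is a small illustrative calculation (not taken from the slides); the 1-cycle hit time and 100-cycle miss penalty are assumed example values.

```python
# Illustrative sketch: average memory access time (AMAT) versus hit ratio.
def amat(hit_time, miss_rate, miss_penalty):
    """AMAT = hit time + miss rate * miss penalty (all in cycles)."""
    return hit_time + miss_rate * miss_penalty

# Assumed: 1-cycle cache hit, 100-cycle penalty to fetch a block from main memory.
for hit_ratio in (0.80, 0.90, 0.95, 0.99):
    print(f"hit ratio {hit_ratio:.2f} -> AMAT {amat(1, 1 - hit_ratio, 100):.1f} cycles")
```

Even a few percent of extra hits cuts the average access time substantially, which is why the techniques on the next slide focus on raising the hit ratio and lowering the miss penalty.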
17. Techniques to Improve Cache Memory Performance:
Technique 1:
Larger block size.
Technique 2:
Larger cache to reduce miss rate.
Technique 3:
Higher associativity to reduce miss rate.
Technique 4:
Multi-level caches: the cache closest to the processor should be fast enough to keep pace with the processor's speed, while the lower-level caches should be larger to overcome the widening gap between the processor and main memory.
Technique 5:
Prioritize read misses to reduce miss penalty.
Technique 6:
Avoid address translation for indexing to reduce hit time.
PERFORMANCE OF CACHE MEMORY
19. Cache Coherence:
Cache coherence is the uniformity of shared resource data
that ends up stored in multiple local caches.
CACHE COHERENCE
• Cache and the main memory
may have inconsistent copies
of the same object.
• In a multiprocessor system, data
inconsistency may occur among
adjacent levels or within the same
level of the memory hierarchy.
20. There are three coherence mechanisms:
• Directory-based:
In a directory-based system, the data being shared is
placed in a common directory that maintains the
coherence between caches.
• Snooping:
Snooping is a process where the individual caches
monitor address lines for accesses to memory locations
that they have cached.
• Snarfing:
It is a mechanism where a cache controller watches both
address and data in an attempt to update its own copy of
a memory location when a second master modifies a
location in main memory.
CACHE COHERENCE
21. CACHE COHERENCE
There are three distinct levels of cache coherence:
● Every write operation appears to
occur instantaneously.
● All processors see exactly the same
sequence of changes of values for
each separate operand.
● Different processors may see an
operation and assume different
sequences of values; this is known as
non-coherent behavior.
There are various Cache Coherence
Protocols in multiprocessor system.
These are :-
● MSI protocol (Modified, Shared,
Invalid)
● MOSI protocol (Modified, Owned,
Shared, Invalid)
● MESI protocol (Modified, Exclusive,
Shared, Invalid)
● MOESI protocol (Modified, Owned,
Exclusive, Shared, Invalid)
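To make the idea of such protocols concrete, the sketch below encodes the state transitions of the simplest of them, MSI, as a lookup table. This is a deliberately simplified illustration under assumed event names; a real protocol specification also describes the bus actions (write-backs, invalidation messages) that accompany each transition.

```python
# Simplified sketch of MSI cache-coherence state transitions (illustrative only).
MSI = {
    # (current state, observed event) -> next state of this cache's copy
    ("I", "processor_read"):  "S",   # fetch a shared copy on a read miss
    ("I", "processor_write"): "M",   # fetch exclusively, invalidate other copies
    ("S", "processor_read"):  "S",
    ("S", "processor_write"): "M",   # upgrade to Modified, invalidate other copies
    ("M", "processor_read"):  "M",
    ("M", "processor_write"): "M",
    ("I", "bus_read"):        "I",   # another processor's traffic: nothing to do
    ("I", "bus_write"):       "I",
    ("S", "bus_read"):        "S",   # another cache reads the block
    ("S", "bus_write"):       "I",   # another cache writes: our copy is now stale
    ("M", "bus_read"):        "S",   # supply the data (write back), keep a shared copy
    ("M", "bus_write"):       "I",   # write back and invalidate
}

state = "I"
for event in ("processor_read", "processor_write", "bus_read"):
    state = MSI[(state, event)]
    print(f"{event:16s} -> {state}")   # trace: I -> S -> M -> S
```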
23. Hardware Synchronization Mechanisms:
Synchronization is a special form of communication in which, instead of data, control information is exchanged between communicating processes residing in the same or different processors.
Multiprocessor systems use hardware mechanisms to implement low-level synchronization operations. Most multiprocessors provide hardware support for atomic operations such as memory read, write or read-modify-write, which are used to implement synchronization primitives. In addition to atomic memory operations, inter-processor interrupts are also used for synchronization purposes.
SYNCHRONIZATION MECHANISM
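A classic primitive built on an atomic read-modify-write operation is the test-and-set spin lock. The sketch below illustrates the idea; since Python has no hardware test-and-set instruction, a threading.Lock merely stands in for the atomicity the hardware would provide, so the class and its names are assumptions for illustration only.

```python
# Sketch of a spin lock built on an atomic test-and-set primitive.
import threading

class TestAndSetLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # simulates the hardware's atomicity

    def _test_and_set(self):
        # Atomically return the old value of the flag and set it to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # Spin until the old value was False, i.e. the lock was free.
        while self._test_and_set():
            pass

    def release(self):
        self._flag = False

lock = TestAndSetLock()
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1                      # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                            # 40000
```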
25. What is Cache Mapping?
Cache mapping is a technique by which the contents of main memory are brought into the
cache memory.
Cache Mapping Techniques
Cache Mapping is performed using following
three different techniques –
1. Direct Mapping
2. Full Associative Mapping
3. K-way Set Associative Mapping
CACHE MAPPING TECHNIQUES
Figure: The diagram illustrates the mapping process; cache mapping is divided into direct mapping, fully associative mapping and set-associative mapping.
26. • Direct Mapping Technique:
In direct mapping, each memory block is assigned to a specific line in the cache. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is discarded. The address is split into two parts, an index field and a tag field; the tag is stored in the cache along with the data, while the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
DIRECT MAPPING TECHNIQUE
In direct mapping, the physical address is divided as:
Tag | Line Number | Block Offset
The line number of the cache to which a particular block can map is given by:
Cache line number = (Main memory block address) modulo (Number of lines in the cache)
Example of Direct Mapping
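The split and the modulo rule above can be sketched in a few lines of code. The parameters below (16 lines, 16-byte blocks) are assumed purely for illustration.

```python
# Sketch of the direct-mapping address split under assumed example parameters.
NUM_LINES = 16      # lines in the cache
BLOCK_SIZE = 16     # bytes per block

def direct_map(address):
    block_offset = address % BLOCK_SIZE
    block_address = address // BLOCK_SIZE        # main memory block address
    line_number = block_address % NUM_LINES      # (block address) mod (no. of lines)
    tag = block_address // NUM_LINES             # remaining high-order bits
    return tag, line_number, block_offset

print(direct_map(0x1234))   # (18, 3, 4): tag 18, cache line 3, byte offset 4
```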
27. • Full Associative Mapping Technique:
In this type of mapping, an associative memory is used to store the content and addresses of the memory words. Any block can go into any line of the cache. This means that the word bits are used to identify which word in the block is needed, while the tag becomes all of the remaining bits. This enables the placement of any word at any place in the cache memory. It is considered to be the fastest and the most flexible mapping form.
FULL ASSOCIATIVE MAPPING
In fully associative mapping, the physical address is divided as:
Block Number / Tag | Block / Line Offset
Example of Full Associative Mapping
28. • Set Associative Mapping Technique:
Set-associative mapping addresses the problem of possible thrashing in the direct mapping method. Instead of having exactly one line that a block can map to in the cache, a few lines are grouped together, creating a set. A block in memory can then map to any one of the lines of a specific set. Set-associative mapping allows two or more blocks of main memory that share the same index address to be present in the cache at the same time. Set-associative cache mapping combines the best of the direct and associative cache mapping techniques.
SET ASSOCIATIVE MAPPING
In set-associative mapping, the physical address is divided as:
Tag | Set Number | Block / Line Offset
The set of the cache to which a particular block of the main memory can map is given by:
Cache set number = (Main memory block address) modulo (Number of sets in the cache)
Example of 2 Way Set Associative Mapping
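A lookup in a set-associative cache first selects the set with the modulo rule above and then compares the tag against every line in that set. The short sketch below illustrates this for a 2-way cache; the parameters and the list-of-lists representation are assumptions for the example, and a real cache performs the tag comparisons in parallel in hardware.

```python
# Sketch of a 2-way set-associative lookup under assumed example parameters.
NUM_SETS = 4
BLOCK_SIZE = 16                              # bytes per block

cache = [[] for _ in range(NUM_SETS)]        # cache[set] holds up to 2 (tag, data) ways

def lookup(address):
    block_address = address // BLOCK_SIZE
    set_index = block_address % NUM_SETS     # (block address) mod (no. of sets)
    tag = block_address // NUM_SETS
    for stored_tag, data in cache[set_index]:
        if stored_tag == tag:                # compare against every way in the set
            return data                      # hit
    return None                              # miss

cache[1].append((5, "cached block"))             # pretend this block was loaded earlier
print(lookup((5 * NUM_SETS + 1) * BLOCK_SIZE))   # same set and tag -> "cached block"
```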
30. What is Paging Technique?
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous. The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device, and this mapping is known as the paging technique.
The Physical Address Space is conceptually divided into a
number of fixed-size blocks, called frames.
The Logical Address Space is likewise split into fixed-size blocks, called pages.
Page Size = Frame Size
Let us consider an example:
Physical Address = 12 bits, then Physical Address Space =
4 K words
Logical Address = 13 bits, then Logical Address Space = 8
K words
Page size = frame size = 1 K words (assumption)
PAGING TECHNIQUES
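The translation itself only splits the logical address into a page number and an offset, looks the page number up in the page table, and glues the resulting frame number back onto the offset. The sketch below uses the 1 K-word page size from the example above; the page-table contents are made-up values for illustration.

```python
# Sketch of logical-to-physical address translation in paging.
PAGE_SIZE = 1024                       # words per page (= frame size), as in the example

# Hypothetical page table: page number -> frame number (values assumed).
page_table = {0: 2, 1: 0, 2: 3, 3: 1}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE    # p
    offset = logical_address % PAGE_SIZE          # d
    frame_number = page_table[page_number]        # f, looked up by the MMU
    return frame_number * PAGE_SIZE + offset      # physical address

print(translate(2 * PAGE_SIZE + 37))   # page 2, offset 37 -> frame 3 -> 3109
```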
31. The address generated by the CPU is divided into:
Page number (p): the number of bits required to represent the pages in the Logical Address Space, i.e. the page number.
Page offset (d): the number of bits required to represent a particular word in a page, i.e. the page size of the Logical Address Space (the word number within a page).
The physical address is divided into:
Frame number (f): the number of bits required to represent the frames of the Physical Address Space, i.e. the frame number.
Frame offset (d): the number of bits required to represent a particular word in a frame, i.e. the frame size of the Physical Address Space (the word number within a frame).
ADDRESS SPACE IN PAGING TECHNIQUES
Let the main memory access time be m.
If the page table is kept in main memory,
Effective access time = m (to access the page table) + m (to access the particular word) = 2m.
32. TLB IN PAGING TECHNIQUES
The hardware implementation of the page table can be done by using dedicated registers, but the use of registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
The TLB is an associative, high-speed memory.
Each entry in the TLB consists of two parts: a tag and a value.
When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.
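With a TLB, the effective access time becomes a weighted average of the fast case (translation found in the TLB) and the slow case (a page-table access in main memory, as in the 2m figure above). The numbers in the sketch below (10 ns TLB, 100 ns memory, 90% TLB hit ratio) are assumed example values.

```python
# Sketch: effective access time with a TLB, using assumed timings.
def effective_access_time(tlb_hit_ratio, tlb_time, mem_time):
    hit = tlb_time + mem_time           # translation found in TLB, then one memory access
    miss = tlb_time + 2 * mem_time      # page-table access plus the actual memory access
    return tlb_hit_ratio * hit + (1 - tlb_hit_ratio) * miss

print(effective_access_time(0.90, 10, 100))   # 120.0 ns, versus 200 ns with no TLB at all
```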
33. PAGE TABLE IN PAGING TECHNIQUES
The page table holds page table entries, where each page table entry (PTE) stores a frame number and optional status bits (such as protection bits). Many of these status bits are used by the virtual memory system. The most important field in a PTE is the frame number.
A page table entry has the following information -
Caching in Page Table:
Caching enabled/disabled - Sometimes we need fresh data. Say the user is typing some information on the keyboard and the program should run according to that input. The information then arrives in main memory, so main memory contains the latest information typed by the user. If that page were served from the cache, the cache might show old (stale) information. So whenever freshness is required, we do not want to go through the cache or many levels of the memory hierarchy.
The information present in the level closest to the CPU and the information present in the level closest to the user might differ. We want this information to be consistent, meaning that whatever information the user has given, the CPU should be able to see it as soon as possible. That is why we may want to disable caching: this bit enables or disables caching of the page.
34. ADVANTAGES & DISADVANTAGES
Advantages of Paging
• The paging technique is easy to implement.
• The paging technique makes efficient utilization of memory.
• The paging technique supports time-sharing systems.
• The paging technique supports non-contiguous memory allocation.
Disadvantages of Paging
• Paging may encounter a problem called page break.
• When the number of pages in virtual memory is quite large, maintaining the page table becomes difficult.
36. What is Replacement Algorithm?
In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies)
are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can utilize
in order to manage a cache of information stored on the computer. Caching improves performance by keeping
recent or often-used data items in memory locations that are faster or computationally cheaper to access than
normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for
the new ones.
There are two primary figures of merit of a cache: The latency, and the hit rate. There are also a number of secondary
factors affecting cache performance.
The "hit ratio" of a cache describes how often a searched-for item is actually found in the cache. More efficient
replacement policies keep track of more usage information in order to improve the hit rate (for a given cache size).
The "latency" of a cache describes how long after requesting a desired item the cache can return that item (when there is
a hit). Faster replacement strategies typically keep track of less usage information—or, in the case of direct-mapped cache,
no information—to reduce the amount of time required to update that information.
Each replacement strategy is a compromise between hit rate and latency.
REPLACEMENT ALGORITHM
37. Different Replacement Algorithm:
Bélády's algorithm:
The most efficient caching algorithm would be to always discard the information that will not be needed
for the longest time in the future. This optimal result is referred to as Bélády's optimal algorithm/simply
optimal replacement policy or the clairvoyant algorithm. Since it is generally impossible to predict how far
in the future information will be needed, this is generally not implementable in practice. The practical
minimum can be calculated only after experimentation, and one can compare the effectiveness of the
actually chosen cache algorithm.
First in first out (FIFO):
Using this algorithm the cache behaves in the same way as a FIFO queue. The cache evicts the blocks in
the order they were added, without any regard to how often or how many times they were accessed
before.
DIFFERENT REPLACEMENT ALGORITHM
38. Different Replacement Algorithm:
Least-frequently used (LFU): Counts how often an item is needed. Those that are used least often are discarded first. This works very similarly to LRU, except that instead of storing how recently a block was accessed, we store how many times it was accessed. While running an access sequence, we therefore replace the block that was used the fewest times. E.g., if A was used (accessed) 5 times, B was used 3 times, and C and D were used 10 times each, we replace B.
Random replacement (RR): Randomly selects a candidate item and discards it to make space when necessary. This algorithm does not require keeping any information about the access history. For its simplicity, it has been used in ARM processors. It admits efficient stochastic simulation.
Least recently used (LRU): Discards the least recently used items first. This algorithm requires keeping
track of what was used when, which is expensive if one wants to make sure the algorithm always discards
the least recently used item. General implementations of this technique require keeping "age bits" for
cache-lines and track the "Least Recently Used" cache-line based on age-bits. In such an implementation,
every time a cache-line is used, the age of all other cache-lines changes.
DIFFERENT REPLACEMENT ALGORITHM
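To make the LRU bookkeeping described above concrete, here is a minimal software sketch that tracks recency with an ordered dictionary; the class name and structure are assumptions for illustration, not how a hardware cache maintains its age bits.

```python
# Minimal sketch of an LRU replacement policy.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()        # least recently used entry comes first

    def access(self, block, data=None):
        if block in self.entries:                    # hit: mark as most recently used
            self.entries.move_to_end(block)
            return self.entries[block]
        if len(self.entries) >= self.capacity:       # miss on a full cache:
            self.entries.popitem(last=False)         # evict the least recently used block
        self.entries[block] = data                   # bring the new block in
        return data

cache = LRUCache(capacity=2)
for block in ["A", "B", "A", "C"]:    # the second access to "A" saves it; "B" is evicted
    cache.access(block)
print(list(cache.entries))            # ['A', 'C']
```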
40. CACHE WRITE POLICIES
• What is Cache Write Policy?
Cache is a technique of storing a copy of data temporarily in rapidly accessible storage
memory. Cache stores most recently used words in small memory to increase the speed in
which a data is accessed. It acts like a buffer between RAM and CPU and thus increases the
speed in which data is available to the processor.
There are two main cache write policies:
1. Write-Through policy.
2. Write-Back policy.
41. TYPES OF CACHE WRITE POLICIES
1. Write-through policy:
The write-through policy is the most commonly used method of writing into the cache memory.
In the write-through method, when the cache memory is updated, the main memory is updated simultaneously. Thus, at any given time, the main memory contains the same data as the cache memory.
It is to be noted that the write-through technique is a slow process, as every write needs to access main memory.
2. Write back policy:
The write-back policy can also be used for cache writing.
With the write-back method, only the cache location is updated during a write operation. When an update occurs in the cache, the updated location is marked by a flag, known as the modified or dirty bit.
When the word is replaced from the cache, it is written into main memory if its flag bit is set. The logic behind this technique is that during a cache write operation, the word present in the cache may be accessed several times. This method helps reduce the number of references to main memory.
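The difference between the two policies shows up in when main memory is touched. The sketch below contrasts them; the Cache class and its fields are assumptions for this illustration only.

```python
# Sketch contrasting the write-through and write-back policies.
class Cache:
    def __init__(self, policy):
        self.policy = policy        # "write-through" or "write-back"
        self.data = {}              # cached address -> value
        self.dirty = set()          # addresses whose modified (dirty) bit is set
        self.memory = {}            # stands in for main memory

    def write(self, address, value):
        self.data[address] = value
        if self.policy == "write-through":
            self.memory[address] = value     # main memory updated on every write
        else:                                # write-back
            self.dirty.add(address)          # only mark the cache line as modified

    def evict(self, address):
        if self.policy == "write-back" and address in self.dirty:
            self.memory[address] = self.data[address]   # written back only now
            self.dirty.discard(address)
        self.data.pop(address, None)

wb = Cache("write-back")
wb.write(0x10, 42)
print(0x10 in wb.memory)    # False: main memory not yet updated
wb.evict(0x10)
print(wb.memory[0x10])      # 42: the dirty block is written back on eviction
```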
43. CACHE MEMORY HIERARCHY
Hierarchy List
• Registers
• L1 Cache
• L2 Cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape
As one goes down the
hierarchy
• Decreasing cost per bit
• Increasing capacity
• Increasing access time
• Decreasing frequency of
access of the memory by
the processor – locality of
reference
44. CACHE MEMORY HIERARCHY
Semiconductor Memory
RAM - Random Access Memory
• Misnamed, as all semiconductor memory is random access
• Read/Write
• Volatile
• Temporary storage
• Two main types: Static or Dynamic
Read Only Memory (ROM)
• Permanent storage
• Microprogramming
• Library subroutines
• Systems programs (BIOS)
• Function tables
46. QUESTIONS
1. The number of successful accesses to memory, stated as a fraction, is called _____.
2. A cache line is 64 bytes. The main memory has latency 32ns and bandwidth 1 GBytes/s. The time required
to fetch the entire cache line from the main memory is _____?
3. Consider a 4-way set associative mapped cache with block size 4 KB. The size of main memory is 16 GB and
there are 10 bits in the tag. Find size of cache memory.
4. A computer has a 256 KByte, 4-way set associative, write back data cache with the block size of 32 Bytes. The
processor sends 32-bit addresses to the cache controller. Each cache tag directory entry contains, in addition,
to address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The number of bits in the tag field of an
address is
a) 11 b) 14 c) 16 d) 27
5. Consider a direct mapped cache of size 512 KB with block size 1 KB. There are 7 bits in the tag. Find size of main
memory.
a) 32 MB b) 60 MB c) 64 KB d) 64 MB
47. QUESTIONS
6. Memory management technique in which system stores and retrieves data from secondary storage for use in main
memory is called ______
a) Fragmentation b) paging
c) Mapping d) none of the mentioned
7. The address of a page table in memory is pointed by ____________
a) stack pointer b) page table base register c) page register d) program counter
8. The page table contains ____________
a) base address of each page in physical memory
b) page offset
c) page sized
d) none of the mentioned
48. QUESTIONS
9. Operating System maintains the page table for ____________
a) each process b) each thread
c) each instruction d) each address
10. The LRU provides very bad performance when it comes to
a) When the blocks being accessed are sequential
b) When the blocks are randomized
c) When the consecutive blocks accessed are in the extremes
d) None of the mentioned
11. The algorithm which removes the recently used page first is ________
a) LRU b) MRU
c) OFM d) None of the mentioned
49. ONLINE PRESENTATION
BY SUBID BISWAS
MAIL: subidbiswas2061@gmail.com