2. Cache replacement policies
Cache memory is much smaller than main memory. The processor fetches data from the cache to execute instructions, so when a required block is not found in the cache, that block is brought in from main memory and one of the blocks already present must be replaced.
A cache replacement algorithm decides which block to evict.
Replacement policies are needed in set-associative and fully-associative mapping; in direct mapping each block has exactly one possible location, so no choice arises.
Commonly used replacement policies are:
➢ FIFO
➢ Optimal algorithm
➢ LRU
➢ MRU
➢ Pseudo LRU
3. FIFO (First In First Out)
The block that was brought into the cache earliest is replaced first.
Let the main memory reference string be: 1, 2, 3, 4, 1, 2, 3, 5, 6, 7
Number of blocks in cache memory = 4
Reference:  1  2  3  4  1  2  3  5  6  7
Result:     M  M  M  M  H  H  H  M  M  M
Evicted:    -  -  -  -  -  -  -  1  2  3
(M = miss, H = hit; final cache contents: 4, 5, 6, 7)
4. Hit ratio and miss ratio using FIFO
Total references = 10
Total number of hits = 3
Total number of misses = 7
Hit ratio = number of hits / total references = 3/10 = 0.3
Hit ratio (%) = 0.3 × 100 = 30%
Miss ratio = number of misses / total references = 7/10 = 0.7
Miss ratio (%) = 0.7 × 100 = 70%
The total probability is 1, so if the hit ratio is 0.3, the miss ratio is 1 − 0.3 = 0.7.
5. Optimal page replacement algorithm
The block that will not be referenced for the longest time in the future is replaced (Belady's algorithm).
It has no practical implementation, because it requires knowledge of future references.
But it gives the best possible hit ratio, so it is used as a reference model.
Let the memory reference string be: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 2
Number of cache blocks = 4
Reference:  7  0  1  2  0  3  0  4  2  3  0  2
Result:     M  M  M  M  H  M  H  M  H  H  H  H
Evicted:    -  -  -  -  -  7  -  1  -  -  -  -
(M = miss, H = hit; blocks 7 and 1 are evicted because they are never referenced again)
6. Hit ratio and miss ratio using the optimal replacement algorithm
Total references = 12
Total number of hits = 6
Total number of misses = 6
Hit ratio = number of hits / total references = 6/12 = 0.5
Hit ratio (%) = 0.5 × 100 = 50%
Miss ratio = number of misses / total references = 6/12 = 0.5
Miss ratio (%) = 0.5 × 100 = 50%
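Belady's policy can also be sketched in a few lines (`optimal_hits` is an illustrative helper; on a tie between blocks that are never used again, either victim gives the same hit count):

```python
def optimal_hits(refs, capacity):
    """Count (hits, misses) under Belady's optimal policy -- sketch."""
    cache = set()
    hits = misses = 0
    for i, block in enumerate(refs):
        if block in cache:
            hits += 1
            continue
        misses += 1
        if len(cache) == capacity:
            # Evict the resident block whose next use lies farthest
            # in the future (never used again counts as infinity).
            def next_use(b):
                rest = refs[i + 1:]
                return rest.index(b) if b in rest else float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return hits, misses

print(optimal_hits([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 2], 4))
```

On the reference string above it reproduces the 6 hits and 6 misses computed on this slide.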
7. LRU (Least Recently Used)
This algorithm replaces the cache block that has gone unused for the longest time, i.e. the least recently used block.
Let the memory reference string be: 7, 0, 1, 2, 0, 3, 5, 4, 2, 3, 0
Number of cache blocks = 4
Reference:  7  0  1  2  0  3  5  4  2  3  0
Result:     M  M  M  M  H  M  M  M  M  H  M
Evicted:    -  -  -  -  -  7  1  2  0  -  5
(M = miss, H = hit; on each miss the least recently used block is evicted)
8. Hit ratio and miss ratio using LRU
Total references = 11
Total number of hits = 2
Total number of misses = 9
Hit ratio = number of hits / total references = 2/11 ≈ 0.18
Hit ratio (%) = 0.18 × 100 = 18%
Miss ratio = number of misses / total references = 9/11 ≈ 0.82
Miss ratio (%) = 0.82 × 100 = 82%
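The LRU trace can be checked the same way (a sketch; `lru_hits` keeps the recency order in a plain list, which is fine for an example of this size):

```python
def lru_hits(refs, capacity):
    """Count (hits, misses) under LRU replacement -- sketch."""
    cache = []                   # most recently used block at the end
    hits = misses = 0
    for block in refs:
        if block in cache:
            hits += 1
            cache.remove(block)  # re-appended below to mark it most recent
        else:
            misses += 1
            if len(cache) == capacity:
                cache.pop(0)     # evict the least recently used block
        cache.append(block)
    return hits, misses

print(lru_hits([7, 0, 1, 2, 0, 3, 5, 4, 2, 3, 0], 4))
```

On the reference string above it reproduces the 2 hits and 9 misses counted on this slide.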
9. MRU (Most Recently Used)
This algorithm replaces the cache block that was used most recently.
Let the memory reference string be: 7, 0, 1, 2, 0, 3, 0, 5, 4, 3, 0
Number of cache blocks = 4
Reference:  7  0  1  2  0  3  0  5  4  3  0
Result:     M  M  M  M  H  M  M  M  M  M  M
Evicted:    -  -  -  -  -  0  3  0  5  4  3
(M = miss, H = hit; on each miss the most recently used block is evicted, so blocks 7, 1 and 2 stay resident throughout)
10. Hit ratio and miss ratio using MRU
Total references = 11
Total number of hits = 1
Total number of misses = 10
Hit ratio = number of hits / total references = 1/11 ≈ 0.09
Hit ratio (%) = 0.09 × 100 = 9%
Miss ratio = number of misses / total references = 10/11 ≈ 0.91
Miss ratio (%) = 0.91 × 100 = 91%
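MRU differs from the LRU sketch by a single line: the victim is taken from the most-recent end of the list (`mru_hits` is again an illustrative helper):

```python
def mru_hits(refs, capacity):
    """Count (hits, misses) under MRU replacement -- sketch."""
    cache = []                   # most recently used block at the end
    hits = misses = 0
    for block in refs:
        if block in cache:
            hits += 1
            cache.remove(block)  # re-appended below to mark it most recent
        else:
            misses += 1
            if len(cache) == capacity:
                cache.pop()      # evict the MOST recently used block
        cache.append(block)
    return hits, misses

print(mru_hits([7, 0, 1, 2, 0, 3, 0, 5, 4, 3, 0], 4))
```

On the reference string above it reproduces the 1 hit and 10 misses counted on this slide.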
11. Types of miss
When a required block is searched for in the cache and not found, that is called a cache miss.
Three different types of miss occur in cache memory:
➢ Compulsory miss
➢ Capacity miss
➢ Conflict miss
Compulsory miss: the very first reference to any main memory block is always a miss; this is called a compulsory (or cold) miss. It cannot be removed by a larger cache, but it can be reduced by increasing the block size or by prefetching.
12. Cont…
Capacity miss: occurs when the cache is too small to hold all the blocks the program is actively using, so blocks are evicted and missed again later. It can be reduced by increasing the cache size.
Conflict miss: occurs when a set is full and several main memory blocks map to that same set, even though other sets may still have free lines. It can be reduced by increasing the associativity. Conflict misses never occur in a fully-associative cache.
Note: -
In a k-way set-associative cache, k = 1 (one block per set) is the same as direct mapping.
A single set holding all the cache blocks (k = number of blocks) has no index field, only tag bits, and is the same as fully-associative mapping.
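The note above can be illustrated numerically (a sketch assuming the usual modulo indexing; the cache dimensions below are made-up example values):

```python
def set_index(block_addr, num_sets):
    # A main-memory block may be placed only in set (address mod num_sets);
    # a conflict miss occurs when more than k live blocks share one set.
    return block_addr % num_sets

# Example: 8 cache blocks, 2-way set-associative => 4 sets.
# Blocks 1, 5 and 9 all compete for set 1, so a third live block
# among them forces a conflict miss even if other sets are empty.
print(set_index(1, 4), set_index(5, 4), set_index(9, 4))

# With a single set (fully associative) every block maps to set 0,
# so the index field vanishes and only tags are compared.
print(set_index(9, 1))
```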
13. Writing policy
When a cache line is modified, main memory must also be updated at some point; a writing policy decides when. Two different policies are used: write-through and write-back.
Write-through: data is updated simultaneously in cache memory and in main memory.
The data is always consistent.
But every write consumes main-memory access time.
Write-back: data is updated in the cache first and the line is marked dirty; the final value is written to main memory only when the block is replaced.
Writes consume less time.
But cache and main memory are temporarily inconsistent.
14. Examples of write-through and write-back policies
Ex:
for (i = 0; i < 10; i++)
    printf("%d", i);
Using the write-through technique the value of i is updated in cache memory and in main memory on every store; using the write-back policy only the cache is updated during the loop, and the final value of i is written to main memory when the block is evicted.
Write-through policy                     Write-back policy
Cache:       i = 0, 1, 2, …, 10          Cache:       i = 0, 1, 2, …, 10
Main memory: i = 0, 1, 2, …, 10          Main memory: i = 10 (final value only)
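The difference can be counted for the loop above (a simplified sketch that treats i as a single cached variable and one eviction; real caches operate on whole blocks):

```python
def memory_writes(values):
    """Count main-memory writes for successive stores to one cached variable."""
    write_through = 0
    dirty = False
    for _ in values:
        write_through += 1   # write-through: memory is updated on every store
        dirty = True         # write-back: only the cache line is marked dirty
    write_back = 1 if dirty else 0   # one write, when the dirty block is evicted
    return write_through, write_back

print(memory_writes(range(11)))   # i takes the values 0 through 10
```

So the loop costs 11 main-memory writes under write-through but only 1 under write-back, at the price of temporary inconsistency.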
15. Cache coherence
In a multiprocessor system, data is present in main (shared) memory and also in the local cache of each processor. Keeping the data up to date in every local cache and in main memory is called cache coherence.
If an update is not propagated to some of the local caches, those caches read stale data, which leads to inconsistency. This is the cache coherence problem: synchronisation between the local caches and shared memory is lost.
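The problem can be mimicked with plain dictionaries (a toy sketch; real systems solve this in hardware with coherence protocols such as snooping or directory schemes):

```python
shared = {"x": 5}        # shared memory holds int x = 5
cache1 = dict(shared)    # CPU-1's local cache copies x
cache2 = dict(shared)    # CPU-2's local cache copies x

cache1["x"] = 10         # CPU-1 updates only its own cached copy
stale = cache2["x"]      # without coherence, CPU-2 still reads the old value

print(cache1["x"], stale)
```

CPU-2 keeps reading 5 after CPU-1 has written 10, which is exactly the lost synchronisation described above.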
[Diagram: CPU-1, CPU-2 and CPU-3 each hold a copy of int x = 5 in their local cache memory; the shared memory also holds int x = 5. All four copies must stay synchronised.]