8. Cache of 64 KBytes
Cache block of 4 bytes (so 64K / 4 = 16K = 2^14 lines)
16 MBytes main memory
24-bit address (2^24 = 16 MBytes)
9. Each block of main memory maps to only one cache line
The address is interpreted in two parts:
1. the least significant bits identify a unique word within a block
2. the most significant s bits specify one memory block
The MSBs are split into a cache line field of r bits and a tag of s−r bits (most significant)
Direct Mapping
10. 24-bit address
1. 2-bit word identifier (4-byte block)
2. 22-bit block identifier
   - 8-bit tag (= 22 − 14)
   - 14-bit slot or line
Direct Mapping Address Structure
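Given the field widths above (8-bit tag, 14-bit line, 2-bit word), splitting a 24-bit address is just shifts and masks. A minimal sketch in Python — the field widths come from the slide, but the example address is arbitrary:

```python
# Field widths from the slide: 8 + 14 + 2 = 24-bit address
TAG_BITS, LINE_BITS, WORD_BITS = 8, 14, 2

def split_address(addr):
    """Split a 24-bit address into (tag, line, word) for direct mapping."""
    word = addr & ((1 << WORD_BITS) - 1)               # lowest 2 bits
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)  # next 14 bits
    tag = addr >> (WORD_BITS + LINE_BITS)              # top 8 bits
    return tag, line, word

tag, line, word = split_address(0xABCDEF)
print(f"tag={tag:#04x} line={line:#06x} word={word}")  # tag=0xab line=0x337b word=3
```

The tag is the only part that must be stored alongside each line: the line number is implied by the line's position, and the word offset selects a byte within the fetched block.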
11. Cache line | Main memory blocks held
0 | 0, m, 2m, 3m, …, 2^s − m
1 | 1, m+1, 2m+1, …, 2^s − m + 1
…
m−1 | m−1, 2m−1, 3m−1, …, 2^s − 1
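The table follows from the mapping line = block mod m, with m = 2^14 lines in this example. A quick check (block numbers chosen arbitrarily):

```python
M = 2 ** 14  # number of cache lines (m in the table)

def cache_line(block):
    """Direct mapping: main-memory block i always lands in line i mod m."""
    return block % M

# Every block in the set {i, m+i, 2m+i, ...} maps to the same line i:
print([cache_line(b) for b in (5, M + 5, 2 * M + 5)])  # [5, 5, 5]
```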
13. Simple
Inexpensive
Fixed location for a given block
If a program repeatedly accesses two blocks that map to the same line, the cache miss rate is very high
Summary
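That conflict-miss weakness is easy to demonstrate: alternate between two blocks that share a line and every access misses. A toy direct-mapped cache, tags only, with parameters shrunk for readability (not the slide's 16K-line cache):

```python
NUM_LINES = 4  # deliberately tiny so conflicts are easy to see

def run(accesses):
    """Count misses for a sequence of block accesses in a direct-mapped cache."""
    tags = [None] * NUM_LINES              # stored tag per line
    misses = 0
    for block in accesses:
        line, tag = block % NUM_LINES, block // NUM_LINES
        if tags[line] != tag:              # cold or conflict miss
            tags[line] = tag
            misses += 1
    return misses

# Blocks 0 and 4 both map to line 0: they evict each other every time.
print(run([0, 4, 0, 4, 0, 4]))  # 6 misses out of 6 accesses
```

The same six accesses would all hit (after two cold misses) in any cache that could hold both blocks at once, which is the motivation for associative mapping.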
15. Writing into Cache
• When a memory write operation is performed, the CPU first writes into
the cache memory. The modifications the CPU makes to data held in the
cache must later be written back to main memory or to auxiliary
memory.
• The two popular cache write policies are:
Write-through
Write-back
16. Write-Through
• In a write-through cache, main memory is updated each time the CPU
writes into the cache.
• The advantage of a write-through cache is that main memory always
contains the same data as the cache.
• This characteristic is desirable in a system that uses a direct memory
access (DMA) scheme of data transfer: the I/O devices always receive
the most recent data.
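The write-through policy can be sketched in a few lines (a minimal model; plain dictionaries stand in for the cache and main memory):

```python
class WriteThroughCache:
    """On every write, update the cached copy and main memory together."""
    def __init__(self, memory):
        self.memory = memory   # dict: address -> value (main memory)
        self.lines = {}        # dict: address -> value (cached copies)

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value  # write-through: memory is always current

mem = {}
cache = WriteThroughCache(mem)
cache.write(0x10, 42)
print(mem[0x10])  # 42 -- memory is already up to date, safe for DMA readers
```

The cost, not shown here, is that every CPU write generates memory traffic, even when the same word is overwritten many times.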
17. • In a write-back scheme, only the cache memory is updated during
a write operation.
• The updated locations in the cache are marked by a flag (the dirty
bit) so that, when the word is later removed from the cache, it is
copied back into main memory.
• Words are removed from the cache from time to time to make room
for a new block of words.
Write-Back
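The write-back policy can be sketched the same way, with a dirty flag per cached word (again a minimal model, not tied to the slide's block format):

```python
class WriteBackCache:
    """Writes mark a line dirty; memory is updated only on eviction."""
    def __init__(self, memory):
        self.memory = memory  # dict: address -> value (main memory)
        self.lines = {}       # dict: address -> (value, dirty_flag)

    def write(self, addr, value):
        self.lines[addr] = (value, True)  # dirty: memory is now stale

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:                          # copy back only if modified
            self.memory[addr] = value

mem = {0x10: 0}
cache = WriteBackCache(mem)
cache.write(0x10, 99)
print(mem[0x10])  # 0  -- memory not yet updated
cache.evict(0x10)
print(mem[0x10])  # 99 -- written back on eviction
```

This saves memory traffic when the CPU rewrites the same word repeatedly, at the price of main memory being temporarily stale.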
18. Unified vs Split Caches
• Unified Cache: Data and instructions are cached in the same cache
• Split Cache: Separate caches for data and instructions
• Advantages of unified cache:
-Higher hit rate
-Balances load of instruction and data fetch
-Only one cache to design & implement
• Advantages of split cache:
-Eliminates cache contention between the instruction fetch/decode
unit and the execution unit
-Important in pipelining