CS252
Graduate Computer Architecture
Lecture 16
Caches and Memory Systems
October 29, 2000
Computer Science 252
Review: Who Cares About the Memory Hierarchy?
[Figure: processor vs. DRAM performance, 1980-2000, log scale 1 to 1000. CPU performance ("Moore's Law") improves ~60%/yr while DRAM improves ~7%/yr, so the processor-memory performance gap grows ~50%/yr ("Less' Law?").]
• Processor only thus far in course:
– CPU cost/performance, ISA, pipelined execution
• CPU-DRAM gap
– 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with an on-chip cache)
What is a cache?
• Small, fast storage used to improve average access
time to slow memory.
• Exploits spatial and temporal locality
• In computer architecture, almost everything is a cache!
– Registers a cache on variables
– First-level cache a cache on second-level cache
– Second-level cache a cache on memory
– Memory a cache on disk (virtual memory)
– TLB a cache on page table
– Branch-prediction a cache on prediction information?
[Pyramid: Proc/Regs → L1-Cache → L2-Cache → Memory → Disk, Tape, etc.; each level is bigger going down and faster going up.]
Review: Cache performance
• Miss-oriented approach to memory access:
CPUtime = IC x (CPI_Execution + MemAccess/Inst x MissRate x MissPenalty) x CycleTime
• Separating out the memory component entirely:
– AMAT = Average Memory Access Time
CPUtime = IC x (CPI_AluOps + MemAccess/Inst x AMAT) x CycleTime
AMAT = HitTime + MissRate x MissPenalty
     = (HitTime_Inst + MissRate_Inst x MissPenalty_Inst)
     + (HitTime_Data + MissRate_Data x MissPenalty_Data)
Example: Harvard Architecture
• Unified vs Separate I&D (Harvard)
• Table on page 384:
– 16KB I&D: Inst miss rate=0.64%, Data miss rate=6.47%
– 32KB unified: Aggregate miss rate=1.99%
• Which is better (ignore L2 cache)?
– Assume 33% data ops => 75% of accesses are instruction references (1.0/1.33)
– hit time = 1, miss time = 50
– Note that a data hit incurs 1 extra stall cycle in the unified cache (only one port)
AMAT_Harvard = 75% x (1 + 0.64% x 50) + 25% x (1 + 6.47% x 50) = 2.05
AMAT_Unified = 75% x (1 + 1.99% x 50) + 25% x (1 + 1 + 1.99% x 50) = 2.24
[Diagram: two organizations. Harvard: Proc with split I-Cache-1 and D-Cache-1, backed by Unified Cache-2. Unified: Proc with Unified Cache-1, backed by Unified Cache-2.]
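As a check on the arithmetic above, a minimal C sketch (my addition; the rates, times, and the one-cycle single-port stall are exactly those in the slide):

#include <stdio.h>

int main(void) {
    double f_inst = 0.75, f_data = 0.25;   /* 75%/25% instruction/data accesses */
    double hit = 1.0, penalty = 50.0;      /* cycles */

    /* Split (Harvard): separate instruction and data miss rates */
    double harvard = f_inst * (hit + 0.0064 * penalty)
                   + f_data * (hit + 0.0647 * penalty);

    /* Unified: one aggregate miss rate, plus a 1-cycle structural
       stall on data accesses (single port) */
    double unified = f_inst * (hit + 0.0199 * penalty)
                   + f_data * (hit + 1.0 + 0.0199 * penalty);

    printf("Harvard %.2f, Unified %.2f\n", harvard, unified); /* ~2.05, ~2.24 */
    return 0;
}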
Review: Improving Cache
Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Review: Miss Classification
• Classifying Misses: 3 Cs
– Compulsory—The first access to a block is not in the cache,
so the block must be brought into the cache. Also called cold
start misses or first reference misses.
(Misses in even an Infinite Cache)
– Capacity—If the cache cannot contain all the blocks needed
during execution of a program, capacity misses will occur due to
blocks being discarded and later retrieved.
(Misses in Fully Associative Size X Cache)
– Conflict—If block-placement strategy is set associative or
direct mapped, conflict misses (in addition to compulsory &
capacity misses) will occur because a block can be discarded and
later retrieved if too many blocks map to its set. Also called
collision misses or interference misses.
(Misses in N-way Associative, Size X Cache)
• More recent, 4th “C”:
– Coherence - Misses caused by cache coherence.
1. Reduce Misses via Larger Block Size
[Figure: miss rate (0% to 25%) vs. block size (16 to 256 bytes), one curve per cache size: 1K, 4K, 16K, 64K, 256K.]
2. Reduce Misses via Higher
Associativity
• 2:1 Cache Rule:
– Miss rate of a direct-mapped cache of size N ≈ miss rate of a 2-way set-associative cache of size N/2
• Beware: execution time is the only final measure!
– Will clock cycle time increase?
– Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2%
Example: Avg. Memory Access
Time vs. Miss Rate
• Example: assume CCT = 1.10 for 2-way, 1.12 for
4-way, 1.14 for 8-way vs. CCT direct mapped
Cache Size Associativity
(KB) 1-way 2-way 4-way 8-way
1 2.33 2.15 2.07 2.01
2 1.98 1.86 1.76 1.68
4 1.72 1.67 1.61 1.53
8 1.46 1.48 1.47 1.43
16 1.29 1.32 1.32 1.32
32 1.20 1.24 1.25 1.27
64 1.14 1.20 1.21 1.23
128 1.10 1.17 1.18 1.20
(In the original slide, red marked entries where A.M.A.T. is not improved by more associativity; note the 1-way column is already best at 16 KB and above.)
3. Reducing Misses via a
“Victim Cache”
• How to combine fast hit
time of direct mapped
yet still avoid conflict
misses?
• Add buffer to place data
discarded from cache
• Jouppi [1990]: 4-entry
victim cache removed
20% to 95% of conflicts
for a 4 KB direct mapped
data cache
• Used in Alpha, HP
machines
[Diagram: victim cache with four fully associative entries, each a tag + comparator holding one cache line of data, sitting beside the cache's TAGS/DATA arrays and connected to the next lower level in the hierarchy.]
4. Reducing Misses via
“Pseudo-Associativity”
• How to combine fast hit time of Direct Mapped and
have the lower conflict misses of 2-way SA cache?
• Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit)
• Drawback: CPU pipeline is hard if hit takes 1 or 2
cycles
– Better for caches not tied directly to processor (L2)
– Used in MIPS R1000 L2 cache, similar in UltraSPARC
[Timeline: Hit Time < Pseudo Hit Time < Miss Penalty.]
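A minimal sketch of the probe sequence (my illustration: flipping the top index bit is one common way to pick the "other half"; the slide does not specify the hash, and the tag array and geometry here are hypothetical):

#include <stdbool.h>
#include <stdint.h>

#define INDEX_BITS 10                        /* hypothetical: 1024 sets */
#define NSETS (1u << INDEX_BITS)

extern uint32_t tags[NSETS];                 /* direct-mapped tag array */

/* Returns true on a hit; *slow is set when it was a pseudo-hit. */
bool pa_lookup(uint32_t tag, uint32_t index, bool *slow) {
    *slow = false;
    if (tags[index] == tag) return true;     /* fast hit: primary location */
    uint32_t alt = index ^ (1u << (INDEX_BITS - 1)); /* "other half" */
    if (tags[alt] == tag) { *slow = true; return true; } /* pseudo-hit */
    return false;                            /* real miss: go to next level */
}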
5. Reducing Misses by Hardware Prefetching of Instructions & Data
• E.g., Instruction Prefetching
– Alpha 21064 fetches 2 blocks on a miss
– Extra block placed in “stream buffer”
– On miss check stream buffer
• Works with data blocks too:
– Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 streams caught 43%
– Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches
• Prefetching relies on having extra memory
bandwidth that can be used without penalty
6. Reducing Misses by
Software Prefetching Data
• Data Prefetch
– Load data into register (HP PA-RISC loads)
– Cache Prefetch: load into cache (MIPS IV, PowerPC, SPARC v. 9)
– Special prefetching instructions cannot cause faults; a form of
speculative execution
• Prefetching comes in two flavors:
– Binding prefetch: Requests load directly into register.
» Must be correct address and register!
– Non-Binding prefetch: Load into cache.
» Can be incorrect. Frees HW/SW to guess!
• Issuing Prefetch Instructions takes time
– Is cost of prefetch issues < savings in reduced misses?
– Higher superscalar reduces difficulty of issue bandwidth
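A sketch of a non-binding data prefetch using GCC/Clang's __builtin_prefetch as a stand-in for the ISA prefetch instructions named above (PREFETCH_AHEAD is a tuning assumption):

#define PREFETCH_AHEAD 16     /* assumption: how far ahead to fetch */

double sum(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)          /* non-binding: into the cache */
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], /*rw=*/0, /*locality=*/1);
        s += a[i];   /* a wrong prefetch address only costs bandwidth */
    }
    return s;
}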
7. Reducing Misses by
Compiler Optimizations
• McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks, in software
• Instructions
– Reorder procedures in memory so as to reduce conflict misses
– Profiling to look at conflicts(using tools they developed)
• Data
– Merging Arrays: improve spatial locality by single array of compound
elements vs. 2 arrays
– Loop Interchange: change nesting of loops to access data in order
stored in memory
– Loop Fusion: Combine 2 independent loops that have same looping and
some variables overlap
– Blocking: Improve temporal locality by accessing “blocks” of data
repeatedly vs. going down whole columns or rows
Merging Arrays Example
/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];
/* After: 1 array of structures */
struct merge {
int val;
int key;
};
struct merge merged_array[SIZE];
Reduces conflicts between val & key; improves spatial locality
Loop Interchange Example
/* Before */
for (k = 0; k < 100; k = k+1)
for (j = 0; j < 100; j = j+1)
for (i = 0; i < 5000; i = i+1)
x[i][j] = 2 * x[i][j];
/* After */
for (k = 0; k < 100; k = k+1)
for (i = 0; i < 5000; i = i+1)
for (j = 0; j < 100; j = j+1)
x[i][j] = 2 * x[i][j];
Sequential accesses instead of striding
through memory every 100 words; improved
spatial locality
Loop Fusion Example
/* Before */
for (i = 0; i < N; i = i+1)
for (j = 0; j < N; j = j+1)
a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
for (j = 0; j < N; j = j+1)
d[i][j] = a[i][j] + c[i][j];
/* After */
for (i = 0; i < N; i = i+1)
for (j = 0; j < N; j = j+1)
{ a[i][j] = 1/b[i][j] * c[i][j];
d[i][j] = a[i][j] + c[i][j];}
2 misses per access to a & c become one miss per access; improves temporal locality
Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
for (j = 0; j < N; j = j+1)
{r = 0;
for (k = 0; k < N; k = k+1){
r = r + y[i][k]*z[k][j];};
x[i][j] = r;
};
• Two Inner Loops:
– Read all NxN elements of z[]
– Read N elements of 1 row of y[] repeatedly
– Write N elements of 1 row of x[]
• Capacity Misses a function of N & Cache Size:
– 2N^3 + N^2 memory words accessed (assuming no conflict; otherwise …)
• Idea: compute on BxB submatrix that fits
Blocking Example
/* After (upper bounds written as min(jj+B,N)/min(kk+B,N) so each block
   covers all B columns; the often-reprinted min(jj+B-1,N) form skips
   one element per block) */
for (jj = 0; jj < N; jj = jj+B)
for (kk = 0; kk < N; kk = kk+B)
for (i = 0; i < N; i = i+1)
for (j = jj; j < min(jj+B,N); j = j+1)
{r = 0;
for (k = kk; k < min(kk+B,N); k = k+1) {
r = r + y[i][k]*z[k][j];};
x[i][j] = x[i][j] + r;
};
• B is called the Blocking Factor
• Capacity misses fall from 2N^3 + N^2 to N^3/B + 2N^2
• Conflict misses too?
Reducing Conflict Misses by Blocking
• Conflict misses in non-fully-associative caches vs. blocking size
– Lam et al. [1991]: a blocking factor of 24 had one-fifth the misses of 48, despite both fitting in the cache
[Figure: miss rate (0 to 0.1) vs. blocking factor (0 to 150) for a fully associative and a direct-mapped cache.]
Summary of Compiler Optimizations to Reduce Cache Misses (by hand)
[Figure: performance improvement (1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking on compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7).]
Summary: Miss Rate Reduction
• 3 Cs: Compulsory, Capacity, Conflict
1. Reduce Misses via Larger Block Size
2. Reduce Misses via Higher Associativity
3. Reducing Misses via Victim Cache
4. Reducing Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Misses by Compiler Optimizations
• Prefetching comes in two flavors:
– Binding prefetch: Requests load directly into register.
» Must be correct address and register!
– Non-Binding prefetch: Load into cache.
» Can be incorrect. Frees HW/SW to guess!
CPUtime = IC x (CPI_Execution + MemAccess/Inst x MissRate x MissPenalty) x CycleTime
Administrative
• Final project descriptions due today by 5pm!
• Submit web site via email:
– Web site will contain all of your project results, etc.
– Minimum initial site: Cool title, link to proposal
• Anyone need resources?
– NOW: apparently can get account via web site
– Millenium: can get account via web site
– SimpleScalar: info on my web page
Improving Cache Performance
Continued
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
1. Reducing Miss Penalty:
Read Priority over Write on Miss
[Diagram: CPU with "in"/"out" paths; writes drain through a write buffer to DRAM (or lower memory).]
1. Reducing Miss Penalty:
Read Priority over Write on Miss
• Write-through with write buffers creates potential RAW conflicts between buffered writes and main-memory reads on cache misses
– If we simply wait for the write buffer to empty, the read miss penalty can increase (by 50% on the old MIPS 1000)
– Check write buffer contents before read;
if no conflicts, let the memory access continue
• Alternative: Write Back
– Read miss replacing dirty block
– Normal: Write dirty block to memory, and then do the read
– Instead copy the dirty block to a write buffer, then do the
read, and then do the write
– CPU stall less since restarts as soon as do read
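A minimal sketch of the write-buffer check on a read miss (my illustration; the buffer depth and entry format are hypothetical):

#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4                       /* hypothetical buffer depth */

struct wb_entry { bool valid; uint32_t block_addr; };
extern struct wb_entry write_buffer[WB_ENTRIES];

/* On a read miss, let the memory read bypass buffered writes only if
   no pending write targets the same block (no RAW hazard). */
bool read_may_bypass(uint32_t block_addr) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (write_buffer[i].valid && write_buffer[i].block_addr == block_addr)
            return false;                  /* conflict: drain or forward first */
    return true;
}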
2. Reduce Miss Penalty:
Subblock Placement
• Don’t have to load full block on a miss
• Have valid bits per subblock to indicate valid
• (Originally invented to reduce tag storage)
[Diagram: one tag per block; a valid bit per subblock.]
3. Reduce Miss Penalty:
Early Restart and Critical Word
First
• Don’t wait for full block to be loaded before
restarting CPU
– Early restart—As soon as the requested word of the block
arrives, send it to the CPU and let the CPU continue execution
– Critical Word First—Request the missed word first from memory
and send it to the CPU as soon as it arrives; let the CPU continue
execution while filling the rest of the words in the block. Also
called wrapped fetch and requested word first
• Generally useful only with large blocks
• Spatial locality can be a problem: programs tend to want the next sequential word soon, so it is not clear early restart helps
4. Reduce Miss Penalty:
Non-blocking Caches to reduce stalls
on misses
• Non-blocking cache or lockup-free cache allow data
cache to continue to supply cache hits during a miss
– requires F/E bits on registers or out-of-order execution
– requires multi-bank memories
• “hit under miss” reduces the effective miss penalty
by working during miss vs. ignoring CPU requests
• “hit under multiple miss” or “miss under miss” may
further lower the effective miss penalty by
overlapping multiple misses
– Significantly increases the complexity of the cache controller as
there can be multiple outstanding memory accesses
– Requires multiple memory banks (otherwise multiple outstanding misses cannot be serviced)
– Pentium Pro allows 4 outstanding memory misses
Value of Hit Under Miss for SPEC
• FP programs on average: AMAT= 0.68 -> 0.52 -> 0.34 -> 0.26
• Int programs on average: AMAT= 0.24 -> 0.20 -> 0.19 -> 0.19
• 8 KB Data Cache, Direct Mapped, 32B block, 16 cycle miss
[Figure: "Hit under i Misses"; average memory access time (0 to 2) for SPEC benchmarks (integer: eqntott, espresso, xlisp, compress, mdljsp2; floating point: ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora), with bars for hit-under 0→1, 1→2, and 2→64 misses vs. the base.]
5. Second level cache
• L2 Equations
AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
• Definitions:
– Local miss rate— misses in this cache divided by the total number of
memory accesses to this cache (Miss rateL2)
– Global miss rate—misses in this cache divided by the total number of
memory accesses generated by the CPU
(Miss RateL1 x Miss RateL2)
– Global Miss Rate is what matters
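The same equations in code (a minimal sketch; times in cycles, rates as fractions):

/* Two-level AMAT, straight from the equations above. */
double l2_amat(double hit_l1, double mr_l1,
               double hit_l2, double local_mr_l2, double mp_l2) {
    double mp_l1 = hit_l2 + local_mr_l2 * mp_l2;  /* Miss Penalty_L1 */
    return hit_l1 + mr_l1 * mp_l1;
}

/* Global L2 miss rate: misses per memory access generated by the CPU. */
double global_mr_l2(double mr_l1, double local_mr_l2) {
    return mr_l1 * local_mr_l2;
}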
Comparing Local and Global
Miss Rates
• 32 KByte 1st level cache;
Increasing 2nd level cache
• Global miss rate close to
single level cache rate
provided L2 >> L1
• Don’t use local miss rate
• L2 not tied to CPU clock
cycle!
• Cost & A.M.A.T.
• Generally Fast Hit Times
and fewer misses
• Since hits are few, target
miss reduction
[Figure: local vs. global L2 miss rate as a function of cache size, plotted on linear and log scales.]
Reducing Misses:
Which apply to L2 Cache?
• Reducing Miss Rate
1. Reduce Misses via Larger Block Size
2. Reduce Conflict Misses via Higher Associativity
3. Reducing Conflict Misses via Victim Cache
4. Reducing Conflict Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Capacity/Conf. Misses by Compiler Optimizations
L2 cache block size & A.M.A.T.
• 32KB L1; 8-byte path to memory
[Figure: relative CPU time (1.0 to 2.0) vs. L2 block size; 16B: 1.36, 32B: 1.28, 64B: 1.27, 128B: 1.34, 256B: 1.54, 512B: 1.95. 64-byte blocks give the minimum.]
Reducing Miss Penalty Summary
• Five techniques
– Read priority over write on miss
– Subblock placement
– Early Restart and Critical Word First on miss
– Non-blocking Caches (Hit under Miss, Miss under Miss)
– Second Level Cache
• Can be applied recursively to Multilevel Caches
– Danger is that time to DRAM will grow with multiple levels in
between
– First attempts at L2 caches can make things worse, since
increased worst case is worse
CPUtime = IC x (CPI_Execution + MemAccess/Inst x MissRate x MissPenalty) x CycleTime
Main Memory Background
• Performance of Main Memory:
– Latency: Cache Miss Penalty
» Access Time: time between request and word arrives
» Cycle Time: time between requests
– Bandwidth: I/O & Large Block Miss Penalty (L2)
• Main Memory is DRAM: Dynamic Random Access Memory
– Dynamic since needs to be refreshed periodically (8 ms, 1% time)
– Addresses divided into 2 halves (Memory as a 2D matrix):
» RAS or Row Access Strobe
» CAS or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
– No refresh (6 transistors/bit vs. 1 transistor)
– Size: DRAM/SRAM ≈ 4-8x; Cost & cycle time: SRAM/DRAM ≈ 8-16x
Main Memory Deep Background
• “Out-of-Core”, “In-Core,” “Core Dump”?
• “Core memory”?
• Non-volatile, magnetic
• Lost to 4 Kbit DRAM (today using 64 Mbit DRAM)
• Access time 750 ns, cycle time 1500-3000 ns
DRAM logical organization
(4 Mbit)
• Square root of bits per RAS/CAS
[Diagram: 11 address lines A0…A10 multiplexed for row/column; 2,048 x 2,048 memory array of storage cells on word lines; sense amps & I/O; column decoder drives the D and Q pins.]
4 Key DRAM Timing
Parameters
• tRAC: minimum time from RAS line falling to
the valid data output.
– Quoted as the speed of a DRAM when you buy it (the number on the purchase sheet)
– A typical 4 Mbit DRAM has tRAC = 60 ns
• tRC: minimum time from the start of one row
access to the start of the next.
– tRC = 110 ns for a 4Mbit DRAM with a tRAC of 60 ns
• tCAC: minimum time from CAS line falling to
valid data output.
– 15 ns for a 4Mbit DRAM with a tRAC of 60 ns
• tPC: minimum time from the start of one
column access to the start of the next.
– 35 ns for a 4Mbit DRAM with a tRAC of 60 ns
DRAM Performance
• A 60 ns (tRAC) DRAM can
– perform a row access only every 110 ns (tRC)
– perform column access (tCAC) in 15 ns, but time
between column accesses is at least 35 ns (tPC).
» In practice, external address delays and turning
around buses make it 40 to 50 ns
• These times do not include the time to drive
the addresses off the microprocessor nor
the memory controller overhead!
DRAM History
• DRAMs: capacity +60%/yr, cost –30%/yr
– 2.5X cells/area, 1.5X die size in 3 years
• ‘98 DRAM fab line costs $2B
– DRAM only: density, leakage v. speed
• Rely on increasing no. of computers & memory
per computer (60% market)
– SIMM or DIMM is replaceable unit
=> computers use any generation DRAM
• Commodity, second source industry
=> high volume, low profit, conservative
– Little organization innovation in 20 years
• Order of importance: 1) Cost/bit 2) Capacity
– First RAMBUS: 10X BW, +30% cost => little impact
DRAM Future: 1 Gbit DRAM
(ISSCC ‘96; production ‘02?)
                 Mitsubishi       Samsung
• Blocks         512 x 2 Mbit     1024 x 1 Mbit
• Clock          200 MHz          250 MHz
• Data Pins      64               16
• Die Size       24 x 24 mm       31 x 21 mm
  – Sizes will be much smaller in production
• Metal Layers   3                4
• Technology     0.15 micron      0.16 micron
Main Memory Performance
• Simple:
– CPU, Cache, Bus, Memory
same width
(32 or 64 bits)
• Wide:
– CPU/Mux 1 word;
Mux/Cache, Bus, Memory
N words (Alpha: 64 bits &
256 bits; UltraSPARC 512)
• Interleaved:
– CPU, Cache, Bus 1 word:
Memory N Modules
(4 Modules); example is
word interleaved
Main Memory Performance
• Timing model (word size is 32 bits)
– 1 to send address,
– 6 access time, 1 to send data
– Cache Block is 4 words
• Simple M.P. = 4 x (1+6+1) = 32
• Wide M.P. = 1 + 6 + 1 = 8
• Interleaved M.P. = 1 + 6 + 4x1 = 11
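The three miss penalties follow directly from the timing model (a minimal sketch of the arithmetic):

#include <stdio.h>

int main(void) {
    int addr = 1, access = 6, xfer = 1, words = 4; /* cycles; 4-word block */

    int simple      = words * (addr + access + xfer); /* 4 x 8 = 32 */
    int wide        = addr + access + xfer;           /* one wide transfer = 8 */
    int interleaved = addr + access + words * xfer;   /* overlapped accesses = 11 */

    printf("%d %d %d\n", simple, wide, interleaved);  /* prints 32 8 11 */
    return 0;
}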
Independent Memory Banks
• Memory banks for independent accesses
vs. faster sequential accesses
– Multiprocessor
– I/O
– CPU with Hit under n Misses, Non-blocking Cache
• Superbank: all memory active on one block transfer
(or Bank)
• Bank: portion within a superbank that is word
interleaved (or Subbank)
[Address layout: Superbank Number | Superbank Offset, with the superbank offset further split into Bank Number | Bank Offset.]
Independent Memory Banks
• How many banks?
number of banks ≥ number of clocks to access a word in a bank
– For sequential accesses; otherwise the next access returns to the original bank before it has the next word ready
– (as in the vector case)
• Increasing DRAM => fewer chips => harder to have
banks
DRAMs per PC over Time
Minimum        DRAM Generation
Memory Size    '86 (1 Mb)  '89 (4 Mb)  '92 (16 Mb)  '96 (64 Mb)  '99 (256 Mb)  '02 (1 Gb)
4 MB           32          8
8 MB                       16          4
16 MB                                  8            2
32 MB                                               4            1
64 MB                                               8            2
128 MB                                                           4             1
256 MB                                                           8             2
Avoiding Bank Conflicts
• Lots of banks
int x[256][512];
for (j = 0; j < 512; j = j+1)
for (i = 0; i < 256; i = i+1)
x[i][j] = 2 * x[i][j];
• Even with 128 banks, since 512 is multiple of 128,
conflict on word accesses
• SW: loop interchange or declaring array not power of
2 (“array padding”)
• HW: Prime number of banks
– bank number = address mod number of banks
– address within bank = address / number of words in bank
– modulo & divide per memory access with prime no. banks?
– address within bank = address mod number of words in bank
– bank number? easy if there are 2^N words per bank
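A sketch of both fixes, mirroring the array above (my illustration; the 513-word row and the 7-bank choice are arbitrary examples):

#define NBANKS 7                     /* a prime bank count (illustrative) */

int x[256][512 + 1];                 /* SW "array padding": stride 513 is odd,
                                        so it is co-prime with any power-of-2
                                        bank count and column walks now sweep
                                        all banks instead of hitting one */

/* HW option: prime number of banks */
unsigned bank_number(unsigned addr) { return addr % NBANKS; }

unsigned addr_within_bank(unsigned addr, unsigned words_per_bank) {
    return addr % words_per_bank;    /* cheap when words_per_bank is 2^N:
                                        just the low N address bits */
}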
Fast Bank Number
• Chinese Remainder Theorem: as long as two sets of integers a_i and b_i follow these rules:
b_i = x mod a_i, 0 ≤ b_i < a_i, 0 ≤ x < a_0 x a_1 x a_2
and a_i and a_j are co-prime if i ≠ j, then the integer x has only one solution (unambiguous mapping):
– bank number = b_0, number of banks = a_0 (= 3 in example)
– address within bank = b_1, number of words in bank = a_1 (= 8 in example)
– N-word address 0 to N-1, prime number of banks, words per bank a power of 2

                      Seq. Interleaved    Modulo Interleaved
Bank Number:          0   1   2           0   1   2
Address within Bank:
0                     0   1   2           0   16  8
1                     3   4   5           9   1   17
2                     6   7   8           18  10  2
3                     9   10  11          3   19  11
4                     12  13  14          12  4   20
5                     15  16  17          21  13  5
6                     18  19  20          6   22  14
7                     21  22  23          15  7   23
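The modulo-interleaved columns above can be regenerated from the CRT mapping (3 banks, 8 words per bank, as in the example):

#include <stdio.h>

int main(void) {
    /* 3 banks (prime) x 8 words per bank (power of 2): address x in 0..23
       maps to bank x mod 3 and slot x mod 8. Since gcd(3,8) = 1, the CRT
       guarantees the mapping is one-to-one. */
    int table[8][3];
    for (int x = 0; x < 24; x++)
        table[x % 8][x % 3] = x;

    for (int slot = 0; slot < 8; slot++)   /* rows: address within bank */
        printf("%d: %2d %2d %2d\n", slot,
               table[slot][0], table[slot][1], table[slot][2]);
    return 0;  /* reproduces the "Modulo Interleaved" table above */
}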
Fast Memory Systems: DRAM specific
• Multiple CAS accesses: several names (page mode)
– Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address gap;
what will they cost, will they survive?
– RAMBUS: startup company; reinvent DRAM interface
» Each Chip a module vs. slice of memory
» Short bus between CPU and chips
» Does own refresh
» Variable amount of data returned
» 1 byte / 2 ns (500 MB/s per chip)
– Synchronous DRAM: 2 banks on chip, a clock signal to DRAM,
transfer synchronous to system clock (66 - 150 MHz)
– Intel claims RAMBUS Direct (16 b wide) is future PC memory
• Niche memory or main memory?
– e.g., Video RAM for frame buffers, DRAM + fast serial output
DRAM Latency >> BW
• More app bandwidth => more cache misses => more DRAM RAS/CAS activity
• Application BW => lower DRAM latency
• RAMBUS, Synch DRAM increase BW but at higher latency
• EDO DRAM < 5% in PC
[Diagram: Proc with I$ and D$, L2$, and a bus out to four DRAM chips.]
Potential
DRAM Crossroads?
• After 20 years of 4X every 3 years,
running into wall? (64Mb - 1 Gb)
• How can keep $1B fab lines full if buy
fewer DRAMs per computer?
• Cost/bit –30%/yr if stop 4X/3 yr?
• What will happen to $40B/yr DRAM
industry?
Main Memory Summary
• Wider Memory
• Interleaved Memory: for sequential or
independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM specific optimizations: page mode &
Specialty DRAM
• DRAM future less rosy?
Big storage (such as DRAM/DISK):
Potential for Errors!
• On board discussion of Parity and ECC.
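As a minimal sketch of the parity half of that discussion (my illustration; ECC schemes such as SECDED layer several overlapping parity groups over the word):

#include <stdint.h>

/* Even parity over a 64-bit word: returns 1 if an odd number of bits
   are set. Storing this one extra bit lets memory detect (though not
   correct) any single-bit error. */
static int parity64(uint64_t w) {
    w ^= w >> 32; w ^= w >> 16; w ^= w >> 8;
    w ^= w >> 4;  w ^= w >> 2;  w ^= w >> 1;
    return (int)(w & 1);
}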
Review: Improving Cache
Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
1. Fast Hit times
via Small and Simple Caches
• Why does the Alpha 21164 have 8KB instruction and 8KB data caches plus a 96KB second-level cache on chip?
– Small caches keep the hit time, and thus the clock rate, low
• Direct mapped, on chip
2. Fast hits by Avoiding Address
Translation
• Send virtual address to cache? Called Virtually
Addressed Cache or just Virtual Cache vs. Physical
Cache
– Every time the process is switched, the cache logically must be flushed; otherwise we get false hits
» Cost is the time to flush + "compulsory" misses from an empty cache
– Dealing with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address
– I/O must interact with the cache, so it needs virtual addresses
• Solution to aliases
– HW guarantees uniqueness, or SW forces aliases to agree in the bits that cover the index field; with direct mapping they then land in the same block and must be unique. The SW technique is called page coloring
• Solution to cache flush
– Add process identifier tag that identifies process as well as address
within process: can’t get a hit if wrong process
Virtually Addressed Caches
[Diagram: three organizations.
Conventional: CPU → TB → $ → MEM; the VA is translated to a PA before the cache is accessed.
Virtually addressed cache: CPU → $ → TB → MEM; translate only on a miss, but synonym problem.
Overlapped: CPU accesses $ and TB in parallel (VA index with PA tags, or VA tags checked against an L2 $); overlaps $ access with VA translation, which requires the $ index to remain invariant across translation.]
2. Fast Cache Hits by Avoiding
Translation: Process ID impact
• Black is uniprocess
• Light Gray is multiprocess
when flush cache
• Dark Gray is multiprocess
when use Process ID tag
• Y axis: Miss Rates up to 20%
• X axis: Cache size from 2 KB
to 1024 KB
2. Fast Cache Hits by Avoiding
Translation: Index with Physical
Portion of Address
• If index is physical part of address, can
start tag access in parallel with translation
so that can compare to physical tag
• Limits a direct-mapped cache to the page size: what if we want bigger caches using the same trick?
– Higher associativity moves the barrier to the right
– Page coloring
[Address layout: Page Address | Page Offset, overlaid on Address Tag | Index | Block Offset; the index and block offset must fit within the page offset.]
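A sketch of the constraint (my illustration; the function name and example geometry are hypothetical):

#include <stdbool.h>

/* The cache can be indexed in parallel with translation only if
   index + block offset fit in the page offset, i.e.
   cache_size / associativity <= page_size. */
bool index_fits_in_page(unsigned cache_bytes, unsigned assoc,
                        unsigned page_bytes) {
    return cache_bytes / assoc <= page_bytes;
}
/* e.g., an 8 KB direct-mapped cache with 8 KB pages fits;
   a 16 KB cache would need 2-way associativity (or page coloring). */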
3. Fast Hit Times Via Pipelined Writes
• Pipeline the tag check and the cache update as separate stages; the current write's tag check overlaps the previous write's cache update
• Only STORES occupy the pipeline; it is empty during a miss

Instruction        Tag-check stage    Cache-update stage
Store r2, (r1)     Check r1
Add                --
Sub                --
Store r4, (r3)     Check r3           M[r1] <- r2

• The shaded entry is the "Delayed Write Buffer"; it must be checked on reads, which either complete the write or read from the buffer
4. Fast Writes on Misses Via
Small Subblocks
• If most writes are 1 word, subblock size is 1 word, &
write through then always write subblock & tag
immediately
– Tag match and valid bit already set: Writing the block was proper, &
nothing lost by setting valid bit on again.
– Tag match and valid bit not set: The tag match means that this is the
proper block; writing the data into the subblock makes it appropriate to
turn the valid bit on.
– Tag mismatch: This is a miss and will modify the data portion of the
block. Since write-through cache, no harm was done; memory still has
an up-to-date copy of the old value. Only the tag to the address of
the write and the valid bits of the other subblock need be changed
because the valid bit for this subblock has already been set
• Doesn’t work with write back due to last case
Cache Optimization Summary
Technique MR MP HT Complexity
Larger Block Size + – 0
Higher Associativity + – 1
Victim Caches + 2
Pseudo-Associative Caches + 2
HW Prefetching of Instr/Data + 2
Compiler Controlled Prefetching + 3
Compiler Reduce Misses + 0
Priority to Read Misses + 1
Subblock Placement + + 1
Early Restart & Critical Word 1st + 2
Non-Blocking Caches + 3
Second Level Caches + 2
Small & Simple Caches – + 0
Avoiding Address Translation + 2
Pipelining Writes + 1
(MR = miss rate, MP = miss penalty, HT = hit time; + helps, – hurts)
What is the Impact of What
You’ve Learned About Caches?
• 1960-1985: Speed
= ƒ(no. operations)
• 1990
– Pipelined
Execution &
Fast Clock Rate
– Out-of-Order
execution
– Superscalar
Instruction Issue
• 1998: Speed =
ƒ(non-cached memory accesses)
• What does this mean for
– Compilers? Operating Systems? Algorithms? Data Structures?
[Figure: CPU vs. DRAM performance, 1980-2000 on a log scale (1 to 1000), repeated from the opening slide.]
Cache Cross Cutting Issues
• Superscalar CPU & Number Cache Ports
must match: number memory
accesses/cycle?
• Speculative Execution and non-faulting
option on memory/TLB
• Parallel Execution vs. Cache locality
– Want far separation to find independent operations vs.
want reuse of data accesses to avoid misses
• I/O and consistency: caches => multiple copies of data
– which must be kept consistent
Alpha 21064
• Separate Instr & Data
TLB & Caches
• TLBs fully associative
• TLB updates in SW
(“Priv Arch Libr”)
• Caches 8KB direct
mapped, write thru
• Critical 8 bytes first
• Prefetch instr. stream
buffer
• 2 MB L2 cache, direct
mapped, WB (off-chip)
• 256 bit path to main
memory, 4 x 64-bit
modules
• Victim Buffer: to give
read priority over
write
• 4 entry write buffer
between D$ & L2$
[Diagram: 21064 with an instruction stream buffer, a victim buffer, and the write buffer between the instruction/data caches and the L2 cache.]
Alpha Memory Performance: Miss Rates of SPEC92
[Figure: miss rate on a log scale (0.01% to 100%) for AlphaSort, Li, Compress, Ear, Tomcatv, Spice, with bars for the 8K I$, 8K D$, and 2M L2. Annotated points: I$ miss = 2%, D$ miss = 13%, L2 miss = 0.6%; I$ miss = 1%, D$ miss = 21%, L2 miss = 0.3%; I$ miss = 6%, D$ miss = 32%, L2 miss = 10%.]
Alpha CPI Components
[Figure: CPI (0 to 5) for AlphaSort, Espresso, Sc, Mdljsp2, Ear, Alvinn, Mdljp2, broken into D$, I$, L2, instruction stall, and other components.]
• Instruction stall: branch mispredict (green); data cache (blue); instruction cache (yellow); L2$ (pink)
• Other: compute + register conflicts, structural conflicts
Pitfall: Predicting Cache Performance from Different Programs (ISA, compiler, ...)
• 4KB data cache: miss rate 8%, 12%, or 28%?
• 1KB instruction cache: miss rate 0%, 3%, or 10%?
• Alpha vs. MIPS for an 8KB data $: 17% vs. 10%
• Why 2x Alpha vs. MIPS?
[Figure: miss rate (0% to 35%) vs. cache size (1 to 128 KB) for data and instruction caches on tomcatv, gcc, and espresso.]
Pitfall: Simulating Too Small an Address Trace
[Figure: cumulative average memory access time (1 to 4.5) vs. instructions executed (0 to 12 billion); the AMAT is still drifting after billions of instructions.]
I$ = 4 KB, B=16B
D$ = 4 KB, B=16B
L2 = 512 KB, B=128B
MP = 12, 200
Main Memory Summary
• Wider Memory
• Interleaved Memory: for sequential or
independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM specific optimizations: page mode &
Specialty DRAM
• DRAM future less rosy?
Cache Optimization Summary
Technique MR MP HT Complexity
Larger Block Size + – 0
Higher Associativity + – 1
Victim Caches + 2
Pseudo-Associative Caches + 2
HW Prefetching of Instr/Data + 2
Compiler Controlled Prefetching + 3
Compiler Reduce Misses + 0
Priority to Read Misses + 1
Subblock Placement + + 1
Early Restart & Critical Word 1st + 2
Non-Blocking Caches + 3
Second Level Caches + 2
Small & Simple Caches – + 0
Avoiding Address Translation + 2
Pipelining Writes + 1
(MR = miss rate, MP = miss penalty, HT = hit time; + helps, – hurts)
Practical Memory Hierarchy
• Issue is NOT inventing new mechanisms
• Issue is taste: selecting among many alternatives to put together a memory hierarchy whose parts fit well together
– e.g., L1 Data cache write through, L2 Write back
– e.g., L1 small for fast hit time/clock cycle,
– e.g., L2 big enough to avoid going to DRAM?

More Related Content

Similar to lec16-memory.ppt

Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolversinside-BigData.com
 
Short.course.introduction.to.vhdl
Short.course.introduction.to.vhdlShort.course.introduction.to.vhdl
Short.course.introduction.to.vhdlRavi Sony
 
Low latency & mechanical sympathy issues and solutions
Low latency & mechanical sympathy  issues and solutionsLow latency & mechanical sympathy  issues and solutions
Low latency & mechanical sympathy issues and solutionsJean-Philippe BEMPEL
 
Accelerate Reed-Solomon coding for Fault-Tolerance in RAID-like system
Accelerate Reed-Solomon coding for Fault-Tolerance in RAID-like systemAccelerate Reed-Solomon coding for Fault-Tolerance in RAID-like system
Accelerate Reed-Solomon coding for Fault-Tolerance in RAID-like systemShuai Yuan
 
Beyond the DSL - Unlocking the power of Kafka Streams with the Processor API
Beyond the DSL - Unlocking the power of Kafka Streams with the Processor APIBeyond the DSL - Unlocking the power of Kafka Streams with the Processor API
Beyond the DSL - Unlocking the power of Kafka Streams with the Processor APIconfluent
 
Hailey_Database_Performance_Made_Easy_through_Graphics.pdf
Hailey_Database_Performance_Made_Easy_through_Graphics.pdfHailey_Database_Performance_Made_Easy_through_Graphics.pdf
Hailey_Database_Performance_Made_Easy_through_Graphics.pdfcookie1969
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneEnkitec
 
Oracle Database In-Memory Option in Action
Oracle Database In-Memory Option in ActionOracle Database In-Memory Option in Action
Oracle Database In-Memory Option in ActionTanel Poder
 
Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...
Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...
Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...Akihiro Hayashi
 
Ceph Day Netherlands - Ceph @ BIT
Ceph Day Netherlands - Ceph @ BIT Ceph Day Netherlands - Ceph @ BIT
Ceph Day Netherlands - Ceph @ BIT Ceph Community
 
Introduction to Big Data
Introduction to Big DataIntroduction to Big Data
Introduction to Big DataAlbert Bifet
 
Meeting the challenges of OLTP Big Data with Scylla
Meeting the challenges of OLTP Big Data with ScyllaMeeting the challenges of OLTP Big Data with Scylla
Meeting the challenges of OLTP Big Data with ScyllaScyllaDB
 
L3-.pptx
L3-.pptxL3-.pptx
L3-.pptxasdq4
 
23_Advanced_Processors controller system
23_Advanced_Processors controller system23_Advanced_Processors controller system
23_Advanced_Processors controller systemstellan7
 
CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...
CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...
CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...Thom Lane
 
MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)
MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)
MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)Jean-François Gagné
 
Introduction to Apache Cassandra™ + What’s New in 4.0
Introduction to Apache Cassandra™ + What’s New in 4.0Introduction to Apache Cassandra™ + What’s New in 4.0
Introduction to Apache Cassandra™ + What’s New in 4.0DataStax
 
Network Programming: Data Plane Development Kit (DPDK)
Network Programming: Data Plane Development Kit (DPDK)Network Programming: Data Plane Development Kit (DPDK)
Network Programming: Data Plane Development Kit (DPDK)Andriy Berestovskyy
 

Similar to lec16-memory.ppt (20)

Adaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and EigensolversAdaptive Linear Solvers and Eigensolvers
Adaptive Linear Solvers and Eigensolvers
 
Short.course.introduction.to.vhdl
Short.course.introduction.to.vhdlShort.course.introduction.to.vhdl
Short.course.introduction.to.vhdl
 
Low latency & mechanical sympathy issues and solutions
Low latency & mechanical sympathy  issues and solutionsLow latency & mechanical sympathy  issues and solutions
Low latency & mechanical sympathy issues and solutions
 
Accelerate Reed-Solomon coding for Fault-Tolerance in RAID-like system
Accelerate Reed-Solomon coding for Fault-Tolerance in RAID-like systemAccelerate Reed-Solomon coding for Fault-Tolerance in RAID-like system
Accelerate Reed-Solomon coding for Fault-Tolerance in RAID-like system
 
MaPU-HPCA2016
MaPU-HPCA2016MaPU-HPCA2016
MaPU-HPCA2016
 
Beyond the DSL - Unlocking the power of Kafka Streams with the Processor API
Beyond the DSL - Unlocking the power of Kafka Streams with the Processor APIBeyond the DSL - Unlocking the power of Kafka Streams with the Processor API
Beyond the DSL - Unlocking the power of Kafka Streams with the Processor API
 
Fmcad08
Fmcad08Fmcad08
Fmcad08
 
Hailey_Database_Performance_Made_Easy_through_Graphics.pdf
Hailey_Database_Performance_Made_Easy_through_Graphics.pdfHailey_Database_Performance_Made_Easy_through_Graphics.pdf
Hailey_Database_Performance_Made_Easy_through_Graphics.pdf
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry Osborne
 
Oracle Database In-Memory Option in Action
Oracle Database In-Memory Option in ActionOracle Database In-Memory Option in Action
Oracle Database In-Memory Option in Action
 
Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...
Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...
Exploring Compiler Optimization Opportunities for the OpenMP 4.x Accelerator...
 
Ceph Day Netherlands - Ceph @ BIT
Ceph Day Netherlands - Ceph @ BIT Ceph Day Netherlands - Ceph @ BIT
Ceph Day Netherlands - Ceph @ BIT
 
Introduction to Big Data
Introduction to Big DataIntroduction to Big Data
Introduction to Big Data
 
Meeting the challenges of OLTP Big Data with Scylla
Meeting the challenges of OLTP Big Data with ScyllaMeeting the challenges of OLTP Big Data with Scylla
Meeting the challenges of OLTP Big Data with Scylla
 
L3-.pptx
L3-.pptxL3-.pptx
L3-.pptx
 
23_Advanced_Processors controller system
23_Advanced_Processors controller system23_Advanced_Processors controller system
23_Advanced_Processors controller system
 
CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...
CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...
CIFAR-10 for DAWNBench: Wide ResNets, Mixup Augmentation and "Super Convergen...
 
MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)
MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)
MySQL Parallel Replication: All the 5.7 and 8.0 Details (LOGICAL_CLOCK)
 
Introduction to Apache Cassandra™ + What’s New in 4.0
Introduction to Apache Cassandra™ + What’s New in 4.0Introduction to Apache Cassandra™ + What’s New in 4.0
Introduction to Apache Cassandra™ + What’s New in 4.0
 
Network Programming: Data Plane Development Kit (DPDK)
Network Programming: Data Plane Development Kit (DPDK)Network Programming: Data Plane Development Kit (DPDK)
Network Programming: Data Plane Development Kit (DPDK)
 

More from AshokRachapalli1

17.INTRODUCTION TO SCHEMA REFINEMENT.pptx
17.INTRODUCTION TO SCHEMA REFINEMENT.pptx17.INTRODUCTION TO SCHEMA REFINEMENT.pptx
17.INTRODUCTION TO SCHEMA REFINEMENT.pptxAshokRachapalli1
 
joins in dbms its describes about how joins are important and necessity in d...
joins in dbms  its describes about how joins are important and necessity in d...joins in dbms  its describes about how joins are important and necessity in d...
joins in dbms its describes about how joins are important and necessity in d...AshokRachapalli1
 
6.Database Languages lab-1.pptx
6.Database Languages lab-1.pptx6.Database Languages lab-1.pptx
6.Database Languages lab-1.pptxAshokRachapalli1
 
inputoutputorganization-140722085906-phpapp01.pptx
inputoutputorganization-140722085906-phpapp01.pptxinputoutputorganization-140722085906-phpapp01.pptx
inputoutputorganization-140722085906-phpapp01.pptxAshokRachapalli1
 
arithmeticmicrooperations-180130061637.pptx
arithmeticmicrooperations-180130061637.pptxarithmeticmicrooperations-180130061637.pptx
arithmeticmicrooperations-180130061637.pptxAshokRachapalli1
 
Computer Network Architecture.pptx
Computer Network Architecture.pptxComputer Network Architecture.pptx
Computer Network Architecture.pptxAshokRachapalli1
 
Computer Network Types.pptx
Computer Network Types.pptxComputer Network Types.pptx
Computer Network Types.pptxAshokRachapalli1
 
dataencoding-150701201133-lva1-app6891.pptx
dataencoding-150701201133-lva1-app6891.pptxdataencoding-150701201133-lva1-app6891.pptx
dataencoding-150701201133-lva1-app6891.pptxAshokRachapalli1
 
Computer Organization and Architecture.pptx
Computer Organization and Architecture.pptxComputer Organization and Architecture.pptx
Computer Organization and Architecture.pptxAshokRachapalli1
 

More from AshokRachapalli1 (20)

17.INTRODUCTION TO SCHEMA REFINEMENT.pptx
17.INTRODUCTION TO SCHEMA REFINEMENT.pptx17.INTRODUCTION TO SCHEMA REFINEMENT.pptx
17.INTRODUCTION TO SCHEMA REFINEMENT.pptx
 
joins in dbms its describes about how joins are important and necessity in d...
joins in dbms  its describes about how joins are important and necessity in d...joins in dbms  its describes about how joins are important and necessity in d...
joins in dbms its describes about how joins are important and necessity in d...
 
6.Database Languages lab-1.pptx
6.Database Languages lab-1.pptx6.Database Languages lab-1.pptx
6.Database Languages lab-1.pptx
 
Chapter5 (1).ppt
Chapter5 (1).pptChapter5 (1).ppt
Chapter5 (1).ppt
 
Cache Memory.pptx
Cache Memory.pptxCache Memory.pptx
Cache Memory.pptx
 
Addressing Modes.pptx
Addressing Modes.pptxAddressing Modes.pptx
Addressing Modes.pptx
 
inputoutputorganization-140722085906-phpapp01.pptx
inputoutputorganization-140722085906-phpapp01.pptxinputoutputorganization-140722085906-phpapp01.pptx
inputoutputorganization-140722085906-phpapp01.pptx
 
NOV11 virtual memory.ppt
NOV11 virtual memory.pptNOV11 virtual memory.ppt
NOV11 virtual memory.ppt
 
lecture-17.ppt
lecture-17.pptlecture-17.ppt
lecture-17.ppt
 
arithmeticmicrooperations-180130061637.pptx
arithmeticmicrooperations-180130061637.pptxarithmeticmicrooperations-180130061637.pptx
arithmeticmicrooperations-180130061637.pptx
 
instruction format.pptx
instruction format.pptxinstruction format.pptx
instruction format.pptx
 
Computer Network Architecture.pptx
Computer Network Architecture.pptxComputer Network Architecture.pptx
Computer Network Architecture.pptx
 
Computer Network Types.pptx
Computer Network Types.pptxComputer Network Types.pptx
Computer Network Types.pptx
 
dataencoding-150701201133-lva1-app6891.pptx
dataencoding-150701201133-lva1-app6891.pptxdataencoding-150701201133-lva1-app6891.pptx
dataencoding-150701201133-lva1-app6891.pptx
 
Flow Control.pptx
Flow Control.pptxFlow Control.pptx
Flow Control.pptx
 
Chapter4.ppt
Chapter4.pptChapter4.ppt
Chapter4.ppt
 
intro22.ppt
intro22.pptintro22.ppt
intro22.ppt
 
osi.ppt
osi.pptosi.ppt
osi.ppt
 
switching.pptx
switching.pptxswitching.pptx
switching.pptx
 
Computer Organization and Architecture.pptx
Computer Organization and Architecture.pptxComputer Organization and Architecture.pptx
Computer Organization and Architecture.pptx
 

Recently uploaded

Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfadityarao40181
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitolTechU
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentInMediaRes1
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 

Recently uploaded (20)

Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdf
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptx
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media Component
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 

lec16-memory.ppt

  • 1. CS252/Kubiatowicz Lec 16.1 10/29/00 CS252 Graduate Computer Architecture Lecture 16 Caches and Memory Systems October 29, 2000 Computer Science 252
  • 2. CS252/Kubiatowicz Lec 16.2 10/29/00 Review: Who Cares About the Memory Hierarchy? µProc 60%/yr. DRAM 7%/yr. 1 10 100 1000 1980 1981 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 DRAM CPU 1982 Processor-Memory Performance Gap: (grows 50% / year) Performance “Moore’s Law” • Processor Only Thus Far in Course: – CPU cost/performance, ISA, Pipelined Execution CPU-DRAM Gap • 1980: no cache in µproc; 1995 2-level cache on chip (1989 first Intel µproc with a cache on chip) “Less’ Law?”
  • 3. CS252/Kubiatowicz Lec 16.3 10/29/00 What is a cache? • Small, fast storage used to improve average access time to slow memory. • Exploits spacial and temporal locality • In computer architecture, almost everything is a cache! – Registers a cache on variables – First-level cache a cache on second-level cache – Second-level cache a cache on memory – Memory a cache on disk (virtual memory) – TLB a cache on page table – Branch-prediction a cache on prediction information? Proc/Regs L1-Cache L2-Cache Memory Disk, Tape, etc. Bigger Faster
  • 4. CS252/Kubiatowicz Lec 16.4 10/29/00 Review: Cache performance CycleTime y MissPenalt MissRate Inst MemAccess CPI IC CPUtime Execution             • Miss-oriented Approach to Memory Access: • Separating out Memory component entirely – AMAT = Average Memory Access Time CycleTime AMAT Inst MemAccess CPI IC CPUtime AluOps            y MissPenalt MissRate HitTime AMAT        Data Data Data Inst Inst Inst y MissPenalt MissRate HitTime y MissPenalt MissRate HitTime      
  • 5. CS252/Kubiatowicz Lec 16.5 10/29/00 Example: Harvard Architecture • Unified vs Separate I&D (Harvard) • Table on page 384: – 16KB I&D: Inst miss rate=0.64%, Data miss rate=6.47% – 32KB unified: Aggregate miss rate=1.99% • Which is better (ignore L2 cache)? – Assume 33% data ops  75% accesses from instructions (1.0/1.33) – hit time=1, miss time=50 – Note that data hit has 1 stall for unified cache (only one port) AMATHarvard=75%x(1+0.64%x50)+25%x(1+6.47%x50) = 2.05 AMATUnified=75%x(1+1.99%x50)+25%x(1+1+1.99%x50)= 2.24 Proc I-Cache-1 Proc Unified Cache-1 Unified Cache-2 D-Cache-1 Proc Unified Cache-2
  • 6. CS252/Kubiatowicz Lec 16.6 10/29/00 Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.
  • 7. CS252/Kubiatowicz Lec 16.7 10/29/00 Review: Miss Classification • Classifying Misses: 3 Cs – Compulsory—The first access to a block is not in the cache, so the block must be brought into the cache. Also called cold start misses or first reference misses. (Misses in even an Infinite Cache) – Capacity—If the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. (Misses in Fully Associative Size X Cache) – Conflict—If block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory & capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision misses or interference misses. (Misses in N-way Associative, Size X Cache) • More recent, 4th “C”: – Coherence - Misses caused by cache coherence.
  • 8. CS252/Kubiatowicz Lec 16.8 10/29/00 Block Size (bytes) Miss Rate 0% 5% 10% 15% 20% 25% 16 32 64 128 256 1K 4K 16K 64K 256K 1. Reduce Misses via Larger Block Size
  • 9. CS252/Kubiatowicz Lec 16.9 10/29/00 2. Reduce Misses via Higher Associativity • 2:1 Cache Rule: – Miss Rate DM cache size N Miss Rate 2-way cache size N/2 • Beware: Execution time is only final measure! – Will Clock Cycle time increase? – Hill [1988] suggested hit time for 2-way vs. 1-way external cache +10%, internal + 2%
  • 10. CS252/Kubiatowicz Lec 16.10 10/29/00 Example: Avg. Memory Access Time vs. Miss Rate • Example: assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. CCT direct mapped Cache Size Associativity (KB) 1-way 2-way 4-way 8-way 1 2.33 2.15 2.07 2.01 2 1.98 1.86 1.76 1.68 4 1.72 1.67 1.61 1.53 8 1.46 1.48 1.47 1.43 16 1.29 1.32 1.32 1.32 32 1.20 1.24 1.25 1.27 64 1.14 1.20 1.21 1.23 128 1.10 1.17 1.18 1.20 (Red means A.M.A.T. not improved by more associativity)
• 11. 3. Reducing Misses via a “Victim Cache”
• How to combine the fast hit time of direct mapped yet still avoid conflict misses?
• Add a small buffer to hold data discarded from the cache
• Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
• Used in Alpha, HP machines
[Figure: four fully associative entries, each a tag-and-comparator plus one cache line of data, between the cache and the next lower level in the hierarchy]
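The lookup path can be sketched in software. Below is a minimal, hypothetical C model, assuming a 4-entry fully associative victim cache with 32-byte lines; the names and the swap-on-hit policy are illustrative, not Jouppi's design verbatim.

/* Toy model of a victim-cache probe on an L1 miss (illustrative). */
#include <string.h>

#define VC_ENTRIES 4
#define LINE_BYTES 32

struct vc_entry {
    int valid;
    unsigned long tag;               /* full block address */
    unsigned char data[LINE_BYTES];  /* one cache line */
};

static struct vc_entry victim[VC_ENTRIES];

/* Probe the victim cache before going to the next level. */
int victim_lookup(unsigned long block_addr, unsigned char *line_out)
{
    for (int i = 0; i < VC_ENTRIES; i++) {
        if (victim[i].valid && victim[i].tag == block_addr) {
            /* Hit: real hardware swaps this line with the block being
             * evicted from L1; the access completes at near-hit speed. */
            memcpy(line_out, victim[i].data, LINE_BYTES);
            return 1;
        }
    }
    return 0;  /* Miss: fetch from the next level as usual. */
}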
• 12. 4. Reducing Misses via “Pseudo-Associativity”
• How to combine the fast hit time of direct mapped with the lower conflict misses of a 2-way SA cache?
• Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit)
• Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles
– Better for caches not tied directly to the processor (L2)
– Used in the MIPS R10000 L2 cache; similar in UltraSPARC
[Timeline: Hit Time < Pseudo Hit Time < Miss Penalty]
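One common way to pick the “other half” is to invert the most-significant index bit; here is a minimal sketch under that assumption (the parameter names and 10-bit index are illustrative, not from the slide):

#define INDEX_BITS 10
#define SETS (1u << INDEX_BITS)

/* Set checked first: an ordinary direct-mapped index. */
unsigned primary_set(unsigned long addr, unsigned block_offset_bits)
{
    return (unsigned)(addr >> block_offset_bits) & (SETS - 1);
}

/* Set checked only after a primary miss; a hit here is the slower
 * "pseudo-hit". */
unsigned alternate_set(unsigned set)
{
    return set ^ (SETS >> 1);  /* flip the MSB of the index */
}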
• 13. 5. Reducing Misses by Hardware Prefetching of Instructions & Data
• E.g., instruction prefetching
– Alpha 21064 fetches 2 blocks on a miss
– Extra block placed in a “stream buffer”
– On a miss, check the stream buffer
• Works with data blocks too:
– Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 streams caught 43%
– Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches
• Prefetching relies on having extra memory bandwidth that can be used without penalty
• 14. 6. Reducing Misses by Software Prefetching Data
• Data prefetch
– Load data into register (HP PA-RISC loads)
– Cache prefetch: load into cache (MIPS IV, PowerPC, SPARC v.9)
– Special prefetching instructions cannot cause faults; a form of speculative execution
• Prefetching comes in two flavors:
– Binding prefetch: requests load directly into a register.
» Must be correct address and register!
– Non-binding prefetch: load into cache.
» Can be incorrect. Frees HW/SW to guess!
• Issuing prefetch instructions takes time (see the sketch below)
– Is the cost of issuing prefetches < the savings in reduced misses?
– Wider superscalar issue reduces the difficulty of finding issue bandwidth
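As a concrete compiler-level illustration of a non-binding cache prefetch, here is a sketch using the GCC/Clang __builtin_prefetch intrinsic (compilers emit the target's prefetch instruction where one exists); the prefetch distance of 8 iterations is an assumed tuning parameter, not a value from the lecture.

/* Non-binding prefetch: the hint may be wrong or dropped entirely
 * without affecting correctness. */
void scale(double *x, long n, double k)
{
    for (long i = 0; i < n; i++) {
        if (i + 8 < n)
            __builtin_prefetch(&x[i + 8], 1, 3);  /* for write, high locality */
        x[i] *= k;  /* x[i] was (ideally) prefetched 8 iterations ago */
    }
}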
• 15. 7. Reducing Misses by Compiler Optimizations
• McFarling [1989] reduced cache misses by 75%, in software, on an 8KB direct-mapped cache with 4-byte blocks
• Instructions
– Reorder procedures in memory so as to reduce conflict misses
– Profiling to look at conflicts (using tools they developed)
• Data
– Merging arrays: improve spatial locality with a single array of compound elements vs. 2 arrays
– Loop interchange: change the nesting of loops to access data in the order it is stored in memory
– Loop fusion: combine 2 independent loops that have the same looping and some variables in common
– Blocking: improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows
• 16. Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key; improves spatial locality
• 17. Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality
• 18. Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

2 misses per access to a & c vs. one miss per access; improves temporal locality
• 19. Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

• Two inner loops:
– Read all N×N elements of z[]
– Read N elements of 1 row of y[] repeatedly
– Write N elements of 1 row of x[]
• Capacity misses are a function of N & cache size:
– 2N^3 + N^2 words accessed (assuming no conflicts; otherwise …)
• Idea: compute on a B×B submatrix that fits in the cache
• 20. Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

• B is called the Blocking Factor
• Capacity misses drop from 2N^3 + N^2 to 2N^3/B + N^2
• Conflict misses too?
• 21. Reducing Conflict Misses by Blocking
• Conflict misses in caches that are not fully associative vs. blocking size
– Lam et al [1991]: a blocking factor of 24 had a fifth the misses of 48, despite both fitting in the cache
[Chart: miss rate (0-0.1) vs. blocking factor (0-150) for a direct-mapped cache vs. a fully associative cache]
• 22. Summary of Compiler Optimizations to Reduce Cache Misses (by hand)
[Chart: performance improvement (1x-3x) from merged arrays, loop interchange, loop fusion, and blocking on compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), vpenta (nasa7)]
• 23. Summary: Miss Rate Reduction
CPUtime = IC x (CPI_Execution + Memory accesses/Instruction x Miss rate x Miss penalty) x Clock cycle time
• 3 Cs: Compulsory, Capacity, Conflict
1. Reduce Misses via Larger Block Size
2. Reduce Misses via Higher Associativity
3. Reducing Misses via Victim Cache
4. Reducing Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Misses by Compiler Optimizations
• Prefetching comes in two flavors:
– Binding prefetch: requests load directly into register.
» Must be correct address and register!
– Non-binding prefetch: load into cache.
» Can be incorrect. Frees HW/SW to guess!
• 24. Administrative
• Final project descriptions due today by 5pm!
• Submit web site via email:
– Web site will contain all of your project results, etc.
– Minimum initial site: cool title, link to proposal
• Anyone need resources?
– NOW: apparently can get an account via the web site
– Millennium: can get an account via the web site
– SimpleScalar: info on my web page
• 25. Improving Cache Performance (Continued)
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
• 26. 1. Reducing Miss Penalty: Read Priority over Write on Miss
[Figure: CPU with a write buffer between the cache and DRAM (or lower memory); buffered writes drain to memory while reads may bypass them]
• 27. 1. Reducing Miss Penalty: Read Priority over Write on Miss
• Write through with write buffers offers RAW conflicts with main memory reads on cache misses
– Simply waiting for the write buffer to empty might increase the read miss penalty (by 50% on the old MIPS 1000)
– Instead, check the write buffer contents before a read; if no conflicts, let the memory access continue (see the sketch below)
• Alternative: write back
– Read miss replacing a dirty block
– Normal: write the dirty block to memory, then do the read
– Instead: copy the dirty block to a write buffer, do the read, then do the write
– CPU stalls less, since it restarts as soon as the read is done
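A minimal sketch of that write-buffer check, assuming a 4-entry buffer and block-granularity address matching; the structure and names are illustrative, not a specific machine's design.

#define WB_ENTRIES 4

struct wb_entry { int valid; unsigned long addr; unsigned long data; };
static struct wb_entry wbuf[WB_ENTRIES];

/* On a read miss, scan the pending buffered writes for a match. */
int read_can_bypass(unsigned long read_addr)
{
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wbuf[i].valid && wbuf[i].addr == read_addr)
            return 0;  /* RAW conflict: forward the data or drain first */
    return 1;          /* no conflict: issue the read ahead of the writes */
}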
• 28. 2. Reduce Miss Penalty: Subblock Placement
• Don’t have to load the full block on a miss
• Have a valid bit per subblock to indicate validity
• (Originally invented to reduce tag storage)
[Figure: cache lines divided into subblocks, each with its own valid bit]
• 29. 3. Reduce Miss Penalty: Early Restart and Critical Word First
• Don’t wait for the full block to be loaded before restarting the CPU
– Early restart—As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
– Critical word first—Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
• Generally useful only with large blocks
• Spatial locality is a problem: programs tend to want the next sequential word, so it is not clear the CPU benefits from an early restart
• 30. 4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
• A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
– requires F/E bits on registers or out-of-order execution
– requires multi-bank memories
• “Hit under miss” reduces the effective miss penalty by working during the miss vs. ignoring CPU requests
• “Hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses
– Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
– Requires multiple memory banks (otherwise cannot support it)
– Pentium Pro allows 4 outstanding memory misses
• 31. Value of Hit Under Miss for SPEC
• FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
• Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
• 8 KB data cache, direct mapped, 32B blocks, 16-cycle miss penalty
[Chart: average memory access time for “hit under n misses” with n = 0 (base), 1, 2, and 64, across 18 SPEC92 benchmarks: eqntott, espresso, xlisp, compress, mdljsp2, ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora]
• 32. 5. Second Level Cache
• L2 Equations:
AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
• Definitions:
– Local miss rate—misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2)
– Global miss rate—misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2)
– Global miss rate is what matters
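A quick worked example (the numbers are assumed for illustration, not from the lecture): take Hit Time_L1 = 1 cycle, Miss Rate_L1 = 4%, Hit Time_L2 = 10 cycles, local Miss Rate_L2 = 50%, Miss Penalty_L2 = 100 cycles. Then:

Miss Penalty_L1 = 10 + 0.50 x 100 = 60 cycles
AMAT = 1 + 0.04 x 60 = 3.4 cycles
Global L2 miss rate = 0.04 x 0.50 = 2%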
• 33. Comparing Local and Global Miss Rates
• 32 KByte 1st-level cache; increasing 2nd-level cache
• Global miss rate is close to the single-level cache rate, provided L2 >> L1
• Don’t use the local miss rate
• L2 is not tied to the CPU clock cycle!
• Consider cost & A.M.A.T.
• Generally fast hit times and fewer misses
• Since hits in L2 are few, target miss reduction
[Charts: local and global miss rates vs. L2 cache size, on linear and log scales]
• 34. Reducing Misses: Which Apply to L2 Cache?
• Reducing miss rate:
1. Reduce Misses via Larger Block Size
2. Reduce Conflict Misses via Higher Associativity
3. Reducing Conflict Misses via Victim Cache
4. Reducing Conflict Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Capacity/Conf. Misses by Compiler Optimizations
• 35. L2 Cache Block Size & A.M.A.T.
• 32KB L1; 8-byte path to memory

Block Size (bytes)   16     32     64     128    256    512
Relative CPU Time    1.36   1.28   1.27   1.34   1.54   1.95
• 36. Reducing Miss Penalty Summary
CPUtime = IC x (CPI_Execution + Memory accesses/Instruction x Miss rate x Miss penalty) x Clock cycle time
• Five techniques
– Read priority over write on miss
– Subblock placement
– Early restart and critical word first on miss
– Non-blocking caches (hit under miss, miss under miss)
– Second-level cache
• Can be applied recursively to multilevel caches
– Danger is that time to DRAM will grow with multiple levels in between
– First attempts at L2 caches can make things worse, since the increased worst case is worse
• 37. Main Memory Background
• Performance of main memory:
– Latency: cache miss penalty
» Access time: time between request and word arriving
» Cycle time: time between requests
– Bandwidth: I/O & large-block miss penalty (L2)
• Main memory is DRAM: Dynamic Random Access Memory
– Dynamic, since it needs to be refreshed periodically (8 ms, 1% of time)
– Addresses divided into 2 halves (memory as a 2D matrix):
» RAS or Row Access Strobe
» CAS or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
– No refresh (6 transistors/bit vs. 1 transistor/bit)
– Size: DRAM/SRAM ~4-8; cost & cycle time: SRAM/DRAM ~8-16
• 38. Main Memory Deep Background
• “Out-of-Core”, “In-Core,” “Core Dump”?
• “Core memory”?
• Non-volatile, magnetic
• Lost to 4 Kbit DRAM (today using 64 Kbit DRAM)
• Access time 750 ns, cycle time 1500-3000 ns
• 39. DRAM Logical Organization (4 Mbit)
• Square root of bits per RAS/CAS
[Figure: 2,048 x 2,048 memory array; 11 address lines A0…A10 feed the row and column decoders; sense amps & I/O drive data D/Q; a word line selects a row of storage cells]
• 40. 4 Key DRAM Timing Parameters
• tRAC: minimum time from RAS line falling to valid data output
– Quoted as the speed of a DRAM when you buy it (the number on the purchase sheet)
– A typical 4 Mbit DRAM has tRAC = 60 ns
• tRC: minimum time from the start of one row access to the start of the next
– tRC = 110 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• tCAC: minimum time from CAS line falling to valid data output
– 15 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• tPC: minimum time from the start of one column access to the start of the next
– 35 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• 41. DRAM Performance
• A 60 ns (tRAC) DRAM can
– perform a row access only every 110 ns (tRC)
– perform a column access (tCAC) in 15 ns, but the time between column accesses is at least 35 ns (tPC)
» In practice, external address delays and bus turnaround make it 40 to 50 ns
• These times do not include the time to drive the addresses off the microprocessor, nor the memory controller overhead!
• 42. DRAM History
• DRAMs: capacity +60%/yr, cost -30%/yr
– 2.5X cells/area, 1.5X die size in 3 years
• ’98 DRAM fab line costs $2B
– DRAM only: density, leakage vs. speed
• Rely on increasing number of computers & memory per computer (60% market)
– SIMM or DIMM is the replaceable unit => computers can use any generation DRAM
• Commodity, second-source industry => high volume, low profit, conservative
– Little organizational innovation in 20 years
• Order of importance: 1) cost/bit 2) capacity
– First RAMBUS: 10X BW, +30% cost => little impact
• 43. DRAM Future: 1 Gbit DRAM (ISSCC ‘96; production ‘02?)

                 Mitsubishi       Samsung
Blocks           512 x 2 Mbit     1024 x 1 Mbit
Clock            200 MHz          250 MHz
Data Pins        64               16
Die Size         24 x 24 mm       31 x 21 mm
Metal Layers     3                4
Technology       0.15 micron      0.16 micron

(Die sizes will be much smaller in production)
• 44. Main Memory Performance
• Simple:
– CPU, cache, bus, memory all the same width (32 or 64 bits)
• Wide:
– CPU/Mux 1 word; Mux/cache, bus, memory N words (Alpha: 64 bits & 256 bits; UltraSPARC: 512)
• Interleaved:
– CPU, cache, bus 1 word; memory N modules (4 modules here); example is word interleaved
• 45. Main Memory Performance
• Timing model (word size is 32 bits)
– 1 cycle to send the address,
– 6 cycles access time, 1 cycle to send data
– Cache block is 4 words
• Simple: miss penalty = 4 x (1+6+1) = 32
• Wide: miss penalty = 1 + 6 + 1 = 8
• Interleaved: miss penalty = 1 + 6 + 4x1 = 11
• 46. Independent Memory Banks
• Memory banks for independent accesses vs. faster sequential accesses
– Multiprocessor
– I/O
– CPU with hit under n misses, non-blocking cache
• Superbank: all memory active on one block transfer (or bank)
• Bank: portion within a superbank that is word interleaved (or subbank)
[Address split: Superbank Number | Superbank Offset, with the offset further split into Bank Number | Bank Offset]
• 47. Independent Memory Banks
• How many banks?
– number of banks ≥ number of clocks to access a word in a bank
– Otherwise, sequential accesses will return to the original bank before it has the next word ready
– (as in the vector case)
• Increasing DRAM capacity => fewer chips => harder to have many banks
• 48. DRAMs per PC over Time

DRAM generation:  ’86    ’89    ’92     ’96     ’99      ’02
                  1 Mb   4 Mb   16 Mb   64 Mb   256 Mb   1 Gb
Min. memory size
4 MB              32     8
8 MB                     16     4
16 MB                           8       2
32 MB                                   4       1
64 MB                                   8       2
128 MB                                          4        1
256 MB                                          8        2
• 49. Avoiding Bank Conflicts
• Lots of banks:

int x[256][512];
for (j = 0; j < 512; j = j+1)
    for (i = 0; i < 256; i = i+1)
        x[i][j] = 2 * x[i][j];

• Even with 128 banks, since 512 is a multiple of 128, the word accesses conflict
• SW: loop interchange, or declare the array with a non-power-of-2 dimension (“array padding”) — see the sketch below
• HW: prime number of banks
– bank number = address mod number of banks
– address within bank = address / number of words in bank
– modulo & divide per memory access with a prime number of banks?
– address within bank = address mod number of words in bank
– bank number? easy if 2^N words per bank
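A sketch of the array-padding fix for the loop above (the pad of one word is the usual illustrative choice, not mandated by the slide): with 128 banks, a row length of 512 words sends every step of a column walk to the same bank; padding each row by one word makes successive rows land in different banks.

int x[256][512 + 1];   /* row length 513: not a multiple of 128 banks */

void scale_columns(void)
{
    for (int j = 0; j < 512; j++)
        for (int i = 0; i < 256; i++)
            x[i][j] = 2 * x[i][j];  /* row stride 513, so banks rotate */
}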
• 50. Fast Bank Number: Chinese Remainder Theorem
• As long as two sets of integers ai and bi follow these rules:
bi = x mod ai, 0 ≤ bi < ai, 0 ≤ x < a0 x a1 x a2 x …
and ai and aj are co-prime if i ≠ j, then the integer x has only one solution (unambiguous mapping):
– bank number = b0, number of banks = a0 (= 3 in the example)
– address within bank = b1, number of words in bank = a1 (= 8 in the example)
– N-word address 0 to N-1; prime number of banks; words per bank a power of 2

                Seq. Interleaved      Modulo Interleaved
Bank:            0    1    2           0    1    2
Addr in bank 0:  0    1    2           0   16    8
Addr in bank 1:  3    4    5           9    1   17
Addr in bank 2:  6    7    8          18   10    2
Addr in bank 3:  9   10   11           3   19   11
Addr in bank 4: 12   13   14          12    4   20
Addr in bank 5: 15   16   17          21   13    5
Addr in bank 6: 18   19   20           6   22   14
Addr in bank 7: 21   22   23          15    7   23
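A tiny C program (a sketch using the slide's parameters: 3 banks and 8 words per bank, which are co-prime as the theorem requires) that reproduces the modulo-interleaved columns above; note that the address-within-bank computation needs no divide, since 8 is a power of 2.

#include <stdio.h>

int main(void)
{
    const int banks = 3, words = 8;
    for (int x = 0; x < banks * words; x++) {
        int bank = x % banks;        /* b0: bank number */
        int row  = x & (words - 1);  /* b1: address within bank, mod 8 via mask */
        printf("address %2d -> bank %d, row %d\n", x, bank, row);
    }
    return 0;
}

For example, address 16 maps to bank 1, row 0, matching the table.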
• 51. Fast Memory Systems: DRAM Specific
• Multiple CAS accesses: several names (page mode)
– Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address the gap; what will they cost, and will they survive?
– RAMBUS: startup company; reinvented the DRAM interface
» Each chip a module vs. a slice of memory
» Short bus between CPU and chips
» Does its own refresh
» Variable amount of data returned
» 1 byte / 2 ns (500 MB/s per chip)
– Synchronous DRAM: 2 banks on chip, a clock signal to the DRAM, transfers synchronous to the system clock (66-150 MHz)
– Intel claims Direct RAMBUS (16 b wide) is the future PC memory
• Niche memory or main memory?
– e.g., video RAM for frame buffers: DRAM + fast serial output
• 52. DRAM Latency >> BW
• More application bandwidth => more cache misses => more DRAM RAS/CAS
• Application BW => lower DRAM latency
• RAMBUS, synchronous DRAM increase BW, but at higher latency
• EDO DRAM: < 5% in PCs
[Figure: processor with I$/D$ and L2$ connected over a bus to four DRAMs]
• 53. Potential DRAM Crossroads?
• After 20 years of 4X every 3 years, running into a wall? (64 Mb - 1 Gb)
• How can $1B fab lines be kept full if people buy fewer DRAMs per computer?
• Cost/bit -30%/yr if the 4X/3 yr scaling stops?
• What will happen to the $40B/yr DRAM industry?
• 54. Main Memory Summary
• Wider memory
• Interleaved memory: for sequential or independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM-specific optimizations: page mode & specialty DRAM
• DRAM future less rosy?
• 55. Big Storage (such as DRAM/Disk): Potential for Errors!
• Discussion of parity and ECC on the board.
• 56. Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
• 57. 1. Fast Hit Times via Small and Simple Caches
• Why does the Alpha 21164 have 8KB instruction and 8KB data caches + a 96KB second-level cache?
– A small data cache keeps the clock rate fast
• Direct mapped, on chip
• 58. 2. Fast Hits by Avoiding Address Translation
• Send the virtual address to the cache? Called a virtually addressed cache or just virtual cache, vs. a physical cache
– Every time the process is switched, the cache logically must be flushed; otherwise it returns false hits
» Cost is the time to flush + “compulsory” misses from the empty cache
– Must deal with aliases (sometimes called synonyms): two different virtual addresses that map to the same physical address
– I/O typically uses physical addresses, so it needs the virtual address to interact with the cache
• Solution to aliases:
– Guarantee that virtual and physical addresses agree in the bits that cover the index field; with a direct-mapped cache, aliases then map to the same unique block; called page coloring (sketch below)
• Solution to cache flush:
– Add a process identifier tag that identifies the process as well as the address within the process: can’t get a hit if the wrong process
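A minimal sketch of the page-coloring constraint under assumed parameters (a 16 KB direct-mapped cache with 4 KB pages; none of these numbers come from the slide): the OS must ensure virtual and physical addresses agree in the index bits above the page offset, here bits 12-13 (the “color”).

#include <stdint.h>

#define CACHE_BITS 14u  /* log2(16 KB): index + block offset */
#define PAGE_BITS  12u  /* log2(4 KB): untranslated page offset */
#define COLOR_MASK (((1u << CACHE_BITS) - 1) & ~((1u << PAGE_BITS) - 1))

/* Aliases index the same cache block only if the colors match. */
int coloring_ok(uint32_t va, uint32_t pa)
{
    return ((va ^ pa) & COLOR_MASK) == 0;
}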
• 59. Virtually Addressed Caches
[Figures, three organizations:
1. Conventional: CPU -> TB -> $ -> MEM; the VA is translated to a PA before the cache
2. Virtually addressed cache: CPU -> $ -> TB -> MEM; translate only on a miss; synonym problem
3. Overlapped: CPU sends the VA to the cache and TB in parallel, with PA tags; requires the cache index to remain invariant across translation; L2 $ uses the PA]
• 60. 2. Fast Cache Hits by Avoiding Translation: Process ID Impact
[Chart: miss rate (up to 20%) vs. cache size (2 KB to 1024 KB):
– Black: uniprocess
– Light gray: multiprocess, flushing the cache on a switch
– Dark gray: multiprocess, using a process ID tag]
• 61. 2. Fast Cache Hits by Avoiding Translation: Index with Physical Portion of Address
• If the index uses the physical part of the address (the page offset), tag access can start in parallel with translation, and the result is compared against the physical tag
• Limits the cache to the page size: what if we want bigger caches using the same trick?
– Higher associativity moves the barrier to the right
– Page coloring
[Address layout: Page Address | Page Offset, aligned above Address Tag | Index | Block Offset]
• 62. 3. Fast Hit Times via Pipelined Writes
• Pipeline the tag check and the cache update as separate stages; the current write's tag check overlaps the previous write's cache update
• Only STORES are in the pipeline; it empties during a miss

Store r2, (r1)    Check r1
Add               --
Sub               --
Store r4, (r3)    M[r1] <- r2 & check r3

• The shaded entry is a “delayed write buffer”; it must be checked on reads: either complete the write or read from the buffer
• 63. 4. Fast Writes on Misses via Small Subblocks
• If most writes are 1 word, the subblock size is 1 word, and the cache is write through, then always write the subblock & tag immediately (see the sketch below)
– Tag match and valid bit already set: writing the block was proper, & nothing is lost by setting the valid bit again
– Tag match and valid bit not set: the tag match means this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on
– Tag mismatch: this is a miss and will modify the data portion of the block. Since this is a write-through cache, no harm was done; memory still has an up-to-date copy of the old value. Only the tag of the write address and the valid bits of the other subblocks need be changed, because the valid bit for this subblock has already been set
• Doesn’t work with write back, due to the last case
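A toy C sketch of the three cases above, assuming 4 one-word subblocks per line and a write-through policy; the structure and names are illustrative.

struct line {
    unsigned tag;
    unsigned char valid[4];  /* one valid bit per subblock */
    unsigned w[4];           /* one word per subblock */
};

void write_word(struct line *L, unsigned tag, int sub, unsigned data)
{
    if (L->tag != tag) {
        /* Tag mismatch: a miss. Write-through means memory stays
         * up to date, so just claim the line for the new address and
         * invalidate the other subblocks. */
        L->tag = tag;
        for (int i = 0; i < 4; i++)
            L->valid[i] = 0;
    }
    /* Tag now matches (covers both valid-bit cases): write the
     * subblock and set its valid bit, whether or not it was set. */
    L->w[sub] = data;
    L->valid[sub] = 1;
    /* The word also goes to memory (write through; not shown). */
}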
• 64. Cache Optimization Summary
(MR = miss rate, MP = miss penalty, HT = hit time)

Technique                            MR   MP   HT   Complexity
Larger Block Size                    +    –         0
Higher Associativity                 +         –    1
Victim Caches                        +              2
Pseudo-Associative Caches            +              2
HW Prefetching of Instr/Data         +              2
Compiler Controlled Prefetching      +              3
Compiler Reduce Misses               +              0
Priority to Read Misses                   +         1
Subblock Placement                        +    +    1
Early Restart & Critical Word 1st         +         2
Non-Blocking Caches                       +         3
Second Level Caches                       +         2
Small & Simple Caches                –         +    0
Avoiding Address Translation                   +    2
Pipelining Writes                              +    1
• 65. What is the Impact of What You’ve Learned About Caches?
• 1960-1985: Speed = f(no. operations)
• 1990s:
– Pipelined execution & fast clock rate
– Out-of-order execution
– Superscalar instruction issue
• 1998: Speed = f(non-cached memory accesses)
• What does this mean for
– Compilers? Operating systems? Algorithms? Data structures?
[Chart: CPU vs. DRAM performance, 1980-2000, log scale]
• 66. Cache Cross-Cutting Issues
• Superscalar CPU & number of cache ports must match: how many memory accesses per cycle?
• Speculative execution needs a non-faulting option on memory/TLB accesses
• Parallel execution vs. cache locality
– Want far separation to find independent operations vs. want reuse of data accesses to avoid misses
• I/O and caches => multiple copies of data
– Must be kept consistent (consistency)
• 67. Alpha 21064
• Separate instruction & data TLBs & caches
• TLBs fully associative
• TLB updates in SW (“Priv Arch Libr”)
• Caches 8KB direct mapped, write through
• Critical 8 bytes first
• Prefetch instruction stream buffer
• 2 MB L2 cache, direct mapped, WB (off-chip)
• 256-bit path to main memory, 4 x 64-bit modules
• Victim buffer: to give reads priority over writes
• 4-entry write buffer between D$ & L2$
[Figure: instruction and data paths through the stream buffer, write buffer, and victim buffer]
• 68. Alpha Memory Performance: Miss Rates of SPEC92
• 8K I$, 8K D$, 2M L2
[Chart: I$, D$, and L2 miss rates (log scale, 0.01%-100%) for AlphaSort, Li, Compress, Ear, Tomcatv, Spice; representative points:
– I$ miss = 2%, D$ miss = 13%, L2 miss = 0.6%
– I$ miss = 1%, D$ miss = 21%, L2 miss = 0.3%
– I$ miss = 6%, D$ miss = 32%, L2 miss = 10%]
• 69. Alpha CPI Components
[Chart: CPI (0-5) broken into L2 (pink), I$ (yellow), D$ (blue), instruction stall (green), and other, for AlphaSort, Espresso, Sc, Mdljsp2, Ear, Alvinn, Mdljp2]
• Instruction stall: branch mispredict
• Other: compute + register conflicts, structural conflicts
• 70. Pitfall: Predicting Cache Performance from Different Programs (ISA, compiler, ...)
• 4KB data cache: miss rate 8%, 12%, or 28%?
• 1KB instruction cache: miss rate 0%, 3%, or 10%?
• Alpha vs. MIPS for an 8KB data cache: 17% vs. 10%
• Why 2X Alpha vs. MIPS?
[Chart: miss rate (0%-35%) vs. cache size (1-128 KB) for the I- and D-caches of gcc, espresso, and tomcatv]
• 71. Pitfall: Simulating Too Small an Address Trace
[Chart: cumulative average memory access time (1-4.5) vs. instructions executed (0-12 billion)]
• I$ = 4 KB, B = 16B; D$ = 4 KB, B = 16B; L2 = 512 KB, B = 128B; MP = 12, 200
• 72. Main Memory Summary
• Wider memory
• Interleaved memory: for sequential or independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM-specific optimizations: page mode & specialty DRAM
• DRAM future less rosy?
• 73. Cache Optimization Summary
(MR = miss rate, MP = miss penalty, HT = hit time)

Technique                            MR   MP   HT   Complexity
Larger Block Size                    +    –         0
Higher Associativity                 +         –    1
Victim Caches                        +              2
Pseudo-Associative Caches            +              2
HW Prefetching of Instr/Data         +              2
Compiler Controlled Prefetching      +              3
Compiler Reduce Misses               +              0
Priority to Read Misses                   +         1
Subblock Placement                        +    +    1
Early Restart & Critical Word 1st         +         2
Non-Blocking Caches                       +         3
Second Level Caches                       +         2
Small & Simple Caches                –         +    0
Avoiding Address Translation                   +    2
Pipelining Writes                              +    1
• 74. Practical Memory Hierarchy
• The issue is NOT inventing new mechanisms
• The issue is taste in selecting among the many alternatives, putting together a memory hierarchy whose pieces fit well together
– e.g., L1 data cache write through, L2 write back
– e.g., L1 small, for a fast hit time/clock cycle
– e.g., L2 big enough to avoid going to DRAM?