Cache Memory

Upload: keiko-orr

Post on 30-Dec-2015


Page 1: Cache Memory

1

Cache Memory

Page 2: Cache Memory

2

Outline

• General concepts

• Three ways to organize cache memory

• Issues with writes

• Writing cache-friendly code

• The cache mountain

• Suggested reading: 6.4, 6.5, 6.6

Page 3: Cache Memory

3

6.4 Cache Memories

Page 4: Cache Memory

4

Cache Memory

• History
– At the very beginning, 3 levels: registers, main memory, disk storage
– 10 years later, 4 levels: registers, SRAM cache, main DRAM memory, disk storage
– Modern processors, 4-5 levels: registers, SRAM L1, L2 (, L3) caches, main DRAM memory, disk storage

• Cache memories
– are small, fast SRAM-based memories
– are managed automatically by hardware
– can be on-chip, on-die, or off-chip

Page 5: Cache Memory

5

Cache Memory (Figure 6.24, P488)

The figure shows the bus structure: the CPU chip contains the register file, the ALU, and the L1 cache; the bus interface connects over the cache bus to the L2 cache, and over the system bus, the I/O bridge, and the memory bus to main memory.

Page 6: Cache Memory

6

Cache Memory

• L1 cache is on-chip

• L2 cache was off-chip until several years ago

• L3 cache can be off-chip or on-chip

• The CPU looks first for data in L1, then in L2, then in main memory

– Frequently accessed blocks of main memory are held in the caches

Page 7: Cache Memory

7

Inserting an L1 cache between the CPU and main memory

The big slow main memory has room for many 4-word blocks (e.g., block 10: a b c d; block 21: p q r s; block 30: w x y z).

The small fast L1 cache has room for two 4-word blocks (line 0 and line 1).

The tiny, very fast CPU register file has room for four 4-byte words.

The transfer unit between the cache and main memory is a 4-word block (16 bytes).

The transfer unit between the CPU register file and the cache is a 4-byte block.

Page 8: Cache Memory

8

6.4.1 Generic Cache Memory Organization (Figure 6.25, P488)

• A cache is an array of S = 2^s sets

• Each set contains E lines

• Each line holds a block of B = 2^b bytes of data, plus 1 valid bit and t tag bits per line

Page 9: Cache Memory

9

Addressing caches (Figure 6.25, P488)

An m-bit address A is divided into three fields:

<tag: t bits> <set index: s bits> <block offset: b bits>

The word at address A is in the cache if the tag bits in one of the valid lines in set <set index> match <tag>.

The word's contents begin at offset <block offset> bytes from the beginning of the block.
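This field extraction can be sketched in C; the function and struct names below are illustrative, not from the text:

```c
#include <assert.h>
#include <stdint.h>

/* Split an address into (tag, set index, block offset) for a cache
   with S = 2^s sets and B = 2^b bytes per block. */
typedef struct { uint64_t tag, set, offset; } addr_fields;

static addr_fields split_address(uint64_t a, unsigned s, unsigned b) {
    addr_fields f;
    f.offset = a & ((1ULL << b) - 1);        /* low b bits */
    f.set    = (a >> b) & ((1ULL << s) - 1); /* next s bits */
    f.tag    = a >> (s + b);                 /* remaining t bits */
    return f;
}
```

For example, with s = 2 index bits and b = 1 offset bit, address 13 (binary 1101) splits into tag 1, set 2, offset 1.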

Page 10: Cache Memory

10

Cache Memory

Fundamental parameters

Parameter     Description
S = 2^s       Number of sets
E             Number of lines per set
B = 2^b       Block size (bytes)
m = log2(M)   Number of physical (main memory) address bits

Page 11: Cache Memory

11

Cache Memory

Derived quantities

Parameter        Description
M = 2^m          Maximum number of unique memory addresses
s = log2(S)      Number of set index bits
b = log2(B)      Number of block offset bits
t = m - (s + b)  Number of tag bits
C = B x E x S    Cache size (bytes), not including overhead such as the valid and tag bits
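The derived quantities can be computed directly from the fundamental parameters; a small sketch (helper names are illustrative):

```c
#include <assert.h>

/* log2 of an exact power of two. */
static unsigned log2i(unsigned x) {
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

/* t = m - (s + b): tag bits left over after index and offset bits. */
static unsigned tag_bits(unsigned S, unsigned B, unsigned m) {
    return m - (log2i(S) + log2i(B));
}

/* C = B * E * S: data bytes only, excluding valid and tag overhead. */
static unsigned cache_size(unsigned S, unsigned E, unsigned B) {
    return B * E * S;
}
```

For the small simulation example (S = 4, E = 1, B = 2, m = 4), this gives t = 4 - (2 + 1) = 1 tag bit and C = 8 bytes.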

Page 12: Cache Memory

12

6.4.2 Direct-mapped cacheFigure 6.27 P490

• Simplest kind of cache• Characterized by exactly one line per set.

valid

valid

valid

tag

tag

tag

• • •

set 0:

set 1:

set S-1:

E=1 lines per setcache block

cache block

cache block

Page 13: Cache Memory

13

Accessing direct-mapped cachesFigure 6.28 P491

• Set selection– Use the set index bits to determine the set of

interest

valid

valid

valid

tag

tag

tag

• • •

set 0:

set 1:

set S-1:t bits s bits

0 0  0 0 10m-1

b bits

tag set index block offset

selected set

cache block

cache block

cache block

Page 14: Cache Memory

14

Accessing direct-mapped caches

• Line matching and word extraction
– Find a valid line in the selected set with a matching tag (line matching)
– Then extract the word (word selection)

Page 15: Cache Memory

15

Accessing direct-mapped caches (Figure 6.29, P491)

(1) The valid bit must be set

(2) The tag bits in the cache line must match the tag bits in the address

(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte of the word

Page 16: Cache Memory

16

Line Replacement on Misses in Direct-Mapped Caches

• If the cache misses

– Retrieve the requested block from the next level in the memory hierarchy

– Store the new block in one of the cache lines of the set indicated by the set index bits

Page 17: Cache Memory

17

Line Replacement on Misses in Direct-Mapped Caches

• If the set is full of valid cache lines

– One of the existing lines must be evicted

• For a direct-mapped cache

– Each set contains only one line

– The current line is replaced by the newly fetched line

Page 18: Cache Memory

18

Direct-mapped cache simulation P492

• M = 16 byte addresses (m = 4 bits)
• B = 2 bytes/block, S = 4 sets, E = 1 entry/set

Page 19: Cache Memory

19

Direct-mapped cache simulation P493

M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 entry/set (t = 1, s = 2, b = 1)

Address trace (reads): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

(1) 0 [0000] (miss): set 0 loads v=1, tag=0, data m[1] m[0]

(2) 13 [1101] (miss): set 2 loads v=1, tag=1, data m[13] m[12]

(3) 8 [1000] (miss): set 0 is replaced with v=1, tag=1, data m[9] m[8]

(4) 0 [0000] (miss): set 0 is replaced again with v=1, tag=0, data m[1] m[0]

(The read of 1 [0001] between steps 1 and 2 hits in set 0.)
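The trace above can be replayed with a few lines of C; this is a minimal sketch of the 4-set, 2-byte-block example, with illustrative names:

```c
#include <assert.h>

/* Direct-mapped cache for 4-bit addresses: S=4 sets, B=2 bytes/block,
   so t=1 tag bit, s=2 index bits, b=1 offset bit.
   Returns 1 on hit, 0 on miss (installing the block on a miss). */
#define SETS 4
static int valid[SETS], tags[SETS];

static int access_cache(unsigned addr) {
    unsigned set = (addr >> 1) & 0x3; /* s = 2 index bits */
    unsigned tag = addr >> 3;         /* t = 1 tag bit */
    if (valid[set] && tags[set] == tag)
        return 1;                     /* hit */
    valid[set] = 1;                   /* miss: fetch block, evict old line */
    tags[set] = tag;
    return 0;
}
```

Replaying the trace 0, 1, 13, 8, 0 yields miss, hit, miss, miss, miss, matching the slide: addresses 0 and 8 share set 0 but differ in the tag bit, so they keep evicting each other.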

Page 20: Cache Memory

20

Direct-mapped cache simulation (Figure 6.30, P493)

Address    Tag bits  Index bits  Offset bits  Block number
(decimal)  (t=1)     (s=2)       (b=1)        (decimal)
0          0         00          0            0
1          0         00          1            0
2          0         01          0            1
3          0         01          1            1
4          0         10          0            2
5          0         10          1            2
6          0         11          0            3
7          0         11          1            3
8          1         00          0            4
9          1         00          1            4
10         1         01          0            5
11         1         01          1            5
12         1         10          0            6
13         1         10          1            6
14         1         11          0            7
15         1         11          1            7

Page 21: Cache Memory

21

Why use middle bits as index?

• High-order bit indexing
– Adjacent memory lines would map to the same cache entry
– Poor use of spatial locality

• Middle-order bit indexing
– Consecutive memory lines map to different cache lines
– Can hold a C-byte region of the address space in the cache at one time

(Figure 6.31, P497, illustrates both mappings of the 16 memory lines 0000-1111 onto a 4-line cache.)
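The two index choices can be compared in a short sketch for the figure's 4-line cache; the function names are illustrative. With 16 memory lines, the "middle" index bits are the low bits of the line number (which sit in the middle of the full byte address):

```c
#include <assert.h>

/* Map 16 memory lines (0..15) onto a 4-line direct-mapped cache. */

/* High-order indexing: top 2 bits of the line number. */
static int high_order_set(unsigned line)   { return (line >> 2) & 0x3; }

/* Middle-order indexing: low 2 bits of the line number. */
static int middle_order_set(unsigned line) { return line & 0x3; }
```

High-order indexing sends adjacent lines 0-3 all to set 0 (they collide), while middle-order indexing spreads them across sets 0-3, so a contiguous region the size of the cache can reside in it at once.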

Page 22: Cache Memory

22

6.4.3 Set associative caches

• Characterized by more than one line per set (here E = 2 lines per set)

• Each of the S sets holds E lines, each with a valid bit, a tag, and a cache block (Figure 6.32, P498)

Page 23: Cache Memory

23

Accessing set associative caches

• Set selection
– Identical to a direct-mapped cache: the s set index bits of the address select one set (Figure 6.33, P498)

Page 24: Cache Memory

24

Accessing set associative caches

• Line matching and word selection
– Must compare the tag in each valid line in the selected set (Figure 6.34, P499)

(1) The valid bit must be set

(2) The tag bits in one of the cache lines must match the tag bits in the address

(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte

Page 25: Cache Memory

25

6.4.4 Fully associative caches

• Characterized by a single set containing all of the lines: E = C/B lines in the one and only set (Figure 6.35, P500)

• No set index bits in the address: only <tag> and <block offset> fields (Figure 6.36, P500)
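Because there is only one set, a lookup must compare the address tag against every line's tag; a minimal sketch with illustrative names (E = 2 as in the figure):

```c
#include <assert.h>

/* Fully associative lookup: one set of E lines, no set index bits.
   Returns the index of the matching valid line, or -1 on a miss. */
#define E 2

static int fa_lookup(const int valid[E], const unsigned tags[E], unsigned tag) {
    for (int i = 0; i < E; i++)
        if (valid[i] && tags[i] == tag)
            return i; /* hit */
    return -1;        /* miss */
}
```

With the figure's tags 0110 and 1001 both valid, looking up tag 1001 hits line 1; if line 1 were invalid, the same lookup would miss.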

Page 26: Cache Memory

26

Accessing fully associative caches

• Word selection
– Must compare the tag in each valid line (Figure 6.37, P500)

(1) The valid bit must be set

(2) The tag bits in one of the cache lines must match the tag bits in the address

(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte

Page 27: Cache Memory

27

6.4.5 Issues with Writes

• Write hits
– 1) Write-through
• The cache updates its copy
• It immediately writes the corresponding cache block to memory
– 2) Write-back
• Defers the memory update as long as possible
• Writes the updated block to memory only when it is evicted from the cache
• Maintains a dirty bit for each cache line

Page 28: Cache Memory

28

Issues with Writes

• Write misses
– 1) Write-allocate
• Loads the corresponding memory block into the cache
• Then updates the cache block
– 2) No-write-allocate
• Bypasses the cache
• Writes the word directly to memory

• Common combinations
– Write-through with no-write-allocate
– Write-back with write-allocate
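The write-back/write-allocate combination can be sketched for a single direct-mapped line; the structure and helper names are illustrative, and a real cache tracks this state per line:

```c
#include <assert.h>

typedef struct { int valid, dirty; unsigned tag; } line_t;

static int writebacks; /* blocks written back to memory on eviction */

/* Write-back + write-allocate policy for one line. */
static void write_word(line_t *ln, unsigned tag) {
    if (!(ln->valid && ln->tag == tag)) { /* write miss */
        if (ln->valid && ln->dirty)
            writebacks++;                 /* evicted dirty block goes to memory */
        ln->valid = 1;                    /* write-allocate: load the block */
        ln->tag = tag;
        ln->dirty = 0;
    }
    ln->dirty = 1;                        /* write-back: defer the memory update */
}

/* Replay a sequence of write tags and count the write-backs. */
static int replay(const unsigned *seq, int n) {
    line_t ln = {0, 0, 0};
    writebacks = 0;
    for (int i = 0; i < n; i++)
        write_word(&ln, seq[i]);
    return writebacks;
}
```

Repeated writes to the same block cost no memory traffic until the block is evicted; alternating between two conflicting blocks forces a write-back on nearly every access.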

Page 29: Cache Memory

29

6.4.6 Multi-level caches

Typical hierarchy (Figure 6.38, P504); each level down is larger, slower, and cheaper:

Level                              Size     Speed  $/MB      Line size
L1 i-cache/d-cache (on-chip SRAM)  8-64 KB  3 ns             32 B
L2 cache (SRAM)                    1-4 MB   6 ns   $100/MB   32 B
Main memory (DRAM)                 128 MB   60 ns  $1.50/MB  8 KB
Disk                               30 GB    8 ms   $0.05/MB

Lower levels have larger line sizes and higher associativity, and are more likely to use write-back.

Options: separate data and instruction caches, or a unified cache.

Page 30: Cache Memory

30

6.4.7 Cache performance metrics P505

• Miss rate
– Fraction of memory references not found in the cache (misses / references)
– Typical numbers: 3-10% for L1

• Hit rate
– Fraction of memory references found in the cache (1 - miss rate)

Page 31: Cache Memory

31

Cache performance metrics

• Hit time
– Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache)
– Typical numbers: 1-2 clock cycles for L1, 5-10 clock cycles for L2

• Miss penalty
– Additional time required because of a miss
– Typically 25-100 cycles for main memory
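These metrics combine into the standard average-memory-access-time formula (not stated on the slide, but a direct consequence of the definitions above): average access time = hit time + miss rate x miss penalty.

```c
#include <assert.h>

/* Average memory access time in cycles. */
static double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}
```

For example, with a 2-cycle L1 hit time, a 5% miss rate, and a 100-cycle miss penalty, the average access costs about 7 cycles, which is why even small miss-rate improvements matter.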

Page 32: Cache Memory

32

Impact of cache parameters on performance (P505)

• 1> Cache size
– Hit rate vs. hit time

• 2> Block size
– Spatial locality vs. temporal locality

• 3> Associativity
– Thrashing, cost, speed, miss penalty

• 4> Write strategy
– Simplicity, read misses, fewer transfers

Page 33: Cache Memory

33

6.5 Writing Cache-Friendly Code

Page 34: Cache Memory

34

Writing Cache-Friendly Code

• Principles

– Programs with better locality will tend to have lower miss rates

– Programs with lower miss rates will tend to run faster than programs with higher miss rates

Page 35: Cache Memory

35

Writing Cache-Friendly Code

• Basic approach
– Make the common case go fast
• Programs often spend most of their time in a few core functions
• These functions often spend most of their time in a few loops
– Minimize the number of cache misses in each inner loop
• All other things being equal

Page 36: Cache Memory

36

Writing Cache-Friendly Code P507

int sumvec(int v[N])
{
    int i, sum = 0;

    for (i = 0; i < N; i++)
        sum += v[i];
    return sum;
}

Access order and [h]it/[m]iss pattern for v[i] (4-word blocks):
i=0 1[m], i=1 2[h], i=2 3[h], i=3 4[h], i=4 5[m], i=5 6[h], i=6 7[h], i=7 8[h]

The variables i and sum have temporal locality and are usually kept in registers.

Page 37: Cache Memory

37

Writing cache-friendly code

• Temporal locality

– Repeated references to local variables are good because the compiler can cache them in the register file

Page 38: Cache Memory

38

Writing cache-friendly code

• Spatial locality

– Stride-1 reference patterns are good because caches at all levels of the memory hierarchy store data as contiguous blocks

• Spatial locality is especially important in programs that operate on multidimensional arrays

Page 39: Cache Memory

39

Writing cache-friendly code P508

• Example: row-major traversal (M=4, N=8, 10 cycles/iteration)

int sumvec(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Page 40: Cache Memory

40

Writing cache-friendly code

Access order and [h]it/[m]iss pattern for a[i][j] (4-word blocks):

a[i][j]  j=0    j=1    j=2    j=3    j=4    j=5    j=6    j=7
i=0      1[m]   2[h]   3[h]   4[h]   5[m]   6[h]   7[h]   8[h]
i=1      9[m]   10[h]  11[h]  12[h]  13[m]  14[h]  15[h]  16[h]
i=2      17[m]  18[h]  19[h]  20[h]  21[m]  22[h]  23[h]  24[h]
i=3      25[m]  26[h]  27[h]  28[h]  29[m]  30[h]  31[h]  32[h]

Page 41: Cache Memory

41

Writing cache-friendly code P508

• Example: column-major traversal (M=4, N=8, 20 cycles/iteration)

int sumvec(int v[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += v[i][j];
    return sum;
}

Page 42: Cache Memory

42

Writing cache-friendly code

Access order and [h]it/[m]iss pattern for a[i][j] (every access misses):

a[i][j]  j=0    j=1    j=2    j=3    j=4    j=5    j=6    j=7
i=0      1[m]   5[m]   9[m]   13[m]  17[m]  21[m]  25[m]  29[m]
i=1      2[m]   6[m]   10[m]  14[m]  18[m]  22[m]  26[m]  30[m]
i=2      3[m]   7[m]   11[m]  15[m]  19[m]  23[m]  27[m]  31[m]
i=3      4[m]   8[m]   12[m]  16[m]  20[m]  24[m]  28[m]  32[m]
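The two hit/miss tables can be reproduced by simulating a small direct-mapped cache; this sketch (names and the 2-set geometry are illustrative, chosen so the cache is far smaller than the array, as the slides assume):

```c
#include <assert.h>

#define M 4
#define N 8
#define BLOCK 4 /* ints per cache block */
#define NSETS 2 /* direct-mapped: cache holds 2 blocks = 8 ints */

/* Count misses when summing a[M][N] of ints, in row-major order
   (rowmajor = 1) or column-major order (rowmajor = 0). */
static int count_misses(int rowmajor) {
    int valid[NSETS] = {0}, tags[NSETS] = {0}, misses = 0;
    for (int outer = 0; outer < (rowmajor ? M : N); outer++)
        for (int inner = 0; inner < (rowmajor ? N : M); inner++) {
            /* Flat index of the element touched on this iteration. */
            int idx = rowmajor ? outer * N + inner : inner * N + outer;
            int block = idx / BLOCK, set = block % NSETS, tag = block / NSETS;
            if (!(valid[set] && tags[set] == tag)) {
                misses++; /* fetch the block, evicting the old line */
                valid[set] = 1;
                tags[set] = tag;
            }
        }
    return misses;
}
```

Row-major traversal misses only on the first word of each 4-int block (8 misses out of 32 accesses, a 25% miss rate), while column-major traversal evicts every block before reusing it, so all 32 accesses miss, matching the tables above.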

Page 43: Cache Memory

43

6.6 Putting it Together: The Impact of Caches on Program Performance

6.6.1 The Memory Mountain

Page 44: Cache Memory

44

The Memory Mountain P512

• Read throughput (read bandwidth)
– The rate at which a program reads data from the memory system

• Memory mountain
– A two-dimensional function of read bandwidth versus temporal and spatial locality
– Characterizes the capabilities of the memory system of each computer

Page 45: Cache Memory

45

Memory mountain main routine Figure 6.41 P513

/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10) /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23) /* ... up to 8 MB */
#define MAXSTRIDE 16       /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];        /* The array we'll be traversing */

Page 46: Cache Memory

46

Memory mountain main routine

int main()
{
    int size;     /* Working set size (in bytes) */
    int stride;   /* Stride (in array elements) */
    double Mhz;   /* Clock frequency */

    init_data(data, MAXELEMS); /* Initialize each element in data to 1 */
    Mhz = mhz(0);              /* Estimate the clock frequency */

Page 47: Cache Memory

47

Memory mountain main routine

    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}

Page 48: Cache Memory

48

Memory mountain test functionFigure 6.40 P512

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result; /* So compiler doesn't optimize away the loop */
}

Page 49: Cache Memory

49

Memory mountain test function

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                     /* Warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);  /* Call test(elems, stride) */
    return (size / stride) / (cycles / Mhz); /* Convert cycles to MB/s */
}

Page 50: Cache Memory

50

The Memory Mountain

• Data

– Size: MAXBYTES (8 MB) bytes, i.e. MAXELEMS (2 M) words

– Partially accessed

• Working set: from 8 MB down to 1 KB

• Stride: from 1 to 16

Page 51: Cache Memory

51

The Memory Mountain (Figure 6.42, P514)

Measured on a 550 MHz Pentium III Xeon with a 16 KB on-chip L1 d-cache, a 16 KB on-chip L1 i-cache, and a 512 KB off-chip unified L2 cache. The surface plots read throughput (MB/s, 0-1200) against stride (s1-s15, in words) and working set size (1 KB-8 MB). Ridges of temporal locality separate the L1, L2, and main-memory regions; slopes of spatial locality run along the stride axis.

Page 52: Cache Memory

52

Ridges of temporal locality

• A slice through the memory mountain with stride = 1

– Illuminates the read throughputs of the different caches and of main memory

Page 53: Cache Memory

53

Ridges of temporal locality (Figure 6.43, P515)

The stride-1 slice plots read throughput (MB/s, 0-1200) against working set size from 8 MB down to 1 KB, showing three plateaus: the main memory region, the L2 cache region, and the L1 cache region.

Page 54: Cache Memory

54

A slope of spatial locality

• A slice through the memory mountain with size = 256 KB

– Shows the effect of the cache block size

Page 55: Cache Memory

55

A slope of spatial locality (Figure 6.44, P516)

The 256 KB slice plots read throughput (MB/s, 0-800) against stride (s1-s16, in words); throughput falls steadily as the stride increases, flattening once there is one access per cache line.