
1

Processor - Memory Interface

ITCS 3181 Logic and Computer Systems B. Wilkinson Slides13.ppt Modification date: April 20, 2015

The memory that connects to the processor should preferably operate at a speed that matches the processor, so as not to slow the system down.

[Figure: processor connected to memory, transferring instructions and data]

Memory must be random access memory - individual memory locations can be accessed in any order at the same high speed.

Large dynamic semiconductor RAM used for main memory cannot operate at that speed (it is much slower).

Relatively small static semiconductor memory can be designed to operate faster.

2

Cache Memory

A high-speed memory called a cache memory is placed between the processor and main memory, operating at a speed closer to that of the processor.

[Figure: a high-speed cache memory placed between the processor and main memory; data for a location X is transferred between the processor and the cache, and between the cache and main memory]

Information must be in the cache memory for the processor to access it.

The first paper on cache memories: M. Wilkes, “Slave Memories and Dynamic Storage Allocation,” IEEE Transactions on Electronic Computers, 1965.

What else did he invent/publish first?

3

To access a location initially, its contents need to be transferred from main memory into the cache and then accessed from the cache. The average access time, ta (the time to access the contents of a memory location), is:

ta = tm + tc

where tm = main memory access time and tc = cache access time.

If the location is never accessed again, the cache would cause additional overhead!

Fortunately, virtually all programs repeat sections of code and repeatedly access the same or nearby data. This characteristic is embodied in the Principle of Locality.


4

Principle of Locality

Found empirically to be obeyed by most programs. Applies to both instruction and data references, though it is more pronounced for instruction references.

Two main aspects:

1. Temporal locality (locality in time) – individual locations, once referenced, are likely to be referenced again in the near future. Seen in instruction loops, stacks, variable accesses…

Temporal locality is essential for an effective cache.

2. Spatial locality (locality in space) – references are likely to be near the last reference. Seen in data accesses, as data is often stored in consecutive locations. References to the next location are sometimes known as sequential locality.

Spatial locality helpful in the design of a cache but not essential.

5

Taking Advantage of Temporal Locality

Suppose a reference is repeated n times in all during a program loop and that, after the first reference, the location is always found in the cache. The average access time would then be:

Average access time = (ntc + tm)/n = tc + tm/n

where n = number of references.

Example

If tc = 5 ns, tm = 60 ns and n = 10, average access time would be 11 ns, as opposed to 60 ns without cache.
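A minimal sketch in C to check this arithmetic, using the values from the example above (the variable names are my own):

```c
#include <stdio.h>

int main(void) {
    double tc = 5.0;   /* cache access time (ns) */
    double tm = 60.0;  /* main memory access time (ns) */
    int    n  = 10;    /* number of references to the same location */

    /* The first reference pays tm + tc; the remaining n-1 hit the cache. */
    double ta = (n * tc + tm) / n;   /* = tc + tm/n */

    printf("average access time = %.1f ns\n", ta);   /* prints 11.0 */
    return 0;
}
```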

6

Hit Ratio – the probability that the required word is already in the cache. A hit occurs when a location in the cache is found immediately; otherwise a miss occurs and a reference to the main memory is necessary.

[Figure: on a cache miss, the address is passed to main memory and the data for location X is transferred into the high-speed cache memory]

The cache hit ratio, h (or hit rate), is defined as:

h = (number of times the required word is found in the cache) / (total number of references)

The miss ratio (or miss rate) is given by 1 - h.

7

Average access time using Hit Ratio

The average access time, ta, is given by:

ta = tc + (1 - h)tm

assuming again that the access must be to the cache on a hit or miss before an access is made to the main memory on a miss.*

Example

If the hit ratio is 0.85 (a typical value), the main memory access time is 50 ns, and the cache access time is 5 ns, the average access time is 5 + 0.15 × 50 = 12.5 ns.

Machine cycles

In a practical system, each access time is given as an integer number of machine cycles. Typically the hit time will be 1–2 cycles, and the cache miss penalty (the extra time to access main memory) is in the order of 5–20 cycles.

* Only read requests are considered here. Write requests are considered later.
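A minimal sketch in C to check the arithmetic of this formula, using the values from the example above (the variable names are my own):

```c
#include <stdio.h>

int main(void) {
    double tc = 5.0;    /* cache access time (ns) */
    double tm = 50.0;   /* main memory access time (ns) */
    double h  = 0.85;   /* hit ratio */

    /* Every access goes to the cache; a fraction (1 - h) also pays tm. */
    double ta = tc + (1.0 - h) * tm;

    printf("average access time = %.1f ns\n", ta);   /* prints 12.5 */
    return 0;
}
```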

8

Taking Advantage of Spatial Locality

Cache memory with multiple memory modules (wide word-length memory).

To take advantage of spatial locality, transfer not just one byte or word between main memory and the cache but a series of sequential locations called a line or a block.

For best performance, the line is transferred simultaneously across a wide data bus to the cache.

[Figure: processor and cache connected by a bus to multiple memory modules; the memory address is divided into line and byte fields, and a whole line (bytes 0–15 in the example) is transferred in parallel between the memory modules and the cache]

9

Cache Memory Organizations

Need a way to select the location within the cache. The memory address of its location in main memory is used.

Three ways of selecting cache location:

1. Fully associative

2. Direct mapped

3. Set associative

[Figure: the processor sends a memory address to the cache and transfers data to/from it; the cache sits between the processor and main memory]

10

1. Fully Associative Mapping

Both memory address and data stored together in the cache. Incoming memory address is simultaneously compared with all stored addresses using the internal logic of the cache memory.

[Figure: fully associative cache. The memory address from the processor is compared with all stored addresses in the cache simultaneously. If the address is found, the corresponding location is accessed; if the address is not found in the cache, main memory is accessed. This requires one address comparator with each stored address (content-addressable memory)]

11

Example

Suppose each line has 16 bytes. With 32-bit processors, a word consists of 4 bytes:

[Figure: fully associative cache in which each entry stores an address and Words 0–3. The memory address from the processor is divided into a line address, a 2-bit word field, and a 2-bit byte field; the line address is compared with all stored addresses simultaneously, and on a match the word field selects the word in the line and the byte field selects the byte in the word, if necessary]

“Byte” field specifies byte within word. In this example with 4 bytes in word, need 2 bits.

“Word” field specifies word within line. In this example, with 4 words in line, need 2 bits.
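A minimal sketch in C showing how the word and byte fields of this example (16-byte lines, 4-byte words) could be extracted from an address; the example address is arbitrary:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 16-byte lines of four 4-byte words: 2-bit word field, 2-bit byte field. */
    uint32_t addr = 0x12345678;                  /* example address (arbitrary) */

    unsigned byte_in_word = addr & 0x3;          /* bits 1..0 */
    unsigned word_in_line = (addr >> 2) & 0x3;   /* bits 3..2 */
    uint32_t line_addr    = addr >> 4;           /* compared with all stored addresses */

    printf("line address = 0x%X, word = %u, byte = %u\n",
           (unsigned)line_addr, word_in_line, byte_in_word);
    return 0;
}
```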

12

Selection/Replacement Algorithms

Fully associative cache needs an algorithm to select where to store information in cache, generally over some existing line (which is copied back to the main memory if altered).*

Must be implemented in hardware. (No software)

Ideally, algorithm should choose a line which is not likely to be needed again in the near future, from all lines that could be selected.

Common Algorithms

1. Random selection
2. The least recently used algorithm (or an approximation to it).

* Note in caches the selection and replacement location usually refers to the same location whereas in virtual memory (OS course) they usually refer to different locations.

13

Least Recently Used (LRU) Algorithm

Line which has not been referenced for longest time removed from cache.

The word “recently” is used because the chosen line is not the least-used line overall, as such a line is likely to be back in main memory already.

It is the least recently used of the lines currently in the cache, and all of these are likely to have been used recently, otherwise they would not be in the cache.

Can be fully implemented in hardware only when the number of lines that need to be considered is small.
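A minimal sketch in C of how an LRU choice over one small group of lines might be made with age counters (the structure and values are illustrative, not from the slides):

```c
#include <stdio.h>

#define LINES 4   /* e.g. the lines of one set in a 4-way set-associative cache */

int main(void) {
    /* last_used[i] holds the "time" of the most recent reference to line i. */
    unsigned last_used[LINES] = {7, 2, 9, 5};

    /* Choose the line whose last reference is oldest (smallest timestamp). */
    int victim = 0;
    for (int i = 1; i < LINES; i++)
        if (last_used[i] < last_used[victim])
            victim = i;

    printf("replace line %d\n", victim);   /* prints 1 for the values above */
    return 0;
}
```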

14

Direct Mapping

The line is held in the cache at a location given by the “index” bits of its main memory address, and is selected using those index bits. The most significant bits of the address, stored in the cache as a tag, are compared with the most significant bits of the incoming main memory address:

[Figure: direct-mapped cache. The memory address from the processor is divided into tag, index, word, and byte fields. The index selects one line of the cache (a high-speed RAM) holding a stored tag and Words 0 to n-1; the stored tag is compared with the incoming tag by one external address comparator. If they are the same, the word/byte in the line is read; if they differ, main memory is accessed]

15

Sample Direct-Mapped Cache Design

8192-byte direct-mapped cache with a 32-byte line organized as eight 4-byte words. 32-bit memory address.

[Figure: direct-mapped cache with 256 (2^8) entries (8192/32 = 256), each holding a 19-bit tag and Words 0–7. The 32-bit memory address from the processor is divided into a 19-bit tag, an 8-bit index, a 3-bit word field, and a 2-bit byte field; the index selects the line, the stored tag is compared with the incoming tag, and on a match the word/byte in the line is accessed]

With 4 bytes in a word, need 2 bits in the byte field. With 8 words in a line, need 3 bits in the word field. With 8192 bytes in total and 32 bytes in each line, there are 8192/32 entries in the cache (= 256 = 2^8). So the index is 8 bits.
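A minimal sketch in C of how a 32-bit address could be split into these fields for this design (the example address is arbitrary):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 8192-byte direct-mapped cache, 32-byte lines (8 x 4-byte words):
       byte = 2 bits, word = 3 bits, index = 8 bits, tag = 19 bits. */
    uint32_t addr = 0xABCD1234;            /* example address (arbitrary) */

    unsigned byte  =  addr        & 0x3;   /* bits 1..0   */
    unsigned word  = (addr >> 2)  & 0x7;   /* bits 4..2   */
    unsigned index = (addr >> 5)  & 0xFF;  /* bits 12..5  */
    uint32_t tag   =  addr >> 13;          /* bits 31..13 */

    printf("tag=0x%05X index=%u word=%u byte=%u\n",
           (unsigned)tag, index, word, byte);
    return 0;
}
```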

16

Advantages of Direct Mapped Caches

1. No replacement algorithm necessary - because there is no choice in the selection of the location for the incoming line. It is given by the index field of the incoming line's address.

2. Simple hardware and low cost.

3. High speed of operation.

17

Major Disadvantage of Direct Mapped Caches

Performance drops significantly if accesses are made to different locations with the same index.

However, as the size of cache increases, the difference in the hit ratios of the direct and associative caches reduces and becomes insignificant.

18

Elements of an Array Stored in Memory

Every nth location in memory maps into the same location in the cache, where there are n locations in the cache.

For a 2-dimensional array, a[ ][ ], with n elements in each row (stored in row-major order, as in C), all the elements in the first position of each row (a[0][0], a[1][0], a[2][0], …) would map into the same cache location.

[Figure: main memory laid out in row-major order (a[0][0], a[0][1], …, a[0][n-1], a[1][0], a[1][1], …, a[1][n-1], a[2][0], a[2][1], …), with each row occupying n locations; with n locations in the cache, a[0][0], a[1][0], a[2][0], … all map to the same cache location]
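A minimal sketch in C illustrating this effect for a hypothetical direct-mapped cache of 256 lines of 16 bytes (4096 bytes); the row length is chosen so that one row spans the whole cache, and all the names and sizes are my own:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical direct-mapped cache: 256 lines of 16 bytes = 4096 bytes. */
#define LINE_SIZE 16
#define NUM_LINES 256

/* Row length chosen so one row (1024 ints = 4096 bytes) spans the whole cache. */
#define N 1024

int a[4][N];   /* stored in row-major order, as in C */

int main(void) {
    for (int i = 0; i < 4; i++) {
        uintptr_t byte_addr = (uintptr_t)&a[i][0];
        unsigned index = (unsigned)((byte_addr / LINE_SIZE) % NUM_LINES);
        printf("&a[%d][0] -> cache index %u\n", i, index);
    }
    /* The rows are exactly one cache size (4096 bytes) apart, so every a[i][0]
       maps to the same index and the references evict one another. */
    return 0;
}
```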

19

Set-Associative Mapping

Allows a limited number of lines, with the same index and different tags, in the cache. A compromise between a fully associative cache and a direct mapped cache.

Cache divided into “sets” of lines. A four-way set associative cache would have four lines in each set.

The number of lines in a set is known as the associativity or set size. Each line in each set has a stored tag which, together with the index (set number), completes the identification of the line.

20

4-way Set-Associative Cache

[Figure: 4-way set-associative cache. The memory address from the processor is divided into tag, index (set), word, and byte fields. The index selects a set of four lines, each holding a tag and its data; the four stored tags are compared with the incoming tag. If one is the same, the word/byte in that line is accessed; if none match, main memory is accessed]

First, index of address from processor used to access set. Then, all tags of selected set compared with incoming tag. If match found, corresponding location accessed, otherwise access main memory.

21

Sample 4-way Set-Associative Cache Design

4096-byte 4-way set-associative cache with an 8-byte line organized as two 4-byte words. 32-bit memory address.

[Figure: 4-way set-associative cache with 128 (2^7) sets (4096/(4 × 8) = 128), each set holding four lines with a 22-bit tag and two 4-byte words each. The 32-bit memory address from the processor is divided into a 22-bit tag, a 7-bit index, a 1-bit word field, and a 2-bit byte field; the index selects the set, the four stored tags are compared with the incoming tag, and main memory is accessed if the tags do not match]

With 4 bytes in a word, need 2 bits in the byte field. With 2 words in a line, need 1 bit in the word field. With 4096 bytes in total, 8 bytes in each line, and 4 lines in a set (4-way set associative), there are 4096/(4 × 8) entries in the cache (= 128 = 2^7). So the index is 7 bits.
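A minimal sketch in C of the set lookup for this design: the index selects a set and the four stored tags of that set are compared with the incoming tag (the array layout and example address are my own, and valid bits are omitted for brevity):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define WAYS 4     /* 4-way set associative */
#define SETS 128   /* 4096 bytes / (4 ways x 8-byte lines) */

/* One stored tag per line in each set (data and valid bits omitted). */
static uint32_t tags[SETS][WAYS];

int main(void) {
    uint32_t addr  = 0x00012348;            /* example address (arbitrary) */
    unsigned index = (addr >> 3) & 0x7F;    /* 7-bit set number */
    uint32_t tag   =  addr >> 10;           /* 22-bit tag */

    tags[index][2] = tag;                   /* pretend the line is already cached in way 2 */

    /* Compare the incoming tag with all tags of the selected set. */
    bool hit = false;
    for (int way = 0; way < WAYS; way++) {
        if (tags[index][way] == tag) {
            printf("hit in set %u, way %d\n", index, way);
            hit = true;
            break;
        }
    }
    if (!hit)
        printf("miss: access main memory\n");
    return 0;
}
```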

22

Set-Associative Cache Replacement Algorithm

Need only consider the lines in one set, as the choice of set is predetermined by the index (set number) in the address.

Hence, with two lines in each set, for example, only one additional bit is necessary in each set to identify the line to replace.

Set size

• Typically, set size is 2, 4, 8, or 16.
• A set size of one line reduces the organization to that of direct mapping.
• An organization with one set becomes fully associative mapping.

Set-associative caches are popular for the internal caches of microprocessors.

23

Valid Bits

In all caches, one valid bit provided with each line.*

Will assume one valid bit per line.

Valid bits are set to 0 initially, then set to 1 when the contents of the line are valid. They are checked before accessing a line.

Needed to handle start-up situation when cache holds random patterns of bits and also before cache is full.

* Or parts of a line if only parts transferred in separate transactions.
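A minimal sketch in C of a cache line with a valid bit that is checked before the stored tag is used (the structure and sizes are illustrative, not from the slides):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define NUM_LINES  256
#define LINE_BYTES 16

/* One direct-mapped cache line: valid bit, tag, and data. */
struct line {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct line cache[NUM_LINES];

int main(void) {
    memset(cache, 0, sizeof cache);     /* all valid bits start at 0 */

    uint32_t addr  = 0x00003450;        /* example address (arbitrary) */
    unsigned index = (addr / LINE_BYTES) % NUM_LINES;
    uint32_t tag   =  addr / (LINE_BYTES * NUM_LINES);

    /* A line only counts as a hit if it is both valid and the tags match. */
    if (cache[index].valid && cache[index].tag == tag)
        printf("hit\n");
    else
        printf("miss: fetch the line, store its tag, set the valid bit\n");

    return 0;
}
```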

24

Sample Cache Design showing valid bits (assuming a line can be transferred in one transaction)

4096-byte 2-way set-associative cache with 16-byte lines organized as four 4-byte words. 32-bit memory address.

[Figure: 2-way set-associative cache with 128 (2^7) sets, each holding two lines with a valid bit, a 21-bit tag, and Words 0–3. The 32-bit memory address from the processor is divided into a 21-bit tag, a 7-bit index, a 2-bit word field, and a 2-bit byte field; the index selects the set, both stored tags are compared with the incoming tag, and the valid bit of a line is set when the line is transferred into the cache]

With 4 bytes in a word, need 2 bits in the byte field. With 4 words in a line, need 2 bits in the word field. With 4096 bytes in total, 16 bytes in each line, and 2 lines in a set (2-way set associative), there are 4096/(16 × 2) entries in the cache (= 128 = 2^7). So the index is 7 bits.

25

Fetch policy

Three strategies for fetching lines from main memory to cache:

Demand fetch - fetching a line when it is needed on a miss.

Prefetch - fetching lines before they are requested.

Simple prefetch strategy - prefetch (i + 1)th line when ith line is initially referenced (assuming that the (i + 1)th line is not already in the cache) on the expectation that it is likely to be needed if the ith line is needed.

Selective fetch - policy of not always fetching lines, dependent upon some defined criterion. Then, main memory used rather than cache to hold the information. Individual locations could be tagged as non-cacheable.

It may be advantageous to lock certain cache lines so that they are not replaced. Hardware could be provided within the cache to implement such locking.
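A minimal sketch in C of demand fetch combined with the simple prefetch strategy described above (the data structure and helper function are hypothetical):

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES 256

static bool present[NUM_LINES];   /* which lines are currently in the cache */

/* Hypothetical helper: bring one line from main memory into the cache. */
static void fetch_line(int line) {
    present[line] = true;
    printf("fetched line %d\n", line);
}

int main(void) {
    int i = 42;   /* line being referenced */

    /* Demand fetch: bring in line i only because it was referenced and missed. */
    if (!present[i])
        fetch_line(i);

    /* Simple prefetch: also bring in line i+1 if it is not already in the cache. */
    if (i + 1 < NUM_LINES && !present[i + 1])
        fetch_line(i + 1);

    return 0;
}
```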

26

Write Policies

Reading a word in the cache does not affect it, so there is no discrepancy between the cache word and the copy held in main memory.

Writing can occur to cache words, and then the copy held in main memory becomes different.

It is important to keep the copies the same if other devices, such as disks, access the main memory directly.

Two principal alternative mechanisms to update the main memory:

1. Write through

2. Write back

27

1. Write-Through

Every write operation to the cache is repeated to the main memory, normally at the same time. The main memory is then always the same as the cache.

[Figure: write-through. On every write reference (but see later), the write to location X in the cache is repeated to location X in main memory]

28

Cache with write buffer

Write-through scheme can be enhanced by incorporating buffers:

[Figure: a write buffer placed between the cache and main memory; the processor reads from and writes to the cache while buffered write operations (data and address) proceed to main memory]

Allows the cache to be accessed while multiple previous memory write operations proceed. “Non-blocking” store.

29

Two ways to handle write misses

1. Fetch-on-write (miss)
Describes a policy of bringing a line from the main memory into the cache for a write operation on a write miss (when the line is not already in the cache). Also called allocate on write, because a line is allocated for the incoming line on a cache miss.

2. No-fetch-on-write (miss)
Describes a policy of not bringing a line from the main memory into the cache for a write operation. Also called non-allocate on write.

No fetch on write is often practiced with a write-through cache. Why?

30

2. Write-Back (or copy back)

A write operation to main memory is only done at line replacement time. At this time, the line displaced by the incoming line is written back to main memory.

[Figure: write-back. Step 1: a reference to Y misses in the cache, and Y's line will use the location currently occupied by X (X and Y have the same index if direct mapped/set associative). Step 2: X is written back to main memory, but only if X was altered in the cache, which requires an altered ("dirty") bit with each line. Step 3: the line containing Y is brought into the cache]
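A minimal sketch in C contrasting the two write policies, including the dirty bit used by write-back (the structure and helper names are my own, not from the slides):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct line {
    bool     valid;
    bool     dirty;   /* the altered ("dirty") bit, needed only for write-back */
    uint32_t tag;
    uint32_t data;
};

/* Write-through: every write to the cache line is repeated to main memory. */
static void write_through(struct line *l, uint32_t value) {
    l->data = value;
    printf("write %u to main memory now\n", (unsigned)value);
}

/* Write-back: only mark the line dirty; main memory is updated at replacement. */
static void write_back(struct line *l, uint32_t value) {
    l->data  = value;
    l->dirty = true;
}

/* At replacement time, a write-back cache copies the displaced line out if dirty. */
static void replace_line(struct line *l, uint32_t new_tag, uint32_t new_data) {
    if (l->valid && l->dirty)
        printf("write displaced line (tag 0x%X) back to main memory\n", (unsigned)l->tag);
    l->tag   = new_tag;
    l->data  = new_data;
    l->valid = true;
    l->dirty = false;
}

int main(void) {
    struct line l = { .valid = true, .dirty = false, .tag = 0x1, .data = 0 };
    write_through(&l, 10);       /* main memory updated immediately */
    write_back(&l, 20);          /* main memory updated later, at replacement */
    replace_line(&l, 0x2, 99);   /* the dirty line is written back here */
    return 0;
}
```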

31

Instruction and Data Caches

There are several advantages if the cache is separated into two parts, one holding the data (a data cache) and one holding program instructions (an instruction cache or code cache):

• Separate paths could be provided from the processor to each cache, allowing simultaneous transfers to both the instruction cache and the data cache.

• Write policy would only have to be applied to the data cache assuming instructions are not modified.

• Designer may choose to have different sizes for the instruction cache and data cache, and have different internal organizations and line sizes for each cache.

32

Particularly convenient in a pipeline processor, as different stages of the pipeline access each cache (instruction fetch unit accesses instruction cache and memory access unit accesses data cache):

[Figure: an instruction pipeline (IF OF EX MEM) with separate data paths; the instruction fetch unit accesses the instruction cache and the memory access unit accesses the data cache. Both caches are commonly inside the processor and connect to main memory for instructions and data respectively]

33

General Cache Performance Characteristics

Miss Ratio against Cache Size

[Graph: miss ratio (0.01 to 1.0, logarithmic scale) plotted against cache size (2K to 32K) for three programs, A, B, and C]

34

Miss Ratio against Line Size

[Graph: miss ratio (0.01 to 1.0, logarithmic scale) plotted against line size (4 to 128 bytes) for fixed cache sizes (32, 256, 1024, 4096, and 32768 bytes), for an instruction cache and a combined instruction/data cache. The miss ratio has a minimum. (Why?)]

35

Second Level Caches

Most present-day systems use two levels of cache (or three levels).

[Figure: processor connected to first-level cache(s), usually separate data and instruction caches, then to a second-level unified cache holding code and data, then to main memory]

First-level cache access time matches processor. Second-level cache access time between main memory access time and first level cache access time.

36

Strictly inclusive caches -- all the data in the L1 cache is also in the L2 cache.

Exclusive caches – data is guaranteed only to be in one cache (L1 or L2) at most, never in both.

Alternative: data could be in only L1 or L2 or both.

37

Caches Example

Intel i3-2120 (Sandy Bridge), 3.3 GHz, 32 nm (Launched 2011)

• L1 Data cache = 32 Kbyte, 8-way (Write-Allocate?), line = 64 bytes
• L1 Instruction cache = 32 Kbyte, 8-way, line = 64 bytes
• L2 Cache = 256 KB, 8-way, line = 64 bytes
• L3 Cache = 3 MB, Direct?, line = 64 bytes

L1 Data Cache Latency = 4 or 5 cycles
L2 Cache Latency = 12 cycles
L3 Cache Latency = 27.85 cycles

RAM Latency = 28 cycles + 49 ns or 56 ns.

http://www.7-cpu.com/cpu/SandyBridge.html

38

Questions