CACHE MEMORY

by GANESH

• Cache is a small high-speed memory.

• Stores data from frequently used main-memory addresses.

• It reduces the average time needed to access data from main memory.

• Most CPUs have separate, independent caches, including instruction and data caches.

• The instruction cache is used to store instructions.

• Helps to reduce the cost of going to memory to fetch instructions.

• Holds several other things, like branch prediction information.

• Example: the instruction cache on UltraSPARC processors pre-decodes incoming instructions.

• The data cache is a fast buffer that contains application data.

• Before the processor can operate on the data, it must be loaded from memory into the data cache.

• The data is then loaded from the cache line into a register, and the instruction using this value operates on it.

• The resulting value is stored in a register, then written from the register to the cache, and from the cache back to main memory.
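As a minimal sketch in C (values made up for illustration), the single statement below triggers the whole round trip just described; the comments map each hardware step:

```c
#include <stdio.h>

int main(void) {
    int a = 5, b = 7, c;

    /* One statement, but several hardware steps:
       1. 'a' and 'b' are fetched from main memory into the data cache
          (if they are not already resident in a cache line).
       2. Each value is moved from its cache line into a CPU register.
       3. The ALU operates on the registers; the result lands in a register.
       4. The result travels register -> cache line -> main memory. */
    c = a + b;

    printf("c = %d\n", c);
    return 0;
}
```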

• TLB – Translation Lookaside Buffer.

• It is implemented as a content-addressable memory (CAM).

• It is a cache to store translated addresses.

• The CPU can operate only on data and instructions whose address translations are held in the TLB.
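A minimal sketch of a TLB lookup in C, modeling the CAM's parallel tag match with a sequential loop; the entry count and translations here are made-up illustrative values:

```c
#include <stdio.h>
#include <stdbool.h>

#define TLB_ENTRIES 4   /* real TLBs hold tens to hundreds of entries */

/* One TLB entry: a virtual-page-number tag and its physical frame. */
struct tlb_entry {
    bool     valid;
    unsigned vpn;   /* virtual page number (the "content" searched) */
    unsigned pfn;   /* physical frame number (the translation)      */
};

static struct tlb_entry tlb[TLB_ENTRIES] = {
    { true, 0x12, 0x7A }, { true, 0x13, 0x01 },
};

/* A CAM compares the VPN against every entry at once; this loop
   models that search sequentially. Returns true on a TLB hit. */
bool tlb_lookup(unsigned vpn, unsigned *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return true;    /* hit: translation found */
        }
    }
    return false;           /* miss: walk the page table, then refill */
}

int main(void) {
    unsigned pfn;
    if (tlb_lookup(0x12, &pfn))
        printf("TLB hit: page 0x12 -> frame 0x%X\n", pfn);
    else
        printf("TLB miss: page table walk needed\n");
    return 0;
}
```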

• If the requested data is found in the cache, a cache hit has occurred.

• Then processor immediately reads or writes the data in the cache line.

• Results in data transfer at maximum speed.

• The proportion of accesses that result in a cache hit is known as the hit rate.

• HIT RATIO: Percentage of memory accesses satisfied by the cache.

• If the processor does not find the memory location in the cache, a cache miss has occurred.

• The processor then loads the data from main memory and copies it into the cache.

• This results in an extra delay called the miss penalty.

• MISS RATIO: 1 − HIT RATIO
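A worked example with assumed counts: 950 hits out of 1000 accesses give a hit ratio of 0.95 and a miss ratio of 0.05.

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures, for illustration only. */
    double accesses = 1000.0, hits = 950.0;

    double hit_ratio  = hits / accesses;   /* 950/1000 = 0.95 */
    double miss_ratio = 1.0 - hit_ratio;   /* 1 - 0.95 = 0.05 */

    printf("hit ratio  = %.2f\n", hit_ratio);
    printf("miss ratio = %.2f\n", miss_ratio);
    return 0;
}
```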

• Cache is partitioned into lines (also called blocks).

• A whole line is read or written.

• Each line has a tag that indicates the address in memory from which the line has been copied.

• Cache hit is detected through an associative search of all the tags.

• Data is read only if a match is found.

• There are three cache mapping techniques:

1. FULLY ASSOCIATIVE

2. DIRECT MAPPED

3. SET ASSOCIATIVE

1. FULLY ASSOCIATIVE:

• No restriction on mapping from memory to cache.

• Associative search of tag is expensive.

• Feasible only for very small caches (see the sketch below).
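A minimal sketch of a fully associative lookup in C, assuming an 8-line cache with 64-byte lines (both made-up numbers): every line's tag must be compared, which hardware does in parallel and which is why this organization gets expensive as the cache grows.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES 8
#define LINE_SIZE 64   /* bytes per cache line (assumed) */

struct cache_line {
    bool     valid;
    unsigned tag;      /* block address the line was copied from */
};

static struct cache_line cache[NUM_LINES];

/* Fully associative: a block may live in ANY line, so a lookup must
   compare the tag against every line. */
bool lookup(unsigned addr)
{
    unsigned tag = addr / LINE_SIZE;   /* whole block address is the tag */
    for (int i = 0; i < NUM_LINES; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return true;               /* hit */
    return false;                      /* miss */
}

int main(void) {
    /* Install the block holding address 0x1000 in an arbitrary line. */
    cache[3] = (struct cache_line){ .valid = true, .tag = 0x1000 / LINE_SIZE };

    printf("0x1000: %s\n", lookup(0x1000) ? "hit" : "miss");
    printf("0x2000: %s\n", lookup(0x2000) ? "hit" : "miss");
    return 0;
}
```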

• CACHE LINE REPLACEMENT:

• To fetch a new line after a cache miss, an existing line must be replaced. Two common policies for identifying the victim line, sketched below, are:

• LRU (Least Recently Used)

• Random
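A sketch of LRU victim selection, assuming a tiny 4-line fully associative cache and a per-line timestamp (a simplification; real hardware usually only approximates LRU). A Random policy would simply pick `rand() % NUM_LINES` instead.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES 4

struct line {
    bool     valid;
    unsigned tag;
    unsigned last_used;   /* timestamp of most recent access */
};

static struct line cache[NUM_LINES];
static unsigned now;      /* global access counter */

/* Pick the victim line for a refill: prefer an invalid line;
   otherwise evict the line whose last use is oldest (LRU). */
int choose_victim(void)
{
    int victim = 0;
    for (int i = 0; i < NUM_LINES; i++) {
        if (!cache[i].valid)
            return i;                       /* free line: no eviction */
        if (cache[i].last_used < cache[victim].last_used)
            victim = i;
    }
    return victim;
}

/* Access a block: on a hit refresh its timestamp, on a miss replace
   the LRU victim. */
void access_block(unsigned tag)
{
    now++;
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].tag == tag) {
            cache[i].last_used = now;       /* hit */
            return;
        }
    }
    int v = choose_victim();                /* miss */
    cache[v] = (struct line){ true, tag, now };
    printf("miss on tag %u -> filled line %d\n", tag, v);
}

int main(void) {
    for (unsigned t = 0; t < 6; t++)  /* 6 distinct blocks, 4 lines */
        access_block(t);
    return 0;
}
```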

• ESTIMATING AVERAGE MEMORY ACCESS TIME:

• Average Memory Access Time = Hit time + Miss rate × Miss penalty.
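A worked example with assumed figures: a 1-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty give AMAT = 1 + 0.05 × 100 = 6 cycles.

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures, for illustration only. */
    double hit_time     = 1.0;    /* cycles for a cache hit         */
    double miss_rate    = 0.05;   /* 5% of accesses miss            */
    double miss_penalty = 100.0;  /* extra cycles to fetch from RAM */

    /* AMAT = hit time + miss rate * miss penalty */
    double amat = hit_time + miss_rate * miss_penalty;

    printf("AMAT = %.1f cycles\n", amat);  /* 1 + 0.05*100 = 6.0 */
    return 0;
}
```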

2. DIRECT MAPPED:

• A given memory block can be mapped into only one cache line.

• The tag is used to check whether the desired word is in the cache.

• If there is no match, the block containing the required word must first be read from the memory.

• No need for an expensive associative search.

• The miss rate may go up due to an increase in mapping conflicts.
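A sketch of how a direct-mapped cache splits an address, assuming 64-byte lines and 256 lines (6 offset bits, 8 index bits); the geometry is made up for illustration:

```c
#include <stdio.h>

/* Assumed geometry: 64-byte lines -> 6 offset bits; 256 lines ->
   8 index bits; the remaining high bits form the tag. */
#define OFFSET_BITS 6
#define INDEX_BITS  8

int main(void) {
    unsigned addr = 0x12345678;

    unsigned offset = addr & ((1u << OFFSET_BITS) - 1);
    unsigned index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    unsigned tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    /* The index selects exactly ONE line; only that line's tag is
       compared, so no associative search is needed. */
    printf("addr 0x%08X -> tag 0x%X, index %u, offset %u\n",
           addr, tag, index, offset);
    return 0;
}
```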

3. SET ASSOCIATIVE:

• Caches can be 2-way, 4-way, or in general N-way set associative.

• The cache is divided into a number of sets.

• Each memory block can now be mapped into any one of the N cache lines of a set.

• The sets are predefined.

• Lower miss ratio than a direct mapped cache.

• Cheaper than a fully associative cache (see the sketch below).
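A sketch of a 2-way set associative lookup, with a deliberately tiny made-up geometry (4 sets, 2 ways, 64-byte lines): the set is predefined by the block address, and only the two ways of that set are searched associatively.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_SETS  4    /* assumed geometry, for illustration */
#define NUM_WAYS  2
#define LINE_SIZE 64

struct line { bool valid; unsigned tag; };
static struct line cache[NUM_SETS][NUM_WAYS];

/* N-way set associative: the block address picks ONE set, then only
   the N lines (ways) of that set are searched. */
bool lookup(unsigned addr)
{
    unsigned block = addr / LINE_SIZE;
    unsigned set   = block % NUM_SETS;   /* predefined set for this block */
    unsigned tag   = block / NUM_SETS;

    for (int way = 0; way < NUM_WAYS; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return true;                 /* hit in one of the N ways */
    return false;
}

int main(void) {
    /* Install the block holding address 0x1000 into its set. */
    unsigned block = 0x1000 / LINE_SIZE;
    cache[block % NUM_SETS][0] = (struct line){ true, block / NUM_SETS };

    printf("0x1000: %s\n", lookup(0x1000) ? "hit" : "miss");
    printf("0x2000: %s\n", lookup(0x2000) ? "hit" : "miss");
    return 0;
}
```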
