
Computer Architecture

Cache Memory

ARASH HABIBI LASHKARI


Characteristics of memory systems:
• Capacity
• Access method
• Performance
• Physical type
• Physical characteristics
• Organisation


Capacity:
• Word size – the natural unit of organisation
• Number of words – or bytes
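For instance (numbers invented for illustration, not taken from the slides): a memory organised as 4M words of 32 bits each holds 4M × 4 bytes = 16 MBytes.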


Access methods (1):
• Sequential
– Start at the beginning and read through in order
– Access time depends on location of data and previous access
– e.g. tape
• Direct
– Individual blocks have unique address
– Access is by jumping to vicinity plus sequential search
– Access time depends on location and previous location
– e.g. disk


Access methods (2):
• Random
– Individual addresses identify locations exactly
– Access time is independent of location or previous access
– e.g. RAM
• Associative
– Data is located by a comparison with contents of a portion of the store
– Access time is independent of location or previous access
– e.g. cache (a simplified lookup is sketched below)
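To make the contrast concrete, here is a rough software analogue of associative access: a minimal C sketch in which the entry structure, the store size and the assoc_read name are invented for this example. Real associative memories compare all stored tags in parallel in hardware; the loop below only stands in for that comparison.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_ENTRIES 8   /* illustrative size, not from the slides */

    struct entry {
        bool     valid;
        uint32_t tag;    /* portion of the address stored with the data */
        uint32_t data;
    };

    static struct entry store[NUM_ENTRIES];

    /* Associative lookup: the requested tag is compared with every entry,
     * so access time does not depend on the data's location. */
    bool assoc_read(uint32_t tag, uint32_t *out)
    {
        for (int i = 0; i < NUM_ENTRIES; i++) {
            if (store[i].valid && store[i].tag == tag) {
                *out = store[i].data;
                return true;   /* hit */
            }
        }
        return false;          /* miss */
    }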


Performance:
• Access time
– Time between presenting the address and getting the valid data
• Memory Cycle time
– Time may be required for the memory to “recover” before next access
– Cycle time is access + recovery
• Transfer Rate
– Rate at which data can be moved
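For transfers larger than a single word, a common textbook way to relate these quantities (stated here as background, not taken from the slides) is T_N = T_A + N/R, where T_A is the access time, N the number of bits moved and R the transfer rate in bits per second. For example, with T_A = 0.1 ms and R = 1 Mbit/s, a 1000-bit transfer takes roughly 0.1 ms + 1 ms = 1.1 ms.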


Physical characteristics:
• Volatility
• Erasable
• Power consumption


Organisation:
• Physical arrangement of bits into words
• Not always obvious
• e.g. interleaved (a simple addressing sketch follows)
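As one illustration of what interleaving can mean (low-order interleaving, with the bank count and word size chosen arbitrarily for this sketch): consecutive words map to different banks, so a sequential run of accesses is spread across the banks.

    #include <stdint.h>

    #define NUM_BANKS 4   /* illustrative; real systems vary */
    #define WORD_SIZE 4   /* bytes per word, assumed for this example */

    /* Low-order interleaving: consecutive word addresses land in
     * consecutive banks. */
    static unsigned bank_of(uint32_t byte_addr)
    {
        uint32_t word = byte_addr / WORD_SIZE;
        return word % NUM_BANKS;
    }

    static uint32_t offset_in_bank(uint32_t byte_addr)
    {
        uint32_t word = byte_addr / WORD_SIZE;
        return word / NUM_BANKS;
    }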


The bottom line:
• How much? – Capacity
• How fast? – Time is money
• How expensive?


Memory hierarchy:
• L1 Cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape


• It is possible to build a computer which uses only static RAM (see later)
• This would need no cache
– How can you cache cache?
• This would cost a very large amount


Cache:
• Small amount of fast memory
• Sits between normal main memory and CPU
• May be located on CPU chip or module


Cache operation – overview:
• CPU requests contents of memory location
• Check cache for this data
• If present, get from cache (fast)
• If not present, read required block from main memory to cache
• Then deliver from cache to CPU
• Cache includes tags to identify which block of main memory is in each cache slot
(A simplified read sequence is sketched below.)
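The same sequence, restated as a minimal C sketch for a direct-mapped cache; the structure, the sizes and the simulated main memory are assumptions made for illustration, not the lecture's own code.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LINES  1024   /* illustrative size */
    #define BLOCK_SIZE 4      /* bytes per block, as in the later example */

    struct line {
        bool     valid;
        uint32_t tag;                  /* identifies which memory block is held */
        uint8_t  block[BLOCK_SIZE];
    };

    static struct line cache[NUM_LINES];
    static uint8_t     main_memory[1 << 20];   /* simulated 1 MB main memory */

    /* Copy one block from (simulated) main memory into a cache line. */
    static void memory_read(uint32_t block_addr, uint8_t *dst)
    {
        for (int i = 0; i < BLOCK_SIZE; i++)
            dst[i] = main_memory[block_addr + i];
    }

    uint8_t cache_read(uint32_t addr)
    {
        uint32_t offset = addr % BLOCK_SIZE;
        uint32_t block  = addr / BLOCK_SIZE;
        uint32_t index  = block % NUM_LINES;   /* which cache line */
        uint32_t tag    = block / NUM_LINES;   /* which memory block maps there */

        if (!(cache[index].valid && cache[index].tag == tag)) {
            /* Miss: read the required block from main memory into the cache */
            memory_read(block * BLOCK_SIZE, cache[index].block);
            cache[index].tag   = tag;
            cache[index].valid = true;
        }
        /* Hit (or after the fill): deliver from cache to CPU */
        return cache[index].block[offset];
    }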


Elements of cache design:
• Mapping Function
• Replacement Algorithm
• Write Policy
• Block Size


Direct mapping example:
• Cache of 64 kBytes
• Cache block of 4 bytes
– i.e. cache is 16k (2^14) lines of 4 bytes
• 16 MBytes main memory
• 24 bit address – (2^24 = 16M)
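With those figures, the 24-bit address splits into an 8-bit tag, a 14-bit line number and a 2-bit word offset: 2 bits pick a byte within the 4-byte block, 14 bits pick one of the 16k lines, and the remaining 8 bits form the tag. A small C sketch of the split, with the field widths hard-coded from the example above:

    #include <stdint.h>

    /* Field widths follow from the example:
     *   2 offset bits  (4-byte block)
     *  14 line bits    (16k lines)
     *   8 tag bits     (24 - 14 - 2)            */
    static void split_address(uint32_t addr,
                              uint32_t *tag, uint32_t *line, uint32_t *word)
    {
        *word = addr & 0x3;            /* bits 1..0   */
        *line = (addr >> 2) & 0x3FFF;  /* bits 15..2  */
        *tag  = addr >> 16;            /* bits 23..16 */
    }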


Write policy:
• Must not overwrite a cache block unless main memory is up to date
• I/O may address main memory directly


Write through:
• All writes go to main memory as well as cache
• Multiple CPUs can monitor main memory traffic to keep local caches up to date
• Lots of traffic
• Slows down writes
• Remember bogus write through caches!


Write back:
• Updates initially made in cache only
• Update bit for cache slot is set when an update occurs
• If block is to be replaced, write to main memory only if update bit is set
• Other caches get out of sync
• I/O must access main memory through cache
(Both policies are contrasted in the sketch below.)
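A minimal sketch of the two write policies, using the same kind of hypothetical direct-mapped cache structure as before; names and sizes are invented, and miss handling is omitted so that only the policy difference shows. The dirty flag plays the role of the update bit.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LINES  1024
    #define BLOCK_SIZE 4

    struct line {
        bool     valid;
        bool     dirty;                  /* "update bit" used by write back */
        uint32_t tag;
        uint8_t  block[BLOCK_SIZE];
    };

    static struct line cache[NUM_LINES];
    static uint8_t     main_memory[1 << 20];   /* simulated main memory */

    /* Both helpers assume the addressed block is already resident
     * in the cache (miss handling omitted to keep the sketch short). */

    /* Write through: every store goes to main memory as well as the cache. */
    static void write_through(uint32_t addr, uint8_t value)
    {
        uint32_t index = (addr / BLOCK_SIZE) % NUM_LINES;
        cache[index].block[addr % BLOCK_SIZE] = value;
        main_memory[addr] = value;     /* extra memory traffic on every write */
    }

    /* Write back: the store only touches the cache and sets the update bit. */
    static void write_back(uint32_t addr, uint8_t value)
    {
        uint32_t index = (addr / BLOCK_SIZE) % NUM_LINES;
        cache[index].block[addr % BLOCK_SIZE] = value;
        cache[index].dirty = true;
    }

    /* On replacement, write the line to memory only if the update bit is set. */
    static void evict(uint32_t index)
    {
        if (cache[index].valid && cache[index].dirty) {
            uint32_t base = (cache[index].tag * NUM_LINES + index) * BLOCK_SIZE;
            for (int i = 0; i < BLOCK_SIZE; i++)
                main_memory[base + i] = cache[index].block[i];
            cache[index].dirty = false;
        }
    }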


Pentium cache evolution:
• 80386 – no on chip cache
• 80486 – 8 kBytes, using 16 byte lines and four way set associative organization
• Pentium (all versions) – two on chip L1 caches
– Data & instructions
• Pentium III – L3 cache added off chip
• Pentium 4
– L1 caches
• 8 kBytes
• 64 byte lines
• four way set associative
– L2 cache
• 256k
• 128 byte lines
• 8 way set associative
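As a quick check of what those L2 figures imply (assuming “256k” means 256 kBytes of data): 256 kBytes ÷ 128 bytes per line = 2048 lines, and 2048 lines ÷ 8 ways = 256 sets, so 8 address bits select the set.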
