Modified From Prof. Patterson's Slides at UCB, 1
Overview of Computer Architecture
Guest Lecture
ECE 153a, Fall, 2001
Computer Architecture Is …
…the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation.
— Amdahl, Blaauw, and Brooks, 1964
What are “Machine Structures”?
• Coordination of many levels of abstraction, from software down to hardware:
– Application (Netscape)
– Operating System (Windows 98)
– Compiler and Assembler (software)
– Instruction Set Architecture (the software/hardware interface)
– Processor, Memory, I/O system
– Datapath & Control
– Digital Design, Circuit Design
– transistors (hardware)
Levels of Representation
• High Level Language Program (e.g., C):
temp = v[k];
v[k] = v[k+1];
v[k+1] = temp;
• (Compiler)
• Assembly Language Program (e.g., MIPS):
lw $t0, 0($2)
lw $t1, 4($2)
sw $t1, 0($2)
sw $t0, 4($2)
• (Assembler)
• Machine Language Program (MIPS):
0000 1001 1100 0110 1010 1111 0101 1000
1010 1111 0101 1000 0000 1001 1100 0110
1100 0110 1010 1111 0101 1000 0000 1001
0101 1000 0000 1001 1100 0110 1010 1111
• (Machine Interpretation)
• Control Signal Specification
Anatomy: 5 Components of any Computer
• Processor (active)
– Control (“brain”)
– Datapath (“brawn”)
• Memory (passive): where programs and data live when running
• Devices
– Input (keyboard, mouse)
– Output (display, printer)
• Disk: where programs and data live when not running
A "Typical" RISC
• 32-bit fixed-format instructions (3 formats)
• 32 32-bit GPRs (R0 contains zero; double-precision values take a register pair)
• 3-address, register-register arithmetic instructions
• Single addressing mode for load/store: base + displacement
– no indirection
• Simple branch conditions
• Delayed branch
see: SPARC, MIPS, HP PA-RISC, DEC Alpha, IBM PowerPC, CDC 6600, CDC 7600, Cray-1, Cray-2, Cray-3
Example: MIPS (DLX) Instruction Formats

Register-Register:
| Op (31-26) | Rs1 (25-21) | Rs2 (20-16) | Rd (15-11) | Opx (10-0) |

Register-Immediate:
| Op (31-26) | Rs1 (25-21) | Rd (20-16) | immediate (15-0) |

Branch:
| Op (31-26) | Rs1 (25-21) | Rs2/Opx (20-16) | immediate (15-0) |

Jump / Call:
| Op (31-26) | target (25-0) |
Three Key Subjects
• Pipeline – How can computation be done faster?
» Pipelining, hazards, scheduling, prediction, superscalar, etc.
• Cache – How can data access be made faster?
» L1, L2, L3 caches, pipelined requests, parallel processing, etc.
• Virtual Memory – How can data communication be made faster?
» Virtual memory, paging, bus control and protocol, etc.
Pipelining: It's Natural!
• Laundry Example
• Ann, Brian, Cathy, Dave each have one load of clothes to wash, dry, and fold
• Washer takes 30 minutes
• Dryer takes 40 minutes
• “Folder” takes 20 minutes
Sequential Laundry
• Sequential laundry takes 6 hours for 4 loads
• If they learned pipelining, how long would laundry take?
[Figure: timeline from 6 PM to midnight; loads A-D run back-to-back, each occupying a 30-40-20 minute wash-dry-fold sequence]
Pipelined Laundry: Start work ASAP
• Pipelined laundry takes 3.5 hours for 4 loads
[Figure: timeline from 6 PM; loads A-D overlap, with stage times of 30, 40, 40, 40, 40, and 20 minutes]
Pipelining Lessons
• Pipelining doesn’t help the latency of a single task; it helps the throughput of the entire workload
• Pipeline rate is limited by the slowest pipeline stage
• Multiple tasks operate simultaneously
• Potential speedup = number of pipe stages
• Unbalanced lengths of pipe stages reduce speedup
• Time to “fill” the pipeline and time to “drain” it reduce speedup
5 Steps of the DLX Datapath
1. Instruction Fetch
2. Instruction Decode / Register Fetch
3. Execute / Address Calculation
4. Memory Access
5. Write Back
Pipelined DLX Datapath
• Stages: Instruction Fetch, Instruction Decode / Register Fetch, Execute / Address Calc., Memory Access, Write Back
• Data stationary control – local decode for each instruction phase / pipeline stage
Why Pipeline? Because the resources are there!
[Figure: instructions Inst 0 through Inst 4 issued on successive clock cycles; each flows through Im (instruction memory), Reg, ALU, Dm (data memory), and Reg write-back, so every resource is busy on every cycle]
It's Not That Easy for Computers
• Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle
– Structural hazards: HW cannot support this combination of instructions (single person to fold and put clothes away)
– Data hazards: instruction depends on the result of a prior instruction still in the pipeline (missing sock)
– Control hazards: branches and other instructions that change the PC; the pipeline stalls, creating “bubbles”, until the branch outcome is known
Control Hazard
• Stall: wait until the decision is clear
– It's possible to move the decision up to the 2nd stage by adding hardware to check registers as they are being read
• Impact: 2 clock cycles per branch instruction => slow
[Figure: Add, Beq, and Load flow through the pipeline; the instruction after the Beq cannot proceed until the branch resolves]
Data Hazard on r1
• Dependencies backwards in time are hazards
add r1,r2,r3
sub r4,r1,r3
and r6,r1,r7
or  r8,r1,r9
xor r10,r1,r11
[Figure: each instruction passes through IF, ID/RF, EX, MEM, WB; sub, and, and or need r1 before add has written it back in WB]
Pipeline Hazards Again
[Figure: overlapping stage timelines (IF, DCD, OF, EX, Mem, WB) illustrating a structural hazard, a control hazard on a jump, and the three data hazards: RAW (read after write), WAW (write after write), and WAR (write after read)]
General Solutions
• Forwarding
– Forward data to the requesting unit(s) as soon as they are available
– Don’t wait until they are written into the register file
• Stalling
– Wait until things become clear
– Sacrifice performance for correctness
• Guessing
– While waiting, why not make a guess and just do it?
– Helps to improve performance, but with no guarantee
Issuing Multiple Instructions/Cycle
• Superscalar DLX: 2 instructions, 1 FP & 1 anything else
– Fetch 64 bits/clock cycle; Int on left, FP on right
– Can only issue the 2nd instruction if the 1st instruction issues
– More ports for FP registers to do an FP load & FP op as a pair

Type              Pipe stages
Int. instruction  IF ID EX MEM WB
FP instruction    IF ID EX MEM WB
Int. instruction     IF ID EX MEM WB
FP instruction       IF ID EX MEM WB
Int. instruction        IF ID EX MEM WB
FP instruction          IF ID EX MEM WB

• A 1-cycle load delay expands to 3 instructions in SS
– the instruction in the right half can’t use it, nor can the instructions in the next slot
Software Scheduling to Avoid Load Hazards

Try producing fast code for
a = b + c;
d = e - f;
assuming a, b, c, d, e, and f are in memory.

Slow code:
LW  Rb,b
LW  Rc,c
ADD Ra,Rb,Rc
SW  a,Ra
LW  Re,e
LW  Rf,f
SUB Rd,Re,Rf
SW  d,Rd

Fast code:
LW  Rb,b
LW  Rc,c
LW  Re,e
ADD Ra,Rb,Rc
LW  Rf,f
SW  a,Ra
SUB Rd,Re,Rf
SW  d,Rd
Recap: Who Cares About the Memory Hierarchy?
• Processor-DRAM Memory Gap (latency)
[Figure: performance (log scale, 1 to 1000) vs. year, 1980-2000; CPU performance (“Moore’s Law”) grows 60%/yr (2X/1.5 yr) while DRAM grows 9%/yr (2X/10 yrs), so the processor-memory performance gap grows 50%/year]
Levels of the Memory Hierarchy

Level        Capacity    Access time            Cost                     Staging/Xfer unit (managed by)
Registers    100s bytes  <10s ns                                         instr. operands, 1-8 bytes (prog./compiler)
Cache        K bytes     10-100 ns              1-0.1 cents/bit          blocks, 8-128 bytes (cache cntl)
Main Memory  M bytes     200-500 ns             $.0001-.00001 cents/bit  pages, 512-4K bytes (OS)
Disk         G bytes     10 ms (10,000,000 ns)  10^-5 - 10^-6 cents/bit  files, Mbytes (user/operator)
Tape         infinite    sec-min                10^-8 cents/bit

Upper levels are faster; lower levels are larger.
The Principle of Locality
• The Principle of Locality:
– Programs access a relatively small portion of the address space at any instant of time.
• Two Different Types of Locality:
– Temporal Locality (Locality in Time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse)
– Spatial Locality (Locality in Space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access)
• For the last 15 years, HW has relied on locality for speed
Memory Hierarchy: Terminology
• Hit: data appears in some block in the upper level (example: Block X)
– Hit Rate: the fraction of memory accesses found in the upper level
– Hit Time: time to access the upper level, which consists of RAM access time + time to determine hit/miss
• Miss: data needs to be retrieved from a block in the lower level (Block Y)
– Miss Rate = 1 - (Hit Rate)
– Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor
• Hit Time << Miss Penalty (500 instructions on the 21264!)
Cache Measures
• Hit rate: fraction found in that level
– Usually so high that we talk about the miss rate instead
– Miss rate fallacy: as MIPS is to CPU performance, miss rate is to average memory access time
• Average memory-access time = Hit time + Miss rate x Miss penalty (ns or clocks)
• Miss penalty: time to replace a block from the lower level, including time to replace in the CPU
– access time: time to reach the lower level = f(latency to lower level)
– transfer time: time to transfer the block = f(BW between upper & lower levels)
1 KB Direct Mapped Cache, 32 B Blocks
• For a 2^N byte cache:
– The uppermost (32 - N) bits are always the Cache Tag
– The lowest M bits are the Byte Select (Block Size = 2^M)
• For this cache (N = 10, M = 5), the address splits as:
– Cache Tag: bits 31-10 (example: 0x50, stored as part of the cache “state” along with the Valid Bit)
– Cache Index: bits 9-5 (example: 0x01, selecting one of the 32 cache entries)
– Byte Select: bits 4-0 (example: 0x00, selecting one of the 32 bytes, Byte 0 through Byte 31, in the block)
Two-way Set Associative Cache
• N-way set associative: N entries for each Cache Index
– N direct mapped caches operate in parallel (N typically 2 to 4)
• Example: two-way set associative cache
– Cache Index selects a “set” from the cache
– The two tags in the set are compared in parallel
– Data is selected based on the tag comparison result
[Figure: the Cache Index addresses both ways; each way's stored Cache Tag is compared against the Adr Tag, the two compare results are ORed into Hit, and a mux (Sel1/Sel0) picks the matching way's Cache Block]
4 Questions for Memory Hierarchy
• Q1: Where can a block be placed in the upper level? (Block placement)
• Q2: How is a block found if it is in the upper level? (Block identification)
• Q3: Which block should be replaced on a miss? (Block replacement)
• Q4: What happens on a write? (Write strategy)
Q1: Where can a block be placed in the upper level?
• Block 12 placed in an 8-block cache:
– Fully associative: block 12 can go anywhere
– Direct mapped: block 12 can go only into block (12 mod 8) = 4
– 2-way set associative: block 12 can go anywhere in set (12 mod 4) = 0
– S.A. mapping = Block Number modulo Number of Sets
Q2: How is a block found if it is in the upper level?
• Tag on each block– No need to check index or block offset
• Increasing associativity shrinks index, expands tag
Q3: Which block should be replaced on a miss?
• Easy for Direct Mapped
• Set Associative or Fully Associative:
– Random
– LRU (Least Recently Used)

Miss rates by associativity and replacement policy:

Associativity:   2-way           4-way           8-way
Size             LRU    Random   LRU    Random   LRU    Random
16 KB            5.2%   5.7%     4.7%   5.3%     4.4%   5.0%
64 KB            1.9%   2.0%     1.5%   1.7%     1.4%   1.5%
256 KB           1.15%  1.17%    1.13%  1.13%    1.12%  1.12%
Q4: What happens on a write?
• Write through: the information is written to both the block in the cache and the block in the lower-level memory.
• Write back: the information is written only to the block in the cache. The modified cache block is written to main memory only when it is replaced.
– Is the block clean or dirty?
• Pros and cons of each?
– WT: read misses cannot result in writes
– WB: no repeated writes to the same location
• WT is always combined with write buffers so that the processor doesn't wait for the lower-level memory
Quicksort vs. Radix Sort as the Number of Keys Varies: Instructions & Time
[Figure: instructions per key and time per key vs. set size in keys, comparing Quicksort and Radix sort]
Quicksort vs. Radix Sort as the Number of Keys Varies: Cache Misses
[Figure: cache misses per key vs. set size in keys for Quicksort and Radix sort]
• What is the proper approach to designing fast algorithms?
A Modern Memory Hierarchy
• By taking advantage of the principle of locality:
– Present the user with as much memory as is available in the cheapest technology.
– Provide access at the speed offered by the fastest technology.
[Figure: processor (datapath, control) with registers and an on-chip cache, backed by a second-level (SRAM) cache, main memory (DRAM), secondary storage (disk), and tertiary storage (disk/tape)]

Level                         Speed (ns)                 Size (bytes)
Registers                     1s                         100s
On-chip cache                 10s                        Ks
Second-level cache (SRAM) /
main memory (DRAM)            100s                       Ms-Gs
Secondary storage (disk)      10,000,000s (10s ms)       Gs
Tertiary storage (disk/tape)  10,000,000,000s (10s sec)  Ts
Basic Issues in VM System Design
• Size of the information blocks that are transferred from secondary to main storage (M)
• If a block is brought into M and M is full, some region of M must be released to make room for the new block --> replacement policy
• Which region of M is to hold the new block --> placement policy
• A missing item is fetched from secondary memory only on the occurrence of a fault --> demand load policy

Paging Organization
• Virtual and physical address spaces are partitioned into blocks of equal size: pages (virtual) and page frames (physical)
[Figure: data is staged between registers, cache, memory, and disk in pages and frames]
Address Map
V = {0, 1, . . . , n - 1}  virtual address space
M = {0, 1, . . . , m - 1}  physical address space, with n > m

MAP: V --> M U {0}  address mapping function

MAP(a) = a'  if data at virtual address a is present at physical address a' in M
       = 0   if data at virtual address a is not present in M

[Figure: the processor issues an address a from name space V; the address translation mechanism either produces the physical address a' or raises a missing-item fault, and the OS fault handler transfers the item from secondary memory to main memory]
Paging Organization
• Virtual memory: pages 0-31, 1K each, at virtual addresses 0, 1024, ..., 31744
• Physical memory: frames 0-7, 1K each, at physical addresses 0, 1024, ..., 7168
• The page is the unit of mapping, and also the unit of transfer from virtual to physical memory

Address Mapping
• A virtual address (V.A.) splits into a page number and a 10-bit displacement
• The page number indexes into the page table, which is located in physical memory and found via the Page Table Base Reg
• Each page table entry holds a Valid bit, Access Rights, and a PA; the physical memory address is formed by combining that PA with the displacement (actually, concatenation is more likely than addition)
Virtual Address and a Cache
[Figure: the CPU sends a VA to translation; the resulting PA goes to the cache, which returns data on a hit and goes to main memory on a miss]
• It takes an extra memory access to translate a VA to a PA
• This makes cache access very expensive, and this is the "innermost loop" that you want to go as fast as possible
• ASIDE: Why access the cache with a PA at all? VA caches have a problem!
– Synonym / alias problem: two different virtual addresses map to the same physical address => two different cache entries holding data for the same physical address!
– On an update, all cache entries with the same physical address must be updated, or memory becomes inconsistent
– Determining this requires significant hardware, essentially an associative lookup on the physical address tags to see if you have multiple hits; or
– a software-enforced alias boundary: aliases must agree in their low-order VA & PA bits up to the cache size
TLBs
• A way to speed up translation is to use a special cache of recently used page table entries -- this has many names, but the most frequently used is Translation Lookaside Buffer or TLB
• A TLB entry holds: Virtual Address, Physical Address, Dirty, Ref, Valid, Access
• Really just a cache on the page table mappings
• TLB access time is comparable to cache access time (much less than main memory access time)
Translation Look-Aside Buffers
• Just like any other cache, the TLB can be organized as fully associative, set associative, or direct mapped
• TLBs are usually small, typically no more than 128-256 entries even on high-end machines. This permits fully associative lookup on these machines. Most mid-range machines use small n-way set associative organizations.

Translation with a TLB:
[Figure: the CPU presents a VA to the TLB (lookup time about 1/2 t); on a TLB hit the PA goes straight to the cache (access time t), and a cache miss goes on to main memory; on a TLB miss the full page-table translation runs (about 20 t)]
Overlapped Cache & TLB Access
[Figure: the VA splits into a 20-bit page number and a 12-bit displacement; 10 bits of the displacement index the cache (1K entries of 4 bytes) while the page number goes through the associative TLB lookup in parallel, and the TLB's PA is then compared against the cache tag]

IF cache hit AND (cache tag = PA) THEN deliver data to CPU
ELSE IF [cache miss OR (cache tag != PA)] AND TLB hit THEN access memory with the PA from the TLB
ELSE do standard VA translation