
Chapter 1
INTRODUCTION, EVOLUTION & PERFORMANCE

Chapter 1 Objectives

Computer system organization and architecture.

Units of measure common to computer systems.

Computer as a layered system.

Evolution of computing technology

Performance calculation

1.1 Overview

Why study computer system organization and architecture?

Design better programs, including system software such as compilers, operating systems, and device drivers.

Optimize program behavior.

Evaluate (benchmark) computer system performance.

Understand time, space, and price tradeoffs.

1.1 Overview

Computer organization: the physical aspects of computer systems, e.g., circuit design, control signals, memory types. How does a computer work?

Computer architecture: the logical aspects of the system as seen by the programmer, e.g., instruction sets, instruction formats, data types, addressing modes. How do I design a computer?

1.2 Computer Components

Principle of Equivalence of Hardware and Software:

Anything that can be done with software can also be done with hardware, and anything that can be done with hardware can also be done with software.*


* Assuming speed is not a concern.

1.2 Computer Components

At the most basic level, a computer is a device consisting of three pieces:

A processor to interpret and execute programs

A memory to store both data and programs

A mechanism for transferring data to and from the outside world.


1.3 An Example System

Consider this advertisement:

MHz?? MB?? PCI?? USB?? L1 Cache?? What does it all mean??

1.3 An Example System

Measures of capacity and speed:

• Kilo- (K) = 1 thousand = 10^3 and 2^10

• Mega- (M) = 1 million = 10^6 and 2^20

• Giga- (G) = 1 billion = 10^9 and 2^30

• Tera- (T) = 1 trillion = 10^12 and 2^40

• Peta- (P) = 1 quadrillion = 10^15 and 2^50

1.3 An Example System

Hertz = clock cycles per second (frequency). 1MHz = 1,000,000Hz. Processor speeds are measured in MHz or GHz.

Byte = a unit of storage. 1KB = 2^10 = 1,024 bytes; 1MB = 2^20 = 1,048,576 bytes. Main memory (RAM) is measured in MB; disk storage is measured in GB for small systems, TB for large systems.
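These unit definitions are easy to check in code. A minimal Python sketch (illustrative only; not part of the original slides):

```python
# Binary storage units: 1KB = 2^10 bytes, 1MB = 2^20 bytes.
KB = 2 ** 10
MB = 2 ** 20

print(f"1 KB = {KB:,} bytes")  # 1 KB = 1,024 bytes
print(f"1 MB = {MB:,} bytes")  # 1 MB = 1,048,576 bytes
```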

1.3 An Example System

Measures of time and space:

• Milli- (m) = 1 thousandth = 10^-3

• Micro- (μ) = 1 millionth = 10^-6

• Nano- (n) = 1 billionth = 10^-9

• Pico- (p) = 1 trillionth = 10^-12

• Femto- (f) = 1 quadrillionth = 10^-15

1.3 An Example System

Millisecond = 1 thousandth of a second. Hard disk drive access times are often 10 to 20 milliseconds.

Nanosecond = 1 billionth of a second. Main memory access times are often 50 to 70 nanoseconds.

Micron (micrometer) = 1 millionth of a meter. Circuits on computer chips are measured in microns.

1.3 An Example System

Cycle time is the reciprocal of clock frequency.

A bus operating at 133MHz has a cycle time of 7.52 nanoseconds:

1 / (133,000,000 cycles/second) ≈ 7.52 ns/cycle

Now back to the advertisement ...

1.3 An Example System

A system bus moves data within the computer. The faster the bus, the better. This one runs at 400MHz.

The microprocessor is the “brain” of the system. It executes program instructions. This one is a Pentium (Intel) running at 4.20GHz.

1.3 An Example System

• Large main memory capacity means you can run larger programs with greater speed than computers having small memories.

• RAM = random access memory. Time to access contents is independent of its location.

• Cache is a type of temporary memory that can be accessed faster than RAM.

1.3 An Example System

This system has 256MB of (fast) synchronous dynamic RAM (SDRAM) ... and two levels of cache memory. The level 1 (L1) cache is smaller and (probably) faster than the L2 cache. Note that these cache sizes are measured in KB.

1.3 An Example System

Hard disk capacity determines the amount of data and the size of the programs you can store. This one can store 80GB. 7200 RPM is the rotational speed of the disk. Generally, the faster a disk rotates, the faster it can deliver data to RAM. (There are many other factors involved.)

1.3 An Example System

ATA stands for Advanced Technology Attachment, which describes how the hard disk interfaces with (or connects to) other system components.

A CD can store about 650MB of data. This drive supports rewritable CDs (CD-RW) that can be written to many times. 48x describes its speed.

1.3 An Example System

This system has ten ports.

Ports allow movement of data between a system and its external devices.

1.3 An Example System

• Serial ports send data as a series of pulses along one or two data lines.

• Parallel ports send data as a single pulse along at least eight data lines.

• USB, Universal Serial Bus, is an intelligent serial interface that is self-configuring. (It supports “plug and play.”)

1.3 An Example System

System buses can be augmented by dedicated I/O buses. PCI, Peripheral Component Interconnect, is one such bus.

This system has three PCI devices: a video card, a sound card, and a data/fax modem.

1.3 An Example System

The number of times per second that the image on a monitor is repainted is its refresh rate. The dot pitch of a monitor tells us how clear the image is.

This one has a dot pitch of 0.24mm and a refresh rate of 75Hz.

The video card contains memory and programs that support the monitor.

1.5 Historical Development

The evolution of computing machinery has taken place over several centuries. The evolution of computers is usually classified into different generations according to the technology of the era.


1.5 Historical Development

Generation Zero: Mechanical Calculating Machines (1642 - 1945)

Calculating Clock - Wilhelm Schickard (1592 - 1635).

Pascaline - Blaise Pascal (1623 - 1662).

Difference Engine - Charles Babbage (1791 - 1871), who also designed, but never built, the Analytical Engine.

Punched card tabulating machines - Herman Hollerith (1860 - 1929).

Hollerith cards were commonly used for computer input well into the 1970s.

1.5 Historical Development

The First Generation: Vacuum Tube Computers (1945 - 1953)

The Atanasoff Berry Computer (1937 - 1938), built by John Atanasoff and Clifford Berry of Iowa State University, solved systems of linear equations.

1.5 Historical Development

The First Generation: Vacuum Tube Computers (1945 - 1953)

Electronic Numerical Integrator and Computer (ENIAC), built by John Mauchly and J. Presper Eckert at the University of Pennsylvania, 1946.

The IBM 650 was the first mass-produced computer (1955). It was phased out in 1969.

1.5 Historical Development

The Second Generation: Transistorized Computers (1954 - 1965)

IBM 7094 (scientific) and 1401 (business)

Digital Equipment Corporation (DEC) PDP-1

Univac 1100

Control Data Corporation 1604

... and many others.

1.5 Historical Development

The Third Generation: Integrated Circuit Computers (1965 - 1980)

IBM 360

DEC PDP-8 and PDP-11

Cray-1 supercomputer

... and many others.

By this time, IBM had gained overwhelming dominance in the industry. Computer manufacturers of this era were characterized as IBM and the BUNCH (Burroughs, UNIVAC, NCR, Control Data, and Honeywell).

1.5 Historical Development

The Fourth Generation: VLSI Computers (1980 - ????)

Very large scale integrated circuits (VLSI) have more than 10,000 components per chip.

VLSI enabled the creation of microprocessors. The first was the 4-bit Intel 4004. Later versions, such as the 8080, 8086, and 8088, spawned the idea of "personal computing."

1.5 Historical Development

Moore's Law (1965), Gordon Moore, Intel co-founder: "The density of transistors in an integrated circuit will double every year."

Contemporary version: "The density of silicon chips doubles every 18 months."

But this "law" cannot hold forever ...

1.5 Historical Development

Rock's Law, Arthur Rock, Intel financier: "The cost of capital equipment to build semiconductors will double every four years."

In 1968, a new chip plant cost about $12,000.

At the time, $12,000 would buy a nice home in the suburbs.

An executive earning $12,000 per year was "making a very comfortable living."

1.5 Historical Development

Rock's Law: In 2005, chip plants under construction cost over $2.5 billion.

For Moore's Law to hold, Rock's Law must fall, or vice versa. But no one can say which will give out first.

$2.5 billion is more than the gross domestic product of some small countries, including Belize, Bhutan, and the Republic of Sierra Leone.

1.6 The Computer Level Hierarchy

Writing complex programs requires a “divide and conquer” approach, where each program module solves a smaller problem.

Complex computer systems employ a similar technique through a series of virtual machine layers.


1.6 The Computer Level Hierarchy

Each virtual machine layer is an abstraction of the level below it.

The machines at each level execute their own particular instructions, calling upon machines at lower levels to perform tasks as required.

Computer circuits ultimately carry out the work.


1.6 The Computer Level Hierarchy

Level 6: The User Level
Program execution and user interface level. The level with which we are most familiar.

Level 5: High-Level Language Level
The level with which we interact when we write programs in languages such as C, Pascal, Lisp, and Java.

1.6 The Computer Level Hierarchy

Level 4: Assembly Language Level
Acts upon assembly language produced from Level 5, as well as instructions programmed directly at this level.

Level 3: System Software Level
Controls executing processes on the system. Protects system resources. Assembly language instructions often pass through Level 3 without modification.

1.6 The Computer Level Hierarchy

Level 2: Machine Level
Also known as the Instruction Set Architecture (ISA) Level. Consists of instructions that are particular to the architecture of the machine. Programs written in machine language need no compilers, interpreters, or assemblers.

1.6 The Computer Level Hierarchy

Level 1: Control Level
A control unit decodes and executes instructions and moves data through the system. Control units can be microprogrammed or hardwired. A microprogram is a program written in a low-level language that is implemented by the hardware. Hardwired control units consist of hardware that directly executes machine instructions.

1.6 The Computer Level Hierarchy

Level 0: Digital Logic Level
This level is where we find digital circuits (the chips). Digital circuits consist of gates and wires. These components implement the mathematical logic of all other levels.

Why Measure Performance?

SO THAT YOU KNOW WHAT YOU'RE SUPPOSED TO BE GETTING!

So that you can compare systems!

So that you can know if the machine can do its tasks.

1.7 Computer Performance

Characterising Performance

It is not always obvious how to characterise performance:

motor cars

football teams

tennis players

computers?

Getting it wrong can lead to serious errors: will improving the processor improve speed?

1.7 Computer Performance

Performance and Speed

Performance for a program on a particular machine:

Performance(X) = 1 / Execution_time(X)

"X is n times faster than Y" means:

Performance(X) / Performance(Y) = Execution_time(Y) / Execution_time(X) = n

1.7 Computer Performance
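As a hedged illustration, here is the same definition as a small Python sketch (the function names and timings are mine, not from the slides):

```python
def performance(execution_time_s: float) -> float:
    """Performance is the reciprocal of execution time."""
    return 1.0 / execution_time_s

def times_faster(exec_x_s: float, exec_y_s: float) -> float:
    """n such that X is n times faster than Y:
    Performance(X)/Performance(Y) = Execution(Y)/Execution(X)."""
    return exec_y_s / exec_x_s

# Hypothetical timings: X runs the program in 5 s, Y in 15 s.
print(times_faster(5.0, 15.0))  # 3.0 -> X is 3 times faster than Y
```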

Measuring Time

Execution time is the amount of time, in seconds, it takes the program to execute.

Computers do several tasks at once, so two times matter: elapsed time, based on a normal clock; and CPU time, the time spent executing this program, excluding waiting for I/O or running other programs.

1.7 Computer Performance

Execution time breaks down as follows. Elapsed time (real time) = I/O waiting & other programs + CPU time for this program. CPU time, in turn, = user time (direct) + system time (time in the OS).

Example (UNIX time output):
11.099u 8.659s 10:43.67 3.0%
(user, seconds) (system, seconds) (elapsed, min:secs)
CPU time = 3.0% of elapsed time

1.7 Computer Performance
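The 3.0% figure can be reproduced directly; a quick Python check (values copied from the example above):

```python
user_s = 11.099                # user CPU time, in seconds
system_s = 8.659               # system CPU time, in seconds
elapsed_s = 10 * 60 + 43.67    # 10:43.67 elapsed -> 643.67 seconds

cpu_fraction = (user_s + system_s) / elapsed_s
print(f"{cpu_fraction:.2%}")   # 3.07%, which time(1) reports truncated as 3.0%
```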

Computer Clock Times

Computers run according to a clock that ticks at a steady rate.

The time interval between ticks is called a clock cycle (e.g., 10ns).

The clock rate is the reciprocal of the clock cycle - a frequency, in cycles per second (e.g., 100MHz). A 10 ns clock cycle means a clock rate of 1/10ns = 100,000,000 Hz = 100MHz.

1.7 Computer Performance
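A one-line reciprocal captures this relationship; a Python sketch (the helper name is mine):

```python
def clock_rate_hz(cycle_time_s: float) -> float:
    """Clock rate is the reciprocal of the clock cycle time."""
    return 1.0 / cycle_time_s

print(clock_rate_hz(10e-9))  # 10 ns cycle -> 100,000,000.0 Hz = 100 MHz
print(1.0 / 133e6 * 1e9)     # and a 133 MHz bus -> about 7.52 ns per cycle
```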

Purchasing Decision

Computer A has a 100MHz processor. Computer B has a 300MHz processor. So, B is faster, right?

WRONG! Now, let's get it right ...

1.7 Computer Performance

Measuring Performance

The only important question: "HOW FAST WILL MY PROGRAM RUN?"

CPU execution time for a program = CPU clock cycles x cycle time = CPU clock cycles / clock rate

In computer design there is a trade-off between clock cycle time and the number of cycles required for a program.

1.7 Computer Performance

Cycles Per Instruction

The execution time of a program clearly must depend on the number of instructions, but different instructions take different times.

An expression that includes this is:

CPU clock cycles = N x CPI

where N = number of instructions and CPI = average clock cycles per instruction.

1.7 Computer Performance

Example

Machine A: clock cycle time 10ns/cycle, CPI = 2.0 for program X.
Machine B: clock cycle time 30ns/cycle, CPI = 0.5 for program X.

Let I = number of instructions in the program.

CPU clock cycles (A) = I x 2.0
CPU time (A) = CPU clock cycles x clock cycle time = I x 2.0 x 10 = I x 20 ns

CPU clock cycles (B) = I x 0.5
CPU time (B) = CPU clock cycles x clock cycle time = I x 0.5 x 30 = I x 15 ns

Performance(A) / Performance(B) = Execution_time(B) / Execution_time(A) = 15/20 = 0.75

So Machine B is faster, despite its slower clock.

1.7 Computer Performance
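The same comparison in executable form; a Python sketch (the instruction count I is arbitrary, since it cancels in the ratio):

```python
I = 1_000_000  # any positive instruction count works; it cancels below

cpu_time_a_ns = I * 2.0 * 10  # Machine A: CPI 2.0, 10 ns/cycle -> I x 20 ns
cpu_time_b_ns = I * 0.5 * 30  # Machine B: CPI 0.5, 30 ns/cycle -> I x 15 ns

# Performance(A)/Performance(B) = Execution(B)/Execution(A)
print(cpu_time_b_ns / cpu_time_a_ns)  # 0.75 -> Machine B is faster
```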

Clock Cycles per Instruction

• Not all instructions take the same amount of time to execute.
• One way to think about execution time is that it equals the number of instructions executed multiplied by the average time per instruction.

Clock cycles per instruction (CPI) - the average number of clock cycles each instruction takes to execute. It is a way to compare two different implementations of the same ISA.

CPI for each instruction class:

Instruction class:  A  B  C
CPI:                1  2  3

# CPU clock cycles for a program = # Instructions for a program x Average clock cycles per instruction

Effective CPI

• Computing the overall effective CPI is done by looking at the different types of instructions and their individual cycle counts and averaging:

Overall effective CPI = Σ (CPI_i x IC_i), for i = 1 to n

where IC_i is the count (percentage) of the number of instructions of class i executed, CPI_i is the (average) number of clock cycles per instruction for that instruction class, and n is the number of instruction classes.

The overall effective CPI varies by instruction mix - a measure of the dynamic frequency of instructions across one or many programs.
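The summation translates to one line of Python; a sketch using the instruction mix from the example that follows (the function name is mine):

```python
def effective_cpi(mix):
    """Overall effective CPI = sum of CPI_i x IC_i over instruction classes,
    where IC_i is the fraction of executed instructions in class i."""
    return sum(cpi_i * ic_i for cpi_i, ic_i in mix)

# (CPI_i, IC_i) pairs for ALU, Load, Store, Branch (from the example below)
mix = [(1, 0.50), (5, 0.20), (3, 0.10), (2, 0.20)]
print(round(effective_cpi(mix), 2))  # 2.2
```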

THE Performance Equation

• Our basic performance equation is then:

CPU time = Instruction_count x CPI x clock_cycle

or

CPU time = Instruction_count x CPI / clock_rate

These equations separate the three key factors that affect performance:

We can measure the CPU execution time by running the program.

The clock rate is usually given.

We can measure overall instruction count by using profilers/simulators without knowing all of the implementation details.

CPI varies by instruction type and ISA implementation, for which we must know the implementation details.
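In executable form, the basic performance equation looks like this (a sketch; the example numbers are hypothetical):

```python
def cpu_time_s(instruction_count: float, cpi: float, clock_rate_hz: float) -> float:
    """CPU time = Instruction_count x CPI / clock_rate."""
    return instruction_count * cpi / clock_rate_hz

# Hypothetical program: 10 million instructions, CPI of 2.2, 100 MHz clock.
print(cpu_time_s(10e6, 2.2, 100e6))  # about 0.22 seconds
```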

Determinants of CPU Performance

CPU time = Instruction_count x CPI x clock_cycle

                        Instruction_count   CPI   clock_cycle
Algorithm                       X            X
Programming language            X            X
Compiler                        X            X
ISA                             X            X         X
Processor organization                       X         X
Technology                                             X

A Simple Example – Calculate average CPI

Op       Freq   CPI_i   Freq x CPI_i
ALU      50%    1       .5
Load     20%    5       1.0
Store    10%    3       .3
Branch   20%    2       .4

Overall effective CPI = 2.2

A Simple Example

• Q1: How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?

Op       Freq   CPI_i   Freq x CPI_i   Q1
ALU      50%    1       .5             .5
Load     20%    5       1.0            .4
Store    10%    3       .3             .3
Branch   20%    2       .4             .4

Overall effective CPI:  2.2            1.6

CPU time new = 1.6 x IC x CC, so 2.2/1.6 means 37.5% faster.

(Notice that (2.2 x IC x CC) / (1.6 x IC x CC) = 2.2/1.6, since the IC and CC remain unchanged by adding a bigger cache.)

A Simple Example

• Q2: How does this compare with using branch prediction to shave a cycle off the branch time?

Op       Freq   CPI_i   Freq x CPI_i   Q2
ALU      50%    1       .5             .5
Load     20%    5       1.0            1.0
Store    10%    3       .3             .3
Branch   20%    2       .4             .2

Overall effective CPI:  2.2            2.0

CPU time new = 2.0 x IC x CC, so 2.2/2.0 means 10% faster.

A Simple Example

• Q3: What if two ALU instructions could be executed at once?

Op       Freq   CPI_i   Freq x CPI_i   Q3
ALU      50%    1       .5             .25
Load     20%    5       1.0            1.0
Store    10%    3       .3             .3
Branch   20%    2       .4             .4

Overall effective CPI:  2.2            1.95

CPU time new = 1.95 x IC x CC, so 2.2/1.95 means 12.8% faster.
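All three what-ifs can be checked with a few lines of Python (a sketch; the mixes are copied from the tables above):

```python
def effective_cpi(mix):
    # mix: list of (CPI_i, fraction_i) pairs
    return sum(cpi * frac for cpi, frac in mix)

base = effective_cpi([(1, .5), (5, .2), (3, .1), (2, .2)])   # 2.2
q1 = effective_cpi([(1, .5), (2, .2), (3, .1), (2, .2)])     # 1.6: load CPI 5 -> 2
q2 = effective_cpi([(1, .5), (5, .2), (3, .1), (1, .2)])     # 2.0: branch CPI 2 -> 1
q3 = effective_cpi([(0.5, .5), (5, .2), (3, .1), (2, .2)])   # 1.95: paired ALU ops

for label, cpi in [("Q1", q1), ("Q2", q2), ("Q3", q3)]:
    print(f"{label}: {base / cpi - 1:.1%} faster")  # 37.5%, 10.0%, 12.8%
```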


Amdahl's Law

Speedup due to enhancement E:

Speedup(E) = ExTime(without E) / ExTime(with E) = Performance(with E) / Performance(without E)

Suppose that enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. Then:

ExTime(with E) = ((1 - F) + F/S) x ExTime(without E)

Speedup(with E) = ExTime(without E) / (((1 - F) + F/S) x ExTime(without E)) = 1 / ((1 - F) + F/S)
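A minimal Python sketch of Amdahl's Law (the function and example values are mine):

```python
def amdahl_speedup(f: float, s: float) -> float:
    """Overall speedup when a fraction f of the task is sped up
    by a factor s and the rest is unaffected: 1 / ((1 - f) + f / s)."""
    return 1.0 / ((1.0 - f) + f / s)

# Hypothetical enhancement: 40% of the task runs 10x faster.
print(amdahl_speedup(0.4, 10))    # about 1.56x overall
# Even as s grows without bound, speedup is capped at 1 / (1 - f):
print(amdahl_speedup(0.4, 1e12))  # approaches 1.666...
```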

ANY QUESTIONS?
