
COMPUTER SYSTEM ARCHITECTURE

Computer Organization Vs Computer Architecture

• Computer organization

– Is concerned with how hardware components operate and are connected to form a computer system.

– The various components are assumed to be in place, and the task is to investigate the organizational structure in order to verify that they perform the intended operations.

Computer Organization Vs Computer Architecture cont..

• Computer architecture

– Deals with the structure and behavior of a computer, including the information formats, the instruction set, and the various techniques used for memory addressing.

– It also deals with the specifications of various functional modules such as processors and memories.

– Computer architecture is often referred to as instruction-set design, and the other aspects of computer design are called implementation.

Defining Computer Architecture

• “Old” view of computer architecture:

– Instruction Set Architecture (ISA) design, i.e. decisions regarding:

• registers, memory addressing, addressing modes, instruction operands, available operations, control flow instructions, instruction encoding (an encoding sketch follows below)

• “Real” computer architecture:

– Specific requirements of the target machine

– Design to maximize performance within constraints: cost, power, and availability

– Includes ISA, microarchitecture, hardware
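As a concrete illustration of the instruction-encoding decision listed above, here is a minimal sketch that packs the MIPS R-type instruction add $t0, $t1, $t2 into its 32-bit encoding; the helper name and the small register table are illustrative choices, while the field layout and funct value follow the standard MIPS32 convention.

```python
# Minimal sketch: encoding the MIPS R-type instruction add $t0, $t1, $t2.
# Field layout (MIPS32): opcode(6) rs(5) rt(5) rd(5) shamt(5) funct(6).
REGS = {"$t0": 8, "$t1": 9, "$t2": 10}   # a few MIPS register numbers

def encode_r_type(rd, rs, rt, funct, shamt=0, opcode=0):
    """Pack the six R-type fields into one 32-bit instruction word."""
    return (opcode << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add rd, rs, rt uses opcode 0 and funct 0x20 in MIPS32
word = encode_r_type(REGS["$t0"], REGS["$t1"], REGS["$t2"], funct=0x20)
print(hex(word))   # 0x12a4020, i.e. the 32-bit word 0x012A4020
```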

Defining Computer Architecture

Computer Architecture = Instruction Set Architecture + Machine Organization

Instruction Set Architecture

• The portion of the computer that is visible to the programmer.

• Refers to the actual programmer-visible instruction set.

• It serves as the boundary between the software and the hardware.

• E.g. DLX, MIPS, ARM, PowerPC, etc.

Instruction Set Architecture examples

• General-purpose RISC architectures: MIPS, PowerPC, SPARC

• Embedded RISC processors: ARM, Hitachi, MIPS-16, Thumb

• Older architectures: VAX, 80x86, IBM 360/370

• MIPS has been used successfully in desktop, server, and embedded applications.

Instruction Set Architecture (subset of Computer Arch.)

... the attributes of a [computing] system as seen by the programmer, i.e. the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation.

– Amdahl, Blaauw, and Brooks, 1964

Instruction Set: a Critical Interface

– The instruction set architecture (ISA) serves as the boundary between the software and the hardware.

– How the machine looks to a programmer.

[Figure: software sits above the instruction set, which sits above the hardware.]

Basic structure of a single processor computer

Basic structure of a single processor computer cont..

• It consists of an input unit which accepts or reads the list of instructions to solve a problem (a program) and the data relevant for the problem.

• It has a memory or storage unit in which the procedure, data, and intermediate results are stored.

• An arithmetic and logic unit (ALU) where arithmetic and logic operations are performed.

• An output unit which displays or prints the results.

Basic structure of a single processor computer cont..

• A control unit which interprets the instructions stored in the memory and carries them out.

• The combination of the ALU and the control unit is called the central processing unit (CPU) or processing element (PE).

• A PE with its own memory is called a computing element (CE).

• This structure is known as the von Neumann architecture, proposed by John von Neumann.

Basic structure of a single processor computer cont..

• In this architecture a program is first stored in the memory.

• The PE retrieves one instruction of this program at a time, interprets it, and executes it.

• The operation of this computer is thus sequential.

• At a time the PE can execute only one instruction.
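As a minimal sketch of the sequential fetch-interpret-execute cycle just described (the tiny LOAD/ADD/STORE/HALT instruction set and the memory layout are illustrative assumptions, not from the slides):

```python
# Toy von Neumann machine: one memory holds both program and data, and the
# PE executes exactly one instruction per step of the loop.
memory = {
    0: ("LOAD", 100),   # acc <- mem[100]
    1: ("ADD", 101),    # acc <- acc + mem[101]
    2: ("STORE", 102),  # mem[102] <- acc
    3: ("HALT", None),
    100: 7, 101: 35, 102: 0,
}

pc, acc = 0, 0
while True:
    op, addr = memory[pc]        # fetch the next instruction
    pc += 1
    if op == "LOAD":             # interpret and execute, one at a time
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[102])   # 42
```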

Basic structure of a single processor computer cont..

• The speed of this computer is thus limited by the speed at which a PE can retrieve instructions and data from memory and the speed at which it can process the retrieved data.

• To increase the speed of processing of data, one may connect many sequential computers to work together.

• Such a computer, which consists of interconnected sequential computers that cooperatively execute a single program to solve a problem, is called a parallel computer.

Basic structure of a single processor computer cont..

• Rapid developments in electronics led to the emergence of PEs which can process over 500 million instructions per second and cost only $500.

• Thus it is economical to construct parallel computers which use around 4000 such PEs to carry out about a trillion instructions per second (assuming 50% efficiency), at a hardware cost of roughly 4000 × $500 = $2 million; the arithmetic is worked out below.

• The difficulty is in perceiving parallelism and in developing a software environment which will enable application programs to utilize this potential parallel processing power.
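As a quick check of the figures above, here is a minimal back-of-the-envelope sketch; the per-PE speed, per-PE cost, and the 50% efficiency factor are the numbers quoted on these slides.

```python
# Back-of-the-envelope check of the slide's figures.
pes = 4000          # number of processing elements
ips_per_pe = 500e6  # 500 million instructions per second each
efficiency = 0.5    # assumed 50% parallel efficiency
cost_per_pe = 500   # dollars per PE

aggregate_ips = pes * ips_per_pe * efficiency
total_cost = pes * cost_per_pe

print(f"{aggregate_ips:.1e} instructions/s")  # 1.0e+12, i.e. about a trillion
print(f"${total_cost:,}")                     # $2,000,000
```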

The need for High speed computing

• Many applications require computing speeds in the billion operations per second range.

• Some examples are:

– Numerical simulation of the behavior of physical systems.

– High-performance graphics, particularly visualization and animation.

– Database mining for strategic decision making.

– Image and video database search.

– Bioinformatics, etc.

How can we increase the speed of computers?

• One method of increasing the speed of a computer is to build the PE using faster semiconductor components.

– For example, early supercomputers such as the Cray X-MP used emitter-coupled logic circuits, which were costly and dissipated considerable heat.

• The rate of growth of speed using better device technology has been slow.

How we can increase the speed of computers cont..

• The rapid increase in speed has been primarily due to improved computer architectures, in which the working of different units is overlapped so that they cooperate to solve the problem.

– For example, while the processor is computing, data which may be needed later can be fetched from memory, and an I/O operation can be going on at the same time.

• Such an overlap of operations is achieved by using software and hardware features.
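As a software analogy for this kind of overlap (a minimal sketch, not from the slides), the snippet below lets a compute step proceed while a simulated I/O transfer runs on a second thread; the function names and the sleep-based "I/O" are illustrative assumptions.

```python
# Minimal sketch: overlapping computation with a (simulated) I/O transfer.
# In hardware the overlap is done by independent units; here a second
# thread stands in for the I/O unit. Names and timings are illustrative.
import threading
import time

def io_transfer(buffer):
    time.sleep(0.5)              # pretend the I/O device is busy for 0.5 s
    buffer.append("data block")

def compute():
    return sum(i * i for i in range(1_000_000))

buffer = []
io_thread = threading.Thread(target=io_transfer, args=(buffer,))
io_thread.start()                # start the "I/O unit"
result = compute()               # the "processor" keeps computing meanwhile
io_thread.join()                 # wait for the transfer to complete
print(result, buffer)
```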

How we can increase the speed of computers cont..

• Besides overlapping operations of various units of a computer, the processing unit itself may be designed to overlap operations of successive instructions.

– For example, an instruction's execution can be broken up into five distinct tasks, as shown in Fig. 1.

• Five successive instructions can be overlapped, each doing one of these tasks.

How we can increase the speed of computers cont..

• IF - Instruction Fetch

• ID - Instruction Decode

• EX - Execute

• MEM - Memory Reference

• WB - Write Back
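To make the overlap concrete, here is a minimal sketch that prints which stage each of five instructions occupies in every clock cycle of an ideal five-stage pipeline; no stalls or hazards are modelled, which is a simplifying assumption of the sketch.

```python
# Ideal 5-stage pipeline: instruction i is in stage (cycle - i), so with
# five instructions in flight every stage is busy in the steady state.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
NUM_INSTRUCTIONS = 5

for cycle in range(NUM_INSTRUCTIONS + len(STAGES) - 1):
    row = []
    for instr in range(NUM_INSTRUCTIONS):
        stage = cycle - instr          # which stage this instruction occupies
        row.append(STAGES[stage] if 0 <= stage < len(STAGES) else "..")
    print(f"cycle {cycle + 1}: " + "  ".join(row))
```

With this overlap, five instructions finish in 5 + 5 - 1 = 9 cycles instead of 25.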

How we can increase the speed of computers cont..

• The arithmetic unit itself may be designed to exploit parallelism inherent in the problem.

• An arithmetic operation can be broken down into several tasks; for a floating-point addition, for example, these are matching exponents, shifting a mantissa to align the operands, adding, and normalizing.

• The components of two arrays to be added can be streamed through such an adder, and the four tasks can be performed simultaneously on four different pairs of operands, thereby quadrupling the speed of addition.

• This method is said to exploit temporal parallelism.
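Here is a minimal sketch of this temporal parallelism, assuming the four tasks above as the pipeline stages; for brevity the actual floating-point steps are collapsed into a single Python addition performed when a pair leaves the last stage, so only the timing behaviour is modelled.

```python
# Temporal parallelism: stream operand pairs through a 4-stage adder
# pipeline (match exponents, align mantissas, add, normalize).
a = [1.5, 2.25, 3.0, 4.5, 5.0, 6.75]
b = [0.5, 1.75, 2.0, 3.5, 4.0, 5.25]

pairs = list(zip(a, b))
pipeline = [None] * 4            # one slot per pipeline stage
results, clock = [], 0

while pairs or any(slot is not None for slot in pipeline):
    clock += 1
    # all four stages work on different pairs this clock; model it as a shift
    pipeline = [pairs.pop(0) if pairs else None] + pipeline[:-1]
    done = pipeline[-1]          # the pair now in the final stage completes
    if done is not None:
        results.append(done[0] + done[1])
        pipeline[-1] = None

print(clock, results)            # 6 + 4 - 1 = 9 clocks instead of 6 * 4 = 24
```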

How we can increase the speed of computers cont..

• Another method is to have 4 adders in the CPU and add four pairs of operands simultaneously.

• This type of parallelism is called data parallelism (a small sketch follows below).

• Another method of increasing the speed of computation is to organize a set of computers to work simultaneously and cooperatively to carry out tasks in a program.
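Returning to the data-parallelism bullet above, here is a minimal sketch (not from the slides) of adding two arrays in groups of four, the way four hardware adders would each take one operand pair in the same cycle; in Python the grouping is the point, not the speed.

```python
# Data parallelism: with 4 adders, four operand pairs are added per "cycle".
# Assumes len(a) is a multiple of LANES, to keep the sketch short.
LANES = 4
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]

result = []
for i in range(0, len(a), LANES):
    # one "cycle": each of the 4 adders handles one pair from this group
    group = [a[i + k] + b[i + k] for k in range(LANES)]
    result.extend(group)

print(result)   # [11, 22, 33, 44, 55, 66, 77, 88] in len(a) / LANES = 2 "cycles"
```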

How we can increase the speed of computers cont..

• All these methods are called architectural methods, and they can be summarized as follows:

• Use parallelism in a single processor.

– Overlap execution of a number of instructions by pipelining or by using multiple functional units.

– Overlap operations of different units.

– Increase the speed of the arithmetic logic unit by exploiting data and/or temporal parallelism.

• Use parallelism in the problem.

– Use a number of interconnected processors to work cooperatively to solve the problem.

Example parallel computers

• MasPar MP 1100

• CM2

• CM5

• Power series

• Cray T3E series

India’s Parallel computers

• PARAM 8000/8600/9000 ...

• Flosolver Mark I/II/III/IV

• PACE

• ANUPAM, etc.

Features of Parallel Computers

• Higher speed is the main feature.

• Besides that, some other features are:

• Better quality of solution

– When arithmetic operations are distributed to many computers, each one does a smaller number of arithmetic operations.

– Thus rounding errors are lower when parallel computers are used.

• Better algorithms

– The availability of many computers that can work simultaneously leads to different algorithms which are not relevant for purely sequential computers.

Features of Parallel Computers

• Better storage distribution

– Certain types of parallel computing systems provide much larger storage, which is distributed.

– Access to the storage is faster in each computer.

– This feature is of special interest in many applications such as information retrieval and computer-aided design.

• Greater reliability.

– In principle a parallel computer will work even if a processor fails.

– A parallel computer's hardware and software can be built for better fault tolerance.

END