Slide02: Parallel Computers

8/6/2019 Slide02 Parallel Computers

http://slidepdf.com/reader/full/slide02-parallel-computers 1/44

Parallel Computer Architecture

The End of the Road

Advantages of Multiprocessors

• Able to create powerful computers by simply connecting multiple processors

• More cost-effective than building a high-performance single processor

• Obtain fault tolerance: tasks can carry on, albeit with degraded performance

Four Decades of Computing

Batch Era (1960s)

• IBM System/360 mainframe dominated the corporate computer centers (10 MB disk, 1 MB magnetic core memory)

• Typical batch-processing machine

• No connection beyond the computer room

Time-Sharing Era (1970s)

• Advances in semiconductor memory & ICs spawned the minicomputer era

• Small, fast, and inexpensive enough to be spread throughout the company at the divisional level

• Still too expensive and difficult to use to hand over to end users

• Time-sharing computing

• Two kinds coexisted:

  • centralized data-processing mainframes

  • time-sharing minicomputers

Desktop Era (1980s)

• PCs were introduced in 1977

• Many players (Altair, Tandy, Commodore, Apple, IBM, etc.)

• Became pervasive and changed the face of computing

• Along came networked computers (LAN & WAN)

Network Era (1990s)

• Advances in network technologies led to the network-computing paradigm

• Transition from a processor-centric view of computing to a network-centric view

• A number of commercial parallel computers with multiple processors appeared:

  • shared-memory systems

  • distributed-memory systems

Four Decades of Computing

Feature      | Batch                      | Time-Sharing        | Desktop                | Network
-------------|----------------------------|---------------------|------------------------|-----------
Decade       | 1960s                      | 1970s               | 1980s                  | 1990s
Location     | Computer room              | Terminal room       | Desktop                | Mobile
Users        | Experts                    | Specialists         | Individuals            | Groups
Data         | Alphanumeric               | Text, numbers       | Fonts, graphs          | Multimedia
Objective    | Calculate                  | Access              | Present                | Communicate
Interface    | Punched card               | Kbd & CRT           | See & point            | Ask & tell
Operation    | Process                    | Edit                | Layout                 | Orchestrate
Connectivity | None                       | Peripheral cable    | LAN                    | Internet
Owners       | Corporate computer centers | Divisional IS shops | Departmental end-users | Everyone

Current Trends

• The substitution of expensive, specialized parallel machines by the more cost-effective clusters of workstations

• A cluster is a collection of stand-alone computers connected by an interconnection network

• The pervasiveness of the Internet created interest in network computing and, more recently, in grid computing

• Grids are geographically distributed platforms for computation, offering dependable, consistent, pervasive, and less expensive access to HPC facilities

Flynn’s Taxonomy of Computer Architecture

• Based on the notion of a stream of information:

  • instruction stream

  • data stream

[Diagram: CPU fetches instructions and data from memory, then executes (manipulates data as programmed)]

The two stream types combine into four classes:

• SISD - Single Instruction, Single Data

• SIMD - Single Instruction, Multiple Data

• MISD - Multiple Instruction, Single Data

• MIMD - Multiple Instruction, Multiple Data

SIMD Architecture

Single Instruction, Multiple Data (SIMD)

[Diagram: processors P1 … Pn all executing the same instruction stream over time]

MIMD Architecture

Multiple Instruction, Multiple Data (MIMD)

[Diagram: processors P1 … Pn each executing its own instruction stream over time]

SIMD Architecture Model

• Consists of two parts:

  • a front-end computer

  • a processor array

• Each element in the processor array is identical to the others and performs operations on different data in sync

• The front-end can access a PE’s memory via the bus

SIMD Architecture Model

• Lock-step synchronization

• Processors either do nothing or perform exactly the same operations simultaneously

• In SIMD, parallelism is exploited by applying simultaneous operations across large sets of data
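The lock-step, data-parallel idea can be sketched in plain Python (an illustrative model only - `simd_broadcast` is a made-up name, and a real SIMD machine performs the elementwise step in hardware, not in a loop):

```python
# Data parallelism in the SIMD style: one broadcast operation applied in
# lock-step across many data elements. Conceptually, "processing element"
# i holds a[i] and b[i], and a single instruction makes every PE combine
# its pair at the same time.

def simd_broadcast(op, a, b):
    """Apply the same operation op on every PE's local data pair."""
    assert len(a) == len(b), "every PE needs one element of each operand"
    return [op(x, y) for x, y in zip(a, b)]  # conceptually simultaneous

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
print(simd_broadcast(lambda x, y: x + y, a, b))  # [11, 22, 33, 44]
```

The point of the model is that there is one instruction (`op`) and many data pairs, which is exactly the Single Instruction, Multiple Data combination.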

SIMD Configurations

• In one configuration, each PE has its own local memory

• In the other, PEs and memory modules communicate via the interconnection network (IN)

MIMD Architecture

[Diagram: shared-memory MIMD - processors P connected through an interconnection network to shared memory modules M]

[Diagram: message-passing MIMD - each processor P paired with its own memory module M, connected through an interconnection network]

• Shared memory: information exchange through central shared memory

• Message passing: information exchange through the network

MIMD Architecture

[Diagram: shared-memory MIMD - processors P share memory modules M through an interconnection network]

Shared Memory MIMD Architecture

• Uses a bus/cache architecture

• Called an SMP (symmetric multiprocessor) since every processor has

  • an equal chance to read/write memory

  • equal access speed
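A minimal sketch of the shared-memory style, using Python threads as stand-ins for SMP processors (an assumption of this sketch: a lock models the synchronization needed when several processors update the same memory word):

```python
# Shared-memory sketch: several threads ("processors") all read/write one
# memory location, so updates must be synchronized to stay consistent.
import threading

counter = 0                      # the "central shared memory" word
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # serialize access to the shared word
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```

Every worker sees the same `counter`, which is the defining property of the shared-memory model; without the lock, concurrent updates could be lost.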

MIMD Architecture

[Diagram: message-passing MIMD - each processor P has its own memory module M; processors connected through an interconnection network]

Message Passing MIMD Architecture

• Also known as distributed memory

• No global memory

• Uses message passing to move data from one processor to another (Send/Receive pairs of commands)

• This architecture paved the way for Internet-connected systems
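The Send/Receive style can be sketched with two workers that share no state and exchange data only through an explicit channel (a `queue.Queue` stands in for the interconnection-network link; the message fields are illustrative):

```python
# Message-passing sketch: no global memory. The only way data moves from
# the sender to the receiver is an explicit Send/Receive pair.
import threading, queue

channel = queue.Queue()          # the "interconnection network" link

def sender():
    channel.put({"rank": 0, "payload": [1, 2, 3]})    # Send

results = []
def receiver():
    msg = channel.get()                               # Receive (blocks)
    results.append(sum(msg["payload"]))

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start(); t1.join(); t2.join()
print(results[0])  # 6
```

Note that the receiver blocks until a message arrives, mirroring a blocking Receive; this is the same pairing discipline that message-passing libraries such as MPI build on.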

MIMD Architecture

[Diagram: shared-memory MIMD and message-passing MIMD side by side]

• Shared memory: programming is easier

• Message passing: provides scalability

• DSM (distributed shared memory) is the hybrid between the two

DSM

• Memory is physically distributed [message passing]

• Memory can be addressed as one (logically shared) address space [shared memory]

• Programming-wise, the architecture looks and behaves like a shared-memory machine, but a message-passing architecture lives underneath the software
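A toy model of the DSM idea (all class and method names below are my own, not from any real DSM system): the program sees one flat address space, but each address physically lives on one node, and the address-to-owner mapping plays the role of the hidden message-passing layer.

```python
# DSM sketch: reads and writes look like ordinary shared-memory accesses,
# but each one is routed to the node that physically owns the address.
class DSMNode:
    def __init__(self):
        self.local = {}          # this node's physical memory

class DSM:
    def __init__(self, nodes):
        self.nodes = nodes
    def owner(self, addr):
        return self.nodes[addr % len(self.nodes)]   # simple placement rule
    def write(self, addr, value):                   # looks like shared memory...
        self.owner(addr).local[addr] = value        # ...but is a remote access
    def read(self, addr):
        return self.owner(addr).local.get(addr)

mem = DSM([DSMNode() for _ in range(4)])
mem.write(42, "hello")           # transparently lands on node 42 % 4 == 2
print(mem.read(42))              # hello
```

The caller never names a node, which is the "looks and behaves like shared memory" half; the routing inside `write`/`read` is the message-passing half underneath.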

SGI Origin2000

Shared Memory

• Access control - determines which processes can access which resources

• Synchronization - constraints limit the times at which sharing processes may access shared resources

Shared Memory

• Protection - a system feature that prevents processes from making arbitrary accesses to resources belonging to other processes

Message Passing

• Nodes are typically able to simultaneously

  • store messages in buffers

  • perform send/receive operations

• Scalable - the number of processors can be increased without a significant decrease in efficiency of operation

Interconnection Networks

Interconnection Networks (INs)

• Can be classified based on:

  • mode of operation

  • control strategy

  • switching techniques

  • topology

Mode of Operation

• Accordingly, INs are classified as:

  • Synchronous

    • a single global clock is used by all components

    • operate in a lock-step manner

  • Asynchronous

    • do not require a global clock

    • handshaking signals are used instead

• Synchronous INs tend to be slower than asynchronous ones; they are race- and hazard-free, however

Control Strategy

• Accordingly, INs are classified as:

  • Centralized

    • a single central control unit is used to oversee and control the operation

  • Decentralized

    • the control function is distributed among different components

Control Strategy

• The function and reliability of the central control unit can become the bottleneck in a centralized control system

• While the crossbar is a centralized system, multistage interconnection networks are decentralized

Switching Techniques

• INs can be classified as:

  • Circuit switching

    • a complete path has to be established and remain in existence during the whole communication

  • Packet switching

    • communication takes place via messages that are divided into smaller entities (packets)

    • packets travel in a store-and-forward manner

• While packet switching tends to use resources more efficiently, it suffers from variable packet delays
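A first-order latency calculation makes the trade-off concrete (a textbook-style model; the parameter values below are chosen for illustration, not taken from the slides):

```python
# Rough latency comparison: circuit switching pays a one-time path setup
# per hop, then streams the whole message over the reserved path;
# store-and-forward packet switching forwards packet by packet, so the
# first packet crosses all hops and the rest pipeline behind it.
def circuit_latency(msg_bits, bandwidth, hops, setup):
    return setup * hops + msg_bits / bandwidth

def store_and_forward_latency(msg_bits, pkt_bits, bandwidth, hops):
    n_pkts = -(-msg_bits // pkt_bits)          # ceiling division
    pkt_time = pkt_bits / bandwidth
    return hops * pkt_time + (n_pkts - 1) * pkt_time

# 1 Mb message, 1 Gb/s links, 4 hops, 1 us setup per hop, 8 kb packets:
print(circuit_latency(1e6, 1e9, 4, 1e-6))             # ~0.001004 s
print(store_and_forward_latency(1e6, 8e3, 1e9, 4))    # ~0.001024 s
```

With these numbers the two are close, but the circuit-switched figure is fixed once the path is up, while the packet-switched figure varies with packet size, queueing, and routing - the "variable packet delays" the slide mentions.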

Topology

• Topology describes how processors and memories are connected to other processors and memories

Shared Memory INs

• bus-based

• switch-based

Message Passing INs

• Static interconnection networks

• Dynamic interconnection networks

Static INs

Dynamic INs

• Establish a connection between two or more nodes on the fly as messages are routed along the links

• The number of hops in a path from source to destination node is equal to the number of point-to-point links a message must traverse to reach its destination
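The hop-count definition can be checked on any topology modeled as a graph: a breadth-first search returns the minimum number of point-to-point links between two nodes (the 4-node ring below is an illustrative choice of topology, not one from the slides):

```python
# Hop count sketch: hops from source to destination = point-to-point
# links traversed. BFS over the adjacency structure finds the minimum.
from collections import deque

def min_hops(adjacency, src, dst):
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if node == dst:
            return hops
        for nbr in adjacency[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return None                      # unreachable

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(min_hops(ring, 0, 2))  # 2
```

In the ring, node 0 reaches node 2 in two hops either way around; denser topologies reduce this count at the cost of more links.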

Single-stage

Crossbar switch
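A minimal sketch of crossbar behavior (my own toy model, not a real switch implementation): any input can connect to any output, distinct input-output pairs proceed in parallel, and a request is blocked only when its output is already claimed.

```python
# Crossbar sketch: an n x n grid of crosspoints under centralized control.
# Closing crosspoint (i, j) connects input i to output j; any set of
# connections with distinct outputs can be active simultaneously.
class Crossbar:
    def __init__(self, n):
        self.n = n
        self.out_of = {}                 # input -> output currently connected
    def connect(self, inp, out):
        if out in self.out_of.values():  # output already claimed: blocked
            return False
        self.out_of[inp] = out
        return True
    def disconnect(self, inp):
        self.out_of.pop(inp, None)

xb = Crossbar(4)
print(xb.connect(0, 2))   # True  - crosspoint (0, 2) closed
print(xb.connect(1, 3))   # True  - non-conflicting connection in parallel
print(xb.connect(3, 2))   # False - output 2 is busy
```

The single `out_of` map is the centralized control the earlier slide refers to: one place decides every connection, which is also why it can become a bottleneck.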
