CS 470/570: Introduction to Parallel and Distributed Computing

Lecture 1 Outline

• Course Details
• General Issues in Parallel Software Development
• Parallel Architectures
• Parallel Programming Models

General Information

• Instructor: Thomas Gendreau
• Office: 211 Wing
• Office Phone: 785-6813
• Office Hours:
  – Monday, Wednesday, Friday 2:15-3:15
  – Thursday 1:00-3:00
  – Feel free to see me at times other than my office hours

• email: gendreau@cs.uwlax.edu
• web: cs.uwlax.edu/~gendreau
• Textbook: Introduction to Parallel Programming by Peter S. Pacheco

Grading

• 240 points: 12 quizzes (best 12 out of 14)
• 100 points: Assignments
• 100 points: Cumulative final exam
  – 4:45-6:45 Wednesday December 17
• 440 total points
• A quiz will be given in class every Friday and on Wednesday November 21st. No makeup quizzes will be given. Unusual situations such as multi-week illness will be handled on an individual basis.

General Issues/Terminology

• Identify possible parallelism
• Match amount of parallelism to the architecture
• Scalability
• Synchronization
  – Mutual exclusion
• Choose a parallel programming model
• Performance
  – Speedup
  – Amdahl’s law (see the formula after this list)
• Data distribution
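
The speedup and Amdahl’s law entries have a standard formal statement (not spelled out on this slide). If f is the fraction of the work that can be parallelized and p is the number of processors, the speedup S(p) is bounded:

\[
S(p) = \frac{T_{serial}}{T_{parallel}} \le \frac{1}{(1 - f) + f/p}
\]

As p grows, S(p) approaches 1/(1 - f); with f = 0.9, for example, the speedup can never exceed 10 no matter how many processors are added.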

General Issues/Terminology

• Why study parallel programming now?
  – Evolution of computer architecture
  – Computer networks as potential parallel computers
  – Manage large data sets
  – Solve computationally difficult problems

General Issues/Terminology

• Parallelism in computer systems
  – Multiprogrammed OS
  – Instruction level parallelism (pipelines)
  – Vector processing
  – Multiprocessor machines
    • Multicores
  – Networks of processors

Parallel Architectures

• Classic Taxonomy (Flynn)
  – Single Instruction Single Data Stream: SISD
  – Single Instruction Multiple Data Streams: SIMD
  – Multiple Instructions Multiple Data Streams: MIMD

SIMD

• Single control unit
• Multiple ALUs
• All ALUs execute the same instruction on local data

MIMD

• Multiple control units and ALUs
• Separate stream of control on each processor
• Possible organizations
  – Shared Physical Address Space
    • Uniform or Non-uniform Memory Access
    • Cache Coherence issues
  – Distributed Physical Address Space
    • Network of Computers

Parallel Programming Models

• How is parallelism expressed?
  – Process
  – Task
  – Thread
  – Implicitly
• How is information accurately shared among parallel actions?
• How are parallel actions synchronized?

Parallel Programming Models

• Shared Address Space Programming
  – Shared variables
• Message Based Programming
  – Send/receive values among cooperating processes
• Programming models are independent of the underlying architecture
• SPMD
  – Single Program Multiple Data

Shared Address Space Programming

• Processes
• Threads
• Directive Model

Unix Processes

• Heavyweight processes
  – Requires the creation of a copy of the parent’s data space
  – Process creation has high overhead
• Example functions (see the sketch after this list)
  – fork
  – waitpid
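
A minimal sketch of the fork/waitpid pattern in C (the program itself is illustrative, not from the slides): the parent creates a child, which receives a copy of the parent’s data space, and then blocks until the child exits.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child: a copy of the parent */
    if (pid < 0) {
        perror("fork");               /* creation failed */
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                   /* child: its own stream of control */
        printf("child %d running\n", (int) getpid());
        exit(0);
    }
    waitpid(pid, NULL, 0);            /* parent blocks until the child exits */
    printf("parent reaped child %d\n", (int) pid);
    return 0;
}

Copying the parent’s data space is what makes process creation heavyweight compared to threads, which share a single address space.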

Threads

• Lightweight process
  – Does not require the creation of a copy of the parent’s data space
• Example pthread functions (see the sketch after this list)
  – pthread_create
  – pthread_join
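
A minimal pthread_create/pthread_join sketch (the thread count and function name are illustrative): all threads share the process’s data space, so no copy is made at creation.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4                    /* assumed thread count for illustration */

void *hello(void *arg) {              /* function each thread executes */
    long rank = (long) arg;
    printf("hello from thread %ld\n", rank);
    return NULL;
}

int main(void) {
    pthread_t ids[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&ids[t], NULL, hello, (void *) t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(ids[t], NULL);   /* wait for each thread to finish */
    return 0;
}

Compile with gcc -pthread. Because memory is shared, synchronization (e.g., mutual exclusion) becomes the programmer’s responsibility.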

Directive Based Parallel Programming

• Directives in source code tell the compiler where parallelism should be used
• Threads are implicitly created
• OpenMP (see the sketch after this list)
  – Compiler directives
  – #pragma omp construct clause
  – #pragma omp parallel for
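
A minimal sketch of the directive model (the array size and contents are illustrative): a single #pragma line asks the compiler to create threads and divide the loop iterations among them; no explicit thread calls appear in the source.

#include <stdio.h>

int main(void) {
    const int n = 8;
    double a[8];
    #pragma omp parallel for          /* compiler implicitly creates threads */
    for (int i = 0; i < n; i++)
        a[i] = 2.0 * i;               /* iterations are split across threads */
    printf("a[%d] = %f\n", n - 1, a[n - 1]);
    return 0;
}

Compile with gcc -fopenmp; without the flag the directive is ignored and the loop runs sequentially, which is the appeal of the directive model.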

Message Based Parallel Programming

• Send/Receive messages to share values (see the sketch after this list)
• Synchronous communication
• Asynchronous communication
• Blocking
• Non-blocking
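
A minimal send/receive sketch; the slide names only the pattern, so the use of MPI here (which the Pacheco textbook covers) is an assumption. It is also SPMD in style: both processes run the same program and branch on their rank.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;                   /* arbitrary value to share */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* blocking send */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);  /* blocking receive */
        printf("process 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}

Run with at least two processes, e.g. mpiexec -n 2 ./a.out. MPI_Send and MPI_Recv are the blocking variants; MPI_Isend and MPI_Irecv are their non-blocking counterparts.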
