
Page 1:

Lecture 8 Objectives

• Material from Chapter 9

• More complete introduction of MPI functions

• Show how to implement manager-worker programs

• Parallel Algorithms for Document Classification

• Parallel Algorithms for Clustering

Page 2:

Outline

• Introduce MPI functionality
• Introduce problem
• Parallel algorithm design
• Creating communicators
• Non-blocking communications
• Implementation
• Pipelining
• Clustering

Page 3:

Implementation of a Very Simple Document Classifier

• Manager/Worker Design Strategy

• Manager description (create initial tasks and communicate to/from workers)

• Worker description (receive tasks, enter an alternating cycle of communication from/to the manager and computation)

Page 4:

Structure of Main Program: Manager/Worker Paradigm

int main (int argc, char *argv[]) {
   int myid, p;

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &myid);  // what is my rank?
   MPI_Comm_size (MPI_COMM_WORLD, &p);     // how many processors are there?
   if (myid == 0)
      Manager (p);
   else
      Worker (myid, p);
   MPI_Barrier (MPI_COMM_WORLD);
   MPI_Finalize ();
   return 0;
}

Page 5:

More MPI functions

• MPI_Abort

• MPI_Comm_split

• MPI_Isend, MPI_Irecv, MPI_Wait

• MPI_Probe

• MPI_Get_count

• MPI_Testsome

Page 6:

MPI_Abort

• A “quick and dirty” way for one process to terminate all processes in a specified communicator

• Example use: If manager cannot allocate memory needed to store document profile vectors

int MPI_Abort (
   MPI_Comm comm,    /* Communicator */
   int error_code)   /* Value returned to calling environment */
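
A minimal sketch of the example use above; vector_count and dict_size are invented names, not from the lecture:

/* Manager: if storage for the document profile vectors cannot be
   allocated, terminate every process in MPI_COMM_WORLD. */
double *vectors = malloc (vector_count * dict_size * sizeof(double));
if (vectors == NULL) {
   fprintf (stderr, "Manager: cannot allocate profile vectors\n");
   MPI_Abort (MPI_COMM_WORLD, 1);
}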

Page 7:

Creating a Workers-only Communicator

• To support workers-only broadcast, need workers-only communicator

• Can use MPI_Comm_split

• Excluded processes (e.g., the manager) pass MPI_UNDEFINED as the value of split_key, meaning they will not be part of any new communicator

Page 8:

Workers-only Communicator

int id;
MPI_Comm worker_comm;

...

if (!id)  /* Manager */
   MPI_Comm_split (MPI_COMM_WORLD, MPI_UNDEFINED, id, &worker_comm);
else      /* Worker */
   MPI_Comm_split (MPI_COMM_WORLD, 0, id, &worker_comm);
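
With worker_comm created, the workers-only broadcast that motivates it (e.g., of the dictionary, as in the worker pseudocode later) might look like this sketch; dict and dict_size are assumed names:

/* Only workers reach this call: the manager's worker_comm is
   MPI_COMM_NULL. Worker 0 (rank 0 in worker_comm) is the root. */
if (id != 0)
   MPI_Bcast (dict, dict_size, MPI_CHAR, 0, worker_comm);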

Page 9:

Nonblocking Send / Receive

• MPI_Isend, MPI_Irecv initiate operation
• MPI_Wait blocks until operation complete
• Calls can be made early
   – MPI_Isend as soon as value(s) assigned
   – MPI_Irecv as soon as buffer available
• Can eliminate a message copying step
• Allows communication / computation overlap

Page 10:

Function MPI_Isend

int MPI_Isend (
   void *buffer,
   int cnt,
   MPI_Datatype dtype,
   int dest,
   int tag,
   MPI_Comm comm,
   MPI_Request *handle)  /* Pointer to object that identifies the communication operation */

Page 11:

Function MPI_Irecv

int MPI_Irecv (
   void *buffer,
   int cnt,
   MPI_Datatype dtype,
   int src,
   int tag,
   MPI_Comm comm,
   MPI_Request *handle)  /* Pointer to object that identifies the communication operation */

Page 12:

Function MPI_Wait

int MPI_Wait (
   MPI_Request *handle,
   MPI_Status *status)

Blocks until the operation associated with handle completes;
status points to an object containing info on the received message.
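
A minimal sketch of the communication/computation overlap these three calls enable; the buffer, count, destination, tag, and work function are assumptions:

/* Start the send, compute while the message is in flight, then wait. */
MPI_Request handle;
MPI_Isend (vec, cnt, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &handle);
do_local_work ();                       /* overlaps with the transfer   */
MPI_Wait (&handle, MPI_STATUS_IGNORE);  /* vec may be reused after this */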

Page 13:

Receiving Problem

• Worker does not know length of message it will receive

• Example: the length of a file path name

• Alternatives
   – Allocate huge buffer
   – Check length of incoming message, then allocate buffer

• We’ll take the second alternative

Page 14:

Function MPI_Probe

int MPI_Probe (
   int src,
   int tag,
   MPI_Comm comm,
   MPI_Status *status)

Blocks until a message is available to be received from the process with rank src carrying message tag tag; the status pointer gives info on the message size.

Page 15:

Function MPI_Get_count

int MPI_Get_count (
   MPI_Status *status,
   MPI_Datatype dtype,
   int *cnt)

cnt returns the number of elements in the message.
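
Together with MPI_Probe, this solves the receiving problem from page 13: check the length first, then allocate. A sketch, where FILE_NAME_TAG is an invented tag:

/* Worker: learn the incoming path-name length before allocating. */
MPI_Status status;
int len;
MPI_Probe (0, FILE_NAME_TAG, MPI_COMM_WORLD, &status);
MPI_Get_count (&status, MPI_CHAR, &len);
char *name = malloc (len);
MPI_Recv (name, len, MPI_CHAR, 0, FILE_NAME_TAG, MPI_COMM_WORLD, &status);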

Page 16:

MPI_Testsome

• Often need to check whether one or more messages have arrived
• Manager posts a nonblocking receive to each worker process
• Builds an array of handles or request objects
• Testsome allows manager to determine how many messages have arrived

Page 17:

Function MPI_Testsome

int MPI_Testsome (
   int in_cnt,                /* IN  - Number of nonblocking receives to check */
   MPI_Request *handlearray,  /* IN  - Handles of pending receives             */
   int *out_cnt,              /* OUT - Number of completed communications      */
   int *index_array,          /* OUT - Indices of completed communications     */
   MPI_Status *status_array)  /* OUT - Status records for completed comms      */
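
A sketch of the manager-side pattern from the previous page; vec, dict_size, VECTOR_TAG, and store_vector are assumptions:

/* Post one nonblocking receive per worker, then test for arrivals. */
MPI_Request *handles  = malloc ((p-1) * sizeof(MPI_Request));
MPI_Status  *statuses = malloc ((p-1) * sizeof(MPI_Status));
int         *indices  = malloc ((p-1) * sizeof(int));
int out_cnt;

for (int w = 1; w < p; w++)
   MPI_Irecv (vec[w-1], dict_size, MPI_DOUBLE, w, VECTOR_TAG,
              MPI_COMM_WORLD, &handles[w-1]);
MPI_Testsome (p-1, handles, &out_cnt, indices, statuses);
for (int i = 0; i < out_cnt; i++)      /* indices[i]: which receive finished */
   store_vector (indices[i], vec[indices[i]]);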

Page 18:

Document Classification Problem

• Search directories, subdirectories for documents (look for .html, .txt, .tex, etc.)

• Using a dictionary of key words, create a profile vector for each document

• Store profile vectors

Page 19:

Data Dependence Graph (1)

Page 20:

Partitioning and Communication

• Most time spent reading documents and generating profile vectors

• Create two primitive tasks for each document

Page 21:

Data Dependence Graph (2)

Page 22:

Agglomeration and Mapping

• Number of tasks not known at compile time

• Tasks do not communicate with each other

• Time needed to perform tasks varies widely

• Strategy: map tasks to processes at run time

Page 23:

Manager/worker-style Algorithm

1. Task/Functional Partitioning
2. Domain/Data Partitioning

Page 24:

Roles of Manager and Workers

Page 25:

Manager Pseudocode

Identify documents
Receive dictionary size from worker 0
Allocate matrix to store document vectors
repeat
   Receive message from worker
   if message contains document vector then
      Store document vector
   endif
   if documents remain then
      Send worker file name
   else
      Send worker termination message
   endif
until all workers terminated
Write document vectors to file
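
One way the manager loop might be realized in MPI, as a rough sketch only; the tags and helpers (documents_remain, next_file_name, store_vector) are invented:

/* Service workers until each has been sent a termination message
   (represented here by a zero-length file name). */
int active = p - 1;
while (active > 0) {
   MPI_Status status;
   MPI_Probe (MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
   int src = status.MPI_SOURCE;
   if (status.MPI_TAG == VECTOR_TAG) {   /* message contains a vector */
      MPI_Recv (vec, dict_size, MPI_DOUBLE, src, VECTOR_TAG,
                MPI_COMM_WORLD, &status);
      store_vector (src, vec);
   } else                                /* worker's first work request */
      MPI_Recv (NULL, 0, MPI_CHAR, src, REQUEST_TAG,
                MPI_COMM_WORLD, &status);
   if (documents_remain ()) {
      char *name = next_file_name ();
      MPI_Send (name, strlen (name) + 1, MPI_CHAR, src, FILE_NAME_TAG,
                MPI_COMM_WORLD);
   } else {
      MPI_Send (NULL, 0, MPI_CHAR, src, FILE_NAME_TAG, MPI_COMM_WORLD);
      active--;                          /* that worker will terminate */
   }
}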

Page 26:

Worker Pseudocode

Send first request for work to manager
if worker 0 then
   Read dictionary from file
endif
Broadcast dictionary among workers
Build hash table from dictionary
if worker 0 then
   Send dictionary size to manager
endif
repeat
   Receive file name from manager
   if file name is NULL then terminate endif
   Read document, generate document vector
   Send document vector to manager
forever
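
The worker's repeat loop in the same sketch style, reusing the probe-then-allocate pattern from page 14; build_vector is an invented helper:

/* Keep processing documents until the manager sends a zero-length
   file name, which serves as the termination message. */
for (;;) {
   MPI_Status status;
   int len;
   MPI_Probe (0, FILE_NAME_TAG, MPI_COMM_WORLD, &status);
   MPI_Get_count (&status, MPI_CHAR, &len);
   if (len == 0) {                       /* termination message */
      MPI_Recv (NULL, 0, MPI_CHAR, 0, FILE_NAME_TAG, MPI_COMM_WORLD, &status);
      break;
   }
   char *name = malloc (len);
   MPI_Recv (name, len, MPI_CHAR, 0, FILE_NAME_TAG, MPI_COMM_WORLD, &status);
   build_vector (name, vec);             /* read document, fill vector */
   MPI_Send (vec, dict_size, MPI_DOUBLE, 0, VECTOR_TAG, MPI_COMM_WORLD);
   free (name);
}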

Page 27:

Task/Channel Graph

Page 28:

Enhancements

• Finding middle ground between pre-allocation and one-at-a-time allocation of file paths

• Pipelining of document processing

Page 29:

Allocation Alternatives

[Figure: task completion time as a function of documents allocated per request: n/p per request causes load imbalance; 1 per request causes excessive communication overhead.]

Page 30:

Pipelining

Page 31:

Time Savings through Pipelining

Page 32:

Pipelined Manager Pseudocode

a ← 0 {assigned jobs}
j ← 0 {available jobs}
w ← 0 {workers waiting for assignment}
repeat
   if (j > 0) and (w > 0) then
      assign job to worker
      j ← j - 1; w ← w - 1; a ← a + 1
   elseif (j > 0) then
      handle an incoming message from workers
      increment w
   else
      get another job
      increment j
   endif
until (a = n) and (w = p)

Page 33:

Summary

• Manager/worker paradigm
   – Dynamic number of tasks
   – Variable task lengths
   – No communications between tasks

• New tools for “kit”
   – Create manager/worker program
   – Create workers-only communicator
   – Non-blocking send/receive
   – Testing for completed communications

• Next Step: Cluster Profile Vectors

Page 34:

K-Means Clustering

• Assumes documents are real-valued vectors.
• Assumes a distance function on vector pairs.
• Clusters are based on the centroids (a.k.a. the center of gravity or mean) of the points in a cluster c:

   μ(c) = (1/|c|) Σ_{x ∈ c} x

• Reassignment of instances to clusters is based on the distance of each vector to the current cluster centroids.
   – (Or one can equivalently phrase it in terms of similarities.)

Page 35:

K-Means Algorithm

Let d be the distance measure between instances.
Select k random instances {s1, s2, …, sk} as seeds.
Until clustering converges or another stopping criterion is met:
   For each instance xi:
      Assign xi to the cluster cj such that d(xi, sj) is minimal.
   {Now update the seeds to the centroid of each cluster}
   For each cluster cj:
      sj = μ(cj)

Page 36:

K-Means Example (K = 2)

[Figure: pick seeds; reassign clusters; compute centroids; reassign clusters; compute centroids; reassign clusters; converged!]

Page 37:

Termination conditions

• Desire that docs in a cluster are unchanged

• Several possibilities, e.g.:
   – A fixed number of iterations.
   – Doc partition unchanged.
   – Centroid positions don’t change.
   – We’ll choose termination when only a small fraction of docs change clusters (threshold value).

Page 39:

• Quiz:
   – Describe Manager/Worker pseudo-code that implements the K-means algorithm in parallel.
   – What data partitioning for parallelism?
   – How are cluster centers updated and distributed?

Page 40:

Hints

– Objects to be clustered are evenly partitioned among all processes.
– Cluster centers are replicated.
– A global-sum reduction on cluster centers is performed at the end of each iteration to generate the new cluster centers.
– Use MPI_Bcast and MPI_Allreduce (a sketch follows below).
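
A rough sketch of one parallel K-means iteration following these hints; all names (n_local, k, dim, points, centers, nearest_center) are invented for illustration:

/* Rank 0 picks the k initial seeds; replicate them everywhere. */
MPI_Bcast (centers, k * dim, MPI_DOUBLE, 0, MPI_COMM_WORLD);

/* Each process assigns only its own n_local points, accumulating
   local coordinate sums and cluster sizes. */
double sums[k][dim];
int counts[k];
memset (sums, 0, sizeof sums);
memset (counts, 0, sizeof counts);
for (int i = 0; i < n_local; i++) {
   int c = nearest_center (points[i], centers, k);  /* min-distance cluster */
   for (int d = 0; d < dim; d++)
      sums[c][d] += points[i][d];
   counts[c]++;
}

/* Global-sum reduction: every process obtains totals over all points. */
MPI_Allreduce (MPI_IN_PLACE, sums,   k * dim, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
MPI_Allreduce (MPI_IN_PLACE, counts, k,       MPI_INT,    MPI_SUM, MPI_COMM_WORLD);

/* New (replicated) centers are the global centroids. */
for (int c = 0; c < k; c++)
   if (counts[c] > 0)                   /* guard against empty clusters */
      for (int d = 0; d < dim; d++)
         centers[c][d] = sums[c][d] / counts[c];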