MPI: the last episode
By: Camilo A. Silva
Topics
• Modularity
• Data Types
• Buffer issues + Performance issues
• Compilation using MPICH2
• Other topics: MPI objects, tools for evaluating programs, and multiple program connection
Modularity
What is a modular design?
The basic idea underlying modular design is to organize a complex system (such as a large program, an electronic circuit, or a mechanical device) as a set of distinct components that can be developed independently and then plugged together.
Why is it important?
• Programs may need to incorporate multiple parallel algorithms
• Large programs can be controlled by using modular designs
• Modular design increases reliability and reduces costs
Modular design principles
• Provide simple interfaces
• Ensure that modules hide information
• Use appropriate tools
Modular design checklist

The following checklist can be used to evaluate the success of a modular design. As usual, each question should be answered in the affirmative.

1. Does the design identify clearly defined modules?
2. Does each module have a clearly defined purpose? (Can you summarize it in one sentence?)
3. Is each module's interface sufficiently abstract that you do not need to think about its implementation in order to understand it? Does it hide its implementation details from other modules?
4. Have you subdivided modules as far as usefully possible?
5. Have you verified that different modules do not replicate functionality?
6. Have you isolated those aspects of the design that are most hardware specific, complex, or otherwise likely to change?
Applying modularity in parallel programs
Three (3) general forms of modular composition exist in parallel programs: sequential, parallel, and concurrent
Applying modularity using MPI
• MPI supports modular programming:
– Provides information hiding
– Encapsulates internal communication
• Every MPI communication operation specifies a communicator, which:
– Identifies the process group
– Identifies the context
Implementing flexibility in communicators
• In the previous discussions, all communication operations have used the default communicator MPI_COMM_WORLD, which incorporates all processes involved in an MPI computation and defines a default context.
• There are other functions that add flexibility to the communicator and its context:
– MPI_COMM_DUP
– MPI_COMM_SPLIT
– MPI_INTERCOMM_CREATE
– MPI_COMM_FREE
Details of functions
Creating communicators
• A call of the form MPI_COMM_DUP(comm, newcomm) creates a new communicator newcomm comprising the same processes as comm but with a new context.
integer comm, newcomm, ierr ! Handles are integers ...
call MPI_COMM_DUP(comm, newcomm, ierr) ! Create new context
call transpose(newcomm, A) ! Pass to library
call MPI_COMM_FREE(newcomm, ierr) ! Free new context
Partitioning processes
The term parallel composition is used to denote the parallel execution of two or more program components on disjoint sets of processors.

Program 1:

MPI_Comm comm, newcomm;
int myid, color;
MPI_Comm_rank(comm, &myid);
color = myid % 3;
MPI_Comm_split(comm, color, myid, &newcomm);

Program 2:

MPI_Comm comm, newcomm;
int myid, color;
MPI_Comm_rank(comm, &myid);
if (myid < 8)   /* Select first 8 processes */
    color = 1;
else            /* Others are not in group */
    color = MPI_UNDEFINED;
MPI_Comm_split(comm, color, myid, &newcomm);
Communicating between groups
Datatypes
CODE 1 (Fortran):

call MPI_TYPE_CONTIGUOUS(10, MPI_REAL, tenrealtype, ierr)
call MPI_TYPE_COMMIT(tenrealtype, ierr)
call MPI_SEND(data, 1, tenrealtype, dest, tag,
$    MPI_COMM_WORLD, ierr)
call MPI_TYPE_FREE(tenrealtype, ierr)

CODE 2 (C):

float data[1024];
MPI_Datatype floattype;
MPI_Type_vector(10, 1, 32, MPI_FLOAT, &floattype);
MPI_Type_commit(&floattype);
MPI_Send(data, 1, floattype, dest, tag, MPI_COMM_WORLD);
MPI_Type_free(&floattype);
Heterogeneity
• MPI datatypes have two main purposes:
– Heterogeneity: parallel programs that span different processor architectures
– Noncontiguous data: structures, vectors with non-unit stride, etc.
• Basic datatypes, corresponding to the underlying language, are predefined. The user can construct new datatypes at run time; these are called derived datatypes.
Datatypes

• Elementary: language-defined types (e.g., MPI_INT or MPI_DOUBLE_PRECISION)
• Vector: blocks separated by a constant stride
• Contiguous: vector with stride of one
• Hvector: vector, with stride in bytes
• Indexed: array of indices (for scatter/gather)
• Hindexed: indexed, with indices in bytes
• Struct: general mixed types (for C structs, etc.)
Vectors
To specify this row (in C order), we can use

MPI_Type_vector( count, blocklen, stride, oldtype, &newtype );
MPI_Type_commit( &newtype );

The exact code for this is

MPI_Type_vector( 5, 1, 7, MPI_DOUBLE, &newtype );
MPI_Type_commit( &newtype );
Structures
Structures are described by three arrays: the number of elements in each block (array_of_len), the displacement or location of each block (array_of_displs), and the datatype of each block (array_of_types):

MPI_Type_struct( count, array_of_len, array_of_displs, array_of_types, &newtype );

(This is the MPI-1 name; MPI-2 calls it MPI_Type_create_struct.)
Structure example
Buffering Issues
• Where does data go when you send it? One possibility is:
Better buffering

• This is not very efficient: there are three copies in addition to the exchange of data between processes. We would prefer to avoid the intermediate copies.
• But this requires either that MPI_Send not return until the data has been delivered, or that we allow a send operation to return before completing the transfer. In the latter case, we need to test for completion later.
Blocking + Non-blocking communication
• So far we have used blocking communication:
– MPI_Send does not complete until the buffer is empty (available for reuse).
– MPI_Recv does not complete until the buffer is full (available for use).
• Simple, but can be "unsafe":
– Completion depends in general on the size of the message and the amount of system buffering.
Solutions to the “unsafe” problem
• Order the operations more carefully:
• Supply receive buffer at same time as send, with MPI_Sendrecv:
• Use non-blocking operations:
• Use MPI_Bsend
Non blocking operations
• Non-blocking operations return (immediately) "request handles" that can be waited on and queried:
• MPI_Isend(start, count, datatype, dest, tag, comm, request)
• MPI_Irecv(start, count, datatype, source, tag, comm, request)
• MPI_Wait(request, status)
• One can also test without waiting: MPI_Test(request, flag, status)
Multiple completions
• It is often desirable to wait on multiple requests. An example is a master/slave program, where the master waits for one or more slaves to send it a message.
• MPI_Waitall(count, array_of_requests, array_of_statuses)
• MPI_Waitany(count, array_of_requests, index, status)
• MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)
• There are corresponding versions of test for each of these.
Fairness
• A parallel algorithm is fair if no process is effectively ignored. In the preceding program, processes with low rank (like process zero) may be the only ones whose messages are received.
• MPI makes no guarantees about fairness. However, MPI makes it possible to write efficient, fair programs.
![Page 28: MPI: the last episode By: Camilo A. Silva. Topics Modularity Data Types Buffer issues + Performance issues Compilation using MPICH2 Other topics: MPI](https://reader033.vdocuments.us/reader033/viewer/2022051417/5697bf8b1a28abf838c8b0eb/html5/thumbnails/28.jpg)
Communication Modes

• MPI provides multiple modes for sending messages:
• Synchronous mode (MPI_Ssend): the send does not complete until a matching receive has begun. (Unsafe programs become incorrect and usually deadlock within an MPI_Ssend.)
• Buffered mode (MPI_Bsend): the user supplies a buffer to the system for its use. (The user supplies enough memory to make an unsafe program safe.)
• Ready mode (MPI_Rsend): the user guarantees that a matching receive has been posted.
– Allows access to fast protocols.
– Undefined behavior if the matching receive is not posted.
• Non-blocking versions: MPI_Issend, MPI_Irsend, MPI_Ibsend.
• Note that an MPI_Recv may receive messages sent with any send mode.
![Page 29: MPI: the last episode By: Camilo A. Silva. Topics Modularity Data Types Buffer issues + Performance issues Compilation using MPICH2 Other topics: MPI](https://reader033.vdocuments.us/reader033/viewer/2022051417/5697bf8b1a28abf838c8b0eb/html5/thumbnails/29.jpg)
Buffered Send
• MPI provides a send routine that may be used when MPI_Isend is awkward to use (e.g., lots of small messages).
• MPI_Bsend makes use of a user-provided buffer to save any messages that cannot be immediately sent.

int bufsize;
char *buf = malloc(bufsize);
MPI_Buffer_attach( buf, bufsize );
...
MPI_Bsend( ... same as MPI_Send ... );
...
MPI_Buffer_detach( &buf, &bufsize );

• The MPI_Buffer_detach call does not complete until all messages are sent.
Performance Issues
MPICH2
MPICH2 is an all-new implementation of the MPI Standard, designed to implement all of the MPI-2 additions to MPI (dynamic process management, one-sided operations, parallel I/O, and other extensions) and to apply the lessons learned in implementing MPICH1 to make MPICH2 more robust, efficient, and convenient to use.
MPICH2: MPI compilation basic info
1. mpiexec -n 32 a.out (run a.out with 32 processes)
2. mpiexec -n 1 -host loginnode master : -n 32 -host smp slave (MPMD: one master process on loginnode, 32 slave processes on smp)
3. mpdtrace (list the hosts in the running mpd ring)
Other topics: MPI Objects
• MPI has a variety of objects (communicators, groups, datatypes, etc.) that can be created and destroyed
MPI Objects
• MPI_Request
– Handle for nonblocking communication, normally freed by MPI in a test or wait
• MPI_Datatype
– MPI datatype. Free with MPI_Type_free.
• MPI_Op
– User-defined operation. Free with MPI_Op_free.
• MPI_Comm
– Communicator. Free with MPI_Comm_free.
• MPI_Group
– Group of processes. Free with MPI_Group_free.
• MPI_Errhandler
– MPI error handler. Free with MPI_Errhandler_free.
Freeing objects
• MPI_Type_vector( ly, 1, nx, MPI_DOUBLE, &newx1 );
• MPI_Type_hvector( lz, 1, nx*ny*sizeof(double), newx1, &newx );
• MPI_Type_free( &newx1 ); (safe here: freeing newx1 does not affect newx, which was derived from it)
• MPI_Type_commit( &newx );
Other topics: tools for evaluating programs
• MPI provides some tools for evaluating the performance of parallel programs.
• These are:
– Timer
– Profiling interface
MPI Timer
• The elapsed (wall-clock) time between two points in an MPI program can be computed using MPI_Wtime:
double t1, t2;
t1 = MPI_Wtime();
...
t2 = MPI_Wtime();
printf( "Elapsed time is %f\n", t2 - t1 );

• The value returned by a single call to MPI_Wtime has little meaning on its own; only differences between calls are useful.
MPI Profiling Mechanisms

• All routines have two entry points: MPI_... and PMPI_....
• This makes it easy to provide a single level of low-overhead routines to intercept MPI calls without any source code modifications.
• Used to provide "automatic" generation of trace files.

static int nsend = 0;
int MPI_Send( start, count, datatype, dest, tag, comm )
{
    nsend++;
    return PMPI_Send( start, count, datatype, dest, tag, comm );
}
Profiling routines
Log Files
Creating Log Files
• This is very easy with the MPICH implementation of MPI. Simply replace -lmpi with -llmpi -lpmpi -lm in the link line for your program, and relink your program. You do not need to recompile.
• On some systems, you can get a real-time animation by using the libraries -lampi -lmpe -lm -lX11 -lpmpi.
• Alternatively, you can use the -mpilog or -mpianim options to the mpicc or mpif77 commands.
Other topics: connecting several programs together
• MPI provides support for connecting separate message-passing programs together through the use of intercommunicators.
Exchanging data between programs
• Form an intercommunicator (MPI_INTERCOMM_CREATE)
• Send data:

MPI_Send( ..., 0, intercomm );
MPI_Recv( buf, ..., 0, intercomm );
MPI_Bcast( buf, ..., localcomm );

• More complex point-to-point operations can also be used.
Collective operations
• Use MPI_INTERCOMM_MERGE to merge the two groups of an intercommunicator into a single intracommunicator, on which collective operations can then be performed.
Conclusion
• So we learned:
• Point-to-point, collective, and asynchronous communication
• Modular programming techniques
• Data types
• MPICH2 basic compilation info
• Important and handy tools