Virtual Topologies Self Test with solution


Page 1

Virtual Topologies

Self Test with solution

Page 2

Self Test

1. When using MPI_Cart_create, if the Cartesian grid size is smaller than the number of processes available in old_comm, then:

a) an error results.

b) new_comm returns MPI_COMM_NULL for calling processes not used for the grid.

c) new_comm returns MPI_UNDEFINED for calling processes not used for the grid.

Page 3

Self Test

2. When using MPI_Cart_create, if the Cartesian grid size is larger than the number of processes available in old_comm, then:

a) an error results.

b) the Cartesian grid is automatically reduced to match the number of processes available in old_comm.

c) more processes are added to match the requested Cartesian grid size if possible; otherwise an error results.

Page 4

Self Test

3. After using MPI_Cart_create to generate a Cartesian grid whose size is smaller than the number of processes available in old_comm, an unconditional call to MPI_Cart_coords or MPI_Cart_rank (i.e., one made without regard to whether it is appropriate for the calling process) ends in error because:

a) calling processes not belonging to the group have been assigned the communicator MPI_UNDEFINED, which is not a valid communicator for MPI_Cart_coords or MPI_Cart_rank.

b) calling processes not belonging to the group have been assigned the communicator MPI_COMM_NULL, which is not a valid communicator for MPI_Cart_coords or MPI_Cart_rank.

c) the grid size does not match what is in old_comm.

Page 5

Self Test

4. When using MPI_Cart_rank to translate Cartesian coordinates into the equivalent rank, if some or all of the indices of the coordinates are outside of the defined range, then:

a) an error results.

b) an error results unless periodicity is imposed in all dimensions.

c) an error results unless each of the out-of-range indices corresponds to a periodic dimension.

Page 6

Self Test

5. With MPI_Cart_shift(comm, direction, displ, source, dest), if the calling process is the first or the last entry along the shift direction and displ is greater than 0, then:

a) an error results.

b) MPI_Cart_shift returns source and dest if periodicity is imposed along the shift direction. Otherwise, source and/or dest return MPI_PROC_NULL.

c) an error results unless periodicity is imposed along the shift direction.

Page 7

Self Test

6. MPI_Cart_sub can be used to subdivide a Cartesian grid into subgrids of lower dimension. These subgrids

a) have dimensions one lower than the original grid.

b) require attributes such as periodicity to be reimposed.

c) possess the appropriate attributes of the original Cartesian grid.

Page 8

Answer

1. B

2. A

3. B

4. C

5. B

6. C
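As a quick check of answers 1, 3, 4, and 5, the following minimal sketch (not part of the original slides; all names are illustrative) can be launched with more than four processes, e.g. mpirun -np 6, so that some ranks fall outside the 4-slot grid:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, grid_rank, left, right;
    int dim[1] = {4};     /* grid smaller than the number of launched processes */
    int period[1] = {1};  /* periodic, so out-of-range coordinates and end shifts are legal */
    int coords[1] = {5};  /* deliberately out of range; wraps to 1 because the dimension is periodic */
    MPI_Comm grid_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Cart_create(MPI_COMM_WORLD, 1, dim, period, 1, &grid_comm);

    if (grid_comm == MPI_COMM_NULL) {
        /* Answers 1 and 3: leftover processes get MPI_COMM_NULL and must not call the Cart routines */
        printf("Process %d is not in the grid\n", rank);
    } else {
        MPI_Cart_rank(grid_comm, coords, &grid_rank);    /* Answer 4: legal only because period[0] = 1 */
        MPI_Cart_shift(grid_comm, 0, 1, &left, &right);  /* Answer 5: defined at the ends because the ring wraps */
        printf("Process %d: coordinate 5 wraps to rank %d; left=%d right=%d\n", rank, grid_rank, left, right);
        MPI_Comm_free(&grid_comm);
    }
    MPI_Finalize();
    return 0;
}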

Page 9

Course Problem

• Description – The new problem still implements a parallel search of an integer array. The program should find all occurrences of a certain integer, which will be called the target. When a processor of a certain rank finds a target location, it should then calculate the average of:

• the target value

• an element from the processor with rank one higher (the "right" processor). The right processor should send the first element from its local array.

• an element from the processor with rank one less (the "left" processor). The left processor should send the first element from its local array.

Page 10

Course Problem

• For example, if processor 1 finds the target at index 33 in its local array, it should get from processors 0 (left) and 2 (right) the first element of their local arrays. These three numbers should then be averaged.

• In terms of right and left neighbors, you should visualize the four processors connected in a ring. That is, the left neighbor for P0 should be P3, and the right neighbor for P3 should be P0.

• Both the target location and the average should be written to an output file. As usual, the program should read both the target value and all the array elements from an input file.

Page 11

Course Problem

• Exercise – Modify your code from Chapter 7 to solve this latest version of the Course Problem using a virtual topology. First, create the topology (which should be called MPI_RING) in which the four processors are connected in a ring. Then, use the utility routines to determine which neighbors a given processor has, as in the sketch below.
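A minimal sketch of just those two steps (not part of the original slides; assuming exactly four processes, as the full solution on the following pages requires):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm MPI_RING;    /* the ring topology the exercise asks for */
    int dim[1] = {4};     /* four processors in one row */
    int period[1] = {1};  /* wrap around: P0's left is P3, P3's right is P0 */
    int left, right;

    MPI_Init(&argc, &argv);
    MPI_Cart_create(MPI_COMM_WORLD, 1, dim, period, 1, &MPI_RING);
    MPI_Cart_shift(MPI_RING, 0, 1, &left, &right);  /* utility routine: ranks of the two neighbors */
    MPI_Finalize();
    return 0;
}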

Page 12

Solution

• Note: The sections of code shown in red in the original slides are the new code in which the MPI_RING virtual topology is created (the topology declarations and the MPI_Cart_create call). The section of code shown in blue is where the new topology is used by each processor to determine its left and right neighbors (the MPI_Cart_shift call).

Page 13

Solution

#include <stdio.h>
#include <mpi.h>

#define N 300

int main(int argc, char **argv)
{
    int i, target;              /* local variables */
    int b[N], a[N/4];           /* a is the name of the array each slave searches */
    int rank, size, err;
    MPI_Status status;
    int end_cnt;
    FILE *sourceFile;
    FILE *destinationFile;

    int left, right;            /* ranks of the left and right processes */
    int lx, rx;                 /* store the left and right elements */

    int gi;                     /* global index */
    float ave;                  /* average */

Page 14

Solution

    int blocklengths[2] = {1, 1};                  /* initialize blocklengths array */
    MPI_Datatype types[2] = {MPI_INT, MPI_FLOAT};  /* initialize types array */
    MPI_Datatype MPI_Pair;
    MPI_Aint displacements[2];

    MPI_Comm MPI_RING;          /* Name of the new Cartesian topology */
    int dim[1];                 /* Number of processes in each dimension */
    int period[1], reorder;     /* Logical array to control whether the dimension should "wrap around" */
    int coord[1];               /* Coordinate of the processor in the new ring topology */

    err = MPI_Init(&argc, &argv);
    err = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    err = MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Initialize displacements array with memory addresses */
    /* (MPI_Get_address and MPI_Type_create_struct are the current names of the */
    /* deprecated MPI-1 routines MPI_Address and MPI_Type_struct) */
    err = MPI_Get_address(&gi, &displacements[0]);
    err = MPI_Get_address(&ave, &displacements[1]);
    /* This routine creates the new data type MPI_Pair */
    err = MPI_Type_create_struct(2, blocklengths, displacements, types, &MPI_Pair);
    err = MPI_Type_commit(&MPI_Pair);  /* This routine allows it to be used in communication */

Page 15

Solution

    if (size != 4) {
        printf("Error: You must use 4 processes to run this program.\n");
        return 1;
    }

    dim[0] = 4;      /* Four processors in the one row */
    period[0] = 1;   /* Have the row "wrap around" to make a ring */
    reorder = 1;
    /* Create the new ring Cartesian topology with a call to the following routine */
    err = MPI_Cart_create(MPI_COMM_WORLD, 1, dim, period, reorder, &MPI_RING);

    if (rank == 0) {
        /* File b.data has the target value on the first line */
        /* The remaining 300 lines of b.data have the values for the b array */
        sourceFile = fopen("b.data", "r");

        /* File found.data will contain the indices of b where the target is */
        destinationFile = fopen("found.data", "w");

Page 16

Solution

        if (sourceFile == NULL) {
            printf("Error: can't access b.data.\n");
            return 1;
        } else if (destinationFile == NULL) {
            printf("Error: can't create file for writing.\n");
            return 1;
        } else {
            /* Read in the target */
            fscanf(sourceFile, "%d", &target);
        }
    }

    /* Notice the broadcast is outside of the if; all processors must call it */
    err = MPI_Bcast(&target, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Read in the b array */
        for (i = 0; i < N; i++)
            fscanf(sourceFile, "%d", &b[i]);
    }

Page 17

Solution

    /* Again, the scatter is after the if; all processors must call it */
    err = MPI_Scatter(b, N/size, MPI_INT, a, N/size, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each processor easily determines its left and right neighbors */
    /* with the call to the following utility routine */
    err = MPI_Cart_shift(MPI_RING, 0, 1, &left, &right);

    if (rank == 0) {
        /* P0 sends the first element of its subarray a to its neighbors */
        err = MPI_Send(&a[0], 1, MPI_INT, left, 33, MPI_COMM_WORLD);
        err = MPI_Send(&a[0], 1, MPI_INT, right, 33, MPI_COMM_WORLD);

        /* P0 gets the first elements of its left and right processors' arrays */
        err = MPI_Recv(&lx, 1, MPI_INT, left, 33, MPI_COMM_WORLD, &status);
        err = MPI_Recv(&rx, 1, MPI_INT, right, 33, MPI_COMM_WORLD, &status);

        /* The master now searches the first fourth of the array for the target */
        for (i = 0; i < N/size; i++) {
            if (a[i] == target) {
                gi = rank*N/size + i + 1;
                ave = (target + lx + rx)/3.0;
                fprintf(destinationFile, "P %d, %d %f\n", rank, gi, ave);
            }
        }

Page 18

Solution

        end_cnt = 0;
        while (end_cnt != 3) {
            err = MPI_Recv(MPI_BOTTOM, 1, MPI_Pair, MPI_ANY_SOURCE, MPI_ANY_TAG,
                           MPI_COMM_WORLD, &status);

            if (status.MPI_TAG == 52)
                end_cnt++;          /* See Comment */
            else
                fprintf(destinationFile, "P %d, %d %f\n", status.MPI_SOURCE, gi, ave);
        }

        fclose(sourceFile);
        fclose(destinationFile);
    } else {
        /* Each slave sends the first element of its subarray a to its neighbors */
        err = MPI_Send(&a[0], 1, MPI_INT, left, 33, MPI_COMM_WORLD);
        err = MPI_Send(&a[0], 1, MPI_INT, right, 33, MPI_COMM_WORLD);

        /* Each slave gets the first elements of its left and right processors' arrays */
        err = MPI_Recv(&lx, 1, MPI_INT, left, 33, MPI_COMM_WORLD, &status);
        err = MPI_Recv(&rx, 1, MPI_INT, right, 33, MPI_COMM_WORLD, &status);

Page 19

Solution

        /* Search the local array a and output the target locations */
        for (i = 0; i < N/size; i++) {
            if (a[i] == target) {
                gi = rank*N/size + i + 1;   /* Convert the local index to a global index */
                ave = (target + lx + rx)/3.0;
                err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 19, MPI_COMM_WORLD);
            }
        }

        gi = target;    /* Both are fake values */
        ave = 3.45;     /* The point of this send is the "end" tag (see Chapter 4) */
        err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 52, MPI_COMM_WORLD);  /* See Comment */
    }

    err = MPI_Type_free(&MPI_Pair);
    err = MPI_Finalize();
    return 0;
}
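To build and run the full solution with a typical MPI toolchain (the source file name here is illustrative; the input file b.data must be in the working directory, and exactly 4 processes are required):

mpicc -o parallel_search parallel_search.c
mpirun -np 4 ./parallel_search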

Page 20

Solution

• The results obtained from running this code are in the file "found.data", which contains the following:

P 0, 62, -7.666667
P 2, 183, -7.666667
P 3, 271, 19.666666
P 3, 291, 19.666666
P 3, 296, 19.666666

• Notice that this new version of the code obtains the same results as the stencil version, as it should.

• If you want to confirm that these results are correct, run the parallel code shown above using the input file "b.data" from Chapter 2.