CSCE 5160 Parallel Processing, lecture notes, March 18, 2019 (csrl.cse.unt.edu)



CSCE5160 March 18, 2019 1


Problem 1: a). Speedup = T1/Tp = n^2 / [(n^2/p) + (n/p)*log p]

= (n^2*p) / [n^2 + n*log p] = p / [1 + (log p)/n]

Efficiency = Speedup/p = 1 / [1 + (log p)/n]

Exam 1: High = 80, Average = 73. Solutions posted.

b). Iso-efficiency: set W = K*To(W, p).

Tp = n^2/p + (n/p)*log p = (n^2 + n*log p)/p = (W + To(W, p))/p, so To(W, p) = n*log p.

Iso-efficiency function: W = K*n*log p = K*W^(1/2)*log p, which gives W = K^2*log^2 p.

CSCE5160 March 18, 2019 2

c). We need to set T1 = Tp:

n^2 = x^2/p + (x/p)*log p, where x^2 is the new (larger) problem we are trying to solve in the same time.

Rearranging gives x^2 + x*log p - p*n^2 = 0, so by the quadratic formula
x = [-log p + sqrt(log^2 p + 4*p*n^2)] / 2, and you can simplify this as x = O(p^(1/2)*n).

That is, we need x^2 = O(p*n^2).

2. a). Each processor computes partial results, and we need to perform a reduction along each ROW of processors.

b). Each processor receives an (n/p^(1/2)) x (n/p^(1/2)) block of A and all n elements of B.

The cost of an all-to-one reduction is (ts + tw*m)*log p. Since only the p^(1/2) processors in each row participate, and each contributes m = n/p^(1/2) elements:

Communication cost = (ts + tw*(n/p^(1/2)))*log(p^(1/2))

Total cost = tc*(n^2/p) + (ts + tw*(n/p^(1/2)))*log(p^(1/2))


CSCE5160 March 18, 2019 3


3. This is fairly easy: apply each process's color and key to find its new group and its rank within that group (a sketch of the call that produces this split follows the table).

Original Rank   Color   Key   New Group   New Rank
0               0       7     0           1
1               1       1     1           0
2               0       0     0           0
3               1       3     1           1

4. We can use either row striping or checkerboard partitioning.
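The Color/Key/New Rank behavior in this table is what MPI's MPI_Comm_split produces. A minimal sketch, with the colors and keys from the table hardwired for four processes (run with mpirun -np 4; the array names are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int colors[4] = {0, 1, 0, 1};   /* Color column of the table above */
    int keys[4]   = {7, 1, 0, 3};   /* Key column of the table above */
    int rank, newrank;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* processes with the same color land in the same new communicator;
       ranks within it are assigned in increasing key order */
    MPI_Comm_split(MPI_COMM_WORLD, colors[rank], keys[rank], &newcomm);
    MPI_Comm_rank(newcomm, &newrank);
    printf("old rank %d -> group %d, new rank %d\n", rank, colors[rank], newrank);

    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}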

If we use row striping, each thread compares its n/p rows to the corresponding n/p columns to see if they are equal. If so, it increments a counter (if not, it adds zero to the counter).

If we use checkerboard, each thread compares its submatrix with the submatrix that is diagonally opposite. Again, add 1 to the counter if they are equal, zero otherwise.

CSCE5160 March 18, 2019 4


REVIEW: Solving linear equations with the Gaussian elimination technique

Convert A into upper-triangular form.

Division step (i-th row): divide all coefficients to the right of the diagonal by the diagonal element.
Elimination step (rows i+1 onward): subtract a multiple of the i-th row to create zeros to the left of the diagonal.

Parallelization: row striping
  overlap communication with the division step
  if p is smaller than n: use cyclic row striping

We also talked about column striping and checkerboard partitioning; there will be different amounts of communication based on the partitioning.

Back to the counter of problem 4:
a). CREW: the counter increments proceed sequentially (using locks). Total complexity = n^2/p + p

b). CRCW: the counter increments can be done concurrently. Total complexity = n^2/p + 1
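As a concrete illustration of problem 4, a minimal OpenMP sketch of the row-striped symmetry test (the names n and a are assumptions; the reduction clause plays the role of the concurrent counter updates discussed above):

/* returns the number of rows that match the corresponding columns;
   the matrix is symmetric iff the result equals n */
int symmetric_rows(int n, double a[n][n])
{
    int count = 0;
    #pragma omp parallel for reduction(+:count) schedule(static)
    for (int i = 0; i < n; i++) {           /* each thread gets n/p rows */
        int equal = 1;
        for (int j = 0; j < n; j++)
            if (a[i][j] != a[j][i]) { equal = 0; break; }
        count += equal;                     /* add 1 if row i equals column i, else 0 */
    }
    return count;
}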


CSCE5160 March 18, 2019 5

How about using column striping?

We need communication during the division step, but only one element is communicated. No communication is needed for the elimination step.

How about using checkerboard striping? Now we need communication during both the division and elimination steps.

OpenMP Version

#pragma omp parallel private(i, j, k)
for (k = 0; k < n; k++) {
    #pragma omp for
    for (j = k+1; j < n; j++)
        a[k][j] = a[k][j] / a[k][k];        /* division step */
    /* implicit barrier at the end of the omp for */
    #pragma omp single
    {
        y[k] = b[k] / a[k][k];
        a[k][k] = 1;
    }   /* implicit barrier after single: elimination must wait for row k */
    #pragma omp for
    for (i = k+1; i < n; i++) {
        for (j = k+1; j < n; j++)
            a[i][j] = a[i][j] - a[i][k] * a[k][j];   /* elimination step */
        b[i] = b[i] - a[i][k] * y[k];
        a[i][k] = 0;
    }   /* implicit barrier: the next k iteration must wait */
}

CSCE5160 March 18, 2019 6

Schedule clause

Static: loop iterations are assigned at the very beginning of program execution.
If a chunk size is specified, each thread receives that many consecutive iterations. You can specify a chunk size of 1 to achieve cyclic assignment: #pragma omp parallel for … schedule(static, 1). If you omit the chunk size, the system divides the number of iterations by the number of threads.

Dynamic: iterations are assigned, either in chunks or one at a time, only when a thread completes its previously assigned iterations. For example, suppose we have 100 iterations and 5 threads. We can initially assign each thread 5 iterations, and assign an additional 5 iterations whenever a thread completes. Good for load balancing.

Guided: similar to dynamic, but the system first assigns 1/2 of the work to the threads, then one half of the remaining work (1/4), and so on.

Auto: the run-time system decides how to schedule iterations. We found that thread zero tends to be assigned more work.

User-defined: more involved, but you can write your own scheduling methods.
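A small sketch exercising these clauses (the loop bodies are placeholders):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* cyclic assignment: chunk size 1, iterations dealt out like cards */
    #pragma omp parallel for schedule(static, 1)
    for (int i = 0; i < 16; i++)
        printf("iteration %2d -> thread %d\n", i, omp_get_thread_num());

    /* dynamic: a thread grabs 5 more iterations each time it finishes its 5 */
    #pragma omp parallel for schedule(dynamic, 5)
    for (int i = 0; i < 100; i++)
        ;   /* work */

    /* guided: early chunks are large, later chunks shrink */
    #pragma omp parallel for schedule(guided)
    for (int i = 0; i < 100; i++)
        ;   /* work */

    return 0;
}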


CSCE5160 March 18, 2019 7


Sorting algorithms (Chapter 9)

What is the fastest sequential algorithm for sorting lists? Quicksort: O(n log n) on average.

This is a recursive function.

Logically, we need to divide the list of numbers to be sorted into two lists. All numbers in one list are smaller than the numbers in the other list.

Note the two sub-lists are not sorted; we only know that all numbers in one list are smaller than those in the other. So if we sort each list, a simple concatenation (merge) gives the full sorted list.

How do we obtain the two lists from a single list?

CSCE5160 March 18, 2019 8

Find a pivot element. Start two scans of the list, one from top to bottom and another from bottom to top. The top scan continues until an element whose value is greater than the pivot is found. Likewise, the bottom scan continues until an element whose value is smaller than the pivot is found. Then swap the two elements.

Stop when the two scans meet.

Pivot = 26

Initial   Two lists   Four lists   Seven lists
26        19          11           1
5         5           5            5
37        15          1            11
1         1           15           15
61        11          19           19
11        61          26           26
59        59          59           59
15        37          37           37
48        48          48           48
19        26          61           61

Two lists: (19, 5, 15, 1, 11), (61, 59, 37, 48, 26)
Four lists: (11, 5, 1), (15, 19), (26, 59, 37, 48), (61)
Seven lists: (1, 5), (11), (15), (19), (26), (59, 37, 48), (61)
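A minimal C sketch of the two-scan partition just described (a Hoare-style partition; the interface is an assumption):

/* the two scans move toward each other, swapping out-of-place pairs;
   returns the point where the scans meet */
int Partition(int list[], int start, int end)
{
    int pivot = list[start];
    int lo = start - 1, hi = end + 1;
    for (;;) {
        do { lo++; } while (list[lo] < pivot);  /* top scan: find an element >= pivot */
        do { hi--; } while (list[hi] > pivot);  /* bottom scan: find an element <= pivot */
        if (lo >= hi) return hi;                /* the scans met: stop */
        int t = list[lo]; list[lo] = list[hi]; list[hi] = t;
    }
}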


CSCE5160 March 18, 2019 9

Quicksort(list, start, end) {
    Partition(list, start, end, &pivot_index);  /* partitions into two lists */
    Quicksort(list, start, pivot_index - 1);    /* sorts first list */
    Quicksort(list, pivot_index + 1, end);      /* sorts second list */
}

How do we parallelize this? In shared memory, the list of elements is stored globally in common memory.

We can "recursively" create 2 new threads to sort the sub-lists. But what is the complexity of this algorithm?

Let us assume that every list splits into 2 equal-sized sub-lists (at each recursive call). Each thread needs to scan its sub-list;

the complexity of the scan is O(size of the list).

The complexity of the parallel algorithm = n + n/2 + n/4 + … + 2 = O(2*n) = O(n)

Compare this with the sequential algorithm's complexity of O(n log n).

Not cost optimal, since the cost (processors x time) of the parallel algorithm is O(n^2).
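A minimal OpenMP sketch of this shared-memory recursion, using tasks rather than raw threads (the cutoff of 1000 elements is an assumption to bound task-creation overhead; Partition is the two-scan routine sketched earlier, so the first sub-list here is list[start..p]):

void Quicksort(int list[], int start, int end)
{
    if (start >= end) return;
    int p = Partition(list, start, end);
    /* sort the first sub-list as a separate task, the second inline */
    #pragma omp task shared(list) if (p - start > 1000)
    Quicksort(list, start, p);
    Quicksort(list, p + 1, end);
    #pragma omp taskwait
}

void sort(int list[], int n)
{
    #pragma omp parallel
    #pragma omp single          /* one thread starts the recursion */
    Quicksort(list, 0, n - 1);
}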

CSCE5160 March 18, 2019 10


If we can parallelize the partitioning part and achieve O(1) complexity, then we can obtain a cost-optimal algorithm for quicksort.

Is it possible to obtain an O(1) algorithm for the partitioning phase?

Let us see how we can partition lists (as per quicksort) in parallel. The idea is to construct a binary tree for the purpose of partitioning. At any stage of quicksort, all elements smaller than the pivot are in the left sub-tree and all elements greater than the pivot are in the right sub-tree.

Let us assume each processor is responsible for one element in the list.

We will assume CRCW with arbitrary write; that is, if multiple processors (threads) try to write to a memory location concurrently, exactly one write will succeed.


CSCE5160 March 18, 2019 11


(from page 403, algorithm 9.6) – this is for shared memory only

for (each processor i) {
    root = i;                      /* concurrent write: only one processor will succeed */
    parent_i = root;               /* the root is the initial parent for all processors */
    left[i] = right[i] = n + 1;    /* n+1 marks "no child"; you could use infinity if you want */
}

for (each processor i != root) {
    repeat {
        if (A[i] < A[parent_i]) or ((A[i] == A[parent_i]) and (i < parent_i)) {
            left[parent_i] = i;              /* only one write will succeed */
            if (i == left[parent_i]) exit;   /* the succeeding processor quits */
            else parent_i = left[parent_i];  /* failed processors descend and try again */
        } else {
            right[parent_i] = i;             /* only one write will succeed */
            if (i == right[parent_i]) exit;  /* the succeeding processor quits */
            else parent_i = right[parent_i]; /* failed processors descend and try again */
        }
    }
}

CSCE5160 March 18, 2019 12



CSCE5160 March 18, 2019 13


What is the complexity of the partitioning algorithm?

If we use n processors (as we did, with each processor responsible for one element), the partitioning takes O(1).

Note that the algorithm given does not actually re-arrange the elements into two lists as the sequential version of quicksort does; we simply identify pivot elements, which become the roots at each level of the tree.

Since the partitioning takes O(1) and the recursive portion of quicksort takes O(log n), the parallel version takes O(log n): cost optimal.

Note that the tree construction results in a sorted list: an in-order traversal of the tree yields the elements in sorted order.

But this is possible only if we can implement concurrent writes without using mutual exclusion (say, with a read-modify-write in a single instruction).

We can use OpenMP's atomic concept.

CSCE5160 March 18, 2019 14

for (each processor i) {
    #pragma omp atomic write
    root = i;                      /* only one processor's write takes effect */
    parent_i = root;               /* the root is the initial parent for all processors */
    left[i] = right[i] = n + 1;
}

for (each processor i != root) {
    repeat {
        if (A[i] < A[parent_i]) or ((A[i] == A[parent_i]) and (i < parent_i)) {
            #pragma omp atomic write
            left[parent_i] = i;              /* only one will succeed */
            if (i == left[parent_i]) exit;   /* the succeeding processor quits */
            else parent_i = left[parent_i];  /* failed processors continue */
        } else {
            #pragma omp atomic write
            right[parent_i] = i;             /* only one will succeed */
            if (i == right[parent_i]) exit;  /* the succeeding processor quits */
            else parent_i = right[parent_i]; /* failed processors continue */
        }
    }
}

Note that most implementations of atomic serialize the action, so if you combine all the atomic operations, they take n + n/2 + n/4 + … + 2 = O(2*n) = O(n).
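A plain atomic write cannot by itself tell the winning processor from the losers; in practice the arbitrary-concurrent-write step is realized with a compare-and-swap, i.e., the single-instruction read-modify-write mentioned earlier. A minimal sketch using GCC's __sync_val_compare_and_swap builtin (empty plays the role of the n+1 "no child yet" marker; the function name is illustrative):

/* processor i tries to claim the left-child slot of node parent;
   exactly one CAS succeeds, and every caller learns the winner's id */
int claim_left(int *left, int parent, int i, int empty)
{
    int prev = __sync_val_compare_and_swap(&left[parent], empty, i);
    return (prev == empty) ? i : prev;   /* the id now stored in the slot */
}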


CSCE5160 March 18, 2019 15


Consider implementing quicksort on a hypercube (that is, using MPI).

In a hypercube, we will take advantage of the property that any hypercube with 2^d processors can be divided into two equal-sized hypercubes with 2^(d-1) nodes each, and each processor in one sub-cube is connected to one processor in the other sub-cube.

At each step of the quick sort, we will divide our hypercube into 2 sub-cubes.

The algorithm works as follows. Suppose we have p = 2^d processors and a list with n elements. We assign n/p elements to each processor.

At each step i we select a pivot and broadcast it to all processors. Then all neighboring processors in dimension i (those whose labels differ in the i-th most significant bit) exchange their elements: one processor keeps all elements smaller than the pivot, and the other keeps all elements greater than the pivot.

At the end, if we read the elements based on the processor number, we get the sorted list.
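A sketch of one split step in MPI (simplified: the pivot is chosen and broadcast by process 0 of the whole machine, whereas a full implementation would use a sub-communicator per sub-cube; the buffer names and the assumption that out is large enough are illustrative):

#include <mpi.h>
#include <string.h>

/* buf holds this process's n elements; step runs from 1 to d;
   returns the new local element count, with the kept elements in out */
int split_step(int *buf, int n, int step, int d, int *out)
{
    int rank, pivot = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0 && n > 0) pivot = buf[n / 2];        /* naive pivot choice */
    MPI_Bcast(&pivot, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* broadcast the pivot */

    int lo = 0;                    /* partition locally: <= pivot moves to the front */
    for (int i = 0; i < n; i++)
        if (buf[i] <= pivot) { int t = buf[i]; buf[i] = buf[lo]; buf[lo++] = t; }

    int partner  = rank ^ (1 << (d - step));           /* neighbor in this dimension */
    int keep_low = ((rank >> (d - step)) & 1) == 0;    /* low sub-cube keeps <= pivot */

    int *send = keep_low ? buf + lo : buf;
    int nsend = keep_low ? n - lo   : lo;
    int nkeep = keep_low ? lo       : n - lo;
    int nrecv;
    MPI_Sendrecv(&nsend, 1, MPI_INT, partner, 0,
                 &nrecv, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    memcpy(out, keep_low ? buf : buf + lo, nkeep * sizeof(int));
    MPI_Sendrecv(send, nsend, MPI_INT, partner, 1,
                 out + nkeep, nrecv, MPI_INT, partner, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return nkeep + nrecv;
}

The example on the next slide traces these exchanges for p = 8.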

CSCE5160 March 18, 2019 16


Example: p = 8 processors (d = 3) with labels 000 through 111, and 16 elements.

Initial distribution:
000: 11, 15    001: 10, 20    010: 3, 12    011: 4, 9
100: 1, 22     101: 7, 8      110: 13, 18   111: 2, 14

Step 1: Pivot = 12, dimension 1 (most significant bit). Each processor exchanges elements with the neighbor whose label differs in the most significant bit:

000: 1, 11     001: 7, 8, 10  010: 3, 12    011: 2, 4, 9
100: 15, 22    101: 20        110: 13, 18   111: 14

Now we have two sub-cubes; all processors in one sub-cube have elements smaller than or equal to the pivot, and the other sub-cube has all elements greater than the pivot.


CSCE5160 March 18, 2019 17


Step 2: Pivot = 7 in the lower sub-cube, pivot = 15 in the upper sub-cube; exchanges are along dimension 2 (the middle bit).

Before: 000: 1, 11    001: 7, 8, 10   010: 3, 12    011: 2, 4, 9
        100: 15, 22   101: 20         110: 13, 18   111: 14

After:  000: 1, 3     001: 2, 4, 7    010: 11, 12   011: 8, 9, 10
        100: 13, 15   101: 14         110: 18, 22   111: 20

Now we have 4 sub-cubes, each with two processors.

CSCE5160 March 18, 2019 18

Step 3: Pivots 4, 9, 14, and 18 in the four sub-cubes; exchanges are along dimension 3 (the least significant bit).

Before: 000: 1, 3     001: 2, 4, 7   010: 11, 12   011: 8, 9, 10
        100: 13, 15   101: 14        110: 18, 22   111: 20

After:  000: 1, 2, 3, 4   001: 7     010: 8, 9     011: 10, 11, 12
        100: 13, 14       101: 15    110: 18       111: 20, 22

Now we have the sorted list: read the elements from the processors according to their binary labels.


CSCE5160 March 18, 2019 19

Selecting a pivot at each step is critical to performance; otherwise, some processors may be idle. Consider the case where we selected the largest number as the pivot in the first stage: half of the processors would be idle after that step. So how do we select a pivot?

Complexity? The algorithm performs d iterations (where d is the dimension of the hypercube). In each iteration we have (a) pivot selection, (b) broadcast of the pivot, and (c) splitting of the numbers.

In the special case where the numbers are randomly distributed over a range and each processor is assigned n/p numbers, the median of any n/p numbers is close to the median of all n numbers.

If the numbers are distributed uniformly, we can take the local median as our pivot at each step. How do we find a median? Can we use a reduction?

CSCE5160 March 18, 2019 20


Each processor must divide its list into two lists. Let us assume that each local list is first sorted, so we have a computation cost of (n/p)*log(n/p) for sorting the local lists.

Then dividing the local list into two lists is simple (one unit of computation).

Each processor then exchanges these lists with its neighbor in the i-th dimension (in the i-th iteration); this involves communication.

Communication: broadcasting the pivot in a hypercube costs (ts + tw*m)*log p, with m = 1 (only one element is sent). Here p is the number of processors participating, and it changes each iteration: p, p/2, p/4, … Note that log p = d, log(p/2) = d - 1, etc.

Broadcast cost in step i = O(d - (i - 1))

So the total cost over all d = log p steps = sum over i = 1..d of (d - (i - 1)) = d*(d + 1)/2 = O(log^2 p)


CSCE5160 March 18, 2019 21

This exchange costs O(n/p) per iteration, since each processor has at most n/p elements and all of them may have to be exchanged. And since we have d = log p iterations, the total communication cost for this step
= O((n/p)*log p)

So the total complexity = O((n/p)*log(n/p)) + O((n/p)*log p) + O(log^2 p)

If we ignore the O(log^2 p) term, we have a cost-optimal algorithm.

Merge Sort
Let us assume each processor sorts its local numbers; this takes (n/p)*log(n/p).

We can then merge these sorted lists using a tree-like merge (merging two lists at a time). Merging two lists of length n/p takes O(n/p).

The tree-like merge takes (n/p) + 2*(n/p) + … + log p * (n/p) = O((n/p)*log^2 p)

Total complexity = O((n/p)*log(n/p) + (n/p)*log^2 p) = O((n/p)*log(n/p)) for sufficiently large n.

Here we did not consider the actual communication costs.

CSCE5160 March 18, 2019 22

Parallel Sorting by Regular Sampling (the PSRS algorithm)

• Each process sorts its share of the elements
• Each process selects p regular samples from its sorted list (p is the number of processes)
• One process gathers and sorts the samples, chooses p - 1 pivot values from the sorted sample list, and broadcasts these pivot values
• Each process partitions its list into p pieces, using the pivot values
• Each process sends its partitions to the other processes
• Each process merges its partitions

Initial lists:
P0: 75, 91, 15, 64, 21, 8, 88, 54
P1: 50, 12, 47, 72, 65, 54, 66, 22
P2: 83, 66, 67, 0, 70, 98, 99, 82

The number of processes does not have to be a power of 2, unlike the hypercube algorithm.
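A sketch of the sampling and pivot-selection steps in MPI (the sampling positions and helper names are assumptions; the pivot indices p-1, 2p-1, … reproduce the pivots 50 and 66 chosen in the walk-through that follows):

#include <mpi.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* local[0..nlocal-1] is already sorted; fills pivots[0..p-2] on every process */
void choose_pivots(const int *local, int nlocal, int p, int rank, int *pivots)
{
    int *samples = malloc(p * sizeof(int));
    int *all = (rank == 0) ? malloc(p * p * sizeof(int)) : NULL;

    for (int k = 0; k < p; k++)               /* p regular samples per process */
        samples[k] = local[k * nlocal / p];

    MPI_Gather(samples, p, MPI_INT, all, p, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        qsort(all, p * p, sizeof(int), cmp_int);   /* sort the p*p samples */
        for (int k = 1; k < p; k++)
            pivots[k - 1] = all[k * p - 1];        /* choose p - 1 pivots */
        free(all);
    }
    MPI_Bcast(pivots, p - 1, MPI_INT, 0, MPI_COMM_WORLD);
    free(samples);
}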


CSCE5160 March 18, 2019 23


Step 1: Each process sorts its list using quicksort.
P0: 8, 15, 21, 54, 64, 75, 88, 91
P1: 12, 22, 47, 50, 54, 65, 66, 72
P2: 0, 66, 67, 70, 82, 83, 98, 99

Step 2: Each process chooses p regular samples from its sorted list.
P0: 15, 54, 75    P1: 22, 50, 65    P2: 66, 70, 83

CSCE5160 March 18, 2019 24


One process collects the p^2 regular samples:
15, 54, 75, 22, 50, 65, 66, 70, 83

One process sorts the p^2 regular samples:
15, 22, 50, 54, 65, 66, 70, 75, 83


CSCE5160 March 18, 2019 25


One process chooses p - 1 pivot values from the sorted sample list: 50 and 66.

One process broadcasts the p - 1 pivot values to all processes.

CSCE5160 March 18, 2019 26


Each process divides its list into p pieces, based on the pivot values (50 and 66):
P0: (8, 15, 21) (54, 64) (75, 88, 91)
P1: (12, 22, 47, 50) (54, 65, 66) (72)
P2: (0) (66) (67, 70, 82, 83, 98, 99)

Each process sends its partitions to the correct destination process (first partition to P0, second to P1, and third to P2):
P0 receives: 8, 15, 21 | 12, 22, 47, 50 | 0
P1 receives: 54, 64 | 54, 65, 66 | 66
P2 receives: 75, 88, 91 | 72 | 67, 70, 82, 83, 98, 99


CSCE5160 March 18, 2019 27


Each process merges its p partitions.
P0: 0, 8, 12, 15, 21, 22, 47, 50
P1: 54, 54, 64, 65, 66, 66
P2: 67, 70, 72, 75, 82, 83, 88, 91, 98, 99

Complexity?

Same as hypercube quicksort, but we are more likely to have a balanced load.

We now look at another common sequential algorithm and see if we can parallelize it.