CS 240A: Solving Ax = b in parallel


Page 1: CS 240A:  Solving Ax = b in parallel

CS 240A: Solving Ax = b in parallel

° Dense A: Gaussian elimination with partial pivoting

• Same flavor as matrix * matrix, but more complicated

° Sparse A: Iterative methods – Conjugate gradient etc.

• Sparse matrix times dense vector

° Sparse A: Gaussian elimination – Cholesky, LU, etc.

• Graph algorithms

° Sparse A: Preconditioned iterative methods and multigrid

• Mixture of lots of things

Page 2: CS 240A:  Solving Ax = b in parallel


Dense Linear Algebra (Excerpts)

James Demmel

http://www.cs.berkeley.edu/~demmel/cs267_221001.ppt

Page 3: CS 240A:  Solving Ax = b in parallel


Motivation

° 3 Basic Linear Algebra Problems

• Linear Equations: Solve Ax = b for x

• Least Squares: Find x that minimizes ∑ᵢ rᵢ² where r = Ax - b

• Eigenvalues: Find λ and x where Ax = λx

• Lots of variations depending on structure of A (e.g., symmetry)

° Why dense A, as opposed to sparse A?

• Aren’t “most” large matrices sparse?

• Dense algorithms easier to understand

• Some applications yield large dense matrices

- Ax=b: Computational Electromagnetics

- Ax = λx: Quantum Chemistry

• Benchmarking

- “How fast is your computer?” = “How fast can you solve dense Ax=b?”

• Large sparse matrix algorithms often yield smaller (but still large) dense problems

Page 4: CS 240A:  Solving Ax = b in parallel


Review of Gaussian Elimination (GE) for solving Ax=b

° Add multiples of each row to later rows to make A upper triangular

° Solve resulting triangular system Ux = c by substitution

for i = 1 to n-1              … for each column i, zero it out below the
                              … diagonal by adding multiples of row i to later rows
    for j = i+1 to n          … for each row j below row i
        for k = i to n        … add a multiple of row i to row j
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)
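A runnable rendition of this loop nest in Python/NumPy, a sketch of mine rather than part of the slides. One subtlety: the multiplier A(j,i)/A(i,i) must be read before entry (j,i) is overwritten at k = i; the refinements on the next slides make this explicit by hoisting it out of the k loop.

import numpy as np

def ge_naive(A):
    # Gaussian elimination without pivoting; returns the eliminated matrix,
    # whose upper triangle is U.
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for i in range(n - 1):              # for each column i
        for j in range(i + 1, n):       # for each row j below row i
            m = U[j, i] / U[i, i]       # read the multiplier once, up front
            for k in range(i, n):       # add a multiple of row i to row j
                U[j, k] -= m * U[i, k]
    return U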

Page 5: CS 240A:  Solving Ax = b in parallel


Refine GE Algorithm (1)

° Initial Version

for i = 1 to n-1              … for each column i, zero it out below the diagonal
    for j = i+1 to n          … for each row j below row i
        for k = i to n        … add a multiple of row i to row j
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

° Remove computation of constant A(j,i)/A(i,i) from inner loop

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)

Page 6: CS 240A:  Solving Ax = b in parallel


Refine GE Algorithm (2)

° Last version

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)

° Don’t compute what we already know: zeros below diagonal in column i

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

Page 7: CS 240A:  Solving Ax = b in parallel


Refine GE Algorithm (3)

° Last version

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

° Store multipliers m below diagonal in zeroed entries for later use

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

Page 8: CS 240A:  Solving Ax = b in parallel


Refine GE Algorithm (4)

° Last version

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

° Split Loop

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

Page 9: CS 240A:  Solving Ax = b in parallel


Refine GE Algorithm (5)

° Last version

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

° Express using matrix operations (BLAS)

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) * ( 1 / A(i,i) )
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
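The same column sweep written with NumPy slicing, one statement per BLAS call; a sketch of the idea, not LAPACK itself:

import numpy as np

def ge_blas(A):
    # Per column: a BLAS1 scale of the subcolumn, then a BLAS2 rank-1
    # update of the trailing submatrix.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 1):
        A[i+1:, i] *= 1.0 / A[i, i]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    return A    # multipliers below the diagonal, U on and above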

Page 10: CS 240A:  Solving Ax = b in parallel


What GE really computes

° Call the strictly lower triangular matrix of multipliers M, and let L = I+M

° Call the upper triangle of the final matrix U

° Lemma (LU Factorization): If the above algorithm terminates (does not divide by zero) then A = L*U

° Solving A*x=b using GE

• Factorize A = L*U using GE (cost = 2/3 n³ flops)

• Solve L*y = b for y, using substitution (cost = n² flops)

• Solve U*x = y for x, using substitution (cost = n² flops)

° Thus A*x = (L*U)*x = L*(U*x) = L*y = b as desired

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
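A sketch of the whole solve (function name mine): factor A in place so the multipliers of L sit below the diagonal and U on and above, then run the two substitution sweeps.

import numpy as np

def solve_via_lu(A, b):
    LU = np.array(A, dtype=float)
    n = LU.shape[0]
    for i in range(n - 1):                                   # factor: 2/3 n³ flops
        LU[i+1:, i] /= LU[i, i]
        LU[i+1:, i+1:] -= np.outer(LU[i+1:, i], LU[i, i+1:])
    y = np.array(b, dtype=float)
    for i in range(n):                                       # L*y = b: n² flops
        y[i] -= LU[i, :i] @ y[:i]                            # L has unit diagonal
    for i in range(n - 1, -1, -1):                           # U*x = y: n² flops
        y[i] = (y[i] - LU[i, i+1:] @ y[i+1:]) / LU[i, i]
    return y

On a well-conditioned test matrix this agrees with np.linalg.solve(A, b).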

Page 11: CS 240A:  Solving Ax = b in parallel


Problems with basic GE algorithm

° What if some A(i,i) is zero? Or very small?

• Result may not exist, or be “unstable”, so need to pivot

° Current computation all BLAS 1 or BLAS 2, but we know that BLAS 3 (matrix multiply) is fastest (earlier lectures…)

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)      … BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)       … BLAS 2 (rank-1 update)
                      - A(i+1:n, i) * A(i, i+1:n)

[Performance chart: Peak, BLAS 3, BLAS 2, BLAS 1, from fastest to slowest]

Page 12: CS 240A:  Solving Ax = b in parallel


Pivoting in Gaussian Elimination

° A = [ 0 1 ; 1 0 ] fails completely, even though A is “easy”

° Illustrate problems in 3-decimal-digit arithmetic:
A = [ 1e-4 1 ; 1 1 ] and b = [ 1 ; 2 ]; correct answer to 3 places is x = [ 1 ; 1 ]

° Result of LU decomposition is

L = [ 1 0 ; fl(1/1e-4) 1 ] = [ 1 0 ; 1e4 1 ]             … no roundoff error yet

U = [ 1e-4 1 ; 0 fl(1 - 1e4*1) ] = [ 1e-4 1 ; 0 -1e4 ]   … error in 4th decimal place

Check: L*U = [ 1e-4 1 ; 1 0 ]                            … (2,2) entry entirely wrong

° Algorithm “forgets” the (2,2) entry: it gets the same L and U for all |A(2,2)| < 5

° Numerical instability: computed solution x totally inaccurate

° Cure: Pivot (swap rows of A) so entries of L and U are bounded
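The same failure is reproducible in IEEE double precision; since doubles carry about 16 digits rather than 3, the sketch below shrinks the pivot to 1e-20 (my adaptation, not the slide's numbers) so that fl(1 - m*1) again "forgets" the (2,2) entry.

import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])            # true solution is very nearly x = [1, 1]

m = A[1, 0] / A[0, 0]               # multiplier 1e20: no pivoting
u22 = A[1, 1] - m * A[0, 1]         # fl(1 - 1e20) = -1e20, the 1 is lost
x2 = (b[1] - m * b[0]) / u22        # = 1.0
x1 = (b[0] - 1.0 * x2) / A[0, 0]    # = 0.0, totally inaccurate (should be ~1)
print(x1, x2)                       # 0.0 1.0
print(np.linalg.solve(A, b))        # [1. 1.]  (LAPACK pivots, so it is fine)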

Page 13: CS 240A:  Solving Ax = b in parallel


Gaussian Elimination with Partial Pivoting (GEPP)

° Partial Pivoting: swap rows so that each multiplier |L(j,i)| = |A(j,i)/A(i,i)| <= 1

for i = 1 to n-1
    find and record k where |A(k,i)| = max{i <= j <= n} |A(j,i)|
        … i.e., the largest entry in the rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)      … each quotient lies in [-1, 1]
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

° Lemma: This algorithm computes A = P*L*U, where P is a permutation matrix

° Since each entry |L(j,i)| <= 1, this algorithm is considered numerically stable

° For details see LAPACK code at www.netlib.org/lapack/single/sgetf2.f
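A runnable NumPy version of this loop, a sketch of mine that raises an error where the slide exits with a warning:

import numpy as np

def gepp(A):
    # Returns (LU, piv) with A[piv] = L*U; L (unit diagonal, entries in
    # [-1, 1]) is stored below the diagonal of LU, U on and above.
    LU = np.array(A, dtype=float)
    n = LU.shape[0]
    piv = np.arange(n)
    for i in range(n - 1):
        k = i + np.argmax(np.abs(LU[i:, i]))    # largest entry in rest of column i
        if LU[k, i] == 0.0:
            raise ValueError("A is singular, or nearly so")
        if k != i:                              # swap rows i and k
            LU[[i, k]] = LU[[k, i]]
            piv[[i, k]] = piv[[k, i]]
        LU[i+1:, i] /= LU[i, i]                 # each quotient lies in [-1, 1]
        LU[i+1:, i+1:] -= np.outer(LU[i+1:, i], LU[i, i+1:])
    return LU, piv

With L = np.tril(LU, -1) + np.eye(n) and U = np.triu(LU), one can check that A[piv] equals L @ U up to roundoff.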

Page 14: CS 240A:  Solving Ax = b in parallel


Converting BLAS2 to BLAS3 in GEPP

° Blocking

• Used to optimize matrix multiplication

• Harder here because of data dependencies in GEPP

° Delayed Updates

• Save updates to “trailing matrix” from several consecutive BLAS2 updates

• Apply many saved updates simultaneously in one BLAS3 operation

° Same idea works for much of dense linear algebra

• Open questions remain

° Need to choose a block size b (a heuristic is sketched below)

• Algorithm will save and apply b updates

• b must be small enough so that the active submatrix consisting of b columns of A fits in cache

• b must be large enough to make BLAS3 fast
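These two constraints pull in opposite directions; a toy heuristic of mine (the cache size and 8-byte doubles are assumptions, not the slides'):

def pick_block_size(n, cache_bytes=1 << 20, b_min=8):
    # Largest b such that a panel of b columns of doubles (8*b*n bytes)
    # fits in cache, clamped so b stays big enough for fast BLAS3.
    b = cache_bytes // (8 * n)
    return max(b_min, min(b, n))

print(pick_block_size(10_000))      # 13 with the default 1 MiB cache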

Page 15: CS 240A:  Solving Ax = b in parallel


Blocked GEPP (www.netlib.org/lapack/single/sgetrf.f)

for ib = 1 to n-1 step b        … process matrix b columns at a time
    end = ib + b-1              … point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P’ * L’ * U’
    … let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL^-1 * A(ib:end, end+1:n)
        … update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                          - A(end+1:n, ib:end) * A(ib:end, end+1:n)
        … apply delayed updates with a single matrix-multiply with inner dimension b
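A runnable sketch of the blocked scheme (mine), simplified to omit pivoting (the real sgetrf also carries the permutation P’ along); SciPy’s triangular solve, assumed available, stands in for the LL^-1 step.

import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A, b=32):
    # Right-looking blocked LU without pivoting; returns packed L and U.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for ib in range(0, n - 1, b):                    # b columns at a time
        end = min(ib + b, n)                         # one past the panel
        for i in range(ib, min(end, n - 1)):         # BLAS2 factorization of panel
            A[i+1:, i] /= A[i, i]
            A[i+1:, i+1:end] -= np.outer(A[i+1:, i], A[i, i+1:end])
        if end < n:
            LL = np.tril(A[ib:end, ib:end], -1) + np.eye(end - ib)
            A[ib:end, end:] = solve_triangular(LL, A[ib:end, end:],
                                               lower=True)      # next b rows of U
            A[end:, end:] -= A[end:, ib:end] @ A[ib:end, end:]  # delayed updates:
                                                                # one matmul with
                                                                # inner dimension b
    return A

On a matrix that needs no pivoting, the result matches the unblocked elimination entry for entry.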

Page 16: CS 240A:  Solving Ax = b in parallel


Overview of LAPACK

° Standard library for dense/banded linear algebra (usage sketch below)

• Linear systems: A*x = b

• Least squares problems: minₓ ||A*x - b||₂

• Eigenvalue problems: Ax = λx, Ax = λBx

• Singular value decomposition (SVD): A = UΣVᵀ

° Algorithms reorganized to use BLAS3 as much as possible

° Basis of math libraries on many computers, Matlab 6

° Many algorithmic innovations remain

• Automatic optimization

• Quadtree matrix data structures for locality

• New eigenvalue algorithms
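From Python all of these LAPACK routines are reachable through SciPy; a small usage sketch (the random test problem is mine):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 1000))
b = rng.standard_normal(1000)

lu, piv = lu_factor(A)              # dgetrf: blocked GEPP, A = P*L*U
x = lu_solve((lu, piv), b)          # dgetrs: two triangular solves
print(np.linalg.norm(A @ x - b))    # small residual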

Page 17: CS 240A:  Solving Ax = b in parallel


Parallelizing Gaussian Elimination

° Recall parallelization steps from earlier lecture

• Decomposition: identify enough parallel work, but not too much

• Assignment: load balance work among threads

• Orchestrate: communication and synchronization

• Mapping: which processors execute which threads

° Decomposition

• In the BLAS 2 algorithm nearly every flop in the inner loop can be done in parallel, so with n² processors we need 3n parallel steps

• This is too fine-grained, prefer calls to local matmuls instead

• Need to discuss parallel matrix multiplication

° Assignment

• Which processors are responsible for which submatrices?

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)      … BLAS 1 (scale a vector)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n)       … BLAS 2 (rank-1 update)
                      - A(i+1:n, i) * A(i, i+1:n)

Page 18: CS 240A:  Solving Ax = b in parallel


Different Data Layouts for Parallel GE (on 4 procs)

[Figure: candidate data layouts, with these annotations]

• The winner! (per the following slides, the 2D row-and-column block cyclic layout)

• Bad load balance: P0 idle after first n/4 steps

• Load balanced, but can’t easily use BLAS2 or BLAS3

• Can trade load balance and BLAS2/3 performance by choosing b, but factorization of the block column is a bottleneck

• Complicated addressing

Page 19: CS 240A:  Solving Ax = b in parallel


Review: BLAS 3 (Blocked) GEPP

for ib = 1 to n-1 step b        … process matrix b columns at a time
    end = ib + b-1              … point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P’ * L’ * U’
    … let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
    A(ib:end, end+1:n) = LL^-1 * A(ib:end, end+1:n)
        … update next b rows of U
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                          - A(end+1:n, ib:end) * A(ib:end, end+1:n)   … BLAS 3
        … apply delayed updates with a single matrix-multiply with inner dimension b

Page 20: CS 240A:  Solving Ax = b in parallel


Review: Row and Column Block Cyclic Layout

Processors and matrix blocks are distributed in a 2D array.

pcol-fold parallelism in any column, and calls to BLAS2 and BLAS3 on matrices of size brow-by-bcol.

The serial bottleneck is eased.

Need not be symmetric in rows and columns.
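The layout itself is just an index map; a toy sketch of mine:

import numpy as np

def block_cyclic_owner(i, j, brow, bcol, prow, pcol):
    # Entry (i, j) lies in block (i // brow, j // bcol); block (bi, bj) is
    # stored on processor (bi mod prow, bj mod pcol) of the prow-by-pcol grid.
    return (i // brow) % prow, (j // bcol) % pcol

# Owner map for an 8x8 matrix with 2x2 blocks on a 2x2 processor grid:
owners = np.array([[block_cyclic_owner(i, j, 2, 2, 2, 2) for j in range(8)]
                   for i in range(8)])
print(owners[:, :, 0])    # processor rows cycle 0,0,1,1,0,0,1,1 down the matrix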

Page 21: CS 240A:  Solving Ax = b in parallel


Distributed GE with a 2D Block Cyclic Layout

The block size b in the algorithm and the block sizes brow and bcol in the layout satisfy b = brow = bcol.

Shaded regions indicate busy processors or communication performed.

It is unnecessary to have a barrier between each step of the algorithm; e.g., steps 9, 10, and 11 can be pipelined.

Page 22: CS 240A:  Solving Ax = b in parallel


Distributed GE with a 2D Block Cyclic Layout

Page 23: CS 240A:  Solving Ax = b in parallel


[Figure: matrix multiply of green = green - blue * pink]
