Sparse Matrix Notes - University of Minnesota


Sparse Matrix Techniques

SPARSE MATRICES

In the previous lectures we have talked about the creation of the Y and Z matrices as being fundamental to power system analysis. There is an interesting property of the Y matrix that turns out to be fundamental to the way we treat the matrix when writing computer codes. If we look at a typical Y matrix we notice the following:

1) There is always a non zero term $Y_{ii}$ in each diagonal of the matrix corresponding to each bus (assuming that there is some connection from each bus to at least one other bus).

2) There is a non zero term $Y_{ij}$ and a corresponding $Y_{ji}$ between every pair of buses i, j where a transmission line or transformer of admittance $y_{ij}$ exists in the power system network.

3) Where no transmission line or transformer exists, there is no term in the Y matrix and this results in a zero for the $Y_{ij}$ and $Y_{ji}$ in those positions.

For the Y matrix to have no zeros, it would have to have a line or transformer from every bus to every other bus - an impossibility, and a grossly over-designed power system to say the least. A convenient way to measure this property of a power system is the ratio of the number of lines to the number of buses:

• For a transmission system this is usually between 1.5:1 and 1.75:1
• For a transmission system where a large part of the network has been "equivalenced" this ratio usually ranges between 2.0:1 and 2.5:1

For a Y matrix of an N bus system with a line to bus ratio of 1.5:1 we would have these terms:

1) N diagonal terms
2) 1.5 * N terms above the diagonal
3) 1.5 * N terms below the diagonal

Thus leading to a total of 4 * N terms. Now if N is 1000, for example, this means that we have a total of 4000 non zero terms out of a possible 1000 × 1000 or 1 million terms. Expressed another way, only 0.4% of the possible terms in such a Y matrix are non zero. We say that such matrices are "sparse". It turns out that computer codes that take account of sparseness of matrices ("sparsity codes") can be written to process very large power system networks with great speed and memory efficiency.
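Written out as a single calculation, the density of non zero terms for this example is:

$$\frac{N + 2(1.5N)}{N^2} = \frac{4N}{N^2} = \frac{4}{N} = \frac{4}{1000} = 0.4\%$$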

CALCULATING THE INVERSE OF A SPARSE Y MATRIX

We saw in the previous notes that we could solve for the inverse of a Y matrix by applying a current vector such as the one below, which consists of all zeros and a single 1 corresponding to the desired column:

$$I = \begin{bmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}^T$$



To solve for the voltages corresponding to the above current vector we need to solve this equation:

$$Y E = I, \qquad \begin{bmatrix} Y_{11} & Y_{12} & \cdots \\ Y_{21} & \ddots & \\ \vdots & & Y_{NN} \end{bmatrix} \begin{bmatrix} E_1 \\ \vdots \\ E_{N-1} \\ E_N \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}$$

Here we will use the technique of Gaussian Elimination, which starts with the first row and converts the diagonal term to 1.0 by dividing $Y_{11}$ into all terms in the first row and the first row position of the current vector. We then eliminate the $Y_{21}$ term in the second row by multiplying the first row by $Y_{21}$ and subtracting it from the second row in both the Y matrix and the current vector. We continue this process until all of the terms below the diagonal (i.e., the "lower triangle" of the matrix) have been set to zero and all diagonal terms are 1. At this point we can solve for the last voltage $E_N$ because it is equal to the last row in the new altered current vector. We then solve for $E_{N-1}$ using the N-1 row of the Y matrix and the $E_N$ term we have just solved for. We call this process "back substitution" since we are working our way backwards up the voltage vector. As we showed in the previous notes, the resulting voltage vector is equal to the ith column of the Z matrix.

Incidentally, the fact that the current vector above had only one non zero term allows the above process to be made even more efficient. Algorithms that account for the sparseness of vectors as well as the sparseness of matrices ("sparse vector" techniques) are well known in power system analysis. This course will not go into sparse vector techniques but the student should be aware of them nonetheless.

Now suppose that we wish to generate a different column of the Z matrix, for example the i+1 th column. We could go through the exact same process, but it ought to be obvious that almost all of the arithmetic has been done once already. That is, all of the operations on the terms in the Y matrix



as we set its lower triangle terms to zero and its diagonals to 1.0 will simply be repeated. The only new work will be the multiplies and subtractions taking place in the new current vector. Power system engineers, it turns out, were some of the first to realize that there would be great value if the operations on the Y matrix were done once and somehow saved. Any new current vector that was to be substituted for the original could then be processed very quickly. Such techniques are called "triangular factorization", in which the arithmetic for reducing the Y matrix to a triangular form is saved in memory in tables called the "table of factors". These techniques are explained in the next section's notes on LDU decomposition.
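As a concrete illustration of the elimination and back substitution just described, here is a minimal C sketch; the function and variable names are our own, and it works on a small dense test matrix, deliberately ignoring sparsity (which is exactly the waste the rest of these notes is about removing):

```c
#include <stdio.h>

#define N 3

/* Reduce y to unit-diagonal upper-triangular form, applying the same
   row operations to the right-hand side e, then back substitute.
   On return e holds the solution (one column of the inverse). */
void solve_column(double y[N][N], double e[N])
{
    /* forward elimination */
    for (int i = 0; i < N; i++) {
        double piv = y[i][i];              /* divide row i by its diagonal */
        for (int j = i; j < N; j++) y[i][j] /= piv;
        e[i] /= piv;
        for (int k = i + 1; k < N; k++) {  /* zero the terms below the diagonal */
            double m = y[k][i];
            for (int j = i; j < N; j++) y[k][j] -= m * y[i][j];
            e[k] -= m * e[i];
        }
    }
    /* back substitution: the last unknown is already solved */
    for (int i = N - 2; i >= 0; i--)
        for (int j = i + 1; j < N; j++)
            e[i] -= y[i][j] * e[j];
}

int main(void)
{
    /* a small symmetric test matrix and the current vector [0 1 0]^T,
       which yields the second column of the inverse */
    double y[N][N] = {{ 2, -1,  0},
                      {-1,  2, -1},
                      { 0, -1,  2}};
    double e[N] = {0, 1, 0};

    solve_column(y, e);
    for (int i = 0; i < N; i++) printf("E[%d] = %g\n", i, e[i]);
    return 0;
}
```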


Sparse Matrix Notes: LDU Decomposition

1) Given a set of linear equations:

$$A x = b$$

where: [A] is an n × n nonsingular matrix, x is an unknown n × 1 (column) vector, and b is a known n × 1 vector.

2) Later we will show how to make [A] equivalent to the product of three matrices [L], [D], and [U], where:

[L] is a "lower triangular" matrix (i.e. all terms in [L] which are above and to the right of the diagonal are zero). All diagonal terms are equal to 1.

[D] is a "diagonal" matrix. All numbers are on the diagonal; all off diagonal terms are zero.

[U] is an "upper triangular" matrix (all terms in [U] which are below and to the left of the diagonal are zero); its diagonal terms are all 1.

3) Then:

$$L = \begin{bmatrix} 1 & 0 & 0 \\ x & 1 & 0 \\ x & x & 1 \end{bmatrix} \qquad D = \begin{bmatrix} d & 0 & 0 \\ 0 & d & 0 \\ 0 & 0 & d \end{bmatrix} \qquad U = \begin{bmatrix} 1 & x & x \\ 0 & 1 & x \\ 0 & 0 & 1 \end{bmatrix}$$

where the x's and d's in the above simply stand for non zero terms.


4) We derive [L], [D], and [U] so that:

$$A = L\, D\, U$$

Then the equation:

$$A x = b$$

can be transformed to:

$$x = U^{-1} D^{-1} L^{-1} b$$

The above is carried out in three steps using two temporary vectors b' and b" as follows:

let

$$b' = L^{-1} b$$

then

$$b'' = D^{-1} b'$$

and finally:

$$x = U^{-1} b''$$

Notice that the inverse of [L] is quite trivial since the solution to [L]b' = b starts at the top by setting the first term in b' equal to the first term in b (i.e. $b'_1 = b_1$). Then the next equation to be solved is of the form $L_{21} b'_1 + b'_2 = b_2$, which gives $b'_2$, and so forth for the entire b' vector.
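A minimal C sketch of this forward pass (the names are ours, and the unit diagonal of [L] is assumed rather than checked); the data in main matches the 3 × 3 worked example later in these notes:

```c
#include <stdio.h>

/* Solve [L] bp = b for unit lower triangular [L], top to bottom:
   bp[i] = b[i] minus the already-solved terms L[i][j] * bp[j], j < i. */
void forward_sub(int n, double L[n][n], const double b[n], double bp[n])
{
    for (int i = 0; i < n; i++) {
        bp[i] = b[i];
        for (int j = 0; j < i; j++)
            bp[i] -= L[i][j] * bp[j];
    }
}

int main(void)
{
    /* the 3 x 3 [L] and b from the worked example later in these notes */
    double L[3][3] = {{ 1,  0, 0},
                      { 2,  1, 0},
                      {-1, -1, 1}};
    double b[3] = {2, 1, 4}, bp[3];

    forward_sub(3, L, b, bp);
    printf("b' = [%g %g %g]\n", bp[0], bp[1], bp[2]);  /* prints 2 -3 3 */
    return 0;
}
```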


Note also that the inverse of [D] is trivial, containing only diagonal terms that are themselves the inverses of each of the diagonal terms in [D], that is:

$$D_{ii}^{-1} = \frac{1}{D_{ii}}$$

Similarly the solution for [U]x = b" proceeds by solving first for the last term in x (i.e. $x_n$) by observing the fact that $x_n = b''_n$, and then solving for $x_{n-1}$, and so forth up the vector until all terms are solved.

Here is the way this looks for a 3 × 3 matrix:

The first step [L]b' = b looks like:

$$\begin{bmatrix} 1 & 0 & 0 \\ L_{21} & 1 & 0 \\ L_{31} & L_{32} & 1 \end{bmatrix} \begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

then the following can be done:

$$b'_1 = b_1$$
$$b'_2 = b_2 - L_{21} b'_1$$
$$b'_3 = b_3 - L_{31} b'_1 - L_{32} b'_2$$

Now the b' vector is multiplied by $D^{-1}$:

$$\begin{bmatrix} b''_1 \\ b''_2 \\ b''_3 \end{bmatrix} = \begin{bmatrix} d_{11}^{-1} & & \\ & d_{22}^{-1} & \\ & & d_{33}^{-1} \end{bmatrix} \begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \end{bmatrix}$$

Now solve for x using [U]:

$$\begin{bmatrix} 1 & U_{12} & U_{13} \\ 0 & 1 & U_{23} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b''_1 \\ b''_2 \\ b''_3 \end{bmatrix}$$


The last matrix multiply represents the following:

$$x_3 = b''_3$$
$$x_2 = b''_2 - U_{23} x_3$$
$$x_1 = b''_1 - U_{12} x_2 - U_{13} x_3$$

From here on the assumption will be made that [A] is symmetric. That is, $A_{ji}$ will equal $A_{ij}$, and then it turns out that $L = U^{T}$, and this simplifies the discussion.

Now, it has been shown that if one has [L], [D], and [U] then operations requiring the inverse of [A] can be carried out easily (and without explicitly calculating the inverse). The steps below will show how one derives the [L], [D], and [U] matrices from [A]. To do this we use a very old technique that forces [A] to look like [U], that is, to force [A] into an upper triangular matrix. Note that mathematically this is equivalent to multiplying [A] by $L^{-1}$ and $D^{-1}$ as follows:

$$D^{-1} L^{-1} A = D^{-1} L^{-1} L D U = U$$

You already know how to do this if you have ever used what mathematicians call Gaussian Elimination to solve a set of linear equations. For example, given the following set of linear equations:


$$x_1 + 2x_2 - x_3 = 2$$
$$2x_1 + x_2 + x_3 = 1$$
$$-x_1 + x_2 + 2x_3 = 4$$

We solve for $x_1, x_2, x_3$ by eliminating variables in the equations. Usually, this is done by starting at the first equation and dividing all the coefficients by the coefficient of the first variable (here, it is already 1 so nothing changes), then "eliminating" the first variable from the second and third equations by multiply and subtract operations. Then the second equation is divided by the new coefficient on the second variable and the second variable is eliminated from the third equation. Last of all the third equation is divided by the coefficient of the third variable. Here are these steps for the three equations in our example above:


First multiply the first equation by 2 and subtract it from the second; the result is:

$$x_1 + 2x_2 - x_3 = 2$$
$$0 - 3x_2 + 3x_3 = -3$$
$$-x_1 + x_2 + 2x_3 = 4$$

We now multiply the second equation by -1/3 to make the second variable coefficient equal to 1:

$$x_1 + 2x_2 - x_3 = 2$$
$$0 + x_2 - x_3 = 1$$
$$-x_1 + x_2 + 2x_3 = 4$$

Next we add the first equation to the third (actually this should be stated as multiplying the first equation by -(-1) and adding it to the third); the result is:

$$x_1 + 2x_2 - x_3 = 2$$
$$0 + x_2 - x_3 = 1$$
$$0 + 3x_2 + x_3 = 6$$

Next we multiply the second equation by -3 and add it to the third equation; the result is:

$$x_1 + 2x_2 - x_3 = 2$$
$$0 + x_2 - x_3 = 1$$
$$0 + 0 + 4x_3 = 3$$

Finally, we take the trivial step of multiplying the third equation by 1/4 to get:

$$x_1 + 2x_2 - x_3 = 2$$
$$0 + x_2 - x_3 = 1$$
$$0 + 0 + x_3 = 0.75$$

Note that the third equation now tells us the value of $x_3$; then from the second equation, if we substitute this value for $x_3$, we get:

$$x_2 = 1 + x_3 = 1 + 0.75 = 1.75$$


and finally from equation one:

$$x_1 = 2 - 2x_2 + x_3 = 2 - 2(1.75) + 0.75 = -0.75$$

In vector form:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -0.75 \\ 1.75 \\ 0.75 \end{bmatrix}$$

If we put the three equations into matrix form we can carry out the same steps and derive the [U] matrix and the [D] matrix as follows:

$$\begin{bmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ -1 & 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 4 \end{bmatrix}$$

Starting at the first row we multiply it by the number that will make the (1,1) term equal to 1 (in this case it is already 1 so we can go to the next step). However, note that this would be expressed as the multiplying of the first row by 1. This also happens, then, to tell us the value of the $D_{11}^{-1}$ term. So we can write this much:

$$D^{-1} = \begin{bmatrix} 1 & & \\ & ? & \\ & & ? \end{bmatrix}$$

We shall derive the other terms later.

We now perform the same operation on the matrix that we did above to the equations. To repeat, we multiply the first row by 2 and subtract it from the second to obtain:

$$\begin{bmatrix} 1 & 2 & -1 \\ 0 & -3 & 3 \\ -1 & 1 & 2 \end{bmatrix}$$


This continues until we obtain:

$$U = \begin{bmatrix} 1 & 2 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}$$

and:

$$D = \begin{bmatrix} 1 & & \\ & -3 & \\ & & 4 \end{bmatrix}$$

Note that each time we went to convert the (i,i) term to 1 we saved the divisor into the [D] matrix. Finally:

$$L = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -1 & 1 \end{bmatrix}$$

We call [L], [D], [U] the table of factors of [A] and we use the table of factors to solve for unknown vectors x when we are given a right hand side vector b.

For example, in our case:

$$b = \begin{bmatrix} 2 \\ 1 \\ 4 \end{bmatrix}$$

To solve for x we proceed as before to use two temporary vectors b' and b". That is:

$$\begin{bmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ -1 & 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 4 \end{bmatrix}$$


is equivalent to:

$$L\, D\, U\, x = \begin{bmatrix} 2 \\ 1 \\ 4 \end{bmatrix}$$

with [L], [D], and [U] defined as above. The first step is to get b' from [L]b' = b or:

$$\begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 4 \end{bmatrix}$$

This is easily solved for:

$$\begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \end{bmatrix} = \begin{bmatrix} 2 \\ -3 \\ 3 \end{bmatrix}$$

Next we solve for b" from [D]b" = b' or:

$$\begin{bmatrix} b''_1 \\ b''_2 \\ b''_3 \end{bmatrix} = \begin{bmatrix} d_{11}^{-1} & & \\ & d_{22}^{-1} & \\ & & d_{33}^{-1} \end{bmatrix} \begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \end{bmatrix} = \begin{bmatrix} 1 & & \\ & -1/3 & \\ & & 1/4 \end{bmatrix} \begin{bmatrix} 2 \\ -3 \\ 3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 0.75 \end{bmatrix}$$

finally we solve for x from [U]x = b":

$$\begin{bmatrix} 1 & 2 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 0.75 \end{bmatrix}$$

This results in the solution:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -0.75 \\ 1.75 \\ 0.75 \end{bmatrix}$$

Note that if we were now presented with a new b vector we would not need to recreate the [L], [D], and [U] matrices to solve for the new values in x. By using the [L], [D], and [U] matrices we


make use of the “factors” of the original matrix to get repeated solutions with differing b vectors.

It is also important to note here that we are performing Gaussian elimination in three stages. The first, multiplying by $L^{-1}$ or solving [L]b' = b, proceeds to solve for b' from top to bottom, and these operations are called the forward operations. Similarly, the solution of [U]x = b" solves for the terms of x starting with the bottom term and proceeding up to the top, and therefore these operations are called the backward operations.

Many technical papers on sparsity programming applications for power system problems refer to the entire procedure as forward-back substitution to refer to the solution using [L], [D], and [U].
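To tie the whole procedure together, here is a hedged C sketch, with our own naming, dense storage, and no pivoting, that builds the table of factors once and then performs the forward, diagonal, and backward operations for any number of right hand side vectors. It reproduces the 3 × 3 worked example above:

```c
#include <stdio.h>

#define N 3

/* Factor a into unit lower triangle l, diagonal d, unit upper triangle u
   by Gaussian elimination, saving the divisors and multipliers
   (the "table of factors"). No pivoting: nonzero diagonals are assumed. */
void ldu_factor(double a[N][N], double l[N][N], double d[N], double u[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            l[i][j] = (i == j);               /* start from identity */
            u[i][j] = (i == j);
        }
    for (int i = 0; i < N; i++) {
        d[i] = a[i][i];                       /* save the divisor into [D] */
        for (int j = i + 1; j < N; j++)
            u[i][j] = a[i][j] / d[i];         /* normalized row of [U]     */
        for (int k = i + 1; k < N; k++) {
            l[k][i] = a[k][i] / d[i];         /* multiplier, saved in [L]  */
            for (int j = i + 1; j < N; j++)
                a[k][j] -= l[k][i] * d[i] * u[i][j];
        }
    }
}

/* Forward, diagonal, and backward operations: overwrite x with A^-1 b. */
void ldu_solve(double l[N][N], double d[N], double u[N][N],
               const double b[N], double x[N])
{
    for (int i = 0; i < N; i++) {             /* forward:  [L] b' = b    */
        x[i] = b[i];
        for (int j = 0; j < i; j++) x[i] -= l[i][j] * x[j];
    }
    for (int i = 0; i < N; i++) x[i] /= d[i]; /* diagonal: b'' = D^-1 b' */
    for (int i = N - 1; i >= 0; i--)          /* backward: [U] x = b''   */
        for (int j = i + 1; j < N; j++) x[i] -= u[i][j] * x[j];
}

int main(void)
{
    double a[N][N] = {{ 1, 2, -1},
                      { 2, 1,  1},
                      {-1, 1,  2}};
    double l[N][N], u[N][N], d[N], x[N];
    double b[N] = {2, 1, 4};

    ldu_factor(a, l, d, u);        /* done once */
    ldu_solve(l, d, u, b, x);      /* repeat for each new b vector */
    printf("x = [%g %g %g]\n", x[0], x[1], x[2]);  /* -0.75 1.75 0.75 */
    return 0;
}
```

Running it recovers L, D, U and the solution exactly as derived by hand above; a second call to ldu_solve with a different b reuses the factors with no new factorization work.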


Sparsity Matrix Notes

In the last lecture we saw how one could reduce a matrix to its table of factors symbolized by the matrices L, D and U. We also developed the idea that the table of factors was extremely easy to generate when the original matrix, A, was sparse. Finally, we saw that a power system's Y matrix is an extremely sparse matrix having only a fraction of a percent of the possible terms as non zeros.

In the lecture today we shall develop the idea of "Sparsity Programming", that is, the techniques behind the computer algorithms that manipulate sparse matrices. Suppose we develop the Y matrix for a network like the one below:

[Figure: a ten-bus power system network, buses numbered 1-10; all line admittances = 1 pu, the ground admittance is 2 pu.]

In the figure below we have a ten by ten matrix representing this network that is sparse (blank entries are zeros):

$$Y = \begin{bmatrix}
 2 &    &    & -1 &    &    &    &    &    & -1 \\
   &  2 &    &    & -1 &    &    &    & -1 &    \\
   &    &  2 & -1 &    & -1 &    &    &    &    \\
-1 &    & -1 &  3 &    &    &    &    & -1 &    \\
   & -1 &    &    &  2 &    &    & -1 &    &    \\
   &    & -1 &    &    &  2 &    &    &    & -1 \\
   &    &    &    &    &    &  1 & -1 &    &    \\
   &    &    &    & -1 &    & -1 &  2 &    &    \\
   & -1 &    & -1 &    &    &    &    &  2 &    \\
-1 &    &    &    &    & -1 &    &    &    &  4
\end{bmatrix}$$

If we were to store this matrix in a standard FORTRAN storage mode in a computer we would "dimension" it at (10,10) and store all one hundred terms in memory. However, we can see from the above that only ten diagonal terms and twenty off diagonal terms are needed, for a total of thirty terms; all other terms are zero.


Now let us try to develop the table of factors for this matrix. To do this we will perform Gaussian elimination on the matrix starting at the first row. The first step is to make the diagonal term unity by dividing the row by 2; in fact this can be carried out directly for the first three rows. The resulting matrix is as follows:

$$D = \begin{bmatrix} 2 & & & \\ & 2 & & \\ & & 2 & \\ & & & \ddots \end{bmatrix} \qquad \begin{bmatrix}
 1 &    &    & -0.5 &      &      &    &    &      & -0.5 \\
   &  1 &    &      & -0.5 &      &    &    & -0.5 &      \\
   &    &  1 & -0.5 &      & -0.5 &    &    &      &      \\
-1 &    & -1 &  3   &      &      &    &    & -1   &      \\
   & -1 &    &      &  2   &      &    & -1 &      &      \\
   &    & -1 &      &      &  2   &    &    &      & -1   \\
   &    &    &      &      &      &  1 & -1 &      &      \\
   &    &    &      & -1   &      & -1 &  2 &      &      \\
   & -1 &    & -1   &      &      &    &    &  2   &      \\
-1 &    &    &      &      & -1   &    &    &      &  4
\end{bmatrix}$$

where the matrix on the left is the diagonal matrix, D (only its first three terms have been determined so far). The next step requires that we eliminate the terms in the fourth row at the 4,1 and 4,3 locations. To eliminate the term at the 4,1 location we must add the first row to the fourth row; the result is:

$$\begin{bmatrix}
 1 &    &    & -0.5 &      &      &    &    &      & -0.5 \\
   &  1 &    &      & -0.5 &      &    &    & -0.5 &      \\
   &    &  1 & -0.5 &      & -0.5 &    &    &      &      \\
 0 &    & -1 &  2.5 &      &      &    &    & -1   & -0.5 \\
   & -1 &    &      &  2   &      &    & -1 &      &      \\
   &    & -1 &      &      &  2   &    &    &      & -1   \\
   &    &    &      &      &      &  1 & -1 &      &      \\
   &    &    &      & -1   &      & -1 &  2 &      &      \\
   & -1 &    & -1   &      &      &    &    &  2   &      \\
-1 &    &    &      &      & -1   &    &    &      &  4
\end{bmatrix}$$

Note several things that have happened here. The 4,1 term has been eliminated, or set to zero. The diagonal, or 4,4, term has been changed, and a new term has been added to the fourth row, namely the 4,10 term. In the jargon of sparsity programs, the 4,10 term is called a "fill in" term. The next operation is to eliminate the 4,3 term by adding the third row to the fourth row. YOUR HOMEWORK ASSIGNMENT FOR THIS WEEK IS TO CONTINUE THIS PROCESS UNTIL ALL OF THE TERMS BELOW THE DIAGONAL ARE ELIMINATED. YOU SHOULD SHOW THE RESULTING U MATRIX AND THE RESULTING D MATRIX. FINALLY, YOU SHOULD SOLVE FOR ALL BUS VOLTAGES WHEN THERE IS A 1 PU CURRENT AT BUS 2 AND ALL OTHER CURRENTS ARE ZERO.


As you can see from the preceding matrix manipulations, only the non zero terms are of interest. In fact, for large power system matrices, it is impractical to store anything other than the non zero terms. The heart of sparsity programming techniques is the ability to store and manipulate sparse matrices. To do this, we will introduce a storage scheme that does not use the standard FORTRAN (or C, or PASCAL) "full" matrix storage methods. To review, in full matrix storage we dimension the matrix as N,N and the compiler then knows that the first N locations are for column 1, the second N locations are for column 2, etc. To get the i,j term in the matrix we can use the following algorithm. Let L be the location for the i,j term; then:

$$L = (j - 1) N + i$$

Note that the computer can go directly to any matrix location using the above. (Also note that the above applies only to a FORTRAN storage scheme where the locations in a table go from 1 to N and not from 0 to N-1.)

We do not wish to store all the zeros, so we shall use a storage scheme which puts the diagonal terms in a separate table called DIAG, dimensioned N × 1. The off diagonal terms shall be stored in what is known as a "link list". The diagonal terms will always be stored in a conventional array and accessed by the following rule: if we seek to access the $Y_{ii}$ term, we simply access DIAG(i). For our ten bus example:

DIAG = [2, 2, 2, 3, 2, 2, 1, 2, 2, 4]
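For comparison, here is roughly what the two access rules look like in C (0-based indexing; the function names are illustrative only):

```c
/* Full (column-major) storage: element (i, j) of an n x n matrix, with
   FORTRAN-style 1-based indices mapped onto a 0-based C array:
   L = (j-1)*N + i  becomes  a[(j-1)*n + (i-1)]. */
double get_full(const double *a, int n, int i, int j)
{
    return a[(j - 1) * n + (i - 1)];
}

/* Sparse scheme: the diagonal term Y(i,i) lives in its own table,
   DIAG(i) in the notes. */
double get_diag(const double *diag, int i)
{
    return diag[i - 1];
}
```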


The link list storage for the off diagonal terms will look like this (only the entries for rows 1-3 are filled in here; completing the table is part of the homework below):

LINK LIST STORAGE OF OFF DIAGONAL TERMS

FIRST: 1, 3, 5, 7, 10, 12, ...

m   COLUMN   YLINE   NEXT
1      4      -1      2
2     10      -1      0
3      5      -1      4
4      9      -1      0
5      4      -1      6
6      6      -1      0

Notice that we do not store anything but the y line terms themselves. The link list scheme works as follows. The table labeled FIRST contains numbers that tell the location of the first term for each row. That is, FIRST(i) contains the index in the other three tables for the first line admittance attached to bus i. The other three tables are always meant to be accessed together. Thus the tables are always looked at as COLUMN(m), YLINE(m), and NEXT(m), where m is simply the index being used to access the tables, i.e. m is not a bus number. The table COLUMN contains the column (bus) number, table YLINE contains the admittance, and NEXT points to the index containing the next term in the row. (Remember that we are storing the diagonal terms in a separate table so that the link list will not contain the diagonals.)

We see from the above that the first term in the first row is stored in the first link list location, so FIRST(1) equals 1. Row 1 contains two terms, so NEXT(1) is set to 2 to indicate that the next term in row 1 is in the second location of the link list. Finally, NEXT(2) is zero to indicate that there are no more terms. Note the values stored in COLUMN(1) and COLUMN(2), which contain the column numbers 4 and 10 for the two terms in the first row.

HOMEWORK CONTINUED: COMPLETE THE ABOVE TABLE FOR THE STORAGE OF THE Y MATRIX.

Last of all, we must be able to perform Gaussian elimination on the sparse matrix stored as a separate diagonal term table and as a link list of off diagonal terms. The only disadvantage to working with the link list scheme is the constant need to access the off diagonal terms in order across the row; we cannot just go directly to any term since we have no algorithm to access it. This turns out to be only a slight problem. In the next lecture we shall develop the idea of Gaussian elimination using the link list storage scheme.
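A small C sketch of this storage scheme (our own declarations; 1-based indices with position 0 unused, to match the notes), loaded with the six entries shown above and traversed by following the pointers:

```c
#include <stdio.h>

#define NBUS   10
#define MAXOFF 40   /* room for the off diagonals plus later fill-in terms */

/* Parallel-array link list, 1-based like the notes; NEXT == 0 marks the
   end of a row's chain. */
double DIAG[NBUS + 1];
int    FIRST[NBUS + 1];
int    COLUMN[MAXOFF];
double YLINE[MAXOFF];
int    NEXT[MAXOFF];

/* Walk row i by following the pointers, printing each off diagonal term. */
void print_row(int i)
{
    printf("row %d: diag = %g;", i, DIAG[i]);
    for (int m = FIRST[i]; m != 0; m = NEXT[m])
        printf("  (col %d) %g", COLUMN[m], YLINE[m]);
    printf("\n");
}

int main(void)
{
    /* the first three rows of the example Y matrix, stored as in the notes */
    DIAG[1] = 2;  DIAG[2] = 2;  DIAG[3] = 2;
    FIRST[1] = 1; FIRST[2] = 3; FIRST[3] = 5;
    COLUMN[1] =  4; YLINE[1] = -1; NEXT[1] = 2;
    COLUMN[2] = 10; YLINE[2] = -1; NEXT[2] = 0;
    COLUMN[3] =  5; YLINE[3] = -1; NEXT[3] = 4;
    COLUMN[4] =  9; YLINE[4] = -1; NEXT[4] = 0;
    COLUMN[5] =  4; YLINE[5] = -1; NEXT[5] = 6;
    COLUMN[6] =  6; YLINE[6] = -1; NEXT[6] = 0;

    for (int i = 1; i <= 3; i++) print_row(i);
    return 0;
}
```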


GAUSSIAN ELIMINATION ON A LINK LIST STORED MATRIX

We have seen how we can store a large dimension sparse matrix in a link list storage scheme and attain great efficiency over the conventional compiler techniques that require all matrix terms to be stored and that use a simple calculation to find the location of any individual term.

Gaussian elimination can still be carried out when the matrix elements are stored in a link list as long as we are willing to live with the programming complications. We shall see that all operations involved in Gaussian elimination can be carried out if we are able to process the terms in each row starting at the leftmost term and proceeding to the rightmost term. Suppose we look at the two basic operations that we are called on to carry out. First, suppose that the first row of a sparse Y matrix has terms in the (1,1), the (1,2), and the (1,12) positions. (Note, the example below does not correspond to that on the previous pages.) This row shall be shown as:

$$\begin{bmatrix} Y_{1,1} & Y_{1,2} & 0 & \cdots & 0 & Y_{1,12} & 0 & 0 & 0 \end{bmatrix}$$

When we divide this row of the Y matrix by $Y_{1,1}$ (i.e., the first step in Gaussian elimination) we get:

$$\begin{bmatrix} 1 & \dfrac{Y_{1,2}}{Y_{1,1}} & 0 & \cdots & 0 & \dfrac{Y_{1,12}}{Y_{1,1}} & 0 & 0 & 0 \end{bmatrix}$$

If the Y matrix had been stored in a sparse matrix link list, the first row would appear like this before any Gaussian elimination steps:

FIRST(1) = 1

m   COLUMN   YLINE      NEXT
1      2     Y(1,2)      2
2     12     Y(1,12)     0


After the first Gaussian elimination step, the link list table would appear as:

m   COLUMN   YLINE             NEXT
1      2     Y(1,2)/Y(1,1)      2
2     12     Y(1,12)/Y(1,1)     0

The diagonal term would have been stored in the DIAG table. As for the off diagonal terms, this operation can be carried out easily starting from the left term and proceeding to the right, simply by following the link pointers FIRST and NEXT in the link tables.

Now suppose that the second row looks like this:

$$\begin{bmatrix} Y_{2,1} & Y_{2,2} & 0 & 0 & Y_{2,5} & 0 & \cdots & 0 & Y_{2,14} & 0 \end{bmatrix}$$

Here again, if this row were stored in the link list tables we would have:

FIRST(1) = 1, FIRST(2) = 3

m   COLUMN   YLINE             NEXT
1      2     Y(1,2)/Y(1,1)      2
2     12     Y(1,12)/Y(1,1)     0
3      1     Y(2,1)             4
4      5     Y(2,5)             5
5     14     Y(2,14)            0

The above shows the link list storage after the division of row 1 by $Y_{1,1}$ and before any operations involving row 2. The next step in Gaussian elimination would be to "eliminate" the $Y_{2,1}$ term by multiplying the first row by $Y_{2,1}$ and subtracting it from the second row.

The result is:

$$\begin{bmatrix} 0 & \left( Y_{2,2} - \dfrac{Y_{2,1} Y_{1,2}}{Y_{1,1}} \right) & 0 & 0 & Y_{2,5} & 0 & \cdots & 0 & \left( -\dfrac{Y_{2,1} Y_{1,12}}{Y_{1,1}} \right) & 0 & Y_{2,14} & 0 \end{bmatrix}$$


The new term, shown in parentheses above, would be stored in the second position of the DIAG table. Note that the $Y_{2,1}$ term has been eliminated (i.e., it was set to zero and can then be forgotten). The $Y_{2,5}$ and $Y_{2,14}$ terms are not changed since there were no corresponding terms in those columns of row 1, and finally, there is a new term in row 2 at column 12 corresponding to a fill in of the $Y_{1,12}$ term from column 12 of row 1. The resulting link list shows how we remove terms and add "fill in" terms to the link list:

FIRST(1) = 1, FIRST(2) = 4

m   COLUMN   YLINE                     NEXT
1      2     Y(1,2)/Y(1,1)              2
2     12     Y(1,12)/Y(1,1)             0
3      1     Y(2,1)                     4    <- eliminated term, no longer linked
4      5     Y(2,5)                     L
5     14     Y(2,14)                    0
...
L     12     -Y(2,1)Y(1,12)/Y(1,1)      5    <- new fill in term

We note the following about the new link list table:

• The eliminated term, $Y_{2,1}$, is simply "unlinked" and left in the table with no active pointers to it. In actual practice this position would be reused later.

• The FIRST(2) pointer is changed to 4 to show that the first non zero term in row 2 is the $Y_{2,5}$ term, which is stored in the fourth position of the link list.

• The new term in row 2 must be added and "linked in". The link list table was assumed to have L-1 terms, and the L term was the next available open position at the bottom of the list. Therefore, the new term in the 12th column of row 2 is stored here. The NEXT pointer corresponding to the $Y_{2,5}$ term stored in position 4 is set to the value of L so that it points to the new position at the bottom of the list.

• The NEXT(L) pointer is then set to 5 so that it points back to the $Y_{2,14}$ term stored in the 5th position.

Tracing the terms of the second row we start with FIRST(2) and go to position 4 in the link list to get the first term; next we look at NEXT(4) and get L, so we go to the Lth position to get the next


term. Finally, NEXT(L) sends us back to the 5th position where we find NEXT(5) = 0, indicating that this is the last term in the second row. NOTE THAT THE LINKS IN THE LINK LIST NEED NOT BE STORED IN CONSECUTIVE POSITIONS; IT IS THE POINTERS THAT DETERMINE THEIR ORDER.

We can now proceed to divide the new second row by the current value of $Y_{2,2}$, that is:

$$Y_{2,2}^{new} = Y_{2,2} - \frac{Y_{2,1} Y_{1,2}}{Y_{1,1}}$$

which is stored in DIAG(2). We proceed in this manner to perform Gaussian elimination on the entire matrix.
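Finally, a rough C sketch of the elimination step just described, building on the DIAG/FIRST/COLUMN/YLINE/NEXT arrays from the earlier sketch. The function name, the nfree counter, and the assumption that each row's pointer chain is kept in ascending column order are ours:

```c
/* Uses the DIAG, FIRST, COLUMN, YLINE, NEXT arrays from the earlier sketch. */
extern double DIAG[], YLINE[];
extern int FIRST[], COLUMN[], NEXT[];

int nfree = 7;   /* next open position at the bottom of the link list
                    (e.g., 7 if six terms are currently stored) */

/* Eliminate the (k,i) term of row k using the already normalized row i,
   i.e. row_k := row_k - m * row_i with multiplier m = Y(k,i). */
void eliminate_term(int i, int k)
{
    double m = 0.0;
    int prev = 0;

    /* 1. find and unlink the (k,i) term; its value is the multiplier m */
    for (int q = FIRST[k]; q != 0; prev = q, q = NEXT[q]) {
        if (COLUMN[q] == i) {
            m = YLINE[q];
            if (prev) NEXT[prev] = NEXT[q];
            else      FIRST[k] = NEXT[q];
            break;
        }
    }

    /* 2. subtract m * row_i from row_k, scanning row i left to right */
    for (int p = FIRST[i]; p != 0; p = NEXT[p]) {
        int j = COLUMN[p];
        if (j == k) {                         /* hits row k's diagonal */
            DIAG[k] -= m * YLINE[p];
            continue;
        }
        int q = FIRST[k], before = 0, found = 0;
        while (q != 0 && COLUMN[q] <= j) {    /* rows kept in column order */
            if (COLUMN[q] == j) { YLINE[q] -= m * YLINE[p]; found = 1; break; }
            before = q;
            q = NEXT[q];
        }
        if (!found) {                         /* "fill in": link a new term */
            COLUMN[nfree] = j;
            YLINE[nfree] = -m * YLINE[p];
            NEXT[nfree] = q;                  /* point back into the row    */
            if (before) NEXT[before] = nfree;
            else        FIRST[k] = nfree;
            nfree++;
        }
    }
}
```

With row 1 already normalized, a call such as eliminate_term(1, 2) reproduces the example above: position 3 is unlinked, FIRST(2) becomes 4, and the fill in term at column 12 is linked in between positions 4 and 5.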