Mth Term Paper



TOPIC: GAUSSIAN ELIMINATION.

SUBMITTED TO: JAGANJOT KAUR (SUBJECT TEACHER)

SUBMITTED BY: GURMUKH SINGH

SECTION: E4001

ROLL NO.: B42

REGD. NO.: 11006705


    ACKNOWLEDGEMENT

Giving thanks is a sacred part of Indian culture. So, first of all, I would like to thank our subject teacher, MISS JAGANJOT KAUR, for her humble support and encouragement, which carried me through the project. Her interest in my topic lifted my spirits, which in turn helped me throughout.

Secondly, I express my gratitude to my parents for being a continuous source of encouragement and for the financial aid given to me.

Thirdly, I would love to thank my beloved friends, who helped me in their own small ways. Their support and innovative ideas were of great help and enabled me to complete my work on time.

Finally, I would like to thank God for showering His blessings upon me.

    GURMUKH SINGH


    CONTENTS

1. INTRODUCTION
2. HISTORY
3. ALGORITHM OVERVIEW
4. EXAMPLES
5. APPLICATIONS
6. SYSTEM OF LINEAR EQUATIONS
7. GAUSS-JORDAN METHOD
8. EXAMPLES
9. CONCLUSION

    REFERENCES


    INTRODUCTION

    In linear algebra, Gaussian elimination is an algorithm for solving systems of

    linear equations, finding the rank of a matrix, and calculating the inverse of an

    invertible square matrix. Gaussian elimination is named after German

    mathematician and scientist Carl Friedrich Gauss, which makes it an example of

    Stigler's law.

    Elementary row operations are used to reduce a matrix to row echelon form.

Gauss-Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications, and is cheaper than the Gauss-Jordan version.


    History

    The method of Gaussian elimination appears in Chapter Eight, Rectangular

    Arrays, of the important Chinese mathematical text Jiuzhang suanshu or The

    Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen

    problems, with two to five equations. The first reference to the book by this title

    is dated to 179 CE, but parts of it were written as early as approximately 150

    BCE.[1]

    It was commented on by Liu Hui in the 3rd century.

The method in Europe stems from the notes of Isaac Newton.[2] In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the

    18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric

    elimination that was adopted in the 19th century by professional hand computers

    to solve the normal equations of least-squares problems. The algorithm that is

    taught in high school was named for Gauss only in the 1950s as a result of

    confusion over the history of the subject.


    Algorithm overview

    The process of Gaussian elimination has two parts. The first part (Forward

    Elimination) reduces a given system to either triangular or echelon form, or

results in a degenerate equation with no solution, indicating the system has no solution. This is accomplished through the use of elementary row operations. The

    second step uses back substitution to find the solution of the system above.

    Stated equivalently for matrices, the first part reduces a matrix to row echelon

    form using elementary row operations while the second reduces it to reduced

    row echelon form, or row canonical form.

    Another point of view, which turns out to be very useful to analyze the

algorithm, is that Gaussian elimination computes a matrix decomposition. The

    three elementary row operations used in the Gaussian elimination (multiplying

    rows, switching rows, and adding multiples of rows to other rows) amount to

    multiplying the original matrix with invertible matrices from the left. The first

part of the algorithm computes an LU decomposition, while the second part writes

    the original matrix as the product of a uniquely determined invertible matrix

    and a uniquely determined reduced row-echelon matrix.
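As a quick illustration of the forward-elimination part just described, here is a minimal Python sketch; the function name, the list-of-lists representation of the augmented matrix, and the omission of pivoting are choices of this sketch, not of the paper.

def forward_eliminate(aug):
    """Reduce an augmented matrix (list of lists) to row echelon form."""
    A = [row[:] for row in aug]              # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols - 1):              # last column is the right-hand side
        # find a row at or below pivot_row with a nonzero entry (switch rows)
        pivot = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pivot is None:
            continue                         # "stuck" in this column: move on
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        # add multiples of the pivot row to the rows below, creating zeros
        for r in range(pivot_row + 1, rows):
            factor = A[r][col] / A[pivot_row][col]
            for c in range(col, cols):
                A[r][c] -= factor * A[pivot_row][c]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A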

    Example:

    Suppose the goal is to find and describe the solution(s), if any, of the following

    system of linear equations:

    The algorithm is as follows: eliminate x from all equations below L1, and then

    eliminate y from all equations below L2. This will put the system into triangular

    form. Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from L2 by adding a suitable multiple of L1 to L2; x is then eliminated from L3 by adding L1 to L3. Formally:


    The result is:

    Now y is eliminated from L3 by adding 4L2 to L3:

    The result is:

    This result is a system of linear equations in triangular form, and so the first part

    of the algorithm is complete.

The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

    Then, z can be substituted into L2, which can then be solved to obtain

    Next, z and y can be substituted into L1, which can be solved to obtain

    The system is solved.
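The back-substitution step can be sketched in the same way; the sketch assumes a square system that has already been reduced to triangular form with nonzero diagonal entries (assumptions of the sketch, not statements of the paper).

def back_substitute(tri):
    """Solve a triangular augmented matrix [U | b] with n rows and n+1 columns."""
    n = len(tri)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # solve for the unknowns in reverse order
        known = sum(tri[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (tri[i][n] - known) / tri[i][i]
    return x

For a square system with a unique solution, back_substitute(forward_eliminate(aug)) would return the solution vector, with forward_eliminate as sketched earlier.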

    Some systems cannot be reduced to triangular form, yet still have

    at least one valid solution: for example, if y had not occurred in L2 and L3 after

    the first step above, the algorithm would have been unable to reduce the system

    to triangular form.


    However, it would still have reduced the system to echelon form. In this case, the

    system does not have a unique solution, as it contains at least one free variable.

    The solution set can then be expressed parametrically (that is, in terms of the

    free variables, so that if values for the free variables are chosen, a solution will be

    generated).

    In practice, one does not usually deal with the systems in terms of equations but

    instead makes use of the augmented matrix (which is also suitable for computer

    manipulations). For example:

    Therefore, the Gaussian Elimination algorithm applied to the augmented matrix

    begins with:

    Which, at the end of the first part (Gaussian elimination, zeros only under the

    leading 1) of the algorithm, looks like this:

    That is, it is in row echelon form.

At the end of the algorithm, if Gauss-Jordan elimination (zeros under and above the leading 1) is applied:

    That is, it is in reduced row echelon form, or row canonical form.
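The same reduction can be reproduced with a computer algebra system; a small sketch using the sympy library, where the 3x4 augmented matrix is only an illustration (it is not the matrix the text refers to).

from sympy import Matrix

aug = Matrix([[ 2,  1, -1,   8],
              [-3, -1,  2, -11],
              [-2,  1,  2,  -3]])     # illustrative augmented matrix [A | b]

rref_form, pivot_cols = aug.rref()    # Gauss-Jordan: zeros below and above each leading 1
print(rref_form)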


    APPLICATIONS

    Finding the inverse of a matrix

Suppose A is a square matrix whose inverse needs to be calculated. The identity matrix is augmented to the right of A, forming the block matrix B = [A, I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves A⁻¹ in the right block of B.

    If the algorithm is unable to reduce A to triangular form, then A is not

    invertible.
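A minimal Python sketch of this inversion procedure (augment A with the identity matrix, then run Gauss-Jordan on the block matrix); the function name is illustrative and pivot selection is kept as simple as possible.

def invert(A):
    """Invert a square matrix (list of lists) by reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # build the block matrix B = [A | I]
    B = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # switch rows so the pivot entry is nonzero
        pivot = next((r for r in range(col, n) if B[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        B[col], B[pivot] = B[pivot], B[col]
        # scale the pivot row so the pivot becomes 1
        p = B[col][col]
        B[col] = [v / p for v in B[col]]
        # eliminate the entries above and below the pivot
        for r in range(n):
            if r != col and B[r][col] != 0:
                factor = B[r][col]
                B[r] = [rv - factor * pv for rv, pv in zip(B[r], B[col])]
    return [row[n:] for row in B]             # right block now holds the inverse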

    General algorithm to compute ranks and bases

    The Gaussian elimination algorithm can be applied to any matrix A. If

    we get "stuck" in a given column, we move to the next column. In this way, for

    example, some matrices can be transformed to a matrix that has a

    reduced row echelon form like

    (The *'s are arbitrary entries). This echelon matrix T contains a wealth of

    information about A: the rank of A is 5 since there are 5 non-zero rows in T; the

    vector space spanned by the columns of A has a basis consisting of the first,

    third, fourth, seventh and ninth column of A (the columns of the ones in T), and

    the *'s tell you how the other columns of A can be written as linear combinations

    of the basis columns.
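This relationship can be checked directly with sympy; the 3x4 matrix below is only a stand-in (not the matrix the text describes), chosen so that it has rank 2.

from sympy import Matrix

A = Matrix([[1, 2, 0,  3],
            [2, 4, 1,  8],
            [3, 6, 1, 11]])

T, pivots = A.rref()      # reduced row echelon form and the pivot columns
print(A.rank())           # 2: the number of non-zero rows of T
print(pivots)             # (0, 2): these columns of A form a basis of its column space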


    Analysis

    Gaussian elimination to solve a system of n equations for n unknowns

requires n(n+1)/2 divisions, (2n³ + 3n² - 5n)/6 multiplications, and (2n³ + 3n² - 5n)/6 subtractions, for a total of approximately 2n³/3 operations. So it has a complexity of O(n³).
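Summing the stated counts is a quick way to check the leading term; in LaTeX,

\frac{n(n+1)}{2} + 2\cdot\frac{2n^{3}+3n^{2}-5n}{6}
   = \frac{2n^{3}}{3} + \frac{3n^{2}}{2} - \frac{7n}{6}
   \approx \frac{2n^{3}}{3} \quad\text{for large } n.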

    This algorithm can be used on a computer for systems with thousands of

    equations and unknowns. However, the cost becomes prohibitive for systems

    with millions of equations. These large systems are generally solved using

    iterative methods. Specific methods exist for systems whose coefficients follow a

    regular pattern (see system of linear equations).

Gaussian elimination can be performed over any field.

    Gaussian elimination is numerically stable for diagonally dominant or positive-

    definite matrices. For general matrices, Gaussian elimination is usually

considered to be stable in practice if partial pivoting is used (a minimal sketch is given below), even though there are examples for which it is unstable.
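Partial pivoting means choosing, at each elimination step, the row whose entry in the pivot column is largest in absolute value and switching it into the pivot position; a minimal Python sketch of that selection rule (illustrative, not taken from the paper).

def pivot_index(A, col, start):
    """Index (>= start) of the row with the largest |entry| in the given column."""
    return max(range(start, len(A)), key=lambda r: abs(A[r][col]))

# In the forward-elimination sketch given earlier, the "first nonzero entry" rule
# would be replaced by:
#     p = pivot_index(A, col, pivot_row)
#     A[pivot_row], A[p] = A[p], A[pivot_row]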

    Higher order tensors

    Gaussian elimination does not generalize in any simple way to higher order

    tensors (matrices are order 2 tensors); even computing the rank of a tensor of

    order greater than 2 is a difficult problem.

Gaussian elimination is a method for solving matrix equations of the form

Ax = b    (1)

    To perform Gaussian elimination starting with the system of equations

    (2)

    compose the "augmented matrix equation"


    (3)

    Here, the column vector in the variables is carried along for labeling the matrix

    rows. Now, perform elementary row operations to put the augmented matrix

    into the upper triangular form

    (4)

Solve the equation of the last row for the last unknown, then substitute back into the equation of the second-to-last row to obtain a solution for its unknown, etc., according to the formula

    (5)

In Mathematica, RowReduce performs a version of Gaussian elimination, with the equation being solved by

GaussianElimination[m_?MatrixQ, v_?VectorQ] :=
  Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]

    LU decomposition of a matrix is frequently used as part of a Gaussian

    elimination process for solving a matrix equation.
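For instance, the LU factors and the two triangular solves can be obtained with the scipy library; the 3x3 matrix below is purely illustrative.

import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
b = np.array([8., -11., -3.])

P, L, U = lu(A)                               # A = P L U (permutation, lower, upper)
y = solve_triangular(L, P.T @ b, lower=True)  # forward substitution
x = solve_triangular(U, y)                    # back substitution
print(x)                                      # solution of A x = b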

    A matrix that has undergone Gaussian elimination is said to be in echelon form.

    For example, consider the matrix equation

    (6)


    In augmented form, this becomes

    (7)

    Switching the first and third rows (without switching the elements in the right-

    hand column vector) gives

    (8)

    Subtracting 9 times the first row from the third row gives

    (9)

    Subtracting 4 times the first row from the second row gives

    (10)

    Finally, adding times the second row to the third row gives

    (11)

    Restoring the transformed matrix equation gives

    (12)

which can be solved immediately to give the last unknown; back-substituting then gives the second unknown (which actually follows trivially in this example), and back-substituting once more gives the first.


    Systems of Linear Equations: Gaussian Elimination

    It is quite hard to solve non-linear systems of equations, while linear systems are

quite easy to study. There are numerical techniques which help to approximate nonlinear systems with linear ones in the hope that the solutions of the linear

    systems are close enough to the solutions of the nonlinear systems. We will not

    discuss this here. Instead, we will focus our attention on linear systems.

    For the sake of simplicity, we will restrict ourselves to three, at most four,

    unknowns. The reader interested in the case of more unknowns may easily

    extend the following ideas.

    Definition. The equation

    a x + b y + c z + d w = h

where a, b, c, d, and h are known numbers, while x, y, z, and w are unknown numbers, is called a linear equation. If h = 0, the linear equation is said to be

    homogeneous. A linear system is a set of linear equations and a homogeneous

    linear system is a set of homogeneous linear equations.

    For example,

and

are linear systems, while


is a nonlinear system (because of the y² term). The system

is a homogeneous linear system.

    Matrix Representation of a Linear System

    Matrices are helpful in rewriting a linear system in a very simple form. The

    algebraic properties of matrices may then be used to solve systems. First,

    consider the linear system

    Set the matrices

Using matrix multiplication, we can rewrite the linear system above as the

    matrix equation


As you can see, this is far nicer than the equations. But sometimes it is worthwhile to solve the system directly without going through the matrix form. The matrix A is called the matrix coefficient of the linear system. The matrix C is called the nonhomogeneous term. When C = 0, the linear system is homogeneous. The

    matrix X is the unknown matrix. Its entries are the unknowns of the linear

    system. The augmented matrix associated with the system is the matrix [A|C],

    where

In general, if the linear system has n equations in m unknowns, then the matrix coefficient will be an n×m matrix and the augmented matrix an n×(m+1) matrix.
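In symbols, the general shapes just described can be written out as follows (a generic restatement in LaTeX, with a_{ij}, x_j and c_i as placeholder symbols):

A X = C, \qquad
A = \begin{pmatrix} a_{11} & \cdots & a_{1m}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nm} \end{pmatrix},\;
X = \begin{pmatrix} x_{1}\\ \vdots\\ x_{m} \end{pmatrix},\;
C = \begin{pmatrix} c_{1}\\ \vdots\\ c_{n} \end{pmatrix},\qquad
[A\,|\,C] = \left(\begin{array}{ccc|c}
a_{11} & \cdots & a_{1m} & c_{1}\\
\vdots & & \vdots & \vdots\\
a_{n1} & \cdots & a_{nm} & c_{n}
\end{array}\right).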

    Now we turn our attention to the solutions of a system.

    Definition. Two linear systems with n unknowns are said to be equivalent if and

    only if they have the same set of solutions.

    This definition is important since the idea behind solving a system is to find an

    equivalent system which is easy to solve. You may wonder how we will come up

with such a system. Easily: we do that through elementary operations. Indeed, it is clear that if we interchange two equations, the new system is still equivalent to the old one. If we multiply an equation by a nonzero number, we obtain a new system still equivalent to the old one. And finally, if we replace one equation with the sum of that equation and a multiple of another, we again obtain an equivalent system. These operations are called elementary operations on systems. Let us see how they work in a particular case.

    Example. Consider the linear system


    The idea is to keep the first equation and work on the last two. In doing that, we

will try to eliminate one of the unknowns and solve for the other two. For example, if

    we keep the first and second equation, and subtract the first one from the last

    one, we get the equivalent system

    Next we keep the first and the last equation, and we subtract the first from the

    second. We get the equivalent system

    Now we focus on the second and the third equation. We repeat the same

procedure and try to eliminate one of the two unknowns (y or z). Indeed, we keep the first and second equation, and we add the second to the third after multiplying it by 3. We get

This obviously implies z = -2. From the second equation, we get y = -2, and finally, from the first equation, we get x = 4. Therefore the linear system has one

    solution


    Going from the last equation to the first while solving for the unknowns is called

    backsolving.

Keep in mind that linear systems for which the matrix coefficient is upper-triangular are easy to solve. This is particularly true if the matrix is in echelon form. So the trick is to perform elementary operations to transform the initial linear system into another one for which the coefficient matrix is in echelon form.

    Using our knowledge about matrices, is there any way we can rewrite what we

    did above in matrix form which will make our notation (or representation)

    easier? Indeed, consider the augmented matrix

Let us perform some elementary row operations on this matrix. Indeed, if we keep the first and second row, and subtract the first one from the last one, we get

    Next we keep the first and the last rows, and we subtract the first from the

    second. We get

    Then we keep the first and second row, and we add the second to the third after

    multiplying it by 3 to get


This is a triangular matrix which is not in echelon form. The linear system for which this is the augmented matrix is

    As you can see we obtained the same system as before. In fact, we followed the

    same elementary operations performed above. In every step the new matrix was

exactly the augmented matrix associated with the new system. This shows that instead of writing the systems over and over again, it is easy to play around with the elementary row operations and, once we obtain a triangular matrix, write the

    associated linear system and then solve it. This is known as Gaussian

    Elimination.
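The three elementary row operations used above can also be written as small Python helpers acting on an augmented matrix stored as a list of lists (names and representation are illustrative).

def swap_rows(M, i, j):
    """Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, k):
    """Multiply row i by a nonzero constant k."""
    M[i] = [k * v for v in M[i]]

def add_multiple(M, i, j, k):
    """Add k times row j to row i."""
    M[i] = [vi + k * vj for vi, vj in zip(M[i], M[j])]

With these helpers, the reduction described above reads add_multiple(M, 2, 0, -1), then add_multiple(M, 1, 0, -1), then add_multiple(M, 2, 1, 3), using zero-based row indices.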

    GAUSS-JORDAN METHOD:

The Gauss-Jordan method is a version of Gaussian elimination for solving

    systems of linear equations. The variables' coefficients, instead of merely being

    reduced to a triangular shape, are reduced to a diagonal. This eliminates the

    need for successive substitution, allowing one to just read off the solutions.

    Gauss-Jordan Elimination

    Gauss-Jordan elimination goes the extra step of using such operations to

    eliminate variables above the diagonal as well.

    As a result, one can just read off the solution, for example, that x1 = -1, x2

    = 2, and so on. The need for back-substitution to solve for each variable,


as in Gaussian elimination, is therefore eliminated.

    Difference from Gaussian Elimination

The additional operations Gauss-Jordan performs to put the variables into diagonal form increase the number of computations required by roughly half, even after accounting for Gaussian elimination's back-substitution operations.

    The gain, however, is in being able to read answers off immediately.

    Disadvantages

1. The additional operations of Gauss-Jordan add to rounding error and computer time. A disadvantage of both Gaussian and Gauss-Jordan elimination is that they require the right-hand vector, for example (4, 1, -3, 4) above, to be known. If these numbers only become known later, a method called matrix factorization can prepare the triangular form in advance for easy calculation once the vector is known. If the vector later changes, the factorization saves that effort as well.

Elimination Method for Solving Systems of Linear Equations

    A technique for solving systems of linear equations of any size is the Gauss-

    Jordan Elimination Method. This method uses a sequence of operations on a

    system of linear equations to obtain an equivalent system at each stage. An

    equivalent system is a system having the same solution as the original system.

The operations of the Gauss-Jordan method are:

1. Interchange any two equations.

    2. Replace an equation by a nonzero constant multiple of itself.

    3. Replace an equation by the sum of that equation and a constant multiple of

    any other equation.


    EXAMPLES:

1. Let's solve the system:

2x1 - 3x2 = -6
5x1 + 4x2 = 31

    We form the matrix (A|b) and reduce it:

    This is the matrix (A|b).

    Row 1:= 1/2 x row 1

    Row 2:= row 2 - 5 x row1

    Row 2:= 2/23 x row 2

    Row 1:= row 1 + 3/2 row 2

    The reduced matrix represents the system

    1x1 + 0x2 = 3

    0x1 + 1x2 = 4

which has the unique solution (x1, x2) = (3, 4). This is also the unique

    solution to the original system.
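The intermediate matrices omitted above can be reconstructed from the stated row operations; in LaTeX, the reduction (consistent with those steps and with the solution (3, 4)) is

\left(\begin{array}{cc|c} 2 & -3 & -6\\ 5 & 4 & 31 \end{array}\right)
\xrightarrow{R_1 := \frac12 R_1}
\left(\begin{array}{cc|c} 1 & -\frac32 & -3\\ 5 & 4 & 31 \end{array}\right)
\xrightarrow{R_2 := R_2 - 5R_1}
\left(\begin{array}{cc|c} 1 & -\frac32 & -3\\ 0 & \frac{23}{2} & 46 \end{array}\right)
\xrightarrow{R_2 := \frac{2}{23} R_2}
\left(\begin{array}{cc|c} 1 & -\frac32 & -3\\ 0 & 1 & 4 \end{array}\right)
\xrightarrow{R_1 := R_1 + \frac32 R_2}
\left(\begin{array}{cc|c} 1 & 0 & 3\\ 0 & 1 & 4 \end{array}\right).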


    2. Let's solve the system

    x1 + x2 + x3 = 5

    x1 + 2x2 + 3x3 = 8

    We form the matrix (A|b) and reduce it:

    This is the matrix (A|b)

Row 2 := row 2 - row 1   (the matrix is now in echelon form)

Row 1 := row 1 - row 2   (the matrix is now reduced)

    The reduced matrix represents the system

    1x1 + 0x2 - x3 = 2

    0x1 + 1x2 + 2x3 = 3

which has infinitely many solutions:

    x1 = 2 + x3, x2 = 3 - 2x3, x3 = any number

We wrote the solution to this exercise above as two equations:

x1 = 2 + x3, x2 = 3 - 2x3   (x3 ∈ R)

Notice that these two equations have exactly the same meaning as the single vector equation

(x1, x2, x3) = (2, 3, 0) + x3 (1, -2, 1).

When we write the solution this way, it is clear that the solution set is the line through the point (2, 3, 0) that is parallel to the vector (1, -2, 1).
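For completeness, the two reduction steps of this example, reconstructed in the same way:

\left(\begin{array}{ccc|c} 1 & 1 & 1 & 5\\ 1 & 2 & 3 & 8 \end{array}\right)
\xrightarrow{R_2 := R_2 - R_1}
\left(\begin{array}{ccc|c} 1 & 1 & 1 & 5\\ 0 & 1 & 2 & 3 \end{array}\right)
\xrightarrow{R_1 := R_1 - R_2}
\left(\begin{array}{ccc|c} 1 & 0 & -1 & 2\\ 0 & 1 & 2 & 3 \end{array}\right).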


    CONCLUSION

Gaussian elimination is a method of solving a linear system (consisting of m equations in n unknowns) by bringing the augmented matrix to triangular (echelon) form. In the Gaussian Elimination Method, Elementary Row Operations (E.R.O.'s) are applied in a specific order to transform an augmented matrix into triangular echelon form as efficiently as possible.

    This is the essence of the method: Given a system of m equations in n variables

    or unknowns, pick the first equation and subtract suitable multiples of it from

    the remaining m-1 equations. In each case choose the multiple so that the

    subtraction cancels or eliminates the same variable, say x1. The result is that the

    remaining m-1 equations contain only n-1 unknowns (x1 no longer appears).

    Now set aside the first equation and repeat the above process with the remaining

    m-1 equations in n-1 unknowns.

    Continue repeating the process. Each cycle reduces the number of variables and

    the number of equations. The process stops when either:

There remains one equation in one variable. In that case, there is a unique solution, and back-substitution is used to find the values of the other variables.

There remain variables but no equations. In that case, there is no unique solution.

There remain equations but no variables (i.e. the lowest row(s) of the augmented matrix contain only zeros on the left side of the vertical line). This indicates that either the system of equations is inconsistent or redundant. In the case of inconsistency, the information contained in the equations is contradictory. In the case of redundancy, there may still be a unique solution, and back-substitution can be used to find the values of the other variables.


    REFERENCES

1. HIGHER ENGINEERING MATHEMATICS BY B.V. RAMANA.

2. MODERN APPROACH TO MATHEMATICS BY N.K. NAG.

3. S. CHAND MATHEMATICS FOR CLASS XII BY S. CHAND PUBLICATIONS.

    4. www.google.com
