Matrices Definition

Anti-diagonal matrix

In mathematics, an anti-diagonal matrix is a matrix where all the entries are zero except those on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-diagonal.

More precisely, an n-by-n matrix A is an anti-diagonal matrix if the (i, j) element is zero for all i, j ∈ {1, …, n} with i + j ≠ n + 1.

An example of an anti-diagonal matrix is:
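(The nonzero values below are illustrative; any entries placed on the anti-diagonal satisfy the definition.)

\[
\begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 3 & 0 & 0 \end{pmatrix}
\]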

All anti-diagonal matrices are also persymmetric. The product of two anti-diagonal matrices is a diagonal matrix.

Augmented matrix

In linear algebra, an augmented matrix is obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of them.

Given matrices A and B, the augmented matrix (A|B) is formed by writing the columns of B to the right of the columns of A.

This is useful when solving systems of linear equations; the augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
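For instance (the entries are illustrative, not the example originally printed), with

\[
A = \begin{pmatrix} 1 & 3 & 2 \\ 2 & 0 & 1 \\ 5 & 2 & 2 \end{pmatrix},
\qquad
B = \begin{pmatrix} 4 \\ 3 \\ 1 \end{pmatrix},
\]

the augmented matrix is

\[
(A|B) = \left(\begin{array}{ccc|c} 1 & 3 & 2 & 4 \\ 2 & 0 & 1 & 3 \\ 5 & 2 & 2 & 1 \end{array}\right).
\]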


Band matrix

In mathematics, particularly matrix theory, a band matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side.

Formally, an n×n matrix A = (ai,j) is a band matrix if all matrix elements are zero outside a diagonally bordered band whose range is determined by constants k1 and k2:

ai,j = 0 whenever j < i − k1 or j > i + k2, with k1, k2 ≥ 0.

The quantities k1 and k2 are the left and right half-bandwidth, respectively. The bandwidth of the matrix is k1 + k2 + 1 (in other words, the smallest number of adjacent diagonals to which the non-zero elements are confined).

A band matrix with k1 = k2 = 0 is a diagonal matrix; a band matrix with k1 = k2 = 1 is a tridiagonal matrix; when k1 = k2 = 2 one has a pentadiagonal matrix, and so on. If one puts k1 = 0, k2 = n−1, one obtains the definition of an upper triangular matrix; similarly, for k1 = n−1, k2 = 0 one obtains a lower triangular matrix.

Conjugate transpose "Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint matrix".

In mathematics, the conjugate transpose, Hermitian transpose, or adjoint matrix of an m-by-

n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking

thetranspose and then taking the complex conjugate of each entry (i.e. negating their imaginary

parts but not their real parts). The conjugate transpose is formally defined by

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and the overbar

denotes a scalar complex conjugate. (The complex conjugate of a + bi, where a and b are

reals, isa − bi.)

This definition can also be written as

where denotes the transpose and denotes the matrix with complex

conjugated entries.

Other names for the conjugate transpose of a matrix are Hermitian conjugate,

or transjugate. The conjugate transpose of a matrix A can be denoted by any of these

symbols:


• AH or A*, commonly used in linear algebra
• A† (sometimes pronounced "A dagger"), universally used in quantum mechanics
• A+, although this symbol is more commonly used for the Moore-Penrose pseudoinverse

In some contexts, Ā denotes the matrix with complex conjugated entries, and the conjugate transpose is then denoted by ĀT or by the entrywise conjugate of AT.

Example
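If, for instance (an illustrative matrix, not necessarily the one originally shown),

\[
A = \begin{pmatrix} 1 & -2 - i & 5 \\ 1 + i & i & 4 - 2i \end{pmatrix},
\]

then

\[
A^* = \begin{pmatrix} 1 & 1 - i \\ -2 + i & -i \\ 5 & 4 + 2i \end{pmatrix}.
\]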

Diagonal matrix

In linear algebra, a diagonal matrix is a square matrix in which the entries outside the main diagonal (↘) are all zero. The diagonal entries themselves may or may not be zero. Thus, the matrix D = (di,j) with n columns and n rows is diagonal if di,j = 0 whenever i ≠ j.

For example, the following matrix is diagonal:
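(Entries are illustrative; any values on the main diagonal, including zeros, are allowed.)

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & -3 \end{pmatrix}
\]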

The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with only the entries of the form di,i possibly non-zero; for example:
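(Two illustrative cases, one wide and one tall; only the di,i entries are nonzero.)

\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \end{pmatrix}
\qquad \text{or} \qquad
\begin{pmatrix} 1 & 0 \\ 0 & 4 \\ 0 & 0 \end{pmatrix}
\]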


Hermitian matrix

A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is equal to its own conjugate transpose – that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

\[ a_{i,j} = \overline{a_{j,i}}. \]

If the conjugate transpose of a matrix A is denoted by A*, then the Hermitian property can be written concisely as A = A*.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

Examples

For example, the following matrix is Hermitian:
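(Entries are illustrative; note the real diagonal and the conjugate-symmetric off-diagonal pairs.)

\[
\begin{pmatrix} 2 & 2 + i & 4 \\ 2 - i & 3 & i \\ 4 & -i & 1 \end{pmatrix}
\]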

Identity matrix

In linear algebra, the identity matrix or unit matrix of size n is the n-by-n square matrix with ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, 1; otherwise it is identical to I.)

Some mathematics books use U and E to represent the identity matrix (meaning "unit matrix" and "elementary matrix", or from the German "Einheitsmatrix",[1] respectively), although I is considered more universal.

The important property of the identity matrix with respect to matrix multiplication is that, for any m-by-n matrix A,

Im A = A In = A.

Invertible matrix

In linear algebra, an n-by-n (square) matrix A is called invertible or nonsingular or nondegenerate if there exists an n-by-n matrix B such that

AB = BA = In,

where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A−1. It follows from the theory of matrices that if AB = I for square matrices A and B, then also BA = I.[1]

Non-square matrices (m-by-n matrices for which m ≠ n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n, then A has a left inverse: an n-by-m matrix B such that BA = I. If A has rank m, then it has a right inverse: an n-by-m matrix B such that AB = I.

While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any commutative ring.

A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. Singular matrices are rare in the sense that if you pick a random square matrix, it will almost surely not be singular.

Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.
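A quick numerical illustration of these definitions, using NumPy with arbitrary example values:

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                 # det(A) = 1, so A is invertible

A_inv = np.linalg.inv(A)                   # compute A^-1
print(np.allclose(A @ A_inv, np.eye(2)))   # True: A A^-1 = I
print(np.allclose(A_inv @ A, np.eye(2)))   # True: A^-1 A = I

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # second row is twice the first
print(np.linalg.det(S))                    # ~0, so S is singular and has no inverse
# np.linalg.inv(S) would raise numpy.linalg.LinAlgError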


Involutory matrix

In mathematics, an involutory matrix is a matrix that is its own inverse. That is, a matrix A is an involution if and only if A² = I. One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which represents multiplication of a row or column by −1, is also involutory; it is in fact a trivial example of a signature matrix, all of which are involutory.

Involutory matrices are all square roots of the identity matrix. This is simply a consequence of the fact that any nonsingular matrix multiplied by its inverse is the identity. If A is an n × n matrix, then A is involutory if and only if ½(A + I) is idempotent.

An involutory matrix which is also symmetric is an orthogonal matrix, and thus represents an isometry (a linear transformation which preserves Euclidean distance). A reflection matrix is an example of an involutory matrix.

Irregular matrix

An irregular matrix, or ragged matrix, can be described as a matrix that has a different number of elements in each row. Ragged matrices are not used in linear algebra, since standard matrix transformations cannot be performed on them, but they are useful as arrays in computing. Irregular matrices are typically stored using Iliffe vectors.

For example, the following is an irregular matrix:
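In code, an irregular matrix is naturally represented as an Iliffe-vector-style structure, i.e. a list of rows of differing lengths (the values below are illustrative):

# Rows have different lengths, so this cannot be stored as a rectangular 2-D array.
irregular = [
    [1, 3, 5, 2],
    [7, 0],
    [4, 8, 9],
]

# Each row carries its own length.
for row in irregular:
    print(len(row), row)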


List of matrices

This page lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions on the entries, including constant matrices. An important example is the identity matrix, with ones on the main diagonal and zeros elsewhere.

Further ways of classifying matrices are according to their eigenvalues, or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas.

Matrices with explicitly constrained entries

The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices only, that is, matrices with the same number of columns and rows. The main diagonal of a square matrix is the diagonal joining the upper left corner and the lower right one, or equivalently the entries ai,i. The other diagonal is called the anti-diagonal (or counter-diagonal).


• (0,1)-matrix: A matrix with all elements either 0 or 1. Synonym for binary matrix and logical matrix.
• Alternant matrix: A matrix in which successive columns have a particular function applied to their entries.
• Anti-diagonal matrix: A square matrix with all entries off the anti-diagonal equal to zero.
• Anti-Hermitian matrix: Synonym for skew-Hermitian matrix.
• Anti-symmetric matrix: Synonym for skew-symmetric matrix.
• Arrowhead matrix: A square matrix containing zeros in all entries except for the first row, first column, and main diagonal.
• Band matrix: A square matrix whose non-zero entries are confined to a diagonal band.
• Bidiagonal matrix: A matrix with elements only on the main diagonal and either the superdiagonal or subdiagonal. Sometimes defined differently; see article.
• Binary matrix: A matrix whose entries are all either 0 or 1. Synonym for (0,1)-matrix and logical matrix.[1]
• Bisymmetric matrix: A square matrix that is symmetric with respect to its main diagonal and its main cross-diagonal.
• Block-diagonal matrix: A block matrix with entries only on the diagonal.
• Block matrix: A matrix partitioned in sub-matrices called blocks.
• Block tridiagonal matrix: A block matrix which is essentially a tridiagonal matrix but with submatrices in place of scalar elements.
• Cauchy matrix: A matrix whose elements are of the form 1/(xi + yj) for (xi), (yj) injective sequences (i.e., taking every value only once).
• Centrosymmetric matrix: A matrix symmetric about its center; i.e., aij = an−i+1,n−j+1.
• Conference matrix: A square matrix with zero diagonal and +1 and −1 off the diagonal, such that CTC is a multiple of the identity matrix.
• Complex Hadamard matrix: A matrix with all rows and columns mutually orthogonal, whose entries are unimodular.
• Copositive matrix: A square matrix A with real coefficients, such that f(x) = xTAx is nonnegative for every nonnegative vector x.
• Diagonally dominant matrix: |aii| > Σj≠i |aij|.
• Diagonal matrix: A square matrix with all entries off the main diagonal equal to zero.
• Elementary matrix: A square matrix derived by applying an elementary row operation to the identity matrix.
• Equivalent matrix: A matrix that can be derived from another matrix through a sequence of elementary row or column operations.
• Frobenius matrix: A square matrix in the form of an identity matrix but with arbitrary entries in one column below the main diagonal.
• Generalized permutation matrix: A square matrix with precisely one nonzero element in each row and column.
• Integer matrix: A matrix whose entries are all integers.
• Hadamard matrix: A square matrix with entries +1, −1 whose rows are mutually orthogonal.
• Hankel matrix: A matrix with constant skew-diagonals; also an upside down Toeplitz matrix. A square Hankel matrix is symmetric.
• Hermitian matrix: A square matrix which is equal to its conjugate transpose, A = A*.
• Hessenberg matrix: An "almost" triangular matrix; for example, an upper Hessenberg matrix has zero entries below the first subdiagonal.
• Hollow matrix: A square matrix whose main diagonal comprises only zero elements.
• Logical matrix: A matrix with all entries either 0 or 1. Synonym for (0,1)-matrix or binary matrix; can be used to represent a k-adic relation.
• Metzler matrix: A matrix whose off-diagonal entries are non-negative.
• Monomial matrix: A square matrix with exactly one non-zero entry in each row and column. Synonym for generalized permutation matrix.
• Moore matrix: A row consists of 1, a, a^q, a^q², etc., and each row uses a different variable.
• Nonnegative matrix: A matrix with all nonnegative entries.
• Partitioned matrix: A matrix partitioned into sub-matrices, or equivalently, a matrix whose entries are themselves matrices rather than scalars. Synonym for block matrix.
• Pentadiagonal matrix: A matrix with the only nonzero entries on the main diagonal and the two diagonals just above and below the main one.
• Permutation matrix: A matrix representation of a permutation; a square matrix with exactly one 1 in each row and column, and all other elements 0.
• Persymmetric matrix: A matrix that is symmetric about its northeast-southwest diagonal, i.e., aij = an−j+1,n−i+1.
• Polynomial matrix: A matrix whose entries are polynomials.
• Positive matrix: A matrix with all positive entries.
• Sign matrix: A matrix whose entries are either +1, 0, or −1.
• Signature matrix: A diagonal matrix where the diagonal elements are either +1 or −1.
• Skew-Hermitian matrix: A square matrix which is equal to the negative of its conjugate transpose, A* = −A.
• Skew-symmetric matrix: A matrix which is equal to the negative of its transpose, AT = −A.
• Skyline matrix: A rearrangement of the entries of a banded matrix which requires less space.
• Sparse matrix: A matrix with relatively few non-zero elements. Sparse matrix algorithms can tackle huge sparse matrices that are utterly impractical for dense matrix algorithms.
• Sylvester matrix: A square matrix whose entries come from coefficients of two polynomials. The Sylvester matrix is nonsingular if and only if the two polynomials are coprime to each other.
• Symmetric matrix: A square matrix which is equal to its transpose, A = AT (ai,j = aj,i).
• Toeplitz matrix: A matrix with constant diagonals.
• Triangular matrix: A matrix with all entries above the main diagonal equal to zero (lower triangular) or with all entries below the main diagonal equal to zero (upper triangular).
• Tridiagonal matrix: A matrix with the only nonzero entries on the main diagonal and the diagonals just above and below the main one.
• Unitary matrix: A square matrix whose inverse is equal to its conjugate transpose, A−1 = A*.
• Vandermonde matrix: A row consists of 1, a, a², a³, etc., and each row uses a different variable.
• Walsh matrix: A square matrix, with dimensions a power of 2, the entries of which are +1 or −1.
• Z-matrix: A matrix with all off-diagonal entries less than zero.

Constant matrices

The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denoted aij. The list below uses the Kronecker symbol δij for two integers i and j, which is 1 if i = j and 0 otherwise.

• Exchange matrix: A binary matrix with ones on the anti-diagonal and zeroes everywhere else; aij = δn+1−i,j. A permutation matrix.
• Hilbert matrix: aij = 1/(i + j − 1). A Hankel matrix.
• Identity matrix: A square diagonal matrix, with all entries on the main diagonal equal to 1, and the rest 0; aij = δij.
• Lehmer matrix: aij = min(i, j) ÷ max(i, j). A positive symmetric matrix.
• Matrix of ones: A matrix with all entries equal to one; aij = 1.
• Pascal matrix: A matrix containing the entries of Pascal's triangle.
• Pauli matrices: A set of three 2 × 2 complex Hermitian and unitary matrices. When combined with the I2 identity matrix, they form an orthogonal basis for the 2 × 2 complex Hermitian matrices.
• Redheffer matrix: aij is 1 if i divides j or if j = 1; otherwise, aij = 0. A (0,1)-matrix.
• Shift matrix: A matrix with ones on the superdiagonal or subdiagonal and zeroes elsewhere; aij = δi+1,j or aij = δi−1,j. Multiplication by it shifts matrix elements by one position.
• Zero matrix: A matrix with all entries equal to zero; aij = 0.

Matrices with conditions on eigenvalues or eigenvectors

• Companion matrix: A matrix whose eigenvalues are equal to the roots of the polynomial.
• Defective matrix: A square matrix that does not have a complete basis of eigenvectors, and is thus not diagonalisable.
• Diagonalizable matrix: A square matrix similar to a diagonal matrix. It has an eigenbasis, that is, a complete set of linearly independent eigenvectors.
• Hurwitz matrix: A matrix whose eigenvalues have strictly negative real part. A stable system of differential equations may be represented by a Hurwitz matrix.
• Positive-definite matrix: A Hermitian matrix with every eigenvalue positive.
• Stability matrix: Synonym for Hurwitz matrix.
• Stieltjes matrix: A real symmetric positive definite matrix with nonpositive off-diagonal entries. Special case of an M-matrix.

Matrices satisfying conditions on products or inverses

A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix product of an m-by-n matrix A and an n-by-k matrix B is the m-by-k matrix C given by

ci,j = ai,1 b1,j + ai,2 b2,j + … + ai,n bn,j.

This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is to say AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity. An inverse of a square matrix A is a matrix B (necessarily of the same dimension as A) such that AB = I; equivalently, BA = I. An inverse need not exist. If it exists, B is uniquely determined, and is also called the inverse of A, denoted A−1.

• Congruent matrix: Two matrices A and B are congruent if there exists an invertible matrix P such that PT A P = B. Compare with similar matrices.
• Idempotent matrix: A matrix that has the property A² = AA = A.
• Invertible matrix: A square matrix having a multiplicative inverse, that is, a matrix B such that AB = BA = I. Invertible matrices form the general linear group.
• Involutory matrix: A square matrix which is its own inverse, i.e., AA = I. Signature matrices have this property.
• Nilpotent matrix: A square matrix satisfying Aq = 0 for some positive integer q. Equivalently, the only eigenvalue of A is 0.
• Normal matrix: A square matrix that commutes with its conjugate transpose: AA* = A*A. They are the matrices to which the spectral theorem applies.
• Orthogonal matrix: A matrix whose inverse is equal to its transpose, A−1 = AT. They form the orthogonal group.
• Orthonormal matrix: A matrix whose columns are orthonormal vectors.
• Similar matrix: Two matrices A and B are similar if there exists an invertible matrix P such that P−1AP = B. Compare with congruent matrices.
• Singular matrix: A square matrix that is not invertible.
• Unimodular matrix: An invertible matrix with entries in the integers (integer matrix). Necessarily the determinant is +1 or −1.
• Unipotent matrix: A square matrix with all eigenvalues equal to 1. Equivalently, A − I is nilpotent. See also unipotent group.
• Totally unimodular matrix: A matrix for which every non-singular square submatrix is unimodular. This has some implications in the linear programming relaxation of an integer program.
• Weighing matrix: A square matrix the entries of which are in {0, 1, −1}, such that AAT = wI for some positive integer w.

Matrices with specific applications

• Adjugate matrix: The transpose of the matrix of cofactors of a given square matrix. Used in calculating inverse matrices via Laplace's formula.
• Alternating sign matrix: A square matrix with entries 0, 1 and −1 such that the sum of each row and column is 1 and the nonzero entries in each row and column alternate in sign. Used in Dodgson condensation to calculate determinants.
• Augmented matrix: A matrix whose rows are concatenations of the rows of two smaller matrices. Used in calculating inverse matrices.
• Bézout matrix: A square matrix which may be used as a tool for the efficient location of polynomial zeros. Used in control theory and for stable polynomials.
• Carleman matrix: A matrix that converts composition of functions to multiplication of matrices.
• Cartan matrix: A matrix representing a non-semisimple finite-dimensional algebra, or a Lie algebra (the two meanings are distinct).
• Circulant matrix: A matrix where each row is a circular shift of its predecessor. Used in systems of linear equations and the discrete Fourier transform.
• Cofactor matrix: A matrix containing the cofactors, i.e., signed minors, of a given matrix.
• Commutation matrix: A matrix used for transforming the vectorized form of a matrix into the vectorized form of its transpose.
• Coxeter matrix: A matrix related to Coxeter groups, which describe symmetries in a structure or system.
• Distance matrix: A square matrix containing the distances, taken pairwise, of a set of points. Used in computer vision and network analysis. See also Euclidean distance matrix.
• Duplication matrix: A linear transformation matrix used for transforming half-vectorizations of matrices into vectorizations.
• Elimination matrix: A linear transformation matrix used for transforming vectorizations of matrices into half-vectorizations.
• Euclidean distance matrix: A matrix that describes the pairwise distances between points in Euclidean space. See also distance matrix.
• Fundamental matrix (linear differential equation): A matrix containing the fundamental solutions of a linear ordinary differential equation.
• Generator matrix: A matrix whose rows generate all elements of a linear code. Used in coding theory.
• Gramian matrix: A matrix containing the pairwise inner products of given vectors in an inner product space. Used to test linear independence of vectors, including ones in function spaces. They are real symmetric.
• Hessian matrix: A square matrix of second partial derivatives of a scalar-valued function. Used in detecting local minima and maxima of scalar-valued functions in several variables, and in blob detection (computer vision).
• Householder matrix: A transformation matrix widely used in matrix algorithms. Used in QR decomposition.
• Jacobian matrix: A matrix of first-order partial derivatives of a vector-valued function. Used in the implicit function theorem and for smooth morphisms (algebraic geometry).
• Payoff matrix: A matrix in game theory and economics that represents the payoffs in a normal form game where players move simultaneously.
• Pick matrix: A matrix that occurs in the study of analytical interpolation problems.
• Random matrix: A matrix whose entries consist of random numbers from some specified random distribution.
• Rotation matrix: A matrix representing a rotational geometric transformation. Related to the special orthogonal group and Euler angles.
• Seifert matrix: A matrix in knot theory, primarily for the algebraic analysis of topological properties of knots and links. Used for the Alexander polynomial.
• Shear matrix: An elementary matrix whose corresponding geometric transformation is a shear transformation.
• Similarity matrix: A matrix of scores which express the similarity between two data points. Used in sequence alignment.
• Symplectic matrix: A square matrix preserving a standard skew-symmetric form. Related to the symplectic group and symplectic manifolds.
• Totally positive matrix: A matrix with determinants of all its square submatrices positive. Used in generating the reference points of Bézier curves in computer graphics.
• Transformation matrix: A matrix representing a linear transformation, often from one co-ordinate space to another to facilitate a geometric transform or projection.
• Derogatory matrix: A square n×n matrix whose minimal polynomial is of order less than n.
• Moment matrix: A symmetric matrix whose elements are the products of common row/column index dependent monomials.
• X–Y–Z matrix: A generalisation of the (rectangular) matrix to a cuboidal form (a 3-dimensional array of entries).
• Substitution matrix

Main diagonal

In linear algebra, the main diagonal (sometimes leading diagonal or primary diagonal) of a matrix A is the collection of cells Ai,j where i is equal to j.

The main diagonal of a square matrix is the diagonal which runs from the top left corner to the bottom right corner. A square matrix in which the entries outside the main diagonal are all zero is called a diagonal matrix. The sum of the entries on the main diagonal of a matrix is known as the trace of that matrix.

The main diagonal of a rectangular matrix is the diagonal which runs from the top left corner and steps down and right until the right edge is reached.

The other diagonal is called the antidiagonal, counterdiagonal, secondary diagonal, or minor diagonal.

Modal matrix

In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.

Assume a linear system of the form

dX/dt = A X + B U,

where X is n×1, A is n×n, and B is n×1. X typically represents the state vector, and U the system input.

Specifically, the modal matrix M is the n×n matrix formed with the eigenvectors of A as columns in M. It is utilized in

D = M−1 A M,

where D is an n×n diagonal matrix with the eigenvalues of A on the main diagonal of D and zeros elsewhere. (Note that the eigenvalues should appear top to bottom in D in the same order as their eigenvectors are arranged left to right in M.)

This process is also known as the similarity transform.
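A small numerical sketch of the construction, using NumPy on an arbitrary diagonalizable matrix:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, M = np.linalg.eig(A)   # columns of M are eigenvectors of A (the modal matrix)
D = np.linalg.inv(M) @ A @ M        # similarity transform M^-1 A M

print(np.round(D, 10))              # diagonal, with the eigenvalues of A on the diagonal
print(eigenvalues)                  # the same eigenvalues, in the same order as the columns of M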


Nilpotent matrix

In linear algebra, a nilpotent matrix is a square matrix N such that

Nk = 0

for some positive integer k. The smallest such k is sometimes called the degree of N.

More generally, a nilpotent transformation is a linear transformation L of a vector space such that Lk = 0 for some positive integer k. Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.

Examples

A 2×2 matrix M whose only nonzero entry is a 1 in the upper right corner is nilpotent, since M² = 0. More generally, any triangular matrix with 0's along the main diagonal is nilpotent.

Though such examples have a large number of zero entries, a typical nilpotent matrix does not. For example, the matrices
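(Two illustrative choices, verified by direct multiplication; not necessarily the matrices originally printed:)

\[
\begin{pmatrix} 2 & -1 \\ 4 & -2 \end{pmatrix}
\qquad \text{and} \qquad
\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}
\]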


both square to zero, though neither matrix has zero entries.

Normal matrix

A complex square matrix A is a normal matrix if

A*A = AA*

where A* is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose.

If A is a real matrix, then A*=AT; it is normal if ATA = AAT.

Normality is a convenient test for diagonalizability: every normal matrix can be converted to a diagonal matrix by a unitary transform, and every matrix which can be made diagonal by a unitary transform is also normal, but finding the desired transform requires much more work than simply testing to see whether the matrix is normal.

Orthogonal matrix

In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns (or rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns are unit vectors in addition to being orthogonal, some people use the term orthonormal to describe such matrices.

Equivalently, a matrix Q is orthogonal if its transpose is equal to its inverse:

QT = Q−1,

or alternatively:

QTQ = QQT = I.

As a linear transformation, an orthogonal matrix preserves the dot product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation or reflection. In other words, it is a unitary transformation.

Pentadiagonal matrix

In linear algebra, a pentadiagonal matrix is a matrix that is nearly diagonal; to be exact, it is a matrix in which the only nonzero entries are on the main diagonal and the first two diagonals above and below it.

It follows that a pentadiagonal matrix has at most 5n − 6 nonzero entries, where n is the size of the matrix. Hence, pentadiagonal matrices are sparse. This makes them useful in numerical analysis.

Polynomial matrix

A polynomial matrix or matrix polynomial is a matrix whose elements are univariate or multivariate polynomials.

A univariate polynomial matrix P of degree p is defined as

P = A(0) + A(1) x + A(2) x² + … + A(p) x^p,

where A(i) denotes a matrix of constant coefficients, and A(p) is non-zero. Thus a polynomial matrix is the matrix-equivalent of a polynomial, with each element of the matrix satisfying the definition of a polynomial of degree p.

An example 3×3 polynomial matrix, degree 2:
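(The coefficients below are illustrative.)

\[
P(x) = \begin{pmatrix} 1 & x^2 & x \\ 0 & 2x & 2 \\ 3x + 2 & x^2 - 1 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 2 \\ 2 & -1 & 0 \end{pmatrix}
+ \begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 3 & 0 & 0 \end{pmatrix} x
+ \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} x^2.
\]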

We can express this by saying that for a ring R, the rings Mn(R[X]) and (Mn(R))[X] are isomorphic.

Rank (linear algebra)

The column rank of a matrix A is the maximal number of linearly independent columns of A. Likewise, the row rank is the maximal number of linearly independent rows of A.

Since the column rank and the row rank are always equal, they are simply called the rank of A. More abstractly, it is the dimension of the image of A. For the proofs, see, e.g., Murase (1960)[1], Andrea & Wong (1960)[2], Williams & Cater (1968)[3], Mackiw (1995)[4]. It is commonly denoted by either rk(A) or rank A.

The rank of an m × n matrix is at most min(m, n). A matrix that has rank as large as possible is said to have full rank; otherwise, the matrix is rank deficient.

Equivalence of definitions

A common approach is to reduce A to a simpler form, generally row-echelon form, by row operations: row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row-echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots and the number of non-zero rows.


Equivalence of the determinantal definition (rank of the largest non-vanishing minor) is generally proved separately. It is a generalization of the statement that if the span of n vectors has dimension p, then p of those vectors span the space: one can choose a spanning set that is a subset of the vectors. For determinantal rank, the statement is that if the row rank (column rank) of a matrix is p, then one can choose a p × p submatrix that is invertible: a subset of the rows and a subset of the columns simultaneously define an invertible submatrix. It can be alternatively stated as: if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent.

A non-vanishing p-minor (p × p submatrix with non-vanishing determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward.

Applications

One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. The system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank. This theorem is due to Rouché and Capelli.
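The Rouché–Capelli test translates directly into a short NumPy check (the system below is illustrative):

import numpy as np

# Coefficient matrix and right-hand side of an example system A x = b.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
b = np.array([[3.0], [6.0], [9.0]])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))   # rank of the augmented matrix

if rank_Ab > rank_A:
    print("inconsistent: no solution")
elif rank_A == A.shape[1]:
    print("unique solution")
else:
    print("infinitely many solutions,", A.shape[1] - rank_A, "free parameter(s)")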

In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.

Row vector

In linear algebra, a row vector or row matrix is a 1 × n matrix, that is, a matrix consisting of a single row:[1]

x = (x1 x2 … xn).

The transpose of a row vector is a column vector, and vice versa.

The set of all row vectors forms a vector space which is the dual space to the set of all column vectors.

Skew-Hermitian matrix

In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or antihermitian if its conjugate transpose is equal to its negative.[1] That is, the matrix A is skew-Hermitian if it satisfies the relation

A* = −A,

where A* denotes the conjugate transpose of the matrix. In component form, this means that

\[ a_{i,j} = -\overline{a_{j,i}} \]

for all i and j, where ai,j is the i,j-th entry of A, and the overline denotes complex conjugation.

Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers.[2] The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm.

Skew-symmetric matrix

In linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a square matrix A whose transpose is also its negative; that is, it satisfies the equation

AT = −A,

or in component form, if A = (aij):

aij = −aji for all i and j.

For example, the following matrix is skew-symmetric:
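(Illustrative entries; the diagonal is necessarily zero and opposite off-diagonal entries differ only in sign.)

\[
\begin{pmatrix} 0 & 2 & -1 \\ -2 & 0 & -4 \\ 1 & 4 & 0 \end{pmatrix}
\]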

Compare this with a symmetric matrix, whose transpose is the same as the matrix (AT = A), or with an orthogonal matrix, the transpose of which is equal to its inverse (AT = A−1).

Symmetric matrix

In linear algebra, a symmetric matrix is a square matrix, A, that is equal to its transpose: A = AT.

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right). So if the entries are written as A = (aij), then aij = aji for all indices i and j. The following 3×3 matrix is symmetric:
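(Illustrative entries; the matrix equals its own transpose.)

\[
\begin{pmatrix} 1 & 7 & 3 \\ 7 & 4 & -5 \\ 3 & -5 & 6 \end{pmatrix}
\]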

A matrix is called skew-symmetric or antisymmetric if its transpose is the same as its negative (see the example in the skew-symmetric section above). A matrix that satisfies neither condition is neither symmetric nor skew-symmetric.

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is generally assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.

Transpose

In linear algebra, the transpose of a matrix A is another matrix AT (also written A′, Atr or tA) created by any one of the following equivalent actions:

• write the rows of A as the columns of AT
• write the columns of A as the rows of AT
• reflect A by its main diagonal (which starts from the top left) to obtain AT

Formally, the (i,j) element of AT is the (j,i) element of A:

[AT]ij = [A]ji.

If A is an m × n matrix then AT is an n × m matrix. The transpose of a scalar is the same scalar.

Triangular matrix

In the mathematical discipline of linear algebra, a triangular matrix is a special kind of square matrix where the entries either below or above the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. The LU decomposition gives an algorithm to decompose any invertible matrix A into a normed lower triangular matrix L and an upper triangular matrix U.

A matrix whose entries above the main diagonal are all zero is called a lower triangular matrix or left triangular matrix, and analogously a matrix whose entries below the main diagonal are all zero is called an upper triangular matrix or right triangular matrix.

The standard operations on triangular matrices conveniently preserve the triangular form: the sum and product of two upper triangular matrices is again upper triangular. The inverse of an upper triangular matrix is also upper triangular, and of course we can multiply an upper triangular matrix by a constant and it will still be upper triangular. This means that the upper triangular matrices form a subalgebra of the ring of square matrices for any given size. The analogous result holds for lower triangular matrices. Note, however, that the product of a lower triangular with an upper triangular matrix does not preserve triangularity.

Tridiagonal matrix

In linear algebra, a tridiagonal matrix is a matrix that is "almost" a diagonal matrix. To be exact: a tridiagonal matrix has nonzero elements only in the main diagonal, the first diagonal below this, and the first diagonal above the main diagonal.

For example, the following matrix is tridiagonal:
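(Illustrative entries; only the main diagonal and the two adjacent diagonals are nonzero.)

\[
\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3
\end{pmatrix}
\]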

A determinant formed from a tridiagonal matrix is known as a continuant.[1]

Unitary matrix

In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition

U*U = UU* = In,

where In is the identity matrix in n dimensions and U* is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose, U−1 = U*.

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors, ⟨Gx, Gy⟩ = ⟨x, y⟩, so also a unitary matrix U satisfies ⟨Ux, Uy⟩ = ⟨x, y⟩ for all complex vectors x and y, where ⟨·, ·⟩ now stands for the standard inner product on Cn.

If U is an n by n matrix, then the following are all equivalent conditions:

1. U is unitary
2. U* is unitary
3. the columns of U form an orthonormal basis of Cn with respect to this inner product
4. the rows of U form an orthonormal basis of Cn with respect to this inner product
5. U is an isometry with respect to the norm from this inner product
6. U is a normal matrix with eigenvalues lying on the unit circle.

X–Y–Z matrix

An X–Y–Z matrix is a generalization of the concept of matrix to three dimensions.

An X–Y–Z matrix A will thus have components Ai,j,k with 1 ≤ i ≤ M, 1 ≤ j ≤ N, 1 ≤ k ≤ P, for some positive integers M, N, P.

Such matrices are helpful for example when considering grids in three dimensions, as in computer simulations of three-dimensional problems.


Zero matrix

In mathematics, particularly linear algebra, a zero matrix is a matrix with all its entries being zero. Some examples of zero matrices are
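(Representative sizes; any all-zero matrix qualifies.)

\[
0_{1,1} = \begin{pmatrix} 0 \end{pmatrix}, \qquad
0_{2,2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
0_{2,3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]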

The set of m×n matrices with entries in a ring K forms a ring Km×n. The zero matrix 0m×n in Km×n is the matrix with all entries equal to 0K, where 0K is the additive identity in K.

The zero matrix is the additive identity in Km×n. That is, for every matrix A in Km×n it satisfies A + 0m×n = 0m×n + A = A.

There is exactly one zero matrix of any given size m×n having entries in a given ring, so when the context is clear one often refers to the zero matrix. In general the zero element of a ring is unique and typically denoted as 0 without any subscript indicating the parent ring. Hence the examples above represent zero matrices over any ring.

The zero matrix represents the linear transformation sending all vectors to the zero vector.