Linear Algebra - Department of Mathematics (dey/linearalg01.pdf)

Linear Algebra

Santanu Dey

January 17, 2011

1/51

Page 2: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

1. INTRODUCTION

This is the second course in Mathematics for you at IIT.

Central theme: study thoroughly geometric objects in higher dimensions, and also functions.
The simplest geometric objects are lines and planes; the simplest functions are linear functions.
Linear algebra brings a unified approach to topics like coordinate geometry and vector algebra.
It is useful for calculus of several variables, systems of differential equations, etc.; applications include electrical networks, mechanics, optimization problems, processes in statistics, etc.

2/51


HOW DO WE GO ABOUT IT?

1. Proper foundation: vector spaces, linear transformations, linear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc.

2. Applications: the study of quadratic forms

3. An elementary proof of the fundamental theorem of algebra (using linear algebra)

Main textbook for the course: Chapters 6 and 7 of "Advanced Engineering Mathematics" by E. Kreyszig, 8th edition

3/51


Cartesian coordinate space

René Descartes (1596-1650)

French philosopher, mathematician, physicist, and writer.

"cogito ergo sum" (I think, therefore I am)

4/51


n-dimensional Cartesian coordinate space ℝⁿ ≅ ℝ × ⋅⋅⋅ × ℝ (n factors)

ℝⁿ is the totality of all ordered n-tuples (x₁, ..., xₙ) where xᵢ ∈ ℝ; for n = 2 these are the pairs (x, y) ∈ ℝ².

πᵢ : ℝⁿ → ℝ defined by

πᵢ((x₁, ..., xₙ)) = xᵢ

is called the i-th coordinate function or i-th coordinate projection.

Given a function f : A → ℝⁿ, define fᵢ := πᵢ ∘ f. The family (f₁, ..., fₙ) completely determines f.

5/51
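As an added illustration (not part of the original slides), the projection/component relationship can be sketched in Python; the map f below is a made-up example:

```python
# Added sketch: the i-th coordinate projection on R^n (1-indexed as in
# the slides), and recovery of f from its component functions f_i.

def pi(i):
    """Return the i-th coordinate projection pi_i : R^n -> R."""
    return lambda x: x[i - 1]

def f(t):
    """A made-up example map f : R -> R^3 (not from the slides)."""
    return (t, t ** 2, t ** 3)

# component functions f_i := pi_i o f
f1, f2, f3 = [lambda t, i=i: pi(i)(f(t)) for i in (1, 2, 3)]

# knowing all f_i is the same as knowing f
assert (f1(2.0), f2(2.0), f3(2.0)) == f(2.0)
```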


For n < m we have the inclusion map ι : ℝⁿ → ℝᵐ,

ι : (x₁, ..., xₙ) ↦ (x₁, ..., xₙ, 0, ..., 0).

These fail to determine the behaviour of a function completely, so calculus of several variables is not a mere extension of calculus of one variable.

Examples:

1. Let f(x, y) = (x²y, x + y) and g(x, y) = (cos(x + y), sin(x/y)).

The domain of g is ℝ² ∖ {(x, 0) : x ∈ ℝ}. Therefore neither f ∘ g nor g ∘ f is defined on all of ℝ².

Restrict the domain of f to ℝ² ∖ {(x, y) : x + y = 0}; then

g ∘ f(x, y) = (cos(x²y + x + y), sin(x²y/(x + y)))

6/51
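A quick numeric sanity check of the composite above (an illustration added here, not from the slides):

```python
import math

# Added numeric check: with the restricted domain x + y != 0, the
# composite g o f agrees with the closed form on the slide.

def f(x, y):
    return (x ** 2 * y, x + y)

def g(x, y):  # only defined for y != 0
    return (math.cos(x + y), math.sin(x / y))

x, y = 1.0, 2.0          # here x + y = 3 != 0, so g(f(x, y)) makes sense
lhs = g(*f(x, y))
rhs = (math.cos(x ** 2 * y + x + y), math.sin(x ** 2 * y / (x + y)))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```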


2. Let N = {1, 2, ..., n}. Define:

Nⁿ = N × ⋅⋅⋅ × N;
S(n) = the set of all sequences of length n with values in N;
F(N, N) = the set of all functions from N to N.

There are natural ways of getting one-to-one mappings of any one of these three sets into another.

Let Σ(n) denote the subset of F(N, N) consisting of those functions which are one-to-one. What does it correspond to in Nⁿ and S(n)?

7/51
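A small computational sketch of these correspondences (an added illustration, assuming n = 3):

```python
from itertools import product

# Added sketch for n = 3: an n-tuple in N^n, a length-n sequence, and a
# function N -> N all carry the same data; Sigma(n) corresponds to the
# tuples without repeated entries (the permutations).

n = 3
N = range(1, n + 1)

tuples = list(product(N, repeat=n))             # N^n, equally S(n)
functions = [dict(zip(N, t)) for t in tuples]   # f in F(N, N) via f(i) = t_i
assert len(tuples) == len(functions) == n ** n

sigma = [t for t in tuples if len(set(t)) == n]  # one-to-one, i.e. Sigma(n)
assert len(sigma) == 6                           # 3! = 6 permutations
```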


Exercise:

(i) Consider the map (x, y) ↦ (x + y², y + x² + 2xy² + y⁴). Is it a bijective map?

(ii) (optional) Let f : ℝ² → ℝ be a continuous map. Then there exist a, b ∈ ℝ such that for all r ∈ ℝ ∖ {a, b}, f⁻¹(r) is either ∅ or infinite.

Prove that:

(a) for every n there exist at least n points which are mapped to the same point by f;
(b) if f is surjective then f⁻¹(r) is infinite for all r ∈ ℝ.

(c) Find a continuous function f : ℝ² → ℝ such that f⁻¹(−1) = {−1} and f⁻¹(1) = {1}.

8/51
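An exploratory check for exercise (i), not a proof: with u = x + y², the second component equals y + u², which suggests an explicit inverse. The name phi is hypothetical, since the slide leaves the map unnamed:

```python
# Exploratory check for exercise (i), not a proof. With u = x + y^2 the
# second component is y + x^2 + 2xy^2 + y^4 = y + u^2, suggesting an
# explicit inverse. "phi" is a hypothetical name; the slide leaves the
# map unnamed.

def phi(x, y):
    return (x + y ** 2, y + x ** 2 + 2 * x * y ** 2 + y ** 4)

def phi_inv(u, v):
    y = v - u ** 2
    return (u - y ** 2, y)

for x, y in [(0.0, 0.0), (1.5, -2.0), (-3.0, 0.5)]:
    assert phi_inv(*phi(x, y)) == (x, y)
```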


Algebraic structure of ℝⁿ

For x = (x₁, ..., xₙ), y = (y₁, ..., yₙ) define

x + y = (x₁ + y₁, ..., xₙ + yₙ)

Note: the usual laws of addition hold, with

0 = (0, ..., 0), −x = (−x₁, ..., −xₙ)

Scalar multiplication: αx := (αx₁, ..., αxₙ)

1. associative: α(βx) = (αβ)x
2. distributive: α(x + y) = αx + αy
3. identity: 1x = x

9/51
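The componentwise operations and the three scalar-multiplication laws can be checked mechanically; a minimal Python sketch (plain tuples, purely illustrative):

```python
# Minimal sketch (plain tuples) of the componentwise operations above
# and the three scalar-multiplication laws.

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def smul(alpha, x):
    return tuple(alpha * a for a in x)

x, y = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
alpha, beta = 2.0, 3.0

assert smul(alpha, smul(beta, x)) == smul(alpha * beta, x)            # associative
assert smul(alpha, add(x, y)) == add(smul(alpha, x), smul(alpha, y))  # distributive
assert smul(1.0, x) == x                                              # identity
```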


Geometry of ℝⁿ

Distance function on ℝⁿ (the Euclidean metric):

d(x, y) = √( ∑ᵢ₌₁ⁿ (xᵢ − yᵢ)² ),  x, y ∈ ℝⁿ

This is related to the dot product: (x, y) ↦ x · y := ∑ᵢ₌₁ⁿ xᵢyᵢ

The norm function:

∥x∥ := d(x, 0) = √( ∑ xᵢ² ) = √(x · x)

It follows that d(x, y) = ∥x − y∥.

(d1) symmetry: d(x, y) = d(y, x)
(d2) triangle inequality: d(x, y) ≤ d(x, w) + d(w, y)
(d3) positivity: d(x, y) ≥ 0 and d(x, y) = 0 ⇔ x = y
(d4) homogeneity: d(αx, αy) = |α| d(x, y)

(For (d2) use the Cauchy-Schwarz inequality: |x · y| ≤ ∥x∥ ∥y∥.)

10/51
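A numeric illustration of these definitions and of the axioms (d1)-(d4) (an added sketch, not from the slides):

```python
import math
import random

# Added numeric illustration of d, the dot product, the norm, the
# axioms (d1)-(d4), and the Cauchy-Schwarz inequality.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def d(x, y):
    return norm([a - b for a, b in zip(x, y)])

random.seed(0)
rand_vec = lambda: [random.uniform(-5, 5) for _ in range(4)]
x, y, w = rand_vec(), rand_vec(), rand_vec()
alpha = -2.5

assert abs(d(x, y) - d(y, x)) < 1e-12                    # (d1) symmetry
assert d(x, y) <= d(x, w) + d(w, y) + 1e-12              # (d2) triangle inequality
assert d(x, y) >= 0 and d(x, x) == 0                     # (d3) positivity
ax, ay = [alpha * a for a in x], [alpha * a for a in y]
assert abs(d(ax, ay) - abs(alpha) * d(x, y)) < 1e-12     # (d4) homogeneity
assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-12       # Cauchy-Schwarz
```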


Exercise:

(i) Define ρ(x, y) = ∑ᵢ₌₁ⁿ |xᵢ − yᵢ| and D(x, y) = max{|xᵢ − yᵢ| : 1 ≤ i ≤ n}. Show that ρ and D satisfy (d1), (d2), (d3).

(ii) On the set of 64 squares of a chessboard, define the distance d from one square to another to be the least number of knight moves required.
(a) Check that this distance function satisfies (d1), (d2), (d3).
(b) Determine the diameter of the chessboard with respect to this distance, where the diameter of a space is the supremum of all values of d.

11/51
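Part (ii) can be explored computationally. Here is a sketch (not from the slides; the function name `knight_distances` is our own) that computes the knight-move distance from each square by breadth-first search, then takes the maximum over all pairs to get the diameter:

```python
from collections import deque

MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_distances(start):
    """BFS from `start`; returns the least number of knight moves to every square."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < 8 and 0 <= nc < 8 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

# Diameter = supremum (here: maximum) of d over all pairs of squares.
diameter = max(max(knight_distances((r, c)).values())
               for r in range(8) for c in range(8))
print(diameter)
```

Running this answers (b); a parity argument (each knight move changes the square's colour) helps prove the lower bound by hand.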

“Do not rely too much on intuition when dealing with higher dimensions!”

Consider a square of side 4 units and place four coins of unit radius, one in each corner, so that each touches two of the sides. Of course, each coin touches two other coins. Now place a coin at the center of the square so that it touches all four coins.

Do this inside an n-dimensional cube of side 4 units with 2ⁿ n-dimensional balls of unit radius.

12/51

For n = 2, 3, …, 9 the central ball, which is kept touching all the balls in the corners, lies inside the cube.

The surprise is that for n > 9 the central ball cannot fit inside the cube. Prove this by showing that the radius of the central ball is √n − 1.

13/51

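The claim is easy to check numerically. A small sketch (not part of the slides), using the radius √n − 1 derived above for the side-4 cube: the central ball stays inside the cube exactly when that radius is at most 2 (half the side length).

```python
import math

# Side-4 cube; unit balls sit in the 2^n corners, each center at distance √n
# from the cube's center, so the central ball has radius √n − 1.
for n in range(2, 13):
    r = math.sqrt(n) - 1
    fits = r <= 2  # central ball inside the cube iff its radius ≤ 2
    print(n, round(r, 3), fits)
```

At n = 9 the central ball touches the faces exactly (radius 2); for n = 10 its radius ≈ 2.162 already pokes out.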

2. LINEAR MAPS ON EUCLIDEAN SPACES AND MATRICES

Definition
f : ℝⁿ → ℝᵐ is said to be a linear map if f(αx + βy) = αf(x) + βf(y).

Examples: the projection map πᵢ, the inclusion map, multiplication by a scalar, the dot product with a fixed vector; what about the converse?

f : ℝⁿ → ℝᵐ is linear iff the component functions fᵢ are linear.

Distance travelled is a linear function of time when the velocity is constant. So is the voltage as a function of resistance when the current is constant. The logarithm of the change in concentration in any first-order chemical reaction is a linear function of time.

|x|, xⁿ (n > 1), sin x, etc. are not linear.

14/51
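The defining identity can be checked numerically. A sketch (ours, not from the slides) verifying f(αx + βy) = αf(x) + βf(y) for a dot-product map, and showing that the absolute value fails it:

```python
# f(x) = u·x for a fixed vector u (an arbitrary choice, for illustration only)
def f(x, u=(2.0, -1.0, 3.0)):
    return sum(ui * xi for ui, xi in zip(u, x))

a, b = 2.0, -3.0
x, y = (1.0, 0.0, 4.0), (2.0, 5.0, -1.0)
lhs = f(tuple(a * xi + b * yi for xi, yi in zip(x, y)))  # f(αx + βy)
rhs = a * f(x) + b * f(y)                                # αf(x) + βf(y)
assert abs(lhs - rhs) < 1e-9

# g(x) = |x| on ℝ is not linear: take α = −1.
assert abs(-1 * 1.0) != -1 * abs(1.0)
```

One test point does not prove linearity, but a single failing point (as for |x|) does disprove it.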

Exercise:

(i) Show that if f is a linear map then f(∑ᵢ₌₁ᵏ αᵢxᵢ) = ∑ᵢ₌₁ᵏ αᵢ f(xᵢ).

(ii) Show that the projection on a line L passing through the origin defines a linear map of ℝ² to ℝ² and that its image is equal to L.

(iii) Show that rotation through a fixed angle θ is a linear map from ℝ² to ℝ².

(iv) By a rigid motion of ℝⁿ we mean a map f : ℝⁿ → ℝⁿ such that d(f(x), f(y)) = d(x, y). Show that a rigid motion of ℝ³ which fixes the origin is a linear map.

15/51

Structure of linear maps

L(n, m) = set of all linear maps from ℝⁿ to ℝᵐ

For f, g ∈ L(n, m) define λf and f + g by

(λf)(x) = λf(x);  (f + g)(x) = f(x) + g(x)

If f ∈ L(n, m) and g ∈ L(m, l), then g ∘ f ∈ L(n, l).

If f, g ∈ L(n, 1), define fg : ℝⁿ → ℝ by (fg)(x) = f(x)g(x). Does fg ∈ L(n, 1)?

Let eᵢ = (0, …, 0, 1, 0, …, 0) (the standard basis elements). If x ∈ ℝⁿ, then x = ∑ᵢ₌₁ⁿ xᵢeᵢ.

If f ∈ L(n, m), then

f(x) = ∑ᵢ xᵢ f(eᵢ)

16/51
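The identity f(x) = ∑ᵢ xᵢ f(eᵢ) says a linear map is completely determined by its values on the standard basis. A sketch of this (the function name `apply_from_basis` is our own):

```python
def apply_from_basis(images, x):
    """images[i] = f(e_i) ∈ ℝ^m; returns f(x) = Σ x_i f(e_i)."""
    m = len(images[0])
    out = [0.0] * m
    for xi, fei in zip(x, images):
        for j in range(m):
            out[j] += xi * fei[j]
    return out

# Example: f : ℝ² → ℝ², rotation by 90°, so f(e1) = (0, 1) and f(e2) = (−1, 0).
images = [(0.0, 1.0), (-1.0, 0.0)]
print(apply_from_basis(images, (3.0, 4.0)))  # → [-4.0, 3.0]
```

This is exactly the observation that makes the matrix representation on the next slides possible.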

Conversely, given v₁, …, vₙ ∈ ℝᵐ, we can define a (unique) linear map f by assigning f(eᵢ) = vᵢ.

Examples:

1. Given f ∈ L(n, 1), if we put u = (f(e₁), …, f(eₙ)), then f(x) = ∑ᵢ xᵢ f(eᵢ) = u·x.

2.

a₁₁x₁ + a₁₂x₂ + … + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + … + a₂ₙxₙ = b₂
… …
aₘ₁x₁ + aₘ₂x₂ + … + aₘₙxₙ = bₘ

The set of all solutions of the j-th equation is a hyperplane Pⱼ in ℝⁿ. Solving the system means finding P₁ ∩ … ∩ Pₘ.

17/51
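The hyperplane picture can be illustrated on a small hypothetical 2×2 system (the coefficients below are ours, chosen for illustration, not from the slides): each equation is a line in ℝ², and a solution is a point lying on every one of them.

```python
# System: x1 + x2 = 3 and x1 − x2 = 1; each row (a_j1, a_j2) with b_j is a line P_j.
A = [(1.0, 1.0), (1.0, -1.0)]
b = [3.0, 1.0]

def on_hyperplane(row, bj, x):
    # x ∈ P_j iff a_j1 x1 + … + a_jn xn = b_j (up to floating-point tolerance)
    return abs(sum(a * xi for a, xi in zip(row, x)) - bj) < 1e-12

x = (2.0, 1.0)  # candidate solution
assert all(on_hyperplane(row, bj, x) for row, bj in zip(A, b))  # x ∈ P1 ∩ P2
```

Here the intersection P₁ ∩ P₂ is a single point; in general it can be empty, a point, or an affine subspace of positive dimension.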

On the other hand, the left-hand side of each of these equations can be thought of as a linear map Tᵢ : ℝⁿ → ℝ. Together, they define one function

T ∈ L(ℝⁿ, ℝᵐ)

such that T = (T₁, …, Tₘ).

Solving the system means determining x ∈ ℝⁿ such that T(x) = b, where b = (b₁, …, bₘ).

18/51

Matrix representation

A column vector is (x₁, x₂, …, xₙ)ᵀ ∈ ℝⁿ written vertically (ᵀ stands for transpose):

⎛ x₁ ⎞
⎜ x₂ ⎟
⎜ ⋮  ⎟
⎝ xₙ ⎠

Given a linear map f : ℝⁿ → ℝᵐ we get n column vectors (of size m), viz. f(e₁), …, f(eₙ). Place them side by side: if f(eⱼ) = (f₁ⱼ, f₂ⱼ, …, fₘⱼ)ᵀ, then we obtain

      ⎛ f₁₁ f₁₂ … f₁ₙ ⎞
ℳf = ⎜ f₂₁ f₂₂ … f₂ₙ ⎟
      ⎜  ⋮          ⋮ ⎟
      ⎝ fₘ₁ fₘ₂ … fₘₙ ⎠

This array is called a matrix with m rows and n columns. We say the matrix ℳf is of size m × n.

19/51
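The construction "place f(e₁), …, f(eₙ) side by side" can be written out directly. A sketch (the name `matrix_of` is our own), applied to the coordinate-swap map on ℝ²:

```python
def matrix_of(f, n):
    """Build ℳ_f as a list of m rows; column j is f(e_j)."""
    cols = []
    for j in range(n):
        e = [0.0] * n
        e[j] = 1.0          # standard basis vector e_j
        cols.append(f(e))
    m = len(cols[0])
    # Transpose the list of columns into an m×n array of rows.
    return [[cols[j][i] for j in range(n)] for i in range(m)]

swap = lambda x: [x[1], x[0]]   # interchange the two coordinates of ℝ²
print(matrix_of(swap, 2))        # → [[0.0, 1.0], [1.0, 0.0]]
```

The output is exactly the swap matrix that appears in the examples two slides below.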

Notation: ℳf = ((fᵢⱼ))

Matrices are equal if their sizes are the same and their entries are the same.

If m = 1 we get row matrices; if n = 1 we get column matrices.

ℳ : L(n, m) → Mₘ,ₙ,  f ↦ ℳf

is one-one and is called the matrix representation of linear maps:

ℳ(f + g) = ℳf + ℳg;  ℳ(λf) = λℳf

20/51

Examples:

1. ℳId = I or Iₙ, where

    ⎛ 1 0 … 0 ⎞
I = ⎜ 0 1 … 0 ⎟ = ((δᵢⱼ))
    ⎜ ⋮     ⋮ ⎟
    ⎝ 0 0 … 1 ⎠

δᵢⱼ = 1 if i = j and δᵢⱼ = 0 otherwise (the Kronecker delta).

2. The linear map T : ℝ² → ℝ² which interchanges the coordinates is represented by

( 0 1 )
( 1 0 )

3. Corresponding to multiplication by λ ∈ ℝ is the diagonal matrix D(λ, …, λ) = ((λδᵢⱼ)).

21/51
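Both the identity Iₙ = ((δᵢⱼ)) and the scalar matrix D(λ, …, λ) = ((λδᵢⱼ)) are generated by the same Kronecker-delta formula. A short sketch (function names ours):

```python
def delta(i, j):
    # Kronecker delta: δ_ij = 1 if i = j, else 0
    return 1.0 if i == j else 0.0

def diag(lam, n):
    # D(λ, …, λ) = ((λ δ_ij)): λ on the diagonal, 0 elsewhere
    return [[lam * delta(i, j) for j in range(n)] for i in range(n)]

I3 = diag(1.0, 3)                # λ = 1 gives the identity I_3
assert I3 == [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(diag(2.5, 2))              # → [[2.5, 0.0], [0.0, 2.5]]
```

Multiplying a vector by diag(λ, n) scales every coordinate by λ, matching example 3.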

4. The rotation through an angle θ about the origin

   ←→ ( cos θ  −sin θ )
       ( sin θ   cos θ )

   Linearity can be shown by the law of congruent triangles. Alternatively, write down the images of e1 and e2 under rotation, write down the matrix M, and then show that rotation takes (x, y)T to M(x, y)T.

5. The reflection through a line L through the origin, making an angle θ (with 0 ≤ θ < π) with the x-axis,

   ←→ ( cos 2θ   sin 2θ )
       ( sin 2θ  −cos 2θ )

22/51
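The rotation and reflection matrices in examples 4 and 5 are easy to sanity-check; a sketch (`rot` and `refl` are illustrative helper names):

```python
import numpy as np

def rot(theta):
    """Rotation through theta about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def refl(theta):
    """Reflection in the line through 0 at angle theta to the x-axis."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c,  s],
                     [s, -c]])

t = np.pi / 3
# Rotating e1 gives (cos t, sin t):
print(np.allclose(rot(t) @ [1, 0], [np.cos(t), np.sin(t)]))  # True
# Reflecting twice is the identity:
print(np.allclose(refl(t) @ refl(t), np.eye(2)))             # True
```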

Operations on matrices

The set of all matrices of size m × n is denoted by Mm,n.

For A = ((aij)), B = ((bij)) ∈ Mm,n:

A + B = ((aij + bij)); λA = ((λaij));

0 is the matrix with all entries 0; −A = ((−aij)).

ℝn ←→ M1,n, and similarly ℝm ←→ Mm,1.

The 'transpose' operation introduced earlier can be extended to all matrices:

AT := ((bij)) ∈ Mn,m, where bij := aji

(αA + βB)T = αAT + βBT

Let f : ℝn → ℝm, g : ℝm → ℝl be linear maps. If A := ℳf and B := ℳg, then ℳg∘f = ?

23/51
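The linearity of transpose, (αA + βB)T = αAT + βBT, can be verified numerically; a sketch with arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))
alpha, beta = 2.0, -1.5

lhs = (alpha * A + beta * B).T
rhs = alpha * A.T + beta * B.T
print(lhs.shape)              # (3, 2) -- the transpose lives in M_{n,m}
print(np.allclose(lhs, rhs))  # True
```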

We have

(g ∘ f)(ej) = g(∑i aij ei) = ∑i aij g(ei)
            = ∑i aij (∑k bki ek) = ∑k (∑i bki aij) ek

So ℳg∘f = C = ((ckj)), where ckj = ∑i=1..m bki aij.

Define BA := C.

Properties:
1. Associativity: A(BC) = (AB)C (if AB and BC are defined)
2. Right and left distributivity:
   A(B + C) = AB + AC, (B + C)A = BA + CA

24/51
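The entry formula ckj = ∑i bki aij is exactly what matrix multiplication computes; a direct implementation can be compared against the built-in product (a sketch; `matmul` is an illustrative name):

```python
import numpy as np

def matmul(B, A):
    """C = BA with c_kj = sum_i b_ki * a_ij (B is l x m, A is m x n)."""
    l, m = B.shape
    m2, n = A.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((l, n))
    for k in range(l):
        for j in range(n):
            C[k, j] = sum(B[k, i] * A[i, j] for i in range(m))
    return C

B = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])  # 3 x 2
A = np.array([[1.0, 0.0, 2.0], [4.0, 1.0, 0.0]])     # 2 x 3
print(np.allclose(matmul(B, A), B @ A))              # True
```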

3. Multiplicative identity: if A ∈ Mm,n, B ∈ Mn,k then AIn = A and InB = B
4. (AB)T = BT AT
5. Let f : ℝn → ℝm, g : ℝm → ℝl be linear maps. Then ℳg∘f = ℳg ℳf

Remark

M2,2 ←→ ℝ4

(preserves sum and scalar product). Similarly

Mm,n ←→ ℝmn

25/51
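Property 4 reverses the order of the factors, and the shapes only match that way round; a quick numerical check (a sketch):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # 2 x 3
B = np.arange(12.0).reshape(3, 4)  # 3 x 4

print(np.allclose((A @ B).T, B.T @ A.T))  # True
# Note A.T @ B.T is not even defined: shapes (3, 2) and (4, 3) don't align.
```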

Invertible Transformations and Matrices

Definition
A function f : X → Y is said to be invertible if there exists g : Y → X such that

g ∘ f = IdX and f ∘ g = IdY.

The inverse of a function, if it exists, is unique and is denoted by f−1.

An n × n matrix (i.e., a square matrix) A is said to be invertible if there exists another n × n matrix B such that AB = BA = In. We call B an inverse of A.

Remarks:
(i) An inverse of a matrix is unique. [If C is another matrix such that CA = AC = In, then

C = CIn = C(AB) = (CA)B = InB = B.]

Denote it by A−1.

26/51
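Remark (i) is why speaking of "the" inverse A−1 is legitimate: any two-sided inverse is the inverse. A quick check with NumPy (a sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)

# B is a two-sided inverse: AB = BA = I_2.
print(np.allclose(A @ B, np.eye(2)))  # True
print(np.allclose(B @ A, np.eye(2)))  # True
```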

(ii) If A1, A2 are invertible then so is A1A2. What is its inverse?
(iii) Clearly In and

diag(a1, a2, …, an)

with all ai ≠ 0 are invertible.
(iv) Let B := A−1. If fA, fB : ℝn → ℝn are the linear maps associated with A and B respectively, then it follows that

fA ∘ fB = fAB = Id.

Likewise fB ∘ fA = Id. Even the converse holds (viz., ℳf ℳf−1 = In).
(v) An invertible map is one-one and onto.
(vi) If f : ℝn → ℝn is an invertible linear map, then f−1 is linear.

27/51
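For the question in (ii), the standard answer is (A1A2)−1 = A2−1 A1−1 (the order reverses, just as for transposes); a numerical check (a sketch):

```python
import numpy as np

A1 = np.array([[1.0, 2.0], [0.0, 1.0]])
A2 = np.array([[3.0, 0.0], [1.0, 1.0]])

lhs = np.linalg.inv(A1 @ A2)
rhs = np.linalg.inv(A2) @ np.linalg.inv(A1)
print(np.allclose(lhs, rhs))  # True: the factors reverse order
```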

Elementary and Permutation Matrix

Consider a square matrix Eij whose (i, j)th entry is 1 and all other entries are 0.

If we multiply a matrix A by Eij on the left, then what we get is a matrix whose ith row is equal to the jth row of A and all other rows are zero. In particular, EijEij = 0 for i ≠ j.

It follows that for any α and i ≠ j,

(I + αEij)(I − αEij) = I + αEij − αEij − α²EijEij = I.

(I + αEii)(I + βEii) = I + (α + β + αβ)Eii.

For the right-hand side to equal I we must have α + β + αβ = 0. Thus I + αEii is invertible if α ≠ −1.
Alternatively, I + αEii is the diagonal matrix with all diagonal entries equal to 1 except the (i, i)th one, which is equal to 1 + α.

28/51
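The identities above can be verified with explicit matrices; a sketch (`E` is an illustrative helper, using 0-based indices):

```python
import numpy as np

def E(i, j, n=3):
    """n x n matrix with a single 1 in position (i, j) (0-based)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

alpha = 5.0
I = np.eye(3)

# E_ij E_ij = 0 for i != j:
print(np.allclose(E(0, 1) @ E(0, 1), 0))                              # True
# (I + a E_ij)(I - a E_ij) = I for i != j:
print(np.allclose((I + alpha * E(0, 1)) @ (I - alpha * E(0, 1)), I))  # True
# I + a E_ii is diagonal, with 1 + a in slot (i, i):
print(np.diag(I + alpha * E(1, 1)))                                   # [1. 6. 1.]
```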

Page 153: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Elementary and Permutation Matrix

Consider a square matrix Eij whose (i , j)th entry is 1 and allother entries are 0.If we multiply a matrix A by Eij on the left, then what we getis a matrix whose i th row is equal to the j th row of A and allother rows are zero.

In particular EijEij = 0 for (i ∕= j).It follows that for any � and i ∕= j

(I + �Eij)(I − �Eij) = I + �Eij − �Eij − �2EijEij = I.

(I + �Eii)(I + �Eii) = I + (� + � + ��)Eii .

So rhs to be equal to I then we must have � + � + �� = 0.Thus I + �Eii is invertible if � ∕= −1.Alternatively, I + �Eii is the diagonal matrix with all thediagonal entries equal to 1 except the (i , i)th one which isequal to 1 + �.

28/51

Further, consider I + Eij + Eji − Eii − Ejj, which is similarly invertible.

This matrix is nothing but the identity matrix with the ith and jth rows interchanged. Such matrices are called transposition matrices. The linear maps corresponding to them merely interchange the ith and jth coordinates. We shall denote them simply by Tij. To sum up we have:

Theorem: The elementary matrices I + λEij (i ≠ j), I + λEii (λ ≠ −1) and Tij = I + Eij + Eji − Eii − Ejj are all invertible, with respective inverses I − λEij, I − (λ/(1 + λ))Eii and Tij itself.

Permutation matrices are defined to be those square matrices which have all the entries in any given row (and column) equal to zero, except one entry equal to 1.

29/51
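The theorem above can be checked numerically. Here is a minimal sketch in plain Python (the helper names eye, E, add and matmul are ours, not from the slides), verifying the three claimed inverses:

```python
from fractions import Fraction

def eye(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def E(n, i, j):
    """Matrix unit: 1 in position (i, j), zeros elsewhere (0-indexed)."""
    M = [[0] * n for _ in range(n)]
    M[i][j] = 1
    return M

def add(A, B, s=1):
    """Return A + s*B."""
    return [[A[r][c] + s * B[r][c] for c in range(len(A))] for r in range(len(A))]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

n, lam = 3, 5
# I + lam*E01 has inverse I - lam*E01 (the i != j case)
A = add(eye(n), E(n, 0, 1), lam)
Ainv = add(eye(n), E(n, 0, 1), -lam)
assert matmul(A, Ainv) == eye(n)

# I + lam*E11 has inverse I - (lam/(1+lam))*E11 (lam != -1)
lam = Fraction(5)
B = add(eye(n), E(n, 1, 1), lam)
Binv = add(eye(n), E(n, 1, 1), -lam / (1 + lam))
assert matmul(B, Binv) == eye(n)

# T01 = I + E01 + E10 - E00 - E11 is its own inverse
T = add(add(add(add(eye(n), E(n, 0, 1)), E(n, 1, 0)), E(n, 0, 0), -1), E(n, 1, 1), -1)
assert matmul(T, T) == eye(n)
```

Exact fractions are used for the diagonal case so that the inverse I − (λ/(1 + λ))Eii comes out exactly rather than as a floating-point approximation.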

From a permutation matrix we can get a map σ : N → N which is a one-to-one mapping.

Conversely, given a permutation σ : N → N, we define a matrix Pσ = ((pij)) by

    pij = 1 if j = σ(i),    pij = 0 if j ≠ σ(i).

A permutation matrix is obtained by merely shuffling the rows of the identity matrix (or by shuffling the columns). If A denotes a permutation matrix, then

    AA^T = A^T A = In.

In particular, permutation matrices are invertible.

30/51
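The correspondence σ ↦ Pσ and the identity AA^T = A^T A = In can be sketched as follows (the permutation chosen here is an arbitrary example, written 0-indexed for convenience):

```python
def perm_matrix(sigma):
    """P_sigma with p_ij = 1 exactly when j = sigma(i), else 0 (0-indexed)."""
    n = len(sigma)
    return [[1 if j == sigma[i] else 0 for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

sigma = (2, 0, 3, 1)   # the permutation 1->3, 2->1, 3->4, 4->2 in 1-indexed terms
P = perm_matrix(sigma)
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

# each row and column of P has exactly one 1, and P P^T = P^T P = I
assert matmul(P, transpose(P)) == I
assert matmul(transpose(P), P) == I
```

The identity holds because entry (i, k) of P P^T is 1 exactly when σ(i) = σ(k), which for a one-to-one σ forces i = k.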

Gauss Elimination

Carl Friedrich Gauss (1777-1855)

German mathematician and scientist; contributed to number theory, statistics, algebra, analysis, differential geometry, geophysics, electrostatics, astronomy, and optics.

31/51

Gauss elimination method: to solve a system of m linear equations in n unknowns.

The three types of operations on these equations which do not alter the solutions:

(1) Interchanging two equations.
(2) Multiplying all the terms of an equation by a nonzero scalar.
(3) Adding to one equation a multiple of another equation.

We only need to keep track of which coefficient came from which variable.

32/51
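The three operations can be sketched as functions on a nested-list matrix; the short run below checks on a small illustrative system (not one from the slides) that a known solution survives them:

```python
def swap(M, i, j):
    """(1) Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale(M, i, c):
    """(2) Multiply row i by a nonzero scalar c."""
    M[i] = [c * x for x in M[i]]

def add_multiple(M, i, j, c):
    """(3) Add c times row j to row i."""
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]

# Augmented matrix of: x + y = 3, 2x + y = 4  (solution x = 1, y = 2)
M = [[1, 1, 3],
     [2, 1, 4]]
add_multiple(M, 1, 0, -2)   # sweep the first column: R2 - 2*R1
assert M == [[1, 1, 3], [0, -1, -2]]
scale(M, 1, -1)             # make the second pivot 1: -1 * R2
assert M == [[1, 1, 3], [0, 1, 2]]
# backward substitution now reads off y = 2 and x = 3 - y = 1,
# the same solution as the original system.
```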

    ( a11 ... a1n ) ( x1 )   ( b1 )
    (  :       :  ) (  : ) = (  : )        (*)
    ( am1 ... amn ) ( xn )   ( bm )

The matrix A = ((aij)) is called the coefficient matrix. By a solution of (*) we mean any choice of x1, x2, ..., xn which satisfies all the equations in it.

If each bi = 0, we say that the system is homogeneous. Otherwise it is called an inhomogeneous system.

The matrix

    ( a11 ... a1n | b1 )
    (  :       :  |  : )
    ( am1 ... amn | bm )

is called the augmented matrix.

Now the above three operations on the equations correspond to certain operations on the rows of the augmented matrix. These are called elementary row operations.

33/51

Example 1 (A system with a unique solution):

    2x − 5y + 4z = −3
     x − 2y +  z =  5
     x − 4y + 6z = 10.

The augmented matrix is

    ( 2  −5  4 | −3 )
    ( 1  −2  1 |  5 )
    ( 1  −4  6 | 10 )

The three basic operations mentioned above will be performed on the rows of the augmented matrix. Interchanging the first two rows gives

    ( 1  −2  1 |  5 )
    ( 2  −5  4 | −3 )
    ( 1  −4  6 | 10 )

34/51

First we add −2 times the first row to the second row. Then we subtract the first row from the third row:

    ( 1  −2  1 |   5 )
    ( 0  −1  2 | −13 )
    ( 0  −2  5 |   5 )

This last step is also called 'sweeping' a column. Now we repeat the process for the smaller matrix:

    ( −1  2 | −13 )      ( 1  −2 | 13 )      ( 1  −2 | 13 )
    ( −2  5 |   5 )  ⇒  ( −2  5 |  5 )  ⇒  ( 0   1 | 31 )

Put back the rows and columns that have been cut out earlier:

35/51

    ( 1  −2   1 |  5 )
    ( 0   1  −2 | 13 )        (*)
    ( 0   0   1 | 31 )

This matrix represents the linear system:

    x − 2y +  z =  5
        y − 2z = 13
             z = 31

These can be solved successively by backward substitution:
z = 31;  y = 13 + 2z = 75;  x = 5 + 2y − z = 124.

36/51
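The whole procedure — forward elimination followed by backward substitution — can be sketched as one small routine. Applied to Example 1 it reproduces the solution above. This is an illustrative implementation (it takes any nonzero pivot and works in exact fractions), not the only way to organize the computation:

```python
from fractions import Fraction

def solve(aug):
    """Solve a square system given its augmented matrix [A | b]."""
    n = len(aug)
    M = [[Fraction(x) for x in row] for row in aug]
    for col in range(n):
        # operation (1): bring a row with a nonzero pivot into place
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # operation (2): normalize the pivot to 1
        M[col] = [x / M[col][col] for x in M[col]]
        # operation (3): sweep the column below the pivot
        for r in range(col + 1, n):
            M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    # backward substitution on the resulting triangular system
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
    return x

aug = [[2, -5, 4, -3],   # 2x - 5y + 4z = -3
       [1, -2, 1,  5],   #  x - 2y +  z =  5
       [1, -4, 6, 10]]   #  x - 4y + 6z = 10
assert solve(aug) == [124, 75, 31]
```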

Alternatively, we can continue the process, called the Gauss-Jordan process.

Here, at this stage, we first make sure that all diagonal entries are indeed 1 or 0. Then, for those columns whose diagonal entry is 1, we sweep the column above the diagonal entry too. This is carried out in decreasing order of the column numbers.

Recall

    ( 1  −2   1 |  5 )
    ( 0   1  −2 | 13 )        (*)
    ( 0   0   1 | 31 )

(1) add twice the third row to the second;
(2) then subtract the third row from the first;
(3) add twice the second row to the first.

37/51
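Steps (1)-(3) above are exactly a column-by-column sweep above the pivots, taken in decreasing column order. A minimal sketch of this Gauss-Jordan sweep, starting from the matrix (*):

```python
# Row-echelon augmented matrix (*) from the slides
M = [[1, -2,  1,  5],
     [0,  1, -2, 13],
     [0,  0,  1, 31]]

n = 3
for col in range(n - 1, -1, -1):     # decreasing order of column numbers
    for r in range(col):             # sweep the entries above the pivot
        factor = M[r][col]
        M[r] = [x - factor * y for x, y in zip(M[r], M[col])]

# The matrix is reduced to the identity, and the last column is the solution
assert M == [[1, 0, 0, 124],
             [0, 1, 0,  75],
             [0, 0, 1,  31]]
```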

    ( 1  0  0 | 124 )
    ( 0  1  0 |  75 )
    ( 0  0  1 |  31 )

The augmented matrix gives the desired solution: x = 124; y = 75; z = 31.

Notation: Let Ri denote the ith row of a given matrix.

    Operation                                   Notation
    Multiply Ri by a scalar c                   cRi
    Multiply Rj by a scalar c and add to Ri     Ri + cRj
    Interchange Ri and Rj                       Ri ↔ Rj

38/51

Example 2 (A system with infinitely many solutions):

x − 2y + z − u + v = 52x − 5y + 4z + u − v = −3x − 4y + 6z + 2u − v = 10

⎛⎝ 1 −2 1 −1 12 −5 4 1 −11 −4 6 2 −1

∣∣∣∣∣∣5−310

⎞⎠We shall use the notation introduced above for the rowoperations

.R2 − 2R1R3 − R1−→

⎛⎝ 1 −2 1 −1 10 −1 2 3 −30 −2 5 3 −2

∣∣∣∣∣∣5−13

5

⎞⎠.

R3 − 2R2−→

⎛⎝ 1 −2 1 −1 10 −1 2 3 −30 0 1 −3 4

∣∣∣∣∣∣5−1331

⎞⎠

39/51


−R2 −→

⎛ 1 −2  1 −1 1 │  5 ⎞
⎜ 0  1 −2 −3 3 │ 13 ⎟
⎝ 0  0  1 −3 4 │ 31 ⎠

R1 + 2R2 −→

⎛ 1 0 −3 −7 7 │ 31 ⎞
⎜ 0 1 −2 −3 3 │ 13 ⎟
⎝ 0 0  1 −3 4 │ 31 ⎠

R2 + 2R3, R1 + 3R3 −→

⎛ 1 0 0 −16 19 │ 124 ⎞
⎜ 0 1 0  −9 11 │  75 ⎟
⎝ 0 0 1  −3  4 │  31 ⎠

The system of linear equations corresponding to the last augmented matrix is:

x = 124 + 16u − 19v
y = 75 + 9u − 11v
z = 31 + 3u − 4v.

40/51


Page 215: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

We say that u and v are independent variables and x, y, zare dependent variables

(x , y , z,u, v)T

= (124 + 16t1 − 19t2,75 + 9t1 − 11t2,31 + 3t1 − 4t2, t1, t2)T

= (124,75,31,0,0)T + t1(16,9,3,1,0)T

+t2(−19,−11,−4,0,1)T .

The above equation gives the general solution to thesystem. (124,75,31,0,0) is a particular solution of theinhomogeneous system.v1 = (16,9,3,1,0) and v2 = (−19,−11,−4,0,1) aresolutions of the corresponding homogeneous system.(These two solutions are linearly independent and) everyother solution of the homogeneous system is a linearcombination of these two solutions.

41/51
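These claims can be checked numerically. A small sketch (assumptions: plain Python lists and the coefficient matrix of Example 2; the helper name `matvec` is ours): the particular solution satisfies Ax = b, while v1 and v2 satisfy the homogeneous system Ax = 0.

```python
# Verify the structure of the general solution of Example 2:
# the particular solution solves Ax = b, and v1, v2 solve Ax = 0.

A = [[1, -2, 1, -1,  1],
     [2, -5, 4,  1, -1],
     [1, -4, 6,  2, -1]]
b = [5, -3, 10]

def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

particular = [124, 75, 31, 0, 0]
v1 = [16, 9, 3, 1, 0]
v2 = [-19, -11, -4, 0, 1]

print(matvec(A, particular))  # [5, -3, 10]  = b
print(matvec(A, v1))          # [0, 0, 0]
print(matvec(A, v2))          # [0, 0, 0]
```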


Page 220: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

TheoremSuppose Ax = b is a system of linear equations whereA = ((aij)) is a m × n matrix and x = (x1, x2, . . . , xn)T ,b = (b1,b2, . . . ,bm)T .

Suppose c = (c1, c2, . . . , cn)T is asolution of Ax = b and S is the set of all solutions to theassociated homogeneous system Ax = 0.

Then the set of all solutions to Ax = b is c + S := {c + v∣v ∈ S}.

Proof: Let r ∈ ℝn be a solution of Ax = b. Then

A(r− c) = Ar− Ac = b− b = 0.

Hence r− c ∈ S. Thus r ∈ c + S.

Conversely, let v ∈ S. Then

A(c + v) = Ac + Av = b + 0 = b.

Hence c + v is a solution to Ax = b. □

42/51
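The theorem can be illustrated on Example 2 above, where S consists of all combinations t1v1 + t2v2. A minimal sketch (plain Python lists; `matvec` is our own hypothetical helper): shifting the particular solution c by any element of S again solves Ax = b.

```python
# Illustrate the theorem on Example 2: every element of c + S,
# i.e. c + t1*v1 + t2*v2, is again a solution of Ax = b.

A = [[1, -2, 1, -1,  1],
     [2, -5, 4,  1, -1],
     [1, -4, 6,  2, -1]]
b = [5, -3, 10]
c = [124, 75, 31, 0, 0]
v1 = [16, 9, 3, 1, 0]
v2 = [-19, -11, -4, 0, 1]

def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

for t1, t2 in [(0, 0), (1, 0), (0, 1), (3, -2), (-7, 5)]:
    x = [ci + t1 * w1 + t2 * w2 for ci, w1, w2 in zip(c, v1, v2)]
    assert matvec(A, x) == b    # c + v solves Ax = b for every v in S
print("all shifted solutions solve Ax = b")
```

This is exactly the computation A(c + v) = Ac + Av = b + 0 = b from the proof, carried out with integers.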


Example 3 (A system with no solution):

x − 5y + 4z = 3
x − 5y + 3z = 6
2x − 10y + 13z = 5

⎛ 1  −5  4 │ 3 ⎞
⎜ 1  −5  3 │ 6 ⎟
⎝ 2 −10 13 │ 5 ⎠

→ Apply the Gauss elimination method to get

⎛ 1 −5  4 │  3 ⎞
⎜ 0  0 −1 │  3 ⎟
⎝ 0  0  0 │ 14 ⎠

→ The bottom row corresponds to the equation 0 · z = 14.
→ Hence the system has no solutions.

43/51
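In code, the inconsistency surfaces as a row whose coefficient part is all zeros but whose right-hand side is not. A minimal sketch of that check (plain Python lists; the function name `has_no_solution` is ours, not from the slides):

```python
# Scan an augmented echelon matrix for a row [0 ... 0 | c] with c != 0,
# which corresponds to the unsolvable equation 0 = c.

def has_no_solution(rows):
    """True if some row has an all-zero coefficient part but nonzero rhs."""
    return any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in rows)

R = [[1, -5,  4,  3],
     [0,  0, -1,  3],
     [0,  0,  0, 14]]
print(has_no_solution(R))  # True: the bottom row reads 0 = 14
```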


Definition
An m × n matrix A = ((ai,j)) is called an echelon matrix (respectively, a reduced echelon matrix) if A = 0, or if there exist an integer r, 1 ≤ r ≤ min{m, n}, and integers

1 ≤ k(1) < k(2) < . . . < k(r) ≤ n

such that
(i) ai,j = 0 for all i > r and for all j.
(ii) for each 1 ≤ i ≤ r, ai,j = 0 for j < k(i).
(iii) for each 1 ≤ i ≤ r, ai,k(i) ∕= 0 (respectively, = 1).
(iv) for each 1 ≤ i ≤ r, as,k(i) = 0 for all s > i (respectively, for s ∕= i).

44/51
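The definition can be tested mechanically. A minimal sketch (the function name `is_echelon` is ours, not from the slides): for a (non-reduced) echelon matrix, conditions (i)-(iv) amount to saying that zero rows come last and the leading nonzero entries sit in strictly increasing columns k(1) < . . . < k(r).

```python
# Check whether a matrix (list of rows) is an echelon matrix in the
# sense of the definition above: zero rows at the bottom, and leading
# entries in strictly increasing columns.

def is_echelon(A):
    prev = -1            # column of the previous leading entry
    seen_zero_row = False
    for row in A:
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            seen_zero_row = True          # all further rows must be zero (i)
            continue
        if seen_zero_row:
            return False                  # nonzero row below a zero row
        k = nz[0]                         # leading entry, nonzero by choice (iii)
        if k <= prev:
            return False                  # k(1) < k(2) < ... < k(r) fails (ii)/(iv)
        prev = k
    return True

print(is_echelon([[1, -5, 4, 3], [0, 0, -1, 3], [0, 0, 0, 14]]))  # True
print(is_echelon([[0, 1], [1, 0]]))                               # False
```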


Remark
(i) We also call echelon matrices Gauss matrices and reduced echelon matrices Gauss-Jordan matrices.
(ii) The integer r occurring in the definition is called the number of steps. The columns Ck(s) are called the step columns.
(iii) The (i, k(i))-th entries, which are nonzero in G(A) (and equal to 1 in J(A)), are called pivots.
(iv) Let ν(A) denote the set of indices j such that the j-th column of A is not a step column.

45/51


Theorem
(Algorithm for GEM): Let A = ((aij)) be an m × n matrix. The following algorithm converts A into an echelon matrix Z.

Step 0 Put X = A and X = ((xi,j)).
Step 1 If X is the zero matrix or empty, then declare Z = A and stop. Else go to Step 2.
Step 2 Look for a nonzero entry in the ‘first column’ of X. If you cannot find one, cut down the ‘first column’ of X, call the new matrix X, and go back to Step 1. Otherwise find the first nonzero entry in the ‘first column’, say xi,1.
Step 3 Swap the first and i-th rows of X. (Swap the corresponding rows of A also.)
Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2, and perform the corresponding row operations on the matrix A also.
Step 5 Cut down both the first column and the first row of X, call the new matrix X, and go back to Step 1.

46/51
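The steps above can be sketched directly as code. A minimal illustration under two assumptions that are ours, not the slides': exact arithmetic via `fractions.Fraction` (to avoid rounding), and the shrinking submatrix X tracked with indices rather than by physically cutting rows and columns.

```python
# Gaussian elimination following Steps 0-5 of the GEM algorithm above.
from fractions import Fraction

def gem(A):
    """Return an echelon matrix row-equivalent to A (list of rows)."""
    Z = [[Fraction(x) for x in row] for row in A]   # Step 0: work on a copy
    m, n = len(Z), len(Z[0])
    top = 0                                 # top-left corner of the submatrix X
    for left in range(n):
        # Step 2: first nonzero entry in the current 'first column'
        pivot_row = next((i for i in range(top, m) if Z[i][left] != 0), None)
        if pivot_row is None:
            continue                        # cut down this column, back to Step 1
        Z[top], Z[pivot_row] = Z[pivot_row], Z[top]   # Step 3: swap rows
        for j in range(top + 1, m):         # Step 4: clear entries below the pivot
            factor = Z[j][left] / Z[top][left]
            Z[j] = [a - factor * p for a, p in zip(Z[j], Z[top])]
        top += 1                            # Step 5: cut down first row and column
        if top == m:                        # Step 1: X is empty, stop
            break
    return Z

A = [[1,  -5,  4, 3],
     [1,  -5,  3, 6],
     [2, -10, 13, 5]]
for row in gem(A):
    print([int(x) for x in row])
# [1, -5, 4, 3]
# [0, 0, -1, 3]
# [0, 0, 0, 14]
```

Run on the augmented matrix of Example 3, this reproduces the echelon matrix shown on slide 43.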

Page 247: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 248: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 249: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .

If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 250: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1.

Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 251: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 252: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X .

(Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 253: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 254: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2

and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 255: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Page 256: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of X

and call the new matrix as X and go back to step 1.

46/51

Page 257: Linear Algebra - Department of Mathematicsdey/linearalg01.pdflinear dependence, dimension, matrices, determinants, eigenvalues, inner product spaces, etc. 2.applications: the study

Theorem(Algorithm For GEM): Let A = ((aij)) be a m × n matrix. Thefollowing algorithm will convert A into an echelon matrix Z .

Step 0 Put X = A and X = ((xi,j)).

Step 1 If X is the zero matrix or empty then declare Z = A andstop. Else go to step 2.

Step 2 Look for a non zero entry in the ‘first column’ of X .If you cannot find one, cut down the ‘first column’ of X andcall the new matrix as X and go back to step 1. Otherwisefind the first non zero entry in the ‘first column’ say, = xi,1.

Step 3 Swap the first row and i th rows of X . (Swap thecorresponding rows of A also.)

Step 4 Add −(xj,1/x1,1)R1 to Rj for j ≥ 2 and perform thecorresponding row operations on the matrix A also.

Step 5 Cut down both the first column and the first row of Xand call the new matrix as X and go back to step 1.

46/51

Theorem (Algorithm for GJEM): Let A be an echelon matrix. The following row operations convert A into a reduced echelon matrix:

For 1 ≤ i ≤ r, where r is the number of steps in A, let a_{i,k(i)} be the first nonzero entry in the i-th row.

→ Divide the i-th row of A by a_{i,k(i)}.

→ Now for each 1 ≤ i ≤ r, add −a_{j,k(i)} R_i to R_j for all 1 ≤ j < i.

Theorem: Given any matrix A there exists an invertible matrix R such that RA = G(A) (respectively, J(A)) is an echelon (respectively, reduced echelon) matrix.

47/51
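The two GJEM operations can be sketched as follows (a hypothetical helper `gjem`, assumed to receive a matrix already in echelon form, as the theorem requires): normalize each step entry a_{i,k(i)} to 1, then clear the entries above it.

```python
def gjem(Z):
    """Convert an echelon matrix Z (list of row lists) into a reduced
    echelon matrix: divide each nonzero row by its first nonzero entry,
    then subtract multiples of it from the rows above."""
    A = [row[:] for row in Z]              # work on a copy
    # the step positions (i, k(i)): first nonzero entry of each nonzero row
    pivots = []
    for i, row in enumerate(A):
        k = next((j for j, v in enumerate(row) if v != 0), None)
        if k is not None:
            pivots.append((i, k))
    for i, k in pivots:
        p = A[i][k]
        A[i] = [v / p for v in A[i]]       # divide row i by a_{i,k(i)}
        for j in range(i):                 # add -a_{j,k(i)} R_i to R_j, j < i
            f = A[j][k]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
    return A
```

For example, `gjem([[2.0, 4.0, 2.0], [0.0, 1.0, 3.0]])` yields `[[1.0, 0.0, -5.0], [0.0, 1.0, 3.0]]`.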


Significance of the number of steps in GEM

Let r be the number of steps that occur in the GEM for a given m × n matrix A.

(a) If r < m, the system Ax = b may fail to have a solution; this depends on the vector b. Indeed, it is always possible to find such a b.

(b) Suppose now r = m. (Observe that this automatically implies n ≥ m = r.) Then, irrespective of the vector b, the system Ax = b has at least one solution. Indeed, in this case we get infinitely many solutions unless

(c) r = m = n, when Ax = b has precisely one solution for all b ∈ ℝ^m.

48/51
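Case (a) can be illustrated numerically. The number of steps r is the row rank (defined on the next slide), and Ax = b is consistent exactly when appending b to A does not raise the rank. A small hypothetical example, using NumPy's `matrix_rank` as a stand-in for counting steps in GEM:

```python
import numpy as np

# r = 1 < m = 2: the second row is twice the first
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
r = np.linalg.matrix_rank(A)

def consistent(A, b):
    """Ax = b has a solution iff appending b does not raise the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_bad = np.array([1.0, 0.0])    # case (a): this b makes the system inconsistent
b_good = np.array([1.0, 2.0])   # consistent, with infinitely many solutions
```

Here `consistent(A, b_bad)` is False while `consistent(A, b_good)` is True, matching the claim in (a) that some b always exists for which the system has no solution when r < m.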


The number of steps r is called the row rank of A.

From GJEM we know that

J(A) = RA,

where R is an invertible matrix. On the other hand, when r = m = n we also know that J(A) = Id_n. Therefore

A = (R^{-1}R)A = R^{-1}(RA) = R^{-1}J(A) = R^{-1}.

This is the same as saying that A is invertible and A^{-1} = R.

GEM applied to find the inverse: Given an n × n matrix A, apply GJEM to

[A ∣ Id_n].

If the end result is

[Id_n ∣ R],

then declare R = A^{-1}. Otherwise declare that A is not invertible.

49/51
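The augmented-matrix recipe can be sketched directly (a hypothetical helper `inverse_via_gjem`, not from the slides; it adds partial pivoting and a tolerance for floating-point safety, which the exact-arithmetic algorithm above does not need):

```python
import numpy as np

def inverse_via_gjem(A):
    """Row-reduce [A | Id_n]; if the left half becomes Id_n, the right
    half is A^{-1}. Returns None when A is not invertible."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # the augmented matrix [A | Id_n]
    for col in range(n):
        p = col + np.argmax(np.abs(M[col:, col]))   # pivot row for this column
        if abs(M[p, col]) < 1e-12:
            return None                    # no pivot: A is not invertible
        M[[col, p]] = M[[p, col]]          # swap the pivot row up
        M[col] /= M[col, col]              # normalize the pivot to 1
        for j in range(n):                 # clear the column above and below
            if j != col:
                M[j] -= M[j, col] * M[col]
    return M[:, n:]                        # right half: [Id_n | R] with R = A^{-1}
```

For example, `inverse_via_gjem([[2.0, 0.0], [0.0, 4.0]])` returns the diagonal matrix with entries 0.5 and 0.25, while the singular matrix `[[1, 2], [2, 4]]` yields None.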


We shall call G(A) the Gauss form of A and J(A) the Gauss-Jordan form of A. Observe that, by the very definition, which is algorithmic, G(A) and J(A) are well defined.

HOW TO SOLVE A SYSTEM OF LINEAR EQUATIONS Ax = b

Step 1 Write down the augmented matrix [A ∣ b].

Step 2 Apply GJEM and obtain the Gauss-Jordan form of it: J([A ∣ b]) = [J(A) ∣ b′]. Let r be the number of nonzero rows of J(A), and let k(1) < … < k(r) be the indices corresponding to the step columns.

Step 3 If b′_i ≠ 0 for any i > r, declare that the system is inconsistent, i.e., the system has no solution. Otherwise proceed.

50/51


(contd.)

Step 4 If r = n (the number of columns of A), declare that the system has the unique solution (x_1, …, x_n) = (b′_1, …, b′_n). Otherwise proceed.

Step 5 In [J(A) ∣ b′] delete all the zero rows.

Step 6 Let the indices corresponding to the non-step columns be l(1) < l(2) < … < l(s), where s + r = n. Transfer all the non-step columns to the right side, changing their sign, to obtain a matrix of the form [I_r ∣ B′] whose first column is b′. The general solution is given by putting

x_{l(i)} = t_i, 1 ≤ i ≤ s, and (x_{k(1)}, …, x_{k(r)})^T = B′(1, t_1, …, t_s)^T.

51/51
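Steps 1–6 together can be sketched as a single routine (a hypothetical helper `solve_system`, not from the slides; `t` supplies one value per free variable, i.e. per non-step column, and the elimination uses partial pivoting and a tolerance as floating-point safeguards):

```python
import numpy as np

def solve_system(A, b, t):
    """Solve Ax = b by GJEM on the augmented matrix [A | b].
    Returns None if inconsistent (Step 3); otherwise evaluates the
    general solution at the free-variable values t (Steps 4-6)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    M = np.hstack([A, b.reshape(-1, 1)])   # Step 1: the augmented matrix
    r, pivots = 0, []                      # r rows finished, step columns found
    for col in range(n):                   # Step 2: Gauss-Jordan elimination
        if r >= m:
            break
        p = r + np.argmax(np.abs(M[r:, col]))
        if abs(M[p, col]) < 1e-12:
            continue                       # non-step column
        M[[r, p]] = M[[p, r]]
        M[r] /= M[r, col]
        for j in range(m):
            if j != r:
                M[j] -= M[j, col] * M[r]
        pivots.append(col)
        r += 1
    if np.any(np.abs(M[r:, n]) > 1e-12):
        return None                        # Step 3: inconsistent
    free = [c for c in range(n) if c not in pivots]   # non-step columns
    x = np.zeros(n)
    x[free] = t                            # Step 6: x_{l(i)} = t_i
    for i, k in enumerate(pivots):         # step variables from row i
        x[k] = M[i, n] - M[i, free] @ x[free]
    return x
```

For example, for x_1 + x_3 = 3, x_2 + x_3 = 2 the step columns are 1 and 2, x_3 is free, and choosing t = (1) gives the solution (2, 1, 1).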
