Searching a Linear Subspace (Lecture VI)

Upload: jaser

Post on 23-Feb-2016




Page 1: Searching a Linear Subspace

Searching a Linear Subspace
Lecture VI

Page 2: Searching a Linear Subspace

Deriving Subspaces

There are several ways to derive the nullspace matrix (or kernel matrix). ◦The methodology developed in our last meeting is referred to as the Variable Reduction Technique. Partition the constraint matrix as $A = [V \;\; U]$, where $V$ is square and nonsingular, and partition $x$ conformably into $x_V$ and $x_U$. Then

$$Ax = [V \;\; U]\begin{bmatrix} x_V \\ x_U \end{bmatrix} = V x_V + U x_U = b$$

$$x_V = V^{-1} b - V^{-1} U \, x_U$$

Page 3: Searching a Linear Subspace

◦The nullspace matrix is then defined as

$$Z = \begin{bmatrix} -V^{-1}U \\ I \end{bmatrix}$$

◦Let's start out with the matrix form

$$A = [V \;\; U] = \begin{bmatrix} 1 & 2 & 4 \\ 3 & 2 & 3 \end{bmatrix}, \qquad V = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}, \qquad V^{-1} = \begin{bmatrix} -0.50 & 0.50 \\ 0.75 & -0.25 \end{bmatrix}$$

Page 4: Searching a Linear Subspace

◦The nullspace then becomes

$$Z = \begin{bmatrix} 0.50 \\ -2.25 \\ 1.00 \end{bmatrix}$$
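The variable reduction construction above can be sketched numerically (a minimal NumPy sketch; the variable names are my own):

```python
import numpy as np

# Variable Reduction Technique: partition A = [V | U] with V square and
# nonsingular. Then
#   Z = [ -V^{-1} U ]
#       [     I     ]
# spans the nullspace of A, since A Z = -V V^{-1} U + U = 0.
A = np.array([[1.0, 2.0, 4.0],
              [3.0, 2.0, 3.0]])
t, n = A.shape                            # t = 2 constraints, n = 3 variables
V, U = A[:, :t], A[:, t:]

Z = np.vstack([-np.linalg.solve(V, U),    # -V^{-1} U  (top block)
               np.eye(n - t)])            # identity   (bottom block)

print(Z.ravel())                          # [ 0.5  -2.25  1.  ]
print(np.allclose(A @ Z, 0))              # True: Z lies in the nullspace of A
```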

Page 5: Searching a Linear Subspace

An alternative approach is to use the AQ factorization, which is related to the QR factorization. ◦These approaches are based on transformations using the Householder transformation

$$H = I - \frac{1}{\gamma} w w'$$

Page 6: Searching a Linear Subspace

◦where H is the Householder transformation and w is a vector used to "annihilate" some terms of the original A matrix.

◦For any two distinct vectors $a$ and $b$ with $\|a\| = \|b\|$, there exists a Householder matrix that can transform $a$ into $b$:

$$Ha = \left(I - \frac{1}{\gamma} w w'\right) a = a - \frac{w'a}{\gamma}\, w = b$$

where

$$w = a - b, \qquad \gamma = \frac{w'w}{2} = w'a$$

Page 7: Searching a Linear Subspace

◦The idea is that we can come up with a sequence of Householder transformations that will transform our original A matrix into a lower triangular L matrix and a zero matrix:

$$AQ = [L \;\; 0], \qquad Q = H_1 H_2 \cdots H_n$$
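The mapping of one vector onto another can be sketched directly from the formulas above (assuming, as stated, that the two vectors have equal norm):

```python
import numpy as np

# Householder reflection: with w = a - b and gamma = w'w / 2,
# H = I - (1/gamma) w w' satisfies H a = b whenever ||a|| = ||b||.
def householder(a, b):
    w = a - b
    gamma = w @ w / 2.0
    return np.eye(len(a)) - np.outer(w, w) / gamma

a = np.array([1.0, 2.0, 4.0])
b = np.array([np.linalg.norm(a), 0.0, 0.0])   # target: annihilate the 2 and the 4

H = householder(a, b)
print(np.allclose(H @ a, b))     # True: H maps a onto b
print(np.allclose(H @ H, np.eye(3)))   # True: H is its own inverse (a reflection)
```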

Page 8: Searching a Linear Subspace

◦As a starting point, consider the first row of A, $(1 \;\; 2 \;\; 4)$: our objective is to annihilate the 2 (or to transform the matrix in such a way as to make the 2 a zero) and the 4. Since

$$\|a\| = \sqrt{1^2 + 2^2 + 4^2} = 4.5826$$

we take $b = (4.5826 \;\; 0 \;\; 0)'$, so that

$$w_1 = a - b = \begin{bmatrix} 1 - 4.5826 \\ 2 \\ 4 \end{bmatrix} = \begin{bmatrix} -3.5826 \\ 2 \\ 4 \end{bmatrix}$$

Page 9: Searching a Linear Subspace

◦Thus, $\gamma_1 = w_1'w_1/2 = 16.4174$ and

$$H_1 = \begin{bmatrix} 0.2182 & 0.4364 & 0.8729 \\ 0.4364 & 0.7564 & -0.4873 \\ 0.8729 & -0.4873 & 0.0254 \end{bmatrix}$$

$$A H_1 = \begin{bmatrix} 4.5826 & 0 & 0 \\ 4.1461 & 1.3602 & 1.7203 \end{bmatrix}$$

Page 10: Searching a Linear Subspace

◦Now we create a Householder transformation to annihilate the 1.7203. Working on the subvector $(1.3602 \;\; 1.7203)$, whose norm is 2.1930, gives

$$H_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.6202 & 0.7844 \\ 0 & 0.7844 & -0.6202 \end{bmatrix}$$

◦Multiplying,

$$A H_1 H_2 = \begin{bmatrix} 4.5826 & 0 & 0 \\ 4.1461 & 2.1930 & 0 \end{bmatrix}$$

Page 11: Searching a Linear Subspace

◦Therefore

$$Q = H_1 H_2 = \begin{bmatrix} 0.2182 & 0.9554 & -0.1990 \\ 0.4364 & 0.0869 & 0.8955 \\ 0.8729 & -0.2823 & -0.3980 \end{bmatrix}$$

◦The last column of this matrix is then the nullspace matrix

$$Z = \begin{bmatrix} -0.1990 \\ 0.8955 \\ -0.3980 \end{bmatrix}$$
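The whole AQ route to the nullspace can be sketched end to end (helper name and intermediate variables are my own; the sign of the final column depends on the reflection convention):

```python
import numpy as np

# AQ factorization route: apply Householder reflections from the right until
# A Q = [L | 0]; the trailing column(s) of Q then span the nullspace of A.
def householder(a, b):
    w = a - b
    return np.eye(len(a)) - np.outer(w, w) / (w @ w / 2.0)

A = np.array([[1.0, 2.0, 4.0],
              [3.0, 2.0, 3.0]])

# H1 annihilates the 2 and the 4 in row one.
a1 = A[0]
H1 = householder(a1, np.array([np.linalg.norm(a1), 0.0, 0.0]))
A1 = A @ H1                     # [[4.5826, 0, 0], [4.1461, 1.3602, 1.7203]]

# H2 annihilates the trailing 1.7203, leaving the first coordinate alone.
a2 = A1[1, 1:]
H2 = np.eye(3)
H2[1:, 1:] = householder(a2, np.array([np.linalg.norm(a2), 0.0]))

Q = H1 @ H2
Z = Q[:, -1]                    # last column spans the nullspace of A
print(np.round(Z, 4))           # approx. (-0.1990, 0.8955, -0.3980)
print(np.allclose(A @ Z, 0))    # True
```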

Page 12: Searching a Linear Subspace

Linear Equality Constraints

◦The general optimization problem for linear equality constraints can be stated as:

$$\text{LEP:} \quad \min_x f(x) \quad \text{s.t.} \;\; Ax = b$$

Page 13: Searching a Linear Subspace

This time, instead of searching over dimension n, we only have to search over dimension n − t, where t is the number of nonredundant equations in A. ◦In the vernacular of the problem, we want to decompose the vector x into a range-specific portion, which is required to solve the constraints, and a null-space portion, which can be varied.

Page 14: Searching a Linear Subspace

◦Specifically,

$$x = Y x_Y + Z x_Z$$

where $Y x_Y$ denotes the range-specific portion of $x$ and $Z x_Z$ denotes the null-space portion of $x$.
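This decomposition can be sketched numerically (an illustrative check, using the example constraint system and nullspace vector from earlier; `lstsq` supplies one convenient range-specific solution):

```python
import numpy as np

# Any feasible x is a particular solution of A x = b (the range-specific
# portion) plus a nullspace component that can be varied freely.
A = np.array([[1.0, 2.0, 4.0],
              [3.0, 2.0, 3.0]])
b = np.array([160.0, 190.0])

x_range = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm particular solution
Z = np.array([0.5, -2.25, 1.0])                  # nullspace vector of A

for x_Z in (0.0, 5.0, -13.0):                    # vary the null-space portion
    x = x_range + Z * x_Z
    assert np.allclose(A @ x, b)                 # every choice stays feasible
print("all feasible")
```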

Page 15: Searching a Linear Subspace

◦Algorithm LE (Model algorithm for solving LEP)

LE1. [Test for convergence] If the conditions for convergence are satisfied, the algorithm terminates with $x_k$.

LE2. [Compute a feasible search direction] Compute a nonzero vector $p_z$, the unrestricted direction of the search. The actual direction of the search is then

$$p_k = Z p_z$$

Page 16: Searching a Linear Subspace

LE3. [Compute a step length] Compute a positive $\alpha_k$ for which $f(x_k + \alpha_k p_k) < f(x_k)$.

LE4. [Update the estimate of the minimum] $x_{k+1} = x_k + \alpha_k p_k$ and go back to LE1.

◦Computation of the Search Direction

As is often the case in this course, the question of the search direction starts with the second-order Taylor series expansion. As in the unconstrained case, we derive the approximation of the objective function around some point $x_k$ as

$$f(x_k + p_k) \approx f(x_k) + \nabla_x f(x_k)' p_k + \tfrac{1}{2}\, p_k' \nabla_{xx}^2 f(x_k)\, p_k$$

Page 17: Searching a Linear Subspace

Substituting only feasible steps for all possible steps, we derive the same expression in terms of the null space:

$$f(x_k + Z p_z) \approx f(x_k) + \nabla_x f(x_k)' Z p_z + \tfrac{1}{2}\, p_z' Z' \nabla_{xx}^2 f(x_k) Z\, p_z$$

Solving for the projection based on the Newton-Raphson concept, we derive much the same step as in the unconstrained optimization problem:

$$Z' \nabla_{xx}^2 f(x_k) Z \, p_z = -Z' \nabla_x f(x_k)$$

$$p_z = -\left[ Z' \nabla_{xx}^2 f(x_k) Z \right]^{-1} Z' \nabla_x f(x_k)$$
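The reduced Newton step above can be sketched in a few lines (the gradient, Hessian, and function name here are illustrative placeholders, not the lecture's example):

```python
import numpy as np

# Null-space Newton step: solve the reduced system
#   (Z' H Z) p_z = -Z' g,   then lift back via p = Z p_z.
def nullspace_newton_step(g, H, Z):
    reduced_H = Z.T @ H @ Z                       # (n-t) x (n-t)
    reduced_g = Z.T @ g                           # (n-t,)
    return Z @ np.linalg.solve(reduced_H, -reduced_g)

# Toy quadratic: after one step the reduced gradient Z'(g + H p) vanishes.
H = np.diag([2.0, 4.0, 6.0])                      # hypothetical Hessian
g = np.array([1.0, -2.0, 3.0])                    # hypothetical gradient at x_k
Z = np.array([[0.5], [-2.25], [1.0]])             # nullspace matrix of the example A
p = nullspace_newton_step(g, H, Z)
print(np.allclose(Z.T @ (g + H @ p), 0))          # True
```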

Page 18: Searching a Linear Subspace

As an example, assume that the maximization problem is

$$\max_{x_1, x_2, x_3} f(x_1, x_2, x_3) = 20 + 2.25 x_1 + 3.50 x_2 + 2.75 x_3 - 0.05 x_1^2 - 0.015 x_1 x_2 - 0.015 x_1 x_3 - 0.040 x_2^2 - 0.002 x_2 x_3 - 0.030 x_3^2$$

$$\text{s.t.} \quad x_1 + 2 x_2 + 4 x_3 = 160$$
$$\phantom{\text{s.t.}} \quad 3 x_1 + 2 x_2 + 3 x_3 = 190$$

Page 19: Searching a Linear Subspace

◦This problem has a relatively simple gradient vector and Hessian matrix:

$$\nabla_x f(x) = \begin{bmatrix} 2.25 - 0.100 x_1 - 0.015 x_2 - 0.015 x_3 \\ 3.50 - 0.015 x_1 - 0.080 x_2 - 0.002 x_3 \\ 2.75 - 0.015 x_1 - 0.002 x_2 - 0.060 x_3 \end{bmatrix}$$

$$\nabla_{xx}^2 f(x) = -\begin{bmatrix} 0.100 & 0.015 & 0.015 \\ 0.015 & 0.080 & 0.002 \\ 0.015 & 0.002 & 0.060 \end{bmatrix}$$

Page 20: Searching a Linear Subspace

◦Let us start from the initial solution

$$x_1 = \begin{bmatrix} 20 \\ 50 \\ 10 \end{bmatrix}$$

◦To compute a feasible step, we form the reduced Newton step using the nullspace matrix from the AQ factorization:

$$p_z = -\left[ Z' \nabla_{xx}^2 f(x_1) Z \right]^{-1} Z' \nabla_x f(x_1), \qquad Z = \begin{bmatrix} -0.1990 \\ 0.8955 \\ -0.3980 \end{bmatrix}$$

with the gradient and Hessian evaluated at $x_1$ as on the previous slide.

Page 21: Searching a Linear Subspace

◦In this case the step works out to 13.21024.

◦Hence, using the concept,

$$x_2 = x_1 - 13.21024 \, Z = \begin{bmatrix} 20 \\ 50 \\ 10 \end{bmatrix} - 13.21024 \begin{bmatrix} -0.1990 \\ 0.8955 \\ -0.3980 \end{bmatrix} = \begin{bmatrix} 22.6289 \\ 38.1679 \\ 15.2579 \end{bmatrix}$$
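A quick numerical check (a sketch, verifying feasibility rather than the slide's arithmetic): any step along Z preserves both equality constraints, which is exactly what lets the search run over the null-space coordinate alone.

```python
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [3.0, 2.0, 3.0]])
b = np.array([160.0, 190.0])
x1 = np.array([20.0, 50.0, 10.0])           # initial feasible solution
Z = np.array([-0.1990, 0.8955, -0.3980])    # nullspace vector from the AQ step

assert np.allclose(A @ x1, b)               # x1 satisfies both constraints
x2 = x1 - 13.21024 * Z                      # the update reported on this slide
print(np.allclose(A @ x2, b))               # True: x2 is still feasible
```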

Page 22: Searching a Linear Subspace

Linear Inequality Constraints

◦The general optimization problem with linear inequality constraints can be written as:

$$\text{LIP:} \quad \min_x f(x) \quad \text{s.t.} \;\; Ax \geq b$$

◦This problem differs from the linearly equality-constrained problem in that some of the constraints may not be active at a given iteration, or may become active at the next iteration.

Page 23: Searching a Linear Subspace

◦Algorithm LI

LI1. [Test for convergence] If the conditions for convergence are met at $x_k$, terminate.

LI2. [Choose which logic to perform] Decide whether to continue minimizing in the current subspace or whether to delete a constraint from the working set. If a constraint is to be deleted, go to step LI6. If the same working set is to be retained, go on to step LI3.

LI3. [Compute a feasible search direction] Compute a vector $p_k$ by applying the null-space method for equality constraints to the current working set.

Page 24: Searching a Linear Subspace

LI4. [Compute a step length] Compute $\alpha_k$; in this case, we must determine whether the optimum step length will violate a constraint. Specifically, $\alpha$ is equal to the traditional $\alpha_k$ or $\min_i(\gamma_i)$, which is defined as the minimum distance to a constraint. If the optimum step is less than the minimum distance to another constraint, go to LI7; otherwise go to LI5.

LI5. [Add a constraint to the working set] If the optimum step is greater than the minimum distance to another constraint, then you have to add, or make active, the constraint associated with $\gamma_i$. After adding this constraint, go to LI7.

Page 25: Searching a Linear Subspace

LI6. [Delete a constraint] If the marginal value of one of the Lagrange multipliers is negative, then the associated constraint is binding the objective function suboptimally and the constraint should be eliminated. Delete the constraint from the active set and return to LI1.

LI7. [Update the estimate of the solution].

$x_{k+1} = x_k + \alpha_k p_k$ and go back to LI1.

Page 26: Searching a Linear Subspace

◦A significant portion of the discussion of the LIP algorithm centers on the addition or elimination of an active constraint. The concept is identical to the minimum ratio rule in linear programming. Specifically, the minimum ratio rule in linear programming identifies the equation (row) which must leave the solution in order to maintain feasibility. The rule is to select the row with the minimum positive ratio of the current right-hand side to the $a_{ij}$ coefficient in the matrix.

Page 27: Searching a Linear Subspace

◦In the nonlinear problem, we define

$$\gamma_i = \frac{b_i - a_i' x}{a_i' p} \qquad \text{for } a_i' p < 0$$
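The ratio test above can be sketched as a short routine (function name and the toy constraint system are my own illustrations):

```python
import numpy as np

# Ratio test for LI4: the largest step from x along p before some inactive
# constraint a_i' x >= b_i becomes active. Only constraints with a_i' p < 0
# (i.e., constraints we are moving toward) can bind.
def max_feasible_step(A, b, x, p):
    gammas = []
    for a_i, b_i in zip(A, b):
        slope = a_i @ p
        if slope < 0:
            gammas.append((b_i - a_i @ x) / slope)
    return min(gammas) if gammas else np.inf

# hypothetical inequality system A x >= b: the bounds x >= 0 and y >= 0
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0])
x = np.array([2.0, 1.0])
p = np.array([-1.0, -1.0])

print(max_feasible_step(A, b, x, p))       # 1.0: the bound y >= 0 binds first
```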