Pseudoinverse - Advanced Robotics (Nakamura)

Upload: rofiqi-maulana-geometri

Post on 03-Apr-2018


  • 7/28/2019 Pseudoinverse Advenced Robotics Nakamura

    1/22


If F is constrained by ||F|| <= 1, the set of all the possible end-effector accelerations forms an ellipsoid. This ellipsoid is called the generalized inertia ellipsoid (Asada 1983). Note that A^{-1} is symmetric and positive definite, and J A^{-1} J^T is also symmetric and positive definite if rank J = 6. Hence, Eq. 2.110 can be transformed to

F = (J A^{-1} J^T)^{-1} ẍ   (if n > 6)
F = (J^T)^{-1} A J^{-1} ẍ   (if n = 6)        (2.111)

(J A^{-1} J^T)^{-1} can be interpreted as the inertia matrix that we feel when we accelerate the end-effector by applying direct force to the end-effector. When ẍ takes all the values such that ||ẍ|| <= 1, the necessary end-effector force takes all the values inside the ellipsoid determined by the SVD of (J A^{-1} J^T)^{-1}; that is,

(J A^{-1} J^T)^{-1} = U Σ V^T        (2.112)

The direction of the largest (smallest) inertia is that of the first (nth) column vector of U, and the corresponding inertia is the largest (smallest) singular value.

The coefficient J A^{-1} J^T of Eq. 2.110 is referred to as the mechanical impedance matrix. Note that the mechanical impedance matrix is defined even if rank J < 6, although the inertia matrix is not. If rank J = 6, the mechanical impedance matrix and the inertia matrix are inverses of each other.

2.4 Generalized Inverse and Pseudoinverse

2.4.1 Definitions

For A ∈ R^{m×n} and X ∈ R^{n×m}, the following equations are used to define a generalized inverse, a reflexive generalized inverse, and a pseudoinverse of A (Boullion and Odell 1971):

A X A = A        (2.113)
X A X = X        (2.114)
(A X)^T = A X        (2.115)
(X A)^T = X A        (2.116)

Equations 2.113 through 2.116 are called the Penrose conditions (Penrose 1955).
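These four conditions are easy to check numerically. A quick sketch in NumPy (the rank-deficient test matrix is chosen for illustration; np.linalg.pinv returns the Moore-Penrose pseudoinverse):

```python
import numpy as np

# Illustrative rank-deficient 2x3 matrix.
A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0]])

# Moore-Penrose pseudoinverse; it satisfies all four Penrose conditions.
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)        # Eq. 2.113: AXA = A
assert np.allclose(X @ A @ X, X)        # Eq. 2.114: XAX = X
assert np.allclose((A @ X).T, A @ X)    # Eq. 2.115: (AX)^T = AX
assert np.allclose((X @ A).T, X @ A)    # Eq. 2.116: (XA)^T = XA
```

The last three assertions are what distinguish the pseudoinverse from a mere generalized inverse, which need only satisfy the first.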

  • 7/28/2019 Pseudoinverse Advenced Robotics Nakamura

    2/22

    42 Chapter 2 Mathematical Toolbox

Definition 2.2 (Generalized Inverse)
A generalized inverse of a matrix A ∈ R^{m×n} is a matrix X = A^- ∈ R^{n×m} satisfying Eq. 2.113.

Definition 2.3 (Reflexive Generalized Inverse)
A reflexive generalized inverse of a matrix A ∈ R^{m×n} is a matrix X = A_r^- ∈ R^{n×m} satisfying Eqs. 2.113 and 2.114.

Definition 2.4 (Pseudoinverse)
A pseudoinverse of a matrix A ∈ R^{m×n} is a matrix X = A^# ∈ R^{n×m} satisfying Eqs. 2.113 through 2.116.

A pseudoinverse is sometimes called the Moore-Penrose inverse after the pioneering works by Moore (1920, 1935) and Penrose (1955).

Example 2.17
Let A ∈ R^{2×3} and P, Q, and R ∈ R^{3×2} be as follows:

A = ( 1 -1 1 ; -1 1 -1 ),   P = ( 1 0 ; 0 1 ; 0 1 ),   Q = ( 1 0 ; 0 0 ; 0 0 ),   R = 1/6 ( 1 -1 ; -1 1 ; 1 -1 )        (2.117)

Now, we show that P, Q, and R are a generalized inverse, a reflexive generalized inverse, and a pseudoinverse of A, respectively.

A P A = ( 1 -1 1 ; -1 1 -1 ) = A

P A P = ( 1 0 ; -1 0 ; -1 0 ) ≠ P

(A P)^T = ( 1 -1 ; 0 0 ) ≠ A P


(P A)^T = ( 1 -1 -1 ; -1 1 1 ; 1 -1 -1 ) ≠ P A        (2.118)

Equation 2.118 shows that P satisfies Eq. 2.113 only. Hence, P is a generalized inverse of A.

A Q A = ( 1 -1 1 ; -1 1 -1 ) = A

Q A Q = ( 1 0 ; 0 0 ; 0 0 ) = Q

(A Q)^T = ( 1 -1 ; 0 0 ) ≠ A Q

(Q A)^T = ( 1 0 0 ; -1 0 0 ; 1 0 0 ) ≠ Q A        (2.119)

Equation 2.119 indicates that Q satisfies Eqs. 2.113 and 2.114, but does not satisfy Eqs. 2.115 and 2.116. This result implies that Q is a reflexive generalized inverse.

A R A = ( 1 -1 1 ; -1 1 -1 ) = A

R A R = 1/6 ( 1 -1 ; -1 1 ; 1 -1 ) = R

(A R)^T = 1/2 ( 1 -1 ; -1 1 ) = A R

(R A)^T = 1/3 ( 1 -1 1 ; -1 1 -1 ; 1 -1 1 ) = R A        (2.120)

R satisfies all four Penrose conditions. Therefore, we conclude that R is a pseudoinverse of A.

2.4.2 Properties

Generalized Inverse†2.1

(1) For a linear equation

y = A x        (2.121)

†2.1 Rao and Mitra 1971; Kodama and Suda 1978.


where A ∈ R^{m×n}, x ∈ R^n, and y ∈ R^m, a necessary and sufficient condition for the existence of a solution x is

rank [A y] = rank A        (2.122)

If Eq. 2.122 is satisfied for Eq. 2.121, then

x = A^- y        (2.123)

is a solution of Eq. 2.121.
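The existence test of Eq. 2.122 is a one-line rank comparison; a sketch in NumPy (the matrix and vectors here are chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0]])
y1 = np.array([1.0, -1.0])   # lies in the range of A
y2 = np.array([1.0, 0.0])    # does not

def has_exact_solution(A, y):
    # Eq. 2.122: an exact solution exists iff rank [A y] = rank A.
    return np.linalg.matrix_rank(np.column_stack([A, y])) == np.linalg.matrix_rank(A)

print(has_exact_solution(A, y1))  # True
print(has_exact_solution(A, y2))  # False
```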

Example 2.18
We now discuss the solution of Eq. 2.121 with the A matrix given in Example 2.17, for two different y vectors; namely,

y1 = (1, -1)^T,   y2 = (1, 0)^T        (2.124)

For both y1 and y2, Eq. 2.122 becomes

rank [A y1] = rank ( 1 -1 1 1 ; -1 1 -1 -1 ) = 1 = rank A

rank [A y2] = rank ( 1 -1 1 1 ; -1 1 -1 0 ) = 2 ≠ rank A = 1        (2.125)

We compute x1 and x2 as follows:

x1 = A^- y1 = (1, -1, -1)^T,   x2 = A^- y2 = (1, 0, 0)^T        (2.126)


We verify by the following equations that x1 is an exact solution of Eq. 2.121 for y = y1, but that x2 is not an exact solution for y = y2:

A x1 = (1, -1)^T = y1

A x2 = (1, -1)^T ≠ y2        (2.127)

(2) For an arbitrary A ∈ R^{m×n}, there exists at least one generalized inverse A^-, and rank A^- ≥ rank A. A^- coincides with a reflexive generalized inverse if and only if rank A^- = rank A.

Example 2.19
Let's compare the ranks of A, P = A^-, and Q = A_r^- used in Example 2.17.

rank A = rank ( 1 -1 1 ; -1 1 -1 ) = 1

rank A^- = rank ( 1 0 ; 0 1 ; 0 1 ) = 2

rank A_r^- = rank ( 1 0 ; 0 0 ; 0 0 ) = 1        (2.128)

Therefore,

rank A^- > rank A,   A^- ≠ A_r^-        (2.129)

Moreover,

rank A_r^- = rank A        (2.130)


(3) Generally, A^- and A_r^- are not unique. If A is square and nonsingular, then the generalized inverse A^- and the reflexive generalized inverse A_r^- are unique, and A^- = A_r^- = A^{-1}.

(4) A A^- and A^- A are idempotent.†2.2

Example 2.20
Now, we use A and P = A^- in Example 2.17. A A^- and A^- A become:

A A^- = A P = ( 1 0 ; -1 0 )

A^- A = P A = ( 1 -1 1 ; -1 1 -1 ; -1 1 -1 )        (2.131)

Therefore,

(A A^-)^2 = A A^- A A^- = ( 1 0 ; -1 0 ) ( 1 0 ; -1 0 ) = ( 1 0 ; -1 0 ) = A A^-        (2.132)

(A^- A)^2 = A^- A A^- A = ( 1 -1 1 ; -1 1 -1 ; -1 1 -1 ) = A^- A        (2.133)

We can readily prove statement 4 rigorously by substituting A A^- A = A into Eqs. 2.132 and 2.133.

(5) Using an arbitrary matrix U ∈ R^{n×m} and a generalized inverse A^-, all the generalized inverses of A can be represented by the following X:

X = A^- + U - A^- A U A A^-        (2.134)

†2.2 A square matrix M is called idempotent if M^2 = M.


(A X)^T ≠ A X,   (X A)^T ≠ X A        (2.138)

Since only Eq. 2.113 is satisfied among the four Penrose conditions of Eqs. 2.113 through 2.116, X is a generalized inverse, and is neither a reflexive generalized inverse nor a pseudoinverse. Note that, since a pseudoinverse is also a generalized inverse and is unique, once we get it, we can compute every generalized inverse using Eq. 2.134.

Pseudoinverse†2.3

(1) For a given A ∈ R^{m×n}, the pseudoinverse A^# ∈ R^{n×m} is unique, whereas A^- and A_r^- are not necessarily unique. Let the sets of A^-, A_r^-, and A^# be S^-, S_r^-, and S^#, respectively; then, the following inclusion holds:

S^# ⊂ S_r^- ⊂ S^-        (2.139)

(2) (A^#)^# = A.

(3) (A^T)^# = (A^#)^T.

(4) A^# = (A^T A)^# A^T = A^T (A A^T)^#.

For A ∈ R^{m×n}, if m < n and rank A = m, then A A^T is nonsingular and

A^# = A^T (A A^T)^{-1}        (2.140)

If m > n and rank A = n, then A^T A is nonsingular and

A^# = (A^T A)^{-1} A^T        (2.141)

If m = n and rank A = m, then

A^# = A^{-1}        (2.142)
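For the full-row-rank case, Eq. 2.140 can be checked directly against a general-purpose routine; a sketch (the matrix is illustrative):

```python
import numpy as np

# Full-row-rank case (m < n, rank A = m): Eq. 2.140, A# = A^T (A A^T)^{-1}.
A = np.array([[1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])

A_sharp = A.T @ np.linalg.inv(A @ A.T)

assert np.allclose(A_sharp, np.linalg.pinv(A))
assert np.allclose(A @ A_sharp, np.eye(2))   # A A# = E when A has full row rank
```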

Example 2.22
A ∈ R^{2×3} is defined as follows:

A = ( 1 0 1 ; -1 1 0 )        (2.143)

Since A is full rank and m = 2 < 3 = n, we can use Eq. 2.140 to compute A^#; namely,

†2.3 Rao and Mitra 1971; Boullion and Odell 1971; Kodama and Suda 1978.


A^# = A^T (A A^T)^{-1} = 1/3 ( 1 -1 ; 1 2 ; 2 1 )        (2.144)

Equation 2.141 does not work for A because m < n and

det (A^T A) = det ( 2 -1 1 ; -1 1 0 ; 1 0 1 ) = 0        (2.145)

and, therefore, (A^T A)^{-1} is not defined. For any two matrices M1 and M2 whose product M1 M2 is defined, the following inequality holds:

rank (M1 M2) ≤ min (rank M1, rank M2)        (2.146)

Therefore, for A ∈ R^{m×n} with m < n,

rank (A^T A) ≤ min (rank A^T, rank A) = rank A ≤ min (m, n) = m        (2.147)

Since A^T A ∈ R^{n×n}, A^T A is not full rank and det (A^T A) = 0.

(5) A^# A, A A^#, E - A^# A, and E - A A^# are all symmetric and idempotent, where E represents an identity matrix of appropriate dimension.

Example 2.23
For matrix A as used in Example 2.22, we can compute A^# A, A A^#, E - A^# A, and E - A A^# as follows:

A^# A = 1/3 ( 2 -1 1 ; -1 2 1 ; 1 1 2 )


A A^# = ( 1 0 ; 0 1 )

E - A^# A = 1/3 ( 1 1 -1 ; 1 1 -1 ; -1 -1 1 )

E - A A^# = ( 0 0 ; 0 0 )        (2.148)

Verify the idempotency of the four matrices.

(6) If A ∈ R^{n×n} is symmetric and idempotent, then, for any matrix B ∈ R^{m×n}, the following equation holds (Maciejewski and Klein 1985):

(B A)^# = A (B A)^#        (2.149)

Example 2.24
We use E - A^# A as obtained in Eq. 2.148 as an example of a symmetric and idempotent matrix. Now, our A ∈ R^{3×3} and B ∈ R^{2×3} in Eq. 2.149 are as follows:

A = 1/3 ( 1 1 -1 ; 1 1 -1 ; -1 -1 1 ),   B = ( 1 2 3 ; -1 0 1 )

(B A)^# is computed as

(B A)^# = { 1/3 ( -4 -4 4 ; 2 2 -2 ) }^#        (2.150)

        = ( -0.2 0.1 ; -0.2 0.1 ; 0.2 -0.1 )        (2.151)

A (B A)^# is, then, given by

A (B A)^# = ( -0.2 0.1 ; -0.2 0.1 ; 0.2 -0.1 )        (2.152)

Equations 2.151 and 2.152 indicate that A (B A)^# = (B A)^#. The same result is obtained for any 2 × 3 matrix B. This relationship will be used later to simplify the computation for utilizing kinematic redundancy.
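The property is easy to spot-check numerically. The sketch below builds a symmetric idempotent A as the orthogonal projection onto a line (the vector v and the random B are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric idempotent matrix: the orthogonal projection onto span{v}.
v = np.array([[1.0], [1.0], [-1.0]])
A = v @ v.T / (v.T @ v)          # A = A^T and A @ A = A

B = rng.standard_normal((2, 3))  # an arbitrary 2x3 matrix

BA_sharp = np.linalg.pinv(B @ A)
# Eq. 2.149: (BA)# = A (BA)#
assert np.allclose(A @ BA_sharp, BA_sharp)
```

The assertion holds for any B because the range of (BA)# lies inside the range of A, on which the projection A acts as the identity.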

  • 7/28/2019 Pseudoinverse Advenced Robotics Nakamura

    11/22

    2.4 Generalized Inverse and Pseudoinverse 51

Properties 2, 3, 4, and 6 are readily obtained by verifying Eqs. 2.113 through 2.116. Property 5 is obvious from Eqs. 2.115 and 2.116. For further properties, see Rao and Mitra (1971), Boullion and Odell (1971), and Kodama and Suda (1978).

2.4.3 Solving Linear Equations

The pseudoinverse has wide applications in solving various types of linear problems. The following theorem is particularly significant within the scope of this book.

Theorem 2.5 (Least-Squares Solutions)
For a linear equation of x ∈ R^n,

A x = y        (2.153)

where A ∈ R^{m×n} and y ∈ R^m, the general form of the least-squares solutions is given by

x = A^# y + (E - A^# A) z        (2.154)

where z ∈ R^n is an arbitrary vector, and E is an identity matrix. The minimum-norm solution among all the solutions provided by Eq. 2.154 is

x = A^# y        (2.155)

For Eq. 2.153, the least-squares solution is the x that minimizes the error norm; namely,

min || y - A x ||        (2.156)

where || * || denotes the Euclidean norm of a vector *. The least-squares solution is not necessarily unique. Every solution is obtained by changing z. Equation 2.155 implies the solution that also minimizes || x || among all the solutions given by Eq. 2.154.

When at least one exact solution exists for Eq. 2.153, Eq. 2.154 yields the general form of all the exact solutions. Note that the first and second terms of Eq. 2.154 are perpendicular to each other. Indeed,

(A^# y)^T (E - A^# A) z = y^T (A^#)^T (E - A^# A) z
                        = y^T {(E - A^# A) A^#}^T z
                        = 0        (2.157)

where Eq. 2.114 and the symmetry of E - A^# A were used.
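Theorem 2.5 can be exercised numerically. In the sketch below (matrix and vector chosen for illustration), every choice of z gives the same residual norm, while z = 0 gives the smallest solution norm:

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0]])
y = np.array([1.0, 0.0])          # no exact solution exists for this y

A_sharp = np.linalg.pinv(A)
E = np.eye(3)

x_min = A_sharp @ y               # Eq. 2.155: minimum-norm least-squares solution
for _ in range(5):
    z = rng.standard_normal(3)
    x = A_sharp @ y + (E - A_sharp @ A) @ z      # Eq. 2.154
    # Every x has the same (minimal) residual norm ...
    assert np.isclose(np.linalg.norm(y - A @ x), np.linalg.norm(y - A @ x_min))
    # ... and z = 0 gives the smallest ||x||, by the orthogonality of Eq. 2.157.
    assert np.linalg.norm(x_min) <= np.linalg.norm(x) + 1e-12
```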


Example 2.25
We revisit Example 2.18. The linear equation of Eq. 2.153 is now given with

A = ( 1 -1 1 ; -1 1 -1 )        (2.158)

and y1 and y2 ∈ R^2 are the following vectors:

y1 = (1, -1)^T,   y2 = (1, 0)^T        (2.159)

Recall that, in Example 2.18, Eq. 2.153 has an exact solution for y1, but does not have one for y2. The general solution for y = y1 is computed by Eq. 2.154 and is investigated using

A^# = 1/6 ( 1 -1 ; -1 1 ; 1 -1 )        (2.160)

For an arbitrary z, the error norm of Eq. 2.156 is identically zero. Indeed,

|| y1 - A x || = || y1 - {A A^# y1 + A (E - A^# A) z} ||
             = || (E - A A^#) y1 ||
             = || 1/2 ( 1 1 ; 1 1 ) (1, -1)^T ||
             = 0        (2.161)

Equation 2.161 verifies that x of Eq. 2.154 with an arbitrary z and y = y1 is an exact solution of Eq. 2.153 with y = y1. When z = 0, we have

x = A^# y1 = 1/3 (1, -1, 1)^T ≡ x1        (2.162)

When z = (1 1 1)^T, for example, we have


x = A^# y1 + (E - A^# A) z = (1, 1, 1)^T ≡ x2        (2.163)

Thus x1 and x2 are two different solutions of Eq. 2.153; they have the following relationship:

|| x1 || = 1/√3 < √3 = || x2 ||        (2.164)

In general, the solution with z = 0 (namely, Eq. 2.155) provides the solution with minimum magnitude.

For y = y2, the error norm of Eq. 2.156 becomes

|| y2 - A x || = || (E - A A^#) y2 || = || 1/2 (1, 1)^T || = 1/√2        (2.165)

Equation 2.165 implies that, since Eq. 2.153 does not have an exact solution for y = y2, x of Eq. 2.154 with an arbitrary z and y = y2 is an approximate solution. Note that the norm of the error is 1/√2, regardless of z. When z = 0, we have the following equation:

x = A^# y2 = 1/6 (1, -1, 1)^T ≡ x3        (2.166)

For z = (1 1 1)^T, we obtain

x = A^# y2 + (E - A^# A) z = 1/6 (1, -1, 1)^T + 1/3 (2, 4, 2)^T = 1/6 (5, 7, 5)^T ≡ x4        (2.167)

Hence, we have a result similar to Eq. 2.164, as follows:


|| x3 || = 0.2887 < 1.6583 = || x4 ||        (2.168)

In general, x of Eq. 2.154 with z = 0 and y = y2 provides the approximate solution of Eq. 2.153 that has the minimum magnitude.

2.4.4 Weighted Pseudoinverse

Theorem 2.5 offers the general form of the least-squares solutions (Eq. 2.154), and the least-squares solution with the minimum norm based on the Euclidean norm (Eq. 2.155). In many physical problems, the components of x or y may have different physical dimensions. Even if the components are physically consistent, the significance of magnitude can be different. For example, if y is the joint torque vector of a robot manipulator, a moderate value of torque for a large motor would have critical meaning for a small motor. In these cases, it would be necessary to evaluate the magnitude of the error vector and the magnitude of the solution based on an appropriate weighting of the components. In this subsection, we derive a result similar to Theorem 2.5, but for the weighted norm.

Let || a ||_W represent the weighted norm such that

|| a ||_W = √(a^T W a)        (2.169)

where W ∈ R^{n×n} is a symmetric positive definite matrix. A symmetric positive definite matrix can be represented as follows:

W = W0^T W0        (2.170)

where W0 ∈ R^{n×n} is nonsingular. In Eq. 2.170, W0 is not unique. However, for any symmetric and positive definite matrix W, there exists a unique symmetric and positive definite matrix W0 satisfying Eq. 2.170, which is called the square root of W.
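One standard way to obtain the symmetric positive definite square root is through the eigendecomposition of W; a sketch (the function name sqrtm_spd and the test matrix are assumptions for illustration):

```python
import numpy as np

def sqrtm_spd(W):
    """Symmetric positive definite square root W0 of an SPD matrix W.

    Since W0 is symmetric, W0^T W0 = W0 W0 = W, matching Eq. 2.170.
    """
    eigvals, eigvecs = np.linalg.eigh(W)
    return eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T

W = np.array([[2.0, 1.0],
              [1.0, 2.0]])
W0 = sqrtm_spd(W)

assert np.allclose(W0, W0.T)        # symmetric
assert np.allclose(W0.T @ W0, W)    # a square root of W
```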

The following two equivalences can be readily shown:

min || y - A x ||_P  ⇔  min || P0 y - P0 A x ||        (2.171)

min || x ||_Q  ⇔  min || Q0 x ||        (2.172)

where P0 and Q0 are the square roots of the weighting matrices P and Q defined by Eq. 2.170. Define

x* = Q0 x        (2.173)


It is obvious that x* = Q0 x computed from the x that minimizes || P0 y - P0 A x || minimizes || P0 y - P0 A Q0^{-1} x* ||, and vice versa, because Q0 is nonsingular. Therefore, the general form of the least-squares solutions of P0 A x = P0 y can be represented by

x = Q0^{-1} x*        (2.174)

x* = (P0 A Q0^{-1})^# P0 y + {E - (P0 A Q0^{-1})^# (P0 A Q0^{-1})} z

We can see from Theorem 2.5 that x* = (P0 A Q0^{-1})^# P0 y, namely x = Q0^{-1} (P0 A Q0^{-1})^# P0 y, is the one that minimizes || x* || = || Q0 x || among all the least-squares solutions. From Eqs. 2.171 and 2.172, we can conclude that Eq. 2.174 provides the general form of the least P-weighted-norm solutions, and that x = Q0^{-1} (P0 A Q0^{-1})^# P0 y is the minimum Q-weighted-norm solution among all the least P-weighted-norm solutions. We summarize this discussion in the following theorem.

Theorem 2.6 (Weighted-Norm Solutions)
For a linear equation of x ∈ R^n,

A x = y        (2.175)

where A ∈ R^{m×n} and y ∈ R^m, the general form of the least P-weighted-norm solutions is given by

x = Q0^{-1} A*^# P0 y + Q0^{-1} (E - A*^# A*) z        (2.176)

A* = P0 A Q0^{-1}        (2.177)

where P0 and Q0 are defined for P and Q by Eq. 2.170, and z ∈ R^n is an arbitrary vector. The minimum Q-weighted-norm solution among all the solutions provided by Eq. 2.176 is

x = Q0^{-1} A*^# P0 y        (2.178)

where Q0^{-1} A*^# P0 is called a weighted pseudoinverse of A.
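Theorem 2.6 reduces to a handful of matrix products. A sketch with diagonal weights (the values of P0 and Q0 follow the ones printed in Example 2.26; A and y are the illustrative data used there):

```python
import numpy as np

A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0]])
y = np.array([1.0, 0.0])

P0 = np.diag([1.0, 2.0])          # square root of P = diag(1, 4)
Q0 = np.diag([3.0, 2.0, 1.0])     # square root of Q = diag(9, 4, 1)

A_star = P0 @ A @ np.linalg.inv(Q0)                        # Eq. 2.177
x = np.linalg.inv(Q0) @ np.linalg.pinv(A_star) @ P0 @ y    # Eq. 2.178

print(np.round(x, 4))   # approx. [0.0163, -0.0367, 0.1469]
```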

Example 2.26
We again discuss a linear equation of Eq. 2.175 with

A = ( 1 -1 1 ; -1 1 -1 ),   y = y2 = (1, 0)^T        (2.179)


Suppose P ∈ R^{2×2} and Q ∈ R^{3×3} are given as follows:

P = ( 1 0 ; 0 4 ),   Q = ( 9 0 0 ; 0 4 0 ; 0 0 1 )        (2.180)

We choose P0 and Q0 as the square roots of P and Q; namely,

P0 = ( 1 0 ; 0 2 ),   Q0 = ( 3 0 0 ; 0 2 0 ; 0 0 1 )        (2.181)

We have

A* = P0 A Q0^{-1} = ( 1 0 ; 0 2 ) ( 1 -1 1 ; -1 1 -1 ) ( 3 0 0 ; 0 2 0 ; 0 0 1 )^{-1}
   = 1/6 ( 2 -3 6 ; -4 6 -12 )        (2.182)

A*^# = ( 0.0490 -0.0980 ; -0.0735 0.1469 ; 0.1469 -0.2939 )

E - A*^# A* = ( 0.9184 0.1224 -0.2449 ; 0.1224 0.8163 0.3673 ; -0.2449 0.3673 0.2653 )

The weighted-norm solutions are compared with the solutions we obtained in Example 2.25. When z = 0, we obtain the following solution using Eq. 2.178:

x = Q0^{-1} A*^# P0 y2
  = ( 3 0 0 ; 0 2 0 ; 0 0 1 )^{-1} ( 0.0490 -0.0980 ; -0.0735 0.1469 ; 0.1469 -0.2939 ) ( 1 0 ; 0 2 ) (1, 0)^T
  = ( 0.0163, -0.0367, 0.1469 )^T ≡ x5        (2.183)


When z = (1 1 1)^T, we get the following solution from Eq. 2.176:

x = x5 + Q0^{-1} (E - A*^# A*) z
  = ( 0.0163, -0.0367, 0.1469 )^T + ( 3 0 0 ; 0 2 0 ; 0 0 1 )^{-1} ( 0.9184 0.1224 -0.2449 ; 0.1224 0.8163 0.3673 ; -0.2449 0.3673 0.2653 ) (1, 1, 1)^T
  = ( 0.2816, 0.6163, 0.5347 )^T ≡ x6        (2.184)

Note that, since Eq. 2.175 with A and y defined by Eq. 2.179 has no exact solution, as we demonstrated in Example 2.18, x5 and x6 are also approximations.

Let's compare x5 and x6 with x3 and x4 in Eqs. 2.166 and 2.167, respectively. The P-weighted norms of the error vectors are as follows:

|| y2 - A x3 ||_P = || y2 - A x4 ||_P = 1.1180 > 0.8944 = || y2 - A x5 ||_P = || y2 - A x6 ||_P        (2.185)

On the other hand, the Euclidean norms of the error vectors are

|| y2 - A x5 || = || y2 - A x6 || = 0.8246 > 1/√2 = || y2 - A x3 || = || y2 - A x4 ||        (2.186)

Note that the Q-weighted norms of the solutions are

|| x5 ||_Q = 0.1714 < 1.5872 = || x6 ||_Q        (2.187)

Equations 2.185 and 2.186 show that x3 and x4 are better approximations than x5 and x6 in the sense of the Euclidean norm and that, on the contrary, x5 and x6 are better than x3 and x4 in the sense of the P-weighted norm.

2.4.5 Mappings and Projections

Pseudoinverses possess significant geometric characteristics as projections. The linear mapping by a linear equation

y = A x        (2.188)


Figure 2.4 Linear mapping by Eq. 2.188. (R(A): range space of A; N(A): null space of A; dim R(A) = rank A; dim N(A) = n - rank A.)

where A ∈ R^{m×n}, y ∈ R^m, and x ∈ R^n, is shown in Fig. 2.4. Let R(A) ⊂ R^m and N(A) ⊂ R^n be the range space†2.4 of A and the null space†2.5 of A, respectively. We represent the orthogonal complements†2.6 of R(A) and N(A) by R(A)⊥ ⊂ R^m and N(A)⊥ ⊂ R^n, respectively. The four subspaces R(A), R(A)⊥, N(A), and N(A)⊥ have the following relationships with the pseudoinverse of A:

R(A) = N(A^#)⊥ = R(A A^#) = N(E - A A^#)        (2.189)
R(A)⊥ = N(A^#) = N(A A^#) = R(E - A A^#)        (2.190)
N(A) = R(A^#)⊥ = N(A^# A) = R(E - A^# A)        (2.191)
N(A)⊥ = R(A^#) = R(A^# A) = N(E - A^# A)        (2.192)

These four relationships are illustrated conceptually in Fig. 2.5. We can interpret R(A)⊥ and N(A)⊥ as follows: If y has a nonzero component in R(A)⊥, Eq. 2.188 does not have an exact solution for that y. If x has a nonzero component in N(A)⊥, its mapping by Eq. 2.188 is nonzero. A square matrix M ∈ R^{n×n} is called an orthogonal projection if its mapping M x of x ∈ R^n is perpendicular to x - M x for any x. It is

†2.4 The set of all y obtained by computing Eq. 2.188 for every x ∈ R^n makes a linear subspace in R^m. This linear subspace is termed the range space of A.

†2.5 The set of all x that provide y = 0 in Eq. 2.188 makes a linear subspace in R^n. This linear subspace is termed the null space of A.

†2.6 The orthogonal complement of a linear subspace implies the set of all the vectors perpendicular to all the vectors in the subspace. The orthogonal complement is also a linear subspace of the whole space.


noteworthy that A A^#, A^# A, E - A A^#, and E - A^# A are all orthogonal projections.
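This is quick to confirm numerically; a sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0]])
A_sharp = np.linalg.pinv(A)

projections = [A @ A_sharp,                 # onto R(A)
               A_sharp @ A,                 # onto N(A)-perp
               np.eye(2) - A @ A_sharp,     # onto R(A)-perp
               np.eye(3) - A_sharp @ A]     # onto N(A)

# Orthogonal projections are exactly the symmetric idempotent matrices.
for M in projections:
    assert np.allclose(M, M.T)     # symmetric
    assert np.allclose(M @ M, M)   # idempotent
```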

Example 2.27
Equation 2.188 with

A = ( 1 -1 1 ; -1 1 -1 )        (2.193)

Figure 2.5 Relationships among the pseudoinverse, the range space, the null space, and their orthogonal complements. (a) Mapping by A^#. (b) Projection by A^# A. (c) Projection by A A^#.


Figure 2.5 (continued) (d) Projection by E - A^# A. (e) Projection by E - A A^#.

is again used as an example. The orthogonal projections are computed as follows:

A A^# = 1/2 ( 1 -1 ; -1 1 ),   E - A A^# = 1/2 ( 1 1 ; 1 1 )

A^# A = 1/3 ( 1 -1 1 ; -1 1 -1 ; 1 -1 1 ),   E - A^# A = 1/3 ( 2 1 -1 ; 1 2 1 ; -1 1 2 )        (2.194)


Now, we define the following vectors:

x0 = (1, 1, 1)^T,   y0 = (1, 0)^T        (2.195)

We can decompose y0 into two orthogonal vectors, which are members of R(A) and R(A)⊥ and are denoted by yR and yR⊥, respectively. Similarly, x0 can be decomposed into two orthogonal vectors, which are members of N(A) and N(A)⊥ and are denoted by xN and xN⊥, respectively. They are computed using Eq. 2.194 as follows:

yR = A A^# y0 = 1/2 ( 1 -1 ; -1 1 ) (1, 0)^T = (0.5, -0.5)^T

yR⊥ = (E - A A^#) y0 = 1/2 ( 1 1 ; 1 1 ) (1, 0)^T = (0.5, 0.5)^T        (2.196)

xN = (E - A^# A) x0 = 1/3 (2, 4, 2)^T

xN⊥ = A^# A x0 = 1/3 (1, -1, 1)^T        (2.197)

We check the results of Eqs. 2.196 and 2.197 by the following equations:


yR + yR⊥ = y0,   yR^T yR⊥ = 0

xN + xN⊥ = x0,   xN^T xN⊥ = 0        (2.198)

Note that

rank [A yR] = rank ( 1 -1 1 0.5 ; -1 1 -1 -0.5 ) = rank A        (2.199)

Therefore, Eq. 2.188 with y = yR has exact solutions. An exact solution is the pseudoinverse solution; namely,

x = A^# yR = 1/6 ( 1 -1 ; -1 1 ; 1 -1 ) (0.5, -0.5)^T = 1/6 (1, -1, 1)^T        (2.200)

This solution is equal to the pseudoinverse approximation of Eq. 2.188 with y = y0. Indeed,

x = A^# y0 = 1/6 ( 1 -1 ; -1 1 ; 1 -1 ) (1, 0)^T = 1/6 (1, -1, 1)^T        (2.201)

Also, note that

A^# yR⊥ = 0        (2.202)

2.4.6 Computation of Pseudoinverse

In this subsection, three computational methods for the pseudoinverse are summarized.

Computation by Singular Value Decomposition. The computational algorithms for SVD (see, for example, Golub and Van Loan 1983) are reliable schemes. By Eq. 2.78, the pseudoinverse of a matrix can be computed using SVD, as in Example 2.12.
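A minimal sketch of the SVD route (the tolerance handling here is an assumption; library routines such as np.linalg.pinv apply a similar relative cutoff to the singular values):

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Pseudoinverse via SVD: A = U S V^T  =>  A# = V S# U^T,
    where S# inverts only the singular values above a relative cutoff."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = tol * s.max()
    # Invert singular values above the cutoff; zero out the rest.
    s_inv = np.where(s > cutoff, 1.0 / np.maximum(s, cutoff), 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, -1.0]])
assert np.allclose(pinv_svd(A), np.linalg.pinv(A))
```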