
On the construction of real canonical forms of Hamiltonian matrices whose spectrum is an imaginary pair

R. Coleman*

Tour IRMA, 51 rue des Mathématiques, Domaine Universitaire de Saint-Martin-d'Hères, France

Accepted 16 November 1997

Abstract

If $A$ is a Hamiltonian matrix and $P$ a symplectic matrix, the product $P^{-1}AP$ is a Hamiltonian matrix. In this paper, we consider the case where the matrix $A$ has a pair of imaginary eigenvalues and develop an algorithm which finds a matrix $P$ such that the matrix $P^{-1}AP$ has a particularly simple form, a canonical form. © 1998 IMACS/Elsevier Science B.V.

Keywords: Hamiltonian matrix; Symplectic matrix; Canonical form

1. Introduction

Hamiltonian matrices play an important role in Hamiltonian systems of differential equations. Such matrices are of even order and can be written $A = JS$, where $S$ is symmetric and

$$J = J_{2n} = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}$$

It is not difficult to show that this is equivalent to the condition

$$A^t J + JA = 0$$

In general, it is not true that conjugation by a non-singular matrix $T$ produces a Hamiltonian matrix; however, this is the case if $T$ is symplectic, i.e.

$$T^t J T = J$$

It is natural to ask whether, for a given Hamiltonian matrix $A$, we can find a symplectic matrix $T$ such that $\tilde A = T^{-1}AT$ is of a particularly simple form and, if so, whether this form is unique in some sense.
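These defining identities are easy to check numerically. A minimal sketch (ours, not from the paper), using an arbitrary symmetric $S$ and a simple shear-type symplectic matrix $T$:

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

# Any symmetric S gives a Hamiltonian matrix A = J S.
rng = np.random.default_rng(0)
S = rng.standard_normal((2 * n, 2 * n))
S = (S + S.T) / 2
A = J @ S

# Defining condition: A^t J + J A = 0.
assert np.allclose(A.T @ J + J @ A, 0)

# A simple symplectic matrix: T = [[I, 0], [C, I]] with C symmetric.
C = rng.standard_normal((n, n))
C = (C + C.T) / 2
T = np.block([[I, Z], [C, I]])
assert np.allclose(T.T @ J @ T, J)       # T^t J T = J

# Conjugation by a symplectic T preserves the Hamiltonian property.
B = np.linalg.inv(T) @ A @ T
assert np.allclose(B.T @ J + J @ B, 0)
```
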

Mathematics and Computers in Simulation 46 (1998) 117–155

* Corresponding author.

0378-4754/98/$19.00 © 1998 IMACS/Elsevier Science B.V. All rights reserved
PII S0378-4754(98)00073-1

In fact we can find a symplectic matrix $T$ such that $\tilde A$ is block diagonal, where the blocks are of a very simple form. In addition, $\tilde A$ is unique up to rearrangement of the quadruples of blocks $(A_{i1}, A_{i2}, A_{i3}, A_{i4})$ [9,3,4,6,1,5]. However, for the blocks, different simple forms have been proposed, so it is difficult to speak of the real canonical form of a Hamiltonian matrix as we do of the real Jordan form of a general matrix.

For the case which interests us here, namely that of a Hamiltonian matrix with a pair of imaginary eigenvalues $\pm i\nu$, we will find the Burgoyne and Cushman normal form, which is probably the best-known one, and then indicate how it would be possible to find the simpler Bruno form.
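As a trivial illustration of the situation considered (ours, not from the paper), the smallest Hamiltonian matrix with spectrum $\{i\nu, -i\nu\}$:

```python
import numpy as np

nu = 3.0
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # J_2
S = nu * np.eye(2)                        # symmetric
A = J @ S                                 # A = [[0, nu], [-nu, 0]]

assert np.allclose(A.T @ J + J @ A, 0)    # A is Hamiltonian
ev = np.linalg.eigvals(A)
assert np.allclose(ev.real, 0)            # purely imaginary spectrum
assert np.allclose(np.sort(ev.imag), [-nu, nu])
```
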

The algorithm which we propose differs fundamentally in approach from that of N. Burgoyne and R. Cushman [2]. It is based on ideas of A. Laub and K.R. Meyer presented in [6]. We will also give the complex canonical form. (This form, about which there is no disagreement, is that obtained when complex conjugating matrices are allowed.)

2. Symplectic spaces

In this section, we shall use the symbol $K$ for $R$ or $C$. Consider a $K$-vector space $V$ of even dimension ($\dim V = 2n$). If $\omega$ is a bilinear form defined on $V$ which is both antisymmetric and non-degenerate, we call $\omega$ a symplectic form on $V$ and say that the pair $(V,\omega)$ is a symplectic space. We shall be particularly interested in the so called canonical form $\omega_{K^{2n}}$ defined on $K^{2n}$:

$$\forall x, y \in K^{2n} : \omega_{K^{2n}}(x, y) = x^t J y$$

and subspaces of $K^{2n}$ such that the restriction of $\omega_{K^{2n}}$ to them is non-degenerate.

A basis $(v_i)$ of the symplectic space $(V,\omega)$ is said to be symplectic if the matrix of $\omega$ in the basis is $J$, that is

$$\omega(v_i, v_{n+i}) = 1 \quad i = 1, \ldots, n$$
$$\omega(v_i, v_{i-n}) = -1 \quad i = n+1, \ldots, 2n$$
$$\omega(v_i, v_j) = 0 \quad \text{otherwise}$$

It can be shown that every symplectic space has a symplectic basis (see [7]).
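For instance (our illustration), the standard basis of $K^{2n}$ is symplectic for $\omega_{K^{2n}}$: the Gram matrix of pairwise products $\omega(e_i, e_j) = e_i^t J e_j$ is exactly $J$:

```python
import numpy as np

n = 3
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

E = np.eye(2 * n)                      # columns = standard basis vectors
G = np.array([[E[:, i] @ J @ E[:, j] for j in range(2 * n)]
              for i in range(2 * n)])  # Gram matrix of omega
assert np.allclose(G, J)               # the matrix of omega in this basis is J
```
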


As for Euclidean spaces, we have a notion of orthogonality. If $x, y \in V$ we say that $y$ is orthogonal to $x$ if $\omega(x,y) = 0$. The orthogonal of a subset $S \subset V$, written $S^\perp$, is the set of elements belonging to $V$ which are orthogonal to all elements of $S$:

$$S^\perp = \{x \in V \mid \forall y \in S : \omega(x, y) = 0\}$$

Clearly, for any subset $S \subset V$, $S^\perp$ is a vector subspace of $V$. If $U_1$ and $U_2$ are subspaces of $V$ and we have

$$\forall u_1 \in U_1, \forall u_2 \in U_2 : \omega(u_1, u_2) = 0$$

we say that $U_1$ and $U_2$ are orthogonal and write $U_1 \perp U_2$.

If $\omega$ restricted to a subspace $U$ is non-degenerate then we say that $U$ is a symplectic subspace. (In this case $(U,\omega)$ is a symplectic space.) It should be noted that this is not always the case: for example, if $u \neq 0$ is an element of $V$ then the subspace $U$ generated by $u$ is not symplectic, because $u$ itself belongs to the kernel of $\omega$ restricted to $U$.

We say that a subspace $U$ has an orthogonal complement if we have the decomposition

$$V = U \oplus U^\perp$$

The result which follows is proved in [8] for a real space. It is easy to see that it is also true for a complex space.

Theorem 1 If $(V,\omega)$ is a symplectic space and $U$ a symplectic subspace then

$$V = U \oplus U^\perp$$

and $U^\perp$ is symplectic. Also, in the proof it is shown that if $U_1$ and $U_2$ are subspaces of $V$ such that

$$V = U_1 \oplus U_2 \quad \text{and} \quad U_1 \perp U_2$$

then $U_1$ and $U_2$ are both symplectic. We will be particularly concerned with a certain type of linear operator on a symplectic space, namely a Hamiltonian operator.

Definition If $(V,\omega)$ is a symplectic space and $A$ is an operator on $V$ which verifies the condition

$$\forall u, v \in V : \omega(Au, v) = -\omega(u, Av) \qquad (*)$$

we say that $A$ is a Hamiltonian operator on $V$.

Clearly, Hamiltonian operators play the same role in symplectic spaces as antisymmetric operators in Euclidean spaces.

To give an example, if $A$ is a Hamiltonian matrix of order $2n$ and $\mathcal A$ is the operator on $R^{2n}$ defined by

$$\forall x \in R^{2n} : \mathcal A(x) = Ax$$

then $\mathcal A$ is a Hamiltonian operator for the symplectic space $(R^{2n}, \omega_{R^{2n}})$. If $(v_i)_{1 \le i \le 2n}$ is a symplectic basis of $V$, and $x$, $y$ the coordinates of $u$, $v$ and $A$ the matrix of $\mathcal A$ in this basis, then the condition $(*)$ implies

$$\forall x, y \in K^{2n} : (Ax)^t J y = -x^t J A y$$

or

$$\forall x, y \in K^{2n} : x^t(A^t J + JA)y = 0 \Rightarrow A^t J + JA = 0$$

Thus, the matrix of a Hamiltonian operator in a symplectic basis is Hamiltonian. In the next section, we shall say a little more about symplectic spaces and operators on them.

3. The complexification of a real symplectic space

In this section, all $K$-vector spaces will be subspaces of some $K^m$. If $U$ is a complex subspace of $C^m$ then $U_R = U \cap R^m$ is a real subspace of $R^m$. Now, let $V$ be a real subspace of $R^m$ and set

$$V_C = \left\{ z \in C^m : z = \sum_i \lambda_i x_i, \; x_i \in V, \; \lambda_i \in C \right\}$$

Then $V_C$ is a complex subspace of $C^m$, called the complexification of $V$.

We have formed a real subspace from a complex one and vice versa. There is a natural question, namely for what spaces does one operation followed by the other lead to the original subspace. The following proposition answers this question.

Proposition 1 For any subspace $V$ of $R^m$, we have $(V_C)_R = V$. If $U$ is a subspace of $C^m$, then $(U_R)_C = U$ if and only if $\bar U = U$.

Proof ● $V$ is clearly a subset of $V_C$, and as $V \subset R^m$ we have

$$V \subset V_C \cap R^m = (V_C)_R$$

If $x \in V_C \cap R^m$ then

$$x = \sum_i \lambda_i x_i = \sum_i \alpha_i x_i + i \sum_i \beta_i x_i$$

where for all $i$ we have $x_i \in V$ and $\lambda_i = \alpha_i + i\beta_i$. But

$$x \in R^m \Rightarrow x = \sum_i \alpha_i x_i \in V$$

Thus

$$(V_C)_R = V_C \cap R^m \subset V$$

which concludes the proof of the first assertion.
● Suppose that $\bar U = U$. Then

$$\forall z \in U : \mathrm{Re}\,z, \mathrm{Im}\,z \in U_R \Rightarrow z \in (U_R)_C$$

Thus, $U \subset (U_R)_C$. In addition, as $U_R \subset U$ we have $(U_R)_C \subset U$, so $(U_R)_C = U$.

Suppose now that $(U_R)_C = U$. Then if $z \in U$, $z = \sum_i \mu_i x_i$ with $x_i \in U_R$, so

$$\bar z = \sum_i \bar\mu_i x_i \in (U_R)_C = U$$

So $\bar U \subset U$. Conjugating this inclusion gives $U = \bar{\bar U} \subset \bar U$, and it follows that $\bar U = U$. □

Remarks 1 According to the proposition, if $U$ is a subspace of $C^m$ and $\bar U = U$ then $U$ is the complexification of a real space, namely $U_R$. The reciprocal proposition is also true, that is, if $U$ is the complexification of a real space then $\bar U = U$. In fact, we have

$$U = V_C \Rightarrow U_R = U \cap R^m = V_C \cap R^m = V$$
$$\Rightarrow (U_R)_C = V_C = U$$
$$\Rightarrow \bar U = U$$

Remarks 2 If $U$ is the complexification of a real space then $U$ has a real basis. In fact, if $U$ is the complexification of a real space then by Remark 1, $\bar U = U$, so by the proposition $(U_R)_C = U$. If $(x_j)_{1 \le j \le s}$ is a basis of $U_R$ it is clear that the $x_j$ generate $(U_R)_C = U$. In addition

$$(\alpha_1 + i\beta_1)x_1 + \cdots + (\alpha_s + i\beta_s)x_s = 0$$
$$\Rightarrow \alpha_1 x_1 + \cdots + \alpha_s x_s + i(\beta_1 x_1 + \cdots + \beta_s x_s) = 0$$
$$\Rightarrow \alpha_1 = \cdots = \alpha_s = \beta_1 = \cdots = \beta_s = 0$$

and it follows that the $x_j$ are linearly independent.

Let us now consider symplectic spaces. If $U$ is a subspace of $C^{2n}$ such that $\bar U = U$ and $\omega$ is a symplectic form on $U$, it is easy to see that $\omega$ restricted to $U_R$ is a symplectic form. In fact, the only property to prove is the non-degeneracy. But

$$\forall x \in U_R : \omega(x, y) = 0 \Rightarrow \forall \lambda_i \in C, x_i \in U_R : \omega\Big(\sum_i \lambda_i x_i, y\Big) = 0$$

Hence, because $(U_R)_C = U$, we have

$$\forall z \in U : \omega(z, y) = 0$$

and, because $\omega$ is non-degenerate on $U$, $y = 0$.

If $V$ is a subspace of $R^{2n}$ and $\omega$ is a symplectic form on $V$, we can extend $\omega$ to $V_C$ in a natural way. If $(x_i)_{1 \le i \le 2s}$ is a basis of $V$ then $(x_i)_{1 \le i \le 2s}$ is a basis of $V_C$ and if $w = \sum_i \lambda_i x_i$, $z = \sum_j \mu_j x_j$ then we define $\omega(w, z)$ by

$$\omega(w, z) = \sum_{i,j} \lambda_i \mu_j \, \omega(x_i, x_j)$$

It is not difficult to see that this definition is independent of the basis chosen. Also it is not hard to


prove that $\omega$ so extended is non-degenerate. In fact if

$$\forall \sum_i \lambda_i x_i : \omega\Big(\sum_i \lambda_i x_i, \sum_j \mu_j y_j\Big) = 0$$

then we certainly have

$$\forall x \in R^{2n} : \omega\Big(x, \sum_j \mu_j y_j\Big) = 0$$

and, writing $\mu_j = \alpha_j + i\beta_j$, we obtain

$$\forall x \in R^{2n} : \omega\Big(x, \sum_j \alpha_j y_j\Big) + i\,\omega\Big(x, \sum_j \beta_j y_j\Big) = 0$$

As $\omega$ is non-degenerate on $V$ we have

$$\sum_j \alpha_j y_j = \sum_j \beta_j y_j = 0 \Rightarrow \sum_j \mu_j y_j = 0$$

and it follows that $\omega$ is non-degenerate on $V_C$. We shall call the space $(V_C, \omega)$ the complexification of the symplectic space $(V,\omega)$.
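Concretely (our illustration): for the canonical form, the bilinear extension just means evaluating $x^t J y$ on complex vectors, with no conjugation of either argument, and its non-degeneracy is the invertibility of $J$:

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]]).astype(complex)

def omega(x, y):
    # bilinear extension: the same formula, evaluated on complex vectors
    return x @ J @ y

rng = np.random.default_rng(1)
w = rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)
z = rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)

# complex bilinearity in both arguments (no conjugation)
assert np.isclose(omega(2j * w, z), 2j * omega(w, z))
assert np.isclose(omega(w, (3 - 1j) * z), (3 - 1j) * omega(w, z))
# non-degeneracy of the extension: J is invertible
assert not np.isclose(np.linalg.det(J), 0)
```
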

Any linear operator $T$ defined on a subspace $V$ of $R^m$ can be extended to an operator $T_C$ defined on $V_C$, the complexification of $V$.

Definition Let $T$ be a linear operator defined on a subspace $V$ of $R^m$ and $(x_j)_{1 \le j \le s}$ a basis of $V$. Then $(x_j)_{1 \le j \le s}$ is a basis of $V_C$ and we define the complexification $T_C$ of $T$ in the following way:

$$T_C\Big(\sum_i \lambda_i x_i\Big) = \sum_i \lambda_i T(x_i)$$

It is not difficult to show that the definition is independent of the basis chosen, so $T_C$ is well-defined. It is natural to ask which operators on $V_C$ are complexifications of operators on $V$. We shall now look at this question.

Proposition 2 If $S$ is a linear operator on $V_C$, then $S$ is the complexification of an operator $T$ on $V$ if and only if

$$\forall z \in V_C : S(\bar z) = \overline{S(z)} \qquad (*)$$

Proof ● Suppose that $S$ is the complexification of the operator $T$: $S = T_C$. If $z = \sum_i \lambda_i x_i$ then

$$S(\bar z) = S\Big(\sum_i \bar\lambda_i x_i\Big) = T_C\Big(\sum_i \bar\lambda_i x_i\Big) = \sum_i \bar\lambda_i T(x_i) = \overline{\sum_i \lambda_i T(x_i)} = \overline{S(z)}$$

● Now suppose that the condition $(*)$ is verified and let $x \in V$. Then

$$S(x) = S(\bar x) = \overline{S(x)}$$

therefore

$$S(x) \in \{z \in V_C : \bar z = z\} = (V_C)_R = V$$

Thus, if $T$ is the operator $S$ restricted to $V$, $T$ is an operator on $V$. In addition, if $z = \sum_{j=1}^s \lambda_j x_j$, where $(x_j)_{1 \le j \le s}$ is a basis of $V$, we have

$$T_C(z) = \sum_j \lambda_j T(x_j) = \sum_j \lambda_j S(x_j) = S\Big(\sum_j \lambda_j x_j\Big) = S(z)$$

Hence, $S$ is the complexification of $T$. □

Let us now look at Hamiltonian operators. If $A$ is a Hamiltonian operator on the real space $V$ then its complexification is also a Hamiltonian operator on $V_C$, the complexification of $V$. We have

$$\omega\Big(A_C\Big(\sum_i \lambda_i x_i\Big), \sum_j \mu_j y_j\Big) = \omega\Big(\sum_i \lambda_i A(x_i), \sum_j \mu_j y_j\Big) = \sum_{i,j} \lambda_i \mu_j\,\omega(A(x_i), y_j)$$
$$= -\sum_{i,j} \lambda_i \mu_j\,\omega(x_i, A(y_j)) = -\omega\Big(\sum_i \lambda_i x_i, \sum_j \mu_j A(y_j)\Big) = -\omega\Big(\sum_i \lambda_i x_i, A_C\Big(\sum_j \mu_j y_j\Big)\Big)$$

hence the result. We will need the following result, whose proof is left to an appendix. Clearly we consider the symplectic space complexified if necessary.

Theorem 2 If $(V,\omega)$ is a symplectic space, $A$ is a Hamiltonian operator on $V$ and $\lambda$, $\mu$ are eigenvalues of $A$ such that $\lambda + \mu \neq 0$, then

$$x \in \mathrm{gen}(\lambda), \; y \in \mathrm{gen}(\mu) \Rightarrow \omega(x, y) = 0$$

where $\mathrm{gen}(\lambda)$ (resp. $\mathrm{gen}(\mu)$) is the generalized eigenspace of $\lambda$ (resp. $\mu$). In particular, if $\lambda \neq 0$

$$x, y \in \mathrm{gen}(\lambda) \Rightarrow \omega(x, y) = 0$$

We close this section with a definition.

Definition If $T$ is a linear operator on a subspace $V$ of $R^m$ we say that $T$ is semi-simple if its complexification $T_C$ is diagonalizable.

A sufficient (but not necessary) condition for an operator to be semi-simple is that its eigenvalues be distinct.
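To see that the condition is not necessary (our example), a block-diagonal rotation operator has each of the eigenvalues $\pm i$ repeated, yet its complexification is diagonalizable:

```python
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])
Z = np.zeros((2, 2))
T = np.block([[R, Z], [Z, R]])           # eigenvalues i, i, -i, -i

vals, vecs = np.linalg.eig(T.astype(complex))
# Semi-simple: a full set of independent eigenvectors exists.
assert np.linalg.matrix_rank(vecs) == 4
assert np.allclose(vals.real, 0)
assert np.allclose(np.sort(vals.imag), [-1, -1, 1, 1])
```
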


4. The algebra associated with a nilpotent operator on a complex vector space

In [8], we studied in some detail the algebra associated with a nilpotent operator on a real vector space. In fact the study of such algebras on complex vector spaces is very similar, so we shall not give the proofs of all results. If a proof is not given, an analogous one can be found in the reference cited.

Let $V$ be a complex vector space and $B$ be a nilpotent operator on $V$ whose nilpotency index is $k+1$. Then the subalgebra $U$ of $L(V)$ generated by $B$ and $\mathrm{id}_V$, the identity on $V$, has the following explicit description:

$$U = \left\{ \alpha \in L(V) : \alpha = \sum_{i=0}^{k} a_i B^i \right\}$$

We call $U$ the algebra associated with $B$. $U$ is clearly commutative. For $\alpha \in U$ we define the conjugate $\alpha^*$ of $\alpha$ in the following manner: if $\alpha = \sum_{i=0}^k a_i B^i$ then

$$\alpha^* = \sum_{i=0}^{k} (-1)^i \bar a_i B^i$$

It is clear that $\alpha^* \in U$ and that $(\alpha^*)^* = \alpha$. The non-singular elements of $U$ are easy to characterize.

Proposition 3 $\alpha = \sum_{i=0}^k a_i B^i$ is non-singular if and only if $a_0 \neq 0$.

Now let us look at the square roots of elements in $U$.

Proposition 4 Let $\alpha = \sum_{i=0}^k a_i B^i \in U$. If $\alpha$ is non-singular, then there exists $\gamma = \sum_{i=0}^k c_i B^i \in U$ such that

$$\gamma^2 = \alpha$$

Proof If $\gamma = \sum_{i=0}^k c_i B^i$ then

$$\gamma^2 = \sum_{i=0}^{k} \left( \sum_{j=0}^{i} c_j c_{i-j} \right) B^i$$

Thus, we wish to know whether there exists a solution $(c_0, \ldots, c_k)$ of the following system:

$$c_0^2 = a_0$$
$$c_0 c_1 + c_1 c_0 = a_1$$
$$c_0 c_2 + c_1^2 + c_2 c_0 = a_2$$
$$\vdots$$
$$c_0 c_k + c_1 c_{k-1} + \cdots + c_k c_0 = a_k$$

It is clear that such a system can be resolved recursively (as $\alpha$ is non-singular, $c_0 = \sqrt{a_0} \neq 0$), and hence the result. □

We should notice that, as for non-zero complex numbers, there are two distinct solutions, one corresponding to each of the square roots of $a_0$; also that $\gamma$ is non-singular. We need to look at another sort of quadratic equation.
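Before turning to that equation, the square-root recursion of Proposition 4 is easy to implement. A minimal sketch (ours, not from the paper), representing an element of $U$ by its coefficient list $(c_0, \ldots, c_k)$, with multiplication truncated at $B^{k+1} = 0$:

```python
import numpy as np

def trunc_mul(a, b):
    """Product of two polynomials in B, truncated at B^(k+1) = 0."""
    k = len(a) - 1
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(k + 1)]

def sqrt_in_U(a):
    """Solve c*c = a recursively; requires a[0] != 0 (alpha non-singular)."""
    k = len(a) - 1
    c = [complex(np.sqrt(complex(a[0])))]          # c_0 = sqrt(a_0)
    for i in range(1, k + 1):
        # coefficient of B^i in c*c is 2*c_0*c_i + sum_{0<j<i} c_j c_{i-j}
        s = sum(c[j] * c[i - j] for j in range(1, i))
        c.append((a[i] - s) / (2 * c[0]))
    return c

a = [4.0, 1.0, -2.0, 0.5]          # alpha = 4 id + B - 2 B^2 + 0.5 B^3
c = sqrt_in_U(a)
assert np.allclose(trunc_mul(c, c), a)
```
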


Proposition 5 Let

$$\alpha_1 = \sum_{i=0}^{k} a_i B^i \quad \text{and} \quad \alpha_2 = \sum_{i=0}^{k} b_i B^i$$

be singular. Then there exists $\gamma = \sum_{i=0}^k c_i B^i$ such that

$$\gamma = \alpha_1 + \alpha_2 \gamma^2$$

Proof We have

$$\gamma = \sum_{i=0}^k c_i B^i \Rightarrow \gamma^2 = \sum_{i=0}^k d_i B^i \quad \text{with} \quad d_i = \sum_{j=0}^{i} c_j c_{i-j}$$
$$\Rightarrow \alpha_2 \gamma^2 = \sum_{i=0}^{k} \left( \sum_{j=0}^{i} b_j d_{i-j} \right) B^i$$
$$\Rightarrow \alpha_1 + \alpha_2 \gamma^2 = \sum_{i=0}^{k} \left( a_i + \sum_{j=0}^{i} b_j d_{i-j} \right) B^i$$

Hence,

$$\gamma = \alpha_1 + \alpha_2 \gamma^2$$

if and only if there is a solution $c_0, \ldots, c_k$ of the following system:

$$c_0 = a_0 + b_0 d_0 = a_0 + b_0 c_0^2$$
$$c_1 = a_1 + b_0 d_1 + b_1 d_0 = a_1 + b_0(c_0 c_1 + c_1 c_0) + b_1 c_0^2$$
$$c_2 = a_2 + b_0 d_2 + b_1 d_1 + b_2 d_0 = a_2 + b_0(c_0 c_2 + c_1^2 + c_2 c_0) + b_1(c_0 c_1 + c_1 c_0) + b_2 c_0^2$$
$$\vdots$$

If $a_0 = b_0 = 0$ we can set $c_0 = 0$ and then solve recursively for $c_1, \ldots, c_k$. □

Remark If $a_2, a_4, \ldots, b_2, b_4, \ldots \in R$ and $a_1, a_3, \ldots, b_1, b_3, \ldots \in iR$, then $c_2, c_4, \ldots \in R$ and $c_1, c_3, \ldots \in iR$. This is easy to prove by induction. We have

$$c_0 = 0 \in R \quad \text{and} \quad c_1 = a_1 \in iR$$

Suppose that the result is true up to order $2t-2$. Then

$$c_{2t-1} = a_{2t-1} + b_0 d_{2t-1} + b_1 d_{2t-2} + \cdots + b_{2t-1} d_0$$

By the induction hypothesis $d_0, d_2, d_4, \ldots$ are real and $d_1, d_3, \ldots$ are pure imaginary (each $d_m = \sum_j c_j c_{m-j}$ only involves coefficients already covered by the hypothesis), so each product $b_j d_{2t-1-j}$ pairs a real factor with an imaginary one, or vice versa; we therefore have a sum of imaginary terms, which is imaginary. A similar argument shows that $c_{2t}$ is real. In the same way we can show that if $a_2, a_4, \ldots, b_2, b_4, \ldots \in iR$ and $a_1, a_3, \ldots, b_1, b_3, \ldots \in R$ then $c_2, c_4, \ldots \in iR$ and $c_1, c_3, \ldots \in R$.
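The recursion of Proposition 5 is equally mechanical. A minimal sketch (ours; the coefficients below are chosen arbitrarily), again representing elements of $U$ by coefficient lists:

```python
import numpy as np

def trunc_mul(a, b):
    """Product of two polynomials in B, truncated at B^(k+1) = 0."""
    k = len(a) - 1
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(k + 1)]

def solve_gamma(a1, a2):
    """Solve gamma = a1 + a2*gamma^2 when a1[0] == a2[0] == 0 (both singular)."""
    k = len(a1) - 1
    c = [0.0] * (k + 1)                      # c_0 = 0
    for i in range(1, k + 1):
        d = trunc_mul(c, c)                  # degrees < i are already final
        c[i] = a1[i] + sum(a2[j] * d[i - j] for j in range(1, i + 1))
    return c

a1 = [0.0, 1.0, 0.5, -1.0]   # singular: zero constant term
a2 = [0.0, 2.0, -1.0, 0.25]
c = solve_gamma(a1, a2)
rhs = [a1[i] + t for i, t in enumerate(trunc_mul(a2, trunc_mul(c, c)))]
assert np.allclose(c, rhs)   # gamma = a1 + a2*gamma^2 holds coefficientwise
```
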

We can make $V$ into a $U$-module by defining the product $\alpha x$ of an element $\alpha \in U$ and an element $x \in V$ in the following natural manner:

$$\alpha x = \alpha(x)$$

We will call any generating set of the $U$-module a $U$-module basis.

We say that a mapping $\phi : V \times V \to U$ is a sesquilinear form on $V$ if it verifies the following conditions:

● $\forall x_1, x_2, y \in V : \phi(x_1 + x_2, y) = \phi(x_1, y) + \phi(x_2, y)$
● $\forall x, y_1, y_2 \in V : \phi(x, y_1 + y_2) = \phi(x, y_1) + \phi(x, y_2)$
● $\forall \alpha \in U, \forall x, y \in V : \phi(\alpha x, y) = \alpha\,\phi(x, y)$
● $\forall \alpha \in U, \forall x, y \in V : \phi(x, \alpha y) = \alpha^*\,\phi(x, y)$

5. Two special algebras

Let us now consider a Hamiltonian operator $A$ on a real symplectic space $(V,\omega)$, where $V$ is a subspace of some $R^{2n}$. Suppose that the only eigenvalues of $A$ are $\pm i\nu$. Then the complexification $V_C$ of $V$ has the decomposition

$$V_C = \mathrm{gen}(i\nu) \oplus \mathrm{gen}(-i\nu)$$

and the two subspaces have the same dimension.

To simplify the notation we shall write $X$ for the first space in the decomposition and $Y$ for the second. We shall write $B$ for the nilpotent operator on $X$ defined by

$$\forall x \in X : B(x) = (A_C - i\nu\,\mathrm{id}_X)(x)$$

and $\bar B$ for the nilpotent operator on $Y$ defined by

$$\forall y \in Y : \bar B(y) = (A_C + i\nu\,\mathrm{id}_Y)(y)$$

These operators have the same nilpotency index, which we shall note $k+1$. We shall write $U$ (resp. $\bar U$) for the algebra associated with $B$ (resp. $\bar B$). We are going to define a sesquilinear form on $X$, but before doing so let us prove three lemmas. The first is of a rather general nature.

Lemma 1 If $V \subset C^{2n}$ and $(V,\omega)$ is a complex symplectic space then

$$\forall x, y \in V : \omega(\bar x, y) = \overline{\omega(x, \bar y)}$$

Proof Writing $x = x_1 + ix_2$ and $y = y_1 + iy_2$ we have

$$\omega(\bar x, y) = \omega(x_1 - ix_2, y_1 + iy_2) = \omega(x_1, y_1) + \omega(x_2, y_2) + i(\omega(x_1, y_2) - \omega(x_2, y_1))$$

and

$$\omega(x, \bar y) = \omega(x_1 + ix_2, y_1 - iy_2) = \omega(x_1, y_1) + \omega(x_2, y_2) + i(\omega(x_2, y_1) - \omega(x_1, y_2))$$

Hence, the result. □

Lemma 2 If $x \in \mathrm{gen}(i\nu)$ and $\bar y \in \mathrm{gen}(-i\nu)$ then

$$\forall n \in N^* : \omega(B^n x, \bar y) = (-1)^n \omega(x, \bar B^n \bar y)$$

Proof We have

$$\omega(Bx, \bar y) = \omega((A_C - i\nu\,\mathrm{id}_X)x, \bar y) = \omega(A_C x, \bar y) - i\nu\,\omega(x, \bar y)$$
$$= -\omega(x, A_C \bar y) - \omega(x, i\nu\bar y) = -\omega(x, (A_C + i\nu)\bar y) = -\omega(x, \bar B \bar y)$$

The general result follows easily by induction. □

Lemma 3 If $x \in X$ then

$$\overline{Bx} = \bar B \bar x$$

Proof Writing $x = x_1 + ix_2$ we have

$$Bx = A_C x - i\nu x = A_C(x_1 + ix_2) - i\nu(x_1 + ix_2) = A_C x_1 + \nu x_2 + i(A_C x_2 - \nu x_1)$$

so

$$\overline{Bx} = A_C x_1 + \nu x_2 - iA_C x_2 + i\nu x_1 = (A_C + i\nu)(x_1 - ix_2) = \bar B \bar x$$

Hence, the result. □

We define a mapping $\phi : X \times X \to U$ as follows:

$$\forall (x, y) \in X \times X : \phi(x, y) = \omega(B^k x, \bar y)\,\mathrm{id}_X + \omega(B^{k-1}x, \bar y)B + \cdots + \omega(x, \bar y)B^k$$

Proposition 6 $\phi$ is a sesquilinear form on $X$.

Proof It is easy to see that the first two properties of a sesquilinear form are verified, namely

$$\phi(x_1 + x_2, y) = \phi(x_1, y) + \phi(x_2, y)$$
$$\phi(x, y_1 + y_2) = \phi(x, y_1) + \phi(x, y_2)$$

As preliminaries to proving the last two properties of a sesquilinear form, let us notice that if $\lambda \in C$ then

$$\phi(\lambda x, y) = \lambda\,\phi(x, y)$$
$$\phi(x, \lambda y) = \bar\lambda\,\phi(x, y)$$

and that

$$\phi(Bx, y) = B\,\phi(x, y)$$
$$\phi(x, By) = -B\,\phi(x, y)$$

The first two statements are almost immediate. For the other two we have

$$\phi(Bx, y) = \sum_{i=0}^{k} \omega(B^{k-i+1}x, \bar y)B^i = \sum_{i=1}^{k} \omega(B^{k-i+1}x, \bar y)B^i = \sum_{i=0}^{k-1} \omega(B^{k-i}x, \bar y)B^{i+1} = B\,\phi(x, y)$$

and

$$\phi(x, By) = \sum_{i=0}^{k} \omega(B^{k-i}x, \overline{By})B^i = \sum_{i=0}^{k} \omega(B^{k-i}x, \bar B\bar y)B^i \quad \text{(using Lemma 3)}$$
$$= \sum_{i=0}^{k} \omega(B^{k-i}x, (A_C + i\nu)\bar y)B^i = \sum_{i=0}^{k} \omega(B^{k-i}x, A_C\bar y)B^i + \sum_{i=0}^{k} i\nu\,\omega(B^{k-i}x, \bar y)B^i$$
$$= -\sum_{i=0}^{k} \omega(A_C B^{k-i}x, \bar y)B^i + \sum_{i=0}^{k} \omega(i\nu B^{k-i}x, \bar y)B^i = -\sum_{i=0}^{k} \omega(B^{k-i+1}x, \bar y)B^i = -\phi(Bx, y) = -B\,\phi(x, y)$$

Now we can finish the proof. If $\alpha = \sum_{i=0}^k a_i B^i$ then

$$\phi(\alpha x, y) = \phi\Big(\sum_{i=0}^k a_i B^i x, y\Big) = \sum_{i=0}^{k} a_i\,\phi(B^i x, y) = \sum_{i=0}^{k} a_i B^i\,\phi(x, y) = \alpha\,\phi(x, y)$$

and

$$\phi(x, \alpha y) = \phi\Big(x, \sum_{i=0}^k a_i B^i y\Big) = \sum_{i=0}^{k} \bar a_i\,\phi(x, B^i y) = \sum_{i=0}^{k} \bar a_i (-1)^i B^i\,\phi(x, y) = \alpha^*\,\phi(x, y)$$

Thus $\phi$ is a sesquilinear form on $X$. □ We shall now prove another result which we will need later.

Proposition 7 If $x, y \in X$ then

$$\phi(y, x) = (-1)^{k+1}\phi(x, y)^*$$

Proof We have

$$\phi(y, x) = \sum_{i=0}^{k} \omega(B^{k-i}y, \bar x)B^i = -\sum_{i=0}^{k} \omega(\bar x, B^{k-i}y)B^i$$
$$= -\sum_{i=0}^{k} \overline{\omega(x, \overline{B^{k-i}y})}\,B^i \quad \text{(using Lemma 1)} = -\sum_{i=0}^{k} \overline{\omega(x, \bar B^{k-i}\bar y)}\,B^i$$
$$= -\sum_{i=0}^{k} (-1)^{k-i}\,\overline{\omega(B^{k-i}x, \bar y)}\,B^i \quad \text{(using Lemma 2)} = (-1)^{k+1}\phi(x, y)^*$$

Hence the result. □

Corollary If $x, y \in X$ then

$$\phi(x, y)^{**} = \phi(x, y)$$
$$\phi(x, y) = 0 \Leftrightarrow \phi(y, x) = 0$$

We have seen in the previous section that $X$ may be considered as a $U$-module and $Y$ as a $\bar U$-module. More generally, if $S$ is a subspace of $X$ and $S$ is a $U$-module then $\bar S$ is a subspace of $Y$ and $\bar S$ is a $\bar U$-module. In addition, if $(\xi_1, \ldots, \xi_s)$ is a $U$-module basis of $S$ then $(\bar\xi_1, \ldots, \bar\xi_s)$ is a $\bar U$-module basis of $\bar S$.

In what follows, if $x_1, \ldots, x_s \in X$ (resp. $y_1, \ldots, y_s \in Y$) we shall write $L(x_1, \ldots, x_s)$ (resp. $\bar L(y_1, \ldots, y_s)$) for the $U$-module (resp. $\bar U$-module) generated by $x_1, \ldots, x_s$ (resp. $y_1, \ldots, y_s$).

6. Two fundamental lemmas

In this section, we present two lemmas which are fundamental for the decomposition of a Hamiltonian operator which will follow later. They are akin to similar lemmas presented for the case of a nilpotent Hamiltonian matrix [8]; the calculations are very like those in their counterparts and we will not give all the details. All spaces treated will be supposed contained in some $C^{2n}$.

Lemma 4 Let $(\xi_1, \ldots, \xi_s)$ be a basis of the $U$-module $S \subset X$ and suppose that there is a $\xi_i$ such that $\phi(\xi_i, \xi_i)$ is non-singular. Then we can find another basis $(e, \xi'_2, \ldots, \xi'_s)$ of the $U$-module $S$ such that

● $\phi(e, e) = \pm\mathrm{id}_X$ if $k$ is odd, $\pm i\,\mathrm{id}_X$ if $k$ is even
● $\forall i \in \{2, \ldots, s\} : \phi(e, \xi'_i) = 0$
● $S = L(e) \oplus L(\xi'_2, \ldots, \xi'_s)$

Proof Permuting if necessary the elements $\xi_1, \ldots, \xi_s$, we can always suppose that $\phi(\xi_1, \xi_1)$ is non-singular. According to Proposition 7

$$\phi(\xi_1, \xi_1) = (-1)^{k+1}\phi(\xi_1, \xi_1)^*$$

Let

$$\phi(\xi_1, \xi_1) = c_0\,\mathrm{id}_X + c_1 B + c_2 B^2 + \cdots + c_k B^k$$

If $k$ is odd then

$$\phi(\xi_1, \xi_1) = \phi(\xi_1, \xi_1)^*$$

so $c_0$ is real. As $\phi(\xi_1, \xi_1)$ is non-singular there exists $\gamma \in U$ such that

$$\gamma^2 = \mathrm{sgn}(c_0)\,\phi(\xi_1, \xi_1)$$

If $\gamma = \sum_{i=0}^k \gamma_i B^i$ then we have

$$\gamma_0^2 = \mathrm{sgn}(c_0)\,c_0$$
$$\gamma_0\gamma_1 + \gamma_1\gamma_0 = \mathrm{sgn}(c_0)\,c_1$$
$$\gamma_0\gamma_2 + \gamma_1^2 + \gamma_2\gamma_0 = \mathrm{sgn}(c_0)\,c_2$$
$$\vdots$$
$$\gamma_0\gamma_k + \cdots + \gamma_k\gamma_0 = \mathrm{sgn}(c_0)\,c_k$$

We obtain recursively

$$\gamma_0, \gamma_2, \ldots, \gamma_{k-1} \in R \quad \text{and} \quad \gamma_1, \gamma_3, \ldots, \gamma_k \in iR$$

which implies that

$$\gamma^* = \gamma$$

Now suppose that $k$ is even. As $\phi(\xi_1, \xi_1)$ is non-singular, we can find $\gamma \in U$ such that

$$\gamma^2 = \mathrm{sgn}(-ic_0)\,\phi(\xi_1, \xi_1)$$

(here $c_0$ is pure imaginary, since for $k$ even $\phi(\xi_1, \xi_1) = -\phi(\xi_1, \xi_1)^*$). Multiplying both sides of the equation by $-i$ we obtain

$$-i\gamma^2 = -i\,\mathrm{sgn}(-ic_0)\,\phi(\xi_1, \xi_1)$$

or

$$\left(e^{-i\pi/4}\gamma\right)^2 = -i\,\mathrm{sgn}(-ic_0)\,\phi(\xi_1, \xi_1)$$

As the coefficients of $-i\,\mathrm{sgn}(-ic_0)\,\phi(\xi_1, \xi_1)$ are successively real and pure imaginary, using the same argument as that used in the case $k$ odd we obtain

$$\left(e^{-i\pi/4}\gamma\right)^* = e^{-i\pi/4}\gamma$$

or

$$e^{i\pi/4}\gamma^* = e^{-i\pi/4}\gamma$$

Hence

$$\gamma^* = -i\gamma$$

Now let us construct the sequence $(e, \xi'_2, \ldots, \xi'_s)$. Set

$$e = \gamma^{-1}\xi_1$$

If $k$ is odd set

$$\forall i \in \{2, \ldots, s\} : \xi'_i = \mathrm{sgn}(c_0)\,\xi_i - \phi(e, \xi_i)^* e$$

and if $k$ is even

$$\forall i \in \{2, \ldots, s\} : \xi'_i = i\,\mathrm{sgn}(-ic_0)\,\xi_i - \phi(e, \xi_i)^* e$$

It is relatively easy to check that $(e, \xi'_2, \ldots, \xi'_s)$ is a basis of the $U$-module $S$.

Let us now look at the properties of the basis that we have just constructed. If $k$ is odd we have

$$\gamma\gamma^* = \gamma^2 = \mathrm{sgn}(c_0)\,\phi(\xi_1, \xi_1) = \mathrm{sgn}(c_0)\,\phi(\gamma e, \gamma e) = \mathrm{sgn}(c_0)\,\gamma\gamma^*\,\phi(e, e)$$


As $\gamma$ and $\gamma^*$ are non-singular their product $\gamma\gamma^*$ is also non-singular, so we have

$$\phi(e, e) = \mathrm{sgn}(c_0)\,\mathrm{id}_X = \pm\mathrm{id}_X$$

We handle the case $k$ even in a similar way:

$$\gamma\gamma^* = -i\gamma^2 = -i\,\mathrm{sgn}(-ic_0)\,\phi(\xi_1, \xi_1) = -i\,\mathrm{sgn}(-ic_0)\,\phi(\gamma e, \gamma e) = -i\,\mathrm{sgn}(-ic_0)\,\gamma\gamma^*\,\phi(e, e)$$

and it follows that

$$\phi(e, e) = \mathrm{sgn}(-ic_0)\,i\,\mathrm{id}_X = \pm i\,\mathrm{id}_X$$

In both cases, it is a simple matter to check that

$$\forall i \in \{2, \ldots, s\} : \phi(e, \xi'_i) = 0$$

and the decomposition

$$S = L(e) \oplus L(\xi'_2, \ldots, \xi'_s)$$

results directly from the values of $\phi(e, e)$ and $\phi(e, \xi'_i)$, $i = 2, \ldots, s$. □ We shall now look at this result in a little more detail.

1. If $k$ is odd we have

$$\pm\mathrm{id}_X = \phi(e, e) = \omega(B^k e, \bar e)\,\mathrm{id}_X + \omega(B^{k-1}e, \bar e)B + \cdots + \omega(e, \bar e)B^k$$

Thus,

$$\omega(B^k e, \bar e) = \pm 1 \quad \text{and} \quad \omega(B^{k-1}e, \bar e) = \cdots = \omega(e, \bar e) = 0$$

In the same way, if $k$ is even

$$\omega(B^k e, \bar e) = \pm i \quad \text{and} \quad \omega(B^{k-1}e, \bar e) = \cdots = \omega(e, \bar e) = 0$$

2. Clearly the family $(e, Be, \ldots, B^k e)$ generates the vector subspace $L(e)$, and it is not difficult to show that its elements form an independent set; thus $(e, Be, \ldots, B^k e)$ is a basis of the vector space $L(e)$.

3. The set $\{\xi'_2, B\xi'_2, \ldots, B^k\xi'_2, \ldots, \xi'_s, B\xi'_s, \ldots, B^k\xi'_s\}$ generates $L(\xi'_2, \ldots, \xi'_s)$. Thus one way of finding a basis for $L(\xi'_2, \ldots, \xi'_s)$ is to look for a maximal independent subset of this set.

4. Clearly, we have the vector space decomposition

$$S \oplus \bar S = (L(e) \oplus \bar L(\bar e)) \oplus (L(\xi'_2, \ldots, \xi'_s) \oplus \bar L(\bar\xi'_2, \ldots, \bar\xi'_s))$$

In fact, the subspaces $L(e) \oplus \bar L(\bar e)$ and $L(\xi'_2, \ldots, \xi'_s) \oplus \bar L(\bar\xi'_2, \ldots, \bar\xi'_s)$ are both symplectic, as we shall now see. To prove this, it is sufficient to show that they are orthogonal (Theorem 1). As

$$L(e) \subset X \quad \text{and} \quad L(\xi'_2, \ldots, \xi'_s) \subset X$$


these subspaces are orthogonal (Theorem 2). In the same way $\bar L(\bar e)$ and $\bar L(\bar\xi'_2, \ldots, \bar\xi'_s)$ are orthogonal. In addition

$$\forall i \in \{2, \ldots, s\} : 0 = \phi(e, \xi'_i) = \sum_{j=0}^{k} \omega(B^{k-j}e, \bar\xi'_i)B^j$$

Hence

$$\forall j \in \{0, \ldots, k\} : \omega(B^j e, \bar\xi'_i) = 0$$

and it follows that $L(e)$ is orthogonal to $\bar L(\bar\xi'_2, \ldots, \bar\xi'_s)$. As

$$\omega(\bar B^j \bar e, \xi'_i) = \overline{\omega(B^j e, \bar\xi'_i)} = 0$$

$\bar L(\bar e)$ is orthogonal to $L(\xi'_2, \ldots, \xi'_s)$, and the result now follows. It should be noted that the two spaces in question are complexifications of real subspaces; from the argument above, these real spaces are orthogonal and hence symplectic.

5. The subspaces $L(e)$ and $L(\xi'_2, \ldots, \xi'_s)$ are stable under $A_C$.

We now turn to the second lemma.

Lemma 5 Let $(\xi_1, \ldots, \xi_s)$ be a basis of the $U$-module $S \subset X$ and suppose that there exist $i \neq j$ such that $\phi(\xi_i, \xi_i)$, $\phi(\xi_j, \xi_j)$ are singular and $\phi(\xi_i, \xi_j)$ is non-singular. Then we can find another basis $(f, g, \xi'_3, \ldots, \xi'_s)$ of the $U$-module $S$ such that

● $\phi(f, g) = \mathrm{id}_X$
● $\phi(f, f) = \phi(g, g) = 0$
● $\forall i \in \{3, \ldots, s\} : \phi(f, \xi'_i) = \phi(g, \xi'_i) = 0$
● $S = L(f, g) \oplus L(\xi'_3, \ldots, \xi'_s)$

Proof Permuting if necessary the elements $\xi_1, \ldots, \xi_s$, we can suppose that $\phi(\xi_1, \xi_2)$ is non-singular. As $\phi(\xi_1, \xi_2)^{-1}$ is invertible there exists $\mu \in U$ such that

$$\mu^2 = \phi(\xi_1, \xi_2)^{-1}$$

If we set

$$\tilde\xi_1 = \mu\xi_1 \quad \text{and} \quad \tilde\xi_2 = \mu^*\xi_2$$

then

$$\phi(\tilde\xi_1, \tilde\xi_2) = \phi(\mu\xi_1, \mu^*\xi_2) = \mu^2\,\phi(\xi_1, \xi_2) = \phi(\xi_1, \xi_2)^{-1}\phi(\xi_1, \xi_2) = \mathrm{id}_X$$

In addition, $\mu^{-1}$ and $\mu^{*-1}$ exist so we can write

$$\xi_1 = \mu^{-1}\tilde\xi_1 \quad \text{and} \quad \xi_2 = \mu^{*-1}\tilde\xi_2$$

so $(\tilde\xi_1, \tilde\xi_2, \xi_3, \ldots, \xi_s)$ is a basis of $S$. Hence, we can suppose that the basis $(\xi_1, \ldots, \xi_s)$ is such that $\phi(\xi_1, \xi_2) = \mathrm{id}_X$. We shall consider two cases separately: $k$ even, $k$ odd.

Case 1: $k$ odd. Notice first that

$$\forall i \in \{1, \ldots, s\} : \phi(\xi_i, \xi_i) = (-1)^{k+1}\phi(\xi_i, \xi_i)^* = \phi(\xi_i, \xi_i)^*$$

so, by identifying coefficients, we obtain that $\phi(\xi_i, \xi_i)$ is of the form

$$\phi(\xi_i, \xi_i) = c_0\,\mathrm{id}_X + c_1 B + c_2 B^2 + \cdots + c_k B^k$$

with

$$c_0, c_2, \ldots, c_{k-1} \in R \quad \text{and} \quad c_1, c_3, \ldots, c_k \in iR$$

By Proposition 5 and the remark after it we can find $\gamma \in U$ such that $\gamma = \gamma^*$ and

$$\gamma = -\tfrac{1}{2}\phi(\xi_1, \xi_1) - \tfrac{1}{2}\phi(\xi_2, \xi_2)\gamma^2$$

If we write

$$f = \xi_1 + \gamma\xi_2$$

we obtain

$$\phi(f, f) = \phi(\xi_1 + \gamma\xi_2, \xi_1 + \gamma\xi_2) = \phi(\xi_1, \xi_1) + \gamma^*\phi(\xi_1, \xi_2) + \gamma\phi(\xi_2, \xi_1) + \gamma\gamma^*\phi(\xi_2, \xi_2)$$
$$= \phi(\xi_1, \xi_1) + \gamma\phi(\xi_1, \xi_2) + \gamma(-1)^{k+1}\phi(\xi_1, \xi_2)^* + \gamma^2\phi(\xi_2, \xi_2)$$
$$= \phi(\xi_1, \xi_1) + 2\gamma + \gamma^2\phi(\xi_2, \xi_2) = 0$$

In addition

$$\phi(f, \xi_2) = \phi(\xi_1 + \gamma\xi_2, \xi_2) = \phi(\xi_1, \xi_2) + \gamma\phi(\xi_2, \xi_2) = \mathrm{id}_X + \gamma\phi(\xi_2, \xi_2)$$

As $\phi(\xi_2, \xi_2)$ is singular, $\phi(f, \xi_2)$ is non-singular. Also

$$f = \xi_1 + \gamma\xi_2 \Rightarrow \alpha_1\xi_1 + \alpha_2\xi_2 = \alpha_1(f - \gamma\xi_2) + \alpha_2\xi_2 = \alpha_1 f + (\alpha_2 - \alpha_1\gamma)\xi_2$$

It follows easily that $(f, \xi_2, \ldots, \xi_s)$ is a basis of the $U$-module $S$ and, applying the argument used at the beginning of the proof, we can suppose that $\phi(f, \xi_2) = \mathrm{id}_X$. Now set

$$g = \xi_2 - \tfrac{1}{2}\phi(\xi_2, \xi_2)f$$

Then

$$\phi(g, g) = \phi\big(\xi_2 - \tfrac{1}{2}\phi(\xi_2, \xi_2)f, \; \xi_2 - \tfrac{1}{2}\phi(\xi_2, \xi_2)f\big)$$
$$= \phi(\xi_2, \xi_2) - \tfrac{1}{2}\phi(\xi_2, \xi_2)^*\phi(\xi_2, f) - \tfrac{1}{2}\phi(\xi_2, \xi_2)\phi(f, \xi_2) + \tfrac{1}{4}\phi(\xi_2, \xi_2)\phi(\xi_2, \xi_2)^*\phi(f, f)$$
$$= \phi(\xi_2, \xi_2) - \tfrac{1}{2}\phi(\xi_2, \xi_2)\,\mathrm{id}_X - \tfrac{1}{2}\phi(\xi_2, \xi_2)\,\mathrm{id}_X + \tfrac{1}{4}\phi(\xi_2, \xi_2)^2\phi(f, f) = 0$$

and

$$\phi(f, g) = \phi\big(f, \; \xi_2 - \tfrac{1}{2}\phi(\xi_2, \xi_2)f\big) = \phi(f, \xi_2) - \tfrac{1}{2}\phi(\xi_2, \xi_2)^*\phi(f, f) = \phi(f, \xi_2) = \mathrm{id}_X$$

Finally, for $i \in \{3, \ldots, s\}$ set

$$\xi'_i = \xi_i - \phi(g, \xi_i)^* f - \phi(f, \xi_i)^* g$$

Then

$$\phi(f, \xi'_i) = \phi(f, \xi_i) - \phi(g, \xi_i)\phi(f, f) - \phi(f, \xi_i)\phi(f, g) = 0$$

and

$$\phi(g, \xi'_i) = \phi(g, \xi_i) - \phi(g, \xi_i)\phi(g, f) - \phi(f, \xi_i)\phi(g, g) = \phi(g, \xi_i) - \phi(g, \xi_i)(-1)^{k+1}\mathrm{id}_X = 0$$

Using some elementary (but tiresome) checking it is not difficult to show that $(f, g, \xi'_3, \ldots, \xi'_s)$ is a basis of the $U$-module $S$. That

$$S = L(f, g) \oplus L(\xi'_3, \ldots, \xi'_s)$$

follows directly from the values of $\phi(f, g)$ and $\phi(f, \xi'_i)$, $\phi(g, \xi'_i)$, $i = 3, \ldots, s$.

Case 2: $k$ even. The reasoning used here is very similar to that used in the first case, so we shall be briefer. To begin with we have

$$\forall i \in \{1, \ldots, s\} : \phi(\xi_i, \xi_i) = (-1)^{k+1}\phi(\xi_i, \xi_i)^* = -\phi(\xi_i, \xi_i)^*$$

so, by identifying coefficients, we find that $\phi(\xi_i, \xi_i)$ is of the form

$$\phi(\xi_i, \xi_i) = c_0 + c_1 B + \cdots + c_k B^k$$

with

$$c_0, c_2, \ldots \in iR \quad \text{and} \quad c_1, c_3, \ldots \in R$$

By Proposition 5 and the remark after it we can find $\gamma \in U$ such that $\gamma^* = -\gamma$ and

$$\gamma = \tfrac{1}{2}\phi(\xi_1, \xi_1) - \tfrac{1}{2}\phi(\xi_2, \xi_2)\gamma^2$$

Writing

$$f = \xi_1 + \gamma\xi_2$$

we obtain

$$\phi(f, f) = 0$$

and

$$\phi(f, \xi_2) = \mathrm{id}_X + \gamma\phi(\xi_2, \xi_2)$$

As $\phi(\xi_2, \xi_2)$ is singular, $\phi(f, \xi_2)$ is non-singular. As in Case 1, $(f, \xi_2, \ldots, \xi_s)$ is a basis of the $U$-module $S$ and we can suppose that $\phi(f, \xi_2) = \mathrm{id}_X$. Setting

$$g = \xi_2 - \tfrac{1}{2}\phi(\xi_2, \xi_2)f$$

we obtain

$$\phi(g, g) = 0 \quad \text{and} \quad \phi(f, g) = \mathrm{id}_X$$

Finally, for $i \in \{3, \ldots, s\}$ we set

$$\xi'_i = \xi_i + \phi(g, \xi_i)^* f - \phi(f, \xi_i)^* g$$

and obtain as in Case 1 that

$$\phi(f, \xi'_i) = \phi(g, \xi'_i) = 0$$

and in addition that $(f, g, \xi'_3, \ldots, \xi'_s)$ is a basis of the $U$-module $S$ and that

$$S = L(f, g) \oplus L(\xi'_3, \ldots, \xi'_s)$$

This finishes the lemma. □

As after Lemma 4, we shall take a closer look at the results we have just obtained.

1. We have proved that

$$\mathrm{id}_X = \phi(f, g) = \omega(B^k f, \bar g)\,\mathrm{id}_X + \omega(B^{k-1}f, \bar g)B + \cdots + \omega(f, \bar g)B^k$$

Thus

$$\omega(B^k f, \bar g) = 1 \quad \text{and} \quad \omega(B^{k-1}f, \bar g) = \cdots = \omega(f, \bar g) = 0$$

Also

$$\phi(f, f) = 0 \Rightarrow \forall i \in \{0, \ldots, k\} : \omega(B^i f, \bar f) = 0$$
$$\phi(g, g) = 0 \Rightarrow \forall i \in \{0, \ldots, k\} : \omega(B^i g, \bar g) = 0$$

2. The family $(f, Bf, \ldots, B^k f, g, Bg, \ldots, B^k g)$ is a basis of the vector subspace $L(f, g)$.

3. The set $\{\xi'_3, B\xi'_3, \ldots, B^k\xi'_3, \ldots, \xi'_s, B\xi'_s, \ldots, B^k\xi'_s\}$ generates $L(\xi'_3, \ldots, \xi'_s)$. Thus one way of finding a basis for $L(\xi'_3, \ldots, \xi'_s)$ is to look for a maximal independent subset of this set.

4. Clearly, we have the vector space decomposition

$$S \oplus \bar S = (L(f, g) \oplus \bar L(\bar f, \bar g)) \oplus (L(\xi'_3, \ldots, \xi'_s) \oplus \bar L(\bar\xi'_3, \ldots, \bar\xi'_s))$$

In fact, the subspaces $L(f, g) \oplus \bar L(\bar f, \bar g)$ and $L(\xi'_3, \ldots, \xi'_s) \oplus \bar L(\bar\xi'_3, \ldots, \bar\xi'_s)$ are both symplectic. As after Lemma 4 for the corresponding subspaces, it can be shown that the subspaces are orthogonal and hence symplectic. Here also the two spaces in question are complexifications of real subspaces; these real spaces are orthogonal and hence symplectic.

5. The subspaces $L(f, g)$ and $L(\xi'_3, \ldots, \xi'_s)$ are stable under $A_C$.

7. The decomposition of a Hamiltonian operator having two imaginary eigenvalues

We shall now state and prove the decomposition theorem for a Hamiltonian operator $A$ on a symplectic space $(V,\omega)$ where the operator has a pair of imaginary eigenvalues $\pm i\nu$. As above, we shall suppose that $V$ is contained in some $R^{2n}$. The complexification $V_C$ of $V$ can be written

$$V_C = \mathrm{gen}(i\nu) \oplus \mathrm{gen}(-i\nu) = X \oplus Y$$

Theorem 3 If $B$, $\bar B$, etc. are defined as in the previous sections, then $V_C$ has the decomposition

$$V_C = (U_1 \oplus \bar U_1) \oplus \cdots \oplus (U_\sigma \oplus \bar U_\sigma) \oplus (W_1 \oplus \bar W_1) \oplus \cdots \oplus (W_\tau \oplus \bar W_\tau)$$

such that

● $U_1, \ldots, U_\sigma, W_1, \ldots, W_\tau \subset \mathrm{gen}(i\nu)$ (clearly $\bar U_1, \ldots, \bar U_\sigma, \bar W_1, \ldots, \bar W_\tau \subset \mathrm{gen}(-i\nu)$)
● all subspaces $U_i \oplus \bar U_i$ and $W_i \oplus \bar W_i$ are symplectic and all pairs of such subspaces are orthogonal; in addition each one of these spaces is invariant under $A_C$
● each subspace $U_i$ (resp. $\bar U_i$) is invariant under $B$ (resp. $\bar B$), which is nilpotent on it, and if the nilpotency index of $B$ restricted to $U_i$ is $k_i + 1$ then $U_i$ has a basis of the form $(e_i, Be_i, \ldots, B^{k_i}e_i)$ such that

$$\omega(B^s e_i, \bar e_i) = \begin{cases} \pm 1 & \text{if } s = k_i, \; k_i \text{ odd} \\ \pm i & \text{if } s = k_i, \; k_i \text{ even} \\ 0 & \text{otherwise} \end{cases}$$

● each subspace $W_i$ is invariant under $B$, which is nilpotent on it; if the nilpotency index of $B$ on $W_i$ is $l_i + 1$ then $W_i$ has a basis of the form $(f_i, Bf_i, \ldots, B^{l_i}f_i, g_i, Bg_i, \ldots, B^{l_i}g_i)$ such that

$$\omega(B^s f_i, \bar g_i) = \begin{cases} 1 & \text{if } s = l_i \\ 0 & \text{otherwise} \end{cases}$$

and

$$\forall s \in \{0, \ldots, l_i\} : \omega(B^s f_i, \bar f_i) = \omega(B^s g_i, \bar g_i) = 0$$

Proof From the theory of the Jordan form, we know that X has a basis of the following form:

v_1, Bv_1, ..., B^{s_1} v_1
v_2, Bv_2, ..., B^{s_2} v_2
...
v_r, Bv_r, ..., B^{s_r} v_r

where

∀i ∈ {1, ..., r}: s_i ≤ k and Σ_{i=1}^{r} (s_i + 1) = dim X


If we consider X as a U-module, then the family (v_1, ..., v_r) is a basis. We shall now show that there exist i, j ∈ {1, ..., r} such that (v_i, v_j) is non-singular. If the contrary were true, then

∀i, j ∈ {1, ..., r}: ω(B^k v_i, v̄_j) = 0

Hence, as B is nilpotent and its nilpotency index is k + 1, we have

∀l ≥ 0, ∀m ≥ 0: ω(B^{k+l+m} v_i, v̄_j) = 0

Using Lemma 2, we obtain

∀l ≥ 0, ∀m ≥ 0: ω(B^{k+l} v_i, B̄^m v̄_j) = 0

As the family of elements of the form B̄^m v̄_j generates Y and ω is non-degenerate, we have

∀i ∈ {1, ..., r}, ∀l ≥ 0: B^{k+l} v_i = 0

In particular,

∀i ∈ {1, ..., r}, ∀l ∈ {0, ..., s_i}: B^k (B^l v_i) = 0

Hence B^k = 0, which is a contradiction. So there is always a pair (v_i, v_j) (with possibly i = j) such that (v_i, v_j) is non-singular.

Applying either Lemma 4 or Lemma 5, we obtain a first decomposition. Replacing V by the second element in the decomposition and repeating the argument, we obtain a second decomposition. Continuing in this way, we arrive at the result. □

We shall refer to the sequence (v_1, ..., v_r) as a Jordan family.

Remark The subspaces (U_i ⊕ Ū_i) and (W_i ⊕ W̄_i) are clearly complexifications of real subspaces, so, if we take the subspaces of these spaces composed of their real vectors, we obtain a decomposition of V into real, pairwise orthogonal, symplectic subspaces.

If A is a Hamiltonian operator on a symplectic space (V, ω), then the space is said to be indecomposable for A if V cannot be written as a non-trivial direct sum of two symplectic subspaces which are orthogonal and stable for A.

Theorem 3a V has a decomposition

V = (U_1 ⊕ Ū_1)_R ⊕ ... ⊕ (U_γ ⊕ Ū_γ)_R

such that the (U_i ⊕ Ū_i)_R are indecomposable.

Proof To simplify the notation, we shall drop the indices i. Suppose that (U ⊕ Ū)_R has the decomposition E ⊕ F. A restricted to E and to F has a complex Jordan form composed of Jordan blocks of the form

[ iν   1           ]
[      iν   ⋱      ]
[           ⋱   1  ]
[               iν ]

and

[ −iν   1            ]
[       −iν   ⋱      ]
[             ⋱   1  ]
[                −iν ]

the number of each type of block of a given order being the same. Thus (U ⊕ Ū)_R would have at least two Jordan blocks of each type. However, we know that (U ⊕ Ū)_R has the complex Jordan form consisting of exactly one block of each type,

[ J(iν)    0     ]
[   0    J(−iν)  ]

Thus we have a contradiction, and so (U ⊕ Ū)_R is indecomposable. Now let us look at the spaces (W ⊕ W̄)_R. We recall that (f, g) is a U-module basis with

(f, g) = id_V, (f, f) = (g, g) = 0

First, consider the case where l is odd. It is not difficult to see that (u, v) is also a U-module basis, where u = (1/√2)(f + g) and v = (1/√2)(f − g). In addition, we have

(u, u) = ½((f, g) + (g, f))
       = ½((f, g) + (−1)^{l+1}(f, g))
       = id_V

Similar calculations give

(v, v) = −id_V, (u, v) = (v, u) = 0

If l is even and we set u = (1/√2)(f + ig), v = (1/√2)(f − ig), we obtain

(u, u) = −i·id_V, (v, v) = i·id_V, (u, v) = (v, u) = 0

It follows that in both cases the space (W ⊕ W̄)_R can be decomposed into two subspaces of the first type, and we have already seen that these spaces are indecomposable. This completes the proof. □

In fact, we can analyse this decomposition a little further. Let us consider the triples (V, ω, A), where A is a Hamiltonian operator on the symplectic space (V, ω) (we can consider real or complex symplectic spaces). We write

(V, ω, A) ≅ (V′, ω′, A′)

if there exists a symplectomorphism φ: V → V′ such that φ ∘ A = A′ ∘ φ. Clearly, ≅ defines an equivalence relation and all elements of the same equivalence class, which we call a type, have the same dimension. Notice also that if A can be represented by a matrix A in a symplectic basis, then this is also the case for A′ if and only if (V, ω, A) and (V′, ω′, A′) belong to the same type.

It is not difficult to see that if V and V′ belong to the same type and V has the decomposition V = E ⊕ F, then V′ has a decomposition V′ = E′ ⊕ F′ with

(E, ω̃, Ã) ≅ (E′, ω̃′, Ã′), (F, ω̃, Ã) ≅ (F′, ω̃′, Ã′)

where we have used the tilde to denote the restrictions to the appropriate subspaces. Thus we can speak of decomposable and indecomposable types.

N. Burgoyne and R. Cushman have proved in [3,4] that any type has a unique decomposition into indecomposable types or, to put it another way, any decomposition of a symplectic space into indecomposable subspaces is unique up to type; that is, in any such decomposition we find the same number of representatives of the same indecomposable types.

8. Finding the Burgoyne and Cushman canonical form

Let us look a little closer at the spaces U_i ⊕ Ū_i. As in the theorem we have just proved, we shall drop the indices i to simplify the notation, and we shall let the nilpotency index of B restricted to the space which interests us be k + 1.

Case 1: k odd, (e, e) = ±id_X

First, let us consider the case where (e, e) = +id_X. If we set

∀i ∈ {1, ..., k + 1}: u_i = B^{i−1} e, v_i = (−1)^i B̄^{k−i+1} ē

then it is easy to verify that (u_1, ..., u_{k+1}, v_1, ..., v_{k+1}) is a complex symplectic basis. In addition,

A_C u_i = A_C B^{i−1} e
        = A_C (A_C − iν id_X)^{i−1} e
        = (A_C − iν id_X)^i e + iν (A_C − iν id_X)^{i−1} e
        = u_{i+1} + iν u_i, i = 1, ..., k
        = iν u_i,           i = k + 1

A similar calculation gives

A_C v_i = −iν v_i,            i = 1
        = −iν v_i − v_{i−1},  i = 2, ..., k + 1

Thus, the matrix of A in the basis (u_1, ..., u_{k+1}, v_1, ..., v_{k+1}) is of the form

A = [ A1  A2 ]
    [ A3  A4 ]

with

A4 = −A1^t, A2 = A3 = 0

where

A1 = [ iν            ]
     [ 1   iν        ]
     [     ⋱    ⋱    ]
     [          1  iν ]

We now set

q_i = (1/√2) i (u_{(i+1)/2} + (−1)^{(i+1)/2} v_{k−(i+1)/2+2}),  i odd
    = (1/√2) (u_{i/2} − (−1)^{i/2} v_{k−i/2+2}),                i even

p_i = (1/√2) i ((−1)^{(i+1)/2} u_{k−(i+1)/2+2} − v_{(i+1)/2}),  i odd
    = (1/√2) ((−1)^{i/2} u_{k−i/2+2} + v_{i/2}),                i even

A simple computation shows that if i is odd then

q_i = √2 Im u_{(i+1)/2},  p_i = (−1)^{(i+1)/2} √2 Im u_{k−(i+1)/2+2}

and if i is even

q_i = √2 Re u_{i/2},  p_i = (−1)^{i/2} √2 Re u_{k−i/2+2}

Using the fact that (u_1, ..., u_{k+1}, v_1, ..., v_{k+1}) is a symplectic basis, it is easy to show that (q_1, ..., q_{k+1}, p_1, ..., p_{k+1}) is a real symplectic basis. Now employing the images of the u_i's and v_i's under A_C, after some lengthy calculations we obtain the matrix of A in this basis; it has the form

A = [ A1  A2 ]
    [ A3  A4 ]

with

A4 = −A1^t

A1 = [ B           ]
     [ I   B       ]
     [     ⋱   ⋱   ]
     [         I  B ]


where

B = [ 0  −ν ]      I = [ 1  0 ]
    [ ν   0 ]          [ 0  1 ]

A2 = 0

A3 = [ 0                         ]
     [    ⋱                      ]
     [       0                   ]
     [          (−1)^{(k+1)/2} I ]

Now let us consider the case where (e, e) = −id_X. Instead of setting u_i = B^{i−1} e, we set u_i = −B^{i−1} e. Then, as before, (u_1, ..., u_{k+1}, v_1, ..., v_{k+1}) is a complex symplectic basis and

A_C u_i = u_{i+1} + iν u_i, i = 1, ..., k
        = iν u_i,           i = k + 1

and

A_C v_i = −iν v_i,            i = 1
        = −iν v_i − v_{i−1},  i = 2, ..., k + 1

We thus obtain the same complex matrix as before. Now we set

q_i = (1/√2) (u_{(i+1)/2} + (−1)^{(i+1)/2} v_{k−(i+1)/2+2}),  i odd
    = (1/√2) i (−u_{i/2} + (−1)^{i/2} v_{k−i/2+2}),           i even

p_i = (1/√2) ((−1)^{(i−1)/2} u_{k−(i+1)/2+2} + v_{(i+1)/2}),  i odd
    = (1/√2) i ((−1)^{i/2} u_{k−i/2+2} + v_{i/2}),            i even

If i is odd, then

q_i = √2 Re u_{(i+1)/2},  p_i = (−1)^{(i−1)/2} √2 Re u_{k−(i+1)/2+2}

and if i is even

q_i = √2 Im u_{i/2},  p_i = (−1)^{i/2} √2 Im u_{k−i/2+2}

The matrix of A in this basis is of the form

A = [ A1  A2 ]
    [ A3  A4 ]


with

A4 = −A1^t

A1 = [ B           ]
     [ I   B       ]
     [     ⋱   ⋱   ]
     [         I  B ]

A2 = 0

A3 = [ 0                         ]
     [    ⋱                      ]
     [       0                   ]
     [          (−1)^{(k−1)/2} I ]

Notice that the two real matrices differ only in the sign of the element (a_{2n,n}), and that they are precisely the matrices corresponding to indecomposable types found by Burgoyne and Cushman when the order is twice an even number.
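As a numerical sanity check (our own sketch, not the paper's code), the smallest Case 1 instance, k = 1, gives the 4×4 matrix with A1 = [[0, −ν], [ν, 0]], A2 = 0, A3 = −I and A4 = −A1^t. It should be Hamiltonian, and, having one Jordan block of size 2 for each of ±iν, it should satisfy (A² + ν²I)² = 0:

```python
# Sketch (our own check): build the k = 1, (e,e) = +id canonical block and
# verify it is Hamiltonian with minimal polynomial (x^2 + nu^2)^2.
nu = 2.0
A1 = [[0.0, -nu], [nu, 0.0]]
A3 = [[-1.0, 0.0], [0.0, -1.0]]                  # (-1)^((k+1)/2) * I for k = 1
A = [[A1[0][0], A1[0][1], 0.0, 0.0],
     [A1[1][0], A1[1][1], 0.0, 0.0],
     [A3[0][0], A3[0][1], -A1[0][0], -A1[1][0]],  # bottom-right block is -A1^t
     [A3[1][0], A3[1][1], -A1[0][1], -A1[1][1]]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]  # J_4
JA = matmul(J, A)
# Hamiltonian condition A^t J + J A = 0 is equivalent to J A being symmetric
assert all(abs(JA[i][j] - JA[j][i]) < 1e-12 for i in range(4) for j in range(4))
# (A^2 + nu^2 I)^2 = 0: eigenvalues +-i*nu, each in one Jordan block of size 2
A2_ = matmul(A, A)
M = [[A2_[i][j] + (nu * nu if i == j else 0.0) for j in range(4)] for i in range(4)]
M2 = matmul(M, M)
assert all(abs(M2[i][j]) < 1e-12 for i in range(4) for j in range(4))
print("k = 1 canonical block: Hamiltonian, (A^2 + nu^2 I)^2 = 0")
```

The companion matrix with A3 = +I passes the same checks, which is consistent with the two forms differing only in the sign of that block.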

Case 2: k even, (e, e) = ±i·id_X

We shall consider in detail the case where (e, e) = +i·id_X. If we set

∀i ∈ {1, ..., k + 1}: u_i = B^{i−1} e, v_i = (−1)^i i B̄^{k−i+1} ē

then it is easy to verify that (u_1, ..., u_{k+1}, v_1, ..., v_{k+1}) is a complex symplectic basis. In addition,

A_C u_i = u_{i+1} + iν u_i, i = 1, ..., k
        = iν u_i,           i = k + 1

The matrix of A in this complex basis has the same form as that found above in Case 1 for the complex bases. We now set

q_i = (1/√2) (u_i + (−1)^{i+1} i v_{k−i+2}),  p_i = (1/√2) i ((−1)^i u_{k−i+2} + i v_i)

A simple calculation shows that

q_i = √2 Re u_i,  p_i = (−1)^i √2 Im u_{k−i+2}

It is not difficult to see that (q_1, ..., q_{k+1}, p_1, ..., p_{k+1}) is a real symplectic basis. The matrix of A in this basis is of the form

A = [ A1  A2 ]
    [ A3  A4 ]

with

A3 = −A2 and A4 = −A1^t

where

A1 = [ 0            ]
     [ 1   0        ]
     [     ⋱   ⋱    ]
     [         1  0 ]

If (e, e) = −i·id_X, instead of setting u_i = B^{i−1} e, we set u_i = −B^{i−1} e. Then, as before, (u_1, ..., u_{k+1}, v_1, ..., v_{k+1}) is a complex symplectic basis and

A_C u_i = u_{i+1} + iν u_i, i = 1, ..., k
        = iν u_i,           i = k + 1

and

A_C v_i = −iν v_i,            i = 1
        = −iν v_i − v_{i−1},  i = 2, ..., k + 1

Once again, we find a complex matrix of the same form. We now set

q_i = (1/√2) i (u_i + (−1)^{i+1} i v_{k−i+2}),  p_i = (1/√2) ((−1)^i u_{k−i+2} + i v_i)

A simple calculation shows that

q_i = √2 Im u_i,  p_i = (−1)^i √2 Re u_{k−i+2}

We easily obtain that (q_1, ..., q_{k+1}, p_1, ..., p_{k+1}) is a real symplectic basis. The matrix of A in this basis is of the form

A = [ A1  A2 ]
    [ A3  A4 ]

with

A3 = −A2 and A4 = −A1^t


where

A1 = [ 0            ]
     [ 1   0        ]
     [     ⋱   ⋱    ]
     [         1  0 ]

These matrices are precisely those corresponding to the indecomposable types found by Burgoyne and Cushman when the order is twice an odd number. In the light of what we have seen, we can state a third decomposition theorem.
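As a quick check (our own, using the reconstructed Case 2 signs), the smallest instance k = 0 collapses to the familiar 2×2 rotation generator A = [[0, −ν], [ν, 0]], which is Hamiltonian and has eigenvalues ±iν since A² = −ν²I:

```python
# Our own numeric check of the smallest (k = 0) Case 2 block:
# A = [[0, -nu], [nu, 0]] with A1 = A4 = 0 and antidiagonal blocks carrying nu.
nu = 3.0
A = [[0.0, -nu], [nu, 0.0]]
J = [[0.0, 1.0], [-1.0, 0.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

JA = matmul(J, A)                  # Hamiltonian <=> J A symmetric
assert JA[0][1] == JA[1][0]
A2 = matmul(A, A)                  # A^2 = -nu^2 I  =>  eigenvalues +-i*nu
assert A2[0][0] == -nu * nu and A2[1][1] == -nu * nu
assert A2[0][1] == 0.0 and A2[1][0] == 0.0
print("k = 0 block is Hamiltonian with eigenvalues +-i*nu")
```

The order here is 2 = 2·1, twice an odd number, in agreement with the statement above.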

Theorem 3b V has a decomposition

V = (A_1 ⊕ Ā_1)_R ⊕ ... ⊕ (A_α ⊕ Ā_α)_R ⊕ (B_1 ⊕ B̄_1)_R ⊕ ... ⊕ (B_β ⊕ B̄_β)_R ⊕ (C_1 ⊕ C̄_1)_R ⊕ ... ⊕ (C_γ ⊕ C̄_γ)_R ⊕ (D_1 ⊕ D̄_1)_R ⊕ ... ⊕ (D_δ ⊕ D̄_δ)_R

such that the subspaces are indecomposable and such that

• A_i contains a vector e_i with (e_i, e_i) = id_X
• B_i contains a vector e_i with (e_i, e_i) = −id_X
• C_i contains a vector e_i with (e_i, e_i) = i·id_X
• D_i contains a vector e_i with (e_i, e_i) = −i·id_X

This decomposition is unique in the sense that any other such decomposition contains the same number of elements of each kind of the same dimension.

From the symplectic bases of the spaces U_i ⊕ Ū_i we can form a symplectic basis of V. If we write (q^i_1, ..., q^i_{k_i}, p^i_1, ..., p^i_{k_i}) for the symplectic basis of the space U_i ⊕ Ū_i, it is easy to verify that

(q^1_1, ..., q^1_{k_1}, ..., q^γ_1, ..., q^γ_{k_γ}, p^1_1, ..., p^1_{k_1}, ..., p^γ_1, ..., p^γ_{k_γ})

is a symplectic basis of V. In this basis the matrix of A is built from the blocks A1, A2, A3, A4 found above (the displayed matrix is not reproduced here). This matrix is the Burgoyne and Cushman canonical form.


9. Constructing an algorithm

Using the theory developed in the preceding sections, we shall present in this section an algorithm which calculates the canonical form of a Hamiltonian matrix having a pair of imaginary eigenvalues, together with the transformation matrix which produces it.

First, let us observe that in a symplectic space (V, ω) the transformation matrix from one symplectic basis to another is always symplectic. This is easy to see. Suppose that (u_i)_{1≤i≤2n} and (v_i)_{1≤i≤2n} are symplectic bases; then if the matrices of the bilinear form in the first and second bases are respectively M and N, and if P is the transformation matrix from the first to the second basis, we have

J_{2n} = N = P^t M P = P^t J_{2n} P

Hence the result. As we have already observed, the bilinear form ω_{R^{2n}} defined on R^{2n} by

∀x, y ∈ R^{2n}: ω_{R^{2n}}(x, y) = x^t J_{2n} y

is symplectic. Thus (R^{2n}, ω_{R^{2n}}) is a symplectic space, and it is easy to verify that the canonical basis (e_i)_{1≤i≤2n} is symplectic.
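This last claim is easy to check numerically; a small sketch of our own (n = 3), using the standard J_{2n} with identity blocks in the off-diagonal corners:

```python
# Our own check that omega(x, y) = x^t J_{2n} y makes the canonical basis of
# R^{2n} symplectic: omega(e_i, e_{n+i}) = 1 and all other pairings with
# j > i vanish.
n = 3
J = [[0.0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    J[i][n + i] = 1.0      # upper-right block  I_n
    J[n + i][i] = -1.0     # lower-left block  -I_n

def omega(x, y):
    return sum(x[i] * J[i][j] * y[j] for i in range(2 * n) for j in range(2 * n))

e = [[1.0 if i == j else 0.0 for j in range(2 * n)] for i in range(2 * n)]
for i in range(2 * n):
    for j in range(i + 1, 2 * n):
        expected = 1.0 if j == n + i else 0.0
        assert omega(e[i], e[j]) == expected
print("canonical basis of (R^6, omega) is symplectic")
```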

Now consider a Hamiltonian matrix A whose eigenvalues are a pair of imaginary numbers of opposite sign. If we define the operator A on R^{2n} by

∀x ∈ R^{2n}: A(x) = Ax

then A is a Hamiltonian operator on R^{2n} whose matrix in the canonical basis is clearly A. If (v_i)_{1≤i≤2n} is the symplectic basis found at the end of the preceding section (whose elements belong to R^{2n}) and if P is the matrix whose columns are the elements of this basis, then P is the transformation matrix from the basis (e_i)_{1≤i≤2n} to the basis (v_i)_{1≤i≤2n}. As both these bases are symplectic, P is symplectic, and if we write Ã for the matrix of the operator A in the new basis, we have

Ã = P^{−1} A P

Ã has the form referred to in the introduction and its uniqueness is a result of Theorem 3b. It is the canonical form of A, and P is a corresponding transformation matrix. Hence, one way of finding a canonical form of a Hamiltonian matrix and an associated transformation matrix is to look for a symplectic basis as described in the preceding sections. Let us now make such an algorithm precise. We shall use the notation already defined above and, in the interest of clarity, we shall omit certain indices.
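The fact underlying this step, that conjugation by a symplectic matrix preserves the Hamiltonian property, can be illustrated in the smallest case n = 1, where a matrix is symplectic exactly when its determinant is 1 (our own sketch):

```python
# Our own sketch: conjugating a Hamiltonian matrix by a symplectic matrix
# yields a Hamiltonian matrix (n = 1 example).
nu = 2.0
A = [[0.0, -nu], [nu, 0.0]]          # Hamiltonian
P = [[1.0, 1.0], [0.0, 1.0]]         # det P = 1, hence P^t J P = J
Pinv = [[1.0, -1.0], [0.0, 1.0]]
J = [[0.0, 1.0], [-1.0, 0.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

assert matmul(transpose(P), matmul(J, P)) == J   # P is symplectic
Atilde = matmul(Pinv, matmul(A, P))              # A~ = P^-1 A P
JAt = matmul(J, Atilde)
assert JAt[0][1] == JAt[1][0]                    # J A~ symmetric => A~ Hamiltonian
print("P^-1 A P is again Hamiltonian")
```

Here P is of course not the transformation matrix constructed by the algorithm; it merely demonstrates the conjugation property.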

In the main program, we use four elementary procedures. Each one determines, from the nilpotency index k + 1 of B restricted to a certain subspace S of X = gen(iν), a vector e in this space and the value of (e, e) (= ±id_X or ±i·id_X), a real symplectic basis (q_1, ..., q_{k+1}, p_1, ..., p_{k+1}) of the space (U ⊕ Ū)_R, where U is the subspace generated by e, Be, B²e, ..., B^k e, and the matrix

A = [ A1  A2 ]
    [ A3  A4 ]

of A restricted to (U ⊕ Ū)_R in the symplectic basis. We number the procedures in the following way:


• Proc1 (e, k) – k odd, (e, e) = +id_X
• Proc2 (e, k) – k odd, (e, e) = −id_X
• Proc3 (e, k) – k even, (e, e) = +i·id_X
• Proc4 (e, k) – k even, (e, e) = −i·id_X

PROGRAM IMCANFORM

1. Find the generalized eigenspaces X = gen(iν) and Y = gen(−iν);
2. Set S = X (clearly S̄ = Y);
3. Find a Jordan family (v_1, ..., v_r) for S;
4. Calculate k, where k + 1 is the nilpotency index of B restricted to S;
5. Calculate the (v_i, v_i); if there is no i such that (v_i, v_i) is non-singular, go to 12; otherwise change the indices, if necessary, so that (v_1, v_1) is non-singular;
6. From the module basis (v_1, ..., v_r) determine a new module basis (e, ν′_2, ..., ν′_r) as described in Lemma 4 (the set {ν′_2, ..., ν′_r} may be empty);
7. If (e, e) = +id_X, do Proc1 (e, k);
8. If (e, e) = −id_X, do Proc2 (e, k);
9. If (e, e) = +i·id_X, do Proc3 (e, k);
10. If (e, e) = −i·id_X, do Proc4 (e, k);
11. If the set {ν′_2, ..., ν′_r} is not empty, let S be the module generated by this set and go to 3; otherwise go to 17;
12. For i ≠ j calculate (v_i, v_j); if necessary, change the indices so that (v_1, v_2) is non-singular;
13. From the module basis (v_1, ..., v_r) determine a new module basis (f, g, ν′_3, ..., ν′_r) as described in Lemma 5 (the set {ν′_3, ..., ν′_r} may be empty);
14. If k is odd, set u = (1/√2)(f + g), v = (1/√2)(f − g) and do Proc1 (u, k) and Proc2 (v, k);
15. If k is even, set u = (1/√2)(f + ig), v = (1/√2)(f − ig) and do Proc4 (u, k) and Proc3 (v, k);
16. If the set {ν′_3, ..., ν′_r} is not empty, let S be the module generated by this set and go to 3; otherwise go to 17;
17. Form the canonical and transformation matrices using the blocks A1, A2, A3, A4 and the vectors (q_1, ..., q_{k+1}, p_1, ..., p_{k+1}) found in the successive passes of the algorithm.
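Step 4 asks for k, where k + 1 is the nilpotency index of B on S. A minimal sketch of such a computation (our own helper, not the paper's code; `nilpotency_index` is a hypothetical name) simply multiplies powers of B until the zero matrix appears:

```python
# Our own helper: smallest m with B^m = 0 (the nilpotency index), or None
# if B is not nilpotent.  Illustrated on one Jordan chain of length 3.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def nilpotency_index(B, tol=1e-12):
    n = len(B)
    P = [row[:] for row in B]
    for m in range(1, n + 1):
        if all(abs(P[i][j]) < tol for i in range(n) for j in range(n)):
            return m
        P = matmul(P, B)
    return None

B = [[0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]              # shift matrix: one chain v, Bv, B^2v
assert nilpotency_index(B) == 3    # nilpotency index 3, i.e. k = 2 here
print("nilpotency index:", nilpotency_index(B))
```

In practice B would be the matrix of A_C − iν·id restricted to S in some basis; computing S itself (steps 1–3) would use a Jordan-form routine, as noted below.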

Now let us look at the procedures.

Proc1 (e,k)

1. For i = 1 to k + 1 set

u_i = B^{i−1} e

2. For j = 0 to (k − 1)/2 set

q_{2j+1} = √2 Im u_{j+1}
p_{2j+1} = (−1)^{j+1} √2 Im u_{k−j+1}


3. For j = 1 to (k + 1)/2 set

q_{2j} = √2 Re u_j
p_{2j} = (−1)^j √2 Re u_{k−j+2}

4. Set A1 = (a¹_{ij}) with

a¹_{12} = −ν
a¹_{21} = +ν
a¹_{i,i−2} = 1, a¹_{i,i+1} = −ν, i odd, i ≠ 1
a¹_{i,i−2} = 1, a¹_{i,i−1} = +ν, i even, i ≠ 2
a¹_{ij} = 0 otherwise

A2 = 0

A3 = (a³_{ij}) with

a³_{k,k} = (−1)^{(k+1)/2}
a³_{k+1,k+1} = (−1)^{(k+1)/2}
a³_{ij} = 0 otherwise

A4 = (a⁴_{ij}) with

a⁴_{i,i+1} = −ν, a⁴_{i,i+2} = −1, i odd, i ≠ k
a⁴_{i,i−1} = +ν, a⁴_{i,i+2} = −1, i even, i ≠ k + 1
a⁴_{k,k+1} = −ν
a⁴_{k+1,k} = +ν
a⁴_{ij} = 0 otherwise

Proc2 (e,k)

1. For i = 1 to k + 1 set

u_i = −B^{i−1} e

2. For j = 0 to (k − 1)/2 set

q_{2j+1} = √2 Re u_{j+1}
p_{2j+1} = (−1)^{j−1} √2 Re u_{k−j+1}

3. For j = 1 to (k + 1)/2 set

q_{2j} = −√2 Im u_j
p_{2j} = (−1)^j √2 Im u_{k−j+2}


4. Set A1 = (a¹_{ij}) with

a¹_{12} = −ν
a¹_{21} = +ν
a¹_{i,i−2} = 1, a¹_{i,i+1} = −ν, i odd, i ≠ 1
a¹_{i,i−2} = 1, a¹_{i,i−1} = +ν, i even, i ≠ 2
a¹_{ij} = 0 otherwise

A2 = 0

A3 = (a³_{ij}) with

a³_{k,k} = (−1)^{(k−1)/2}
a³_{k+1,k+1} = (−1)^{(k−1)/2}
a³_{ij} = 0 otherwise

A4 = (a⁴_{ij}) with

a⁴_{i,i+1} = +ν, a⁴_{i,i+2} = −1, i odd, i ≠ k
a⁴_{i,i−1} = −ν, a⁴_{i,i+2} = −1, i even, i ≠ k + 1
a⁴_{k,k+1} = +ν
a⁴_{k+1,k} = −ν
a⁴_{ij} = 0 otherwise

Proc3 (e,k)

1. For i = 1 to k + 1 set

u_i = B^{i−1} e

2. For j = 1 to k + 1 set

q_j = √2 Re u_j
p_j = (−1)^j √2 Im u_{k−j+2}

3. Set

A1 = (a¹_{ij}) with a¹_{ij} = 1 if j = i − 1, and 0 otherwise
A2 = (a²_{ij}) with a²_{ij} = (−1)^i ν if j = k − i + 2, and 0 otherwise
A3 = (a³_{ij}) with a³_{ij} = (−1)^{i+1} ν if j = k − i + 2, and 0 otherwise
A4 = (a⁴_{ij}) with a⁴_{ij} = −1 if j = i + 1, and 0 otherwise


Proc4 (e,k)

1. For i = 1 to k + 1 set

u_i = −B^{i−1} e

2. For j = 1 to k + 1 set

q_j = √2 Im u_j
p_j = (−1)^j √2 Re u_{k−j+2}

3. Set

A1 = (a¹_{ij}) with a¹_{ij} = 1 if j = i − 1, and 0 otherwise
A2 = (a²_{ij}) with a²_{ij} = (−1)^{i+1} ν if j = k − i + 2, and 0 otherwise
A3 = (a³_{ij}) with a³_{ij} = (−1)^i ν if j = k − i + 2, and 0 otherwise
A4 = (a⁴_{ij}) with a⁴_{ij} = −1 if j = i + 1, and 0 otherwise

Given the value of ν, all the steps in this program can be carried out, and for some of the steps, for example the finding of a Jordan family, there are routines already written in certain computer algebra systems such as MAPLE.

10. Bruno's canonical form

There is another canonical form which we shall now consider briefly. Take a matrix of the following form:

A = [ A1  A2 ]
    [ A3  A4 ]

with

A1 = A4 = 0

(the displayed forms of the blocks A2 and A3 are not reproduced here). This matrix is clearly a real Hamiltonian matrix.


Write A = B + C, where B is the antidiagonal matrix whose antidiagonal elements are those of A. It is easy to check that C is nilpotent and that B and C commute. It is also easy to show that the eigenvalues of B are ±iν, the dimension of each eigenspace being in each case half the order of the matrix B. Thus B is diagonalizable, and it follows that the eigenvalues of A are those of B with the same multiplicities. With some elementary calculations we can obtain the complex Jordan form of A, which can be written as the direct sum of one Jordan block

[ iν   1           ]
[      iν   ⋱      ]
[           ⋱   1  ]
[               iν ]

and one Jordan block of the same order with −iν on the diagonal. Now consider A as the matrix of a Hamiltonian operator A on a real symplectic space V. Using the same argument as that used in the proof of Theorem 3a, we deduce that the space is indecomposable. This implies that V has a symplectic basis in which A has one of the four matrix representations given in Section 8 and so is symplectically conjugate to one of these matrices. We can apply a similar analysis to a matrix A′ having the same form as A, with the exception that + signs replace − signs and vice versa.

The matrices A and A′ are not symplectically conjugate. This can be seen in the following way. Suppose that there exists a symplectic matrix T such that A′ = T^{−1} A T, and write A′ = B′ + C′, where the decomposition is of the same form as that for A above. Then T^{−1} B T is diagonalizable, T^{−1} C T is nilpotent and these matrices commute. It follows that

T^{−1} B T = B′,  T^{−1} C T = C′

Let us now write T in block form:

T = [ U  V ]
    [ W  Z ]

where T is 2n × 2n and the blocks U, V, W, Z are n × n. A simple calculation gives

C T = ν [ −W  −Z ]
        [  U   V ]

and

T C′ = ν [ −V  U ]
         [ −Z  W ]

This implies that

V = W,  U = −Z

However, as T is symplectic we must have

U^t Z − V^t W = I_n

and so

−U^t U − V^t V = I_n

which is not possible, because tr(U^t U + V^t V) ≥ 0 while tr(I_n) = n > 0. Therefore, the matrices A and A′ cannot be symplectically conjugate.

It is natural to ask to which one of the four matrices of Section 8 the matrix A or A′ is symplectically conjugate. First, we see whether its order is twice an even number or twice an odd number. In the former case we must choose between the two matrices found for Case 1 (k odd) and, in the latter, between the two matrices found for Case 2 (k even). We now have a choice of two matrices. We shall not go into theoretical considerations here; suffice it to say that (A1, A2) is a corresponding pair if and only if tr(JA1) = tr(JA2).
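At the smallest size, the quantity tr(JA) already separates the sign-swapped pair; a numeric illustration of our own (for these particular matrices only, not a claim about general symplectic invariance):

```python
# Our own illustration of the matching quantity tr(J A) on the smallest
# sign-swapped pair: A = [[0,-nu],[nu,0]] versus A' = [[0,nu],[-nu,0]].
nu = 2.0
J = [[0.0, 1.0], [-1.0, 0.0]]
A = [[0.0, -nu], [nu, 0.0]]
Aprime = [[0.0, nu], [-nu, 0.0]]

def tr_JX(X):
    # trace of the product J X
    return sum(sum(J[i][k] * X[k][i] for k in range(2)) for i in range(2))

assert tr_JX(A) == 2 * nu        # tr(J A)  = +2*nu
assert tr_JX(Aprime) == -2 * nu  # tr(J A') = -2*nu: the pair is distinguished
print("tr(JA) =", tr_JX(A), " tr(JA') =", tr_JX(Aprime))
```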

The above discussion gives us another real canonical form of a Hamiltonian matrix. It has a block-diagonal structure (the displayed matrices are not reproduced here) where

A_{i1} = A_{i4} = 0

or the negatives of these blocks. This is the Bruno canonical form. It is also easy to find this form from the Burgoyne and Cushman form: we just take each matrix

A_i = [ A_{i1}  A_{i2} ]
      [ A_{i3}  A_{i4} ]

in the Burgoyne and Cushman form and replace it by the matrix A or A′ above to which it corresponds. This will give the A_{ij}'s of the Bruno form.

11. Conclusion

In this article, we have considered an approach to the construction of canonical forms of Hamiltonian matrices whose spectrum is composed of a single imaginary pair. Clearly, this is only part of a much larger problem, namely that of finding a method of calculating canonical forms of general Hamiltonian matrices. However, we should point out that the case treated here is one of the most difficult and interesting ones. Moreover, the method developed here could be used as one of the building blocks for a general algorithm.

Acknowledgements

I would like to thank Professors A. Mikhalev Sr. and A. Panchishkin for their encouragement and helpful comments.

Appendix

Here, we shall give a proof of Theorem 2. We consider the symplectic spaces complexified if necessary. We shall first prove three lemmas.

Lemma 1 If (V, ω) is a symplectic space, A is a Hamiltonian operator on V and α, β are eigenvalues of A such that α + β ≠ 0, then

x ∈ eig(α), y ∈ eig(β) ⇒ ω(x, y) = 0

Proof We have

α ω(x, y) = ω(Ax, y) = −ω(x, Ay) = −β ω(x, y) ⇒ (α + β) ω(x, y) = 0

Hence the result. □
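The middle equality ω(Ax, y) = −ω(x, Ay), which follows from A^t J = −JA, is easy to check numerically; a small sketch of our own:

```python
# Our own numeric check of the identity used in the proof: for a Hamiltonian
# matrix A, omega(Ax, y) = -omega(x, Ay).
J = [[0.0, 1.0], [-1.0, 0.0]]
A = [[1.0, 2.0], [3.0, -1.0]]      # J A = [[3,-1],[-1,-2]] is symmetric, so A is Hamiltonian

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def omega(x, y):
    return sum(x[i] * J[i][j] * y[j] for i in range(2) for j in range(2))

x, y = [1.0, 2.0], [-3.0, 5.0]
assert omega(matvec(A, x), y) == -omega(x, matvec(A, y))
print("omega(Ax, y) = -omega(x, Ay) verified")
```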

Lemma 2 If V is a vector space, A is an operator on V and λ ∈ K, then

∀i: ker(A − λ id_V)^i = ker(−A + λ id_V)^i

Proof We have

x ∈ ker(A − λ id_V)^i ⇔ (−1)^i (−A + λ id_V)^i x = 0 ⇒ x ∈ ker(−A + λ id_V)^i

Hence the result. □

Lemma 3 If (V, ω) is a symplectic space, A is a Hamiltonian operator on V and λ ∈ K, then

∀s ∈ N*, ∀x, y ∈ V: ω(x, (A − λ id_V)^s (y)) = (−1)^s ω((A + λ id_V)^s (x), y)

Proof For s = 1 we have

ω(x, (A − λ id_V)(y)) = ω(x, A(y)) − λ ω(x, y)
                      = −ω(A(x), y) − ω(λ id_V (x), y)
                      = −ω((A + λ id_V)(x), y)

If s > 1, we replace y by (A − λ id_V)^{s−1}(y) in the argument to obtain

ω(x, (A − λ id_V)^s (y)) = −ω((A + λ id_V)(x), (A − λ id_V)^{s−1}(y))

and then repeat the argument s − 1 times to obtain the required result. □
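The lemma is also easy to verify numerically; a sketch of our own for s = 2, where the sign (−1)^s is +1:

```python
# Our own numeric check of Lemma 3 for s = 2:
# omega(x, (A - t I)^2 y) = omega((A + t I)^2 x, y) for a Hamiltonian A.
J = [[0.0, 1.0], [-1.0, 0.0]]
A = [[1.0, 2.0], [3.0, -1.0]]      # Hamiltonian: J A is symmetric
t = 0.5

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def omega(x, y):
    return sum(x[i] * J[i][j] * y[j] for i in range(2) for j in range(2))

def shift(v, c):
    # apply (A + c I) to v
    Av = matvec(A, v)
    return [Av[i] + c * v[i] for i in range(2)]

x, y = [1.0, -2.0], [4.0, 3.0]
lhs = omega(x, shift(shift(y, -t), -t))   # omega(x, (A - tI)^2 y)
rhs = omega(shift(shift(x, t), t), y)     # (-1)^2 omega((A + tI)^2 x, y)
assert abs(lhs - rhs) < 1e-9
print("Lemma 3 identity holds for s = 2")
```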

Now we shall state and prove the theorem.

Theorem 2 If (V, ω) is a symplectic space, A is a Hamiltonian operator on V and α, β are eigenvalues of A such that α + β ≠ 0, then

x ∈ gen(α), y ∈ gen(β) ⇒ ω(x, y) = 0

Proof We shall show, using an induction argument, that

∀i_1, i_2: x ∈ ker(A − α id_V)^{i_1}, y ∈ ker(A − β id_V)^{i_2} ⇒ ω(x, y) = 0

We have already proved the result for the case i_1 = i_2 = 1 (Lemma 1). Now, let us suppose that we have proved the result for all i_1, i_2 belonging to the set {1, ..., s}. We shall prove successively that

x ∈ ker(A − α id_V)^{s+1}, y ∈ ker(A − β id_V)^s ⇒ ω(x, y) = 0

and

x ∈ ker(A − α id_V)^{s+1}, y ∈ ker(A − β id_V)^{s+1} ⇒ ω(x, y) = 0

which is sufficient to prove the result for i_1, i_2 ∈ {1, ..., s + 1}. Let x ∈ ker(A − α id_V)^{s+1} and y ∈ ker(A − β id_V)^s; then


0 = ω(x, (A − β id_V)^s (y)) = ω(x, ((A + α id_V) + (−α − β) id_V)^s (y))
  = ω(x, Σ_{i=0}^{s} C_s^i (−α − β)^{s−i} (A + α id_V)^i (y))
  = Σ_{i=0}^{s} C_s^i (−α − β)^{s−i} ω(x, (A + α id_V)^i (y))
  = Σ_{i=0}^{s} C_s^i (−α − β)^{s−i} (−1)^i ω((A − α id_V)^i (x), y)   (Lemma 3)
  = Σ_{i=0}^{s} C_s^i (−α − β)^{s−i} ω((−A + α id_V)^i (x), y)

According to Lemma 2, x ∈ ker(−A + α id_V)^{s+1}, so we can write

∀i ∈ {1, ..., s}: 0 = (−A + α id_V)^{s+1}(x) = (−A + α id_V)^{s+1−i} (−A + α id_V)^i (x)

Therefore,

∀i ∈ {1, ..., s}: (−A + α id_V)^i (x) ∈ ker(−A + α id_V)^{s+1−i}

By the induction hypothesis we have

∀i ∈ {1, ..., s}: ω((−A + α id_V)^i (x), y) = 0

because, according to Lemma 2, ker(−A + α id_V)^{s+1−i} = ker(A − α id_V)^{s+1−i}. Hence the expression

0 = Σ_{i=0}^{s} C_s^i (−α − β)^{s−i} ω((−A + α id_V)^i (x), y)

becomes

0 = (−α − β)^s ω(x, y)

As α + β ≠ 0, we have our first result. Now let us turn to the second result. Let x ∈ ker(A − α id_V)^{s+1} and y ∈ ker(A − β id_V)^{s+1}; then, using an argument similar to that used in the first part, we obtain

0 = Σ_{i=0}^{s+1} C_{s+1}^i (−α − β)^{s+1−i} ω((−A + α id_V)^i (x), y)

As before, by the induction hypothesis we have

∀i ∈ {1, ..., s}: ω((−A + α id_V)^i (x), y) = 0

which allows us to write

0 = (−α − β)^{s+1} ω(x, y) + ω((−A + α id_V)^{s+1}(x), y)

Because x ∈ ker(A − α id_V)^{s+1} = ker(−A + α id_V)^{s+1}, the second term on the right-hand side vanishes, and it now easily follows that ω(x, y) = 0. □


References

[1] A.D. Bruno, The normal form of a Hamiltonian system, Russian Math. Surveys 43(1) (1988) 25–66.
[2] N. Burgoyne, R. Cushman, Normal forms for real linear Hamiltonian systems with purely imaginary eigenvalues, Celest. Mech. 8 (1974) 435–443.
[3] N. Burgoyne, R. Cushman, Normal forms for real Hamiltonian systems, in: The Ames Research Center (NASA) Conference on Geometric Control Theory, 1976.
[4] N. Burgoyne, R. Cushman, Conjugacy classes in linear groups, J. Algebra 44 (1977) 339–362.
[5] H. Koçak, Normal forms and versal deformations of linear Hamiltonian systems, J. Diff. Eq. 51 (1984) 359–407.
[6] A. Laub, K.R. Meyer, Canonical forms for symplectic and Hamiltonian matrices, Celest. Mech. 9 (1974) 213–238.
[7] K.R. Meyer, G.R. Hall, Introduction to Hamiltonian Dynamical Systems and the N-Body Problem, Springer, 1992.
[8] R. Coleman, Constructing real canonical forms of nilpotent Hamiltonian matrices, Technical Report 142, LMC-IMAG, October 1995.
[9] J. Williamson, On the algebraic problem concerning the normal forms of linear dynamical systems, Am. J. Math. 58 (1936) 599–617.
