
INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

Bell & Howell Information and Learning
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
800-521-0600


The geometry of Gaussian rotation space random fields

Khalil Shafie H.

Department of Mathematics and Statistics

McGill University, Montreal

September, 1998

A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements of the degree of Doctor of Philosophy

© Khalil Shafie H. 1998

National Library of Canada / Bibliothèque nationale du Canada

Acquisitions and Bibliographic Services / Acquisitions et services bibliographiques

395 Wellington Street, Ottawa ON K1A 0N4, Canada

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.


In Memory of

My brother Yacob

Theorem 7, the expected EC for the weighted rotation space in Chapter 3, and the MAPLE code in Appendix B are the main original results in this thesis.

Abstract

In recent years, very detailed images of the brain, produced by modern sensor technologies, have given the neuroscientist the opportunity to study the functional activation of the brain under different conditions. The main statistical problem is to locate the isolated regions of the brain where activation has occurred (the signal), and to separate them from the rest of the brain where no activation can be detected (the noise). To do this the images are often spatially smoothed before analysis by convolution with a filter f(t) to enhance the signal to noise ratio, where t is a location vector in N-dimensional space. The motivation for this comes from the Matched Filter Theorem of signal processing, which states that a signal added to white noise is best detected by smoothing with a filter whose shape matches that of the signal. The problem is that the scale of the signal is usually unknown. It is natural to consider searching over filter scale as well as location, that is, to use a filter $\sigma^{-N/2} f(t/\sigma)$ with scale σ varying over a predetermined interval $[\sigma_1, \sigma_2]$. This adds an extra dimension to the search space, called scale space (see Poline and Mazoyer, 1994). Siegmund and Worsley (1995) establish the relation between searching over scale space and the problem of testing for a signal with unknown location and scale, and find the approximate P-value of the maximum of the scale-space filtered image using the expected Euler characteristic of the excursion set. In this thesis we study the extension of the scale space result to rotating filters of the form $|S|^{-1/4} f(S^{-1/2} t)$, where S is now an N × N positive definite symmetric matrix that rotates and scales the axes of the filter.

Thesis Supervisor: K. J. Worsley

Title: Professor

Résumé

In recent years, more detailed images of the brain, produced by modern technology, have given neuroscientists an opportunity to study the functional activation of the brain under different conditions. The main statistical problem is to locate the isolated region of the brain where activation has occurred (the signal), and to separate this region from the rest of the brain where no activation can be detected (the noise). To do this, the N-dimensional images are often spatially smoothed before analysis by convolution with a filter f(t) to enhance the signal to noise ratio, where t is a location vector in N-dimensional space. This approach is motivated by the Matched Filter Theorem of signal processing, which states that adequate detection of a signal to which white noise has been added is obtained with a filter whose shape matches that of the signal. The problem is that the scale of the signal is generally unknown. It is natural to consider filters varying in scale as well as in location, that is, to use a filter $\sigma^{-N/2} f(t/\sigma)$ whose scale σ varies over a predetermined interval $[\sigma_1, \sigma_2]$. This adds an extra dimension to the search space, called scale space (see Poline and Mazoyer, 1994). Siegmund and Worsley (1995) establish a relation between the search over scale space and the problem of testing for a signal with unknown location and scale, and use the expected Euler characteristic to find the approximate P-value of the maximum over scale space. In this thesis we study the extension of scale space to rotating filters of the form $|S|^{-1/4} f(S^{-1/2} t)$, where S is an N × N symmetric positive definite matrix that applies a rotation and a change of scale.

Thesis Supervisor: K. J. Worsley

Title: Professor

Acknowledgements

I would like to express my sincere gratitude to my supervisor, Professor Keith J. Worsley, for his continuous guidance, encouragement, kindness, and patience during the course of my Ph.D. program, and for financial support during the last year of my study.

I also wish to thank the Ministry of Higher Education of the Islamic Republic of Iran for granting me a Ph.D. scholarship.

I would like to thank the McConnell Brain Imaging Centre, Montreal Neurological Institute, for permission to use their data.

A special thanks also goes to Professor David Wolfson for his advice and encouragement since I arrived at McGill University.

My sincere thanks are also due to all the people at the Department of Mathematics and Statistics who helped me in any way in my work. I am especially grateful to J. Cao, M. Asgharian, D. Alexander, and J. Mashreghi for discussions on some aspects of the work.

I am also grateful to M. Pouryayevali, who helped me with the French translation of the abstract, reviewed the thesis and provided comments.

I also express my deepest gratitude to my parents, brothers and sisters, whose love and support have always been a constant source of my enthusiasm to study.

Finally, but not least, I express my deepest appreciation to my family: my dear wife Batoul, dear daughter Marzieh, and sweeties Laya and Mohammad-Moein, for their constant support, encouragement, patience and the sacrifices they made during my studies.

Contents

0 Introduction

1 Preliminaries
1.1 Random fields
1.2 Geometry of the excursion set
1.3 Point set representations and the expectations of the characteristics
1.4 Scale space random fields
1.5 The expected HC for the scale space random field

2 Rotation space random fields
2.1 Rotation kernel
2.2 Restricted rotation parameter
2.3 Expectation of the EC
2.4 Contribution of C° × Q° to the expected EC
2.5 Contribution of ∂C × Q° to the expected EC
2.6 Contribution of C° × ∂Q to the expected EC
2.6.1 Con(C° × B₁)
2.6.2 Con(C° × B₂)
2.6.3 Con(C° × L)
2.7 Contribution of ∂C × ∂Q to the expected EC
2.7.1 Con(∂C × B₁)
2.7.2 Con(∂C × B₂)
2.7.3 Con(∂C × L)
2.8 Contribution of C × P to the expected EC

3 Weighted rotation space random field
3.1 Definition
3.2 Expectation of the EC
3.3 Weighted scale space

4 Application
4.1 fMRI technique
4.2 Application to fMRI data
4.3 Validation

A Figures

B Maple code
B.1 The main procedures
B.2 Calculation of the expectations in Chapter 4
B.2.1 Contribution of (C × Q)°
B.2.2 Contribution of ∂C × Q°
B.2.3 Contribution of C × ∂Q
B.3 With unit weight

List of Figures

A.1 The original rotation parameter space Q', together with an example of the excursion set at a single pixel.
A.2 The rotation parameter space Q and the partition of its boundary, together with an example of the excursion set at a single pixel.
A.3 Expectation of the EC when C is a circle of radius 50 and [λ₁, λ₂] = [2, 50].
A.4 Comparison of critical values at the 0.05 level for a circle C with λ₁ = 2 and λ₂ = 50.
A.5 The stimulus, the response at one pixel and lagged cosine waves.
A.6 The sine and cosine components of the fMRI data.
A.7 The sine component smoothed with different values of m and ρ. The value of l is fixed at 162.30. The lower right image is the smoothed image with the maximizing filter.
A.8 The cosine component smoothed with different values of l and ρ. The value of m is fixed at 6.49. The middle image is the smoothed image with the maximizing filter.
A.9 Left: the cosine component along with a contour of the maximizing filter. Right: the excursion set above the critical value, which covers the visual cortex.
A.10 Convolution of simulated noise with the maximizing filter. No signal is detected.
A.11 Top: a Gaussian signal added to the Gaussian noise. Bottom: the resulting image smoothed with the maximizing filter. Note that the added signal is detected close to its true location.
A.12 Top: two Gaussian signals added to the Gaussian noise. Bottom: the resulting image smoothed with the maximizing filter. Note that the two separate signals are easily detected.
A.13 Top: two close Gaussian signals added to the simulated noise. Bottom: the resulting image smoothed with the maximizing filter. Note that the two separate signals are detected as a single merged signal.
A.14 True signals and the excursion sets above 5.25 at s = s_max for the smoothed noise plus signals; top: for the far away signals, bottom: for the close signals.

Chapter 0

Introduction

It is said that the object of statistics is information and its objective is the understanding of information in data. Modern sensor technologies have, in recent years, produced very detailed and informative images, many extremely complex. A few of these advanced technologies for collecting data are PET, MRI and satellite imaging methods. These techniques have changed the nature of the observations and of the parameter of interest in statistical inference from finite dimensional spaces of vectors and matrices to abstract infinite dimensional spaces of sets, multivariate functions and algebraic structures, so that the term "abstract inference" no longer has meaning only for theoretical statistics. The complexity of these types of data has confronted the researcher with challenging problems in achieving the objective of statistics. The theoretical aspects of statistical inference for abstract data and parameters have been studied for years. A treatment of this for linear spaces can be found in Grenander (1981) and for spaces of sets in Matheron (1975).

In analysing this new type of data, the need for the creation of new methods and techniques often leads the researcher to the tools and concepts of other areas of mathematics. One of these areas of mathematics that has been used in statistics is differential geometry. The use of differential geometric techniques in a rigorous fashion in statistical inference goes back to Rao (1945), who introduced a Riemannian metric based on the Fisher information matrix on the space of a parametric family of distributions and proposed the distance induced by the metric as a measure of dissimilarity between distributions. This line of applying differential geometrical techniques continues with the work of Barndorff-Nielsen (1988) and Amari (1985), among others. In a different context, Kendall (1993) uses geometric tools for the analysis of shape data.

Another use of differential geometrical tools was opened by the work of Adler and Hasofer (1976) and Adler (1981). This work was later used and extended by Worsley (1994) in statistical inference for images of the brain produced by PET and MRI imaging techniques. Let X(t), t = (t₁, ..., t_N)' ∈ C ⊂ ℝ^N, be a multidimensional random function or random field, and let A_x = {t ∈ C : X(t) ≥ x} be the excursion set above the threshold x. Adler and Hasofer studied the geometry of the excursion sets A_x. In search of an analogue to the number of upcrossings of a one-dimensional random function or stochastic process, they introduced some characteristics of the excursion set related to the Euler or Euler–Poincaré characteristic. In recent years, the expectations of these characteristics, mainly the Euler characteristic, have found two main applications.

The first application uses the expected Euler characteristic for assessing models for the creation of large-scale structures in astrophysics. The Euler characteristic of the galaxy density of the observed universe is compared to that expected under the model, and the two are plotted.

The second application comes from statistical testing of the mean function of a random field X(t) on a subset C of ℝ^N. The maximum X_max of X(t) over all t in C is used to test that the mean of X(t) is zero, against the alternative that it is greater than zero in a small localised region. There are no exact results for the null distribution of X_max. Some approximation results are discussed in Adler (1990). One of these uses the expected Euler characteristic of the excursion set A_x. The heuristic argument for this is as follows. If the threshold is high, the Euler characteristic counts the number of connected components of the excursion set, which approximates the number of local maxima. For very high thresholds, near the global maximum X_max, the Euler characteristic is 1 if X_max ≥ x and zero otherwise. Thus the expected Euler characteristic E[χ(A_x)] of the excursion set approximates the P-value of X_max (see Hasofer, 1978):

$$ P\{X_{\max} \geq x\} \approx E[\chi(A_x)]. \qquad (0.1) $$
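To make the heuristic concrete, the following short sketch (my own illustration, not part of the thesis; it assumes a pixel grid and a simple cubical complex built on the pixels above threshold) computes the Euler characteristic of the excursion set of a smoothed, approximately Gaussian noise image at a few thresholds; at high thresholds the value approaches the number of local maxima above the threshold.

```python
import numpy as np

def euler_char(mask):
    """EC of a binary 2-D excursion set: chi = V - E + F, where V counts pixels
    above threshold, E counts 4-adjacent pairs of such pixels, and F counts
    2x2 blocks entirely above threshold."""
    m = mask.astype(bool)
    v = m.sum()
    e = (m[:, :-1] & m[:, 1:]).sum() + (m[:-1, :] & m[1:, :]).sum()
    f = (m[:-1, :-1] & m[:-1, 1:] & m[1:, :-1] & m[1:, 1:]).sum()
    return int(v - e + f)

rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))
u = np.arange(-20, 21)
g = np.exp(-u**2 / (2 * 5.0**2))                 # 1-D Gaussian filter, sd = 5 pixels
smooth = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, noise)
smooth = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, smooth)
field = (smooth - smooth.mean()) / smooth.std()  # approx. zero-mean, unit-variance field
for x in (1.0, 2.0, 3.0):
    print(x, euler_char(field >= x))             # EC of the excursion set A_x
```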

The expected Euler characteristic has been obtained for some random fields by Adler (1981), Worsley (1994) and Cao (1997). In this thesis we will obtain this expectation for the rotation space random field. The structure of the thesis is as follows.

In Chapter 1 we will give preliminary definitions and previous work on expected Euler characteristics, and also review the scale space random field. The main work of the thesis is in Chapter 2, which starts with the definition and usage of rotation space random fields and ends with the expected Euler characteristic for this random field. In Chapter 3 we will extend this result to the weighted rotation space random field. In Chapter 4, after some review of the fMRI technique, we will apply the rotation space methods to a real fMRI data set. To obtain the expected Euler characteristic for the rotation space random field, the algebra is so complicated that it cannot be done by hand, so we make extensive use of the computer algebra package MAPLE. Without this code it would not be easy to check the result, so the MAPLE code for checking is given in Appendix B.

Chapter 1

Preliminaries

In this chapter the main definitions and preliminary results for the next chapters will be discussed, and we will give a review of previous work on the statistical analysis of random fields. In Section 1.1 the random field and various basic related concepts will be defined and explained. The Euler and Hadwiger characteristics of excursion sets and their point set representations are discussed in Section 1.2. The expectation of these characteristics is the subject of Section 1.3. In Section 1.4 we will give a review of scale space random fields and their usage for detecting a signal which has the form of a known function centered at an unknown location and multiplied by an unknown amplitude. In Section 1.5 the expected Hadwiger characteristic in the 2-D case, for use in the next chapter, is recalled from Siegmund and Worsley (1995).

1.1 Random fields

The generalisation of time series, as random functions of one variable, to functions of several variables is the core idea of random fields. The precise measure theoretic definition of random fields is a little involved, so we take the probabilistic approach, which is more natural for statistical modeling of random functions of several variables.

Let a probability space (Ω, F, P) and a parameter set C, a subset of ℝ^N, be given. An N-dimensional random field on C is a function such that for every fixed t, X(t, ω) is a random variable on (Ω, F, P). Thus, from a mathematical point of view, a random field on C will be defined by a function X(t, ω) of two variables t and ω, the domains of which are respectively the sets C and Ω. When there is no confusion, we shall omit the dependency on ω and write X(t) instead of X(t, ω). For a given ω ∈ Ω, X(t) is simply a deterministic function on C and is referred to as a realization of the field X. The set {(t, X(t)), t ∈ C} is called a sample function, or sample path, of X. A generalized form of a random field is a vector valued random field, which is considered as a vector valued function of t and ω.

For an arbitrary finite set of values of t, say t₁, ..., tₙ, the corresponding random variables X(t₁), ..., X(tₙ) will have a joint n-dimensional distribution with distribution function

$$ F_{t_1, \ldots, t_n}(x_1, \ldots, x_n) = P\{X(t_1) \leq x_1, \ldots, X(t_n) \leq x_n\}. $$

The family of all these joint distributions for n = 1, 2, ... and all possible values of the tᵢ is known as the family of finite-dimensional distributions for the field X.

Very often we know the family of finite dimensional distributions of a random field and want to know as much as possible about the probability distribution in sample function space. A celebrated theorem of Kolmogorov ensures that, under very general conditions, knowledge of the finite-dimensional distributions is sufficient and necessary for the existence of a random field with the given finite-dimensional distributions.

The sample path behaviour of random fields, which is of main concern in dealing with random fields, is not necessarily determined completely by the finite dimensional distributions. To ensure that the finite dimensional distributions determine the sample path behaviour of the field we need the separability assumption. This concept requires that sample function properties are determined by their values on a countable everywhere dense subset of the parameter space. A theorem of Doob (1953) ensures that to every random field there corresponds an equivalent separable random field. So we shall assume, without further stating it, that we are dealing only with separable fields.

The mean function and covariance function of a random field are defined as

$$ m(t) = E[X(t)], \qquad R(s, t) = \mathrm{Cov}[X(s), X(t)] = E\{[X(s) - m(s)][X(t) - m(t)]\}. $$

As in the theory of stochastic processes, a central concept in the study of random fields is stationarity. A random field on ℝ^N is stationary or homogeneous if its finite dimensional distributions are invariant under translations in the parameter t. As a result of stationarity, the mean function will be a constant and the covariance function will be a function of s − t. An interesting class of stationary random fields is the class of isotropic fields. A stationary random field is said to be isotropic if its covariance function depends only on the length ||t − s|| of the vector t − s.

An important class of random fields is the class of Gaussian fields. A Gaussian random field is a random field whose finite dimensional distributions are all multivariate Gaussian (normal) distributions. The multivariate Gaussian distribution is specified by its mean vector and covariance matrix, so a Gaussian random field is determined by its mean and covariance functions. The Brownian sheet, a generalisation of Brownian motion, is a special Gaussian random field with zero mean and covariance function given by

$$ R(s, t) = \prod_{i=1}^{N} \min(s_i, t_i). $$
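As a quick numerical illustration (my own sketch, not part of the thesis), the Brownian sheet can be simulated on a grid by cumulatively summing independent Gaussian increments whose variance equals the cell area; the empirical covariance then matches the product-of-minima formula above.

```python
import numpy as np

# Minimal sketch: simulate the Brownian sheet on [0,1]^2 and check its covariance.
rng = np.random.default_rng(1)
n, reps = 50, 2000
dx = 1.0 / n
grid = np.arange(1, n + 1) * dx
# Increments over each dx-by-dx cell are N(0, dx^2); cumulative sums give B(t).
B = np.cumsum(np.cumsum(rng.standard_normal((reps, n, n)) * dx, axis=1), axis=2)

i, j = (9, 29), (24, 14)  # two grid points s and t (array indices)
empirical = np.mean(B[:, i[0], i[1]] * B[:, j[0], j[1]])
theoretical = min(grid[i[0]], grid[j[0]]) * min(grid[i[1]], grid[j[1]])
print(empirical, theoretical)  # agree up to Monte Carlo error
```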

As in random variable distribution theory, we can use Gaussian fields to build up new random fields. Let X₁(t), ..., Xₙ(t), t ∈ ℝ^N, be independent, identically distributed Gaussian fields with zero mean and unit variance. Then the χ² field U(t) with n degrees of freedom is defined as (see Adler, 1981, page 169)

$$ U(t) = \sum_{i=1}^{n} X_i(t)^2, \qquad t \in \mathbb{R}^N. $$

At each fixed t the distribution of U(t) is χ² with n degrees of freedom. Similarly, we can define F, t and Hotelling's T² random fields (see Worsley, 1994; Cao, 1997).
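The construction is easy to mimic numerically. The sketch below (my own illustration; the Gaussian component fields are approximated by convolving white noise with a Gaussian kernel and standardizing) builds a χ² field with n degrees of freedom by summing the squares of independent component fields.

```python
import numpy as np

def smooth_gaussian_field(shape, sd, rng):
    """Approximate a zero-mean, unit-variance smooth Gaussian field by
    convolving white noise with a 1-D Gaussian kernel along each axis."""
    u = np.arange(-4 * int(sd), 4 * int(sd) + 1)
    g = np.exp(-u**2 / (2 * sd**2))
    z = rng.standard_normal(shape)
    z = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, z)
    z = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, z)
    return (z - z.mean()) / z.std()

rng = np.random.default_rng(2)
n = 4                                   # degrees of freedom
components = [smooth_gaussian_field((128, 128), sd=4.0, rng=rng) for _ in range(n)]
U = sum(x**2 for x in components)       # chi-squared field with n degrees of freedom
print(U.mean(), U.var())                # pointwise mean ~ n, variance ~ 2n
```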

Analytic properties such as continuity and differentiability for random fields can be defined in different ways. The type of analytic property in which we are interested is sample path-wise. A random field X(t) is called almost everywhere continuous at a point t if, with probability one, sample paths are continuous at t. We say X is almost everywhere continuous in a set A if it is almost everywhere continuous at every point of A.

If Ẋ_j and Ẍ_jk denote the partial derivatives ∂X/∂t_j and ∂²X/∂t_j∂t_k, respectively, of a function of t = (t₁, ..., t_N), we say that the random field X is almost everywhere differentiable at a point t in the t_j direction if with probability one Ẋ_j(t) exists. We can continue further, defining second order differentiability if Ẍ_jk(t) exists almost everywhere. Higher order derivatives are defined in a similar manner. The conditions for having smooth sample paths are discussed in Yadrenko (1971) and Adler (1981, Chapter 3). We shall assume continuity and second order differentiability for the random fields.

For a random field X(t) in N-dimensional space, define the (N + 1)(N + 2)/2 vector valued random field

$$ Y(t) = [X(t), \dot X_1(t), \ldots, \dot X_N(t), \ddot X_{11}(t), \ldots, \ddot X_{NN}(t)]'. $$

For a Gaussian random field, Y(t) is a vector valued Gaussian random field (see Adler, 1981, page 32). The elements of the covariance matrix of the vector Y(t), which will be needed later on, can be found using the following general result.

Lemma 1 Let X(t), t = (t₁, ..., t_N), be an N-dimensional random field. Suppose the partial derivatives ∂^{i+j}X(s)/∂s_k^i ∂s_l^j and ∂^{p+q}X(u)/∂u_m^p ∂u_n^q exist. Then

$$ \mathrm{Cov}\!\left[\frac{\partial^{\,i+j} X(s)}{\partial s_k^i\, \partial s_l^j},\; \frac{\partial^{\,p+q} X(u)}{\partial u_m^p\, \partial u_n^q}\right] = \frac{\partial^{\,i+j+p+q} R(s, u)}{\partial s_k^i\, \partial s_l^j\, \partial u_m^p\, \partial u_n^q}, $$

where R(s, u) is the covariance function of X at s = (s₁, ..., s_N) and u = (u₁, ..., u_N).

We conclude this section with the definition of the excursion set, which is the main concern of this thesis. The excursion set of an N-dimensional random field X(t) above the level x in a subset C of ℝ^N is defined to be the set

$$ A_x = A_x(X, C) = \{t \in C : X(t) \geq x\}. $$

When there is no confusion, we denote A_x(X, C) simply by A_x. The excursion set and associated random variables are very important both in application and theory.

1.2 Geometry of the excursion set

The geometry of excursion sets of one dimensional random fields or stochastic processes has a very simple form, for such a set consists of single points or intervals. The number of these points and intervals is simply the number of connected components of the excursion set. This number is related to the number of up-crossings of the process, which was studied by Kac (1943) and Rice (1945). A full treatment of this relation for Gaussian processes can be found in Cramér and Leadbetter (1967).

The number of up-crossings of a random field of more than one dimension has no meaning, so the generalisation of this idea, relating the number of connected components of excursion sets in dimensions greater than one to some characteristic of the random field, is not easy. However, Swerling (1962), Bolotin (1969) and Nosko (1973) studied the expectation of the number of connected components of a two dimensional random field. They were not able to find this expectation, but they found an upper bound for it.

In search of geometric characteristics of excursion sets, through the concepts of integral geometry and differential topology, Adler and Hasofer (1976) and Adler (1981) found two characteristics: the Hadwiger characteristic (HC) and the Euler characteristic (EC). They used these two characteristics to define characteristics which are amenable to statistical investigation. The precise definitions of the HC and EC are as follows.

The HC ψ(A) is defined iteratively for a large class of sets A called basic complexes. A set of the form

$$ E = \{t \in \mathbb{R}^N : t_j = a_j,\ j \in J;\ -\infty < t_j < \infty,\ j \notin J\} $$

is called a k-plane of ℝ^N if J is an (N − k)-element subset of {1, ..., N} and the {a_j} are fixed numbers. A compact subset B of ℝ^N is called a basic if the intersections E ∩ B are simply connected for every k-plane E of ℝ^N, k = 1, ..., N. A set A is a basic complex if it can be represented as the union of a finite number of basics such that the intersection of any of these is again a basic.

For N = 1, let ψ(A) be the number of disjoint intervals in A. For N > 1, ψ(A) is defined iteratively through the changes in the (N − 1)-dimensional characteristic of the cross-sections A ∩ E_u as u varies, where E_u = {t ∈ C : t_N = u}.

In fact, the HC is the only functional on basic complexes with the two following properties. For all basics B,

$$ \psi(B) = \begin{cases} 1 & \text{if } B \neq \emptyset, \\ 0 & \text{otherwise;} \end{cases} $$

and if A, B, A ∩ B and A ∪ B are basic complexes then

$$ \psi(A \cup B) = \psi(A) + \psi(B) - \psi(A \cap B). \qquad (1.1) $$

A crucial step in deriving statistical properties of excursion characteristics is to obtain a point-set representation which expresses the characteristic in terms of local properties of the excursion set rather than global properties such as connectedness. Worsley (1995b) gives a point set representation for the HC of the excursion set, ψ(A_x(X, C)), in two and three dimensions and, using these representations, finds the expectation of ψ(A_x(X, C)). Based on the HC, Adler defines the IG (integral geometric) characteristic of the excursion set of a random field. The IG characteristic is a direct analogue of the number of up-crossings, and Adler gives a point set representation in two dimensions for it and finds its expectation.

The EC for a topological space A of dimension n is defined as

$$ \chi(A) = \sum_{j=0}^{n} (-1)^j \beta_j, $$

where β_j, j = 0, ..., n, are the Betti numbers of A. We avoid going through the definitions of these numbers, but to have some idea we just mention that β₀ and β₁ count the number of connected components and the number of handles respectively; for example, in two dimensions a disc has χ = 1 while an annulus has χ = 0.

This original definition of the EC is hard to tackle for statistical analysis. For compact subsets A of ℝ^N bounded by C² manifolds (regular domains), Morse's Theorem gives a nice representation of the EC using the critical points of any Morse function f on A. To give this representation we need to introduce some notation.

Suppose a compact subset A of ℝ^N has a C² boundary ∂A and f is a real-valued function of class C² on A. Let ḟ = ∂f/∂t be the gradient N-vector and f̈ = ∂²f/∂t∂t' be the N × N Hessian matrix of f. Let f|∂A denote the restriction of f to ∂A. Finally, let ḟ_⊥ be the gradient of f in the direction of the inside normal to ∂A. A critical point t will be called non-degenerate if the determinant of the Hessian matrix f̈ at t is non-zero. The function f defined on A is called a Morse function if f has no critical points on ∂A and all the critical points of f and f|∂A are non-degenerate. Adopting the notation advocated by Knuth (1992), where a logical expression in parentheses takes the value 1 if it is true and 0 otherwise, we now state Morse's Theorem.

Theorem 1 Let f be any Morse function on a regular C² domain A in ℝ^N; then

$$ \chi(A) = \sum_{t \in A^\circ:\ \dot f(t) = 0} \mathrm{sign}\!\left[\det(-\ddot f(t))\right] \; + \sum_{t \in \partial A:\ \dot{(f|_{\partial A})}(t) = 0,\ \dot f_\perp(t) < 0} \mathrm{sign}\!\left[\det\!\left(-\ddot{(f|_{\partial A})}(t)\right)\right]. \qquad (1.2) $$

1.3 Point set representations and the expectations of the characteristics

The excursion set of a random field might have very erratic behaviour, for example having infinitely many connected components or not being compact. To avoid this situation, and to make the excursion set meet the conditions in Morse's Theorem, some regularity conditions are imposed on the sample paths of the random field.

Let X = X(t), t = (t₁, ..., t_N) ∈ ℝ^N, be a random field in N dimensions, and let Ẋ_j = Ẋ_j(t) and Ẍ_jk = Ẍ_jk(t), j, k = 1, ..., N. Finally, let Ẋ = ∂X/∂t be the gradient N-vector and let Ẍ = ∂²X/∂t∂t' be the N × N Hessian matrix of X. The moduli of continuity of Ẋ_j and Ẍ_jk inside C are defined as

$$ \omega_j(h) = \sup_{\substack{s, t \in C \\ \|s - t\| \leq h}} |\dot X_j(s) - \dot X_j(t)|, \qquad \omega_{jk}(h) = \sup_{\substack{s, t \in C \\ \|s - t\| \leq h}} |\ddot X_{jk}(s) - \ddot X_{jk}(t)|, $$

where j, k = 1, ..., N.

To ensure that realizations of X(t) are sufficiently smooth, we shall require:

C1: for any ε > 0, P{ω(h) > ε} = o(h^N) as h ↓ 0, where ω(h) is the maximum over j, k of the moduli of continuity defined above;

C2: Ẍ has finite variance conditional on (X, Ẋ); and

C3: the density of (X, Ẋ) is bounded above, uniformly for all t ∈ C.

For smooth Gaussian fields C1–C3 are satisfied if the covariance matrix of X and its first two derivatives has a non-zero determinant (Adler, 1981, page 106).

Assuming conditions C1–C3 in the stationary case, Adler (1981, page 81) proves that the excursion set of X above any level x is a basic complex and also meets the conditions of Morse's theorem, that is, it is a regular domain, with probability one. Worsley (1995a) uses the random field X(t) as a Morse function to obtain a point set representation for the EC, which is a crucial step in deriving the expectation of the EC. This representation comes in the following theorem. As before, at a point t ∈ ∂C, let Ẋ_⊥ be the gradient of X in the direction of the inside normal to ∂C, let Ẋ_T be the gradient (N − 1)-vector in the tangent plane to ∂C, let Ẍ_T be the (N − 1) × (N − 1) Hessian matrix in the tangent plane to ∂C, and let c be the (N − 1) × (N − 1) inside curvature matrix of ∂C.

Theorem 2 Under conditions C1–C3,

$$ \chi(A_x) = \sum_{t \in C^\circ} (X \geq x)(\dot X = 0)\, \mathrm{sign}\!\left[\det(-\ddot X)\right] \; + \sum_{t \in \partial C} (X \geq x)(\dot X_T = 0)(\dot X_\perp < 0)\, \mathrm{sign}\!\left[\det(-\ddot X_T - \dot X_\perp c)\right] \qquad (1.3) $$

with probability one.

Using this representation, Worsley (1995a) obtains the expectation of the EC for a general random field in N dimensions satisfying conditions C1–C3.

Theorem 3 Let θ(·) be the density of Ẋ and let θ_T(·) be the density of Ẋ_T. Under conditions C1–C3,

$$ E[\chi(A_x)] = \int_C E\!\left[(X \geq x)\det(-\ddot X) \mid \dot X = 0\right] \theta(0)\, dt \; + \int_{\partial C} E\!\left[(X \geq x)(\dot X_\perp < 0)\det(-\ddot X_T - \dot X_\perp c) \mid \dot X_T = 0\right] \theta_T(0)\, dt. \qquad (1.4) $$

Adler chooses the Nth coordinate as the Morse function f rather than the random field itself and gives a similar point set representation. This choice of Morse function f has no critical point on the interior of A_x, so the first term of (1.2) is zero. However, f can have critical points on ∂A_x, so Morse's Theorem cannot be applied unless A_x ∩ ∂C is empty. Adler defines a characteristic called the DT characteristic, equal to the right hand side of Morse's Theorem with f equal to the Nth coordinate function and A = A_x. The DT characteristic is equal to the EC when the excursion set does not touch the boundary of C. Adler (1981, page 105) obtains the expectation of the DT characteristic under conditions C1–C3. This expectation is

$$ E[DT(A_x)] = \int_C E\!\left[\dot X_N^+ \det(-\ddot X_n) \mid X = x,\ \dot X_n = 0\right] \theta(x, 0)\, dt, \qquad (1.5) $$

where Ẋ_N = ∂X/∂t_N, Ẋ_N⁺ = max(Ẋ_N, 0), n = N − 1, Ẋ_n = (Ẋ_1, ..., Ẋ_n), Ẍ_n is the n × n matrix of the first n rows and columns of Ẍ, and θ(·, ·) is the joint density of (X, Ẋ_n). Let us denote the integrand of the first term of (1.4) by

$$ \rho(X, x) = E\!\left[(X \geq x)\det(-\ddot X) \mid \dot X = 0\right] \theta(0). \qquad (1.6) $$

The quantity ρ(X, x) is the rate or intensity of the EC of excursion sets of X above x, per unit volume. Similarly, the integrand of (1.5), given by

$$ \rho_{DT}(X, x) = E\!\left[\dot X_N^+ \det(-\ddot X_n) \mid X = x,\ \dot X_n = 0\right] \theta(x, 0), \qquad (1.7) $$

is the rate of the DT characteristic of excursion sets of X above x per unit volume.

The evaluation of ρ(X, x) and ρ_DT(X, x), and of the expectation in the integrand of the second term of (1.4), in the general case is a very complicated problem, for they depend on the determinants of random matrices arising from the second derivatives of a random field and of its restriction to the boundary of C. The general theory of random determinants discussed by Girko (1990) is not very helpful in simplifying these quantities. The expression (1.7) for ρ_DT(X, x) is easier to evaluate than (1.6) for ρ(X, x) because there are fewer variables involved in the expectation. Worsley (1995a) proves that ρ(X, x) is identical to ρ_DT(X, x) for a stationary random field X. The rate of the DT characteristic is fully derived for stationary Gaussian fields by Adler (1981), for t and F fields by Worsley (1994), and for Hotelling's T² fields by Cao (1997). When X(t) is a stationary zero-mean, unit variance Gaussian random field this rate is given by

$$ \rho(X, x) = \det(\Lambda)^{1/2} (2\pi)^{-(N+1)/2}\, \mathrm{He}_{N-1}(x)\, e^{-x^2/2}, \qquad (1.8) $$

where Λ = Var(Ẋ) and He_{N−1}(x) is the Hermite polynomial of degree N − 1 in x.
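As a quick numerical companion (my own sketch, not from the thesis; it assumes the rate as written in (1.8) with Λ the variance matrix of the gradient, and it ignores the boundary terms of (1.4)), the EC rate and the resulting P-value approximation E[χ(A_x)] ≈ |C| ρ(x) can be evaluated directly:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def ec_rate(x, N, Lambda):
    """Rate (1.8) of the EC per unit volume for a stationary, zero-mean,
    unit-variance Gaussian field; Lambda is the N x N variance matrix of the
    gradient, and He_{N-1} is the probabilists' Hermite polynomial."""
    he = hermeval(x, [0.0] * (N - 1) + [1.0])        # He_{N-1}(x)
    return (np.linalg.det(Lambda) ** 0.5
            * (2 * np.pi) ** (-(N + 1) / 2) * he * np.exp(-x**2 / 2))

N = 2
Lambda = np.eye(N) / 5.0**2        # e.g. a field smoothed with sd ~ 5 (assumption)
volume = 100.0 * 100.0             # |C| for a 100 x 100 search region
for x in (3.0, 4.0, 5.0):
    print(x, volume * ec_rate(x, N, Lambda))   # approximate P{X_max >= x}
```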

Therefore for stationary fields the expectation of the DT characteristic, and the first term of the expectation of the EC in (1.4), can be written as |C| ρ_DT(X, x), where |C| is the Lebesgue measure of C. It is hard to simplify the integrand of the second term of the expectation of the EC in (1.4) unless we make the further assumption that the field X is isotropic. The result of Theorem 2 simplifies for X an isotropic field. For 1 ≤ j ≤ N let X|_j(t₁, ..., t_j) = X(t₁, ..., t_j, 0, ..., 0) be the restriction of X to a j-dimensional Euclidean subspace and write ρ_j(x) = ρ(X|_j, x), and for j = 0 define ρ₀(x) = P(X ≥ x), so that ρ_j(x) is the rate of the EC in any j-dimensional Euclidean subspace of ℝ^N. For a square n × n matrix M let detr_j(M) be the sum of all j × j principal minors of M and define detr₀(M) = 1.

Theorem 4 If X(t) is isotropic, and the conditions C1–C3 hold, then E[χ(A_x(X, C))] can be expressed in terms of the rates ρ_j(x) and integrals over ∂C of the principal minor sums detr_j(c) of the inside curvature matrix of ∂C.

Proof: See Worsley (1995a), Theorem 4.

As appears in (1.8), the expectation of the DT characteristic has a very simple form in the stationary case. Moving away from stationarity we lose this simplicity. To demonstrate this, the expectation of the DT characteristic is obtained for a zero-mean, unit variance, but not necessarily stationary, Gaussian random field X(t) when N = 1.

Let σ²(t) = var(Ẋ(t)), r(t) = cov(X(t), Ẋ(t)), v(t) = r(t)/σ(t) and u(t) = x r(t)/[σ(t)(1 − v²(t))^{1/2}]; then

$$ E[DT(A_x)] = \frac{e^{-x^2/2}}{\sqrt{2\pi}} \int_C \sigma(t)\,(1 - v^2(t))^{1/2} \left[u(t)\,\Phi(u(t)) + \phi(u(t))\right] dt, $$

where φ and Φ are the density and the distribution function of the standard normal, respectively. We will obtain the expectation of the EC for some other non-stationary random fields in the next chapters and see how tedious the formulas are in dimensions greater than 1.

We conclude this section with a brief comparison of the characteristics. The HC and the DT characteristics are not invariant under rotations. While the HC is only invariant under a permutation of the coordinate axes, the EC is invariant under any one-to-one continuous transformation. The domain of definition of the EC is very wide, but the point set representation for it is limited to sets with smooth boundary. However, Adler (1981, page 86) shows that within the domain of definition of the HC and the EC, they are equal. And when the excursion set does not touch the boundary of C, the DT and IG characteristics are equal to the EC.

1.4 Scale space random fields

In many applications the observed realizations of random fields are first smoothed to reduce the effect of the noise. Smoothing is achieved by convolution with a kernel, but the amount of smoothness, as measured by the width or scale of the kernel, is somewhat arbitrary. Siegmund and Worsley (1995) consider the amount of smoothness as an extra width or scale parameter for the random field. Adding a scale parameter to the domain of a stationary random field makes it a non-stationary random field. They construct a test based on maximising a random field in N + 1 dimensions, N dimensions for the location and one dimension for the scale. Using two different approaches, the volume of tubes and the expectation of the EC, Siegmund and Worsley (1995) obtain an approximation for the tail probability of the test statistic in the case of Gaussian random fields for N ≤ 3. Here we are concerned with the HC approach.

Let k be an N-dimensional kernel such that

$$ \int k(t)^2\, dt = 1. $$

The Gaussian scale space random field is defined as

$$ X(t, \sigma) = \sigma^{-N/2} \int k\!\left[\sigma^{-1}(h - t)\right] dZ(h), $$

where Z(h) is a Gaussian random field defined on a subset of ℝ^N and σ is a positive constant. We will assume that Z = B, the N-dimensional Brownian sheet. Then E[X(t, σ)] = 0 and

$$ \mathrm{Cov}\!\left[X(t_1, \sigma_1), X(t_2, \sigma_2)\right] = (\sigma_1 \sigma_2)^{-N/2} \int k\!\left[\sigma_1^{-1}(h - t_1)\right] k\!\left[\sigma_2^{-1}(h - t_2)\right] dh. $$

From the covariance function we see that for fixed σ, X(t, σ) is stationary in t, but it is not stationary in (t, σ).
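The following sketch (my own illustration, not part of the thesis; it uses a discrete Gaussian kernel and FFT convolution as stand-ins for the continuous definition) builds a scale space field on a grid by filtering the same white noise at several scales with the sum-of-squares normalization playing the role of σ^{-N/2}, so that each slice has roughly unit variance, and records the maximum over location and scale.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel_2d(sigma):
    """Discrete 2-D Gaussian-shaped kernel, normalized so that sum(k**2) = 1,
    the discrete analogue of sigma^{-N/2} k((h - t)/sigma)."""
    r = int(4 * sigma)
    u = np.arange(-r, r + 1)
    g = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / (2 * sigma**2))
    return g / np.sqrt((g**2).sum())

rng = np.random.default_rng(3)
dB = rng.standard_normal((256, 256))      # white-noise increments dB(h)
scales = [2.0, 4.0, 8.0, 16.0]
stack = np.stack([fftconvolve(dB, gaussian_kernel_2d(s), mode="same") for s in scales])
print([round(float(a.std()), 2) for a in stack])   # each slice ~ unit variance
k, i, j = np.unravel_index(int(stack.argmax()), stack.shape)
print("X_max =", float(stack.max()), "at scale", scales[k], "location", (i, j))
```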

One justification for working with Gaussian scale space random fields is as follows. Assume the random field Z(t), t ∈ ℝ^N, satisfies

$$ dZ(t) = \xi\, \sigma_0^{-N/2}\, k\!\left[\sigma_0^{-1}(t - t_0)\right] dt + dB(t), $$

where t₀ ∈ C ⊂ ℝ^N, ξ ≥ 0 and σ₀ > 0 are fixed values and B is an N-dimensional Brownian sheet. The unknown parameter (ξ, t₀, σ₀) represents the amplitude, location and scale of a signal added to the noise dB(t). In other words, the shape of the signal in dZ(t) matches the shape of the filter k. Models of this form have been used in different scientific contexts, for example in the study of human brain function via positron emission tomography (Worsley et al., 1992; Worsley, 1994), and for geographical clustering of disease incidences (Rabinowitz, 1994). Now, for testing the hypothesis of no signal, that is ξ = 0, consider the test statistic that rejects for large positive values of

$$ X_{\max} = \sup_{t \in C,\; \sigma \in I} X(t, \sigma). \qquad (1.9) $$

It can be shown (see Siegmund and Worsley, 1995) that the log likelihood ratio for a given (ξ, t₀, σ₀) is ξ X(t₀, σ₀) − ξ²/2, so the test defined by (1.9) is the likelihood ratio test for testing ξ = 0. The main problem is finally the P-value of X_max above x. A very good approximation, as we said before, is to use E[χ(A_x)] from (0.1). In the next section we show how to evaluate it.

1.5 The expected HC for the scale space random field

For scale space random fields the Hadwiger characteristic is more amenable to statistical analysis, so Siegmund and Worsley (1995) obtain the expectation of this characteristic. Then, using the fact that under regularity conditions the EC and the HC are equal with probability one, we obtain the expectation for the EC. They use the additivity property (1.1) of the HC to obtain a point set representation for the excursion set A_x of X(t, σ) above a threshold x,

$$ A_x = \{(t, \sigma) \in C \times I : X(t, \sigma) \geq x\}, $$

where I = [σ₁, σ₂] ⊂ ℝ⁺ is an interval. Then, using this representation, they obtain the expectation of the HC for a Gaussian scale space random field, isotropic in t for a fixed value of σ, when N ≤ 3. In later work Worsley (1998), using the HC approach, extends this result to general N and to the χ² scale space random field as well. He uses the definition of the HC to obtain a point set representation for the excursion set of a random field in one higher dimension inside C × I, for a general random field X(t, σ). The idea of this representation is to break down the HC contributions of the points in C × I into the points in the interior and on the boundary of this set. We will use the same idea to obtain the expected EC for the rotation space random field in the next chapter, so we mention here the expected EC for this representation. As before, let Ẋ = ∂X/∂t, Ẍ = ∂²X/∂t∂t', and let c be the inside curvature matrix of ∂C. For a fixed σ, we denote the gradient of X in the direction of the inside normal to ∂C by Ẋ_⊥ and the gradient (N − 1)-vector in the tangent plane to ∂C by Ẋ_T. In addition, let Ẋ_σ = ∂X/∂σ and Ẋ_σ⁺ = Ẋ_σ (Ẋ_σ > 0).

Theorem 5 If X(t, σ) satisfies the regularity conditions given by Adler (1981), then E[ψ(A_x)] is the sum of contributions from the interior of C × I and from each piece of its boundary, where θ(·, ·), θ(·), φ(·, ·) and φ_σ(·) are the densities of (Ẋ_T, X), Ẋ_T, (Ẋ, X) and Ẋ_σ, respectively.

Finally, the result for the expected HC of the Gaussian scale space random field when N = 2 is as follows.

Theorem 6 For the Gaussian scale space random field with N = 2, E[ψ(A_x)] is given by the explicit expression of Siegmund and Worsley (1995), involving the threshold x, the region C, the scale interval [σ₁, σ₂] and moments of the kernel k.

Chapter 2

Rotation space random fields

In the previous chapter we constructed a test statistic, based on the scale space random field, for the detection of localised signals in a random field generated by convolution of white noise with a kernel of unknown location and scale. The tail probability for this test was approximated using the expectation of the EC of the excursion set. The problem with the scale space random field in detecting signals, when we know the kernel has a circular Gaussian shape, is that it might miss oblique ellipsoidal shaped signals, since X_max is the likelihood ratio test for detecting circular Gaussian signals. In this chapter we extend the concept of the scale space random field to the rotation space random field, that is, rotating as well as scaling the Gaussian shaped smoothing filter. Using this rotation space random field should increase the sensitivity of the test statistic in detecting ellipsoidal shaped signals that might be missed by a circular shaped filter. As in the previous chapter, the null distribution of the test statistic will be approximated by the expectation of the EC of the excursion set. Introducing the rotation filter adds N(N + 1)/2 dimensions to the search space. In general, it is hard to find the expectation of the EC for such a high dimensional non-stationary random field, so we will restrict ourselves to N = 2.

In Section 2.1 the rotation filter and some of its properties are discussed. Section 2.2 is devoted to some preliminary results for the restricted rotation parameter. The expectation of the EC in this case, which is the main contribution of the thesis, is obtained in Section 2.3. Sections 2.4 to 2.8 give the contributions of the different parts of the search set to the expected EC and are, in fact, the proof of Theorem 7.

2.1 Rotation kernel

Let k be an N-dimensional kernel such that

$$ \int k(t)^2\, dt = 1. $$

The Gaussian rotation space random field is defined as

$$ X(t, S) = \det(S)^{-1/4} \int k\!\left[S^{-1/2}(h - t)\right] dB(h), $$

where B is the N-dimensional Brownian sheet and S is an N × N symmetric positive definite matrix. We have E[X(t, S)] = 0 and

$$ \mathrm{Cov}\!\left[X(t_1, S_1), X(t_2, S_2)\right] = \det(S_1 S_2)^{-1/4} \int k\!\left[S_1^{-1/2}(h - t_1)\right] k\!\left[S_2^{-1/2}(h - t_2)\right] dh = \det(S_1 S_2)^{-1/4} \int k\!\left[S_1^{-1/2}(h + t_2 - t_1)\right] k\!\left[S_2^{-1/2} h\right] dh. $$

For a fixed value of S, X(t, S) is stationary in t, but X(t, S) is not stationary in (t, S). When k(t) = π^{-N/4} exp(−||t||²/2) is the N-dimensional Gaussian kernel the covariance function simplifies to

$$ \mathrm{Cov}\!\left[X(t_1, S_1), X(t_2, S_2)\right] = 2^{N/2} \det(S_1 S_2)^{1/4} \det(S_1 + S_2)^{-1/2} \exp\!\left[-\tfrac{1}{2}(t_1 - t_2)'(S_1 + S_2)^{-1}(t_1 - t_2)\right]. \qquad (2.1) $$
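As a sanity check on the simplified form above (my own sketch, not part of the thesis; it assumes the Gaussian kernel k(t) = π^{-N/4} exp(−||t||²/2) written above), the defining integral can be evaluated numerically on a grid and compared with the closed-form expression:

```python
import numpy as np

N = 2
def k(u):                               # Gaussian kernel with integral of k^2 equal to 1
    return np.pi ** (-N / 4) * np.exp(-0.5 * np.sum(u * u, axis=-1))

S1 = np.array([[4.0, 1.0], [1.0, 2.0]])
S2 = np.array([[3.0, -0.5], [-0.5, 5.0]])
t1, t2 = np.array([0.3, -0.2]), np.array([1.0, 0.7])

# Brute-force: det(S1 S2)^(-1/4) * int k(S1^{-1/2}(h-t1)) k(S2^{-1/2}(h-t2)) dh.
# For this radially symmetric k, the inverse Cholesky factor works as S^{-1/2}.
g = np.linspace(-12, 12, 601)
H = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)
dA = (g[1] - g[0]) ** 2
A1 = np.linalg.inv(np.linalg.cholesky(S1))
A2 = np.linalg.inv(np.linalg.cholesky(S2))
num = (np.linalg.det(S1 @ S2) ** (-0.25)
       * np.sum(k((H - t1) @ A1.T) * k((H - t2) @ A2.T)) * dA)

# Closed form: 2^{N/2} det(S1 S2)^{1/4} det(S1+S2)^{-1/2} exp(-d'(S1+S2)^{-1}d / 2).
d = t1 - t2
closed = (2 ** (N / 2) * np.linalg.det(S1 @ S2) ** 0.25
          * np.linalg.det(S1 + S2) ** (-0.5)
          * np.exp(-0.5 * d @ np.linalg.solve(S1 + S2, d)))
print(num, closed)   # should agree to several decimals
```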

In the case of the Gaussian kernel there are functional relationships between the different derivatives of the rotation space random field X(t, S). Let X_S = ∂X/∂S and Ẍ = ∂²X/∂t∂t'; for example, we prove that

$$ X_S = \tfrac{1}{2} \ddot X + \tfrac{1}{4} S^{-1} X. \qquad (2.2) $$

To prove this, we write

$$ X(t, S) = (4\pi)^{N/4} \det(S)^{1/4} \int N(h - t;\, S)\, dB(h), \qquad (2.3) $$

where N(·; S) is the density of a Gaussian distribution with mean zero and variance S. Taking the derivative with respect to S we have

$$ X_S = (4\pi)^{N/4} \left[\tfrac{1}{4} \det(S)^{1/4} S^{-1} \int N(h - t;\, S)\, dB(h) + \det(S)^{1/4} \int \frac{\partial N(h - t;\, S)}{\partial S}\, dB(h)\right]. $$

For the Gaussian density with mean zero and variance S we have ∂N(h − t, S)/∂S = ½ ∂²N(h − t, S)/∂t∂t' (see Cramér and Leadbetter, 1967, page 26). So

$$ X_S = \tfrac{1}{4} S^{-1} X + \tfrac{1}{2} (4\pi)^{N/4} \det(S)^{1/4} \int \frac{\partial^2 N(h - t;\, S)}{\partial t\, \partial t'}\, dB(h). $$

On the other hand,

$$ \ddot X = (4\pi)^{N/4} \det(S)^{1/4} \int \frac{\partial^2 N(h - t;\, S)}{\partial t\, \partial t'}\, dB(h). $$

If we substitute the last equality into the expression for X_S above, we get (2.2). Having (2.2), it will not be a surprise to see later that some of the conditional distributions are singular.

Finally, the same likelihood based argument as in Section 1.4 for working with the scale space random field justifies working with the rotation space random field. Assume the random field Z(t), t ∈ ℝ^N, satisfies

$$ dZ(t) = \xi\, \det(S_0)^{-1/4}\, k\!\left[S_0^{-1/2}(t - t_0)\right] dt + dB(t), $$

where t₀ ∈ C ⊂ ℝ^N, ξ ≥ 0, S₀ is a positive definite matrix inside a set Q of positive definite matrices, and B is an N-dimensional Brownian sheet. The unknown parameter (ξ, t₀, S₀) represents the amplitude, location, rotation and scale of the signal, and dB(t) represents the noise. The test statistic that rejects for large positive values of

$$ X_{\max} = \sup_{t \in C,\; S \in Q} X(t, S) $$

is the likelihood ratio test for testing the hypothesis of no signal, that is ξ = 0.
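The search over location, scale and rotation is easy to mimic numerically. The sketch below (my own illustration, not part of the thesis; the kernel grid, the parameter ranges and the eigenvalue/angle construction of S are assumptions for the example) smooths the same noise image with elliptical Gaussian filters for a grid of shape matrices S and records the maximizing location and filter:

```python
import numpy as np
from scipy.signal import fftconvolve

def rotation_kernel(l, m, theta, radius=25):
    """Discrete elliptical Gaussian kernel with eigenvalues (l, m) and rotation
    angle theta, normalized so that sum(k**2) = 1 (the det(S)^(-1/4) factor)."""
    c, s = np.cos(theta), np.sin(theta)
    P = np.array([[c, -s], [s, c]])
    S = P @ np.diag([l, m]) @ P.T
    u = np.arange(-radius, radius + 1)
    U = np.stack(np.meshgrid(u, u, indexing="ij"), axis=-1)
    q = np.einsum("ijk,kl,ijl->ij", U, np.linalg.inv(S), U)   # (h-t)' S^{-1} (h-t)
    kern = np.exp(-0.5 * q)
    return kern / np.sqrt((kern**2).sum())

rng = np.random.default_rng(4)
dB = rng.standard_normal((200, 200))                  # white-noise increments
best = (-np.inf, None, None)
for l in (4.0, 16.0, 64.0):
    for m in (4.0, 16.0, 64.0):
        for theta in np.linspace(0, np.pi, 6, endpoint=False):
            X = fftconvolve(dB, rotation_kernel(l, m, theta), mode="same")
            if X.max() > best[0]:
                best = (float(X.max()), (l, m, theta),
                        np.unravel_index(int(X.argmax()), X.shape))
print("X_max =", best[0], "with (l, m, theta) =", best[1], "at", best[2])
```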


2.2 Restricted rotation parameter

From now on we consider only the case N = 2. In this section we assume that the rotation parameter S of the random field is restricted to positive definite matrices with eigenvalues limited to the range [λ₁, λ₂]. The set Q' of such matrices, embedded in ℝ³, is the union of two cones, as shown in Figure A.1. As we saw in the previous chapter, to obtain the expected EC we need to know the distribution of the derivative of the random field X in the direction of the inside normal to ∂Q', and the distribution of the gradient vector in the tangent plane to ∂Q'. Since ∂Q' is curved, these distributions will have a complicated form. To simplify calculation, we write S in terms of its eigenvalues l and m, which lie in [λ₁, λ₂], and the rotation angle θ of its eigenvectors. Rewriting S in terms of ρ = 2θ, we have

$$ S = \frac{1}{2}\begin{pmatrix} (l + m) + (l - m)\cos\rho & (l - m)\sin\rho \\ (l - m)\sin\rho & (l + m) - (l - m)\cos\rho \end{pmatrix}. $$

We will use s = [l, m, ρ]' instead of S as the parameter of rotation space. The domain Q of values for s is then determined by the constraint that the eigenvalues l and m lie in [λ₁, λ₂], together with the range of the angle ρ. So our aim is to derive E[χ(A_x(X(t, s)))], where

$$ A_x = \{(t, s) \in C \times Q : X(t, s) \geq x\}. $$

For simplicity of notation, derivatives of X with respect to t_i will be denoted by dot notation with a subscript i, i = 1, 2. Derivatives with respect to l, m and ρ will be denoted by subscripts l, m and ρ. To calculate the expectation of the EC we need the distribution of Y = (X, Ẋ, X_l, X_m, X_ρ, vech(Ẍ))' at a fixed point (t, s), where the distinct elements of the symmetric matrix Ẍ are arranged in the same way that the vech operator arranges symmetric matrices (see Searle, 1982). We know Y has a multivariate Gaussian distribution with zero mean. The covariance matrix of Y is obtained by taking suitable derivatives of Cov[X(t₁, s₁), X(t₂, s₂)] and then setting t₁ = t₂ and s₁ = s₂.

The algebra involved in taking these derivatives for a general kernel is very complicated. We decided to choose the Gaussian kernel as the kernel of the rotation space random field. This choice can be justified by the uniqueness theorem of scale space (see Lindeberg, 1994), which seems not too hard to generalize to rotation space. We used the computer algebra program MAPLE to derive the covariance matrix for the Gaussian kernel. The MAPLE code for these derivatives is given in Appendix B. In this code the covariance function (2.1) is defined as a multivariate function of two 5-D vectors (t1, ..., t5) and (s1, ..., s5), where t_i, s_i, i = 1, 2, are the location parameters of the random field and t_i, s_i, i = 3, 4, 5, are the rotation parameters in the order l, m, ρ. This function is given to the procedure vargen. The procedure vargen takes all necessary derivatives and computes them at (t1, ..., t5) = (s1, ..., s5). The result is written in the matrix Sigma in the same order as above. Then the covariance matrix of Y is extracted as a submatrix of Sigma. It is worth mentioning that this code works for any kernel and, in fact, for deriving the covariance matrix of X and its first two derivatives for any covariance function R(·, ·). For the Gaussian kernel, the resulting covariance matrix of Y can then be written out explicitly.


Equation (2.2) implies that for a fixed value of S some elements of the Gaussian random vector Y are linearly correlated, so that the determinant of Var(Y) is zero.
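To make the role of Vargen (Appendix B) concrete, here is a minimal SymPy sketch, not the thesis code, of the same kind of computation in one spatial dimension: it differentiates a covariance function R(t₁, s₁; t₂, s₂) of a scale space field and evaluates the result at t₁ = t₂, s₁ = s₂ to obtain the joint covariance of the field and its first derivatives. The covariance function used here is an assumed stand-in, not formula (2.1).

import sympy as sp

t1, s1, t2, s2 = sp.symbols("t1 s1 t2 s2", positive=True)

# assumed covariance of a Gaussian-kernel scale space field (illustrative only)
R = sp.sqrt(2*sp.sqrt(s1*s2)/(s1 + s2)) * sp.exp(-(t1 - t2)**2/(2*(s1 + s2)))

# field and first derivatives at each of the two points: (X, X_t, X_s)
ops1 = [lambda f: f, lambda f: sp.diff(f, t1), lambda f: sp.diff(f, s1)]
ops2 = [lambda f: f, lambda f: sp.diff(f, t2), lambda f: sp.diff(f, s2)]

n = len(ops1)
Sigma = sp.zeros(n, n)
for i in range(n):
    for j in range(n):
        cov = ops2[j](ops1[i](R))                      # mixed derivative of R
        Sigma[i, j] = sp.simplify(cov.subs({t2: t1, s2: s1}))
sp.pprint(Sigma)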

2.3 Expectation of the EC

The theorems in Chapter 1 cannot be used directly to derive the expectation of the EC. Since the boundary of the search set C × Q is not smooth, using the point set representation of the EC in (1.3) is not valid. The definition of the HC does not depend on the smoothness of the boundary of the search set, so it is more convenient to consider this characteristic and partition C × Q into different pieces with not necessarily smooth boundaries. For each part of this partition we can obtain the contribution to the HC of the excursion set. Then, using the additivity property (1.1) of the HC and a generalised form of Theorem 5, we can obtain the expected HC of the excursion set. The rotation space random field admits the regularity


conditions, hence the HC and the EC almost surely agree. Thus by obtaining the expected HC we have the expected EC.

After choosing the HC of the excursion set to work with, we still have some problems to be solved. One problem is that we have no point set representation for this characteristic in general dimensions. By generalising the point set representation for N = 2 and N = 3 given by Worsley (1995b), which seems not too hard, we can overcome this problem.

Another complication in obtaining the expected EC is the calculation of the expectation of the determinant of submatrices of Ẍ. From Adler (1981, Lemma 5.3.1) it is evident that these expectations depend on the elements of the conditional covariance and mean of Ẍ given (X, Ẋ). But, unlike the stationary case, the elements of this covariance matrix are not a symmetric function of the indices (1, 2, l, m) (see Adler, 1981, page 109). So to obtain the expectation of the random determinants we appealed to MAPLE. In deriving the expectation of the determinant of a random n × n matrix Y = [Y_{ij}], we used the expansion

$$E[\det(Y)] = \sum_{\sigma} \operatorname{sgn}(\sigma)\, E\!\left[\prod_{i=1}^{n} Y_{i\sigma(i)}\right]$$

and the fact that a suitable nth derivative of the moment generating function of Y at zero gives the expectations inside the sum (2.4). The MAPLE code for deriving the expectation of the determinant of a symmetric matrix is given in the Appendix as Edet.

Here it seems worth mentioning that using the moment generating function approach to obtain the expectations in the sum (2.4) not only eases the calculation but is the only choice, since in most of the cases the Gaussian distributions involved are singular, without a density function.
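A minimal SymPy sketch of this moment generating function approach (an illustration, not the Edet procedure of the Appendix): the determinant is expanded as a signed sum over permutations, and each product moment is obtained by differentiating the MGF of the possibly singular Gaussian distribution of the distinct entries and evaluating at zero.

import itertools
import sympy as sp

def perm_sign(perm):
    # sign of a permutation via the inversion count
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def expected_det_symmetric(n, mu, A):
    # mu (length n(n+1)/2) and A are the mean and covariance of the distinct
    # entries of a Gaussian random symmetric matrix M, ordered row by row.
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    idx = {p: k for k, p in enumerate(pairs)}
    u = sp.Matrix([sp.Symbol("u_%d%d" % p) for p in pairs])
    mgf = sp.exp((u.T * mu + sp.Rational(1, 2) * u.T * A * u)[0, 0])
    total = sp.Integer(0)
    for perm in itertools.permutations(range(n)):
        term = mgf
        for i, j in enumerate(perm):          # one derivative per factor M[i, perm(i)]
            term = sp.diff(term, u[idx[(min(i, j), max(i, j))]])
        total += perm_sign(perm) * term.subs({v: 0 for v in u})
    return sp.simplify(total)

# check: 2 x 2 symmetric matrix with independent N(0,1) distinct entries
print(expected_det_symmetric(2, sp.Matrix([0, 0, 0]), sp.eye(3)))   # prints -1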

Now we consider the partitioning of the search set C × Q. In partitioning the search set we also face a problem: the rotation space random field X has some irregular behaviour on P = {(l, m, φ) : l = m}. On this set the random field X is a constant function of φ. Going back to the original rotation space, the union of two cones Q′, the set P is the image of P′, the rotation axis of Q′. The rotation space random field on this axis is equivalent to a scale space random field. To deal with this irregularity, consider the following.


We take away the rotation axis of Q′, unfold the rest of Q′ to get Q\P, and then map the line P′ to P. We then consider the contribution of the image of P′, namely P, to the EC of the excursion set separately. The same kind of argument, separating a part of the search set, has been used in the astrophysics literature (see Gott et al., 1990).

For simplicity in notation we henceforth denote the set Q\P by Q and proceed by partitioning the rest of the search set as C × Q = (C° × Q°) ∪ (∂C × Q°) ∪ (C° × ∂Q) ∪ (∂C × ∂Q), where ° denotes the interior of a set. In turn, the boundary of Q itself can be partitioned as

where

A diagram of the above partition is shown in Figure A.2.

Let us denote the contribution of the set B ∩ A_z to the EC of the excursion set A_z by Con(B). To avoid lengthy sections we obtain the contribution of each component of the above partition, and of P, in a separate section. We calculate the contributions from all the parts and add them together. After substituting r for λ₂/λ₁ and simplifying the result we can state the main result of this thesis:

Theorem 7 For the rotation space random field with a Gaussian kernel we have


and σ₂²(l, m, φ) = 1/(l + m − (l − m) cos φ) is the conditional variance of Ẋ₂ given Ẋ₁, evaluated at (l, m, φ).

A typical example of (2.6) as a function of the threshold level z when C is a circle of radius 50 and [λ₁, λ₂] = [2, 50] is drawn in Figure A.3. Also, for the purpose of comparison, a plot of the critical values at the 0.05 level for testing the existence of a signal using X_max, as a function of the radius of the circle C, for rotation space, scale space and two different fixed scales, is plotted in Figure A.4. As we can see from this plot, the critical values for rotation space are much bigger than those of scale space and fixed scales, due to searching over a bigger set.
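The critical values shown in Figure A.4 are obtained by equating the expected EC to 0.05 and solving for the threshold. A minimal sketch of that step, assuming a function expected_ec(z) that implements the right-hand side of (2.6) (a hypothetical name used only for this illustration):

from scipy.optimize import brentq

def critical_value(expected_ec, alpha=0.05, lo=2.0, hi=10.0):
    # the expected EC decreases in z over this range, so bracket and use Brent's method
    return brentq(lambda z: expected_ec(z) - alpha, lo, hi)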

Before going through the proof of Theorem 7, note that in the following the joint density of X and a subvector Ẋ_a of Ẋ is denoted by φ_a. Also, symmetric submatrices of Ẍ are treated interchangeably as a matrix or a vector; the vector version of these submatrices, derived as explained above, is used for distributional purposes. For simplicity, we will denote Ẋ_a·(Ẋ_a > 0) and Ẋ_a·(Ẋ_a < 0) by Ẋ_a⁺ and Ẋ_a⁻, respectively.


2.4 Contribution of C° × Q° to the expected EC

The contribution of C° × Q° is similar to the contribution of the interior of the prism in Theorem 5. Let φ be the last coordinate, so that from equation (1.10) we have

$$\mathrm{Con}(C^\circ \times Q^\circ) = \int_C\!\int_Q E\left[\dot X_\varphi^{+}\det(-\ddot X_{12lm})\,\middle|\, X = z,\ \dot X_{12lm} = 0\right]\phi_{12lm}(z, 0)\, ds\, dt.$$

To calculate the expectation in the integrand of Con(C° × Q°), we first condition on Ẋ_φ. The distribution of Ẍ_{12lm} given (X = z, Ẋ_{12lm} = 0, Ẋ_φ) is N(μ, Σ) with

sin(p) ( I - rn) x cos (v) 3 - + 16 ? O. O. lm I - rn


and

Using the MAPLE procedure Edet with the above mean and variance we get


The random variable Ẋ_φ is independent of (X, Ẋ_{12lm}) and is distributed as N(0, σ_φ²). The joint density of (X, Ẋ_{12lm}) evaluated at (z, 0) is

Since

we get

After integrating over Q and C we obtain:

where |C| is the area of C.
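Each of the boundary contributions below starts from the same step: the conditional mean and variance of one block of a zero-mean Gaussian vector given another block. A minimal numerical sketch of that step (the thesis does this symbolically in MAPLE; this is only an illustration):

import numpy as np

def conditional_gaussian(S11, S12, S22, y2):
    # for a zero-mean Gaussian (Y1, Y2) with covariance blocks S11, S12, S22,
    # Y1 | Y2 = y2  is  N( S12 S22^{-1} y2,  S11 - S12 S22^{-1} S12' )
    W = S12 @ np.linalg.inv(S22)
    return W @ y2, S11 - W @ S12.T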

2.5 Contribution of ∂C × Q° to the expected EC

The set ∂C × Q° is a part of the boundary of the search region, so to obtain its contribution we use the form for boundaries as in (1.11). Since Q is flat in the topological sense, the gradient vector in the tangent plane to ∂C × Q° has the form Ẋ_{Tlm} and the Hessian matrix in the tangent plane is Ẍ_{Tlm}, where the index T denotes the derivative in the direction of the tangent to ∂C. Also, the normal to ∂C × Q° at a point (t, l, m, φ), t ∈ ∂C, is parallel to the normal to ∂C at the point t, thus Ẋ_⊥ is the derivative of X in the direction of the normal to ∂C. For the same reason (flatness of Q), the curvature matrix of ∂C × Q° has the scalar curvature c of ∂C in its tangential entry and zeros elsewhere. Therefore

$$\mathrm{Con}(\partial C \times Q^\circ) = \int\!\!\int E\left[\dot X_\varphi^{+}\det(-\ddot X_{Tlm} - \dot X_\perp\,\mathcal{C})\,(\dot X_\perp < 0)\,\middle|\, X = z,\ \dot X_{Tlm} = 0\right]\phi_{Tlm}(z, 0)\, ds\, dt_T.$$


At each fixed point on ∂C let us denote the coordinates of a point with respect to the unit tangential and normal vectors by (u₁, u₂). The change of coordinates from (u₁, u₂) to (t₁, t₂) is done by a rotation matrix. After taking the expectation in the above equation we are integrating over all possible rotations, hence without loss of generality we can replace Ẋ_{Tlm} by Ẋ_{1lm} and Ẋ_⊥ by Ẋ₂.

After these substitutions, by expanding the determinant, the expectation in the integrand can be written as:

Hence. we can write:

where

and

To calculate Con(∂C × Q°)₁ we first condition on Ẋ₂. The distribution of Ẍ_{1lm}, conditional on (X = z, Ẋ = 0, Ẋ₂), is N(μ, Σ) where


and

We use the MAPLE procedure Edet to get

If we take expectations over Ẋ₂ and then multiply by φ_{1lm}(z, 0) we have

where σ₂² = 1/(l + m − (l − m) cos φ) is the conditional variance of Ẋ₂ given Ẋ₁. Now, integration of the right hand side of (2.8) over ∂C and Q gives Con(∂C × Q°)₁. Since the right hand side of (2.8) is constant on ∂C, the first integration gives a multiple of |∂C|. The second integration over Q cannot be done analytically, so we have to do it numerically.

For the second part we also first condition on Ẋ₂. Using appropriate submatrices of


the mean and variance used for the first part we get

If we now take the expectation over Ẋ₂ and then multiply by φ_{1lm}(z, 0), the integrand becomes

Since, by the Gauss–Bonnet theorem, ∫ c dt_T = 2π χ(C),

2.6 Contribution of C° × ∂Q to the expected EC

To obtain Con(C° × ∂Q) we use the partitioning (2.5) of ∂Q. Since the rotation space random field X is the same on B₃ and B₄, these two sets have no contribution to the EC of the excursion set. So we will obtain the contribution of the other parts of ∂Q, starting with Con(C° × B₁). Since the set C° × B₁ is flat, the curvature matrix is zero. The inward normal to this set is in the direction of m. Hence we have:


The distribution of Ẍ_{12l} given (X = z, Ẋ_{12l} = 0, Ẋ_m) is N(μ, Σ) where

and

Hence the expectation in the integrand, after conditioning on Ẋ_m, is

E[det(−Ẍ_{12l}) | X = z, Ẋ_{12l} = 0, Ẋ_m]

If we take the expectation over Ẋ_m and then multiply the result by φ_{12l}(z, 0), the integrand becomes

And finally, integrating with respect to l, φ and t and substituting m = λ₁, we obtain

ICI Con(CO x B I ) = - - log 1

128n2 [il ($)+%-$] (z3& + 2z2 - 6z& - 6) ëf2/*.


The set C° × B₂ is also flat, so the curvature matrix is zero, but the inward normal to this set is in the opposite direction of l. Therefore

The conditional distribution of Ẍ_{12m} given (X = z, Ẋ_{12m} = 0, Ẋ_l) is N(μ, Σ) with

and

By the same procedure as in the previous section we get


The set C° × L has a different nature from C° × B₁ and C° × B₂. Although the set is flat, so that the curvature matrix is zero, there is no unique normal to this set. To make sure the derivative in the direction of the inside normal is negative we have to consider all the directions from the m axis to the l axis. To do this, it is enough to make sure that the derivative in the direction of l is positive and the derivative in the direction of m is negative. Therefore the contribution of C° × L will be

The matrix Ẍ₁₂ given (X = z, Ẋ₁₂ = 0, Ẋ_{lmφ}) has a fixed value of

L

sin(;) (1 - m ) + cos (9) -Yr9 + 4 ~ i n ( ~ ) [ - t ~ - -i-J + 8 lm L m '

- 1

- ( 1 + m + ( 1 - rn) cos(p)) r sin (g) -YJ + 4 (1 - COS(?)) Si + 4 (1 + cos(f9)) Sm + S lm L - rn

So conditioning on Ẋ_{lmφ} gives the expectation in the integrand as

To get the integrand, we substitute l = λ₂ and m = λ₁, then take the expectation over Ẋ_{lmφ} and multiply the result by φ₁₂(z, 0). So the integrand is

Integration with respect to φ and t gives the result


2.7 Contribution of ∂C × ∂Q to the expected EC

In this section we obtain the contribution of ∂C × ∂Q, again partitioning ∂Q as in (2.5). The sets B₃ and B₄ again have no contribution. For the other parts, B₁, B₂ and L, the same argument as in Sections 2.5 and 2.6 applies, but for clarity we will repeat it anyway.

The set ∂C × B₁ is a part of the boundary of the search set, so for its contribution we use the form for boundaries as in (1.11). Since B₁ is flat, the gradient vector in the tangent plane to ∂C × B₁ has the form Ẋ_{Tl} and the Hessian matrix in the tangent plane is equal to Ẍ_{Tl}, where the index T denotes the derivative in the direction of the tangent to ∂C. The normal to ∂C × B₁ at a point (t, l, λ₁, φ), t ∈ ∂C, is parallel to the normal to ∂C at the point t, thus Ẋ_⊥ is the derivative of X in the direction of the normal to ∂C. Since B₁ is flat, the curvature matrix of ∂C × B₁ has the scalar curvature c of ∂C in its tangential entry and zeros elsewhere. In addition to the ordinary conditions, a point in ∂C × B₁ has a contribution to the HC only if Ẋ_m < 0. This means that the curvature contribution is Ẋ_⊥𝒞. By the same argument as in Section 2.5, we can replace the tangential and normal derivatives to ∂C by Ẋ₁ and Ẋ₂, respectively.

If we simplify the determinant in the integrand, for the contribution of ∂C × B₁ we get

where

Con(∂C × B₁)₁ =


and

Conditioning on (X = z, Ẋ_{1l} = 0, Ẋ_{mφ}), Ẍ_{1l} is N(μ, Σ) with

and

Thus for the first part, conditioning on Ẋ₂ we have

Taking the expectation over Ẋ₂, then multiplying by φ_{1l}(z, 0) and substituting m = λ₁, for the integrand we get

where σ₂² = 1/((l + λ₁) − (l − λ₁) cos φ) is the conditional variance of Ẋ₂ given Ẋ₁ evaluated at m = λ₁.


As a function of l and φ, the right hand side of (2.13) cannot be integrated analytically, so we leave it for numerical integration. (2.13) does not depend on t, so the integral over t ∈ ∂C gives a multiple of |∂C|. If we substitute σ₂ in (2.13) we get f₂(l, φ).

For the second part, given Ẋ_{mφ},

and so

Integration over l and φ and using the Gauss–Bonnet theorem gives

Following the same argument as above, the contribution of ∂C × B₂ is as follows:

Con(∂C × B₂) = Con(∂C × B₂)₁ + Con(∂C × B₂)₂,

where

The distribution of Ẍ_{1m} given (X = z, Ẋ_{1m} = 0, Ẋ_{lφ}) is N(μ, Σ) where

(1 + rn - ( I - m) cos(<p)) x + (1 - m ) ( l + cos(ip)),& - 2 sin(p).Y, lm t - m


and

Then conditioning on Ẋ₂ gives

We take expectations over Ẋ₂, then multiply the result by φ_{1m}(z, 0) and substitute l = λ₂ to get

where σ₂² = 1/(λ₂ + m − (λ₂ − m) cos φ).

Again, (2.15) is a function of m and φ, and so it is left for numerical integration. (2.15) does not depend on t, so the integral over t ∈ ∂C gives a multiple of |∂C|. If we substitute σ₂ in (2.15) we get f₃(m, φ).

For the second part, if we condition on Ẋ₂ we get

and so


Therefore

The set ∂C × L is a part of the boundary of the search set, so for its contribution we use the form for boundaries as in (1.11). Since L is flat, the gradient vector in the tangent plane to ∂C × L has the form Ẋ₁ and the Hessian matrix in the tangent plane is equal to Ẍ₁₁. The normal to ∂C × L is in the direction of t₂. Now the curvature is c, the scalar curvature of ∂C. The points in ∂C × L have a contribution to the HC if Ẋ_m < 0 and Ẋ_l > 0. Therefore

Hence for the contribution of ∂C × L we have

where

Con(∂C × L)₁ =

The random variable Ẍ₁₁ given (X = z, Ẋ₁ = 0, Ẋ_{2lmφ}) has zero variance and mean


For the first part we get

where σ₂² = 1/((λ₂ + λ₁) − (λ₂ − λ₁) cos φ). If we substitute σ₂ as a function of φ we get f₄(φ). For the second part, we have

2.8 Contribution of C × P to the expected EC

On P we have l = m, in which case, as we discussed before, φ disappears and the rotation space random field reduces to the scale space random field. To obtain Con(C × P) we can use Theorem 6 for the Gaussian kernel case. For the Gaussian kernel κ = 1, d = 1/2, σ₁² = λ₁ and σ₂² = λ₂. By substituting these values in (1.12) we have


Chapter 3

Weighted rotation space random field

In the previous chapter we assumed that the eigenvalues l, m of the rotation parameter S of the rotation space random field X(t, S) were in the interval [λ₁, λ₂]. In this chapter we drop this restriction and consider the weighted rotation space random field, in which X(t, S) is weighted by a function of S that tends to zero for extreme values of l and m. There is some justification for defining the weighted rotation space random field. The limits for the eigenvalues, λ₁ and λ₂, might not be known in advance. We cannot consider all positive definite matrices as the space for the rotation parameter S, because then the EC of the excursion set would be infinite. Instead, it seems more reasonable to give a weight function to this space that gives preference to certain values of S. By doing this we will get rid of boundaries and simplify the previous result. A similar method is used in the cumulative sum test for a change point (see Pettitt, 1980), the weighted Kolmogorov–Smirnov test and the Shapiro–Wilk test, which can be regarded as weighted likelihood ratio tests.

In Section 3.1 we define the weighted rotation space and then in Section 3.2 we obtain the expectation of the EC of the excursion set for the Wishart weighted rotation space random field. In Section 3.3 we briefly discuss the possibility of extending the scale space random fields to the weighted case.


3.1 Definition

Let w(S) be a twice continuously differentiable positive function on the space of positive definite matrices S and X(t, S) be a Gaussian rotation space random field. The weighted Gaussian rotation space random field Y(t, S) is defined as

Y(t, S) = w(S) X(t, S).

Then Y is a zero mean Gaussian random field with covariance function

We restrict ourselves to N = 2 and, as in the previous chapter, reparametrize the matrix S by s = (l, m, φ). For simplicity, we write w for w(S). Let Y = [Y, Ẏ₁, Ẏ₂, Ẏ_l, Ẏ_m, Ẏ_φ, Ÿ_{12lm}]′ and X = [X, Ẋ₁, Ẋ₂, Ẋ_l, Ẋ_m, Ẋ_φ, Ẍ_{12lm}]′. Since

we have Var(Y) = B Var(X) B′, where B is a 16 × 16 matrix with the elements from the above equations. The variance of X for the Gaussian kernel was obtained in the previous chapter, so using this we can obtain the expectation of the EC.
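The entries of B come from the product rule applied to Y = wX, since w depends on s but not on t. A sketch of the pattern (not the thesis' full set of sixteen equations):

$$
\dot Y_i = w\,\dot X_i \ (i = 1, 2), \qquad
\dot Y_l = w\,\dot X_l + w_l X, \qquad
\ddot Y_{ij} = w\,\ddot X_{ij}, \qquad
\ddot Y_{il} = w\,\ddot X_{il} + w_l\,\dot X_i .
$$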

3.2 Expectation of the EC

It is not difficult to find the expected EC for a general weight function w, but the formulas are very complicated, depending on w and its first two derivatives. For simplicity, we consider the


Wishart weight function. Let w(S) = c₁ det(S)^{(n−3)/2} exp(−tr(S)/2) be the Wishart density on positive definite matrices with n degrees of freedom, where c₁ is a constant. Since we consider the parametrization by s and search over Q, the change of variables gives the weight w(s) = c₂ (l − m)(lm)^{(n−3)/2} e^{−(l+m)/2} on Q, where c₂ is another constant.
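As a small numerical illustration (not part of the thesis code), the reconstructed weight above, with the constant dropped as in the text, can be written as:

import numpy as np

def wishart_weight(l, m, n):
    # (l, m) form of the Wishart weight, up to a constant; valid for l > m > 0
    return (l - m) * (l * m) ** ((n - 3) / 2) * np.exp(-(l + m) / 2)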

Without loss of generality we assume c₂ = 1 and find E[χ(A(Y, y))], where

A(Y, y) = {(t, s) ∈ C × Q : Y(t, s) ≥ y}.

As in the previous chapter, the expectation of the EC will be the sum of contributions of different parts. As Q has no boundary except where l = m, and on this boundary the random field Y is almost everywhere equal to zero, the terms involving the boundary of Q will disappear. Following the same argument as in Chapter 2, the expectation of the EC is obtained as follows.

$$E[\chi(A(Y, y))] = |C|\,\rho_2 + |\partial C|\,\rho_1 + \chi(C)\,\rho_0,$$

where

$$\rho_1 = \int_Q E\left[\dot Y_\varphi^{+}\det(-\ddot Y_{1lm})\,(\dot Y_2 < 0)\,\middle|\, Y = y,\ \dot Y_{1lm} = 0\right]\phi_{1lm}(y, 0)\, ds,$$

and

In these expressions φ is the density of Y and its derivatives, instead of X.


Let g₂, g₁ and g₀, respectively, be the integrands in ρ₂, ρ₁ and ρ₀. Let l + m = u and l − m = v. Following the same procedure as in Chapter 2, using MAPLE, we have

where


and σ₂² = w²/(u − v cos φ) is the conditional variance of Ẏ₂ given Ẏ₁.

3.3 Weighted scale space

The idea of weighted rotation space can also be applied to the scale space random fields. The principle is to replace the interval [σ₁, σ₂] for σ by a weight function w(σ) and consider the random field

Y(t, σ) = w(σ) X(t, σ).

A suitable choice for w(σ) might be a gamma density function. The expectation of the EC for the weighted scale space can be found by using the method in Worsley (1998) for Gaussian and χ² random fields in general dimensions. This will not be pursued further in this thesis.


Chapter 4

Application

In this chapter we shall apply the rotation space random field method to a simple fMRI experiment. In Section 4.1 we have a brief review of the MRI technique for collecting image data. We describe the fMRI data and analyze them using the rotation space method in Section 4.2, and compare the result with that of the scale space method applied to the same data. Then, in Section 4.3, we validate the rotation space method on an artificial Gaussian shaped signal added to white noise.

4.1 fMRI technique

Magnetic resonance imaging (MRI) is an imaging technique used in medical settings to produce high quality images of the inside of the human body. MRI is based on the principles of nuclear magnetic resonance, a spectroscopic technique used by scientists to obtain microscopic chemical and physical information about molecules. An MRI scanner consists of a large and very strong magnet in which the patient lies. A radio wave antenna is used to send signals to the body and then receive signals back. These returning signals are converted into images by a computer attached to the scanner. Images of almost any part of the body can be obtained using the MRI technique, although MRI scanners are particularly good at looking at the non-bony parts or soft tissues of the body like the brain and nerves. These tissues are


seen much more clearly with MRI than with regular x-rays and CAT scans. A disadvantage of MRI is its higher cost compared to a regular x-ray or CAT scan. Also, CAT scans are frequently better at looking at the bones than MRI.

Functional MRI (fMRI) is a technique that has recently been introduced to obtain functional information from the central nervous system. This technique extends anatomical imaging of MRI to include localization of the brain areas active during perceptions, actions, visual and cognitive tasks. Activation of an area of the brain causes an increase in blood flow to that area that is greater than that needed to keep up with the oxygen demands of the tissues. This results in a net increase in intravascular oxyhemoglobin and a decrease in deoxyhemoglobin. Deoxyhemoglobin is paramagnetic, resulting in a decrease in signal coming from the tissues. Less deoxyhemoglobin as a result of the increase in blood flow results in an overall increase in signal. Sophisticated image processing techniques are used to obtain images of these flow changes. The fMRI technique offers opportunities for the investigation of human brain functional organization as well as medical applications such as neurosurgical planning. However, fMRI requires new experimental methods and analysis. Random field theory seems to be one of the appropriate methods for the latter.

In this section we gave a very general idea of the MRI and fMRI techniques. For an introductory reference on these methods see Frackowiak et al. (1997).

4.2 Application to fMRI data

One of the first experiments in fMRI was to locate the regions of the brain that respond to a simple visual stimulus (Kwong et al., 1992). In a similar experiment at the Montreal Neurological Institute, a subject was given a simple visual stimulus, flashing red dots, presented through light-tight goggles (Ouyang et al., 1994). The stimulus was switched off for 4 scans, then on for 4 scans. This procedure was repeated 5 times, giving 40 scans. The time interval between scans was 6 seconds and the stimulus period was T = 48 seconds. Hence the data set consists of a time series of 40 2-D images, each 128 × 128 pixels of size 2


mm. The response at one pixel is shown in Figure A.5. This data set has been analyzed using the χ² scale space method by Worsley (1998).

Let Y(t, τ) represent the blood flow at location t in D = 2 dimensions and time τ. Since the standard deviation was not constant across the image, Worsley (1998) first standardized the data to get approximately unit standard deviation at each pixel, as follows. At each fixed pixel t a sine wave with period T = 48 seconds

was fitted to the blood flow to remove most of the signal. The coefficients were estimated by

where the summation is over the n = 40 scans. Then an estimate of the pixel standard deviation is

The standardized data is

which should have, approximately, zero mean and unit standard deviation. Since there is a 6 second lag between the onset of the stimulus and the hemodynamic response, we lagged the time by 3 seconds so that most of the signal would be in the cosine component (see Figure A.5). The sine and cosine coefficients were then re-estimated using the standardized data as follows.


The images of b₁(t) and b₂(t) are shown in Figure A.6 and will be referred to as the sine and cosine components of the data, respectively.
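A minimal numerical sketch of the preprocessing just described (written for this summary, not the original analysis code; the array layout and the details of the standardization are assumptions):

import numpy as np

def sine_cosine_components(Y, tau, T=48.0, lag=3.0):
    # Y: (n, nx, ny) array of n scans of one slice; tau: scan times in seconds
    n = Y.shape[0]
    s = np.sin(2 * np.pi * (tau + lag) / T)          # lagged sine regressor
    c = np.cos(2 * np.pi * (tau + lag) / T)          # lagged cosine regressor
    Yc = Y - Y.mean(axis=0)                           # remove the pixel mean
    # least-squares coefficients at every pixel (s and c are nearly orthogonal)
    b1 = np.tensordot(s, Yc, axes=(0, 0)) / (n / 2.0)
    b2 = np.tensordot(c, Yc, axes=(0, 0)) / (n / 2.0)
    # pixel standard deviation from the residuals of the sine-wave fit
    resid = Yc - s[:, None, None] * b1 - c[:, None, None] * b2
    Ystar = Yc / resid.std(axis=0)                    # standardized data
    # re-estimate the coefficients from the standardized data
    b1s = np.tensordot(s, Ystar, axes=(0, 0)) / (n / 2.0)
    b2s = np.tensordot(c, Ystar, axes=(0, 0)) / (n / 2.0)
    return b1s, b2s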

We consider the sine and cosine components as our data and analyze them separately. That is, we smooth b* using a Gaussian filter with s = (l, m, φ), where λ₁ ≤ m ≤ l ≤ λ₂ and 0 ≤ φ ≤ 2π, and then we define:

Now our aim is to test if there is any signal in the sine and cosine components of the data. Before using the rotation space method, we have to make sure that these components are unit variance Gaussian random fields. Since b*(t) is a weighted sum of Y(t, τ), its approximate normality is assured by the Central Limit Theorem. To make b*(t) have unit variance, we used the method of Worsley (1998). The global standard deviation was estimated using the coefficients at two periodicities, A = 240/6 and C = 240/4 seconds, on either side of the periodicity of the signal, that is

$$a_1^*(t) = \sum \sin[2\pi(\tau+3)/A]\, Y^*(t,\tau)/(n/2), \qquad a_2^*(t) = \sum \cos[2\pi(\tau+3)/A]\, Y^*(t,\tau)/(n/2),$$

$$c_1^*(t) = \sum \sin[2\pi(\tau+3)/C]\, Y^*(t,\tau)/(n/2), \qquad c_2^*(t) = \sum \cos[2\pi(\tau+3)/C]\, Y^*(t,\tau)/(n/2).$$

These four images should contain no signal. They were smoothed as above,

and the global variance of b* was estimated by


where N is the number of pixels in the image. The final data analyzed as rotation space random fields are

In practice, to compute X_max we have to sample the rotation parameter space Q at discrete values of (l, m, φ). In the image processing literature the width or scale of a kernel is measured in terms of FWHM, Full Width at Half Maximum, the width of the kernel at half its maximum height. To get adequate continuity of the sampled rotation space, we chose the sampling interval, or pixel size, to be 1/10 of the FWHM. For a Gaussian kernel the FWHM is σ√(8 log 2), where σ is the standard deviation. For this kernel Var(Ẋ) = 1/(2σ²), so for an arbitrary random field we define the effective FWHM as √(4 log 2 / Var(Ẋ)). For the diagonal of rotation space (l = m = σ²), that is scale space, the random field is stationary on the log σ scale. The effective FWHM on the log σ scale is then

loga scale. The effective FWHM on the logo scale is then

r

-l log 2 FWHM =

Var(a,Y/a log O ) = d E i

(see Worsley et al., 1996). Hence to obtain uniform sampling on the log σ scale we use a pixel size of √(4 log 2)/10 = 0.167. The range of log σ is ½ log(λ₂/λ₁), so let the number of sampled intervals, n_F, be the smallest integer greater than (½ log(λ₂/λ₁))/0.167. Then the sampled values of l and m are λ₁(λ₂/λ₁)^{i/n_F}, i = 0, 1, …, n_F.

For our data we chose the limits of the FWHM of the Gaussian filters to be between 6 and 30 mm. The corresponding limits for l and m are λ₁ = (6/√(8 log 2))² = 6.49 mm² and λ₂ = (30/√(8 log 2))² = 162.30 mm². The number of sampled intervals is n_F = 10.

For the φ axis, Var(Ẋ_φ) = (l − m)²/(16 lm), and so the smallest effective FWHM over the search region is FWHM_φ = √(4 log 2 · 16 λ₁λ₂/(λ₂ − λ₁)²). The range of φ is 2π, so the number of sampled intervals, n_φ, is the smallest integer greater than 2π/(FWHM_φ/10). For λ₁ = 6.49, λ₂ = 162.30 we get n_φ = 46, which for convenience was lowered to 45, so that the sampled values of φ were separated by 360°/45 = 8°, and values of θ were separated by 4°.
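A minimal sketch of these sampling rules (an illustration; the FWHM expressions are as reconstructed above):

import numpy as np

def rotation_space_grid(lam1=6.49, lam2=162.30):
    # scale axes: uniform in log(sigma); pixel size = FWHM/10 = sqrt(4 log 2)/10
    pix = np.sqrt(4 * np.log(2)) / 10.0
    nF = int(np.ceil(0.5 * np.log(lam2 / lam1) / pix))            # -> 10
    lm_values = lam1 * (lam2 / lam1) ** (np.arange(nF + 1) / nF)
    # phi axis: the smallest effective FWHM over the region sets the spacing
    var_phi_max = (lam2 - lam1) ** 2 / (16 * lam1 * lam2)
    fwhm_phi = np.sqrt(4 * np.log(2) / var_phi_max)
    n_phi = int(np.ceil(2 * np.pi / (fwhm_phi / 10.0)))           # -> 46
    return lm_values, n_phi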

Smoothing with the Gaussian filters with λ₁ = 6.49 and λ₂ = 162.30 gives 3.69 for the global maximum of X₁ at (t₁, t₂) = (168, 54) and s = (162.30, 162.30, 0°). For X₂ the global


maximum is 14.87 at (t₁, t₂) = (138, 68) and s = (32.46, 6.49, 100°). Since there might be a negative signal in the sine component, the global minimum of this component over all filters and locations was also calculated. This minimum is −1.71, which occurs at s = (6.49, 6.49, 4°). The images of X₁ and X₂ for some values of s, including the maximizing ones, are in Figures A.7 and A.8, respectively.

For the purpose of finding the P-value, the slice of the brain was approximated by a circle of radius 61.77 mm, chosen so that its area matched that of the slice. This gives |C| = 11,960 mm², |∂C| = 388 mm and χ(C) = 1. Then E[χ(A_z)] was calculated using (2.6). To find the critical value, this expectation was equated to 0.05 and solved for the value of z. The critical value at the level of 0.05 was found to be 5.25. Therefore the results for the maximum and minimum of the sine component X₁ are not significant, but the result for the cosine component X₂ is highly significant. The images of the cosine component, along with a contour of the maximizing filter, and the excursion set above the critical value at 0.05, are shown in Figure A.9. This shows that the activation was taking place in the visual cortex, as expected.

4.3 Validation

For the purpose of validation, the rotation space method was applied to some simulated data. Gaussian white noise was simulated on the same slice of the brain as for the real data and smoothed using the Gaussian kernels with λ₁ = 6.49 and λ₂ = 162.30. The procedure was repeated for the same Gaussian data plus a Gaussian signal located at t₀ = (129, 129) with s = (20.0, 10.0, 45°) and amplitude ξ = 6. The test statistic for the noise only was X_max = 3.18 at s = (17.05, 6.19, 136°) and for signal plus noise X_max = 9.42. The maximum in the case of signal plus noise is located at t = (129, 129), s = (23.53, 6.49, 40°), very close to the location and rotation parameter of the signal. If we compare the results with 5.25, the critical value at 0.05 obtained in Section 4.2, we conclude that the result for the noise only is not significant and for signal plus noise it is significant. Figure A.10 shows the result of


smoothing the pure noise with the maximizing kernel and Figure A.11 shows the result for noise plus signal.
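A minimal sketch of this kind of simulation (illustrative only; the signal shape follows the description above, with the rotation angle taken as half of φ):

import numpy as np

def rotated_gaussian_signal(shape, t0, l, m, phi_deg, amplitude):
    theta = np.deg2rad(phi_deg) / 2.0
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Sinv = R @ np.diag([1.0 / l, 1.0 / m]) @ R.T
    x, y = np.meshgrid(np.arange(shape[0]) - t0[0],
                       np.arange(shape[1]) - t0[1], indexing="ij")
    quad = Sinv[0, 0]*x*x + 2*Sinv[0, 1]*x*y + Sinv[1, 1]*y*y
    return amplitude * np.exp(-0.5 * quad)

rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))
data = noise + rotated_gaussian_signal((256, 256), (129, 129), 20.0, 10.0, 45.0, 6.0)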

For more exploration, two Gaussian signals far from each other were added to the same simulated noise. The signals, located at (90, 180) and (170, 80), have the same amplitude of 6 and rotation parameter s = (20, 10, 20°). The signals are added to the noise and then smoothed with the maximizing filter (Figure A.12). The maximizing filter had the rotation parameter s₀ = (23.53, 12.36, 10°). The global maximum X_max = 11.56 occurs at t₀ = (170, 80) and s₀. This leads us to the conclusion that the hypothesis of having no signal must be rejected. The image of the excursion set of the location for the value s₀ of the rotation parameter is in Figure A.14 [top]. This figure shows that both signals are detected as two separate signals, as they are. An interesting question is how close two signals can be and still be separately detected by the test statistic X_max. To get a partial answer to this question,

two close Gaussian signals located at (162, 110) and (126, 129) with s = (20, 10, 60°) and an

amplitude of 6 were added to the simulated noise. The search over location and rotation gives X_max = 12.26 at t₀ = (144, 118) and s₀ = (162.30, 17.05, 304°). Figure A.13 [bottom] is the result of adding the two close signals to the noise and then smoothing with the maximizing kernel. Although, based on the value of the test statistic, there is an indication of a signal, the maximizing kernel is very far from the true signals. Figure A.14 [bottom], the excursion set of the location at s₀ of the rotation space, shows that the procedure of finding the maximizing kernel is trying to combine the two signals together as one much bigger signal in a different direction from the direction of the true signals.


Conclusion

In this thesis we obtained the expected EC for the Gaussian rotation space random field with a Gaussian kernel when N = 2. This result can be used as an approximation of the null distribution of the test statistic X_max for detecting ellipsoidal shaped signals. This result can be extended easily, using the MAPLE code in the Appendix, to any smoothing kernel. One of the main features of this thesis is the extensive use of computer algebra using MAPLE. Since the algebra cannot be checked by hand, the MAPLE code is given in the Appendix, which can be checked by hand. The result of the thesis can be extended to the χ² rotation space random field.

The method proposed in this thesis for detecting signals in a noisy image has the potential advantage of greater sensitivity at detecting signals of all rotated elliptical shapes. The disadvantage of this method is that signals which are close together might be detected as a single broader signal rather than separate signals. Other limitations of the method are the time spent to search for the global maximum and the use of a lengthy formula to approximate the null distribution.

The images analysed in the last chapter are two dimensional. Most often, the images of the brain are collected in three dimensional space. In principle the method can be applied to three dimensions, but now the search space is nine dimensional (three for location, six for rotation and scaling), which would enormously complicate both the theory and application. The power of the test statistic X_max for the Gaussian scale space random field is discussed by Siegmund and Worsley (1995). A similar discussion needs to be done for the rotation space random field. This is left for future research.


References

Adler, R. J. (1981). The Geometry of Random Fields. Wiley, New York.

Adler, R. J. (1990). An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes. Institute of Mathematical Statistics Lecture Notes–Monograph Series, 12. Institute of Mathematical Statistics, Hayward, CA.

Amari, S. (1985). Differential-Geometrical Methods in Statistics. Springer Lecture Notes in Statistics, 28. Springer.

Bolotin, V. V. (1969). Distribution theory for the reliability of mechanical systems (in Russian). Izv. Akad. Nauk. SSSR, Mech. Tela, 5.

Cao, J. (1997). Excursion sets of random fields with application to human brain mapping. PhD thesis, Department of Mathematics and Statistics, McGill University, Montreal.

Cramér, H. and Leadbetter, M. R. (1967). Stationary and Related Stochastic Processes. Wiley, New York.

Doob, J. L. (1953). Stochastic Processes. Wiley, New York.


Frackowiak, R., Friston, K., Frith, C., and Dolan, R. (1997). Human Brain Function. Academic Press.

Girko, V. L. (1990). Theory of Random Determinants. Kluwer Academic Publishers.

Gott, J., Park, C., Juszkiewicz, R., Bies, W., Bennett, D., Bouchet, F. R., and Stebbins, A. (1990). Topology of microwave background fluctuations: theory. The Astrophysical Journal, 352:1–14.

Grenander, U. (1981). Abstract Inference. John Wiley & Sons.

Hasofer, A. M. (1978). Upcrossings of random fields. Supplement to Advances in Applied Probability, 10:14–21.

Kac, M. (1943). On the average number of real roots of a random algebraic equation. Bulletin of the American Mathematical Society, 49:314–320.

Kendall, D. G. (1993). The Riemannian structure of Euclidean shape spaces: a novel environment for statistics. Annals of Statistics, 21(3):1225–1271.

Knuth, D. E. (1992). Two notes on notation. American Mathematical Monthly, 99:403–422.

Kwong, K., Belliveau, J., Chesler, D., Goldberg, I., Weisskoff, R., Poncelet, B., Kennedy, D., Hoppel, B., Cohen, M., Turner, R., Cheng, H.-M., Brady, T., and Rosen, B. (1992). Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation. In Proceedings of the National Academy of Sciences, volume 89, pages 5675–5679.

Lindeberg, T. (1994). Scale space theory: A basic tool for analysing structures at different scales. Journal of Applied Statistics, 21(2):225–270.

Matheron, G. (1975). Random Sets and Integral Geometry. John Wiley & Sons, New York–London–Sydney.


Nosko, V. P. (1973). On the possibility of using the Morse inequalities for the estimation of the number of excursions of a random field in a domain. Theory of Probability and its Applications, 18:821–822.

Ouyang, X., Pike, G., and Evans, A. (1994). fMRI of human visual cortex using temporal correlation and spatial coherence analysis. 13th Annual Symposium of the Society of Magnetic Resonance in Medicine.

Pettitt, A. N. (1980). A simple cumulative sum type statistic for the change-point problem with zero-one observations. Biometrika, 67:79–84.

Poline, J.-B. and Mazoyer, B. (1994). Enhanced detection in activation maps using a multifiltering approach. Journal of Cerebral Blood Flow and Metabolism, 14:690–699.

Rabinowitz, D. (1994). Detecting clusters in disease incidence. In Change-point Problems. IMS, Hayward, California.

Rao, C. R. (1945). Information and accuracy attainable in the estimation of statistical parameters. Bulletin of the Calcutta Mathematical Society, 37:81–91.

Rice, S. O. (1945). Mathematical analysis of random noise. Bell System Technical Journal, 24:46–156.

Searle, S. R. (1982). Matrix Algebra Useful for Statistics. Wiley, New York.

Siegmund, D. O. and Worsley, K. J. (1995). Testing for a signal with unknown location and scale in a stationary Gaussian random field. The Annals of Statistics, 23(2):608–639.

Swerling, P. (1962). Statistical properties of the contours of random surfaces. I.R.E. Transactions on Information Theory, IT-8:315–321.

Worsley, K., Evans, A., Marrett, S., and Neelin, P. (1992). A three dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism, 12:900–918.


Worsley, K., Marrett, S., Neelin, P., and Evans, A. (1996). Searching scale space for activation in PET images. Human Brain Mapping, 4:74–90.

Worsley, K. J. (1994). Local maxima and the expected Euler characteristic of excursion sets of χ², F and t fields. Advances in Applied Probability, 26:13–42.

Worsley, K. J. (1995a). Boundary corrections for the expected Euler characteristic of excursion sets of random fields, with an application to astrophysics. Advances in Applied Probability (SGSA), 27:943–959.

Worsley, K. J. (1995b). Estimating the number of peaks in a random field using the Hadwiger characteristic of excursion sets, with applications to medical images. Annals of Statistics, 23(2):640–669.

Worsley, K. J. (1998). Testing for signals with unknown location and scale in a χ² random field, with an application to fMRI. Advances in Applied Probability, submitted.

Yadrenko, M. I. (1971). Local properties of the sample functions of random fields. In Selected Translations in Mathematical Statistics and Probability, volume 10, pages 233–245.


Appendix A

Figures

Figure A.1: The original rotation parameter space Q', together with an example of the excursion set at a single pixel.


Figure A.2: The rotation parameter space Q and the partition of its boundary, together with an example of the excursion set at a single pixel.


Figure A.3: Expectation of the EC when C is a circle of radius 50 and [λ₁, λ₂] = [2, 50].


[Plot: critical values versus the radius of C, with curves for rotation space, scale space [2, 50], fixed scale = 2 and fixed scale = 50.]

Figure A.4: Comparison of critical values at the 0.05 level for a circle C with λ₁ = 2 and λ₂ = 50.


[Plot: the stimulus, the response at one pixel and the lagged cosine wave cos(2π(t+3)/T) with period T = 48 seconds, against t (seconds).]

Figure A.5: The stimulus, response at one pixel and lagged cosine waves.




Figure A.6: The sine and cosine components of the fMRI data.




Figure A.7: The sine component smoothed with different values of m and φ. The value of l is fixed at 162.30. The lower right image is the smoothed image with the maximizing filter.



Figure A.8: The cosine component smoothed with different values of l and φ. The value of m is fixed at 6.49. The middle image is the smoothed image with the maximizing filter.



Figure A.9: Left: the cosine component along with a contour of the maximizing filter. Right: the excursion set above the critical value, which covers the visual cortex.



Figure A.10: Convolution of simulated noise with the maximizing filter. No signal is detected.



Figure A.11: Top: a Gaussian signal added to the Gaussian noise. Bottom: the resulting image smoothed with the maximizing filter. Note that the added signal is detected close to its true location.



Figure A.12: Top: two Gaussian signals added to the Gaussian noise. Bottom: the resulting image smoothed with the maximizing filter. Note that the two separate signals are easily detected.



Figure A.13: Top: two close Gaussian signals added to the simulated noise. Bottom: the resulting image smoothed with the maximizing filter. Note that the two separate signals are detected as a single merged signal.



Figure A.14: True signals and the excursion sets above 5.25 at the maximizing s for the smoothed noise plus signals. Top: the far away signals; bottom: the close signals.


Appendix B

Maple code

B.1 The main procedures

Vargen is a procedure that obtains the variance matrix of the vector valued random field Y(t) as defined in Chapter 2, assuming E[X] = 0. The arguments for this procedure are m, the dimension of t, and cov, the covariance function of the random field X(t).

### The procedures below use the linalg, combinat and codegen packages.
with(linalg): with(combinat, permute): with(codegen, makeproc):

Vargen := proc(cov, m)
local N, qq, ll, i, j, k, l, ii, jj, Sigma:
N := (m+1)*(m+2)/2:
Sigma := array(symmetric, 1..N, 1..N):
# variance of the field itself
Sigma[1,1] := simplify(cov(t,t)):
# covariances of the field with its first derivatives
for i from 1 to m do
  qq := makeproc(diff(cov(t,s), s[i]), t, s):
  Sigma[1, i+1] := simplify(qq(t,t))
od:
# covariances of the field with its second derivatives
ll := 0:
for i from 1 to m do
  for j from i to m do
    qq := makeproc(diff(cov(t,s), s[i], s[j]), t, s):
    ll := ll + 1:
    Sigma[1, m+1+ll] := simplify(qq(t,t))
  od
od:
# covariances among the first derivatives
for i from 1 to m do
  for j from i to m do
    qq := makeproc(diff(cov(t,s), t[i], s[j]), t, s):
    Sigma[i+1, j+1] := simplify(qq(t,t))
  od
od:
# covariances of first with second derivatives
for i from 1 to m do
  ll := 0:
  for j from 1 to m do
    for k from j to m do
      qq := makeproc(diff(cov(t,s), t[i], s[j], s[k]), t, s):
      ll := ll + 1:
      Sigma[i+1, m+1+ll] := simplify(qq(t,t))
    od
  od
od:
# covariances among the second derivatives
ii := 0:
for i from 1 to m do
  for j from i to m do
    ii := ii + 1:
    jj := 0:
    for k from 1 to m do
      for l from k to m do
        jj := jj + 1:
        qq := makeproc(diff(cov(t,s), t[i], t[j], s[k], s[l]), t, s):
        Sigma[ii+m+1, jj+m+1] := simplify(qq(t,t))
      od
    od
  od
od:
Sigma := map(simplify, Sigma):
end:
### End of the procedure Vargen.

The following give the moment generating function and the expectation of the determinant of an n × n random normal symmetric matrix with mean mu and variance A.

MGF := proc(A, mu, n)
local t, m, i, j, ll, u:
m := coldim(A):
t := array(symmetric, 1..n, 1..n):
### Vectorizing the random matrix as described in Chapter 2 ###
u := vector(m):
ll := 1:
for i from 1 to n do
  for j from i to n do
    u[ll] := t[i,j]:
    ll := ll + 1:
  od
od:
### Defining the MGF as a function of t ###
makeproc(exp(multiply(transpose(u), mu) + (1/2)*multiply(transpose(u), A, u)), t):
end:
### End of procedure MGF ###

Edet := proc(n, mu, A)
local t, m, i, j, k, Permut, ll, PermutI, Sign_PermutI, edet, mm, Factn, mgf:
m := coldim(A):
t := array(symmetric, 1..n, 1..n):
mm := matrix(n, n, (i,j) -> 0):
### Getting all possible permutations of [1,2,...,n] ###
Permut := permute(n):
### Adding together the terms in the definition of the determinant ###
edet := 0:
Factn := factorial(n):
for i from 1 to Factn do
  mgf := MGF(A, mu, n):
  for j from 1 to n do
    mgf := makeproc(diff(mgf(t), t[j, Permut[i][j]]), t):
  od:
  PermutI := Permut[i]:
  ### determining the sign of the ith permutation of [1,2,...,n] ###
  Sign_PermutI := 1:
  for j from 1 to (n-1) do
    for k from j+1 to n do
      if PermutI[j] > PermutI[k] then Sign_PermutI := -Sign_PermutI fi:
    od
  od:
  edet := simplify(edet + Sign_PermutI*simplify(mgf(mm))):
od:
end:
# End of procedure Edet.


B.2 Calculation of the expectations in Chapter 2

In the following we calculate the contributions of the different parts of the search set to the expected EC. The strategies used to find these contributions are all the same, so we write out the details only for the first one. The variance matrix of the vector Y, whose elements are the rotation space random field and all its first and second derivatives, was calculated and saved as the matrix Sigma. The ordering of the elements of the second derivatives in a vector is the same as explained in Chapter 3. When we need the variance matrix of a subvector of Y, we extract it from Sigma. Then we calculate the conditional mean of the subvector. The expectation of the determinant of an i x i random matrix with mean Mu and variance V was calculated and saved in the file LEdeti for i = 2, 3, 4. When we need these expectations we read these files after giving appropriate values to Mu and V. Then we take the expectation with respect to some elements of Y by multiplying the terms by the density of these elements and integrating.
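Concretely, writing the subvector of interest as Y1, the conditioning elements as Y2, and the joint variance matrix partitioned into blocks S11, S12 and S22, the code uses the usual Gaussian formulas
\[
\mathrm{E}\bigl[Y_1 \mid Y_2 = \texttt{mm}\bigr] = S_{12} S_{22}^{-1}\,\texttt{mm},
\qquad
\operatorname{Var}\bigl[Y_1 \mid Y_2\bigr] = S_{11} - S_{12} S_{22}^{-1} S_{12}',
\]
so that Mu := multiply(S12, inverse(S22), mm) and Slc2 := matadd(S11, -multiply(S12, inverse(S22), transpose(S12))) in the listings below are exactly these two quantities (the means are zero, so no centering term appears).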

B.2.1 Contribution of (C x Q)°

Here we first extract from Sigma the variance matrix of the subvector consisting of the field, its first derivatives and its second derivatives, and then calculate the conditional variance of the second derivatives given the field and its first derivatives.

Now we obtain the conditional mean:

mm := vector((n+1), 0): mm[1] := x: mm[n+1] := u:
Mu := multiply(S12, inverse(S22), mm):


We read the expectation of the determinant of a 4 x 4 random matrix from LEdet4, give V the value of Slc2 obtained above, and then simplify the result:

read(LEdet4): V := Slc2:
factor(simplify(Edet4));
aa := collect(simplify(Edet4), u, distributed, factor);
latex(aa);

Here we take expectations over the remaining elements, which have a normal distribution with mean zero.
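Written out in the listing's own notation (with sigmaXphi denoting the variance parameter appearing in the density of u), the next lines evaluate
\[
\texttt{bb} = \int_{0}^{\infty} \texttt{aa}(u)\; u\, e^{-u^{2}/(2\,\sigma_{X\phi})}\, du,
\qquad
\texttt{cc} = \frac{\texttt{bb}}{\texttt{dets22}\,(2\pi)^{3}} .
\]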

bb := int(aa*u*exp(-u^2/2/sigmaXphi), u=0..infinity);
cc := simplify(bb/dets22/(2*Pi)^(3));

Finally, we write the result in LaTeX format and as a function in the C language.

latex(cc*exp(-x^2/2));
int_C_int_Q := makeproc(factor(
  int(int(int(cc, l=m..U), m=L..U), phi=0..2*Pi)));
latex(int_C_int_Q()*exp(-x^2/2));
C(int_C_int_Q, optimized, precision=single, ansi);

B.2.2 Contribution of ∂C x Q°

S11 := submatrix(Sigma, [7,9,10,16,17,19], [7,9,10,16,17,19]):
S22 := submatrix(Sigma, 1..(n+1), 1..(n+1)):
S12 := submatrix(Sigma, [7,9,10,16,17,19], 1..(n+1)):
Slc2 := matadd(S11, -multiply(S12, inverse(S22), transpose(S12))):
Slc2 := map(factor, Slc2):
mm := vector((n+1), 0): mm[1] := x: mm[3] := z: mm[6] := u:
Mu := map(simplify, multiply(S12, inverse(S22), mm)):
read(LEdet3): V := Slc2:
aa1 := collect(simplify(Edet3), [u,z,x], distributed, simplify);
latex(aa1);


bb1 := int(int(aa1*exp(-z^2/2/sigma_2^2), z=-infinity..0)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity);

cc1 := simplify(bb1/dets22/(2*Pi)^(3)):
dd1 := collect(cc1, [sigma_2, cos(phi), sin(phi), x], simplify);
latex(dd1);
b_C_int_Q_1_p := makeproc(algsubs(sigma_2=sigma_2(l,m,phi), dd1), l, m, phi):
C(b_C_int_Q_1_p, optimized, ansi, precision=single);

S11 := submatrix(Sigma, [16,17,19], [16,17,19]):
S22 := submatrix(Sigma, 1..(n+1), 1..(n+1)):
S12 := submatrix(Sigma, [16,17,19], 1..(n+1)):
Slc2 := matadd(S11, -multiply(S12, inverse(S22), transpose(S12))):
Slc2 := map(simplify, Slc2):
mm := vector(n+1, 0): mm[1] := x: mm[3] := z: mm[6] := u:
Mu := map(simplify, multiply(S12, inverse(S22), mm)):
read(LEdet2): V := Slc2:
aa2 := simplify(Edet2);
latex(aa2);

bb2 := int(int(aa2*z*exp(-z^2/2/sigma_2^2), z=-infinity..0)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity):

cc2 := simplify(bb2/dets22/(2*Pi)^(3)):
latex(cc2*exp(-x^2/2));
dd2 := algsubs(sigma_2^2 = 1/(l+m-(l-m)*cos(phi)), cc2);
b_C_int_Q_2 := makeproc(factor(int(int(int(dd2, phi=0..2*Pi), m=L..l), l=L..U)));
latex(algsubs(L=lambda_1, algsubs(U=lambda_2, b_C_int_Q_2()*exp(-x^2/2))));
C(b_C_int_Q_2, ansi, optimized, precision=single);

B.2.3 Contribution of C x ∂Q


mm := vector((n+1), 0): mm[1] := x: mm[5] := z: mm[n+1] := u:
Mu := map(simplify, multiply(S12, inverse(S22), mm)):
read(LEdet3): V := Slc2:
aa := factor(simplify(Edet3)):
aa := collect(aa, [y,z,u], distributed, factor):

bb := int(int(aa*exp(-z^2*4*m^2), z=-infinity..0)
          *u*exp(-u^2/2/sigmaXphi), u=0..infinity);
cc := simplify(bb/dets22/(2*Pi)^3):
dd := collect(simplify(subs(m=L, cc)*(256*L*l^2*Pi^3)/(1-L)), y, factor)
      /(256*L*l^2*Pi^3)*(1-L);
latex(subs(L=lambda_1, dd*exp(-x^2/2)));
int_C_B1 := makeproc(factor(int(int(dd, l=L..U), phi=0..2*Pi)));
latex(subs(U=lambda_2, subs(L=lambda_1, int_C_B1()*exp(-x^2/2))));

Slc2 := matadd(S11, -multiply(S12, inverse(S22), transpose(S12))):
Slc2 := map(simplify, Slc2):
mm := vector((n+1), 0): mm[1] := x: mm[4] := z: mm[n+1] := u:
Mu := map(simplify, multiply(S12, inverse(S22), mm)):
read(LEdet3): V := Slc2:
aa := factor(simplify(Edet3)):
aa := collect(simplify(32*l*m^3*(1-m)^2*Edet3), [y,z,u], distributed, factor)
      /(32*l*m^3*(1-m)^2):
latex(aa);


mm[1] := x: mm[3] := z: mm[5] := w: mm[6] := u:
Mu := map(collect, multiply(S12, inverse(S22), mm), y):
read(LEdet2): V := Slc2:
aa1 := simplify(Edet2):
aa1 := collect(Edet2, [u,z,w,x], simplify);
latex(aa1);

bb1 := int(int(int(aa1*exp(-z^2/2/sigma_2^2), z=-infinity..0)
               *exp(-w^2*4*m^2), w=-infinity..0)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity);

cc1 := subs(m=L, simplify(bb1/dets22/(2*Pi)^3));

dd1 := collect(simplify(cc1), [sigma_2, cos(phi), sin(phi), x], simplify);

latex(subs(L=lambda_1, dd1));
b_C_B1_1_p := makeproc(
  subs(sigma_2=sigma_2(l,L,phi), dd1*(l-L)*sqrt(2)/(512*sqrt(Pi))),
  l, phi);
C(b_C_B1_1_p, ansi, precision=single);

bb2 := int(int(int(aa2*exp(-w^2*4*m^2), w=-infinity..0)
               *z*exp(-z^2/2/sigma_2^2), z=-infinity..0)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity);

cc2 := factor(subs(m=L, simplify(bb2/dets22/(2*Pi)^3))):
latex(subs(L=lambda_1, cc2*exp(-x^2/2)));
dd2 := simplify(cc2/sigma_2^2);
b_C_B1_2 := makeproc(factor(
  int(int(1/((l+L)-(l-L)*cos(phi))*dd2, phi=0..2*Pi), l=L..U)));
latex(subs(L=lambda_1, b_C_B1_2()*exp(-x^2/2)));
C(b_C_B1_2, optimized, ansi, precision=single);


aa1 := factor(simplify(Edet2)):
aa1 := collect(aa1, [u,z,w,x], distributed, simplify);

bb1 := int(int(int(aa1*exp(-z^2/2/sigma_2^2), z=-infinity..0)
               *exp(-w^2*4*l^2), w=0..infinity)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity);

cc1 := subs(l=U, simplify(bb1/dets22/(2*Pi)^3));
dd1 := collect(512*sqrt(Pi)*cc1/((m-U)*sqrt(2)),
               [cos(phi), sin(phi), sigma_2, y], factor);
dd1_1 := collect(coeff(dd1, sigma_2, 1),
                 [cos(phi), sin(phi), y], factor);
dd1_3 := coeff(dd1, sigma_2, 3);
latex((m-lambda_2)*sqrt(2)*exp(-x^2/2)/(512*sqrt(Pi)));
latex(subs(U=lambda_2, dd1_1*sigma_2));
latex(subs(U=lambda_2, dd1_3*sigma_2^3));
b_C_B2_1_p := makeproc(
  subs(sigma_2=sigma_2(U,m,phi), dd1*(m-U)*sqrt(2)/(512*sqrt(Pi))),
  m, phi);
C(b_C_B2_1_p, ansi, precision=single);

aa2 := simplify(Mu[3]);
latex(aa2);

bb2 := int(int(int(aa2*exp(-w^2*4*l^2), w=0..infinity)
               *z*exp(-z^2/2/sigma_2^2), z=-infinity..0)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity);



cc2 := factor(subs(l=U, simplify(bb2/dets22/(2*Pi)^3))):
latex(subs(U=lambda_2, cc2*exp(-x^2/2)));
dd2 := simplify(cc2/sigma_2^2);
b_C_B2_2 := makeproc(factor(
  int(int(1/((U+m)-(U-m)*cos(phi))*dd2, phi=0..2*Pi), m=L..U)));
latex(subs(L=lambda_1, subs(U=lambda_2, b_C_B2_2()*exp(-x^2/2))));
C(b_C_B2_2, optimized, ansi, precision=single);

Mu := map(simplify, multiply(S12, inverse(S22), mm)):
aa := simplify(Mu[1]):
aa := collect(aa, ...):                  # remaining arguments of collect are illegible in the source
latex(aa);

bb := int(int(int(int(
        ...                              # inner integrand is illegible in the source
        *u*exp(-u^2/2/sigmaXphi), u=0..infinity);
cc := subs(l=U, subs(m=L, simplify(bb/dets22/(2*Pi)^3)));
dd := collect(256*Pi^2*U*L*cc/(sqrt(2)*(L-U)),
              [sigma_2, cos(phi), sin(phi), y], factor);
latex(subs(L=lambda_1, subs(U=lambda_2,
      (sqrt(2)*(L-U))/(256*Pi^2*U*L)*exp(-x^2/2))));
latex(subs(L=lambda_1, subs(U=lambda_2, dd)));
b_C_L_1_p := makeproc(subs(sigma_2=sigma_2(U,L,phi),
      (sqrt(2)*(L-U))*dd/(256*Pi^2*U*L)), phi);
C(b_C_L_1_p, ansi, precision=single);




S12 := submatrix(WSigma, [7,9,10,14,15,16], 1..(5+1)):
Slc2 := matadd(S11, -multiply(S12, inverse(S22), transpose(S12))):

mm := vector((5+1), 0): mm[1] := x: mm[3] := z: mm[6] := u:
Mu := map(simplify, multiply(S12, inverse(S22), mm)):
read(LEdet3): V := Slc2:
aa1 := factor(simplify(Edet3)):
aa1 := collect(simplify(256*l^3*m^3*(1-m)^3*Edet3),
               [u,z,y], distributed, factor)/(256*l^3*m^3*(1-m)^3);

bb1 := int(int(aa1*exp(-z^2/2/sigma_2^2), z=-infinity..0)
           *u*exp(-u^2/2/sigmaXphi), u=0..infinity);

cc1 := simplify(bb1/dets22):
dd1 := collect(cc1, psi, factor);
dd1_1 := simplify(coeff(dd1, psi, -1));
latex(dd1_1/psi);
dd1_2 := collect(simplify(coeff(dd1, psi, -2)*128
               *(1-m)*l^2*m^2/(sqrt(2)*sqrt(Pi)*sigma_2*y), eqn),
               [cos(phi), u, v], simplify);
ee1_2 := (sqrt(2)*sqrt(Pi)*sigma_2)/(128*(1-m)*l^2*m^2*psi^2):
latex(ee1_2); latex(dd1_2);
dd1_3 := collect(simplify(coeff(dd1, psi, -3)
               *64*m^(3/2)*l^(3/2)*(1-m)/(Pi*sigma_2*sin(phi)*y^2), eqn),
               [u,v], factor);
ee1_3 := Pi*sin(phi)*sigma_2/(64*m^(3/2)*l^(3/2)*(1-m)*psi^3):
latex(ee1_3); latex(dd1_3);
dd1_4 := collect(coeff(dd1, psi, -4), [y, sigma_2]);
dd1_4_1 := factor(simplify(l*m*coeff(dd1_4, y, 1), eqn));

