
Comput. Biol. Med. Vol. 25, No. 2, pp. 149-164, 1995. Copyright © 1995 Elsevier Science Ltd

Printed in Great Britain. All rights reserved. 0010-4825/95 $9.50+0.00

BUILDING A HYBRID PATIENT’S MODEL FOR AUGMENTED REALITY IN SURGERY:

A REGISTRATION PROBLEM

STÉPHANE LAVALLÉE*, PHILIPPE CINQUIN*, RICHARD SZELISKI†, OLIVIER PÉRIA*, ALI HAMADEH*, GUILLAUME CHAMPLEBOUX* and

JOCELYNE TROCCAZ*

* TIMC-IMAG, Faculté de Médecine de Grenoble, 38700 La Tronche, France; and † Cambridge Research Lab., Digital Equipment Corporation, One Kendall Square, Bldg 700,

Cambridge, MA 02139, U.S.A.

(Received 21 September 1994)

Abstract: In the field of Augmented Reality in Surgery, building a hybrid patient's model, i.e. merging all the data and systems available for a given application, is a difficult but crucial technical problem. The purpose is to merge all the data that constitute the patient model with the reality of the surgery, i.e. the surgical tools and feedback devices. In this paper, we first develop this concept, show that this construction reduces to a problem of registration between various sensor data, and detail a general framework of registration. The state of the art in this domain is presented. Finally, we show results that we have obtained using a method based on anatomical reference surfaces. We show that in many clinical cases, registration is only possible through the use of internal patient structures.

Keywords: Registration; Merging reality and models; Fusion; Calibration; Coordinate systems; Localization; Integration

1. INTRODUCTION

Using computers and guiding systems to help the surgeon plan and perform an operation has now become a clinical reality at several sites around the world. Computer Integrated Surgery, Computer Assisted Surgery, Image Guided Surgery, Image Guided Therapy, Image Guided Operating Robot and Augmented Reality in Surgery are all names for the same reality, an exciting emerging scientific domain. However, applications of virtual reality technology in surgery must face the difficult problem of building a hybrid patient's model, which is the issue addressed here. The objective of this paper is to show the current possibilities and limitations for this particular problem. The purpose of building a hybrid model is to provide a complete representation that merges the real patient during the surgery with the useful computerized patient data.

Building such a global representation of the underlying information in an Augmented Reality system enables the surgeon to interact with both the real and virtual patient. Building a virtual model is necessary for Virtual Reality systems used to simulate a surgery, mainly for teaching purposes, while building a hybrid patient’s model is necessary for Augmented Reality systems used in real surgery, to combine what the surgeon sees with what he can see on medical images. Moreover, the patient data are themselves composed of several pieces of information, each one being different, which makes it useful to combine them in a single model. In the most general case represented in Fig. 1, an Augmented Reality system for surgery will have to gather different classes of information and systems:

• pre-operative images: CT, MRI, Angio-MRI, SPECT, PET, MEG, Stereo-Angiograms . . . ,

• anatomical models: geometrical atlases,



[Figure 1: diagram linking boxes such as "Pre-operative images (CT, MRI, . . . )" by geometrical transformations.]

Fig. 1. To build a hybrid model means to estimate a chain of geometrical transformations T1, T2, . . . , Tn between all the coordinate systems that are involved.

• intra-operative images: X-rays, ultrasound images, video images (endoscope or microscope),

• position and shape information (e.g. obtained with 3-D optical sensors),

• coordinate systems associated with visual or auditory feedback Virtual Reality devices, and

• coordinate systems associated with operative guiding systems: systems that give the accurate position of a tool freely moved by the surgeon, or systems in which a tool is actively manipulated by a robot.

This description enables us to give the definition of the hybrid model construction.

Definition. A coordinate system is associated with each pre-operative and intra-operative imaging modality, each statistical geometrical model, each sensor, each surgical tool and each guiding system; building the hybrid model requires computing a chain of geometrical transformations T1, T2, . . . , Tn between all the coordinate systems that are involved.
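Concretely, such a chain can be composed as 4 x 4 homogeneous matrices. The sketch below is illustrative only: the frame names and numeric values are hypothetical, not taken from the paper.

```python
import numpy as np

def make_transform(R, t):
    """Pack a 3x3 rotation R and a translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compose(*transforms):
    """Chain transformations T1, T2, ..., Tn; they are applied in the order
    given (T1 first), so the matrix product is Tn @ ... @ T2 @ T1."""
    result = np.eye(4)
    for T in transforms:
        result = T @ result
    return result

# Hypothetical example: a point known in the CT frame, mapped first into the
# localizer frame (T1), then into the robot frame (T2).
T1 = make_transform(np.eye(3), [10.0, 0.0, 0.0])   # CT -> localizer
T2 = make_transform(np.eye(3), [0.0, 5.0, 0.0])    # localizer -> robot
T_chain = compose(T1, T2)                          # CT -> robot

p_ct = np.array([1.0, 2.0, 3.0, 1.0])              # homogeneous point
p_robot = T_chain @ p_ct                           # -> [11., 7., 3., 1.]
```

Composing the chain once and applying the result is equivalent to applying each transformation in turn, which is why estimating every link T1, . . . , Tn suffices to relate any two coordinate systems of the model.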

To be more specific about this problem of matching between the reality and the model, let us consider three examples that correspond to three possible ways of merging the model and the reality.

Example 1. Suppose first that we want to locate the position of a real surgical tool or any instrument on the virtual model. Commonly, in commercially available navigation systems, the tip of a pointer is displayed in real time on resliced pre-operative MRI images. The 3-D position of the instrument is computed in the coordinate system of a localizer, which can be a passive manipulator with six degrees of freedom or any optical system made of three standard or linear cameras, for instance [1]. In that case, building the hybrid model reduces to estimating a transformation between a coordinate system associated with the pre-operative MRI images (these images represent the virtual model) and the coordinate system of the 3-D localizer, which gives the real-time position of the instrument (the instrument represents the real patient during surgery). To establish such a link, something that can be seen on medical images and that can be detected by the localizer must exist. In some systems, a reference feature is made with at least three little material spheres or pins which are implanted or pasted on the patient before the MRI examination and not removed before the end of the operation. These landmarks are easily observable on MRI images, their positions are quickly computed in the localizer coordinate system by a simple manual digitization, and finally the rigid transformation between two sets of three or four points can be computed very efficiently. Alternative methods exist and are within the scope of this paper. Our main emphasis is on the possibility of using only the anatomical structures of the patient, instead of material landmarks, to register all the data.
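The closed-form computation of a rigid transformation from three or four corresponding landmarks can be sketched with the standard SVD-based least-squares solution; this is a generic illustration of the technique, not the specific code of any cited system.

```python
import numpy as np

def register_points(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto
    corresponding points B, via the SVD method (Arun/Horn style).
    A, B: (N, 3) arrays of corresponding fiducial positions, N >= 3."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)        # centroids
    H = (A - ca).T @ (B - cb)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Given the landmark positions segmented in the MRI frame and the same landmarks digitized in the localizer frame, `register_points` returns the rigid link between the two coordinate systems in one shot.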

Example 2. In a second example, images from the model are merged with images from the reality. For instance, a semi-transparent flat screen is placed above the real surgical field, and some images from the model are displayed on this transparent screen. Note that one can consider that the surgeon has to look through a simple hole attached to this screen, so that the location of the surgeon's eyes can be assumed to be known. Another solution is to consider the use of glasses with LCD screens. Consider now that the virtual model is built from a set of intra-operative 2-D ultrasound images acquired at the beginning of the operation and then merged to constitute a 3-D volume of data. Another possibility would be to consider intra-operative Magnetic Resonance images, since MRI machines for surgery are now commercially available. Here, building the hybrid model implies computing the transformation between the flat screen position and the images of the model. One solution is to use an optical localizing device that can track the 3-D positions of infra-red markers in real time, and to attach emitters both to the ultrasound probe and to the flat screen, so that the 3-D volume of ultrasound images can be built in the localizer coordinate system and the position of the screen is also known in the same coordinate system. In that case, building the hybrid model is achieved by solving delicate calibration problems: first, to estimate the transformation between the ultrasound image and a rigid body made of several markers fixed to the ultrasound probe, and second, to estimate the transformation between the synthetic image displayed on the transparent screen and a rigid body made of several markers fixed to this screen.

Example 3. In a third example, a robot is asked to position indicators (e.g. laser beams), mechanical guides (e.g. a simple tube) or active tools (e.g. a cutting tool) at some location previously defined on the patient model given by pre-operative CT images. Here, the robot is seen as an intra-operative assistant of the surgeon; building the hybrid model therefore simply means estimating a link between the CT images and the robot coordinate system. Two solutions are possible.

Example 3.1. This registration can be done as in example 1 by using tactile sensors on the robot to detect the position of reference material pins previously implanted in the patient bones [2].

Example 3.2. Another solution is to use an intra-operative X-ray system as an intermediate sensor between the CT images and the robot. It is possible to estimate the transformation between the robot and a pair of X-ray images by using a calibration cage carried by the robot [3,4]. It is possible to register intra-operative X-rays with pre-operative images by matching the images of a reference anatomical structure (e.g. a vertebra) [5]. The combination of these two techniques enables us to build the hybrid model. Note that in this case, the X-ray images have been added to the hybrid model, which can be very useful when intra-operative control is necessary.

At this stage of the description of the construction of a hybrid model, we must differentiate between two classes of sub-problems which are often confused, registration and calibration, for which we give the following definitions.


Registration. To estimate a transformation between two coordinate systems which are not in the same spatial or time domain.

Calibration. To estimate a transformation between two coordinate systems which are in the same spatial and time domain.

For example, merging pre-operative CT images with an intra-operative navigation system (example 1) is a registration problem; merging intra-operative ultrasound images with a transparent LCD screen positioned above the surgical field (example 2) is a calibration problem. Similarly, example 3.1 describes a registration problem. Example 3.2 is a combination of a calibration and a registration problem.

It is important to realize that any device involved in an Augmented Reality system either must be associated with a position sensor through a calibration procedure, or must have a sensor function itself. For example, an optical or mechanical passive navigator has a sensor function when it is used to digitize 3-D points. In example 3.2, it is shown that a robot can be calibrated with X-ray images. Similarly, the position of a stereo system with LCD glasses must be accurately encoded with an optical 3-D localizer if it has to be matched with the real positions. In our experience, the use of an optical 3-D sensor such as the Optotrak system (Northern Digital) solves many of the calibration problems, since the position of different tools and devices can be computed in real time, with an accuracy better than 0.3 mm, in a common reference system. This device is made of three linear cameras that enable us to compute the 3-D location of infra-red markers (each camera locates a plane to which the marker belongs). Rigid bodies made of at least three markers constitute reference systems that can be attached to any tool or device. When a rigid body can be attached to the operated structures, or near them, an optical sensor also provides a convenient way to track the structures' motions. In the latter case, this reference rigid body constitutes the common reference system.
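Since each linear camera constrains the marker to a plane, recovering the marker's 3-D position amounts to intersecting three planes, i.e. solving a 3 x 3 linear system. The sketch below is a simplified geometric model; the plane normals and offsets are made up for illustration.

```python
import numpy as np

def intersect_planes(normals, offsets):
    """Each linear camera constrains the marker to a plane n_i . x = d_i;
    the marker position is the intersection of the three planes, i.e. the
    solution of the 3x3 linear system N x = d."""
    N = np.asarray(normals, dtype=float)   # (3, 3): one unit normal per camera
    d = np.asarray(offsets, dtype=float)   # (3,): plane offsets
    return np.linalg.solve(N, d)

# Hypothetical example: three mutually orthogonal viewing planes.
normals = [[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]
offsets = [2.0, -1.0, 4.0]
marker = intersect_planes(normals, offsets)   # -> [2., -1., 4.]
```

In practice the three planes of a real sensor are far from orthogonal and the system is solved after calibration of each camera, but the algebra is the same.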

Therefore, in the general case, to build the hybrid model means to register all the sensors and to calibrate all the other devices with at least one sensor.

In the rest of this paper, we focus on registration problems because they are the most difficult and the most specific to Augmented Reality in Surgery. Calibration methods are extensively described in robotics and vision books. We first present a general method of registration, and the state of the art in this domain. Then, we briefly present the registration techniques that we have developed for several years at the Grenoble Hospital, and we describe the results we have obtained for several clinical applications.

2. REGISTRATION METHODS

Over the past few years, many solutions have been proposed to deal with image registration problems in general, not only in the medical field but also in Computer Vision (see [6,7] for surveys in that field). A framework and a survey of image registration techniques in general is presented in [8]. However, in this paper, we are concerned with registration of 3-D spaces; the registration of 2-D images is not addressed. In this section, we present the existing methods by following a three-step framework. More details about this methodology can be found in [9].

2.1. Step 1: definition of a relation between reference systems

First, a reference coordinate system is defined and associated with each basic data set or system, and models of relationships between coordinates are defined. To give an example, we associate a coordinate system Ref_CT with CT pre-operative images, and a coordinate system Ref_sensor with an intra-operative sensor (for instance X-ray images). Now, the objective is to estimate the transformation T between Ref_sensor and Ref_CT.

This step often involves difficult sensor calibration problems. See for instance [10] for a method that can calibrate X-ray images coming from intra-operative X-ray intensifiers even if local distortions are present in the images.

This step also raises the problem of the choice of a representation for rigid-body, elastic or time-varying transformations, i.e. to what extent can we expect that an anatomical structure is stable between different data acquisitions and during surgery? In the most general case, the transformation T is simply a function that transforms coordinates M_B = (X_B, Y_B, Z_B) in Ref_B into coordinates M_A = (X_A, Y_A, Z_A) in Ref_A, taking an index t into account, which is usually the time:

M_A = T(M_B, t).     (1)

In most registration problems, the transformation T is a transformation between the same structures observed by different sensors; therefore it is a rigid-body transformation (a transformation can be considered rigid provided that deformations are negligible with respect to the required accuracy). Typically, the registration between pre-operative CT, MRI or SPECT images and intra-operative images taken at the beginning of an operation is modeled by a rigid-body transformation. In other cases, the transformation T has to take deformations into account, which becomes a problem of elastic matching. Such deformations can be global or local, micro- or macroscopic, elastic or plastic, varying quickly with time or not. Therefore an infinite set of models can be used to define a non-rigid transformation T [11-18].

2.2. Step 2: extraction of reference features and definition of a disparity (or similarity) function between corresponding features

Once the model of a relation between Ref_A and Ref_B has been defined, the second step is to extract corresponding features in Ref_A and Ref_B and, at the same time, define a disparity function (or a similarity function) between these features. Reference systems Ref_A and Ref_B will be assumed to be registered when the defined disparity function (or similarity function) is minimal (or maximal). Such an optimization constitutes the last step of the registration process. Obviously, the choice of reference features and of the corresponding optimization method is the key to any registration strategy. Most methods differ at this level.

Let us assume a set of reference features F_A = {F_Ai, i = 1 . . . N} has been extracted in Ref_A, with a corresponding set of features F_B = {F_Bj, j = 1 . . . M} in Ref_B. The disparity function must involve some function of the distances between the features F_Ai and the features F_Bj transformed by the transformation T we are looking for. A standard disparity function is given by a least-squares criterion:

D = Σ_i w_i [distance(F_Ai, T(F_Bi, t))]^2,     (2)

where the w_i are scalar weights.
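For the simplest case where the features are corresponding points and T is a static rigid transformation, criterion (2) can be sketched directly; this is a minimal illustration, not the general feature-based machinery of the paper.

```python
import numpy as np

def disparity(FA, FB, R, trans, weights=None):
    """Least-squares disparity of equation (2) for point features:
    D = sum_i w_i * ||F_Ai - (R @ F_Bi + trans)||^2.
    FA, FB: (N, 3) arrays of corresponding features; (R, trans): a rigid T."""
    FB_t = FB @ R.T + trans                     # apply T to each data feature
    d2 = np.sum((FA - FB_t) ** 2, axis=1)       # squared residual distances
    w = np.ones(len(FA)) if weights is None else np.asarray(weights)
    return float(np.sum(w * d2))

# At the correct registration the disparity vanishes; any residual offset
# contributes its squared distance, weighted by w_i.
FA = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
D0 = disparity(FA, FA, np.eye(3), np.zeros(3))             # -> 0.0
D1 = disparity(FA, FA, np.eye(3), np.array([1., 0., 0.]))  # -> 3.0
```

The registration step then searches for the (R, trans) minimizing this quantity, which is the role of step 3 below.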

Now, we examine the possible features that can be used. A large group of methods considers only the use of external landmarks (e.g. stereotactic frames or pins implanted into the patient), or simple anatomical points. Along with other research groups, we consider that surfaces of reference anatomical structures are more accurate, robust and convenient: the patient's data contain enough information for registration. Segmentation of anatomical reference features is still a difficult problem, but solutions exist for many specific cases. Since the choice of the reference features becomes the crucial step of the construction of the hybrid model, we extensively describe the approaches that are encountered in the literature, and we classify these features according to their geometrical representation. Most methods we describe refer to image registration methods, but recall that the construction of the hybrid model is a problem of registration between sensor data, and most of the sensors we can use provide images. It is hoped that the generalization of the methods described will be simple to implement for a scientist, and that physicians will gain an intuitive idea of what is technically feasible.

Matching 3-D points with 3-D points. The registration of two sets of corresponding points is a well-known problem for which authors have proposed many solutions [19,20]. The mathematics thus raises no particular problem in general; the real difficulty lies in extracting such reference points.


Anatomical landmarks segmented interactively. A standard and very simple approach consists of selecting interactively some pairs of corresponding points in two 3-D examinations. This approach has been applied to the head by many authors, using reference points such as the nasion, the top of the head, . . . : in [12,21,22] the authors use this method to register CT with MRI, in [23] the authors register MRI with SQUID neuromagnetometer images (using a digitizing probe to register the neuromagnetometer coordinate system with some reference points taken on the skin), and in [24] the authors register MRI or CT with the coordinate system of a mechanical navigation system used for stereotactic neurosurgery. In the general case, these methods are usually inaccurate, and they demand time and experience from the operator.

External landmarks. A second approach is to fix fiducial markers on the patient. Some authors have designed systems in which just a few pins or balls are fixed to the patient during pre-operative image acquisition. During surgery, a 3-D mechanical or optical localizer enables the surgeon to detect the position of such landmarks, and the system can easily compute the corresponding rigid transformation. For instance, a system successfully used in ENT surgery relies on a registration performed through small balls pasted on the patient's skin and manually detected with a passive mechanical arm [1] or with an optical infra-red sensor [25]; the authors obtain millimetric accuracy. After such a registration, the mechanical or optical sensor indicates in real time the location of a surgical tool with respect to the pre-operative images. In [26], balls are taped on the skin at locations defined by tattooed ink points: the balls are detected on pre-operative images but withdrawn just after MRI acquisition; during surgery, a mechanical localizer is used to detect the tattooed ink points. An accuracy of 3 mm is reached using four points. In a successful system designed for robotic hip replacement surgery [2], the pre/intra-operative registration is performed through metallic pins implanted into the femur prior to CT examination and sensed with a semi-automatic robotic procedure during surgery; sub-millimetric accuracy is reached. Similarly, in [27], the authors propose that small permanent fiducial markers be implanted under the skin into the skull; these markers could be easily localized on CT or MRI, and they could be detected during surgery using an A-mode ultrasonic probe mounted at the end of a mechanical localizer.
In [28], the registration is performed using at least three radiopaque 5 mm beads attached to the patient's scalp, localized on CT images and detected during surgery with the focal axis of an operating microscope targeting the beads, the location of which is known with an ultrasonic 3-D localizer; however, errors of several millimeters have been observed. A sub-millimetric registration is reached for brain images in [29] with four markers for registration between CT, MRI, PET and a 3-D mechanical localizer. For all these methods, the accurate detection of the landmarks in 3-D images is not obvious [29]: e.g. the global centroid of the centroids of the landmarks segmented on each slice may not coincide exactly with the true 3-D centroid of the landmarks, because of slice thickness and spacing.

All these approaches still raise the problem of adding material structures to the patient before CT, MRI or SPECT acquisition. They might be inaccurate in some applications when these structures are just pasted on the skin, and they imply an invasive operation prior to the real surgery when the reference structures are fixed to the bones.

Matching 3-D points with 2-D points. Registration of points has also been widely used to register 3-D images with 2-D projections. For instance, in stereotactic neurosurgery, some frames or headholders are composed of acrylic plates containing radiopaque markers: it is then possible to register the stereotactic frame coordinate system with standard X-rays and Digital Subtraction Angiography (DSA) images [30,31]. See also [32], in which scalp markers are used to register a “patient coordinate system” with angiograms. Using a 3-D localizer and a surface matching technique, the patient coordinate system is also registered with Magnetic Resonance Angiography (MRA) images, so that the MRA/angiogram transformation can be computed.


Matching 3-D curves with 3-D curves. There are a few cases where the reference features can be 3-D curves. As these curves do not coincide exactly from start to finish, most curve matching techniques first match shape signatures that are invariant under translation and rotation (usually using curvature and torsion), and then register the matched pieces of curve. For robustness purposes, these two steps can be merged. See examples in [33,34].

In practice, the 3-D curves used as reference features can be curved wires or opaque catheters fixed on the patient, blood vessels, or singularity curves (crest lines) extracted on isodensity surfaces [35].

Matching surfaces with surfaces or surface points. A very general and practical feature to use is the 3-D boundary surface of a reference anatomical structure. Typically, one surface can be the result of a 3-D segmentation algorithm on 3-D images (CT, MRI, . . . ), or a 3-D surface model obtained from anatomical sections or merged collections of segmented structures. How to obtain such segmented surfaces is not described in this paper, but in practice, skin and bones can be segmented automatically in most cases on CT and MRI, while automated segmentation of soft tissues remains a challenging problem in general. This first surface will have to be registered with a second data surface, which can be another surface segmented on 3-D images, but which can also be a set of surface patches or points acquired with a large variety of sensors. For instance, if the chosen reference structure is visible (i.e. in the case of skin surfaces, or for open surgery), standard vision sensors can be used: stereo-vision or range imaging sensors, and 3-D localizers (optical, ultrasonic or mechanical). The great advantage of such sensors is that their output is directly a set of surface points, ready to be matched without further segmentation. Ultrasound images spatially registered together using 3-D localizers (which constitute 2.5-D ultrasound images) also yield invaluable information about surface point locations. However, automated segmentation of ultrasound images is still a very difficult problem in the general case.

Surfaces can provide basic features for 3-D rigid registration as well as 3-D elastic matching. However, surface-based elastic matching is still delicate and few results are reported in the literature for medical applications. In this section, we review existing work on surface-based rigid registration, for which many different approaches have been proposed and many results exist.

Actually, there is a simple and general approach that solves the free-form surface registration problem, which involves the minimization of a distance between a model surface S_A and a data surface S_B:

D = distance(S_A, T(S_B)).     (3)

Several algorithms follow this approach. Their main differences lie in the different definitions of the distance functions between two surfaces [5,36-40].

Let the data surface S_B be represented by 3-D data points P_j, j = 1 . . . M. First, we assume that, after registration, S_B is included in S_A. The problem is to estimate the rigid-body transformation T that minimizes the following quantity D:

D = Σ_j [d_S(S_A, T(P_j))]^2,     (4)

where the distance d_S between a surface S and a point M is defined by the minimum Euclidean distance

d_S(S, M) = min_{M_i ∈ S} d(M_i, M).     (5)

To compute the distance d_S between a point and a surface, several authors have considered precomputing and storing the distances d_S(S, M) on a lattice of a volume V containing S: the result is a 3-D distance map [4]. Once the distance map has been built, for any point M inside V, the distance d_S(S, M) is approximated by a combination of the distances computed at the lattice points neighboring M. In order to optimize memory space, speed of computation and accuracy of the distance map near the surface, we have introduced octree-spline distance maps in [5].
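The lattice idea (precompute d_S on a grid pre-operatively, then answer intra-operative queries from the neighboring nodes) can be sketched as follows. This is a plain uniform grid with trilinear interpolation and a brute-force precomputation, not the octree-spline variant of [5].

```python
import numpy as np

def build_distance_map(surface_pts, lo, hi, n):
    """Precompute d_S(S, node) at every node of an n x n x n lattice covering
    the volume V (brute force; done pre-operatively, so speed is not critical)."""
    axes = [np.linspace(lo[k], hi[k], n) for k in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    nodes = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    d = np.linalg.norm(nodes[:, None, :] - surface_pts[None, :, :], axis=2).min(axis=1)
    return d.reshape(n, n, n), axes

def query_distance(dmap, axes, M):
    """Approximate d_S(S, M) from the eight lattice nodes surrounding M
    (trilinear interpolation), as needed at registration time."""
    idx, frac = [], []
    for k in range(3):
        a = axes[k]
        i = int(np.clip(np.searchsorted(a, M[k]) - 1, 0, len(a) - 2))
        idx.append(i)
        frac.append((M[k] - a[i]) / (a[i + 1] - a[i]))
    i0, i1, i2 = idx
    fx, fy, fz = frac
    c = dmap[i0:i0 + 2, i1:i1 + 2, i2:i2 + 2]
    c = c[0] * (1 - fx) + c[1] * fx      # collapse the x dimension
    c = c[0] * (1 - fy) + c[1] * fy      # collapse the y dimension
    return c[0] * (1 - fz) + c[1] * fz   # collapse the z dimension
```

Each evaluation of criterion (4) then costs only one interpolation per data point, which is what makes the intra-operative registration fast.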

In a typical surgical application where the problem is to register the position of a bone surface segmented on CT images with a set of 3-D surface points acquired during the surgery, the idea of precomputing a distance map is really practical, since it can be done pre-operatively. Therefore, the registration which takes place during the operation becomes very fast.

Different optimization methods have also been reported to minimize the criterion of equation (4) [5,36,42]. In [5], we report the use of the powerful Levenberg-Marquardt algorithm to minimize the function D. The resulting method is very general, since it can be used not only to register medical images but also surface points extracted with 3-D sensors. This method makes it possible to automatically remove outliers (false data), which occur quite often, provided that they represent less than 10 per cent of the total number of data points. See the last section for results obtained with this method.
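The paper's outlier handling operates on the surface criterion inside Levenberg-Marquardt; a simplified stand-in that conveys the idea is a trimmed least-squares loop, sketched here with the closed-form point solution as the inner solver and the trimming fraction as a parameter (all names and data are illustrative, not from [5]).

```python
import numpy as np

def rigid_fit(A, B):
    """Closed-form least-squares rigid fit (SVD), used as the inner solver."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def trimmed_register(A, B, outlier_frac=0.10, iters=5):
    """Re-fit while discarding the worst residuals, so that up to ~10% of
    false data points do not corrupt the final estimate."""
    keep = np.arange(len(A))
    for _ in range(iters):
        R, t = rigid_fit(A[keep], B[keep])
        res = np.linalg.norm(B - (A @ R.T + t), axis=1)   # per-point residuals
        n_keep = int(np.ceil((1.0 - outlier_frac) * len(A)))
        keep = np.argsort(res)[:n_keep]                   # drop the worst points
    return R, t
```

After one or two passes the gross outliers acquire the largest residuals and are excluded, and the fit on the remaining points converges to the uncorrupted solution.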

Matching a 3-D surface with 2-D projections. In the medical field, edges extracted from 2-D X-ray projection images represent valuable features for registration with 3-D models. However, for smooth surfaces, inferring the contour generators (i.e. the curves that belong to the surface and form the image contours) can be quite difficult [43]. Very few papers deal with this problem, and none of these methods have been used in the medical field. The method that we developed recently (see [5]) uses octree-spline distance maps, exactly as for 3-D surface registration. This allows us to compute quickly and accurately a distance between projection lines and a 3-D surface (the distance is minimum when the projection lines are tangent to the surface). See applications of this method in the next section.
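The tangency criterion can be illustrated as follows. Each projection line is represented by its source point and a unit direction; a brute-force minimum over surface samples stands in for the octree-spline lookup, and the function names are ours:

```python
import numpy as np

def line_point_dist(c, u, m):
    """Euclidean distance from point m to the projection line through the
    X-ray source c with unit direction u."""
    v = m - c
    return np.linalg.norm(v - (v @ u) * u)

def projection_disparity(lines, surf_pts, R=None, t=None):
    """Sum of squared minimum line-to-surface distances for a candidate rigid
    transform (R, t).  The criterion is minimal when every projection line
    grazes (is tangent to) the transformed surface."""
    R = np.eye(3) if R is None else R
    t = np.zeros(3) if t is None else t
    moved = surf_pts @ R.T + t
    return sum(min(line_point_dist(c, u, m) for m in moved) ** 2
               for c, u in lines)
```

Summed over all contour lines of two (or more) projections, this disparity can be fed to the same Levenberg-Marquardt machinery as in the 3-D/3-D case.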

Other features. Many other classes of features have been proposed. As a final example, the inertia moments of volumes corresponding to reference structures can be used. Once two corresponding volumes have been extracted, their inertia matrices are computed and the eigenvectors of each are extracted. The matching process then simply aligns the corresponding eigenvectors, or principal axes [44,45]. These approaches are simple to implement but not as accurate as the surface matching methods.

2.3. Step 3: optimization of the disparity function

The third step is to estimate the transformation parameters by minimizing a disparity function between reference features. This problem raises very technical discussions about the algorithms that can be used, but these matter far less for the efficiency of a registration method in realistic applications than the choice of the reference features and the definition of the disparity function between them. Such a discussion is therefore beyond the scope of this paper (see [9] for details). From a practical point of view, minimization usually requires iterative non-linear optimization. In the general case, a standard conjugate gradient descent technique can be used; whenever a least-squares criterion is involved, however, the Levenberg-Marquardt algorithm [46] is a very powerful choice.
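For completeness, the moment-based alignment of [44,45] mentioned at the end of Step 2 is short enough to write out. The sign-disambiguation heuristic below (trying the four proper sign combinations and keeping the best fit) is our illustrative addition, not taken from those papers:

```python
import numpy as np

def principal_axes(pts):
    """Centroid and principal axes (eigenvectors of the inertia/covariance
    matrix, columns in ascending eigenvalue order, det fixed to +1)."""
    c = pts.mean(axis=0)
    w, V = np.linalg.eigh(np.cov((pts - c).T))
    if np.linalg.det(V) < 0:
        V[:, 0] = -V[:, 0]
    return c, V

def align_by_axes(src, dst):
    """Rigid transform aligning the principal axes of `src` onto those of
    `dst`.  Eigenvector signs are ambiguous, so the four proper sign choices
    are tried and the one with the smallest residual is kept."""
    cs, Vs = principal_axes(src)
    cd, Vd = principal_axes(dst)
    best = None
    for s in ([1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]):
        R = Vd @ np.diag(s) @ Vs.T          # proper rotation (det = +1)
        t = cd - R @ cs
        moved = src @ R.T + t
        err = np.mean([np.min(np.linalg.norm(dst - m, axis=1)) for m in moved])
        if best is None or err < best[0]:
            best = (err, R, t)
    return best[1], best[2]
```

Note that the method degrades when the structure is nearly symmetric (close eigenvalues), which is one reason it is less accurate than surface matching.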

3. RESULTS OF SURFACE REGISTRATION USING OCTREE-SPLINES

In this general framework, we have developed a series of methods that are based on Levenberg-Marquardt minimization between pre-segmented boundaries of anatomical structures [5,15]. Using octree-splines to represent 3-D distance maps yields fast algorithms that allow for near real-time registration and also for elastic registration. Our approach has been implemented for three instances of registration:

• rigid 3-D/3-D (estimate the rigid-body transformation between a 3-D surface and 3-D points)

• rigid 3-D/2-D (estimate the rigid-body transformation between a 3-D surface and at least two X-ray projections)

• elastic 3-D/3-D (estimate a hierarchical volumetric transformation between two surfaces that have been slightly deformed).

Fig. 2. Virtual masks of the patient are registered. One mask is obtained from MRI images, the other with a range imaging sensor. For several thousand data points, the registration algorithm converges in less than 10 s on a DEC Alpha workstation. The accuracy of this registration step is submillimetric.

In the rest of this section, we present some results that have been obtained with these techniques.

MRI/SPECT registration. To register MRI and SPECT images accurately, we have proposed in [47] adding a range imaging sensor to a standard SPECT imaging device, and then using the skin surface of the patient's face as a stable reference feature. The basic idea is to calibrate the SPECT device with a fixed range imaging sensor, and to register a range image of the patient's face with the scalp surface segmented on MRI images, using the surface matching presented above. The two transformations are then combined to give the transformation between the SPECT images and the MRI images. This method has been successfully validated on 10 patients and is illustrated in Figs 2 and 3. Other applications of this technique in Augmented Surgery are easy to imagine.
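Combining the calibration transform with the registration transform is just a product of homogeneous matrices. A sketch with dummy values (the rotations and translations below are purely illustrative):

```python
import numpy as np

def hom(R, t):
    """Pack a rotation and a translation into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

a = np.deg2rad(30)
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

# SPECT <- range sensor: known once and for all from the calibration step
T_spect_range = hom(Rz, np.array([10.0, 0.0, 5.0]))
# range sensor <- MRI: produced by the face-surface registration
T_range_mri = hom(Rz.T, np.array([-2.0, 3.0, 0.0]))
# their product maps MRI coordinates directly into SPECT coordinates
T_spect_mri = T_spect_range @ T_range_mri

p_mri = np.array([1.0, 2.0, 3.0, 1.0])   # a point in MRI coordinates (homogeneous)
p_spect = T_spect_mri @ p_mri            # the same anatomical point in SPECT coordinates
```

The key practical point is that only the range/MRI registration has to be redone per patient; the SPECT/range calibration is fixed.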

Registration of CT images with intra-operative X-ray images. Pre/intra-operative registration using only anatomical features is usually more difficult, because of the lower quality of intra-operative data (typically X-rays or ultrasound). However, our algorithms have been successfully applied to registering CT with a pair of X-rays (for the skull and the vertebra). This is illustrated in Fig. 4 in the case of a vertebra [4,48].

Registration of CT images with intra-operative ultrasound images. Instead of X-ray images, we have demonstrated that it is possible to use intra-operative ultrasound images when the position of the ultrasound probe is localized in 3-D space (using an optical localizer). This is illustrated in Fig. 5 for the case of the pelvic bone of a patient. A registration based on pelvic bones can have many applications; for instance, it can be used to position a patient during external beam radiotherapy [49]. This method has also been tested in vitro on a vertebra, where the accuracy was about 1 mm [50].
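Lifting a pixel segmented in a 2-D ultrasound image into 3-D via the localized probe pose can be sketched as follows. The pixel scale factors and the image-plane convention are illustrative assumptions, not the calibration procedure of [50]:

```python
import numpy as np

def us_pixel_to_world(uv, T_world_probe, sx, sy):
    """Map a pixel (u, v) segmented in a 2-D ultrasound image to 3-D world
    coordinates: calibrated pixel scales (sx, sy) place it in the probe's
    image plane (taken here as z = 0 in probe coordinates), and the probe
    pose reported by the optical localizer lifts it into 3-D."""
    p_probe = np.array([uv[0] * sx, uv[1] * sy, 0.0, 1.0])
    return (T_world_probe @ p_probe)[:3]
```

Applying this to every segmented contour pixel of every localized image yields the set of 3-D points that is then matched against the CT surface.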

Fig. 3. Once the registration between the patient's masks has been performed, SPECT images that coincide with the original MRI images are resliced and displayed side by side or superimposed. Given the voxel size of SPECT images (about 4 mm), we could measure that the final accuracy is about 2 mm, which is better than the image accuracy.

Registration of CT images with a 3-D optical localizer. For open surgery, the simplest use of the surface-based registration technique requires manual digitization of 3-D surface points lying on the surface of a vertebra, e.g. using an optical localizer. These surface points are then registered with a CT model of the vertebra (Fig. 6). This method gives submillimetric accuracy [48,51].

Elastic registration. Finally, our methods have been extended to elastic registration using octree-spline transformations. The result is a volumetric transformation with local but smooth warpings. This method can be used to register models with real anatomy, to inject a priori knowledge into a 3-D segmentation process and, ultimately, to track deformations during surgery. Preliminary results are reported in [15].
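A single-level sketch of such a volumetric transformation is given below: a lattice of 3-D displacement vectors is trilinearly interpolated, so moving one lattice node only affects points in its neighborhood (local warpings) while interpolation keeps the overall deformation smooth. The real method of [15] uses a hierarchical octree-spline rather than this uniform lattice:

```python
import numpy as np

def lattice_warp(p, disp, origin, spacing):
    """Warp point p by trilinear interpolation of a lattice `disp` of shape
    (nx, ny, nz, 3) holding one displacement vector per node."""
    u = (np.asarray(p, float) - np.asarray(origin, float)) / spacing
    i0 = np.floor(u).astype(int)
    f = u - i0
    d = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0])
                     * (f[1] if dy else 1 - f[1])
                     * (f[2] if dz else 1 - f[2]))
                d += w * disp[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return np.asarray(p, float) + d
```

The elastic registration then optimizes the lattice displacements, instead of six rigid parameters, under the same distance-map criterion.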

4. CONCLUSION

The conclusion of this paper is that registering the data and systems involved in an Augmented Reality system for surgery is, in general, a difficult task. However, we hope that the references and the examples presented here have shown that solutions exist in particular situations. A major conclusion is that, for medical applications in which a bone can be used as a stable reference structure, several methods exist that are accurate and convenient. This applies to applications in which the bone itself is the target (e.g. orthopaedics, cranio-facial surgery) and also to applications in which a bone can be considered to be fixed to the operated structures with an error below the required accuracy (e.g. the skull for neurosurgery, the pelvic bone for prostate radiation therapy or retroperitoneoscopy). Among the existing methods, we prefer anatomy-based techniques to external fiducial marker solutions, which are much more cumbersome for both the surgeon and the patient.

To make further progress, it will be necessary to face two difficult technical issues: automated segmentation of reference structures, and real-time tracking of deformations during surgery using intra-operative sensors. For deformable structures that cannot be associated with rigid structures within the required accuracy (e.g. heart surgery), building the hybrid model is a much more difficult task that will require further long-term research.

5. SUMMARY

Fig. 5. A series of intra-operative ultrasound images of the pelvic bone have been segmented. The result is a set of 3-D points that belong to the surface of the pelvic bone and that lie on small pieces of 3-D planar curves. (a) Before convergence, the points segmented on the ultrasound images are not coincident with the surface of the pelvic bone segmented on CT images. (b) After convergence, all the points lie on the surface. Pre- and intra-operative reference systems are thus matched through these ultrasound images.

Augmented Reality for Surgery is an emerging discipline for which many technical problems still exist. Building a hybrid patient's model, i.e. merging all the data and systems available in a particular application, is one of these difficult but crucial problems. The purpose is to merge all the data that constitute the patient model with the reality of the surgery, i.e. the surgical tools and feedback devices. In this paper, we first develop this concept, which shows that the construction of the hybrid model reduces to the registration of multiple sensor data. In most cases, these sensor data are images or 3-D surfaces. Since this area of registration is large and confusing, a general framework of registration is presented, together with the state of the art in this domain. Finally, we show results that we have obtained using a method based on the registration of anatomical reference surfaces. We show that in many clinical cases, the integration of data is possible through the use of internal patient structures only. We claim that it is no longer necessary to use material structures fixed to the patient to register pre-operative data with the intra-operative position of the patient. The presented results show the registration of a range image with MRI images, and the application of this registration to superimpose SPECT and MRI images. We then present the registration of pre-operative data with a pair of X-rays, with a set of ultrasound images, and with a collection of 3-D points manually digitized with an optical localizer. We hope that this paper gives an overview of the problem and convinces the reader that currently existing methods provide solutions to many specific problems.

Fig. 6. Registration between surface points manually digitized with an optical localizer and a 3-D CT model. (a) Before registration. (b) After registration.

However, elastic deformations still raise difficulties, which limit current applications to anatomical structures that are stable with respect to the required accuracy.

Acknowledgements - We are very grateful to all the radiologists, surgeons and physicians involved in these projects.

REFERENCES

1. R. Mosges, G. Schlondorff, L. Klimek, D. Meyer-Ebrecht, W. Krybus and L. Adams, Computer assisted surgery. An innovative surgical technique in clinical routine. In Computer Assisted Radiology, CAR 89, H. U. Lemke (Ed.), pp. 413-415. Springer-Verlag, Berlin (1989).

2. R. H. Taylor, H. A. Paul, C. B. Cutting, B. Mittelstadt, W. Hanson, P. Kazanzides, B. Musits, Y. Y. Kim, A. Kalvin, B. Haddad, D. Khoramabadi and D. Larose, Augmentation of human precision in computer-integrated surgery. ITBM (Innovation and Technology in Biology and Medicine), Special Issue on Robotic Surgery 13, 450-468 (1992).

3. S. Lavallée, J. Troccaz, L. Gaborit, P. Cinquin, A. L. Benabid and D. Hoffmann, Image guided robot: a clinical application in stereotactic neurosurgery. In IEEE Int. Conf. on Robotics and Automation, pp. 618-625. Nice, France (1992).

4. P. Sautot, P. Cinquin, S. Lavallée and J. Troccaz, Computer assisted spine surgery: a first step towards clinical application in orthopaedics. In 14th IEEE Engineering in Medicine and Biology Conference, pp. 1071-1072. Paris, November (1992).

5. S. Lavallée, R. Szeliski and L. Brunie, Matching 3-D smooth surfaces with their 2-D projections using 3-D distance maps. In SPIE Vol. 1570 Geometric Methods in Computer Vision, pp. 322-336. San Diego, CA (1991).

6. P. J. Besl and R. C. Jain, Three-dimensional object recognition, ACM Comput. Surveys 17, 75-145 (1985).

7. P. J. Besl, The free-form surface matching problem. In Machine Vision for Three-dimensional Scenes, H. Freeman (Ed.). Academic, New York (1990).

8. L. G. Brown, A survey of image registration techniques, ACM Computing Surveys 24, 326-376 (1992).

9. S. Lavallée, Registration for computer integrated surgery: methodology, state of the art. In Computer Integrated Surgery, R. Taylor, S. Lavallée, G. Burdea and R. Mosges (Eds). MIT Press, Cambridge, MA (1995).

10. G. Champleboux, S. Lavallée, P. Sautot and P. Cinquin, Accurate calibration of cameras and range imaging sensors: the NPBS method. In IEEE Int. Conf. on Robotics and Automation, pp. 1552-1558. Nice, France (1992).

11. R. Bajcsy and S. Kovacic, Multiresolution elastic matching, Computer Vision, Graphics, and Image Processing 46, 1-21 (1989).

12. C. Schiers, U. Tiede and K. H. Höhne, Interactive 3D registration of image volumes from different sources. In Computer Assisted Radiology, CAR 89, H. U. Lemke (Ed.), pp. 667-669. Springer-Verlag, Berlin (1989).

13. J. J. Jacq and C. Roux, Automatic registration of 3D images using a simple genetic algorithm with a stochastic performance function. In IEEE Engineering in Medicine and Biology Society Proceedings, pp. 126-127. San Diego, CA (1993).

14. A. C. Evans, W. Dai, L. Collins, P. Neelin and S. Marrett, Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis. In SPIE Vol. 1445 Medical Imaging V, pp. 236-247 (1991).

15. R. Szeliski and S. Lavallée, Matching 3-D anatomical surfaces with non-rigid deformations using octree-splines. In SPIE Vol. 2031 Geometric Methods in Computer Vision II, pp. 306-315. San Diego, CA (1993).

16. N. Ayache, P. Cinquin, I. Cohen, F. Leitner and O. Monga, Segmentation of complex medical objects: a challenge and a requirement for computer assisted surgery planning and performing. In Computer Integrated Surgery, R. Taylor, S. Lavallée, G. Burdea and R. Mosges (Eds). MIT Press, Cambridge, MA (1995).

17. D. Terzopoulos and D. Metaxas, Dynamic 3D models with local and global deformations: deformable superquadrics, IEEE Transactions on Pattern Analysis and Machine Intelligence 13, 703-714 (1991).

18. D. Terzopoulos, A. Witkin and M. Kass, Constraints on deformable models: recovering 3D shape and nonrigid motion, Artificial Intelligence 36, 91-123 (1988).

19. T. S. Huang, S. D. Blostein and E. A. Margerum, Least-squares estimation of motion parameters from 3-D correspondences. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 24-26. Miami (1986).

20. S. Umeyama, Least-squares estimation of transformation parameters between two point patterns, IEEE Trans. Pattern Anal. Machine Intell. 13, 376-380 (1991).

21. G. T. Y. Chen, M. Kessler and S. Pitluck, Structure transfer between sets of three dimensional medical imaging data. In Computer Graphics, R. C. Bower (Ed.), pp. 171-175. Dallas (1985).

22. K. D. Toennies, J. K. Udupa, G. T. Herman, I. L. Wornom III and S. R. Buchman, Registration of 3-D objects and surfaces, IEEE Computer Graphics and Applications 10, 52-62 (1990).

23. M. Singh, R. R. Brechner and V. W. Henderson, Neuromagnetic localization using magnetic resonance images, IEEE Transactions on Medical Imaging 11, 129-134 (1992).

24. Y. Kosugi, E. Watanabe and J. Goto, An articulated neurosurgical navigation system using MRI and CT images, IEEE Transactions on Biomedical Engineering 35, 147-152 (1988).

25. L. Adams, A. Knepper, W. Krybus, D. Meyer-Ebrecht, G. Pfeiffer, R. Ruger and M. Witte, Orientation aid for head and neck surgeons. ITBM (Innovation and Technology in Biology and Medicine), Special Issue on Robotic Surgery 13, 409-424 (1992).

26. J. Koivukangas, Y. Louhisalmi, J. Alakuijala and J. Oikarinen, Ultrasound-controlled neuronavigator-guided brain surgery. J. Neurosurgery 79, 36-42 (1993).

Page 15: BUILDING A HYBRID PATIENT’S MODEL FOR AUGMENTED REALITY … Szewczyk/mozer 3.pdf · Building a virtual model is necessary for Virtual Reality systems used to simulate a surgery,

A hybrid patient’s model for Augmented Reality in Surgery 163

27. J. T. Lewis and R. L. Galloway, A-mode ultrasonic detection of subcutaneous fiducial markers for image-space registration. In 14th IEEE Engineering in Medicine and Biology Conference, pp. 1061-1062. Paris (1992).

28. D. W. Roberts, J. W. Strohbein and J. F. Hatch, A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope, J. Neurosurgery, 545-549 (1986).

29. V. R. Mandava, J. M. Fitzpatrick, C. R. Maurer, R. J. Maciunas and G. S. Allen, Registration of multimodal volume head images via attached markers. In SPIE Medical Imaging VI (1652): Image Processing, pp. 271-282 (1992).

30. B. A. Kall, Comprehensive multimodality surgical planning and interactive neurosurgery. In Computers in Stereotactic Neurosurgery, P. J. Kelly and B. A. Kall (Eds), pp. 209-229. Blackwell Scientific, Boston (1992).

31. P. Suetens, D. V. Vandermeulen, J. M. Gybels, G. Marchal and G. Wilms, Stereotactic arteriography and biopsy simulation with the BRW system. In Computers in Stereotactic Neurosurgery, P. J. Kelly and B. A. Kall (Eds), pp. 249-258. Blackwell Scientific, Boston (1992).

32. R. Grzeszczuk, N. Alperin, D. N. Levin, Y. Cao, K. K. Tan and C. A. Pelizzari, Multimodality intracranial angiography: registration and integration of conventional angiograms and MRA data. In IEEE Engineering Medicine Biology Society (EMBS), J. P. Morucci (Ed.), pp. 2783-2784. Paris (1992).

33. H. Wolfson, On curve matching, IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 483-489 (1990).

34. A. Guéziec and N. Ayache, Smoothing and matching of 3-D space curves. In Second European Conference on Computer Vision (ECCV'92), G. Sandini (Ed.), pp. 620-629. Santa Margherita, Italy, May. Springer-Verlag, LNCS Vol. 588 (1992).

35. J. P. Thirion, O. Monga, S. Benayoun, A. Guéziec and N. Ayache, Automatic registration of 3D images using surface curvature. In IEEE Int. Symp. on Optical Applied Science and Engineering, San Diego, July (1992).

36. C. A. Pelizzari, G. T. Y. Chen, D. R. Spelbring, R. R. Weichselbaum and C.-T. Chen, Accurate 3-D registration of CT, PET and/or MR images of the brain, J. Computer Assisted Tomography 13, 20-26 (1989).

37. C. A. Pelizzari, G. T. Y. Chen and J. Du, Registration of multiple MRI scans by matching bony surfaces. In 14th IEEE Engineering in Medicine and Biology Conference, pp. 1972-1973. Paris, November (1992).

38. H. Jiang, R. A. Robb and K. S. Holton, A new approach to 3-D registration of multimodality medical images by surface matching. In SPIE Vol. 1808 Visualization in Biomedical Computing, pp. 196-213 (1992).

39. G. Malandain and J. M. Rocchisani, Registration of 3D medical images using a mechanically based method. In IEEE EMBS - 3D Advanced Image Processing in Medicine, pp. 91-95. Rennes, France (1992).

40. J. F. Mangin, V. Frouin, I. Bloch, J. Lopez-Krahe and B. Bendriem, Fast nonsupervised 3D registration of PET and MR images of the brain. Technical Report 93C006, Telecom Paris, Paris (1993).

41. G. Borgefors, Distance transformations in arbitrary dimensions, Computer Vision, Graphics, and Image Processing 27, 321-345 (1984).

42. P. J. Besl and N. D. McKay, A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 239-256 (1992).

43. D. J. Kriegman and J. Ponce, On recognizing and positioning curved 3-D objects from image contours, IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 1127-1137 (1990).

44. A. Gamboa-Aldeco, L. Fellingham and G. Chen, Correlation of 3D surfaces from multiple modalities in medical imaging. In SPIE Vol. 626 Medicine XIV, pp. 467-473 (1986).

45. N. M. Alpert, J. F. Bradshaw, D. Kennedy and J. A. Correia, The principal axes transformation: a method of image registration, J. Nucl. Medicine 31, 1717-1722 (1989).

46. W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge (1986).

47. O. Peria, S. Lavallée, G. Champleboux, A. F. Joubert, J. F. Lebas and P. Cinquin, Millimetric registration of SPECT and MR images of the brain without headholders. In IEEE Engineering Medicine Biology Society (EMBS), pp. 14-15. San Diego, November (1993).

48. S. Lavallée, J. Troccaz, P. Sautot, B. Mazier, P. Cinquin, P. Merloz and J. P. Chirossel, Computer assisted spine surgery using anatomy-based registration. In Computer Integrated Surgery, R. Taylor, S. Lavallée, G. Burdea and R. Mosges (Eds). MIT Press, Cambridge, MA (1995).

49. J. Troccaz, Y. Menguy, M. Bolla, P. Cinquin, P. Vassal, N. Laieb, L. Desbat, A. Dusserre and S. Dal Soglio, Conformal external radiotherapy of prostatic carcinoma: requirements and experimental results, Radiother. Oncol. 29, 176-183 (1993).

50. C. Barbe, J. Troccaz, B. Mazier and S. Lavallée, Using 2.5D echography in computer assisted spine surgery. In IEEE Engineering in Medicine and Biology Society Proceedings, pp. 160-161. San Diego (1993).

51. S. Lavallée, P. Sautot, J. Troccaz, P. Cinquin and P. Merloz, Computer assisted spine surgery: a technique for accurate transpedicular screw fixation using CT data and a 3-D optical localizer. In First International Symposium on Medical Robotics and Computer Assisted Surgery (MRCAS 94), Pittsburgh, Pennsylvania (1994).

About the Author-STÉPHANE LAVALLÉE was born in Pavillons-sous-Bois, France, in 1962. He received the engineering degree from Ecole Nationale Supérieure des Télécommunications de Bretagne in 1986, and the Ph.D. degree in Biomedical Engineering from Grenoble University in 1989. In 1991, he worked at the Cambridge Research Laboratory of Digital Equipment Corporation in Cambridge, MA. Since 1991, he has held a research position at the CNRS, in the TIMC (Techniques de l'Imagerie, la Modélisation et la Cognition) laboratory in Grenoble, France. He has worked on Computer Integrated Surgery, including its application to robot assisted stereotactic neurosurgery and extensions to orthopaedics, and recently co-edited a book on the subject. His current research interests include registration of multi-modality images and models, registration of images with surgical physical spaces, sensor calibration and robot calibration.

About the Author-PHILIPPE CINQUIN was born in 1956. He holds a Doctor ès Sciences thesis in Applied Mathematics and is a Medical Doctor. His interests lie in the application of computer science and applied mathematics to medical image processing. He has been professor at the Joseph Fourier University and at Grenoble's University Hospital since 1989. There, he has been in charge of a program of Computer Assisted Medical Interventions since 1985.

About the Author-RICHARD SZELISKI was born in Montreal, Canada, in 1958. He received the B.Eng. degree in Honours Electrical Engineering from McGill University, Montreal, in 1979, the M.A.Sc. degree in Electrical Engineering from the University of British Columbia, Vancouver, in 1981, and the Ph.D. degree in Computer Science from Carnegie Mellon University, Pittsburgh, in 1988. Since July 1989 he has been a Member of Research Staff at the Cambridge Research Lab of Digital Equipment Corporation, Cambridge, where he is pursuing research in 3-D computer vision, geometric modeling, medical image registration, and parallel programming and algorithms. Prior to coming to Digital, he worked at Bell-Northern Research, Montreal, at Schlumberger Palo Alto Research, Palo Alto, and at the Artificial Intelligence Center of SRI International, Menlo Park. Dr Szeliski has published over 30 research papers in computer vision, computer graphics, medical imaging, neural nets and parallel numerical algorithms, as well as the book Bayesian Modeling of Uncertainty in Low-Level Vision. He is a member of the American Association for Artificial Intelligence, the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and Sigma Xi.

About the Author-OLIVIER PERIA was born in 1967. He received the Engineering degree in Computer Science in 1991 from the 3I Institute (Informatique Industrielle et Instrumentation). He is currently a Ph.D. student at TIMC, where he is working on registration of SPECT images.

About the Author-ALI HAMADEH was born in Damascus, Syria, in 1968. He received the Engineering degree in Electronics in 1991 from ENSERG/INPG in France. He is currently a Ph.D. student at TIMC, where he is working on segmentation and registration of projection images.

About the Author-GUILLAUME CHAMPLEBOUX was born in 1961. He received the Engineering degree from the Mechanical and Microtechnical School in Besançon, France, in 1986 and the Ph.D. degree in applied mathematics from the Université J. Fourier in 1991. His domain of interest is mathematical modelling in vision and sensor calibration, in relation with the medical field. He is a senior member of the research staff of the TIMC laboratory in Grenoble.

About the Author-JOCELYNE TROCCAZ was born in 1959. She received a Ph.D. in Computer Science from the Institut National Polytechnique de Grenoble in 1986 and an "Habilitation à diriger des recherches" in 1993. She was a teaching assistant from 1984 to 1988 at the Université Joseph Fourier of Grenoble and has been Chargée de Recherche at CNRS since 1988. Her primary interest lies in automatic robot programming; her current interest lies in the medical applications of robotics, including micro-robotics and the design of safe robots for surgical applications. She is involved in different clinical applications including radiotherapy optimization and virtual echography.