

High performance computing in biomedical imaging research

Mahlon Stacy *, Dennis Hanson, Jon Camp, Richard A. Robb

Biomedical Imaging Resource, Mayo Foundation/Clinic, Rochester, MN 55905, USA

Received 15 January 1998; received in revised form 15 April 1998

Abstract

The Mayo Biomedical Imaging Resource (BIR) conducts research into and development of image analysis, visualization, and measurement capabilities and software tools for biomedical imaging applications. The design goal for these tools includes full interactivity, yet some tools are both compute bound and time sensitive. Therefore, effective use of these capabilities requires that they be executed on high performance computers. This paper provides an overview of the high performance computing activities in the BIR, including resources, algorithms and applications. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Biomedical imaging; Image processing; Image visualization; Image registration; Image segmentation

1. Introduction

The Mayo Biomedical Imaging Resource (BIR) has its roots in basic physiology research dating back to the 1940s. Computational approaches to quantitative biology began in the late 1950s and became extensive in the 1960s. Mainframe based computer imaging techniques were developed in the 1970s and transferred to networked Unix workstations beginning in 1986. The BIR currently utilizes a mixture of computer brands, architectures, and networking technologies to address a broad spectrum of biomedical research activities related to imaging. Fig. 1 is a diagram of the current computational network in the BIR.

The BIR has developed many advanced algorithms for visualization, processing and measurement of biological and medical volumetric images. This effort was

Parallel Computing 24 (1998) 1287–1321

* Corresponding author.

0167-8191/98/$ – see front matter © 1998 Elsevier Science B.V. All rights reserved.

PII: S0167-8191(98)00059-3


significantly influenced by the requirements of the Dynamic Spatial Reconstructor [35,44], a unique real time three-dimensional (3D) X-ray computed tomography (CT) device developed in the 1970s. The success of the various capabilities and tools developed by the BIR led to integration of the software into a comprehensive, interactive software system called ANALYZE [37] in 1988, now in its 8th major version 10 years later. ANALYZE has been distributed across the world, both academically and commercially, and is being used in over 300 institutions.

The primary design goals for ANALYZE were to provide interactive operation for most of the visualization and analysis tools, integration between the tools to form a consistent interface, and portability among Unix workstations. In early versions of the program, hand optimization of assembly language routines, and techniques for accelerated computation, such as using 16 bit integer lookup tables to avoid expensive floating point calculations, were valuable elements required to sustain interactivity.

As the functionality of ANALYZE expanded (the underlying AVW library now contains over 500 functions), algorithms developed for image processing, volumetric rendering, image segmentation and registration, modeling, and procedures such as virtual endoscopy exceeded the capabilities for interactivity even on the fastest of computers. In addition to powerful new software algorithms, the average sizes of image volumes have grown significantly. A full thoracic 3D volume image from the

Fig. 1. BIR systems and networking. At the core of the BIR are several multiprocessing systems providing computation and file services. Ethernet, Fast Ethernet, FDDI and Fiber Channel networks provide the communication fabric.


DSR was typically 128 × 128 × 128 8-bit pixels, or 2 Mbytes in size. Present generation magnetic resonance images (MRI) typically are 256 × 256 16-bit pixels, with MRI volumes routinely containing 100 or more slices, totaling approximately 16 Mbytes. X-ray CT images typically measure 512 × 512 16-bit pixels, with spiral CT volumes containing 50–200 slices and ranging upwards of 100 Mbytes in size. Current research at Mayo in micro-CT using very high-resolution technology typically generates 2 Gbytes per volume image. The National Library of Medicine's Visible Human Dataset occupies about 54 Gbytes at full resolution [33].

Visualization methods for 3D image data include a variety of volume rendering algorithms. Most techniques use either parallel or perspective ray casting methods. The current implementation of these algorithms is linear along the path of the ray, usually at oblique angles to a 3D block of image data stored in contiguous memory. This can cause cache thrashing in modern processors, as data access is non-linear. Novel approaches can help accelerate these methods, but they remain interactive, especially for perspective methods, only on suitably small volume images. Since the data sets are relatively large, they do not lend themselves to distributed processing because network latency is higher than the processing time. Very high speed networks can reduce the latency.

To accelerate the rendering process, efficient methods are being developed to convert the volumetric (voxel) data into polygonal based 3D models. Key to the success of these techniques is the ability to segment the voxel data into objects suitable for modeling, balancing the size of the final model, based on the need for suitable anatomic detail, against the speed of modern high speed graphics engines to display the models at real time rates. Some anatomical features are difficult or impossible to segment automatically, whereas others can now be segmented with minimal operator interaction. Some segmentation procedures occur in serial steps, such as morphologic erosion followed by morphologic dilation of the 3D image. Such procedures can be pipelined on SMP systems, provided that the data stream is delivered at the output of each processing step prior to completion of the entire step. Otherwise, the pipeline will stall, and no performance gain will be realized.

Once segmentation is complete, a modeling step is performed. This converts the segmented edges in the image data to surface polygons, and often requires extensive processing time. This is due to the "random searching" nature of the task, which grows as the cube of the linear dimensions of the image data. Once polygonal models have been generated, they can be displayed and manipulated in virtual environments, useful for training and surgical planning. But even the best leading edge graphics systems are still only marginally interactive in such virtual environments, due to the large size (and possibly number) of the models. Work continues to further refine the segmentation and tiling processes to accelerate the entire visualization process.

The BIR has attempted to utilize SMP parallel processing, network based distributed processing, and message passing procedures to expand access to processing power not available on a single CPU. Each of these has met with some success. More often, the continuous improvement of computer chips and systems, along with the development of more efficient algorithms, tends to eliminate any


advantage gained in such methods. Since the raw data size is relatively large, and many processing tasks cannot be readily partitioned for parallel computation, the network latencies and extra programming effort combine to constrain active development of such techniques.

This paper describes some of the algorithms and applications that we have developed and applied on high performance computers, and which could benefit greatly from an order of magnitude increase in CPU and system performance.

2. Algorithms

2.1. 3D image processing

Frequency domain image processing by means of the fast Fourier transform (FFT) is an obvious example of a computation-bound process in medical imaging that is well served by parallel techniques. Both X-ray CT and magnetic resonance imaging are heavily dependent upon Fourier domain processing for image reconstruction. In the BIR, however, the most extensive use of the FFT is in 3D deconvolution of volumetric microscope images. Optical sections of thick specimens are always to some extent contaminated by out-of-focus structures outside the image plane. Although optical scanning (confocal) microscopes are specifically designed to reject this out-of-focus light, further detail can be extracted by careful deconvolution with a theoretical 3D point spread function. Because of the difficulty of constructing an accurate point spread function, iterative deconvolution is often more practical than an exact solution, adding further to the computational burden.
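The iterative scheme described above can be sketched in a few lines. The following is a minimal Richardson–Lucy-style deconvolution using FFT-based convolution; it illustrates the general approach only, not the BIR's implementation, and the function name and parameters are ours:

```python
import numpy as np

def richardson_lucy_3d(observed, psf, iterations=30):
    """Iterative deconvolution of a 3D image by a known point spread
    function (PSF), using FFT-based (circular) convolution.  Sketch of
    the kind of iterative scheme described in the text."""
    psf = psf / psf.sum()                       # normalize the PSF
    otf = np.fft.fftn(np.fft.ifftshift(psf))    # PSF centered at the origin
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        # Blur the current estimate and compare it with the observation.
        blurred = np.real(np.fft.ifftn(np.fft.fftn(estimate) * otf))
        ratio = observed / np.maximum(blurred, 1e-12)
        # Correlate the ratio with the PSF and apply the multiplicative update.
        correction = np.real(np.fft.ifftn(np.fft.fftn(ratio) * np.conj(otf)))
        estimate *= correction
    return estimate
```

Each iteration costs two forward/inverse 3D FFT pairs, which is why the volumetric case is compute bound and benefits directly from parallel FFT implementations.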

The Fourier spectrum of an image may also be used to detect certain features in medical images, for example, the scale and orientation of textures. Convolution performed in the spatial domain with a kernel of limited extent forms an algorithmic basis for a large number of important non-linear image filters used in medical applications.
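A limited-extent spatial kernel of the non-linear kind mentioned here can be illustrated by a brute-force 3D median filter (illustrative code, not from the paper):

```python
import numpy as np

def median_filter_3d(volume, radius=1):
    """Non-linear spatial-domain filter with a limited-extent kernel:
    each output voxel is the median of its (2*radius+1)^3 neighborhood.
    Brute-force sketch; neighborhoods are clipped at the volume edges."""
    out = np.empty_like(volume)
    s = volume.shape
    for z in range(s[0]):
        for y in range(s[1]):
            for x in range(s[2]):
                zl, zh = max(0, z - radius), min(s[0], z + radius + 1)
                yl, yh = max(0, y - radius), min(s[1], y + radius + 1)
                xl, xh = max(0, x - radius), min(s[2], x + radius + 1)
                out[z, y, x] = np.median(volume[zl:zh, yl:yh, xl:xh])
    return out
```

Because each output voxel depends only on its local neighborhood, filters of this form partition naturally across processors.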

2.2. 3D image registration

Multimodality images obtained from medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) generally provide complementary characteristic and diagnostic information. However, the multimodal image data sets of the same object(s) are generally obtained at different times, orientations, scales, and extents of coverage. Synthesis of these image data sets into a composite vector-valued image containing these complementary attributes in accurate registration and congruence provides truly synergistic information about the object(s) under examination. This synthesis is accomplished through the use of 3D registration algorithms for biomedical volume images [6,49].

Registration algorithms are composed of two important components which are separable but cannot exist independently: the cost function, which is used to


determine when the volume images are in accurate registration; and the search process, which is used to search through all degrees of freedom in the transformation while trying to find the global minimum of the cost function. Several algorithms exist for each component, most of which are computationally complex and require implementation methodologies that make them sufficiently efficient to compute the registration transformation in a reasonable time.

The registration cost functions can be categorized into two types: surface matching algorithms and voxel matching algorithms. Surface matching algorithms use a predefined (segmented) surface that is common to both of the volume images being matched as the basis for finding the registration transformation [5,6]. This requires the presence of a common structure and consistency of representation of its surface upon which to base the registration. In the intramodality case this is (in most cases) a valid assumption, but with different modalities the surface may have a slightly different representation. Therefore, the surface matching cost function must be robust in the presence of minimal differences in surface representation. The cost function most often used in surface matching algorithms is the distance between points on the common surfaces, with the ideal minimum being a root mean square error distance of zero for all points on the common surfaces. However, to make the algorithm efficient and robust to noisy surface contours, only a small sample of points is used to compute the distance-based cost function. To make the algorithms efficient with respect to cost function computation, many algorithms precompute distances to the surface in one of the volume images, as with a chamfer distance function, which encodes all voxels in the volume image with their respective closest distance to the surface [20,21]. Selected points are then sampled from the other volume image surface (100 points, for example) and, with each transformation step, the distance for each point from the matching surface is simply found through a lookup table operation in the distance image. This provides an efficient implementation of cost function computation for a limited number of surface points, which can be handled by conventional workstation architectures.
However, as the volume image size and the number of surface points increase, so does the computational load, and with this cost function computation being done many times during the search process, the computational burden grows.
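The chamfer-distance lookup described above reduces each cost evaluation to indexing: given a precomputed distance map, the cost of a candidate transformation over the sampled surface points is a gather plus a mean. A sketch with hypothetical names, taking the transform as a 4x4 homogeneous matrix and using nearest-voxel lookup:

```python
import numpy as np

def chamfer_cost(distance_map, points, transform):
    """Surface-matching cost: mean distance-map value at the transformed
    sample points.  `distance_map` encodes, for every voxel, its distance
    to the reference surface; `points` are N x 3 surface samples from the
    other volume.  Sketch of the lookup described in the text."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    moved = (transform @ homog.T).T[:, :3]           # apply rigid transform
    idx = np.clip(np.round(moved).astype(int), 0,    # nearest voxel, clipped
                  np.array(distance_map.shape) - 1)
    return distance_map[idx[:, 0], idx[:, 1], idx[:, 2]].mean()
```

The search process then minimizes this value over the six rigid-body parameters; only the gather is repeated per candidate transform.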

Voxel matching algorithms examine the actual voxel values in each volume image and form a cost function based on a relationship between these paired values [29,32,48,50,51]. The cost function assumes that the relationship described by the function between the paired value sets varies with spatial transformation of the volume images and has a minimum which describes the relationship when the volumes are in registration. In the intramodality case, the simplest cost function to assume is value equality, which has a minimum when all voxel values are equal throughout the volume images. Given noise and scan related image changes, this will never be exactly true, so a reasonable cost function which minimizes global equality of voxel values is to examine the ratio of voxel values and minimize the variance of these ratios [51]. In the intermodality case, several methods have been proposed, including an extension of the voxel ratio method to examine variances within single-valued collections of voxels from one volume image and minimizing the weighted


collection of such variances for all voxel values [51]. A more robust method recently developed examines the relationship in the distribution of voxel value pairs by estimating the entropy in both the joint distribution of paired voxel values and the individual entropies of each volume image's value distribution [29,32,48,50]. The algorithm, called maximization of mutual information, tries to minimize the entropy of the paired value distribution (focus the paired-value clusters) while maximizing the distribution (entropy) of values in each of the individual volume images (spread values over the full range of image data) [29,50]. Entropy is computed by assuming an underlying probability distribution which describes the distribution of the values in each of the cases, the simplest of which is the ratio of the number of (paired or single) voxel values in the current alignment divided by the total number of possible sets of values. In any of these algorithms, the cost function is computed by examining the values of all voxels in both volume images and computing some relationship between them, a very large computational task that begs for high performance computing capabilities. As with the surface matching algorithms, efficiencies have been achieved by sampling a subset of all of the voxels in order to achieve reasonable registration times (often several minutes). However, registrations using voxel value methods would best use all voxels and thus require additional computational capacity.
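The entropies involved can be estimated directly from a joint histogram of paired voxel values, as in this sketch of a mutual information cost (a simplified illustration of the cited methods, with names of our choosing):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two spatially aligned volumes, estimated
    from the joint histogram of paired voxel values.  Registration seeks
    the transformation that maximizes this quantity."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()        # joint probability of paired values
    px = pxy.sum(axis=1)             # marginal distribution of volume a
    py = pxy.sum(axis=0)             # marginal distribution of volume b
    nz = pxy > 0                     # avoid log(0) on empty histogram cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

Every candidate transform requires rebinning all (or a sampled subset of) paired voxels, which is exactly the per-iteration cost the text identifies as the computational burden.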

Several search functions have been used in searching for the minimum of the registration cost function through all the degrees of freedom in the registration transformation. Most algorithms assume rigid body registration, meaning that only 3 degrees of rotation and 3 degrees of translation need be searched. Almost all algorithms use a multiresolution pyramid approach to the search process, starting with volume images that are significantly subsampled and increasing the sample size with each multiresolution step while decreasing the search range about the minimum found at each resolution step. This provides efficiency in the computation of the cost function while also making the cost function minimization process robust to noise and local minima. Several different algorithms are also used to determine the direction of search, given that there are six different directions (independent variables) to search, representing the 6 degrees of freedom in rigid body transformation. Methods which require only function evaluation are simpler and less computationally burdensome than those which require the computation of partial derivatives of the cost function in each of the respective degrees of freedom, but the former converge less rapidly than the latter. Even with efficient multiresolution search strategies, the computational complexity of cost function evaluation through this search process is an example of a problem for which high performance computing solutions are frequently necessary. Furthermore, consideration of other linear degrees of freedom, such as searching through scale, and of non-linear transformations adds substantial computational complexity to the 3D registration problem [32].
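The multiresolution idea, shrinking the search range while refining the sampling at each level, can be illustrated for a single degree of freedom; the real problem searches six simultaneously, and the names below are ours:

```python
import numpy as np

def pyramid_search(cost, lo, hi, levels=4, samples=9):
    """Coarse-to-fine minimization of a 1-D cost function: sample the
    current interval, keep the best point, halve the search range about
    it, and repeat.  Illustrates the multiresolution search strategy
    described in the text for one degree of freedom."""
    best = (lo + hi) / 2.0
    span = (hi - lo) / 2.0
    for _ in range(levels):
        candidates = best + np.linspace(-span, span, samples)
        best = candidates[np.argmin([cost(c) for c in candidates])]
        span /= 2.0                  # shrink the search range at each level
    return best
```

Function-evaluation-only searches of this kind are simple but, as noted above, converge more slowly than derivative-based methods.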

2.3. Image segmentation

The physical properties of a patient's tissues are converted by medical imaging scanners into regular rectilinear grids of numerical values. These values (voxels) differ significantly in representation from the objects referred to as organs or tissues. In


order to visualize the surfaces of 3D organs from the values of the voxels, whether the visualization is mental or computer-mediated, the image data must be segmented, i.e. the voxels making up an object of interest must be separated from all those not making up the object. Segmentation methods span a continuum from manual editing of serial sections to completely automated segmentation of 3D multispectral data by a combination of statistical classification and shape-based spatial modification.

All methods of automated segmentation that use image voxel values or their higher derivatives to make boundary decisions are negatively impacted by spatial inhomogeneity in the linearity of the imaging modality. Preprocessing to correct such inhomogeneity is often crucial to the accuracy of automated segmentation. A powerful algorithm [5] to reduce these effects takes the form of a local comparison of grayscale histogram features relative to the global image histogram. The algorithm is generally most effective at large "window" sizes, requiring the algebraic combination of several hundred neighboring voxels to produce one output voxel. This can be effectively addressed by parallel processing. General linear and non-linear image filters are also often employed to control noise, enhance detail, or smooth object surfaces.

In trivial cases (such as the segmentation of bony structures from CT data) the structure of interest may be easily segmented by selecting an appropriate grayscale threshold, but such simple automation can at best define a uniform tissue type, and the structure of interest usually consists of only a portion of all similar tissue in the image field. Indeed, most organs have at least one "boundary of convention", i.e. a geometric line or surface separating the organ from other structures which are anatomically separate but physically continuous, and it is therefore necessary to support interactive manual editing regardless of the sophistication of the automated segmentation available.

Multispectral image data, either full-color optical images or spatially coregistered medical volume images in multiple modalities, can often be segmented by use of statistical classification methods [30]. Both supervised and unsupervised automated voxel classification algorithms of several types are used. These methods are most useful on polychromatically stained serial section micrographs, because the stains have been carefully designed to differentially color structures of interest with strongly contrasting hues. There are, however, striking applications of these methods using medical images, notably the use of combined T1 and T2 images to image multiple sclerosis lesions.

Voxel-based segmentation is often incomplete, in that several distinct structures may be represented by identical voxel values. Segmentation of uniform voxel fields into subobjects is often accomplished by logical means (i.e. finding independent connected groups of voxels) or by shape-based decomposition [15].
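The "logical means" mentioned here, finding independent connected groups of voxels, amounts to connected-component labeling. A simple breadth-first sketch with 6-connectivity (illustrative only, not the AnalyzeAVW implementation):

```python
from collections import deque
import numpy as np

def connected_components(mask):
    """Label 6-connected groups of voxels in a binary volume.  Splits a
    uniform voxel field (identical values) into separate subobjects by
    connectivity, as described in the text.  Returns (labels, count)."""
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                     # voxel already claimed by a component
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                     # breadth-first flood fill
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = next_label
                    queue.append(n)
    return labels, next_label
```

Each component can then be stored as a separate region in an object map of the kind described below.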

All the models illustrated in this paper have been processed and segmented using the automated and manual tools in AnalyzeAVW [36–38], a comprehensive medical imaging workshop developed by the Biomedical Imaging Resource of the Mayo Foundation. AnalyzeAVW supports the segmentation of volumetric images into multiple object regions by means of a companion image known as an object map,


which stores the object membership information of every voxel in the image. In the case of spatially registered volume images, object maps allow structures segmented from different modalities to be combined with proper spatial relationships.

2.4. Volume rendering

Volume rendering is the predominant methodology utilized in creating 3D visualizations from the volume images acquired in biomedical imaging [7,16,28,39]. Given that the acquired data are composed of volume elements, or voxels, volume rendering techniques provide a direct method of visualization of the 3D volume images without, in many cases, the need for prior segmentation or surface extraction. Given the direct relationship between the voxels rendered via the volume rendering process and the representation of those voxels in the rendered image, other processes can be applied while using the volume rendering as a visualization reference. These processes include visual segmentation, interactive manipulation, and quantitative analysis, the latter of which is crucial to many surgery planning applications.

The volume rendering process can be divided into two components, both of which can take advantage of high performance computing capabilities. The first component is ray casting, the computation of simulated rays that intersect and interact with a set of voxels along the path of each ray in order to select the set of voxels that is rendered in the visualization. Given this set of voxels along the ray path, the second component is the computation of the output rendered value for each ray, dependent on the algorithm being used to compute the rendered image.

The ray casting process is inherently parallelizable and can take full advantage of the power of parallel architectures in high performance computing systems. There are two geometric models used in the ray casting process: parallel ray casting and divergent ray casting, as shown in Fig. 2. In each model, a rendering "screen" is assigned along the ray casting direction. This screen consists of a given number of pixels, which determines the size of the output rendered image. A ray is cast through each one of the screen pixels to provide a value to that pixel determined by the type of rendering algorithm being used. In the case of parallel ray casting, the eye viewpoint is considered to be at infinity, so each ray is strictly parallel to the others and can be represented as a single line equation with a different starting point (at each render screen pixel center). Given this, efficiencies can be built into the implementation of the ray casting process to compute the first ray (and the intersection of that ray with the voxels along its path), and index all other rays (and their voxel intersections) simply by changing the starting point for the ray. The "screen to scene" transformation, which produces the representation of the ray (line) in the coordinate system of the "scene" (volume image), need only be done once. Given this parallel geometry, renderings can be computed rapidly on conventional workstation systems (1–20 renderings per second). Furthermore, interactions which use the rendered image as a visualization reference for other types of manipulations, such as visual segmentation and interactive movement, can be efficiently implemented by determining the portion of the render screen affected by the interaction and only


casting new rays through the part of the render screen that has changed. Given the independence of each ray computation, this process could take advantage of high performance systems that provide parallel processing capabilities, resulting in real time rendering performance.
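The parallel-geometry efficiency described above, computing the per-sample offsets along the ray once and varying only the starting point per screen pixel, can be sketched as follows. Nearest-neighbor sampling and all names are our assumptions:

```python
import numpy as np

def parallel_raycast(volume, direction, screen_origins, n_samples, step):
    """Parallel-geometry ray casting: every ray shares one direction
    vector, so the sample offsets along the ray are computed once and
    reused for every screen pixel.  Returns the sampled voxel values
    (nearest neighbor) for each ray; samples outside the volume are 0."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    offsets = np.arange(n_samples)[:, None] * step * direction  # computed once
    out = np.zeros((len(screen_origins), n_samples))
    for i, origin in enumerate(screen_origins):                 # one ray per pixel
        pts = np.round(origin + offsets).astype(int)
        inside = np.all((pts >= 0) & (pts < np.array(volume.shape)), axis=1)
        out[i, inside] = volume[tuple(pts[inside].T)]
    return out
```

A maximum intensity projection is then simply the per-ray maximum of the returned samples; since no ray touches another ray's state, the outer loop parallelizes trivially.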

The divergent ray casting geometry adds more computational complexity to the rendering process, but is crucially important for rendering structures at close range (close to the eye viewpoint) [17]. The perspective gained via the divergent ray

Fig. 2. Divergent (perspective) and parallel geometries for ray casting in volume rendering.


casting process provides important visual cues when the eye viewpoint comes close to the structure being rendered, including cases where the eye viewpoint is inside the structure (as in virtual endoscopy) [1]. The intersections of the ray with a given set of voxels need to be computed for all rays during the rendering process. Given the divergence, each ray will intersect voxels at various locations and with varying path lengths. Each ray can be computed independently of the others, allowing the algorithm to take advantage of parallel computation capabilities and, in conjunction with high performance computing systems, reduce rendering times from several minutes to a few seconds.

The algorithms used to determine the output value for the rendered pixel along any given ray can be divided into two classes: transmission (or projection) algorithms and reflectance (or surface) algorithms [40]. Transmission algorithms integrate some component value of multiple voxels along each ray path into the output rendered value. Volumetric compositing, for example, weights the contribution of each voxel along the ray path by opacity and color functions that are established based on threshold levels for given tissue types. Summed voxel projections average the voxels along the ray path and output that average, while maximum intensity projections (MIP) output the maximum voxel value along the ray path. In all cases, some computed value of all voxels along the ray path is integrated into the output value, a process that could benefit from systems with high performance in arithmetic computation and memory access.
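The transmission algorithms differ only in how the samples along one ray are combined: a mean for summed voxel projections, a maximum for MIP, and opacity-weighted accumulation for compositing. A front-to-back compositing sketch for one ray, with an illustrative opacity function and names of our choosing:

```python
def composite_ray(samples, opacity):
    """Front-to-back volumetric compositing of one ray: each sample's
    value is weighted by its opacity and by the transparency accumulated
    in front of it.  `opacity` maps a voxel value to [0, 1]; threshold-
    based opacity functions per tissue type are one common choice."""
    color, transparency = 0.0, 1.0
    for v in samples:
        a = opacity(v)
        color += transparency * a * v     # contribution attenuated by what is in front
        transparency *= (1.0 - a)
        if transparency < 1e-3:           # early termination: ray is effectively opaque
            break
    return color
```

The early-termination test is one of the standard accelerations for this class of algorithm: once accumulated opacity saturates, the remaining samples along the ray contribute nothing visible.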

Reflectance (surface) volume rendering algorithms utilize a light model and compute a normal to each voxel on the surface being rendered via information in the neighborhood of that voxel. The normal is computed as the gradient of the level surface function through that voxel, which is estimated by using the surrounding voxel values to compute the directional gradient that orients the normal at that voxel. This normal is then used with the light model to compute a shading value for each voxel rendered along each ray path. The computation of the gradient is quite straightforward (subtraction of paired voxel values about the central voxel being rendered) and, in conjunction with the computation of a shading value from the lighting model, could take advantage of high performance systems with efficient memory access and fast arithmetic computation.
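The per-voxel computation reduces to a central-difference gradient followed by a dot product with the light direction; the simple Lambertian light model below, and all names, are our assumptions:

```python
import numpy as np

def shaded_value(volume, voxel, light_dir):
    """Reflectance-style shading for one rendered surface voxel: the
    normal is the central-difference gradient of the voxel values
    (subtraction of paired values about the voxel), and a Lambertian
    light model turns it into a shade in [0, 1]."""
    z, y, x = voxel
    grad = np.array([
        volume[z + 1, y, x] - volume[z - 1, y, x],
        volume[z, y + 1, x] - volume[z, y - 1, x],
        volume[z, y, x + 1] - volume[z, y, x - 1],
    ], dtype=float)
    norm = np.linalg.norm(grad)
    if norm == 0:
        return 0.0                        # flat neighborhood: no defined normal
    normal = grad / norm
    light = np.asarray(light_dir, dtype=float)
    light = light / np.linalg.norm(light)
    return max(0.0, float(normal @ light))  # Lambertian: N . L, clamped at 0
```

The six neighbor reads per voxel are why the text singles out memory access, alongside arithmetic throughput, as the performance bottleneck.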

Interactive exploration of the 3D visualization space, utilizing the renderings as visualization references for the exploration, provides another computational challenge that may best be handled by high performance computing systems [24]. Given the direct connection between a pixel in the rendered image and the set of voxels from which it was derived, the rendered image can be used to select sets of voxels in the volume image based on their rendered appearance. For example, directed measurements of distance (linear and curvilinear), surface area (planar and curved), and volume (via region growing) can be made directly on the volume rendered image, mapping the selected pixels back to their respective volume image voxels to compute the measurement. Furthermore, interactive visual segmentation can be achieved with drawing tools used directly on the rendering but reflected back into the volume image itself. Once components are segmented, they can further be moved via translation and rotation in order to simulate procedures in applications such as surgery.

1296 M. Stacy et al. / Parallel Computing 24 (1998) 1287–1321


2.5. Modeling

For the imaging scientist, useful reality may be 80 million polygons per frame [52] displayed at a rate of 30 frames per second (2400 million polygons per second). Unfortunately, current high-end hardware is capable of displaying only just over 10 million polygons per second. So while currently available rendering algorithms can generate photorealistic images from volumetric data [8,14,17], they cannot sustain the necessary frame rates. Thus, the complexity of the data must be reduced to fit within the limitations of the available hardware.

A number of algorithms have been developed in the BIR, of which three will be discussed here, for the production of efficient geometric (polygonal) surfaces from volumetric data. An efficient geometric surface is one that contains a prespecified number of polygons and accurately reflects the size, shape and position of the object being modeled while being sufficiently small so as to permit real time display on a modern workstation. Two of these algorithms use statistical measures to determine an optimal polygonal configuration while the third is a refinement of a simple successive approximation technique. Our modeling algorithms assume that the generation of polygonal surfaces occurs in four phases: segmentation, surface detection, feature extraction and polygonization. Of these phases, the modeling algorithms manage the last three.

For all the methods, surface detection is based on the binary volume produced by segmentation. The object's surface is the set of voxels where the change between object and background occurs. Because the algorithms rely primarily on local features over a fairly small region of an object surface, there is significant potential for parallelism to improve the efficiency of these processes. For the statistically based methods, feature extraction determines the local surface curvature for each voxel in the object's surface. This calculation transforms the binary surface into a set of surface curvature weights and eliminates those surface voxels that are locally ``flat'' [1].
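Surface detection from the binary segmentation might be sketched as follows, assuming a voxel belongs to the surface when at least one of its six face neighbours is background (the exact neighbourhood used by the BIR algorithms is not specified here); each voxel's test is independent, which is the locality that makes the step parallelizable:

```python
import numpy as np

def surface_voxels(seg):
    """Surface of a binary segmentation: object voxels with at least one
    background voxel among their six face neighbours."""
    pad = np.pad(seg, 1)  # zero border so edge voxels see background
    core = pad[1:-1, 1:-1, 1:-1].astype(bool)
    has_bg_neighbour = np.zeros_like(core)
    for axis in range(3):
        for shift in (-1, 1):
            neighbour = np.roll(pad, shift, axis)[1:-1, 1:-1, 1:-1]
            has_bg_neighbour |= ~neighbour.astype(bool)
    return core & has_bg_neighbour

seg = np.zeros((8, 8, 8), dtype=np.uint8)
seg[2:6, 2:6, 2:6] = 1               # a solid 4x4x4 cube
surf = surface_voxels(seg)
assert surf.sum() == 4**3 - 2**3     # every cube voxel except the 2x2x2 core
```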

Surface voxels are assigned a weight based on their deviation from being flat. The magnitude of the weight gives an indication of the sharpness of the curvature and the sign an indication of the direction of curvature.

2.5.1. Kohonen network

A Kohonen network or self-organizing map is a common type of neural network.

It maps a set of sample vectors from an N-dimensional space of real numbers onto a lattice of nodes in an M-dimensional array. Each node in the array has an N-dimensional position vector. An arbitrary N-dimensional vector is then mapped onto the node with the nearest position vector [26,9]. This node is referred to as the best matching unit (bmu).

By applying the algorithm to a given set of sample vectors, the input space is divided into regions that share a common nearest position vector. This is known as Voronoi tessellation. The usefulness of the network is that the resultant mapping preserves the topology and distribution of the position vectors. That is, adjacent vectors are mapped to adjacent nodes, and adjacent nodes will have similar position vectors. If P(x) is an unknown probability distribution on the vectors from which any number of sample vectors are drawn, then, to preserve distribution, each node has an equal probability of being mapped to any sample vector drawn from P(x). This means that the relative density of position vectors approximates P(x).
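The best-matching-unit search and one Kohonen update step can be sketched as follows; the lattice size, learning rate, and Gaussian neighbourhood function are illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2D lattice of nodes (M = 2), each with a 3D position vector (N = 3).
grid_h, grid_w = 8, 8
nodes = rng.random((grid_h, grid_w, 3))

def best_matching_unit(nodes, sample):
    """Lattice index of the node whose position vector is nearest the sample."""
    d = np.linalg.norm(nodes - sample, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train_step(nodes, sample, lr=0.1, radius=2.0):
    """Move the bmu and its lattice neighbours toward the sample, weighted
    by a Gaussian of lattice distance (one Kohonen update)."""
    bi, bj = best_matching_unit(nodes, sample)
    ii, jj = np.mgrid[0:grid_h, 0:grid_w]
    lattice_d2 = (ii - bi) ** 2 + (jj - bj) ** 2
    h = np.exp(-lattice_d2 / (2 * radius ** 2))[..., None]
    return nodes + lr * h * (sample - nodes)

sample = np.array([0.5, 0.5, 0.5])
before = np.linalg.norm(nodes[best_matching_unit(nodes, sample)] - sample)
nodes = train_step(nodes, sample)
after = np.linalg.norm(nodes[best_matching_unit(nodes, sample)] - sample)
assert after <= before   # the winning node moved toward the sample
```

It is the neighbourhood term `h`, decaying with distance in the lattice rather than in the data space, that makes adjacent nodes acquire similar position vectors.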

2.5.2. Growing net

Due to the nature of the Kohonen network, a surface with a bifurcation or a hole will exhibit distortions as the network struggles to twist a fixed topology to the surface. This problem was observed in an initial Kohonen-based tiling algorithm [1]. To correct this, a second algorithm based on the work of Fritzke [9,10] has been implemented.

Using Competitive Hebbian Learning, the network is adapted to a set of sample vectors through the addition and deletion of edges or connections. To define the topological structure of the network, a set of unweighted edges is defined and an edge aging scheme is used to remove obsolete edges during the adaptation process. The resultant surface is the set of all polygons where a single polygon is defined by any three nodes connected by edges.

The network is initialized with three nodes and their edges, at random positions in R^N, which form a single triangle. A sample signal is selected from the sample vectors at random and the nearest and the second nearest nodes are determined. The ages of all edges emanating from the nearest node are incremented by adding the squared distance between the sample signal and the nearest node. The nearest node and its direct topological neighbors are moved towards the signal by respective fractions of the total distance.

If the nearest node and second nearest node are connected by an edge, that edge's age is set to zero. Otherwise a new edge connecting the nodes is created, as well as all possible polygons resulting from this edge. All edges, and associated polygons, with an age greater than a_max are removed. If this results in orphaned nodes (nodes without any connecting edges), those nodes are removed as well. If the number of signals s presented to the net is an integer multiple of X, the frequency of node addition, a new node is inserted into the network between the node with the maximum accumulated error and its direct neighbor with the largest error variable.

Edges connecting these nodes are inserted, replacing the original edges. Additional edges and polygons are added to ensure that the network remains a set of 2D simplices (triangles). The error variables of the two original nodes are reduced by multiplying them by a constant (empirically found to be 0.5 for most cases), and the error variable of the new node is initialized to that of the node with the maximum accumulated error. At this point, all error variables are decreased by multiplying them by a constant; typically a value of 0.995 is adequate for most cases. Since the connections between the nodes are added in an arbitrary fashion, a post-processing step is required to re-orient the polygonal normals. A method described by Hoppe [18] has been utilized with great success.
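One adaptation step of a growing net of this kind might look as follows. This is a simplified sketch after Fritzke: node insertion, polygon bookkeeping, and error decay are omitted, and the data structures, parameter values, and names are ours:

```python
import numpy as np

def adapt(nodes, edges, signal, eps_b=0.2, eps_n=0.006, a_max=50.0):
    """One growing-net step: nodes is a dict id -> position vector,
    edges is a dict frozenset({a, b}) -> age."""
    ids = list(nodes)
    dists = {i: np.linalg.norm(nodes[i] - signal) for i in ids}
    s1, s2 = sorted(ids, key=dists.get)[:2]   # nearest, second nearest
    # Age edges emanating from the nearest node by the squared distance.
    for e in edges:
        if s1 in e:
            edges[e] += dists[s1] ** 2
    # Move the winner and its direct topological neighbours toward the signal.
    nodes[s1] = nodes[s1] + eps_b * (signal - nodes[s1])
    for e in edges:
        if s1 in e:
            (n,) = e - {s1}
            nodes[n] = nodes[n] + eps_n * (signal - nodes[n])
    # Refresh (or create) the edge between the two winners, then prune old edges.
    edges[frozenset((s1, s2))] = 0.0
    for e in [e for e, age in edges.items() if age > a_max]:
        del edges[e]
    return s1, s2

nodes = {0: np.zeros(3), 1: np.ones(3), 2: np.array([0.0, 1.0, 0.0])}
edges = {frozenset((0, 1)): 0.0, frozenset((1, 2)): 0.0, frozenset((0, 2)): 0.0}
s1, s2 = adapt(nodes, edges, np.array([0.1, 0.1, 0.1]))
assert s1 == 0                                # node 0 is the best match
assert edges[frozenset((s1, s2))] == 0.0      # winner edge refreshed
```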

2.5.3. Deformable adaptive modeling

Algorithms such as the growing net described above reconstruct a surface by exploring a set of data points and imposing a structure on them by means of some local measure. While these methods can achieve a high degree of accuracy, they can be adversely affected by noise or other perturbations in the surface data. Deformable mesh based algorithms, like our Kohonen-based method, are limited to surfaces that are homomorphic to the initial mesh's topology if they are to successfully reconstruct the surface. Based on the work of Algorri and Schmitt [2], an algorithm has been developed that uses a local technique to recover the initial topology of the data points and applies a deformable modeling process to reconstruct the surface.

Tiling begins with a thresholded volumetric data set. The data are thresholded so that the voxels comprising the object of interest are set to 1 and all other voxels are set to 0. The thresholding, as well as any segmentation and/or image editing, is performed as a preprocessing step. The non-zero voxels form the input data space. Partitioning the data space into a set of cubes creates an initial mesh. The size of the cubes determines the resolution of the resultant surface; the smaller the cube, the higher the resolution (see Fig. 3). A cube is labeled as a data element if it contains at least one data point. From the set of labeled cubes, a subset of face cubes is identified. A face cube is any cube that has at least two sides without adjacent neighbors. By systematically triangulating the center points of each face cube, a rough approximation of the surface is generated. This rough model retains the topological characteristics of the input volume and forms the deformable mesh.

Fig. 3. Image voxels and related surfaces at different resolutions.
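The initial partitioning step can be sketched directly: the data space is divided into cubes, and a cube is labeled as a data element when it contains at least one data point (function and parameter names are ours):

```python
import numpy as np

def label_data_cubes(points, cube_size, grid_shape):
    """Partition the data space into cubes of edge length cube_size;
    a cube is a data element if it contains at least one data point."""
    labeled = np.zeros(grid_shape, dtype=bool)
    for p in points:
        labeled[tuple(np.asarray(p) // cube_size)] = True
    return labeled

points = [(1, 1, 1), (1, 2, 1), (9, 9, 9)]
cubes = label_data_cubes(points, cube_size=4, grid_shape=(3, 3, 3))
assert cubes.sum() == 2   # two occupied cubes: (0,0,0) and (2,2,2)
```

Shrinking `cube_size` raises the resolution of the resulting mesh, exactly as the text describes.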

The adaptive step uses a discrete dynamic system constructed from a set of nodal masses, the mesh's vertices, which are interconnected by a set of adjustable springs. This system is governed by a set of ordinary differential equations of motion (a discrete Lagrange equation) that allows the system to deform through time. Given a mass value, a damping coefficient and the total internal force on any node due to the spring connections to its neighboring nodes, the discrete Lagrange function describes the external force applied at the node and serves to couple the mesh to the input data. The total internal force is the sum of the spring forces acting between the node and its neighbor node(s). Given two vertex positions, the spring forces can be determined.

In our mass–spring model, the set of coupled equations is solved iteratively over time using a fourth order Runge–Kutta method until all masses have reached an equilibrium position. To draw the mesh toward the data's surface and to recover fine detail, each nodal mass is attached to the surface points by an imaginary spring. The nodal masses react to forces coming from the surface points and from the springs that interconnect them; thus the system moves as a coherent whole. For most data sets, the equations do not converge to a single equilibrium position; rather, they tend to oscillate around the surface points. To accommodate this, the algorithm is terminated when an error term, usually a measure of the total force, has been minimized.
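A one-node version of the mass–spring system, integrated with fourth order Runge–Kutta, illustrates the adaptive step; the mass, stiffness, and damping values here are arbitrary illustrative choices, not the paper's parameters:

```python
import numpy as np

# One nodal mass coupled to a fixed data point by a spring, with damping:
#   m * x'' = -k * (x - target) - c * x'
# integrated with fourth order Runge-Kutta toward equilibrium.
m, k, c = 1.0, 4.0, 2.0
target = np.array([1.0, 2.0, 3.0])

def deriv(state):
    x, v = state
    return np.array([v, (-k * (x - target) - c * v) / m])

state = np.array([np.zeros(3), np.zeros(3)])   # position, velocity
dt = 0.05
for _ in range(2000):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

assert np.linalg.norm(state[0] - target) < 1e-6   # settled on the data point
```

With many interconnected nodes the derivative would also sum neighbour spring forces, and, as the text notes, the system tends to oscillate rather than settle exactly, so termination is based on a force-error threshold.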

3. Applications

3.1. Basic science research

The imaging computer significantly leverages the utility of the optical microscope (medicine's oldest ``imaging system''). Combined with scanning microscope technologies such as the confocal microscope, the 3D reconstruction of subcellular structures in living tissue is possible. By the use of immunofluorescent dyes, the spatial distribution of chemical activity (such as metabolic processes or the binding of neurotransmitters) can be imaged in relationship to the cellular structures. This opens a new realm in the study of cellular processes and functions as they relate to the physical structures and properties of a tissue or organ.

The types of physical and chemical relationships studied at this level require high resolution over extremely wide fields, i.e. relationships between objects that differ in size by an order of magnitude or more. The examples which follow progress from the study of single cells in vivo through ascending levels of scale to the imaging of (still microscopic) tissue fields containing structures made up of thousands of cells.

The visualization of microscopic anatomic structures in three dimensions is at once an undemanding application of VR and an example of the greatest potential of the technology. As yet, the visualization aspect of the task is paramount; little of the interactive nature of VR has been exploited, and screen-based display is generally adequate to the visualization task. However, the ``reality'' portrayed is not a simulation of a real-world experience, but in fact an ``enhanced reality'', a perceptual state not normally available in the real world.

3.1.1. Neurons

The study of neuronal function has advanced to the point where the binding sites for specific neurotransmitters may be visualized as a 3D distribution on the surface of a single neuron. Visualization of the architectural relationships between neurons is less advanced. Nerve plexes, where millions of sensory nerve cells are packed into a few cubic millimeters of tissue, offer an opportunity to image a tractable number of cells in situ.

In one study in the BIR, in collaboration with the Department of Physiology [46], an intact superior mesenteric ganglion from a rabbit was imaged with confocal microscopy in an 8 × 4 3D mosaic. Each mosaic ``tile'' consisted of a stack of 64 512 × 512-pixel confocal images. In all, 20 complete neurons were located in the mosaic. As each neuron was found, a subvolume containing that neuron was constructed by fusing portions of two or more of the original mosaic subvolumes. Each neuron was converted into a triangularly tiled surface and repositioned globally in virtual space. When completed, the virtual model consisted of 20 discrete neurons in their positions as found in the intact tissue, as shown in Fig. 4. Several different types of neuronal shape are evidenced, with most neurons easily grouped by type.

Fig. 4. Neurons in situ in the inferior mesenteric ganglion of a rat.

3.1.2. Corneal cells

The density and arrangement of corneal cells is a known indicator of the general health of the cornea, and it is routinely assessed for donor corneas and potential recipients. The corneal confocal microscope [34] is a reflected-light scanning aperture microscope fitted for direct contact with a living human cornea. The image it captures is a 3D tomographic optical image of the cornea. The sectional images represent a section about 15 μm thick, and they may be captured at 1 μm intervals through the entire depth of the cornea. This instrument is a potentially valuable new tool for assessing a wide range of corneal diseases.

The purpose of this project, a joint collaboration between the Department of Ophthalmology and the BIR, is to develop a software system for the automated measurement of local keratocyte nuclear density in the cornea. In addition, visualizations have been produced of the keratocyte packing structure in the intact human cornea, as demonstrated in Fig. 5. Although the images are inherently registered, eye movement tends to corrupt registration and requires detection and correction. In-plane inhomogeneity (hot spots) and progressive loss of light intensity with image plane depth are easily corrected for. Keratocyte nuclei are automatically detected and counted, with size filters to reject objects too small to be nuclei and detection of oversize objects, which are recounted based on their area.
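The detection-and-counting rule can be sketched as a connected-component pass with size filters; the thresholds and the oversize recount rule below are illustrative stand-ins, not the project's calibrated values:

```python
import numpy as np
from collections import deque

def count_nuclei(mask, min_size, max_size):
    """Count connected bright regions, rejecting objects too small to be
    nuclei and recounting oversize objects by area (a crude stand-in for
    the recount rule described in the text)."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx]:
                continue
            # Flood-fill one 4-connected component and measure its area.
            area, q = 0, deque([(sy, sx)])
            mask[sy, sx] = False
            while q:
                y, x = q.popleft()
                area += 1
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                        mask[ny, nx] = False
                        q.append((ny, nx))
            if area < min_size:
                continue      # too small to be a nucleus: rejected
            count += max(1, round(area / max_size)) if area > max_size else 1
    return count

img = np.zeros((20, 20), dtype=np.uint8)
img[2:5, 2:5] = 1    # one nucleus (area 9)
img[10, 10] = 1      # a speck (area 1): rejected by the size filter
img[8:16, 2:4] = 1   # an oversize blob (area 16): recounted as two
assert count_nuclei(img, min_size=4, max_size=9) == 3
```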

It has been found that both global and local automated density counts in rabbit corneas correlate well with those reported by other investigators and with conventional histologic evaluation of cornea tissue from the same rabbits scanned by confocal microscopy. A decrease in keratocyte density toward the back of the cornea has been observed, similar to that reported by previous investigators.

Fig. 5. Keratocyte nuclei in the in vivo cornea.



Fig. 5 illustrates the stacking pattern of keratocyte nuclei found in a living normal human cornea. Other bright structures too small to be cell nuclei are also seen when this image is viewed in color.

3.1.3. Trabecular tissue analysis in glaucoma

The trabecular tissue of the eye is a ring of spongy, fluid-filled tissue situated at the junction of the cornea, iris and sclera. It lies between the anterior chamber of the eye and the canal of Schlemm, which is the conduit for aqueous humor back into the circulatory system. Because this tissue lies in the only outflow path for aqueous humor, it has long been implicated in the eye disease glaucoma, in which the interior ocular pressure rises. Many investigators have noted changes in this tissue linked to glaucoma, but whether the changes are cause or symptom of the pressure rise is not known. The tissue exhibits a resistance to flow much greater than that of a randomly porous material having the same volumetric proportion of fluid space to matrix, implying there is some sort of funnel or sieve structure in the tissue, but this structure has never been revealed by 2D analysis.

In this project, another involving the BIR and Ophthalmology, several hundred 1 μm stained serial sections of trabecular tissue were digitized, and 60 μm sections of trabecular tissue were imaged as volume images using the confocal microscope. The stained serial sections are superior for the extent of automated tissue type segmentation possible (trabecular tissue consists of collagen, cell nuclei, cell protoplasm, and fluid space), although the variations in staining and microscope conditions required significant processing to correct. The confocal images were perfectly acceptable for segmenting fluid space from tissue, however, and their inherent registration and small section-to-section variation in contrast proved superior for an extended 3D study.

The architecture of the tissue is so complex that attempts to unravel the entire tissue in order to analyze the architecture of the connected fluid space were abandoned. It was observed that the fluid space in all specimens is continuous from the anterior chamber through the trabecular tissue into Schlemm's canal. However, after a morphometric analysis in which small chambers were successively closed, it was determined that the interconnection was maintained by very small chambers less than 3 μm in diameter. There are a large number of these narrowings and they occur in all regions of the tissue, but all specimens examined showed disconnection after closing off all chambers 3 μm or smaller. In the left-hand panel of Fig. 6, before any morphologic processing, all of the dark fluid space is interconnected; other colors illustrate small cul-de-sacs in the remaining fluid space. In the right-hand panel, the same analysis is performed after opening all chambers smaller than 2 μm, showing a loss of connectivity between the anterior chamber and the canal of Schlemm. This project is uncovering clues about the normal and abnormal function of this trabecular tissue, and how it relates to the symptoms and progression of glaucoma.
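The morphometric experiment can be mimicked with a binary morphological opening: channels narrower than the structuring element are closed off, disconnecting the two chambers. This sketch uses a cross-shaped (4-neighbour) element in 2D and synthetic data; the actual study worked in 3D on segmented fluid space:

```python
import numpy as np

def binary_opening(mask, r=1):
    """Morphological opening (erosion then dilation) with a cross-shaped
    (4-neighbour) structuring element, applied r times each."""
    def erode(m):
        out = m.copy()
        for axis in (0, 1):
            for s in (-1, 1):
                out &= np.roll(m, s, axis)
        return out
    def dilate(m):
        out = m.copy()
        for axis in (0, 1):
            for s in (-1, 1):
                out |= np.roll(m, s, axis)
        return out
    out = mask
    for _ in range(r):
        out = erode(out)
    for _ in range(r):
        out = dilate(out)
    return out

# Two fluid chambers joined by a 1-voxel-wide channel.
fluid = np.zeros((9, 15), dtype=bool)
fluid[2:7, 1:5] = True      # "anterior chamber"
fluid[2:7, 10:14] = True    # "canal"
fluid[4, 5:10] = True       # narrow interconnection
opened = binary_opening(fluid)
assert opened[4, 2] and opened[4, 12]    # the large chambers survive
assert not opened[4, 6:9].any()          # the narrow channel is broken
```

A connectivity test on `opened` (e.g. a flood fill from one chamber) would then report the disconnection that the study observed after closing the small chambers.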

3.1.4. Prostate microvessels

New diagnostic tests have increased the percentage of prostate cancers detected, but there is no current method for assessing in vivo the widely varying malignant potential of these tumors. Therefore, improved diagnosis [12,47] has led to removing more prostates, rather than the improved specificity of diagnosis that might ultimately lead to fewer surgeries. One known histologic indicator of malignancy is the density of microvessels feeding the tumor. This feature could conceivably lead to methods of characterizing malignancy in vivo, but the physiology underlying angiogenesis is not fully understood.

In this project, a collaboration between the BIR and the Department of Pathology [23,24], several hundred 4 μm serial sections through both normal and cancerous regions of tissue excised by retropubic prostatectomy were differentially stained with antibodies to factor VIII-related antigen to isolate the endothelial cells of blood vessels. The sections were digitized through the microscope in color, equalized for variations in tissue handling and microscope parameters, segmented, spatially coregistered section-to-section, and reconstructed into 3D views of both normal and tumor-feeding vessel beds.

The two types of vessel trees are visually distinctive, with the tumor-associated neovasculature appearing much more twisted and tortuous than the normal vessels (see Fig. 7). Furthermore, measurement of the instantaneous radius of curvature over the entire structure bears out the visual intuition, in that the neovasculature exhibits a larger standard deviation of local curvature (i.e. more change in curvature) than the normal vessels.

3.2. Surgical planning, simulation, and virtual reality based navigation

Computers are among the most important tools surgeons use when they plan and perform surgery [27,41]. Computers can now be used to create highly accurate 3D images of the interior of patients' bodies. Surgeons can use these images to determine the nature and extent of the patients' medical problems, the most effective treatment methods, and potential complications. Designing surgical procedures in this way is invaluable in complex cases where, in the past, complications were not readily visible until after surgery had begun. Now, surgeons can combine 3D images of the patients with virtual reality equipment to practice surgical procedures and effectively evaluate the results of their operations [42]. It is in these areas that high performance computing has a significant impact on the practice of surgery.

Fig. 6. Connected fluid space in human trabecular tissue before (left) and after (right) morphological opening.

An important factor in the increased use of computers to improve the effectiveness of surgical procedures has been advancements in hardware and software technology. The improving accuracy and resolution of medical imaging technology, such as in CT and MRI systems, have also been important. The combination of these factors has yielded a powerful and practical environment for surgical planning, simulation, treatment, and evaluation, which requires high performance computing for effective implementation.

The following examples demonstrate collaborative projects between the BIR and the Department of Surgery, illustrating clinical applications where high performance computing will play an important role in advancing the current planning systems to full rehearsal and intraoperative guidance systems. The technology of choice for facilitating the implementation of these powerful systems is an integration of devices that will augment the surgeon's ability to interact with the patient intraoperatively, using high performance computing platforms that can deliver the imagery with sufficient speed and accuracy to provide a synergistic visualization experience. Virtual reality devices continue to evolve to provide the necessary tools to deliver this integrated surgical solution.

Virtual reality computer-assisted surgery systems are currently being developed to assist surgeons during neurologic, craniofacial, orthopedic, thoracic, and urologic surgery. Such systems will allow the surgeon to interactively visualize the 3D rendering of CT and MRI volume data with hands-free manipulation of the virtual display. The surgeon will be able to scale, orient, and position prescanned body imagery on-line in real time from any desired perspective. The clinical goal is dynamic fusing of the 3D imagery with the actual patient in the operating room, guided by the position of the surgeon, the current direction of the surgeon's vision, and the position of intraoperative devices such as surgical probes and other tools. This customized integration of VR technology and high performance computing will permit on-line access to the preoperative plan and allow the measurements and analysis done preoperatively to be updated with new, real time intraoperative data.

Fig. 7. Microvasculature supplying normal (right) and cancerous (left) prostate gland.

The combination of high quality 3D volume images and high performance computing provides an ideal opportunity for the development of powerful yet practical systems for medical image visualization, manipulation, and measurement in surgical planning, simulation, and intraoperative guidance systems.

3.2.1. Craniofacial surgery planning and evaluation

Craniofacial surgery involves surgery of the facial and cranial skeleton and soft tissues. It is often done in conjunction with plastic surgical techniques to correct congenital deformities, or for the treatment of deformities caused by trauma, tumor resection, infection and other acquired conditions. Craniofacial surgical techniques are often applied to other bony and soft tissue body structures.

Currently, preoperative information is most often acquired using X-ray CT scanning for the bony structures, with MRI used for imaging the soft internal tissues. Although the information provided by the scanners is useful, preoperative 3D visualization of the structures involved in the surgery provides additional valuable information [3,31]. Also, 3D visualization facilitates accurate measurement of structures of interest, allowing for the precise design of surgical procedures. Presurgical planning also minimizes the surgery's duration [11], which reduces the cost of the operation and the chance of postoperative complications.

Fig. 8 demonstrates the use of 3D visualization techniques in the planning and quantitative analysis of craniofacial surgery. Data acquired from sequential adjacent scans using conventional X-ray CT technology provide the 3D volume image from which the bone can be directly rendered. The rendering in the upper left of Fig. 8 demonstrates the usefulness of direct 3D visualization of skeletal structures for the assessment of defects, in this case the result of an old facial fracture. One approach to planning the surgical correction of such a defect is to manipulate the 3D rendering of the patient's cranium. Using conventional workstation systems, surgeons can move mirror images of the undamaged structures on the side of the face opposite the injury onto the damaged region [42]. This artificial structure can be shaped using visual ``cutting tools'' in the 3D rendering. Such tailored objects can then be used for simulation of direct implantation, as shown in the different views of the designed implant in the upper right, lower left, and lower center renderings in Fig. 8. Accurate size and dimension measurements, as well as precise contour shapes, can then be made for use in creating the actual implants, often using rapid prototyping machinery to generate the prosthetic implant.

This type of planning and limited simulation is done today on standard workstation systems. The need for high performance computing systems is demonstrated by the advanced capabilities necessary for direct rehearsal of the surgical procedure, computation and manipulation of deformable models (elastic tissue preparation), and the associated application of the rehearsed plan directly during the surgical procedure. The rehearsal of the surgical plan minimizes or eliminates the need to design complex plans in the operating room, while the patient is under anesthesia with, perhaps, the cranium open. Further application of the surgical plan directly in the operating room using computer-assisted techniques could dramatically reduce the time of surgery and increase the chances of a successful outcome.

3.2.2. Intraoperative neurosurgical guidance systems

Intraoperative neurosurgical guidance systems have been in existence for several years and are continually challenged by increasing computational demands, due both to the real time interactions necessary for true computer-assisted neurosurgery and to the increasing size and complexity of the volume image data sets acquired for patient-specific anatomy [27,41]. Offline surgical planning and simulation can be achieved on most common workstation platforms and are often used for preprocessing of image data for intraoperative guidance [11]. But high performance systems are necessary to create the advanced visualizations useful for intraoperative guidance, integrated through real-time localization devices which register the imagery to the patient and to the position and orientation of the surgeon and surgical devices [25,27].

Fig. 8. Presurgical bone graft planning using 3D renderings from a CT scan. A prosthesis is designed and then manufactured using these highly accurate renderings.

An example of an intraoperative guidance technique that requires high performance computing is interactive computation of `line of sight' oblique planar images positioned at the tip of either a probe or surgical instrument and oriented perpendicular to the long axis of the instrument. Such oblique slices provide the neurosurgeon with a direct visualization of the image data along the path of surgical approach. Controlling the position along the long axis allows the image to be generated ahead of the surgical instrument by a given distance, permitting the surgeon to `look ahead' at anatomic structures that lie along the current approach path. Correlation with an interactive display of segmented, 3D rendered visualizations, which also follow instrument orientation and depict instrument location graphically, further provides advanced, interactive visualization of the surgical procedure. Fig. 9 depicts this technique via precomputed oblique sections along a preselected surgical path. The large tumor deep inside the brain presents the surgeon with a difficult path planning problem, in this case approached from the back of the head over the top of the cerebellum with the head tipped at approximately 30°. The images are computed at 3 mm increments along this path, the position of which is indicated by the lines on the rendered visualizations. The images further demonstrate the complexity of these plans and the need for advanced computational platforms. These images form a multimodality, synergistic data set which provides all anatomic information necessary for surgical guidance, including a T1-weighted MRI prior to contrast enhancement (top), a T1-weighted MRI with gadolinium enhancement to define the tumor margins (middle), and an MR angiogram to localize the position of important vessels in the region of the surgical target (bottom), important to the surgical removal of the tumor. All images should be available and updated interactively during the `line of sight' intraoperative procedure (see Figs. 10 and 11).

Fig. 9. Intraoperative ``line of sight'' oblique images for neurosurgical approach planning to a large, deep tumor. 3D renderings depict the location of the oblique images generated from precontrast MRI, postcontrast MRI, and MRA volume images.
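Extracting one `line of sight' oblique plane amounts to resampling the volume on a plane perpendicular to the instrument axis, centred a chosen distance beyond the tip. The nearest-neighbour sketch below uses our own names and sampling scheme; a production system would interpolate and vectorize the inner loop:

```python
import numpy as np

def oblique_slice(vol, tip, axis_dir, look_ahead, size=64, spacing=1.0):
    """Sample a plane perpendicular to an instrument's long axis,
    centred look_ahead voxels beyond its tip (nearest-neighbour)."""
    d = np.asarray(axis_dir, float)
    d /= np.linalg.norm(d)
    # Build two in-plane basis vectors orthogonal to the instrument axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    centre = np.asarray(tip, float) + look_ahead * d
    out = np.zeros((size, size), dtype=vol.dtype)
    half = size // 2
    for i in range(size):
        for j in range(size):
            p = centre + (i - half) * spacing * u + (j - half) * spacing * v
            idx = np.round(p).astype(int)
            if np.all(idx >= 0) and np.all(idx < np.array(vol.shape)):
                out[i, j] = vol[tuple(idx)]
    return out

vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[20] = 77   # a bright plane at index z = 20
sl = oblique_slice(vol, tip=(10, 16, 16), axis_dir=(1, 0, 0), look_ahead=10, size=8)
assert np.all(sl == 77)   # the plane sampled 10 voxels ahead of the tip, along z
```

Recomputing such a slice every time the tracked instrument moves, across precontrast MRI, postcontrast MRI, and MRA volumes simultaneously, is what drives the demand for high performance platforms.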

Often, functional image information is integrated into the information used for intraoperative guidance [19]. As an example, many epilepsy patients have no identifiable pathology, such as a tumor, to serve as an indicator of the region of the brain causing the seizure focus. Such patients undergo intensive investigative procedures to identify the region of the brain causing the seizure activity, including placement of electrode arrays on the cortical surface of the brain and subsequent sampling of electrical activity during seizure and stimulation studies to map out the relationship between cortical anatomy and function. The electrode arrays can be localized to specific cortical anatomy using rendering techniques with MRI data for brain structure and CT for electrode visualization, as shown in Fig. 12. Often advanced functional imaging studies are done to corroborate the electrode data, including SPECT imaging of radiopharmaceutical uptake localized to the area of seizure activity, as shown in Fig. 13, and functional MRI studies, as depicted in Fig. 14. These visualizations provide the important correlation of functional information to structural anatomy useful during intraoperative guidance and surgical resection of cortical tissue.

Fig. 10. Several views of brain and tumor provide visual localization of important structures.



Fig. 11. Enhanced reality surgical views. Virtual reality objects are projected onto the patient during a procedure.

Fig. 12. Visualization of electrode arrays used in cortical mapping studies for epilepsy patients.



Fig. 13. Addition of SPECT imaging information in full 3D synergy with cortical structure and electrode positions for presurgical planning in epilepsy patients.

Fig. 14. Functional MRI integrated with 3D visualizations in neurosurgical planning for epilepsy patients. These renderings are useful in planning surgical procedures that are very precise and that minimize the chance for post-operative problems.



All of these techniques require real-time integration with localization devices in the operating theater, advanced 3D visualization techniques for rendering and interactive sections, and multimodal volume image support (including multimodal registration and integration), all of which require high performance computing platforms.

3.2.3. Virtual endoscopy

Virtual endoscopy [47–49] (or computed endoscopy) is a new method of diagnosis using computer processing of 3D image data sets (such as CT or MRI scans) to provide simulated visualizations of patient specific organs similar to those produced by standard endoscopic procedures. Conventional CT and MRI scans produce cross section ``slices'' of the body that are viewed sequentially by radiologists, who must imagine or extrapolate from these views what the actual 3D anatomy should be. By using sophisticated algorithms and high performance computing, these cross sections may be rendered as direct 3D representations of human anatomy. Specific anatomic data appropriate for realistic endoscopic simulations can be obtained from 3D MRI digital imaging examinations or 3D acquired spiral CT data.

Thousands of endoscopic procedures are performed each year. They are invasive and often uncomfortable for patients. They sometimes have serious side effects such as perforation, infection and hemorrhage. Virtual endoscopic visualization avoids the risks associated with real endoscopy, and when used prior to performing an actual endoscopic exam can minimize procedural difficulties and decrease the rate of morbidity, especially for endoscopists in training. Additionally, there are many body regions not accessible to or compatible with real endoscopy that can be explored with virtual endoscopy. Eventually, when refined, virtual endoscopy may replace many forms of real endoscopy.

Methods for virtual endoscopy begin with the acquisition of three-dimensional images from conventional medical imaging scanners (e.g., spiral CT, MRI). Invariably, some preliminary processing of this image data is required to properly prepare it for endoscopic visualization. These preprocessing steps may include interpolation to transform the data set into isotropic elements, registration to bring all images into spatial synchrony, and segmentation to reduce the data set to the desired specific anatomic structure(s). Endoscopic visualizations can be created using either volume rendering techniques or surface rendering from extracted models of the structures being examined. Perspective volume rendering is necessary to achieve the proper spatial relationships between the structures being visualized; the target is often a tubular structure, and perspective down the length of the tube is required for proper perception of its 3D extent.
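Of the preprocessing steps above, interpolation to isotropic voxels is the simplest to illustrate: spiral CT slices are typically thicker than their in-plane pixels, so the slice axis must be upsampled. A minimal sketch, assuming linear interpolation suffices; the function name and defaults are ours, not from the paper:

```python
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(volume, spacing, target=None):
    """Resample a volume with anisotropic voxel spacing (dz often larger
    than dx, dy in spiral CT) onto cubic voxels by linear interpolation.
    `spacing` gives the physical size of a voxel along each axis."""
    spacing = np.asarray(spacing, float)
    target = float(target if target is not None else spacing.min())
    factors = spacing / target          # > 1 where slices are thicker
    return zoom(volume, factors, order=1)
```

For example, a CT volume acquired with 2 mm slices and 1 mm pixels would be doubled along the slice axis so every voxel becomes a 1 mm cube.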

Volume rendering techniques in endoscopy provide the advantage of being able to see through the immediate wall of the structure being rendered, to visualize local anatomic structures surrounding it. However, volume rendering techniques are computationally expensive, and achieving the desired real-time interaction with the visualization to simulate an endoscopic procedure is difficult (requiring high performance computing systems with significant resources). Many virtual endoscopy techniques therefore use extracted representations of the structure being examined, such as polygonalized models of the inner lumen surface of a tubular structure like the esophagus. These models can be rendered with traditional surface rendering techniques, often using hardware acceleration specific to surface rendering algorithms, to achieve real-time traversal of the structure for simulated endoscopy. Endoscopic display procedures can be simulated in one of two ways: (1) on-line, real-time display using an interactive simulator, such as a virtual reality display system with rapid computation capabilities which can produce updated displays at real-time rates in response to user interactions (e.g., using a head mounted display, head tracking and 3D input devices), or (2) off-line, where a predetermined ``flight path'' is used to compute sequential frames of views which are rendered as an animated video sequence.
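Case (2), the predetermined flight path, amounts to deriving a camera frame for every point on the path: the view direction follows the path tangent, and an up vector completes the frame. A minimal sketch, where the finite-difference tangent and the choice of a fixed world `up' vector are our assumptions rather than a description of any particular system:

```python
import numpy as np

def flight_path_frames(path):
    """For a flight path (an (N, 3) array of camera positions), derive a
    per-frame view direction from the path tangent plus an up vector,
    yielding one (position, forward, up) camera frame per rendered view."""
    path = np.asarray(path, float)
    # Central differences give a smooth tangent along the path.
    tangents = np.gradient(path, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    world_up = np.array([0.0, 0.0, 1.0])
    frames = []
    for pos, fwd in zip(path, tangents):
        right = np.cross(fwd, world_up)
        n = np.linalg.norm(right)
        if n < 1e-6:                      # path momentarily vertical
            right, n = np.array([1.0, 0.0, 0.0]), 1.0
        right /= n
        up = np.cross(right, fwd)
        frames.append((pos, fwd, up))
    return frames
```

Each frame is then handed to the renderer (perspective volume or surface rendering) to produce one image of the animated sequence.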

Although virtual endoscopy is in the embryonic stages in clinical practice, descriptions of methods and preliminary results are increasing [13,43,45]. Fig. 15 illustrates volume renderings of segmented anatomic structures from a spiral CT scan of a patient with colon cancer and polyps. The upper left image is a transparent rendering of a portion of the large bowel selected for segmentation and the rather large circumferential rectal cancer at its distal extent. The upper right and lower left images reveal these same anatomic segmentations at different angles of view with the skin removed. The lower right panel shows a volume rendering of the isolated large colon and cancer from a posterior oblique view. Also identified, segmented and rendered in the image is a polyp in the mid-sigmoid region. Different ways of digitally analyzing this polyp with virtual endoscopy are illustrated in Fig. 16. The upper left panel is a texture-mapped virtual endoscopic view of the polyp at close range, and the upper right panel shows an enhancement of the polyp against the luminal wall. Such enhancement is possible only with virtual endoscopy, since the polyp itself can be digitally segmented and processed (e.g., brightened) as a separate object. The lower left panel is a transparent rendering of the polyp, revealing a dense interior region that was also segmented. This is most likely a denser-than-normal vascular bed, perhaps a precursor of malignancy. The lower right panel illustrates the capability for ``virtual biopsy''. Both geometric and densitometric measures may be obtained numerically from the segmented polyp (density measures computed from the original image data).

Fig. 15. Volume renderings of anatomic structures segmented from a spiral CT scan of a patient with colon cancer. A mid-sigmoid polyp can be seen (blue) in the oblique posterior view of the isolated bowel in the lower right corner, as well as a circumferential rectal cancer (red).
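In the simplest case, the geometric and densitometric measures of a ``virtual biopsy'' reduce to counting segmented voxels and summarizing the original image values inside the segmentation. A toy sketch; the function name and the voxel-volume parameter are illustrative, and a real system would add shape measures such as diameter:

```python
import numpy as np

def virtual_biopsy(image, mask, voxel_volume_mm3=1.0):
    """Geometric and densitometric measures from a segmented structure:
    volume from the voxel count, density statistics from the original
    image values inside the segmentation."""
    vox = image[mask.astype(bool)]       # original densities inside the polyp
    return {
        "volume_mm3": float(mask.sum()) * voxel_volume_mm3,
        "mean_density": float(vox.mean()),
        "max_density": float(vox.max()),
    }
```

Because the measures come from the original image data rather than a rendered surface, interior features such as the dense region noted above contribute directly to the density statistics.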

As 3D medical imaging and high performance computing power improve, the significant promise of virtual endoscopy for non-invasive endoscopic screening and diagnosis will be realized. The BIR is aggressively pursuing these developments in collaboration with the Departments of Radiology, Surgery, Gastroenterology and Thoracic Diseases. Several clinical protocols are underway to evaluate and validate the usefulness of virtual endoscopy in clinical applications.

3.2.4. 4D image-guided ablation therapy

There is significant potential for the treatment of life-threatening cardiac arrhythmias by minimally invasive procedures, whereby an ablation electrode is introduced into the heart through a catheter and used to surgically remove anomalies in the heart's ``electrical wiring'' which cause parts of a chamber to contract prematurely.

Fig. 16. Virtual endoscopy views of the mid-sigmoid polyp: with the polyp surface highlighted, transparently rendered to reveal a high-density region in the polyp, and with measurements of diameter and volume.

Before powering the ablation electrode, the electrical activity on the inner surface of the heart chamber must be painstakingly mapped with sensing electrodes to locate the anomaly. To create the map with a single conventional sensing electrode, the cardiologist must manipulate the electrode via the catheter to a point of interest on the chamber wall by means of cine-fluoroscopic and/or real-time ultrasound images. Only after the position of the sensing electrode on the heart wall has been unambiguously identified may the signal from the electrode be analyzed (primarily for the point in the heart cycle at which the signal arrives) and mapped onto a representation of the heart wall. Sensed signals from several dozen locations are needed to create a useful representation of cardiac electrophysiology, each requiring significant time and effort to unambiguously locate and map. The position and extent of the anomaly is immediately obvious when the activation map is visually compared to normal physiology. After careful positioning of the ablation electrode, the ablation itself takes only a few seconds.
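Determining the point in the heart cycle at which the signal arrives can be illustrated with one common criterion, the instant of steepest negative deflection of the electrogram; clinical mapping systems apply considerably more elaborate signal processing, so treat this as a sketch only:

```python
import numpy as np

def activation_time(electrogram, dt_ms=1.0):
    """Estimate local activation time as the instant of steepest negative
    deflection (maximum negative slope) of a sampled electrogram.
    `dt_ms` is the sampling interval in milliseconds."""
    slope = np.diff(np.asarray(electrogram, float))
    return int(slope.argmin()) * dt_ms
```

Repeating this at each mapped location, referenced to a common cycle trigger, yields the activation times that are color-coded onto the chamber model.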

The morbidity associated with this procedure is primarily related to the time required (several hours), and to complications associated with extensive arterial catheterization, repeated fluoroscopy and iterative attempts to find and ``burn'' the offending target tissue. There is significant promise for decreasing the time for, and improving the accuracy of, the localization of sensing electrodes by automated analysis of real-time intracatheter or trans-esophageal ultrasound images. Any methodology that can significantly reduce the procedure time will reduce associated morbidity, and the improved accuracy of the mapping should lead to more precise ablation and an improved rate of success.

A system is being developed by the BIR and Mayo's Division of Cardiovascular Diseases wherein a static surface model of the target heart chamber is continuously updated from the real-time image stream. A gated 2D image from an intracatheter, trans-esophageal, or even hand-held transducer is first spatially registered into its proper position relative to the heart model. The approximate location of the sectional image may be found by spatially tracking the transducer, or by assuming it has moved very little from its last calculated position. More accurate positional information may be derived by surface-matching [20] contours derived from the image to the 3D surface of the chamber. As patient-specific data are accumulated, the static model is locally deformed to better match the real-time data stream while retaining the global shape features that define the chamber.
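The surface-matching refinement can be caricatured as an iterative closest point alignment between the contour points extracted from the 2D image and a sampled chamber surface. The sketch below is translation-only and brute-force, far simpler than the method of [20] or any clinical implementation:

```python
import numpy as np

def match_translation(contour_pts, surface_pts, iters=20):
    """Translation-only iterative closest point: shift the contour points
    toward a sampled chamber surface by repeatedly pairing each contour
    point with its nearest surface point and applying the mean residual."""
    contour = np.asarray(contour_pts, float).copy()
    surface = np.asarray(surface_pts, float)
    shift_total = np.zeros(3)
    for _ in range(iters):
        # Nearest surface point for each contour point (brute force).
        d2 = ((contour[:, None, :] - surface[None, :, :]) ** 2).sum(-1)
        nearest = surface[d2.argmin(axis=1)]
        step = (nearest - contour).mean(axis=0)   # mean residual = shift
        contour += step
        shift_total += step
        if np.linalg.norm(step) < 1e-9:
            break
    return shift_total
```

A full solver would also estimate rotation and use a spatial index for the nearest-point search, but the convergence behavior is the same in spirit: each iteration pulls the contour closer to the surface until the residual vanishes.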

Once an individual image has been localized relative to the cardiac anatomy, any electrodes in the image may be easily referenced to the correct position on the chamber model, and data from each electrode can be accumulated into the electrophysiologic map. To minimize the need to move sensing electrodes from place to place in the chamber, Mayo cardiologists have developed ``basket electrodes'' [22], multielectrode packages which deploy up to 64 bipolar electrodes on 5–8 flexible splines that expand to place the electrodes in contact with the chamber wall when released from their sheathing catheter. The unique geometry of these ``baskets'' makes the approximate positions of the electrodes easy to identify in registered 2D images, which capture simple landmarks from the basket.

When operational, this system will allow mapping of the patient's unique electrophysiology onto a 3D model of the patient's own heart chamber, as shown in Fig. 17, within a few minutes rather than several hours. This can only be accomplished using high performance computers.



3.2.5. Anesthesiology simulator

Mayo provides training for a large number of residents in the medical specialty of anesthesiology. Most of the techniques taught are used for the management of pain and include deep nerve regional anesthesiology procedures. Resident training involves a detailed study of the anatomy associated with the nerve plexus to be anesthetized, including cadaveric studies and practice needle insertions in cadavers. Because anatomy books are 2D in nature, the 3D relationships between anatomic structures become clear only when the resident examines a cadaver. In addition, practice needle insertions are costly because of the use of cadavers, and ineffective due to the lack of physiology. To address these issues, an anesthesiology training system is being developed in the BIR in close cooperation with the Department of Anesthesiology [4].

The Visible Human Male is the virtual patient for this simulation [33]. A variety of anatomic structures have been identified and segmented from the CT data set. The segmented structures were subsequently tiled to create models that are used as the basis of the training system. Because the system was designed with the patient in mind, it is not limited to using the Visible Human anatomy. Patient scan data sets may be used to provide patient-specific anatomy for the simulation, allowing the system to have a large library of patients, perhaps with different or interesting anatomy useful for training purposes. This capability also has the added benefit of allowing procedures on difficult or unique anatomy to be planned, rehearsed and practiced before the actual operation on the patient.

The training system provides several different levels of interactivity. At the least complex level, the anatomy relevant to anesthesiologic procedures may be studied from a schematic standpoint; that is, the anatomy may be manipulated to provide different views to facilitate understanding. These views are quite flexible and can be configured to include a variety of different anatomical structures; each structure can be presented in any color, with various shading options and with different degrees of transparency, as depicted in Figs. 18 and 19. Stereo viewing increases the realism of the display. Simulation of a realistic procedure is provided through an immersive environment created through the use of a head tracking system, head mounted display, needle tracking system and haptic feedback. The resident enters an immersive environment that provides sensory input for the visual system as well as the tactile system. As the resident moves around the virtual operating theater, the head tracking system relays viewing parameters to the graphics computer, which generates the new view for the head mounted display. The resident may interact with the virtual patient using a flying needle or using a haptic feedback device which provides a simulation of touching the patient's skin and advancing a needle toward the nerve plexus.

Fig. 17. Normal heart electrophysiology as a color wash on the left ventricular chamber, as seen from outside (left) and inside (right) the heart.

4. Future plans

The BIR continues to optimize many of its algorithms to run on SMP machines, and is developing distributed processing techniques designed specifically for optimizing biomedical image visualization and analysis. Attempts to achieve parallelism from automatic parallelizing compilers have not been fruitful, due to the design of the code. Further work in this area includes evaluation of SMP parallelized routines as part of AVW, thereby accelerating all programs written with this advanced imaging toolkit. The BIR now has five SMP systems, providing an excellent testbed for proving the merits of parallelization.

Fig. 18. Torso model for an anesthesiology simulator with needle in place for celiac block.



In addition, new techniques to efficiently handle large medical image datasets will require the design of algorithms that operate intelligently on subsets of the data, whereby the results from each operation are assembled into the final output. Only through such methods can datasets such as the Visible Human male and female be efficiently handled.
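The subset-at-a-time strategy can be sketched as slab-wise processing with reassembly. This is purely illustrative (the function name and slab size are ours); real operators with spatial support, such as filters, would need overlapping slabs, and the slabs could equally well be dispatched to SMP threads or distributed hosts:

```python
import numpy as np

def process_in_slabs(volume, func, slab=16):
    """Apply a per-voxel operation slab by slab along the first axis and
    assemble the results, so a very large volume never needs to be
    transformed (or even resident) in one piece."""
    out = np.empty_like(volume)
    for z0 in range(0, volume.shape[0], slab):
        out[z0:z0 + slab] = func(volume[z0:z0 + slab])
    return out
```

The same loop structure extends naturally to memory-mapped input, with each slab read, processed and written independently.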

We are building software tools to leverage the potential capabilities of virtual reality in a program called ``VRASP'', or Virtual Reality Assisted Surgery Planning [38,40]. By design, the VRASP program is building infrastructure to take advantage of SMP and distributed processing capabilities as applied to biomedical applications.

Since its inception, the BIR has always attempted to push the envelope of imaging science to provide useful visualization and analysis tools that meet the needs of imaging researchers, now and in the future. While we have been fortunate to have high performance computers to facilitate this goal, our vision must continue to match or reach beyond the needs and the dreams of practitioners and the user community.

Acknowledgements

The authors would like to thank the following for their collaboration and participation. At Mayo Foundation: Dr. Uldis Bite, Dr. Fred Meyer, Dr. Clifford Jack, Dr. Joseph Szurszewski, Dr. William Bourne, Dr. Jay McLaren, Dr. Douglas Johnson, Dr. David Bostwick, Dr. Douglas Packer, Dr. David Martin, and the staff of the Biomedical Imaging Resource. At the University of Iowa: Dr. Michael Vannier.

Fig. 19. A close-up view of the model revealing the celiac plexus target.

References

[1] S. Aharon, B.M. Cameron, R.A. Robb, Computation of efficient patient specific models from 3-D medical images: use in virtual endoscopy and surgery rehearsal, in: Proceedings of IPMI 1997, Pittsburgh, PA, 1997.

[2] M. Algorri, F. Schmitt, Reconstructing the surface of unstructured 3D data, SPIE Proceedings: Medical Imaging, vol. 2431, 1995.

[3] U. Bite, I.T. Jackson, The use of three-dimensional CT scanning in planning head and neck reconstruction, Plastic Surgical Forum, vol. VIII, Annual Meeting of the American Society of Plastic and Reconstructive Surgeons, Kansas City, MO, 1985.

[4] D.J. Blezek, R.A. Robb, J.J. Camp, L.A. Nauss, Anesthesiology training using 3-D imaging and virtual reality, in: Proceedings of Medical Imaging 1996, vol. 2707, 1996, pp. 402–410.

[5] B.H. Brinkman, A. Manduca, R.A. Robb, Quantitative analysis of statistical methods for grayscale inhomogeneity correction in MR images, SPIE Proceedings, vol. 2710, 1996, pp. 542–552.

[6] L.G. Brown, A survey of image registration techniques, ACM Comput. Surv. 24 (4) (1992) 325–376.

[7] R. Drebin, L. Carpenter, P. Hanrahan, Volume rendering, SIGGRAPH '88, 1988, pp. 65–74.

[8] L.L. Fellingham, J.H. Vogel, C. Lau, P. Dev, Interactive graphics and 3-D modeling for surgery planning and prosthesis and implant design, in: Proceedings of NCGA '86, vol. 3, 1986, pp. 132–142.

[9] B. Fritzke, Let it grow – self organizing feature maps with problem dependent cell structures, in: Proceedings of ICANN-91, Helsinki, 1991.

[10] B. Fritzke, A growing neural gas network learns topologies, in: Tesauro et al. (Eds.), Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, 1995.

[11] K. Fukuta, I.T. Jackson, C.N. McEwan, N.B. Meland, Three-dimensional imaging in craniofacial surgery: A review of the role of mirror image production, Eur. J. Plastic Surg. 13 (1990) 209–217.

[12] M.B. Garnick, The dilemmas of prostate cancer, Scientific American (1994) 72–81.

[13] B. Geiger, R. Kikinis, Simulation of endoscopy, AAAI Spring Symposium Series: Applications of Computer Vision in Medical Image Processing, Stanford University, 1994, pp. 138–140.

[14] P.B. Heffernan, R.A. Robb, Display and analysis of 4-D medical images, in: Proceedings of the International Symposium CAR '85, 1985, pp. 583–592.

[15] K.H. Hohne, W.A. Hanson, Interactive 3-D segmentation of MRI and CT volumes using morphological operations, J. Comput. Assist. Tomogr. 16 (2) (1992) 284–294.

[16] K.H. Hohne, R. Bernstein, Shading 3-D images from CT using grey-level gradients, IEEE Trans. Med. Imag. MI-5 (1986) 45–47.

[17] K.H. Hohne, M. Bomans, A. Pommert, M. Riemer, U. Tiede, G. Wiebecke, Rendering tomographic volume data: adequacy of methods for different modalities and organs, in: Hohne et al. (Eds.), 3D Imaging in Medicine, NATO ASI Series, vol. F60, 1990, pp. 333–361.

[18] H. Hoppe, Surface reconstruction from unorganized points, Doctoral Dissertation, University of Washington, 1994.

[19] C.R. Jack, R.M. Thompson, R.K. Butts, F.W. Sharbrough, P.J. Kelly, D.P. Hanson, S.J. Riederer, R.L. Ehman, N.J. Hangiandreou, G.D. Cascino, Sensory motor cortex: correlation of presurgical mapping with functional MR imaging and invasive cortical mapping, Radiology 190 (1994) 85–92.

[20] H.K. Jiang, R.A. Robb, K.S. Holton, A new approach to 3-D registration of multimodality medical images by surface matching, SPIE Visualization in Biomedical Computing 1808 (1992) 196–213.

[21] H.K. Jiang, R.A. Robb, Image registration of multimodality 3-D medical images by chamfer matching, in: Proceedings of Vision and Visualization, SPIE, San Jose, CA, 1992, pp. 649–659.

[22] S.B. Johnson, D.L. Packer, Intracardiac ultrasound guidance of multipolar atrial and ventricular mapping basket operations, J. Am. Coll. Cardiol. 29 (1997) 202A.

[23] P.A. Kay, R.A. Robb, D.G. Bostwick, J.J. Camp, Robust 3-D reconstruction and analysis of microstructures from serial histologic sections, with emphasis on microvessels in prostate cancer, in: Proceedings of the Fourth Conference on Visualization in Biomedical Computing (LNCS 1131), Hamburg, Germany, 1996, pp. 129–134.

[24] P.A. Kay, R.A. Robb, R.P. Myers, B.F. King, Creation and validation of patient specific anatomical models for prostate surgery planning using virtual reality, in: Proceedings of the Fourth Conference on Visualization in Biomedical Computing (LNCS 1131), Hamburg, Germany, 1996, pp. 547–552.

[25] P.J. Kelly, B.A. Kall, Computers in Stereotactic Neurosurgery, Blackwell Scientific Publications, Boston, MA, 1992.

[26] T. Kohonen, Self-Organization and Associative Memory, 3rd ed., Springer, Berlin, 1989.

[27] S. Lavallee, R. Taylor, R. Mosges, Computer Integrated Surgery, MIT Press, Cambridge, MA, 1996.

[28] M. Levoy, Display of surfaces from volume data, IEEE Comput. Graph. Appl. 8 (3) (1988) 29–37.

[29] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, P. Suetens, Multimodality image registration by maximization of mutual information, IEEE Trans. Med. Imag. 16 (2) (1997) 187–198.

[30] A. Manduca, J.J. Camp, E.L. Workman, Interactive multispectral data classification, in: Proceedings of the 14th Annual Conference of the IEEE Engineering in Medicine and Biology Society, Paris, France, vol. 14, 1992, pp. 2144–2145.

[31] J.L. Marsh, M.W. Vannier, S.J. Bresina, K.M. Hemmer, Applications of computer graphics in craniofacial surgery, Clin. Plast. Surg. 13 (1986) 441.

[32] C.R. Meyer, J.L. Boes, B.K. Kim, P.H. Bland, K.R. Zasadny, P.V. Kison, K. Koral, K.A. Frey, R.L. Wahl, Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations, Med. Image Anal. 1 (3) (1997) 195–206.

[33] NLM Fact Sheet: The Visible Human Project, http://www.nlm.nih.gov/pubs/factsheets/visible_human.html.

[34] W.M. Petroll, H.D. Cavanagh, J.V. Jester, Three-dimensional imaging of corneal cells using in vivo confocal microscopy, J. Microsc. 170 (3) (1993) 213–219.

[35] R.A. Robb, E.A. Hoffman, L.J. Sinak, L.D. Harris, E.L. Ritman, High-speed three-dimensional X-ray computed tomography: The dynamic spatial reconstructor, Proc. IEEE 71 (1983) 308–319.

[36] R.A. Robb, D.P. Hanson, A software system for interactive and quantitative analysis of biomedical images, in: Hohne et al. (Eds.), 3D Imaging in Medicine, NATO ASI Series, vol. F60, 1990, pp. 333–361.

[37] R.A. Robb, D.P. Hanson, The ANALYZE software system for visualization and analysis in surgery simulation, in: Lavallee et al. (Eds.), Computer Integrated Surgery, MIT Press, Cambridge, MA, 1993.

[38] R.A. Robb, Surgery simulation with ANALYZE/AVW: a visualization workshop for 3-D display and analysis of multimodality medical images, in: Proceedings of Medicine Meets Virtual Reality II, San Diego, CA, 1994.

[39] R.A. Robb, C. Barillot, Interactive display and analysis of 3-D medical images, IEEE Trans. Med. Imag. MI-8 (1989) 217–226.

[40] R.A. Robb, Three-Dimensional Biomedical Imaging, Wiley, New York, 1998.

[41] R.A. Robb, D.P. Hanson, J.J. Camp, Computer-assisted surgery planning and rehearsal at Mayo Clinic, IEEE Computer 29 (1) (1996) 39–47.

[42] R.A. Robb, VR assisted surgery planning, IEEE Engineering in Medicine and Biology, March 1996, pp. 60–69.

[43] G.D. Rubin, C.F. Beaulieu, V. Argiro, H. Ringl, A.M. Norbash, J.F. Feller, M.D. Dake, R.B. Jeffrey, S. Napel, Perspective volume rendering of CT and MR images: Applications for endoscopic imaging, Radiology 199 (1996) 321–330.

[44] E.L. Ritman, R.A. Robb, L.D. Harris, Imaging Physiological Functions, Praeger, New York, 1985.

[45] R.M. Satava, R.A. Robb, Virtual endoscopy: Application of 3D visualization to medical diagnosis, Presence 6 (2) (1997) 179–197.

[46] P.F. Schmalz, S.M. Miller, J.H. Szurszewski, Three-dimensional imaging of neurons in living preparations of peripheral autonomic ganglia, Gastroenterology 112 (4) (1997) A729.

[47] T.A. Stamey, J.E. McNeal, Adenocarcinoma of the prostate, in: Campbell's Urology, 6th ed., vol. 2, pp. 1159–1221.

[48] C. Studholme, D.L.G. Hill, D.J. Hawkes, Automated 3-D registration of MR and CT images of the head, Med. Image Anal. 1 (2) (1996) 163–175.

[49] P.A. van den Elsen, E.D. Pol, M.A. Viergever, Medical image matching – a review with classification, IEEE Eng. Med. Biol. 12 (1) (1993) 26–38.

[50] W.M. Wells III, P. Viola, H. Atsumi, S. Nakajima, R. Kikinis, Multi-modal volume registration by maximization of mutual information, Med. Image Anal. 1 (1) (1996) 35–51.

[51] R.P. Woods, J.C. Mazziotta, S.R. Cherry, MRI–PET registration with automated algorithm, J. Comput. Assist. Tomogr. 17 (4) (1993) 536–546.

[52] M. Zyda, D.R. Pratt, J.S. Falby, C. Lombardo, K.M. Kelleher, The software required for the computer generation of virtual environments, Presence 2 (2) (1994).