2008 8th IEEE International Conference on BioInformatics and BioEngineering (BIBE), Athens
A New Method for Processing the Forward Solver Data in Fluorescence Molecular Imaging
Dimitris Gorpas, Kostas Politopoulos, and Dido Yova
Abstract—During the last few years a quite large number of fluorescence imaging applications have been reported in the literature, as one of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium. This problem has not been fully solved yet, due to the non-linear nature of the inverse problem. In this paper, a novel method for processing the forward solver outcomes is presented. Through this technique the comparison between the simulated and the acquired data can be performed only at the region of interest, minimizing time-consuming pixel-to-pixel comparison. With this modus operandi a priori information about the initial fluorophore distribution becomes available, leading to a more feasible inverse problem solution.
I. INTRODUCTION
THE combined progress in fluorescent probes technology and optical imaging modalities has encountered a number of difficulties in the noise-dominated fluorescence image acquisition. Nevertheless, fluorescence signal processing still lacks feasibility and time efficiency, due to the enormous amount of recorded data. The current state-of-the-art in fluorescence molecular imaging applications requires parallel programming with multiple computational units and computational times in the order of hours. The most important reason for this lack of time efficiency is the non-linear nature of the inverse problem, which requires iterative procedures for convergence [1], and the pixel-to-pixel comparison between measured and computed data [2].
In this paper, a novel method for processing the forward solver outcomes is presented. Through this technique the comparison between the simulated and the acquired data can be performed only at the region of interest, minimizing time-consuming pixel-to-pixel comparison.
Within this work a custom forward solver has been developed, based on the Diffusion Approximation [3]. Galerkin finite elements [4] have been introduced to originate the variational formulation of the Diffusion Approximation, whereas a Delaunay triangulation scheme [5] has been applied for the region discretization. For the three-dimensional simulation of the fluorophore distribution, the super-ellipsoid models [6] have been selected. Those models have been extensively used in various machine vision applications [7], [8]; however, this is the first time they are used to simulate fluorophore distribution in a fluorescence molecular imaging application.

Manuscript received June 14, 2008. This work was supported by the European Committee under the Integrated Project "World Wide Welfare: high BRIGHTnes semiconductor lasERs for gEneric Use", IST-2005-2.5.1 Photonic components.
D. Gorpas is with the Laboratory of Biomedical Optics and Applied Biophysics, School of Electrical and Computer Engineering, National Technical University of Athens, GR-157 73, Zografou, Greece (phone: +302107722293; fax: +302107723048; e-mail: [email protected]).
K. Politopoulos is with the Laboratory of Biomedical Optics and Applied Biophysics, School of Electrical and Computer Engineering, National Technical University of Athens, GR-157 73, Zografou, Greece (e-mail: [email protected]).
D. Yova is with the Laboratory of Biomedical Optics and Applied Biophysics, School of Electrical and Computer Engineering, National Technical University of Athens, GR-157 73, Zografou, Greece (e-mail: [email protected]).
The estimated fluorescence emission distribution over the region surface is converted into intensity levels, which correspond to the measurand of the imaging system [9]. However, a back-projection scheme is required to transform the global coordinates of the simulated signal into image local coordinates, and eventually pixels. This back-projection requires accurate knowledge of the imaging system calibration parameters, which are the relation (translation and rotation) between the global coordinates system and the camera-centered system, and the distortion parameters that the camera lens adds. Furthermore, the filter transmission factor should be incorporated into the simulated signal, in order to match the real acquisition system [9].
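The chain described here (rigid transform into the camera frame, orthographic projection, lens distortion, metric-to-pixel scaling) can be sketched as follows; every parameter value is illustrative rather than a calibration result, and the function names are hypothetical:

```python
import numpy as np

# Illustrative mapping of a global 3-D point to pixel coordinates for a
# telecentric (orthographic) camera: rigid transform, lens distortion in the
# metric image plane, then metric-to-pixel scaling. All parameter values are
# made up for the sketch; real ones come from the calibration procedure.

Rot = np.eye(3)                    # rotation: global -> camera frame
T = np.array([0.1, -0.2, 5.0])     # translation, metric units
au, av, s = 400.0, 400.0, 0.0      # metric-to-pixel scale factors (intrinsics)
k1, k2 = 0.05, 0.001               # radial distortion parameters
p1, p2 = 1e-3, 1e-3                # tangential distortion parameters

def to_image_plane(X):
    """Orthographic projection: the camera-frame depth is simply dropped."""
    Xc = Rot @ X + T
    return Xc[0], Xc[1]

def distort(x, y):
    """Common radial + tangential distortion model in the metric image plane."""
    r2 = x * x + y * y
    dx = x * (k1 * r2 + k2 * r2 ** 2) + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = y * (k1 * r2 + k2 * r2 ** 2) + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x + dx, y + dy

def to_pixels(x, y):
    return au * x + s * y, av * y

u, v = to_pixels(*distort(*to_image_plane(np.array([0.25, 0.25, 0.5]))))
```

Applying distortion before the pixel scaling keeps the distortion parameters in metric units, which is the usual convention for this model.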
Once the simulated image has been formulated, image segmentation algorithms are applied, in order to perform an accurate segmentation. Morphological filtering provides a contrast enhancement, whereas a custom watershed transformation [10] extracts the region of interest from the simulated image. It is essential that the same algorithms be applied to the acquired images as well, since the intensity levels, along with the geometrical features, lead to the image registration convergence.
II. METHODOLOGY
A. Forward Solver
In the Diffusion Approximation the Radiative Transfer Equation can be expressed by the P1 approximation [11]:

[(iω/c) + μa(r)]U(r) − ∇·[D(r)∇U(r)] = A0(r)   (1)

where ω is the angular modulation frequency of the light source in rad/s, c is the average speed of light in the medium, with typical value in the case of tissue c = 2.55·10^10 cm/s, U(r) is the average intensity or fluence in W/cm², D(r) is the photon diffusion coefficient in cm, μa(r) is the medium absorption coefficient in cm⁻¹ and A0(r) is the isotropic component of the source in W/cm². The photon diffusion coefficient is expressed as:

D(r) = 1 / {3[μs′(r) + α·μa(r)]}   (2)

with μs′(r) being the reduced scattering coefficient in cm⁻¹, whereas the factor α is related to the absorption, scattering and anisotropy of the medium. In the case of tissue this factor is in the order of α ≈ 0.5, assuming anisotropy g ≈ 0.8 [12].

The exact boundary condition for the diffuse intensity requires that no diffuse intensity re-enters the medium from outside at the surface S. However, this boundary condition cannot be satisfied exactly and some approximate boundary condition must be considered. One such approximation requires that the total diffuse flux directed inwards at the surface S should be zero [3]. This condition in terms of the average intensity U(r), taking into account the mismatch between the refractive indices of the inspected medium and the surrounding one, is expressed by the Robin type condition [11], [13] as:

U(r) = −2R·D(r)[∂U(r)/∂n],  r ∈ S   (3)

where the coefficient R represents the boundary reflection coefficient and depends on the mismatch between the refraction indices. For a typical relative index of refraction of tissue with respect to air, n_tis/air = 1.33, this coefficient equals R = 2.34 [12].

In fluorescence imaging two light fields are taken into consideration, the excitation and the emission. For the excitation light, indexed exc, the only light source is the external excitation light I_src(r). On the other hand, for the emission field, indexed em, there exist only the internal fluorescence sources. Thus, assuming fluorophore quantum yield η, the internal light sources are described as:

A0,em(r) = η·[μa,exc^fluo(r) / (1 − iωτ(r))]·U_exc(r)   (4)

where μa,exc^fluo(r) is the absorption coefficient of the fluorophores at the excitation wavelength, τ(r) is the fluorophore lifetime in s and U_exc(r) is the average intensity of the excitation light.

Under these premises and taking into consideration (1) and (3), the following equation system arises:

[(iω/c) + μa,exc(r)]U_exc(r) − ∇·[D_exc(r)∇U_exc(r)] = 0   (5)

[(iω/c) + μa,em(r)]U_em(r) − ∇·[D_em(r)∇U_em(r)] = η·[μa,exc^fluo(r) / (1 − iωτ(r))]·U_exc(r)   (6)

D_exc(r)[n·∇U_exc(r)] = (R/2)·I_src(r) − (2R)⁻¹·U_exc(r),  r ∈ S   (7)

D_em(r)[n·∇U_em(r)] = −(2R)⁻¹·U_em(r),  r ∈ S   (8)

where I_src(r) is nonzero only on the illuminated part S_src of the surface. Within this model it is assumed that the presence of the fluorophores does not alter the reduced scattering coefficient of the tissue. However, the absorption coefficient is assumed to be the sum of the tissue absorption coefficient and that of the fluorophores, μa,exc/em(r) = μa,exc/em^tis(r) + μa,exc/em^fluo(r).

A finite elements solution of the Diffusion Approximation model can be derived by posing the model in a weak variational form with the use of piecewise linear functions [14]. To obtain the variational formulation of the model, it is taken into consideration that the solution of (5) is the light source for (6). Thus, the two equations are confronted separately, to prevent the occurrence of unstable equation systems.

Therefore, the excitation field equation (5) is multiplied with the test function ψ(r) and integrated over the domain V:

(iω/c)∫V U_exc(r)ψ(r)dr + ∫V μa,exc(r)U_exc(r)ψ(r)dr − ∫V ∇·[D_exc(r)∇U_exc(r)]ψ(r)dr = 0   (9)

Using the first Green's identity, an extension of the divergence theorem for integration by parts [15], it is obtained:

∫V ∇·[D_exc(r)∇U_exc(r)]ψ(r)dr = ∫S D_exc(r)[n·∇U_exc(r)]ψ(r)dr − ∫V D_exc(r)∇U_exc(r)·∇ψ(r)dr   (10)

The boundary integral term in (10) can be written as:

∫S D_exc(r)[n·∇U_exc(r)]ψ(r)dr = (R/2)∫S_src I_src(r)ψ(r)dr − (2R)⁻¹∫S U_exc(r)ψ(r)dr   (11)

where the Robin type boundary condition of (7) was taken into consideration. Inserting (10) and (11) into (9), the required variational (weak) formulation is obtained:

(iω/c)∫V U_exc(r)ψ(r)dr + ∫V D_exc(r)∇U_exc(r)·∇ψ(r)dr + (2R)⁻¹∫S U_exc(r)ψ(r)dr + ∫V μa,exc(r)U_exc(r)ψ(r)dr = (R/2)∫S_src I_src(r)ψ(r)dr   (12)

In order to obtain the finite element approximation of the Diffusion Approximation model, the solution U_exc(r) of the variational formulation (12) is approximated by piecewise linear functions per element, based on the nodal values of the U_exc(r) solution. Hence, the discretization of U_exc(r), following the Galerkin method [4], [15], is expressed as:

U_exc(r) ≈ U_exc^h(r) = Σ(k=1..N) a_k,exc·ψk(r)   (13)

where ψk(r) are the nodal basis functions of the finite elements discretization, a_k,exc is the photon density at nodal point k, with k = 1, …, N, and N is the number of nodal points. Furthermore, the functions ψp(r), p = 1, …, N, are chosen as test functions, in order to obtain the algebraic equations for all the unknowns. The resulting system is linear algebraic, expressed in matrix form as:

A_exc·a_exc = b_exc   (14)

where a_exc = [a_k,exc] is the photon density of the excitation field at the nodal points of the finite element mesh and b_exc(p) = (R/2)∫S_src I_src(r)ψp(r)dr is the source vector. The matrix A_exc contains the finite element approximation of the Diffusion Approximation model and is of the form:

A_exc = Ke_exc + Me_exc + Pe_exc   (15)

with Ke_exc(p,k) being the mass matrix, Me_exc(p,k) being the stiffness matrix and Pe_exc(p,k) being the boundary condition matrix of the finite element approximation:

Ke_exc(p,k) = (iω/c)∫V ψk(r)ψp(r)dr + ∫V μa,exc(r)ψk(r)ψp(r)dr   (16)

Me_exc(p,k) = ∫V D_exc(r)∇ψk(r)·∇ψp(r)dr   (17)

Pe_exc(p,k) = (2R)⁻¹∫S ψk(r)ψp(r)dr   (18)

Under the same modus operandi, a similar matrix system derives for the emission field, as described by (6). All the matrices for this field are approximated as in the excitation case. Moreover, the source vector is of the form:

b_em = C·a_exc   (19)

with the matrix C(p,k) being as follows:

C(p,k) = η∫V [μa,exc^fluo(r) / (1 − iωτ(r))]ψk(r)ψp(r)dr   (20)

Thus, from (19) the coupling between the excitation and the emission models becomes apparent. However, in order to avoid any unstable conditions due to matrix inversions and multiplications, the two models are confronted separately. Furthermore, it becomes obvious that the finite elements discretization scheme must be the same between the two models; otherwise the excitation model solution would have to be updated to the new discretization, resulting in accuracy losses and decreased time efficiency.
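Once the matrices of (16)–(18) and (20) have been assembled, the forward solve amounts to two sparse linear systems solved in sequence. A minimal sketch in Python/SciPy, with random diagonally dominant surrogates standing in for the assembled matrices and a placeholder point-source vector (none of these values come from the paper):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Two-step solve of the coupled excitation/emission systems. The element
# matrices and the coupling matrix are stood in by random sparse surrogates
# so the sketch is self-contained; real entries come from (16)-(18) and (20).

rng = np.random.default_rng(0)
N = 50  # number of nodal points

def fem_like(n):
    """Random diagonally dominant sparse matrix standing in for an assembled FEM matrix."""
    M = sp.random(n, n, density=0.1, random_state=rng, format="csr")
    M = M + M.T                  # symmetric, as Galerkin matrices with ψ_p = ψ_k are
    return M + sp.eye(n) * n     # diagonal dominance keeps the toy system well conditioned

omega, c = 2 * np.pi * 100e6, 2.55e10             # modulation frequency (rad/s), speed of light (cm/s)
Ke = fem_like(N) * (1j * omega / c) + fem_like(N) # mass-type terms of (16)
Me = fem_like(N)                                  # stiffness term of (17)
Pe = fem_like(N)                                  # boundary term of (18)
C = fem_like(N) * 0.016                           # coupling matrix of (20), scaled by eta

A_exc = (Ke + Me + Pe).tocsc()   # system matrix of (15)
b_exc = np.zeros(N, dtype=complex)
b_exc[0] = 1.0                   # placeholder point source entering via the boundary term

# Step 1: excitation field; step 2: its solution drives the emission field via (19).
a_exc = spla.spsolve(A_exc, b_exc)
b_em = C @ a_exc
a_em = spla.spsolve(A_exc, b_em)  # same discretization; the toy matrix is reused for brevity
```

Solving the excitation system first and reusing its solution as the emission source mirrors the decoupled treatment of the two models described above.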
B. Region Discretization
The region under inspection was discretized via application of the Delaunay triangulation scheme [5]. Delaunay triangulation is a purely geometrical discretization technique and it is among the most common techniques applied to Finite Elements problems. The elements selected for the described forward solver were linear tetrahedral elements, distributed over a uniform mesh. The set of tetrahedral coordinates ζ1, ζ2, ζ3 and ζ4 is expressed in terms of a parametric coordinate system; in the literature these receive different names, such as isoparametric coordinates or shape function coordinates.

The geometric definition of the element in terms of these coordinates is expressed as:

[ζ1]            [a1 b1 c1 d1] [1]
[ζ2] = (1/6V) · [a2 b2 c2 d2] [x]   (21)
[ζ3]            [a3 b3 c3 d3] [y]
[ζ4]            [a4 b4 c4 d4] [z]

where the sixteen matrix entries depend only on the vertex locations of the tetrahedron and V is the elemental volume. Utilizing these elemental definitions, the mass and stiffness matrices can be approximated by expressing them in the simplex coordinates [15].

The fluorophore distribution was simulated by applying the super-ellipsoid model, defined as the solution of the general form of the implicit equation:

f(x, y, z) = [(x/a1)^(2/ε2) + (y/a2)^(2/ε2)]^(ε2/ε1) + (z/a3)^(2/ε1) = 1   (22)

In this equation one can recognize an ellipsoid form, enriched by two more parameters, ε1 and ε2, that allow controlling the shape curvature. As in the ellipsoid case, the a1, a2 and a3 parameters are scale factors on the x, y and z axes respectively, Fig. 1.

Fig. 1. Super-ellipsoid models for various parameter values.

The form of (22) gives information on the position of a three-dimensional point relative to the super-ellipsoid surface, which is critical for interior/exterior determination. This information derives from the following critical values of the implicit equation: f(x, y, z) = 1 when the point lies on the surface, f(x, y, z) < 1 when the point is inside the super-ellipsoid and f(x, y, z) > 1 when it is outside of the super-ellipsoid. In the case of the fluorophore distribution, the absorption coefficient can be expressed, utilizing these values, as:

μa^λ(r) = μa,tis^λ(r) + c·μa,fluo^λ,  c = 1 if f(x, y, z) ≤ 1, c = 0 if f(x, y, z) > 1   (23)

C. Image Formulation

The formulation of the images, after the solution of the forward problem, in the case of the Diffusion Approximation was based on the following expression:

(24)

where X(r) is the measurable quantity. Within this formulation, losses in the imaging system are taken into consideration and accounted for as a correlation factor. This factor is experimentally calculated through the system calibration procedure [9], [16].

Camera calibration is the process of determining the internal camera geometric and optical characteristics (intrinsic parameters) and/or the three-dimensional position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters) [17].

The most important premise for an accurate calibration is the correct mathematical expression of the camera model. Assuming telecentric lenses, the pinhole camera model [18] is not applicable. Telecentric lenses perform orthographic projection and are ideal for gauging applications; thus the projection of an arbitrary point (Xi, Yi, Zi) to the image plane is expressed as:

[ui]   [au  s  0  0]   [r11 r12 r13 x0] [Xi]
[vi] = [ 0 av  0  0] · [r21 r22 r23 y0] [Yi]   (25)
[ 1]   [ 0  0  0  1]   [r31 r32 r33 z0] [Zi]
                       [  0   0   0  1] [ 1]

where the coefficients au, av and s are scale factors for transforming the image plane metric units (ui, vi) to image pixels and represent the camera intrinsic parameters, R = [rij] is the rotation matrix and T = [x0 y0 z0]^T is the translation vector. These two matrices define the extrinsic parameters of the camera frame. Finally, (Xi, Yi, Zi) are the global coordinates of the arbitrary point.

Although telecentric lenses optically compensate for lens distortions, at least the most common types of distortion should be taken into account, to avoid any possible lens construction errors [19]. The overall distortion equation is:

                                                 [k1]
[δui]   [ui·ri²  ui·ri⁴  2·ui·vi     ri² + 2ui²] [k2]   (26)
[δvi] = [vi·ri²  vi·ri⁴  ri² + 2vi²  2·ui·vi  ] [p1]
                                                 [p2]

where (k1, k2) are the radial distortion parameters, (p1, p2) are the tangential distortion coefficients and ri² = ui² + vi².

Having experimentally solved the calibration problem for these parameters, the mapping from the global coordinates system to the image plane is possible, which means that X(r) in (24) can now be expressed as X(u, v). Furthermore, the fluence values are thresholded by the fluorescence filters transmission factor QE. Finally, the virtual image formulation is expressed as:

(27)

D. Image Segmentation

The simulated image for each measurement is pre-processed and normalized, in order to meet the acquisition protocol. The pre-processing algorithms include intensity adjustment and contrast enhancement. The intensity adjustment was performed by mapping the intensity values of each image to new ones, between 0 (black) and 1 (white). Values below the low and above the high thresholds are clipped; that is, values below the low threshold are mapped to 0 and those above the high threshold are mapped to 1. Extensive testing revealed that the most appropriate mapping was the one weighted toward lower (darker) output values.

An expected side effect of the intensity adjustment is the weakening of the region-of-interest edges. However, the described algorithm overcame this difficulty via morphological image processing. More specifically, an opening
filter can produce a reasonable estimate of the background across the image, as long as the selected structuring element is large enough not to fit entirely within any of the objects of interest. Subtracting the resulting image from the restored one, a new image with an even background arises (top-hat transformation). Repeating the same procedure, but applying the closing filter instead of the opening (bottom-hat transformation), and multiplying the outcomes, a contrast enhancement was achieved.
After the preprocessing procedure the image is ready for segmentation. Simple thresholding would lead either to the exclusion of some features or to the region growth of others. On the other hand, application of any edge detection or segmentation algorithm would lead to falsely detected edges or over-segmentation. The reason is that the object in the images does not have sharp edges or relevant intensities. In order to avoid such problems, this algorithm performed a combination of marker-controlled watershed transformation and thresholding. As a result, even the lowest intensity features were detected.
Through this procedure, the region-of-interest was extracted from the simulated images. This will be the input to the image registration problem, where the matching between the acquired and the simulated data is evaluated. However, this matching process does not consider the entire images, but only the regions of interest, significantly minimizing the time required to solve the inverse problem.
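The pre-processing and segmentation chain just described can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the thresholds, structuring-element size and the gamma-style dark-weighted mapping are assumed values, and `scipy.ndimage.watershed_ift` stands in for the custom watershed transformation of [10].

```python
import numpy as np
from scipy import ndimage as ndi

# Sketch of Section II.D: intensity adjustment, top-hat/bottom-hat contrast
# enhancement, and a marker-controlled watershed. A synthetic Gaussian blob
# stands in for a simulated image; all numeric choices are illustrative.

def adjust_intensity(img, low=0.05, high=0.95, gamma=2.0):
    """Clip below/above the thresholds and rescale to [0, 1]; gamma > 1 weights
    the mapping toward darker output values, as the paper reports."""
    out = np.clip((img - low) / (high - low), 0.0, 1.0)
    return out ** gamma

def enhance_contrast(img, size=15):
    """Additive top-hat plus bottom-hat enhancement (a standard variant of the
    morphological filtering described above)."""
    tophat = img - ndi.grey_opening(img, size=size)
    bothat = ndi.grey_closing(img, size=size) - img
    return np.clip(img + tophat - bothat, 0.0, 1.0)

def segment(img, fg_thresh=0.5, bg_thresh=0.1):
    """Marker-controlled watershed: confident foreground/background pixels
    seed the flooding of the morphological gradient image."""
    markers = np.zeros(img.shape, dtype=np.int16)
    markers[img < bg_thresh] = 1          # background marker
    markers[img > fg_thresh] = 2          # region-of-interest marker
    gradient = ndi.morphological_gradient(img, size=3)
    grad8 = (255 * gradient / max(gradient.max(), 1e-12)).astype(np.uint8)
    labels = ndi.watershed_ift(grad8, markers)
    return labels == 2                    # boolean mask of the region of interest

# Synthetic "fluorescence" image: a smooth blob over a dim background.
y, x = np.mgrid[0:128, 0:128]
img = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 12.0 ** 2)) + 0.02

mask = segment(enhance_contrast(adjust_intensity(img)))
```

Note the paper multiplies the top-hat and bottom-hat outcomes; the additive combination above is a common alternative used here only to keep the sketch simple.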
III. RESULTS
The synthetic frequency-domain fluorescence data were generated on a simulated 2 cm × 2 cm × 2 cm cube, with its center located at the origin of the coordinates system, illuminated by a simulated point laser beam with a Dirac profile at the z = 1 plane and angular modulation frequency ω = 100 MHz. The refractive mismatch at the boundary of the region was set at R = 2.34 and the average speed of light in the medium at c = 2.55·10^10 cm/s.
Data were generated for two different super-ellipsoid models at two different locations. The first super-ellipsoid model was a sphere with diameter 0.25 cm and the second model was an ellipsoid with x-axis diameter 0.125 cm, y-axis diameter 0.25 cm and z-axis diameter 0.125 cm. These two models were firstly located at the origin of the coordinates system, while their second location was off-axis, with their center located at the point (0.25, 0.25, 0.5).
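The interior/exterior test these models rely on can be sketched with the common super-ellipsoid implicit function; treating the quoted values as axis diameters, the semi-axes below are half of them (an assumption, since the text's notation is ambiguous):

```python
import numpy as np

# Inside/on/outside test against the super-ellipsoid implicit surface used to
# model the fluorophore distribution. a1, a2, a3 are semi-axes (assumed to be
# half the quoted diameters) and eps1, eps2 control the curvature.

def superellipsoid(x, y, z, a=(1.0, 1.0, 1.0), eps=(1.0, 1.0), center=(0.0, 0.0, 0.0)):
    """Return f(x, y, z): < 1 inside, == 1 on the surface, > 1 outside."""
    a1, a2, a3 = a
    e1, e2 = eps
    u = ((x - center[0]) / a1) ** 2   # squaring first avoids fractional powers
    v = ((y - center[1]) / a2) ** 2   # of negative bases
    w = ((z - center[2]) / a3) ** 2
    return (u ** (1.0 / e2) + v ** (1.0 / e2)) ** (e2 / e1) + w ** (1.0 / e1)

# Sphere of diameter 0.25 cm at the origin (eps = (1, 1) reduces to an ellipsoid).
sphere = dict(a=(0.125, 0.125, 0.125), eps=(1.0, 1.0))
# Ellipsoid with axis diameters 0.125, 0.25, 0.125 cm, centered off-axis.
ellipsoid = dict(a=(0.0625, 0.125, 0.0625), eps=(1.0, 1.0), center=(0.25, 0.25, 0.5))

inside = superellipsoid(0.0, 0.0, 0.0, **sphere) < 1        # origin lies inside the sphere
on_surface = np.isclose(superellipsoid(0.125, 0.0, 0.0, **sphere), 1.0)
outside = superellipsoid(0.0, 0.0, 0.0, **ellipsoid) > 1    # origin lies outside the off-axis model
```

This f(x, y, z) is exactly the quantity the absorption-coefficient switch of (23) evaluates at each mesh node.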
For the simulated absorption coefficient, the values μa,exc^tis = 0.023 cm⁻¹ and μa,em^tis = 0.0289 cm⁻¹ were chosen for the background, and μa,exc^fluo = 0.5 cm⁻¹ and μa,em^fluo = 0.0506 cm⁻¹ for the fluorophore. The lifetime of the fluorophore was taken to be τ = 0.56·10⁻⁹ s, independent of the position, and the quantum efficiency was η = 0.016, to match the corresponding properties of the Indocyanine Green (ICG) dye, commonly used in experiments. Finally, the reduced scattering coefficient was taken as μs′ = 9.84 cm⁻¹ in both the target and the background, and was considered to be constant for both excitation and emission wavelengths.
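As a quick check, the background diffusion coefficient implied by these properties can be evaluated; the formula below is the commonly used form of (2), D = 1/(3(μs′ + α·μa)), taken here as an assumption:

```python
# Photon diffusion coefficient for the background optical properties quoted
# above, assuming the common form D = 1 / (3 (mu_s' + alpha * mu_a)).
mu_s_prime = 9.84   # reduced scattering coefficient, cm^-1
mu_a_exc = 0.023    # background absorption at the excitation wavelength, cm^-1
alpha = 0.5         # absorption-related factor for tissue (Section II.A)

D_exc = 1.0 / (3.0 * (mu_s_prime + alpha * mu_a_exc))  # ~0.034 cm
```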
The region was discretized into 24576 uniformly distributed linear tetrahedral elements, corresponding to 4913 nodal points. The simulations were performed on a 2.16 GHz Dual Core unit with 2 GB RAM and required approximately 30 minutes for the forward solver to converge to a solution. In Fig. 2 the outcomes of the forward solver for the four cases of simulation measurements are presented as sliced surfaces at the z = 1 plane.
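These mesh figures are mutually consistent with a uniform 17 × 17 × 17 nodal grid over the cube, each of the 16³ cells split into six tetrahedra; the grid size is an inference, not stated in the text:

```python
# The quoted mesh statistics match a uniform 17 x 17 x 17 nodal grid on the
# 2 cm cube, with each of the 16^3 cells split into six tetrahedra.
n = 17                       # nodes per edge (inferred, not stated in the text)
nodes = n ** 3               # 4913 nodal points
elements = (n - 1) ** 3 * 6  # 24576 linear tetrahedral elements
```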
Fig. 2. Surface slices presenting the fluence exiting the cube for the four cases of super-ellipsoid models. (a) and (b) show the spherical model located at the center of the cube (a) and off-axis (b). (c) and (d) present the ellipsoid model for the same locations respectively.
By application of (27) the slices of Fig. 2 are transformed to image data and presented in Fig. 3. The calibration parameters used for this transformation were chosen to correspond to an 800×800 pixel camera with a telecentric lens. The fluorescence filters transmission factor was set at QE = 0.9, to correspond to most high-end commercial fluorescence filters at the emission wavelength of ICG.
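Since expression (27) itself is not reproduced above, the conversion behind Fig. 3 can only be sketched schematically: scale the surface fluence by the transmission factor QE = 0.9 and quantize to an 8-bit 800 × 800 image. The fluence array below is a random stand-in:

```python
import numpy as np

# Schematic stand-in for the fluence-to-image conversion of (27): scale the
# surface fluence by the filter transmission factor QE = 0.9, normalize and
# quantize to an 8-bit 800 x 800 image, as used for Fig. 3.
rng = np.random.default_rng(1)
fluence = rng.random((800, 800))          # stand-in for the simulated surface fluence

QE = 0.9
intensity = QE * fluence / fluence.max()  # transmission factor, then normalization
image = (255 * intensity).astype(np.uint8)
```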
Fig. 3. Images after application of the fluence transformation. (a) and (b) show the spherical model located at the center of the cube (a) and off-axis (b). (c) and (d) present the ellipsoid model for the same locations respectively.
Fig. 4 shows the restored images after the preprocessing algorithms. The difference between Fig. 3 and Fig. 4 is obvious: the region-of-interest in Fig. 4 is well determined and the contrast enhancement is substantial.
Fig. 5. Segmented images. (a) and (b) show the spherical model located at the center of the cube (a) and off-axis (b). (c) and (d) present the ellipsoid model for the same locations respectively.
At this point the extraction of the intensity histogram and the geometrical properties of the region-of-interest is possible, which will lead to a more feasible comparison between real and simulated images.
IV. CONCLUSION
In this paper a new method for the processing of the forward solver outcomes in fluorescence molecular imaging has been presented. This method is based on computer vision algorithms and succeeds in isolating the region-of-interest from the noisy background, a very important aspect for the convergence of the inverse problem.
Another important aspect of this work is the introduction of the super-ellipsoids as the simulation models. With only eleven parameters, an excessively large number of tumour simulations can be produced automatically, making the required initial values of the fluorophore distribution available. This is of great importance, as the lack of these values is mostly responsible for the time-consuming iterations during the solution of the inverse problem.
Fig. 4. Restored images after application of the preprocessing algorithms. (a) and (b) show the spherical model located at the center of the cube (a) and off-axis (b). (c) and (d) present the ellipsoid model for the same locations respectively.
Finally, in Fig. 5 the segmented images are presented. Comparing Fig. 4 and Fig. 5, one can notice the accurate segmentation that the custom watershed transformation has achieved.
REFERENCES
[1] V. Ntziachristos, "Fluorescence Molecular Imaging," Annu. Rev. Biomed. Eng., vol. 8, pp. 1-33, 2006.
[2] A. Joshi, W. Bangerth, E.M. Sevick-Muraca, "Adaptive finite elements based tomography for fluorescence optical imaging in tissue," Opt. Express, vol. 12, pp. 5402-5417, 2004.
[3] A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 1: Single Scattering and Transport Theory, Academic Press, New York, 1978.
[4] T.K. Sengupta, S.B. Talla, S.C. Pradhana, "Galerkin finite element methods for wave problems," Sadhana, vol. 30, pp. 611-623, 2005.
[5] P.L. George, H. Borouchaki, Delaunay Triangulation and Meshing, Hermes, Paris, 1998.
[6] Y. Fougerolle, A. Gribok, S. Foufou, F. Truchetet, M. Abidi, "Implicit Surface Modeling using Supershapes and R-functions," in Proc. of Pacific Graphics '05, Macao, China, pp. 169-172, 2005.
[7] L. Chevalier, F. Jaillet, A. Baskurt, "Segmentation and superquadric modeling of 3D objects," in Proceedings of WSCG 2003, Plzen, Czech Republic, 2003.
[8] J.A. Tyrrell, B. Roysam, E. di Tomaso, R. Tong, E.B. Brown, R.K. Jain, "Robust 3-D modeling of tumor microvasculature using superellipsoids," IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Arlington, Virginia, USA, pp. 185-188, 2006.
[9] J. Ripoll, V. Ntziachristos, "Imaging scattering media from a distance: Theory and applications of non-contact optical tomography," Mod. Phys. Lett. B, vol. 18, pp. 1-29, 2004.
[10] F. Meyer, P. Maragos, "Multiscale Morphological Segmentations Based on Watershed, Flooding, and Eikonal PDE," Proc. Int'l Conf. on Scale-Space Theories in Computer Vision, Lecture Notes in Computer Science, Springer-Verlag, vol. 1682, pp. 351-362, 1999.
[11] S.R. Arridge, "Optical tomography in medical imaging," Inverse Probl., vol. 15, pp. R41-R93, 1999.
[12] J. Ripoll, D. Yessayan, G. Zacharakis, V. Ntziachristos, "Experimental determination of photon propagation in highly absorbing and scattering media," J. Opt. Soc. Am. A, vol. 22, pp. 546-551, 2005.
[13] H. Dehghani, B. Pogue, "Near infrared spectroscopic imaging: Theory," in K.D. Paulsen, P.M. Meaney, L.C. Gilman (editors), Alternative Breast Imaging: Four Model-Based Approaches, vol. 778, pp. 183-199, Netherlands, Springer, 2005.
[14] M. Guven, B. Yazici, V. Ntziachristos, "Fluorescence optical tomography with a priori information," in F.S. Azar (editor), Multimodal Biomedical Imaging II, Proc. SPIE, 643107, 2007.
[15] P.P. Silvester, R.L. Ferrari, Finite Elements for Electrical Engineers, Cambridge University Press, Third Edition, 1996.
[16] T. Tarvainen, "Computational methods for light transport in optical tomography," Doctoral Dissertation, Department of Physics, University of Kuopio, Finland, 2006.
[17] R.Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses," IEEE J. Robot. Automat., vol. RA-3, pp. 323-344, 1987.
[18] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, The MIT Press, Fourth printing, 2001.
[19] M.T. Ahmed, A.A. Farag, "Differential Methods for Nonmetric Calibration of Camera Lens Distortion," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'01, vol. 2, pp. 477-482, 2001.