

Page 1: Automated 3-D Reconstruction Using a Scanning Electron Microscope — clifton.mech.northwestern.edu/~me395/docu/sutton.pdf

Automated 3-D Reconstruction Using a Scanning Electron Microscope

Nicolas Cornille†‡ Dorian Garcia† Michael A. Sutton† Stephen R. McNeill†

Jean-José Orteu‡

† University of South Carolina, Department of Mechanical Engineering

300 Main St. — Columbia, SC 29308 (USA), e-mail: [email protected]

‡ École des Mines d'Albi, CROMeP, Campus Jarlard

81013 Albi Cedex 09 (France), e-mail: [email protected]

Abstract

Methods for both the accurate calibration and the use of an Environmental Scanning Electron Microscope (ESEM) for accurate 3-D reconstruction are described. Unlike previous work, the proposed methodology requires neither known motions of a target nor a calibration target with accurate fiducial marks. Experimental results from the calibration studies show the necessity of taking into account and correcting for distortions in the SEM imaging process. Moreover, the presence of high-frequency components in the distortion field demonstrates that classic parametric distortion models are not sufficient to model distortions in a typical ESEM system. Results from preliminary 3-D shape measurements indicate that the calibration process removes measurement bias and reduces random errors to a range of ±0.05 pixel, which corresponds to ±43 nanometers for the objects studied in this work.

Keywords: scanning electron microscope, accurate calibration, digital image correlation, stereo-vision, 3-D shape from motion.

1 Introduction

The increasing interest in micro- and nano-scale investigations of material behavior at reduced length scales requires the development of high-magnification, non-contacting, 3-D shape measuring systems. Optical methods have emerged as a de facto standard for 2-D and 3-D macro-scale measurements. Since its early development at the University of South Carolina [33], the digital image correlation (DIC) technique for measuring two- and three-dimensional shapes and/or deformations has been shown to be a versatile and effective measurement method due to its high spatial resolution, its high sensitivity and its non-contacting nature. However, the extension of DIC to the micro- or nano-scale with an SEM system faces the challenges of calibrating the imaging system, as well as measuring 3-D shapes using the single imaging sensor usually available.

Many authors have successfully applied the DIC technique to a wide range of macro-scale problems [15, 22, 21, 11, 26, 34, 6, 32]. It has become commonplace to try to apply DIC at the micro- or nano-scale, using either optical microscopy or electron microscopy [31, 24, 12, 18, 25, 36]. However, few authors have investigated the problem of the accurate calibration of their micro- or nano-scale imaging systems, and specifically the determination and correction of the underlying distortions in the measurement process. One reason may be that the obvious complexity of high-magnification imaging systems weakens the common underlying assumptions in parametric distortion models (radial, decentering, prismatic, . . . ) commonly used to correct simple lens systems (e.g., digital cameras) [2, 7].

Recently, Schreier et al. proposed a new methodology to accurately calibrate any imaging sensor by correcting a priori for the distortion [30, 29] using a non-parametric model. The a priori correction of the distortion transforms the imaging sensor into a virtual distortion-free sensor plane using translations of a grid-less planar target. The same target can then be used to calibrate this ideal virtual imaging sensor using unknown arbitrary motions. As opposed to classical calibration techniques relying on a dedicated target marked with fiducial points of some sort (ellipses, line crossings, . . . ), the approach of Schreier et al. can be applied to any randomly textured planar object (the so-called "speckle pattern"). Since it is rather difficult, if not impossible, to produce a proper classical calibration target designed to be imaged at high magnification, this method appears well suited for calibrating an SEM, requiring only a suitable planar speckle pattern.

This paper presents our preliminary results for (a) the accurate calibration of an SEM system and (b) the application of the calibrated SEM system for 3-D shape measurement using a series of arbitrary specimen motions. The results indicate that two known translations of a randomly textured specimen are sufficient to transform an SEM into an accurate 3-D shape measurement system for high-magnification studies.

2 SEM Imaging Considerations

The SEM imaging process is based upon the interaction between atoms of the observed specimen and the SEM electron beam: a heated filament generates electrons which are accelerated, concentrated and focused using an array of electromagnetic lenses. When the electron beam strikes the specimen, a variety of signals are emitted which can be measured by dedicated detectors. Among these emitted signals, two are of particular interest: the secondary electrons (SE) and the back-scattered electrons (BSE). The SE originate close to the


surface of the specimen and are correlated to its topography: the SE signal amplitude is a function of the local orientation of the specimen surface with respect to the detector location. Relative to SE, the BSE signal is emitted from interactions that occur over a slightly larger depth under the specimen surface and is mainly correlated to the local atomic composition.

Fig. 1 — Difference between acquisition with the SE detector (left) and the BSE detector (right): the SE detector is strongly influenced by sample topography, while the BSE detector is most strongly influenced by local specimen composition.

2.1 Operating Conditions

SEM operational parameters such as the detector type, accelerating voltage, working distance, etc. will affect the quality of the images. In this regard, the major requirements for image correlation in an SEM are (a) adequate image contrast, (b) random texture in the images, (c) appropriate spatial frequency in the random texture, (d) temporal invariance in the images and (e) minimal image changes during the rigid-body motion of the specimen. Since SE detector imaging (which depends on the topography of the sample) will violate (e) in most cases, BSE imaging is preferred. Moreover, the surface texture is improved when using the BSE detector (see Fig. 1).

Since BSE imaging is also slightly affected by topography, in this work we chose to acquire all BSE images at a low accelerating voltage. In this way, the primary beam electrons do not penetrate deeply into the specimen. Another advantage of a low-voltage incident beam is that surface details are enhanced in comparison to high voltage [27].

2.2 Parallel and Perspective Projections

Fig. 2 — At low magnification, perspective projection is used. At high magnification, the projection rays from the specimen to the SEM imaging sensor are approximately parallel and parallel projection is typically employed.

Figure 2 shows a schematic of two common models for the SEM imaging process. At low magnification (less than 1000×), the general model of perspective (or central) projection is applied because the observed area and the electron beam sweep angle are both large. At higher magnification, the electron beam scanning angle is small and the projection rays can be considered parallel: the center of projection is at infinity and parallel projection is typically assumed.
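
The practical difference between the two models can be illustrated with a short sketch (illustrative numbers only, not tied to the SEM geometry): under perspective projection the image position of a point depends on its depth, whereas under parallel projection it does not.

```python
import numpy as np

def project_perspective(X, f):
    """Central projection: rays converge at the optical center."""
    x, y, z = X
    return np.array([f * x / z, f * y / z])

def project_parallel(X, s):
    """Parallel (scaled orthographic) projection: depth is ignored."""
    x, y, z = X
    return np.array([s * x, s * y])

# Two points sharing (x, y) but lying at different depths.
P1 = np.array([1.0, 1.0, 10.0])
P2 = np.array([1.0, 1.0, 20.0])

# Perspective separates them; parallel projection maps them to the same pixel.
persp = (project_perspective(P1, f=100.0), project_perspective(P2, f=100.0))
para = (project_parallel(P1, s=10.0), project_parallel(P2, s=10.0))
```

This is why, at high magnification, depth information cannot be recovered from a single view scale change alone and rigid-body motions of the specimen are needed.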

In addition to these projection issues, the SEM imaging process is obviously not distortion-free. These distortions must be taken into account and, as we will see in Section 5.1, classic parametric models (radial and tangential distortion effects) are clearly not sufficient. The calibration procedure we use is explained in Section 3.

3 Calibration Procedure

Our calibration procedure for the SEM follows the methodology developed at the University of South Carolina [29]. It is a two-step process:

1. Determination of the distortion removal function (warping function) based on a series of in-plane translations of a flat, randomly textured, calibration target. This warping function transforms the SEM imaging system into an ideal, distortion-free, virtual imaging sensor.

2. Calibration of this virtual imaging sensor through a traditional bundle-adjustment calibration technique. This step requires another image sequence of the same target undergoing arbitrary rigid-body motions.

As will be shown below, the first step in the calibration process uses digital image correlation to precisely measure the displacements¹ undergone throughout the image sequence by a set of arbitrarily chosen points: the so-called "image matching" or "image registration" problem. This novel approach greatly simplifies the second step in the process by completely eliminating the need for a calibration target made up of fiducial markers such as ellipses or crossing lines. This is particularly important since, for obvious reasons, it is rather difficult, if not impossible, to produce a proper dedicated target which is suitable at high magnifications.

3.1 A priori Distortion Removal

Fig. 3 — Motions of the calibration target made with the translation stage for the estimation of the distortion removal functions: two 100 µm translations, A→B and A→C, at 90° to each other in the specimen plane. The given numerical values are suitable for magnification 200×.

The determination of the warping function which transforms the physical imaging sensor into an ideal, virtual sensor plane is accomplished by acquiring a series of images of a randomly textured planar target undergoing in-plane translations [30, 29]. The theoretical background of this technique comes from the fact that any perspective transformation (and thus any affine transformation) transforms a line into a line [5]. While only two known, non-parallel translations are required to determine the a priori distortion removal function, the accuracy of the warping functions can be greatly improved using additional (unknown) translations. A typical translation sequence is illustrated in Fig. 3 (AB and AC are usually chosen as the two known motions).

¹ A displacement of a given object point in image space is often called a "disparity". A set of disparities relating the same two images is commonly referred to as a "disparity map".

As noted previously, the disparity maps for each image of the translation sequence with respect to a common reference image are computed using digital image correlation. To establish the warping function, software has been developed [14] to accurately determine the two B-spline vector functions relating image coordinates to coordinates on a virtual sensor plane; here, the plane of the calibration object serves as the virtual sensor plane.
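
As a rough illustration of the idea (not the authors' software [14]), a smooth non-parametric distortion model can be represented by two spline surfaces, one per displacement component, which then warp distorted pixel coordinates onto the virtual sensor plane. The grid and distortion values below are synthetic stand-ins, assuming SciPy is available; in the real procedure the samples come from DIC disparity maps.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic stand-in: distortion components sampled on a coarse pixel grid
# (in the real procedure these come from measured disparities, not a formula).
xs = np.linspace(0, 1024, 33)
ys = np.linspace(0, 884, 28)
gx, gy = np.meshgrid(xs, ys, indexing="ij")
dx = 1e-6 * (gx - 512) * (gy - 442)    # smooth fake x-distortion [pixel]
dy = 5e-7 * (gx - 512) ** 2 / 512.0    # smooth fake y-distortion [pixel]

# Two spline components of the vector warping function.
spline_dx = RectBivariateSpline(xs, ys, dx)
spline_dy = RectBivariateSpline(xs, ys, dy)

def undistort(x, y):
    """Warp distorted image coordinates onto the virtual sensor plane."""
    return (x - spline_dx(x, y)[0, 0], y - spline_dy(x, y)[0, 0])
```

Evaluating the splines at any sub-pixel location gives a continuous, non-parametric correction, which is what allows the high-frequency components discussed in Section 5.1 to be captured.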

3.2 Calibration

After determining the warping function that establishes the ideal, virtual sensor plane, the virtual imaging system can be calibrated using bundle adjustment techniques. The bundle adjustment technique was originally developed for photogrammetry applications [3, 16, 17], i.e., shape measurement using multiple views of an object. The technique has since been used with great success for various single-camera calibration applications [4, 19]. It is clear that a reliable shape measurement using different views of a rigid object can only be made if sufficiently large triangulation angles are realized between the different orientations of the imaging system. Equivalently, sufficient triangulation angles can be realized using a stationary imaging system by rotating the calibration object in front of it. This principle is used to achieve reliable calibration of the virtual imaging system by moving the calibration target with the SEM rotation stage.
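
For reference, the bundle adjustment step can be written as the standard joint minimization of the reprojection error over the intrinsic parameters K, the per-view poses (R_j, t_j) and the target points M_i (a textbook formulation, not notation taken from [3, 16, 17]):

```latex
\min_{\mathbf{K},\;\{\mathbf{R}_j,\mathbf{t}_j\},\;\{\mathbf{M}_i\}}
\;\sum_{i,j}\left\|\,\mathbf{m}_{ij}-\pi\!\left(\mathbf{K}\,[\mathbf{R}_j\mid\mathbf{t}_j]\,\mathbf{M}_i\right)\right\|^2
```

where m_ij is the observed image point of target point M_i in view j and π denotes the perspective division.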

4 Automated 3-D Reconstruction

Fig. 4 — Steps of the automated 3-D reconstruction: feature points are detected in each image (Image 1, Image 2, . . . , Image n); point sets are matched between images; the epipolar geometry is estimated, from which the motion is estimated; dense matching is performed; and, together with the calibration parameters, triangulation yields the 3-D shape.

Fig. 5 — Measuring the 3-D shape of a moving specimen is equivalent to measuring the shape of a static specimen with a moving imaging sensor: a point M is imaged as m1, m2 and m3 from sensor locations C1, C2 and C3, related by the rigid-body transformations T2→1 and T2→3.

The most common approach for the calibration and use of a single-sensor imaging system in 3-D surface reconstruction is to require accurate knowledge of the rigid-body object motions within a sequence of images (see Fig. 4). To mitigate the requirement for accurate motions, a novel approach is proposed that does not require any information regarding the rigid-body motions.

Indeed, a relationship exists between any image pair in the scene: the epipolar geometry [5, 23]. This geometry depends upon the relative positions of the two imaging sensor locations. If this geometry can be estimated from a pair of images in the sequence, then the rigid-body transformation between the two respective views can be estimated. By repeating this process for each image pair, the motion between each pair of views, and hence the general motion of the scene, is obtained.

The main problem in implementing this approach is the estimation of the epipolar geometry from a pair of images. This is a rather delicate process and will not be fully developed here; the reader can refer to [1, 35, 37] for details. The first stage in this process is the extraction of some points (called feature points) from each image in order to obtain appropriate corresponding matches throughout the image sequence.

Feature Point Extraction

Point detectors are most often compared with respect to their localization accuracy. For this work, where the precise location of a feature point in the image (e.g., a corner) is not required but the ability to repeatedly identify corresponding points in the detection process is paramount, the work of Schmid et al. [28] indicates that the best detector under this repeatability criterion is the Harris detector [9].
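
A minimal sketch of the Harris response follows (the standard formulation of [9], using SciPy derivatives; the parameter values are illustrative, not those used in this work):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.5, k=0.04):
    """Harris cornerness R = det(M) - k * trace(M)^2 per pixel, where M is
    the Gaussian-weighted structure tensor of the image gradients."""
    img = img.astype(float)
    ix = sobel(img, axis=1)                 # horizontal gradient
    iy = sobel(img, axis=0)                 # vertical gradient
    a = gaussian_filter(ix * ix, sigma)
    b = gaussian_filter(ix * iy, sigma)
    c = gaussian_filter(iy * iy, sigma)
    return (a * c - b * b) - k * (a + c) ** 2

# A bright square on a dark background: the response peaks near its corners,
# is negative along its edges, and is near zero in flat regions.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
R = harris_response(img)
```

High R marks corner-like texture, which is exactly the repeatable structure wanted from the speckle images.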

Feature Point Matching

In order to estimate the epipolar geometry between each pair of images, feature points are optimally matched using a Zero-mean Normalized Cross-Correlation (ZNCC) function. To prevent false matches, a robust method (Least Median of Squares) is used to detect and remove outliers.
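
A minimal ZNCC sketch is given below (the usual definition; the window size matches the 15 × 15 pixel windows reported in Section 5.2). Because each window is centered and normalized, the score is invariant to affine grey-level changes, which is valuable when image brightness drifts between SEM scans:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size windows;
    +1 is a perfect match, invariant to affine grey-level changes."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
w = rng.random((15, 15))            # a 15x15 correlation window
same = zncc(w, 2.0 * w + 5.0)       # brightness/contrast change only
diff = zncc(w, rng.random((15, 15)))
```

Here `same` is 1 despite the grey-level change, while `diff` for an unrelated window falls well below the 70% threshold used later.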

Epipolar Geometry and Motion Estimation

A minimum of seven point correspondences is required to estimate the epipolar geometry with an appropriate parameterization. In practice, an 8-point algorithm [20] is implemented which uses eight or more point pairs. To improve the stability of the process, isotropic scaling is applied to the coordinates [10] before carrying out the 8-point algorithm. Finally, an optimization approach is used to minimize a physically meaningful quantity such as the distance from each point to the appropriate epipolar line. Then, using the intrinsic parameters given by the calibration, the motion is recovered [13, 38] (note that specific scene information is required to recover the scale factor for the magnitude of the translation).
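
A compact sketch of the normalized 8-point algorithm with isotropic scaling [10, 20] is shown below; the synthetic two-view data at the bottom is hypothetical and serves only to exercise the estimator:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F such that
    x2_h^T F x1_h ~ 0, using isotropic scaling of the coordinates."""
    def normalize(x):
        c = x.mean(axis=0)                                   # centroid -> origin
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        xh = np.column_stack([x, np.ones(len(x))]) @ T.T
        return xh, T
    x1h, T1 = normalize(x1)
    x2h, T2 = normalize(x2)
    # Each row encodes x2^T F x1 = 0 for one correspondence.
    A = np.column_stack([x2h[:, [0]] * x1h, x2h[:, [1]] * x1h, x1h])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                              # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                                     # denormalize

# Hypothetical synthetic two-view setup (unit focal length, pure translation).
rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))             # 3-D points
t = np.array([0.5, 0.1, 0.0])                                # second view pose
x1 = X[:, :2] / X[:, 2:]                                     # view 1 projections
Xc2 = X + t
x2 = Xc2[:, :2] / Xc2[:, 2:]                                 # view 2 projections
F = eight_point(x1, x2)
```

For noise-free correspondences the epipolar residual x2ᵀ F x1 is essentially zero; with real matches, the Least Median of Squares step removes the outliers that would otherwise corrupt this linear estimate.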

Dense Matching and Refinement of the Estimated Motion

After the epipolar geometry is estimated, one approach to complete the matching process restricts the search area for each feature point to its epipolar line and the surrounding area, oftentimes referred to as the "epipolar band" (thick epipolar line). In this work, a different approach is employed. Using the feature points for which corresponding matching points have been identified, an estimate of the disparity map is determined in order to make the dense correlation easier: a triangular mesh is generated [8] from the matched points and the disparity at each remaining point is estimated by interpolation within the triangle which contains it. Once the dense disparity map is computed, the epipolar geometry is improved, as well as the corresponding motion.
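
The seeding step can be sketched as follows (a simplified stand-in for the mesh-based initial guess, assuming SciPy; the points and disparities are hypothetical):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Hypothetical sparse matches: feature points in image 1 and their disparities.
pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
disp = np.array([[1.0, 0.0], [2.0, 0.0], [1.0, 1.0], [2.0, 1.0], [1.5, 0.5]])

# Triangulate the matched points and interpolate the disparity linearly
# inside each triangle to seed the dense correlation search.
tri = Delaunay(pts)
guess = LinearNDInterpolator(tri, disp)
d = guess(25.0, 25.0)   # initial disparity guess for an unmatched pixel
```

Starting the sub-pixel correlation from such an interpolated guess keeps the search local, which is both faster and less prone to false matches than scanning a whole epipolar band.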

Triangulation

As shown in Fig. 5, triangulation is used to compute the 3-D coordinates of a point that is common to at least two images. The 3-D information for each point is recovered by intersecting the different rays defined by the optical center of the sensor and the projected point in the image. Since the rays may not intersect exactly, an over-determined equation system is solved using a bundle-adjustment process to minimize the distance between the 2-D image point and the reprojection of the computed 3-D point.
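
A common linear (DLT) variant of this least-squares intersection is sketched below; it solves the over-determined system with an SVD rather than a full bundle adjustment, and the camera matrices and point are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear (DLT) triangulation: each image point contributes two rows of
    a homogeneous system A X = 0, solved in the least-squares sense by SVD."""
    A = np.vstack([
        m1[0] * P1[2] - P1[0],
        m1[1] * P1[2] - P1[1],
        m2[0] * P2[2] - P2[0],
        m2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3-D point
    return X[:3] / X[3]

# Hypothetical two-view setup: identity first camera, translated second camera.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
M = np.array([2.0, 1.0, 5.0])       # ground-truth 3-D point
m1 = P1 @ np.append(M, 1.0); m1 = m1[:2] / m1[2]
m2 = P2 @ np.append(M, 1.0); m2 = m2[:2] / m2[2]
X = triangulate(P1, P2, m1, m2)
```

With noisy rays the SVD solution minimizes an algebraic residual; the bundle-adjustment refinement described above instead minimizes the geometrically meaningful reprojection distance.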

Since the translation component of the rigid-body motion linking two views is estimated up to an unknown scale factor, the 3-D shape is also reconstructed up to the same scale factor. The scale factor can be determined by (a) knowing a distance between two points on the specimen and/or (b) knowing the magnitude of one translation component of the rigid-body motion of a given view in the image sequence.

5 Experiment and Results

Fig. 6 — Specimen used for the experiment: three fields of view (1×, 50× and 200×) of a US penny; the coin detail is imaged with the BSE detector at magnification 200×.

For all experiments, an FEI ESEM Quanta 200 is operated in high-vacuum mode (SEM mode). The operational parameters for these studies are as follows:

• BSE detector

• 200× magnification

• non-dimensional spot size of seven

• 10 mm working distance

• accelerating voltage of 8 kV

• seven-bit gray scale for all images (1024 × 884 pixels)

• 1.3 mm by 1.1 mm field of view

Figure 6 shows three fields of view for a US penny. The 200× coin detail shows a speckle texture that is adequate for selective use of digital image correlation to identify and match corresponding subsets during rigid-body motion. Note that the 200× image has a small depth range (< 50 µm) that does not introduce significant perspective distortion in the disparity maps between the images of a translation sequence. Thus, the randomly textured regions of the coin can be considered "flat" and be used for the computation of the a priori distortion removal function.

5.1 Calibration

Fig. 7 — Overview of the calibration procedure: the first image of the translation sequence serves as the reference image and is correlated (subset size 35) with all other images of both the translation sequence (29 images) and the arbitrary motion sequence (24 images).

Fig. 8 — BSE detector images (200× and a closer view) of the planar aluminum calibration target covered with a thin speckle-pattern layer of gold realized by microlithography.

Fig. 9 — Setup of the experiment: the coin to measure (a US penny) is attached using a thin carbon adhesive disc to an aluminum wafer covered with a micro-scale gold speckle pattern deposited by microlithography; carbon paint ensures specimen/wafer conductivity.


Fig. 10 — Distortion field computed using the specially designed planar speckle pattern: (top) x-distortion [pixel] and (bottom) y-distortion [pixel] over the image plane (x and y in pixels).

Procedure

As shown in Fig. 7, the FEI ESEM is calibrated (a) using the a priori distortion removal technique based on a 29-image sequence of a translated target and (b) using bundle adjustment calibration of the perspective model based on a 24-image sequence of the same target undergoing arbitrary rigid-body motions.

Since the a priori distortion removal technique requires a planar calibration target with an appropriate speckle pattern texture, an aluminum plate covered with a gold speckle pattern realized by a microlithography process (see Figs. 8 and 9) is employed for one set of experiments. Another set of experiments used the random texture on the US penny. Note that the relative "flatness" of the coin detail should not produce significant heterogeneities in the disparity maps when the target is slightly translated at magnification 200×.

Results Using the Micro-lithography Target

Figure 10 shows the distortion field of the SEM imaging system when using the target shown in Fig. 8. The magnitude of the distortion is up to 4 pixels in the corners of the images (size 1024 × 884). Careful inspection of the data in Fig. 10 indicates that the distortion field contains high-frequency distortion components. To investigate these features, a high-pass filter is applied to the distortion field shown in Fig. 10. Fig. 11 displays the two-dimensional form of the high-frequency components. The data in Fig. 11 clearly shows a periodic structure in the distortion along the x- and y-axes of the images, with an amplitude of up to 0.1 pixel. This high-frequency distortion field is probably related to the scan control system for the electron beam.

After performing the calibration process using bundle adjustment, the rotation sequence gives a standard deviation for the reprojection errors in the virtual sensor plane (which is the specimen plane in the reference image) of 43 nanometers (the error is less than 0.05 pixel in image space).

Fig. 11 — High-frequency components of the distortion field computed using the specially designed planar speckle pattern: (top) x-distortion [pixel] and (bottom) y-distortion [pixel] over the image plane (x and y in pixels).

Results Using Coin Texture

Figure 12 shows the distortion field of the SEM imaging system when using the random texture on the coin as the target for the a priori distortion removal technique. The results are very similar to those obtained by using the "ideal" planar target.

The high-frequency components are shown in Fig. 13. These components are slightly disturbed compared to those computed with the planar target sequence.

After performing the calibration process by bundle adjustment, the rotation sequence gives a standard deviation of the reprojection errors in the virtual sensor plane of 91 nanometers, which represents an error of less than 0.1 pixel in image space. Note that these values also include the correlation errors, which are higher with the coin texture than with the "ideal" speckle pattern used in the previous section.

Interestingly, the bundle adjustment technique re-estimates the shape of the calibration target (the coin) while also computing the perspective model parameters of the imaging system. Due to the constraint of the a priori distortion removal technique, the target is first assumed to be perfectly flat. Assuming a flat object as the initial guess for the calibration by bundle adjustment, Fig. 14 shows that the shape of the target is properly re-estimated.

5.2 3-D Reconstruction

The process described in Section 4 is used to complete the calibration and 3-D reconstruction process for a specific example. For this application, four images of the US penny were acquired at 200× (see Fig. 6) after the specimen underwent unknown rigid-body motions.


Fig. 12 — Distortion field computed using the coin texture in place of the specially designed speckle pattern: (top) x-distortion [pixel] and (bottom) y-distortion [pixel] over the image plane (x and y in pixels).

Since the US penny texture is used to obtain feature points and perform image correlation in this application, the distortions shown in Figs. 12 and 13 will be similar to those present in these images. Hence, the distortion results for this application are not reported.

Feature Point Extraction

A set of feature points is first extracted using the Harris detector. This set is then processed to keep only the best feature point in a given circular neighborhood (typically a 5-pixel radius) with respect to its Harris "cornerness" function response. Using this approach, good feature points are regularly scattered throughout the image and the epipolar geometry estimated in the next steps is likely to be better [38]. Depending upon the image being used, between 9200 and 9800 points are extracted.
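
The neighborhood filtering step can be sketched as a greedy radius-based non-maximum suppression (a plausible reading of the procedure, not the authors' exact implementation):

```python
import numpy as np

def radius_nms(points, scores, radius=5.0):
    """Keep only the strongest point per circular neighborhood: visit points
    by decreasing score and drop any point lying within `radius` of one
    already kept."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        p = points[i]
        if all(np.hypot(*(p - points[j])) > radius for j in kept):
            kept.append(i)
    return [int(i) for i in kept]

# Two nearby candidates (3 px apart) and one isolated candidate.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [20.0, 0.0]])
scores = np.array([0.9, 1.0, 0.5])
keep = radius_nms(pts, scores)   # the close pair collapses to its best point
```

Spreading the kept points over the whole image in this way improves the conditioning of the subsequent epipolar geometry estimate.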

Robust Matching

For each pair of images, feature points are matched using:

• ZNCC criterion

• 15 × 15 pixel correlation window

• correlation threshold of 70%

• Least Median of Squares method to detect and remove outliers (false matches) while estimating the epipolar geometry

An average of 3100 pairs of points is robustly matched from each set initially containing about 9500 points (about 100 initial matches were removed as outliers).

Motion Estimation

As the feature points are only extracted with pixel accuracy, the computed epipolar geometry is a first estimate for the following stage (at this point, the mean distance of points to their epipolar lines is 0.26 pixel). An estimate of the motion is recovered and a first approximation of the 3-D shape is computed. Figure 15 shows the reconstructed shape based on the 3100 matched feature points.

Fig. 13 — High-frequency components of the distortion field computed by using the coin in place of the specially designed speckle pattern: top) along the image x-axis and bottom) along the image y-axis. [Surface plots of the x- and y-distortion, in pixels, over the image plane; plots not reproduced.]

Fig. 14 — Re-estimated 3-D shape of the coin target used for the calibration by bundle adjustment. The shape is made up of three distinct areas, which correspond to the three areas of interest selected for matching by correlation in the rotation sequence (we avoid correlating areas of high curvature, where the image correlation is likely to be less accurate). [Surface plot of z (micron) over the x–y plane; plot not reproduced.]
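The epipolar geometry recovered in this stage is encoded by the fundamental matrix. A minimal sketch of one standard estimator, the normalized 8-point algorithm [10], on synthetic noise-free correspondences (the camera parameters below are made up for illustration; the paper's pipeline additionally uses Least Median of Squares for robustness):

```python
import numpy as np

def normalize(pts):
    # translate points to their centroid and scale so the mean distance
    # from the origin is sqrt(2) -- the conditioning step of [10]
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def fundamental_8pt(x1, x2):
    """Estimate F such that x2^T F x1 = 0 from >= 8 correspondences (N x 2)."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)          # least-squares null vector of A
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                 # undo the normalization
```

With exact correspondences, the point-to-epipolar-line distances are numerically zero; with real matches they give the residual quoted in the text.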

Dense Matching

From the matched feature points obtained previously, an estimate of the dense disparity map is computed and used as an initial guess for an optimization approach (maximization of the ZNCC score with an affine transformation of the correlation window). Then, the epipolar geometry is refined (the mean distance of points to their epipolar lines is now 5 × 10⁻³ pixel), improving the motion estimates.
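The paper optimizes the ZNCC score over an affine transformation of the correlation window; as a much-simplified illustration of sub-pixel refinement, the sketch below searches over sub-pixel translations only, sampling the second image by bilinear interpolation. All names, window sizes, and step values here are assumptions, not the authors' settings.

```python
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def sample(img, cy, cx, half):
    # bilinear sampling of a (2*half+1)^2 window centered at sub-pixel (cy, cx)
    Y, X = np.meshgrid(cy + np.arange(-half, half + 1),
                       cx + np.arange(-half, half + 1), indexing="ij")
    y0 = np.floor(Y).astype(int)
    x0 = np.floor(X).astype(int)
    fy, fx = Y - y0, X - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x0 + 1]
            + fy * (1 - fx) * img[y0 + 1, x0] + fy * fx * img[y0 + 1, x0 + 1])

def refine(img1, img2, p1, p2, half=7, step=0.05):
    """Refine the integer-pixel match p2 (in img2) of point p1 (in img1)
    by maximizing ZNCC over sub-pixel translations within +/- 1 pixel."""
    ref = sample(img1.astype(float), p1[0], p1[1], half)
    best, best_score = (float(p2[0]), float(p2[1])), -1.0
    for dy in np.arange(-1.0, 1.0 + step / 2, step):
        for dx in np.arange(-1.0, 1.0 + step / 2, step):
            s = zncc(ref, sample(img2.astype(float), p2[0] + dy, p2[1] + dx, half))
            if s > best_score:
                best, best_score = (p2[0] + dy, p2[1] + dx), s
    return best, best_score
```

In practice a gradient-based optimizer over the full affine window transformation, as in the paper, is far more efficient and accurate than this grid search.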

Triangulation

Figure 16 shows the 3-D shape of the coin detail reconstructed by triangulation of the dense sub-pixel disparity map. The results in Figure 16 confirm that the calibration and 3-D reconstruction processes can be performed with arbitrary rigid-body motions using a single-sensor imaging system such as the FEI ESEM.

Fig. 15 — Reconstructed 3-D shape based on the 3100 matched feature points.

Fig. 16 — Reconstructed 3-D shape after dense sub-pixel matching by image correlation (rendering of 26400 sub-sampled 3-D points).
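Given the two estimated projection matrices, each matched point pair is lifted to a 3-D point by triangulation. A minimal sketch of linear (DLT) triangulation, one standard way to implement this step; the projection matrices in the usage example are illustrative, not those of the ESEM:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.

    P1, P2: 3x4 projection matrices; x1, x2: 2-vectors of pixel coordinates.
    Each view contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution: last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize
```

Applied to every pixel of the dense disparity map, this produces the point cloud rendered in Figure 16.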

6 Conclusion

Using a priori distortion removal methods, an FEI ESEM system has been calibrated and used to accurately measure the 3-D shape of an object. The experimental results from the calibration studies show the necessity of taking into account and correcting for distortions in the SEM imaging process. Moreover, the presence of high-frequency components in the distortion field demonstrates that classic parametric distortion models are not sufficient in the case of an SEM. Furthermore, the results from preliminary 3-D shape measurements are promising; calibration accuracy is estimated to be ±0.05 pixel, which corresponds to ±43 nanometers for the objects studied in this work.

7 Future Work

Work is on-going to utilize the a priori distortion removal process and extend the work to the accurate measurement of strains in SEM systems. Issues such as temporal stability, the presence of an unknown scale factor in the 3-D reconstruction process, and the accuracy of the measurements will be primary areas of emphasis.

Acknowledgment

The authors wish to thank Dr. Oscar Dillon, former director of the CMS division at NSF, and Dr. Julius Dasch at NASA HQ for their support of this work through grants NSF-CMS-0201345 and NASA NCC5-174. In addition, the support of Dr. Dana Dunkelberger, Director of the USC Microscopy Center, in providing unlimited access to the FEI ESEM for our baseline studies is gratefully acknowledged.

References

[1] X. Armangué and J. Salvi. Overall view regarding fundamental matrix estimation. Image and Vision Computing, 21(2):205–220, 2003.

[2] Horst A. Beyer. Accurate calibration of CCD cameras. In Conference on Computer Vision and Pattern Recognition, 1992.

[3] D. C. Brown. The bundle adjustment — progress and prospects. Int. Archives of Photogrammetry, 21(3), 1976.

[4] M. Devy, V. Garric, and J.-J. Orteu. Camera calibration from multiple views of a 2D object using a global non-linear minimization method. In International Conference on Intelligent Robots and Systems, Grenoble (France), Sep. 1997.

[5] Olivier Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993. ISBN 0-262-06158-9.

[6] K. Galanulis and A. Hofmann. Determination of forming limit diagrams using an optical measurement system. In 7th International Conference on Sheet Metal, pages 245–252, Erlangen (Germany), Sep. 1999.

[7] D. Garcia, J.-J. Orteu, and M. Devy. Accurate calibration of a stereovision sensor: comparison of different approaches. In 5th Workshop on Vision, Modeling, and Visualization, pages 25–32, Saarbrücken (Germany), Nov. 2000.

[8] L. Guibas and J. Stolfi. Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams. ACM Transactions on Graphics, 4(2):74–123, 1985.

[9] C. Harris and M. Stephens. A combined corner and edge detector. In Alvey Vision Conference, pages 147–151, 1988.

[10] Richard Hartley. In defence of the 8-point algorithm. In Fifth International Conference on Computer Vision (ICCV'95), pages 1064–1070, Boston (MA, USA), June 1995.

[11] J. D. Helm, S. R. McNeill, and M. A. Sutton. Improved 3-D image correlation for surface displacement measurement. Optical Engineering, 35(7):1911–1920, 1996.

[12] M. Hemmleb and M. Schubert. Digital microphotogrammetry — determination of the topography of microstructures by scanning electron microscope. In Second Turkish-German Joint Geodetic Days, pages 745–752, Berlin (Germany), May 1997.

[13] T. Huang and A. Netravali. Motion and structure from feature correspondences: a review. In Proc. of the IEEE, volume 82, pages 252–258, 1994.

[14] Correlated Solutions Inc. and Dorian Garcia. Vic2D and Vic3D software. http://www.correlatedsolutions.com, 2002.

[15] Z. L. Khan-Jetter and T. C. Chu. Three-dimensional displacement measurements using digital image correlation and photogrammic analysis. Experimental Mechanics, 30(1):10–16, 1990.

[16] Karl Kraus. Photogrammetry, volume 1: Fundamentals and Standard Processes. Dümmler/Bonn, 1997. ISBN 3-427-78684-6.

[17] Karl Kraus. Photogrammetry, volume 2: Advanced Methods and Applications. Dümmler/Bonn, 1997. ISBN 3-427-78694-3.

[18] A. J. Lacey, S. Thacker, S. Crossley, and R. B. Yates. A multi-stage approach to the dense estimation of disparity from stereo SEM images. Image and Vision Computing, 16:373–383, 1998.

[19] J.-M. Lavest, M. Viala, and M. Dhome. Do we really need an accurate calibration pattern to achieve a reliable camera calibration? In 5th European Conference on Computer Vision, pages 158–174, Freiburg (Germany), 1998.

[20] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133–135, September 1981.

[21] P. F. Luo, Y. J. Chao, and M. A. Sutton. Application of stereo vision to 3-D deformation analysis in fracture mechanics. Optical Engineering, 33:981, 1994.

[22] P. F. Luo, Y. J. Chao, M. A. Sutton, and W. H. Peters. Accurate measurement of three-dimensional deformable and rigid bodies using computer vision. Experimental Mechanics, 33(2):123–132, 1993.

[23] Q.-T. Luong and O. D. Faugeras. The fundamental matrix: theory, algorithms and stability analysis. The International Journal of Computer Vision, 17(1):43–76, 1996.

[24] E. Mazza, G. Danuser, and J. Dual. Light optical measurements in microbars with nanometer resolution. Microsystem Technologies, 2:83–91, 1996.

[25] H. L. Mitchell, H. T. Kniest, and O. Won-Jin. Digital photogrammetry and microscope photographs. Photogrammetric Record, 16(94):695–704, 1999.

[26] J.-J. Orteu, V. Garric, and M. Devy. Camera calibration for 3-D reconstruction: application to the measurement of 3-D deformations on sheet metal parts. In European Symposium on Lasers, Optics and Vision in Manufacturing, Munich (Germany), June 1997.

[27] R. G. Richards, M. Wieland, and M. Textor. Advantages of stereo imaging of metallic surfaces with low voltage backscattered electrons in a field emission scanning electron microscope. Journal of Microscopy, 199:115–123, August 2000.

[28] C. Schmid, R. Mohr, and C. Bauckhage. Evaluation of interest point detectors. International Journal of Computer Vision, 37(2):151–172, 2000.

[29] H. Schreier, D. Garcia, and M. A. Sutton. Advances in light microscope stereo vision. Experimental Mechanics, 2003. Submitted.

[30] Hubert W. Schreier. Calibrated sensor and method for calibrating same. Patent pending, Nov. 2002.

[31] M. A. Sutton, T. L. Chae, J. L. Turner, and H. A. Bruck. Development of a computer vision methodology for the analysis of surface deformations in magnified images. In George F. Vander Voort, editor, MiCon 90: Advances in Video Technology for Microstructural Control, ASTM STP 1094, pages 109–132, Philadelphia (USA), 1990. American Society for Testing and Materials.

[32] M. A. Sutton, S. R. McNeill, J. D. Helm, and H. W. Schreier. Computer vision applied to shape and deformation measurement. In International Conference on Trends in Optical Nondestructive Testing and Inspection, pages 571–589, Lugano (Switzerland), 2000. Elsevier Science.

[33] M. A. Sutton, W. J. Wolters, W. H. Peters, W. F. Ranson, and S. R. McNeill. Determination of displacements using an improved digital correlation method. Image and Vision Computing, 21:133–139, 1983.

[34] P. Synnergren and M. Sjödahl. A stereoscopic digital speckle photography system for 3-D displacement field measurements. Optics and Lasers in Engineering, 31:425–443, 1999.

[35] P. Torr and D. Murray. The development and comparison of robust methods for estimating the fundamental matrix. International Journal of Computer Vision, 24(3):271–300, 1997.

[36] F. Vignon, G. Le Besnerais, D. Boivin, J.-L. Pouchou, and L. Quan. 3D reconstruction from scanning electron microscopy using stereovision and self-calibration. In Physics in Signal and Image Processing, Marseille (France), January 2001.

[37] Zhengyou Zhang. Determining the epipolar geometry and its uncertainty: a review. Technical Report 2927, INRIA, July 1996.

[38] Zhengyou Zhang. A new multistage approach to motion and structure estimation: from essential parameters to Euclidean motion via fundamental matrix. Technical Report 2910, INRIA, June 1996.