Efficient Shape Reconstruction of Microlens Using Optical Microscopy

Yangjie Wei, Member, IEEE, Chengdong Wu, Yi Wang, Fellow, IEEE, and Zaili Dong

Abstract—The imaging properties of a microlens are highly related to its 3-D profile; therefore, it is of fundamental importance to measure its 3-D geometrical characteristics with high accuracy after industrial fabrication. However, common 3-D measurement tools are difficult to use for fast, noninvasive, and precise 3-D measurement of a microlens. Depth acquisition is a direct way to understand the 3-D properties of objects in computer vision, and shape from defocus (SFD) has been demonstrated to be effective for 3-D reconstruction. In this paper, a depth reconstruction method from blurring using optical microscopy and optical diffraction is proposed to reconstruct the global shape of a microlens. First, the relationship between the intensity distribution and the depth information is introduced. Second, a blurring imaging model with optical diffraction is formulated through curve fitting, accounting for relative blurring and heat diffusion, and a new SFD method with optical diffraction and defocused images is proposed. Finally, a polydimethylsiloxane (PDMS) microlens is used to validate the proposed SFD method, and the results show that its global shape can be reconstructed with high precision. The average estimation error is 77 nm, and the time cost is reduced by 92.5% compared with atomic force microscopy scanning.

Index Terms—Microlens, optical diffusion, polydimethylsiloxane (PDMS), relative blurring, shape from defocus (SFD).

I. INTRODUCTION

A microlens is an indispensable microoptics element and is of great importance in diverse microoptoelectromechanical-system-related applications, including chip-scale atomic clocks, microatomic magnetometers, photocommunication, optoelectronics, digital displays, information storage, biochemical detection, light-emitting diodes, and digital optical imaging systems.

Manuscript received September 17, 2014; revised January 23, 2015 and March 3, 2015; accepted April 18, 2015. Date of publication July 9, 2015; date of current version November 6, 2015. This work was supported in part by the Natural Science Foundation of China under Grant 61305025 and Grant 61473282, and in part by the Fundamental Research Funds for the Central Universities under Grant N13050411.

Y. Wei and C. Wu are with the College of Information Science and Engineering, Northeastern University, Shenyang 110819, China (e-mail: [email protected]; [email protected]).

Y. Wang is with the Department of Information Technology, Uppsala University, Uppsala 752 37, Sweden, and also with Northeastern University, Shenyang 110819, China (e-mail: [email protected]).

Z. Dong is with the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIE.2015.2454480

Hence, there has been a growing demand for developing high-volume and low-cost processes to fabricate microlenses [1], [2].

With the recent progress in 3-D microlens fabrication, several fabrication processes for microlenses have been reported, such as reactive ion etching, hot embossing, the micromachining of injection molds, and electron beam lithography [3], [4]. However, no matter which fabrication method is used, each produced microlens has a rigid 3-D structure with a unique focal length. The precise knowledge of its imaging properties, i.e., the surface geometry and the refractive index, is of fundamental importance to ensure high accuracy and lens-to-lens uniformity in fabrication [5]. Because of the small dimensions and tight alignment tolerance of a microlens, the measurement of these parameters is a much more complicated process than that for normal lenses. Therefore, an accurate and efficient measurement tool to evaluate the 3-D profile of a microlens (or a microlens array) is necessary in industrial manufacturing [6]–[9], and a number of different methods have been proposed for the testing and characterization of microlenses. The most frequently used techniques are optical interferometry, scanning electron microscopy (SEM) [10], and atomic force microscopy (AFM) [11].

The most frequently used interferometers for the inspection of microlenses are the Twyman–Green [12], Mach–Zehnder [13], digital holographic [14], and phase-shifting shearing [15] interferometers. The basic procedure of these techniques is to locate the zero optical path difference of a surface to attain its relative height and then to reconstruct the 3-D surface profile of the microlens. These interferometer-based techniques have their own advantages; however, they are somewhat complicated, and their arrangements require good vibration isolation. More recently, optical coherence tomography (OCT) has become a potential technique for characterizing microoptical components and lenses [16]. OCT is a noninvasive and depth-resolving cross-sectional imaging technique for the characterization of turbid media and has become a well-established technique in biomedical diagnostics and engineering. However, the resolution of OCT, which is highly related to the wavelength of its optical source, is limited. Currently, the highest imaging resolution is only 1 μm, and it is inversely proportional to the penetration depth. In addition, the OCT systems currently on the market are expensive.

SEM produces images of a sample by scanning it with a focused beam of electrons. SEM micrographs have a large depth of field, yielding a characteristic 3-D appearance that is useful for understanding the surface structure of a sample. Furthermore, SEM can achieve a resolution better than 1 nm. However, SEM can only work in a vacuum environment or under wet conditions (in environmental SEM), and all samples must also be of an appropriate size to fit inside the specimen chamber. These disadvantages limit its application areas.

AFM is a very-high-resolution type of scanning probe microscopy, with a demonstrated resolution on the order of fractions of a nanometer. AFM provides a 3-D surface profile, and samples viewed by AFM do not require any special treatment. However, the usefulness of AFM also has limitations. For example, AFM can only image a maximum height on the order of 10–20 μm and a maximum scanning area of approximately 150 μm × 150 μm. Furthermore, the scanning speed of an AFM system is comparatively low, making it difficult to use for the efficient shape measurement of microlenses.

With the current precision and resolution improvements in optical microscopes, computer vision techniques have been used in high-resolution 3-D reconstruction [17]–[21]. Compared with the aforementioned methods, the requirements on the experimental environment, operation cost, and equipment price are much lower for optical microscopes. Furthermore, with the advantage of direct and real-time observation, optical-microscope-based technologies have been used to image micrometer-scale samples, attain real-time vision feedback, and assist AFM to improve the precision, success rate, and efficiency of micromanipulation/nanomanipulation [22]. Moreover, shape from defocus (SFD), or depth from defocus, has been demonstrated to be an effective shape reconstruction method that uses the blurring degree of region images, and it has been widely used for many macroscopic observations [23]–[29]. However, the traditional SFD method is inaccurate for high-resolution 3-D reconstruction for the following reasons.

1) The depth calculation of traditional SFD is based on the presupposition that optical diffraction can be omitted during the optical imaging process. However, owing to wave–particle duality, light deviates from straight-line propagation when it travels around small obstacles or passes through small openings, i.e., optical diffraction occurs [31]–[34].

2) The shape reconstruction precision is highly related to the defocus measurement. In traditional SFD, the defocus phenomenon is assumed to result solely from depth variation when the camera parameters of an optical imaging system are fixed. However, the intensity distribution of a point on the image plane does not converge at a point because of optical diffraction. Therefore, besides depth variation, optical diffraction also results in defocused imaging.

In our previous work, SFD with fixed camera parameters was proposed [35], [36], but optical diffraction was not considered in the defocus imaging process. Therefore, a high-precision shape reconstruction method with optical diffraction is proposed in this paper to estimate the global shape of a microlens. Our present approach is novel in several ways and provides fast, noninvasive, and precise 3-D measurement of a microlens. The innovation and contribution of this paper are as follows.

1) The accuracy problem of traditional SFD is proved. First, the main disadvantages of the techniques commonly used to measure a microlens profile are analyzed, and SFD using optical microscopy is introduced as a promising 3-D measurement method. Then, the reconstructed depth of traditional SFD and that of SFD with optical diffraction are theoretically compared for the same blurring degree, and a graphical representation of the results is shown.

2) A new SFD method considering optical diffraction is proposed. First, the relationship between Fresnel diffraction and the depth information is derived, and a blurring imaging model with optical diffraction is developed through the curve fitting of a numerical model, accounting for relative blurring and heat diffusion. Subsequently, heat diffusion equations combined with optical diffraction are formulated, and their solution is transformed into a dynamic optimization problem.

3) Finally, an experiment with a polydimethylsiloxane (PDMS) microlens is carried out, and the results show that its global shape can be reconstructed with high precision using the SFD method in this paper. Through comparison with traditional SFD without optical diffraction, it can be seen that our new SFD method decreases the reconstruction error of traditional SFD by 55%.

Fig. 1. Standard imaging theory in geometrical optics. s denotes the depth of the source point; D denotes the diameter of the lens; r is the radius of the blurring spot; X is the optical axis; YZ is the imaging plane perpendicular to optical axis X.

II. DEFOCUS IN GEOMETRICAL OPTICS

In geometrical optics, when the focal length f, the distance u of the object from the principal plane, and the distance v of the focused image from the lens plane fulfill

$$\frac{1}{u}+\frac{1}{v}=\frac{1}{f} \tag{1}$$

the image of a source point is a focused point, as shown in Fig. 1, and the imaging process is focused. Otherwise, the image is a blurred round spot, and a defocused image appears.

The intensity distribution in the blurring spot can normally be denoted by a 2-D Gaussian function, called the point spread function, i.e.,

$$h(y,z)=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{y^{2}+z^{2}}{2\sigma^{2}}\right) \tag{2}$$


where h(y, z) is the point spread function, σ denotes the spread of the Gaussian kernel, and y and z are the 2-D coordinates of a point on the image plane.

Since a blurred image can theoretically be considered the superposition of the blurred images of its individual points, the defocus imaging process can be denoted by the following convolution:

$$E(y,z)=I(y,z)*h(y,z) \tag{3}$$

where E(y, z) and I(y, z) are the defocused image and the focused image, respectively, and ∗ denotes convolution.
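A minimal sketch of the defocus model in (2) and (3), assuming SciPy is available: a synthetic image stands in for the focused image I(y, z), and `gaussian_filter` realizes the convolution with the Gaussian point spread function h(y, z). The image content and kernel width are placeholders, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
focused = rng.random((256, 256))   # placeholder for the focused image I(y, z)
sigma = 2.0                        # spread of the Gaussian kernel, in pixels

# E(y, z) = I(y, z) * h(y, z): convolution with the 2-D Gaussian PSF of (2)
defocused = gaussian_filter(focused, sigma=sigma)
```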

The radius r of the blurring spot determines the size of the pattern generated by a point light source with unit intensity, and it can be denoted as [37]

$$r=\frac{Dv}{2}\left|\frac{1}{f}-\frac{1}{v}-\frac{1}{s}\right|. \tag{4}$$

The blurring degree can be denoted by the Gaussian kernel σ, which satisfies

$$\sigma^{2}=\gamma^{2}r^{2} \tag{5}$$

where γ is a constant relating the blurring radius to the blurring degree.

When the point spread function is approximated by a shift-invariant Gaussian function, the imaging model in (3) can be formulated in terms of an isotropic heat equation, i.e.,

$$\begin{cases}\dot{u}(y,z,t)=\varepsilon\cdot\Delta u(y,z,t), & t\in(0,\infty)\\ u(y,z,0)=q(y,z)\end{cases} \tag{6}$$

where ε is the diffusion coefficient and is nonnegative, q(y, z) is the radiance, $\dot{u}\doteq\partial u/\partial t$, and Δ denotes the Laplacian operator

$$\Delta u=\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}}. \tag{7}$$
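For intuition, (6) can also be integrated directly with an explicit finite-difference scheme; running the diffusion to time t blurs the initial image like a Gaussian with σ² = 2εt. A minimal sketch under the assumptions of unit grid spacing and periodic boundaries (both our simplifications):

```python
import numpy as np

def diffuse(u0, eps, t, dt=0.1):
    """Explicit Euler integration of the isotropic heat equation (6)-(7).

    Diffusing to time t reproduces Gaussian blurring with sigma^2 = 2*eps*t
    on a unit grid; stability requires dt <= 1/(4*eps).
    """
    u = u0.astype(float).copy()
    for _ in range(int(round(t / dt))):
        # five-point Laplacian of (7), periodic boundaries via np.roll
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        u += dt * eps * lap
    return u
```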

When distance map s is not an equifocal plane, the point spread function is in general shift varying. The equivalence with the isotropic heat equation then no longer holds, and the diffusion process can be formulated in terms of the inhomogeneous diffusion equation

$$\begin{cases}\dot{u}(y,z,t)=\nabla\cdot\left(\varepsilon(y,z)\nabla u(y,z,t)\right), & t\in(0,\infty)\\ u(y,z,0)=q(y,z)\end{cases} \tag{8}$$

where ∇ and ∇· denote the gradient operator and the divergence operator, i.e.,

$$\nabla=\left[\frac{\partial}{\partial y}\ \ \frac{\partial}{\partial z}\right]^{T},\qquad \nabla\cdot=\frac{\partial}{\partial y}+\frac{\partial}{\partial z}. \tag{9}$$

Consider u(y, z, 0) to be the radiance q(y, z); the solution of the diffusion equation can then be obtained as the convolution of the image with a temporally evolving Gaussian kernel. Thus, the diffusion equation can be introduced into defocus imaging. The solution u at a time t = τ plays the role of an image E(y, z) = u(y, z, τ) captured with a certain blurring setting related to τ. However, the radiance q(y, z) is not always necessary for constructing the diffusion equation, because its diffusion toward a defocused image is similar to the diffusion between two defocused images.

Suppose there are two images E1(y, z) and E2(y, z) for two different focus settings σ1 and σ2, respectively, with σ1 < σ2 (i.e., E2(y, z) is more defocused than E1(y, z)); then, E2(y, z) can be written as

$$E_{2}(y,z)=\iint\frac{1}{2\pi\sigma_{2}^{2}}\exp\left(-\frac{(y-u)^{2}+(z-v)^{2}}{2\sigma_{2}^{2}}\right)I(u,v)\,du\,dv=\iint\frac{1}{2\pi\Delta\sigma^{2}}\exp\left(-\frac{(y-u)^{2}+(z-v)^{2}}{2\Delta\sigma^{2}}\right)E_{1}(u,v)\,du\,dv \tag{10}$$

where $\Delta\sigma^{2}\doteq\sigma_{2}^{2}-\sigma_{1}^{2}$ is the relative blurring between E1(y, z) and E2(y, z).

Therefore, a defocused image can be described by another defocused image together with the relative blurring between them, and no focused image is needed [33], [34]. The heat diffusion equations between two defocused images can be given as

$$\begin{cases}\dot{u}(y,z,t)=\nabla\cdot\left(\varepsilon(y,z)\nabla u(y,z,t)\right), & t\in(0,\infty)\\ u(y,z,0)=E_{1}(y,z)\\ u(y,z,\Delta t)=E_{2}(y,z).\end{cases} \tag{11}$$

Then, the relationship between the relative blurring and the blurring radius is

$$\Delta\sigma^{2}=\gamma^{2}\left(r_{2}^{2}-r_{1}^{2}\right) \tag{12}$$

where $r_i$ (i = 1, 2) [see (4)] is a function of the depth information.

Therefore, the blurring radius and the depth information can be calculated by solving the heat diffusion equations in (11) with two defocused images, together with (12). These are the basic principles of traditional SFD in geometrical optics.
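In this traditional setting, recovering depth amounts to inverting (12) with the blur radius given by (4). The sketch below does this with a scalar root finder; f and s0 borrow the values quoted later in Section IV, while D, γ, and the bracketing interval are invented for illustration, and the well-known two-sided ambiguity of SFD is sidestepped by bracketing a single root:

```python
import numpy as np
from scipy.optimize import brentq

f, s0 = 0.357e-3, 3.4e-3            # focal length, in-focus object distance (m)
D, gamma = 0.18e-3, 0.9             # assumed aperture diameter and gamma
v = 1.0 / (1.0 / f - 1.0 / s0)      # image distance from the lens law (1)
ds = 100e-9                         # known stage shift Delta-s between images

def r(s):
    """Blur-spot radius from (4) for a point at depth s."""
    return 0.5 * D * v * abs(1.0 / f - 1.0 / v - 1.0 / s)

def rel_blur(s):
    """Model of Delta-sigma^2 = gamma^2 (r2^2 - r1^2) from (12)."""
    return gamma**2 * (r(s + ds)**2 - r(s)**2)

measured = rel_blur(3.6e-3)         # pretend measurement for a point at 3.6 mm
s_est = brentq(lambda s: rel_blur(s) - measured, 1e-3, 10e-3)
print(s_est)                        # recovers ~3.6e-3
```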

III. SHAPE RECONSTRUCTION MODEL WITH DIFFRACTION

As discussed in Section II, the traditional SFD method calculates the depth information from the measured blurring degree of two defocused images under geometrical optics, where light travels in straight lines. In fact, even when the focused-image-forming condition in (1) is fulfilled, the intensity distribution of a point on the image plane does not converge at a point but scatters into a spot because of optical diffraction, as shown in Fig. 2, where the figure on the right is the blurred image resulting from optical diffraction. However, optical diffraction has rarely been considered in traditional SFD. In this section, we introduce the basic principle of optical diffraction into traditional SFD and develop a blurring imaging model with optical diffraction.


Fig. 2. Schematic of circular aperture diffraction.

Fig. 3. Diagrammatic sketch of the optical path.

Two typical diffraction types normally appear, i.e., Fraunhofer diffraction and Fresnel diffraction. The former, also called far-field diffraction, occurs when field waves pass through an aperture or a slit and only the size of the observed aperture image changes, owing to the far-field location of the observation and the increasingly planar nature of the outgoing diffracted waves passing through the aperture. Fresnel diffraction, or near-field diffraction, occurs when a wave passes through an aperture and diffracts in the near field, causing the observed diffraction pattern to differ in size and shape depending on the distance between the aperture and the projection [38]. When the distance increases, the outgoing diffracted waves become planar, and Fraunhofer diffraction occurs.

In the optical imaging system shown in Fig. 3, the optical diffraction that occurs is convergent-wave Fresnel diffraction [39], and the amplitude at a random point P on the imaging plane can be described as

$$E_{P}=\exp\left[ik\left(x+\frac{\rho^{2}}{2b}\right)\right]B\exp(i\beta) \tag{13}$$

$$B\exp(i\beta)=-i\,\frac{J_{1}\!\left(2\pi\lambda^{-1}y\sin u\right)}{\pi\lambda^{-1}y\sin u}+\frac{\lambda}{\pi\sin^{2}u}\cdot T \tag{14}$$

where $T=\sum_{n=2}^{\infty}\left(-iy^{-1}\sin u\right)^{n}x^{n-1}J_{n}\!\left(2\pi\lambda^{-1}y\sin u\right)$; $\sin u=D/R$; x is the movement distance of the imaging plane along optical axis X; y is the distance between P and grid origin O along the y-axis; YOZ is the imaging plane; D is the diameter of the lens; $k=2\pi/\lambda$; ρ is the distance from P to the x-axis; $J_n$ is the n-order Bessel function; λ is the wavelength of the incident light; R is the distance between the ideal imaging plane and the lens; and b is the distance between the imaging plane and the lens, which becomes $b_i$ (i = 1, 2) when the imaging plane moves forward or backward by $x_i$ (i = 1, 2).

Fig. 4. Intensity distribution curve along the optical axis.

Then, the normalized intensity distribution of P is

$$I_{P}=E_{P}\cdot E_{P}^{*}=B^{2}. \tag{15}$$

From (13) and (14), we can see that the intensity distribution of a random point P is a function of x and y, i.e., $I_P(x, y)$. Because the scale factor between the object distance and the imaging distance of a camera is the axial magnification m, the variation of the imaging distance x can be transformed into the variation of the object distance l with

$$l=\frac{x}{m}. \tag{16}$$

If we fix all the parameters of a camera, the intensity distribution resulting from the variation of x can also be expressed through the variation of l, and the distribution of Ip(l, y) is almost the same as that of Ip(x, y) because l is a linear function of x. When λ = 600 nm and sin u = 0.5, Ip(x, y) for different l is shown in Fig. 4, where we can see that Ip(x, y) is maximal when P is the intersection point of the imaging plane and the optical axis. At other positions around the intersection point, the intensity decreases with the distance from the maximal point. When l is fixed, each curve of Ip(l, y) is distributed close to a Gaussian function along the y-axis, and when depth variation l is zero, Ip(l, y) does not converge at a point, as would be expected in geometrical optics. This demonstrates that, even when the focused imaging condition of geometrical optics is fulfilled, the imaging process is not focused, owing to optical diffraction.
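For readers who want to reproduce curves like Fig. 4, the following is a direct, hedged transcription of (13)–(15) with the series T truncated at a finite order. It assumes y ≠ 0 and |x sin u / y| < 1 for convergence, and it makes no attempt to match the paper's exact normalization:

```python
import numpy as np
from scipy.special import jv   # Bessel function J_n

lam, sin_u = 600e-9, 0.5       # the paper's example values

def intensity(x, y, n_max=40):
    """I_P(x, y) = |B exp(i beta)|^2 from (13)-(15), truncated series."""
    arg = 2.0 * np.pi / lam * y * sin_u
    main = -1j * jv(1, arg) / (np.pi / lam * y * sin_u)
    T = sum((-1j * sin_u / y) ** n * x ** (n - 1) * jv(n, arg)
            for n in range(2, n_max + 1))
    return float(np.abs(main + lam / (np.pi * sin_u**2) * T) ** 2)

# e.g. intensity(0.0, 1e-6) samples the ideal image plane at y = 1 um
```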

Therefore, both depth variation and optical diffraction can result in blurred imaging, and the blurring kernel σ can be denoted as

$$\sigma=f(r_{d})=F_{d}(\text{diffraction},\,s). \tag{17}$$


Fig. 5. Depth variation and blurring degree with optical diffraction.

Fig. 6. Depth variation and blurring degree without optical diffraction.

Since Ip(l, y) with fixed l is close to a Gaussian function of y, it is easy to fit each Ip(l, y) with a Gaussian curve and to attain the Gaussian kernel σ for different l. When λ = 600 nm and sin u = 0.5, the relationship between σ and l is shown in Fig. 5, where we can see that, as depth variation l increases, Gaussian kernel σ increases as well, and that, when depth variation l is zero, σ with optical diffraction is 1.57 × 10⁻⁴. To compare with blurred imaging in geometrical optics, wavelength λ and sin u are fixed, and the relationship between σ and l in geometrical optics is calculated, as shown in Fig. 6, where we can see that, when depth variation l is zero, σ is zero. Furthermore, as depth variation l increases, the values of σ in Figs. 5 and 6 approach each other. Therefore, in a macrooptical system, it is reasonable to omit optical diffraction, but on the nanoscale, the influence of optical diffraction cannot be ignored.

Fig. 7. Fitting curve of a numerical model.

Therefore, in order to calculate the depth information from the blurring degree of a blurred image, a mathematical function between l and σ is required. In Fig. 5, we can see that the relationship between σ and l can be fitted with a quadratic curve, and the fitting curve in our paper is

$$\sigma=al^{2}+bl+c. \tag{18}$$

The fitting curve is shown in Fig. 7, where the points marked with ∗ are the values calculated with a Gaussian function, and the solid line is the fitting curve. Therefore,

$$al^{2}+bl+c-\sigma=0. \tag{19}$$

The solution of (19) is

$$l=\frac{-b\pm\sqrt{b^{2}-4a(c-\sigma)}}{2a}. \tag{20}$$

Then, the final depth of a blurred image can be calculated as

$$s=s_{0}+\frac{-b\pm\sqrt{b^{2}-4a(c-\sigma)}}{2a} \tag{21}$$

where s0 is the ideal object distance.

From (18), we can see that a, b, and c are known after the curve fitting. In order to calculate the depth information, we only need to obtain the blurring kernel σ, which can be attained from the relative blurring between two blurred images.
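The fit-and-invert step of (18)–(21) reduces to a quadratic polynomial fit. In the sketch below, the (l, σ) samples are synthetic placeholders for the Gaussian-kernel values read off curves such as Fig. 5; resolving the ± of (20) by taking the smaller-magnitude offset is our assumption, since the paper does not state the rule:

```python
import numpy as np

# synthetic sigma(l) samples standing in for the Gaussian fits of Fig. 5
l_samples = np.linspace(0.0, 2e-6, 20)
sigma_samples = 3e4 * l_samples**2 + 0.05 * l_samples + 1.57e-4

a, b, c = np.polyfit(l_samples, sigma_samples, 2)   # sigma = a l^2 + b l + c, (18)

def depth_from_sigma(sigma, s0):
    """Invert (18) as in (20)-(21), keeping the smaller-offset root."""
    disc = np.sqrt(b**2 - 4.0 * a * (c - sigma))
    roots = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
    return s0 + min(roots, key=abs)

print(depth_from_sigma(sigma_samples[10], s0=3.4e-3))
```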

Here, we do not adjust any camera parameter to obtain the blurred images E1(y, z) and E2(y, z) because, in high-resolution observation, the variation of camera parameters will destroy the camera. Instead, we capture the first blurred image E1(y, z) and then capture the second blurred image E2(y, z) after a small depth variation Δs that is known exactly through a nanoplatform. Suppose their depth maps are s1(y, z) and s2(y, z), respectively, and the diffusion equations in (11) are constructed. In order to solve them, we calculate the relative blurring between the two images and introduce a global optimization method for the reconstruction of the global shape.

First, the relative blurring between E1(y, z) and E2(y, z) is

$$\Delta\sigma^{2}\doteq\sigma_{2}^{2}-\sigma_{1}^{2}=\left(al_{2}^{2}+bl_{2}+c\right)^{2}-\left(al_{1}^{2}+bl_{1}+c\right)^{2}. \tag{22}$$

The depth variation between these blurred images is

$$s_{1}(y,z)-s_{2}(y,z)=\Delta s(y,z) \tag{23}$$

$$s_{i}(y,z)-s_{0}(y,z)=l_{i}(y,z),\quad i=1,2. \tag{24}$$

By simplification, (22) can be denoted as

$$al_{2}^{2}+bl_{2}+c=\pm\sqrt{\Delta\sigma^{2}+\left(al_{1}^{2}+bl_{1}+c\right)^{2}}. \tag{25}$$

Suppose that

$$c'=c\mp\sqrt{\Delta\sigma^{2}+\left(al_{1}^{2}+bl_{1}+c\right)^{2}}. \tag{26}$$

Then, we obtain the depth reconstruction model with optical diffraction as

$$s=s_{0}+\frac{-b\pm\sqrt{b^{2}-4ac'}}{2a}. \tag{27}$$
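Putting (22)–(27) together gives a closed-form depth update once the relative blur Δσ² has been measured at a pixel. A minimal sketch, assuming the fit coefficients a, b, c from (18) and the offset l1 of the first image are known; choosing the sign branches so that σ2 ≥ 0 and the depth offset is small is our reading of (25)–(27):

```python
import numpy as np

def depth_with_diffraction(a, b, c, l1, delta_sigma_sq, s0):
    """Depth from the diffraction-aware model (22)-(27)."""
    sigma1 = a * l1**2 + b * l1 + c                    # blur of image 1 via (18)
    c_prime = c - np.sqrt(delta_sigma_sq + sigma1**2)  # (26), sigma2 >= 0 branch
    disc = np.sqrt(b**2 - 4.0 * a * c_prime)
    roots = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
    return s0 + min(roots, key=abs)                    # (27), small-offset branch
```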

As a global algorithm, we construct the following optimization problem to calculate the solutions of the diffusion equations:

$$s=\arg\min_{s_{2}(y,z)}\iint\left(u(y,z,\Delta t)-E_{2}(y,z)\right)^{2}dy\,dz. \tag{28}$$

However, the aforementioned optimization process is ill posed, i.e., the minimum may not exist, and even if it exists, it may not be stable with respect to data noise. A common way to regularize this problem is to add a Tikhonov penalty as follows [40]:

$$s=\arg\min_{s_{2}(y,z)}\iint\left(u(y,z,\Delta t)-E_{2}(y,z)\right)^{2}dy\,dz+\alpha\left\|\nabla s_{2}(y,z)\right\|^{2}+\alpha k\left\|s_{2}(y,z)\right\|^{2} \tag{29}$$

where the additional terms impose a smoothness constraint on the depth map. In practice, we use very small α > 0 and k > 0, because these terms should have no practical influence on the cost energy, denoted as

$$F(s)=\iint\left(u(y,z,\Delta t)-E_{2}(y,z)\right)^{2}dy\,dz+\alpha\|\nabla s\|^{2}+\alpha k\|s\|^{2}. \tag{30}$$

Thus, the solution process is equivalent to the following problem:

$$s=\arg\min_{s}F(s)\quad \text{s.t. (11), (27)}. \tag{31}$$

Therefore, (31), as a dynamic optimization, can be solved by a gradient flow. If the cost energy in (30) is below the energy threshold, the algorithm stops, and the desired depth map is obtained. Otherwise, we take the following optimization step:

$$\frac{\partial s}{\partial t}=-F'(s). \tag{32}$$

The depth map is then updated, and the algorithm proceeds to the next iteration until the stop condition is fulfilled. The flow graph of our algorithm is shown in Fig. 8.

Fig. 8. Flow graph of our algorithm.

Fig. 9. Veeco Dimension 3100 AFM system used in our experiment.
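The solver is then a plain gradient flow. The schematic loop below mirrors the flow graph in Fig. 8; `F` and `grad_F` are assumed callables returning the cost energy (30) and its functional gradient (built, in the authors' method, from the diffusion residual plus the Tikhonov terms). This is a structural sketch, not the authors' implementation:

```python
import numpy as np

def gradient_flow(s_init, F, grad_F, step=1e-2, tol=1e-8, max_iter=500):
    """Follow ds/dt = -F'(s) from (32) until the cost (30) is below tol."""
    s = np.asarray(s_init, dtype=float).copy()
    for _ in range(max_iter):
        if F(s) < tol:          # stop condition of the flow graph
            break
        s -= step * grad_F(s)   # explicit Euler step of the gradient flow
    return s
```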

IV. EXPERIMENT

In order to validate the new algorithm proposed in this paper, we use it to reconstruct the global shape of a PDMS microlens in a microlens array. The geometrical shape of each microlens is a hemispheroid. Its average radius along the vertical direction is 2 μm, and its average radius along the horizontal direction is 6.5 μm. In the experiment, first, we use a Veeco Dimension 3100 AFM system, as shown in Fig. 9, to scan a microlens. The scanning frequency of the AFM system is 1 Hz. Then, we capture two blurred images of the same microlens with the HIROX-7700 optical microscope through depth variation. The amplification factor of our microscope is 7000, and the rest of the camera parameters are as follows: f = 0.357 mm, s0 = 3.4 mm, F-number = 2, and Δs = 100 nm. Finally, we use the traditional SFD method without optical diffraction and our new SFD method with optical diffraction to reconstruct the global shape of the microlens from the two blurred images captured by our optical microscope, and we then compare the reconstructed results with those of the AFM scanning.

Fig. 10. Microlens scanned by AFM.

Fig. 11. Profile of a section scanned by AFM.

The experimental results are shown in Figs. 10–16. Figs. 10 and 11 show the AFM-scanned 3-D image of the microlens and the 3-D profile of one of its sections, respectively. In Fig. 11, it can be seen that the height of the microlens is 2 μm and that its radius along the horizontal direction is 6.5 μm. Fig. 12(a) and (b) shows the two blurred optical images before and after the depth variation, respectively. The reconstructed shape of the microlens with our new SFD is shown in Fig. 13, where (a) and (b) are the side and vertical views, respectively. The unit in our reconstructed shape figures is millimeters. The reconstructed shape of the microlens with traditional SFD is shown in Fig. 14, where (a) and (b) are again the side and vertical views. In Fig. 13, it can be seen that our algorithm can reconstruct the entire shape of the microlens and the planar substrate. The height difference between the substrate and the vertex of the hemisphere is approximately 2 μm, and the horizontal radius of the microlens on the substrate is approximately 6.5 μm. However, with traditional SFD, the reconstructed shape is far from the true shape, and it is difficult to separate the microlens from the substrate.

Fig. 12. Defocused images of the microlens viewed by optical microscopy.

In order to investigate the precision of our shape reconstruction algorithm, first, we construct the error map φ between the true shape s in Fig. 10 and the estimated shape ŝ from our method; from φ, we can observe the error distribution over the entire shape. The computational formula is as follows:

$$\varphi=\frac{\hat{s}}{s}-1. \tag{33}$$

The error maps of our new SFD method and the traditional SFD method are shown in Figs. 15 and 16, respectively, and their average values are 1.8 × 10⁻⁶ and 5.2 × 10⁻⁵, respectively. Then, we calculate the average height error of a random section selected from the 3-D shape; the computational formula is as follows:

$$E_{\text{ave}}=\frac{1}{n}\sum_{k=1}^{n}\left|H_{k}-\hat{H}_{k}\right| \tag{34}$$

where n is the number of sample points, i.e., 256 in our experiment; Hk is the true height value from the AFM scanning; and Ĥk is the estimated height of the kth point from our reconstruction method. Compared with (33), (34) gives a quantitative evaluation of our method. The accurate height of this section is known from the AFM scanning in Fig. 10, and the comparison result is shown in Fig. 17, where the solid line is the reconstructed profile of our method, the dot-dash line is the reconstructed profile of the traditional SFD method without optical diffraction, and the dashed line is the profile of the AFM scanning.
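Both evaluation metrics translate directly into code. A small sketch of (33) and (34); the reconstruction and AFM arrays are placeholders:

```python
import numpy as np

def error_map(s_est, s_true):
    """phi from (33): relative error of the estimated shape."""
    return s_est / s_true - 1.0

def average_height_error(H_true, H_est):
    """E_ave from (34): mean absolute height error over n section points."""
    return np.mean(np.abs(np.asarray(H_true) - np.asarray(H_est)))
```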

From Figs. 13, 14, and 17 and our calculations, we can draw the following conclusions.

1) The reconstructed 3-D shape of the newly proposed algorithm is close to the AFM measurement result. The height difference between the substrate and the peak point is 2.2 μm, and the width of the reconstructed hemispheroid is 13.2 μm, the same as in the AFM scanning. However, with traditional SFD, the reconstructed shape is far from the true shape: the height difference between the substrate and the peak point is 1.8 μm, and the width of the reconstructed hemispheroid is about 12.5 μm; moreover, there is another peak on the left side of the reconstructed hemispheroid.

Fig. 13. Reconstructed shape with our method.

Fig. 14. Reconstructed shape with the traditional SFD method without optical diffraction.

Fig. 15. Error map of our SFD in this paper.

2) The middle of our reconstructed shape is thinner than the scanned shape. The reason is that we use a global optimization to solve our algorithm, and it needs a gradual evolution to reach the optimum. Our future work is to improve the optimization method to solve this problem. However, with traditional SFD, the reconstructed substrate is not flat on the left side.

Fig. 16. Error map of traditional SFD.

3) Compared with the AFM measurement result, the Eave of our new algorithm is 77 nm, which meets the demand of high-resolution reconstruction. For traditional SFD without optical diffraction, Eave is 170 nm.

4) If the scanned image size is 256 × 256 pixels, the AFM scanning takes 240 s. With our optical method, the calculation for the same image size takes 13 s, and the capture of the two blurred images takes 5 s. Therefore, the total working time of our method is only 7.5% of the AFM scanning time, and it is the same as that of traditional SFD because our new SFD adds no additional computational burden over traditional SFD.

Fig. 17. Arbitrary section profile of our microlens.

V. CONCLUSION

In this paper, an effective shape reconstruction method for a microlens using optical microscopy has been proposed and validated on a real microlens. Our primary contributions are to analyze the disadvantages of the tools commonly used in microlens profile reconstruction and to identify shape reconstruction from defocus using optical microscopy as a promising shape estimation method for a microlens. Moreover, we propose a blurring imaging model with optical diffraction through curve fitting, accounting for relative blurring and heat diffusion, and we thereby improve the precision of traditional SFD. Finally, the shape of a real PDMS microlens is reconstructed with our newly proposed shape reconstruction method and with the traditional SFD method. The results show that the proposed algorithm is an effective method for reconstructing the microlens shape, and its calculation time is only 7.5% of the AFM scanning time. Furthermore, with our new SFD, the reconstruction error of traditional SFD is decreased by 55% without increasing the calculation time. In addition, our method can potentially be used to measure the entire shape of a microlens array simultaneously in industry.

REFERENCES

[1] L. Li et al., "A hybrid polymer–glass achromatic microlens array fabricated by compression molding," J. Opt., vol. 13, no. 5, May 2011, Art. ID 055407.

[2] S. K. Lee, M. G. Kim, K. W. Jo, S. M. Shin, and J. H. Lee, "A glass reflowed microlens array on a Si substrate with rectangular through holes," J. Opt. A, Pure Appl. Opt., vol. 10, no. 4, Apr. 2008, Art. ID 044003.

[3] V. K. Madanagopal et al., "Low-cost, low-loss microlens arrays fabricated by soft-lithography replication process," Appl. Phys. Lett., vol. 82, no. 8, pp. 1152–1154, Feb. 2003.

[4] H. M. Yeh and K. S. Chen, "Development of a digital-convolution-based process emulator for three-dimensional microstructure fabrication using electron-beam lithography," IEEE Trans. Ind. Electron., vol. 56, no. 4, pp. 926–936, Apr. 2009.

[5] T. Anna, C. Shakher, and D. S. Mehta, "Three-dimensional shape measurement of micro-lens arrays using full-field swept-source optical coherence tomography," Opt. Lasers Eng., vol. 48, no. 11, pp. 1145–1151, Nov. 2010.

[6] L. H. Lee, C. H. Huang, S. C. Ku, Z. H. Yang, and C. Y. Chang, "Efficient visual feedback method to control a three-dimensional overhead crane," IEEE Trans. Ind. Electron., vol. 61, no. 8, pp. 4073–4083, Aug. 2014.

[7] Y. Chen, X. D. Li, C. Wiet, and J. M. Wang, "Energy management and driving strategy for in-wheel motor electric ground vehicles with terrain profile preview," IEEE Trans. Ind. Informat., vol. 10, no. 3, pp. 1938–1947, Aug. 2014.

[8] R. Drath and A. Horch, "Industrie 4.0: Hit or hype?" IEEE Ind. Electron. Mag., vol. 8, no. 2, pp. 56–58, Jun. 2014.

[9] F. Barranco, J. Diaz, B. Pino, and E. Ros, "Real-time visual saliency architecture for FPGA with top-down attention modulation," IEEE Trans. Ind. Informat., vol. 10, no. 3, pp. 1726–1735, Aug. 2014.

[10] J. Chen, W. Wang, J. Fang, and K. Varahramyan, "Variable-focusing microlens with microfluidic chip," J. Micromech. Microeng., vol. 14, no. 5, pp. 675–680, Mar. 2004.

[11] S. J. Qin et al., "Fabrication of micro-polymer lenses with spacers using low-cost wafer-level glass-silicon molds," IEEE Trans. Compon., Packag., Manuf. Technol., vol. 3, no. 12, pp. 2006–2013, Dec. 2013.

[12] S. Reichelt and H. Zappe, "Combined Twyman–Green and Mach–Zehnder interferometer for microlens testing," Appl. Opt., vol. 44, no. 27, pp. 5786–5792, Sep. 2005.

[13] H. Elfstrom, A. Lehmuskero, T. Saastamoinen, M. Kuittinen, and P. Vahimaa, "Common-path interferometer with diffractive lens," Opt. Exp., vol. 14, no. 9, pp. 3847–3852, May 2006.

[14] F. Charriere et al., "Characterization of microlenses by digital holographic microscopy," Appl. Opt., vol. 45, no. 5, pp. 829–835, Feb. 2006.

[15] H. Sickinger, O. Falkenstorfer, N. Lindlein, and J. Schwider, "Characterization of microlenses using a phase-shifting shearing interferometer," Opt. Eng., vol. 33, no. 8, pp. 2680–2686, Aug. 1994.

[16] E. Acosta, D. Vazquez, L. Garner, and G. Smith, "Tomographic method for measurement of the gradient refractive index of the crystalline lens. I. The spherical fish lens," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 22, no. 10, pp. 424–433, Oct. 2005.

[17] F. Marino, P. D. Ruvo, G. D. Ruvo, M. Nitti, and E. Stella, "HiPER 3-D: An omnidirectional sensor for high precision environmental 3-D reconstruction," IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 579–591, Jan. 2012.

[18] S. Y. Cho and W. S. Chow, "A neural-learning-based reflectance model for 3-D shape reconstruction," IEEE Trans. Ind. Electron., vol. 47, no. 6, pp. 1346–1350, Dec. 2000.

[19] Z. G. Li and J. H. Zheng, "Visual salience based tone mapping for high dynamic range images," IEEE Trans. Ind. Electron., vol. 61, no. 12, pp. 7076–7082, Dec. 2014.

[20] J. H. Kim and C. H. Menq, "Visually servoed 3-D alignment of multiple objects with sub-nanometer precision," IEEE Trans. Nanotechnol., vol. 7, no. 3, pp. 321–330, May 2008.

[21] Y. Gao et al., "3-D object retrieval with Hausdorff distance learning," IEEE Trans. Ind. Electron., vol. 61, no. 4, pp. 2088–2098, Apr. 2014.

[22] X. J. Tian, Y. C. Wang, N. Xi, Z. L. Dong, and Z. H. Tong, "Ordered arrays of liquid-deposited SWCNT and AFM manipulation," Sci. China, vol. 53, no. 2, pp. 251–256, Sep. 2008.

[23] A. P. Pentland, "A new sense for depth of field," IEEE Trans. Pattern Anal. Mach. Intell., vol. 9, no. 4, pp. 523–531, Jul. 1987.

[24] P. N. Vinay and C. Subhasis, "On defocus, diffusion and depth estimation," Pattern Recognit. Lett., vol. 28, no. 3, pp. 311–319, Feb. 2007.

[25] A. P. Pentland, S. Scheroch, T. Darrell, and B. Girod, "Simple range cameras based on focus error," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 11, no. 11, pp. 2925–2934, Nov. 1994.

[26] S. K. Nayar, M. Watanabe, and M. Noguchi, "Real-time focus range sensor," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 12, pp. 1186–1198, Dec. 1996.

[27] P. Favaro, M. Burger, and S. J. Osher, "Shape from defocus via diffusion," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 3, pp. 518–531, Mar. 2008.

[28] P. Favaro, A. Mennucci, and S. Soatto, "Observing shape from defocused images," Int. J. Comput. Vis., vol. 52, no. 1, pp. 25–43, Aug. 2003.

[29] P. Favaro and A. Mennucci, "Learning shape from defocus," in Proc. ECCV, Copenhagen, Denmark, May 28–31, 2002, pp. 735–745.

[30] P. Wang, Y. G. Xu, W. Wang, and Z. Q. Wang, "Analytic expression for Fresnel diffraction," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 15, no. 3, pp. 684–688, Mar. 1998.

[31] Z. Jiang, Q. Lu, and Z. Liu, "Several questions related to diffraction," J. Laser Appl., vol. 6, pp. 266–270, Jun. 1994.

[32] J. Gu, "Discussion on π jump at boundary in diffraction phenomena," J. Laser Appl., vol. 6, pp. 270–272, Mar. 1994.

[33] R. C. Word, J. P. S. Fitzgerald, and R. Konenkamp, "Direct imaging of optical diffraction in photoemission electron microscopy," Appl. Phys. Lett., vol. 103, no. 2, Jul. 2013, Art. ID 021118.

[34] I. Kantor et al., "A new diamond anvil cell design for X-ray diffraction and optical measurements," Rev. Sci. Instrum., vol. 83, no. 12, Dec. 2012, Art. ID 125102.

[35] Y. J. Wei, Z. L. Dong, and C. D. Wu, "Depth measurement using single camera with fixed camera parameters," IET Comput. Vis., vol. 6, no. 1, pp. 29–39, Mar. 2012.

[36] Y. J. Wei, C. D. Wu, Z. L. Dong, and Z. Liu, "Global shape reconstruction of the bended AFM cantilever," IEEE Trans. Nanotechnol., vol. 11, no. 4, pp. 713–719, Jul. 2012.

[37] P. Favaro and S. Soatto, 3-D Shape Reconstruction and Image Restoration: Exploiting Defocus and Motion-Blur. London, U.K.: Springer-Verlag, Sep. 2006.

[38] H. Oberst, D. Kouznetsov, K. Shimizu, J. Fujita, and F. Shimizu, "Fresnel diffraction mirror for atomic wave," Phys. Rev. Lett., vol. 94, no. 1, Jan. 2005, Art. ID 013203.

[39] Z. Q. Wang and P. Wang, "Rayleigh criterion and K Strehl criterion," Acta Photon. Sin., vol. 29, no. 7, pp. 621–625, Sep. 2000.

[40] R. Lagnado and S. Osher, "A technique for calibrating derivative security pricing models: Numerical solution of an inverse problem," J. Comput. Finance, vol. 1, no. 1, pp. 13–26, Sep. 1997.

Yangjie Wei (M'13) received the B.S. degree in electronics information engineering from Jilin University, Changchun, China, in 2002 and the M.S. and Ph.D. degrees in recognition and intelligent systems from the Chinese Academy of Sciences (CAS), Shenyang, China, in 2005 and 2013, respectively.

From 2010 to 2012, she studied at the Fraunhofer Institute for Nondestructive Testing (IZFP). She currently serves as an Associate Professor with the College of Information Science and Engineering, Northeastern University, Shenyang, China. She is the author or coauthor of 30 peer-reviewed papers published in journals and conference proceedings. Her current research interests include microobservation and nanoobservation, 3-D reconstruction, and microimage and nanoimage processing.

Dr. Wei was the recipient of support from the Joint Doctoral Program of Fraunhofer and CAS during her studies at Fraunhofer IZFP. She was also the recipient of the Best Student Paper Award at the 2009 IEEE International Conference on Mechatronics and Automation and the Award of Excellent Doctoral Dissertation from CAS in 2014.

Chengdong Wu received the B.Sc. degree in electrical automation from Shenyang Architectural and Civil Engineering Institute, Shenyang, China, in 1983, the M.Sc. degree in control theory and its applications from Tsinghua University, Beijing, China, in 1988, and the Ph.D. degree in industrial automation from Northeastern University, Shenyang, China, in 1994.

Currently, he is the Director of the Institute of Artificial Intelligence and Robotics, College of Information Science and Engineering, Northeastern University, Shenyang, China. He is the author or coauthor of nine books and more than 150 journal and conference papers. His main research interests include pattern recognition and image processing, robot intelligent control, and wireless sensor networks.

Yi Wang (F'15) received the B.S. degree in computer science from Northeastern University, Shenyang, China, in 1982 and the Ph.D. degree in computer science from Chalmers University of Technology, Göteborg, Sweden, in 1991.

He is currently a Chair Professor with the Department of Information Technology, Uppsala University, Uppsala, Sweden. He also currently holds a joint visiting professorship with Northeastern University.

Dr. Wang was a Keynote Speaker at the 2015 European Joint Conferences on Theory and Practice of Software (ETAPS). He was the recipient of the Computer-Aided Verification Award in 2003, the Outstanding Paper Award at the 2012 Euromicro Conference on Real-Time Systems, the Best Paper Award at the 2009 IEEE Real-Time Systems Symposium, and the Best Paper Award at the 2013 Design, Automation, and Test in Europe (DATE) conference.

Zaili Dong was born in 1952. He received the B.S. degree from Shenyang Jianzhu University, Shenyang, China, in 1983 and the M.S. degree from the Chinese Academy of Sciences, Shenyang, China, in 1986.

From 1982 to 1984, he was a Visiting Professor at Purdue University, West Lafayette, IN, USA. He is currently with the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences. He specializes in the areas of pattern recognition, machine vision, teleoperation, reverse engineering, sensor systems, and microtechniques and nanotechniques. He is the author or coauthor of over 100 papers in these areas, and he is the holder of more than ten patents.