# Blind multichannel reconstruction of high-resolution images using wavelet fusion


**Said E. El-Khamy, Mohiy M. Hadhoud, Moawad I. Dessouky, Bassiouny M. Salam, and Fathi E. Abd El-Samie**

We developed an approach to the blind multichannel reconstruction of high-resolution images. This approach is based on breaking the image reconstruction problem into three consecutive steps: a blind multichannel restoration, a wavelet-based image fusion, and a maximum entropy image interpolation. The blind restoration step depends on estimating the two-dimensional (2-D) greatest common divisor (GCD) between each observation and a combinational image generated by a weighted averaging process of the available observations. The purpose of generating this combinational image is to get a new image with a higher signal-to-noise ratio and a blurring operator that is coprime with all the blurring operators of the available observations. The 2-D GCD is then estimated between the new image and each observation, and thus the effect of noise on the estimation process is reduced. The multiple outputs of the restoration step are then applied to the image fusion step, which is based on wavelets. The objective of this step is to integrate the data obtained from each observation into a single image, which is then interpolated to give an enhanced-resolution image. A maximum entropy algorithm is derived and used in interpolating the resulting image from the fusion step. Results show that the suggested blind image reconstruction approach succeeds in estimating a high-resolution image from noisy blurred observations in the case of relatively coprime unknown blurring operators. The required computation time of the suggested approach is moderate. © 2005 Optical Society of America

OCIS codes: 100.0100, 100.6640, 100.3010.

## 1. Introduction

Image reconstruction is an important field of image processing.
The development of this field has been motivated by the requirement to acquire high-resolution images from degraded observations obtained by multiple sensors. By using suitable image reconstruction algorithms, we can obtain a single high-resolution image either from several degraded still images or from several degraded multiframes. There are numerous applications of the reconstruction of high-resolution images such as remote sensing, medical imaging, satellite imaging, and high-definition television (HDTV). In particular, image reconstruction has been the subject of much interest in the fields of optics and optical imaging. Many references have appeared in the optics literature on image reconstruction (restoration, recovery, deconvolution), blind deconvolution, and multiframe reconstruction [1–7].

Most algorithms that have been developed for high-resolution image reconstruction are iterative in nature [8–19]. These algorithms seek to reduce the computational complexity of the matrix inversion processes involved in the solution by using successive approximation methods for estimation of the high-resolution image. The frequency-domain treatment of the image reconstruction problem was popular in early solutions. It is achieved by making use of the specific time-shift properties of the Fourier transform [7–10]. The maximum a posteriori (MAP) estimation algorithm has also been implemented in this field [11–13]. Some other set-theoretic approaches have also been presented [10,14].

*S. E. El-Khamy (elkhamy@ieee.org) is with the Department of Electrical Engineering, Faculty of Engineering, Alexandria University, Alexandria 21544, Egypt. M. M. Hadhoud is with the Department of Information Technology, Faculty of Computers and Information, Menoufia University, 32511 Shebin Elkom, Egypt. M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie (fathi_sayed@yahoo.com) are with the Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, 32951 Menouf, Egypt. Received 28 February 2005; revised manuscript received 29 June 2005; accepted 1 July 2005. 0003-6935/05/347349-08$15.00/0 © 2005 Optical Society of America. 1 December 2005 / Vol. 44, No. 34 / APPLIED OPTICS.*

In the general solution of the image reconstruction problem, we deal with three degradation phenomena: a general geometric registration warp, blurring, and additive noise. Most of the above-mentioned approaches depend mainly on the assumption that the degradation in each image or frame is known a priori or is to be estimated during the image reconstruction process. Only a few methods deal with the problem of blind image reconstruction [10,20]. In blind methods, the blurring operators that affect each observation image are unknown. The first step in blind image restoration is to register the images, which means estimating the geometric registration warp between the different images.
This step can be considered a preprocessing step, and we assume in our research that the registration process has already been performed [20].

When multiple distorted registered versions of the same scene are available, it is possible to restore the original image from these distorted versions without any prior knowledge of the distortion functions [21–25]. The problem of reconstructing an image from two distorted observations in a high signal-to-noise ratio (SNR) environment has been investigated using a two-dimensional (2-D) greatest common divisor (GCD) algorithm that is sensitive to the presence of noise [25]. In the z domain, the original image can be regarded as the GCD of the distorted observations if noise is neglected and the distortion filters are assumed to be of finite impulse response (FIR) and relatively coprime [25]. Since any small variation in the point-spread function (PSF) of the imaging devices used in capturing the observations leads to blurring operators of different parameters, which are thus coprime in the z domain, the coprimeness assumption for the blurring operators is realistic.

We develop a blind reconstruction approach for high-resolution images that is based on breaking the image reconstruction problem into three consecutive steps: a blind multichannel GCD restoration, a wavelet-based image fusion, and a maximum entropy image interpolation. The blind restoration step depends on estimating the 2-D GCD between each observation and a combinational image generated by a weighted averaging process of the available observations. The 2-D GCD is then estimated between the new image and each observation. The multiple outputs of the restoration step are then applied to the wavelet-based image fusion step.
A maximum entropy algorithm is derived and used in interpolating the resulting image from the fusion step.

The paper is organized as follows: The problem is formulated in Section 2; Section 3 reviews the 2-D GCD algorithm; Section 4 presents the suggested approach to obtain the high-resolution image from multiple observations; Section 5 discusses the wavelet-based image fusion process; in Section 6, the suggested maximum entropy image interpolation algorithm is derived and its implementation is given; results are presented in Section 7; and concluding remarks are given in Section 8.

## 2. Problem Formulation

The image reconstruction problem to be solved is the estimation of a high-resolution image from several images degraded by both blurring and noise. An image degraded by both blurring and noise can be modeled by the following equation [25–28]:

$$y(m, n) = x(m, n) * b(m, n) + v(m, n), \tag{1}$$

where $x(m, n)$ is the original image, $b(m, n)$ is the degradation PSF, $*$ denotes 2-D convolution, and $v(m, n)$ is additive zero-mean white Gaussian noise.

Assume that we have two degraded observations of the same scene given by

$$y_k(m, n) = x(m, n) * b_k(m, n) + v_k(m, n), \qquad k = 1, 2, \tag{2}$$

where $b_1(m, n)$ and $b_2(m, n)$ are coprime filters. The coprimeness assumption is justified, since any slight variation of a group of similar blurring operators affecting the same observation will lead to a different blurring operator that is coprime with the other operators.

In the z domain this equation translates to

$$Y_k(z_1, z_2) = X(z_1, z_2)B_k(z_1, z_2) + V_k(z_1, z_2), \qquad k = 1, 2. \tag{3}$$

If the noise $v_k(m, n)$ is neglected, the term $V_k(z_1, z_2)$ vanishes, and the above equation becomes

$$Y_k(z_1, z_2) = X(z_1, z_2)B_k(z_1, z_2), \qquad k = 1, 2. \tag{4}$$

From this equation, if the two distortion transfer functions $B_1(z_1, z_2)$ and $B_2(z_1, z_2)$ are coprime, then $X(z_1, z_2)$ is the GCD of $Y_1(z_1, z_2)$ and $Y_2(z_1, z_2)$.
This can be written mathematically as follows: if

$$\gcd[B_1(z_1, z_2), B_2(z_1, z_2)] = 1, \tag{5}$$

then

$$\gcd[Y_1(z_1, z_2), Y_2(z_1, z_2)] = X(z_1, z_2). \tag{6}$$

The computation of the GCD of two 2-D functions using the 2-D GCD algorithm is reviewed in Section 3.

## 3. Two-Dimensional Greatest Common Divisor Algorithm

In the GCD algorithm [25] it is assumed that the two blurred images $y_1(m, n)$ and $y_2(m, n)$ of the original image $x(m, n)$ are both $M \times N$ matrices. Substituting $z_1 = \exp(j2\pi m/P)$, $m = 0, 1, \ldots, P-1$, into both $Y_1(z_1, z_2)$ and $Y_2(z_1, z_2)$ for each $m$ yields two one-dimensional (1-D) polynomials:

$$Y_k[\exp(j2\pi m/P), z_2] = X[\exp(j2\pi m/P), z_2]\,B_k[\exp(j2\pi m/P), z_2], \qquad k = 1, 2. \tag{7}$$

Thus the 1-D GCD of these two polynomials yields the scaled quantity $c_0[\exp(j2\pi m/P)]\,X[\exp(j2\pi m/P), z_2]$. For each value of $m$ we further substitute $z_2 = \exp(j2\pi n/P)$, $n = 0, 1, \ldots, P-1$, in this GCD to form a matrix of discrete Fourier transform elements,

$$A(m, n) = a(m)\,X[\exp(j2\pi m/P), \exp(j2\pi n/P)]. \tag{8}$$

Carrying out similar operations on the columns, we get

$$L(m, n) = l(n)\,X[\exp(j2\pi m/P), \exp(j2\pi n/P)], \tag{9}$$

where $a(m)$ and $l(n)$ are scalar quantities that must be determined. From Eqs. (8) and (9) we have [25]

$$\frac{A(m, n)}{a(m)} - \frac{L(m, n)}{l(n)} = 0. \tag{10}$$

The evaluation of $a(m)$ and $l(n)$ can be made on a least-squares basis. The estimated Fourier transform of the original image is then calculated by [25]

$$\hat{X}(e^{j2\pi m/P}, e^{j2\pi n/P}) = \frac{1}{2}\left[\frac{A(m, n)}{a(m)} + \frac{L(m, n)}{l(n)}\right]. \tag{11}$$

The inverse Fourier transform is then used to obtain an estimate of the original image.

## 4. Proposed Reconstruction Approach of High-Resolution Images

Assume that we have $K$ degraded observations of the same scene given by

$$y_k(m, n) = x(m, n) * b_k(m, n) + v_k(m, n), \qquad k = 1, 2, \ldots, K. \tag{12}$$

The application of the 2-D GCD algorithm described in Section 3, which involves only two observations at a time, is not an efficient approach to restoration, because it does not incorporate the information in all observations into the restoration process.
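The noiseless GCD relation of Eqs. (4)–(6) is easy to check in one dimension: convolving a signal with two coprime FIR filters and taking the polynomial GCD of the two results recovers the signal up to scale. The sketch below is ours, not the paper's algorithm (which works on 2-D data and handles noise); `poly_gcd` is a plain Euclidean polynomial GCD built on `np.polydiv`, with illustrative signal and filter values:

```python
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Euclidean GCD of two 1-D polynomials (descending-power coefficients)."""
    a = np.trim_zeros(np.asarray(a, float), 'f')
    b = np.trim_zeros(np.asarray(b, float), 'f')
    while b.size and np.max(np.abs(b)) > tol:
        _, r = np.polydiv(a, b)
        # strip numerically-zero leading coefficients of the remainder
        if np.any(np.abs(r) > tol):
            r = r[np.argmax(np.abs(r) > tol):]
        else:
            r = np.array([])
        a, b = b, r
    return a / a[0]          # monic normalization: result is unique up to scale

# "image" x(z) blurred by two coprime FIR operators b1(z), b2(z), no noise
x  = np.array([1.0, 2.0, 3.0, 2.0])
b1 = np.array([1.0, 0.5])
b2 = np.array([1.0, -0.5])
y1 = np.convolve(x, b1)      # y1(z) = x(z) b1(z)
y2 = np.convolve(x, b2)      # y2(z) = x(z) b2(z)

g = poly_gcd(y1, y2)
print(g)                     # → [1. 2. 3. 2.], the original x up to scale
```

Adding even a little noise to `y1` and `y2` breaks the exact common factor, which is the noise sensitivity of the GCD algorithm that the combinational-image idea below is meant to mitigate.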
To do so, we suggest generating a new observation image, $y_{K+1}$, given by

$$y_{K+1}(m, n) = \sum_{k=1}^{K} w_k y_k(m, n), \tag{13}$$

where the $w_k$ values are scalars chosen according to an estimate of the SNR in each image, which is made on the basis of a noise variance estimation. Another restriction on the values of $w_k$ is the normalization condition

$$\sum_{k=1}^{K} w_k = 1. \tag{14}$$

Substituting Eq. (12) into Eq. (13), we obtain

$$y_{K+1}(m, n) = \sum_{k=1}^{K} w_k [x(m, n) * b_k(m, n) + v_k(m, n)]. \tag{15}$$

Thus

$$y_{K+1}(m, n) = x(m, n) * \sum_{k=1}^{K} w_k b_k(m, n) + \sum_{k=1}^{K} w_k v_k(m, n). \tag{16}$$

This equation can be written in the form

$$y_{K+1}(m, n) = x(m, n) * b_{K+1}(m, n) + v_{K+1}(m, n), \tag{17}$$

where

$$b_{K+1}(m, n) = \sum_{k=1}^{K} w_k b_k(m, n), \tag{18}$$

$$v_{K+1}(m, n) = \sum_{k=1}^{K} w_k v_k(m, n). \tag{19}$$

In the z domain, this leads to

$$B_{K+1}(z_1, z_2) = \sum_{k=1}^{K} w_k B_k(z_1, z_2). \tag{20}$$

It can be proved that $B_{K+1}(z_1, z_2)$ is coprime with all $B_k(z_1, z_2)$ for $k \le K$. Dividing $B_{K+1}(z_1, z_2)$ by $B_k(z_1, z_2)$, where $k \le K$, gives

$$\frac{B_{K+1}(z_1, z_2)}{B_k(z_1, z_2)} = w_1\frac{B_1(z_1, z_2)}{B_k(z_1, z_2)} + w_2\frac{B_2(z_1, z_2)}{B_k(z_1, z_2)} + \cdots + w_k + \cdots + w_K\frac{B_K(z_1, z_2)}{B_k(z_1, z_2)}. \tag{21}$$

Since $B_i(z_1, z_2)$ and $B_k(z_1, z_2)$ are coprime functions for $i \ne k$ and $i, k \le K$, we have

$$\frac{B_i(z_1, z_2)}{B_k(z_1, z_2)} = Q_i(z_1, z_2) + \frac{R_i(z_1, z_2)}{B_k(z_1, z_2)}, \tag{22}$$

where $Q_i(z_1, z_2)$ is the quotient, $R_i(z_1, z_2)$ is the remainder, and $R_i(z_1, z_2) \ne 0$.

Substituting Eq. (22) into Eq. (21) yields

$$\frac{B_{K+1}(z_1, z_2)}{B_k(z_1, z_2)} = w_k + \sum_{\substack{i=1 \\ i \ne k}}^{K} w_i\left[Q_i(z_1, z_2) + \frac{R_i(z_1, z_2)}{B_k(z_1, z_2)}\right]. \tag{23}$$

Thus

$$\frac{B_{K+1}(z_1, z_2)}{B_k(z_1, z_2)} = Q_t(z_1, z_2) + \frac{R_t(z_1, z_2)}{B_k(z_1, z_2)}, \tag{24}$$

where

$$Q_t(z_1, z_2) = w_k + \sum_{\substack{i=1 \\ i \ne k}}^{K} w_i Q_i(z_1, z_2), \tag{25}$$

$$R_t(z_1, z_2) = \sum_{\substack{i=1 \\ i \ne k}}^{K} w_i R_i(z_1, z_2) \ne 0. \tag{26}$$

This leads to the conclusion that $B_{K+1}(z_1, z_2)$ is coprime with all $B_k(z_1, z_2)$ for $k \le K$. Thus the 2-D GCD algorithm can be carried out between $y_{K+1}(m, n)$ and any $y_k(m, n)$, where $k \le K$, to give good estimates of $X[\exp(j2\pi m/P), \exp(j2\pi n/P)]$.

The relation

$$v_{K+1}(m, n) = \sum_{k=1}^{K} w_k v_k(m, n)$$

leads to an image with noise variance $\sigma_{K+1}^2$ given by

$$\sigma_{K+1}^2 = \sum_{k=1}^{K} w_k^2 \sigma_k^2. \tag{27}$$

For equal-weight averaging we have $w_1 = w_2 = \cdots = w_K = 1/K$:

$$\sigma_{K+1}^2 = \sum_{k=1}^{K} \frac{\sigma_k^2}{K^2}. \tag{28}$$

Assuming that all observations are taken in the same noisy environment ($\sigma_k = \sigma$ for all $k$) leads to

$$\sigma_{K+1}^2 = \frac{\sigma^2}{K}. \tag{29}$$

The above equation shows an improvement of the SNR in the image $y_{K+1}(m, n)$ by a factor of $K$. This increase in SNR enables a robust application of the 2-D GCD algorithm between $y_{K+1}(m, n)$ and any other observation $y_k(m, n)$, since the 2-D GCD algorithm is sensitive to the presence of noise.

We summarize the steps of our suggested algorithm as follows:

1. Begin with multiple observations blurred by relatively coprime blurring operators in the presence of noise.
2. Generate a new image by a weighted averaging process of the available observations.
3. Carry out the 2-D GCD algorithm between the generated image and each observation.
4. Carry out a wavelet-based image fusion process on the results obtained in step 3.
5. Carry out a maximum entropy image interpolation process on the result obtained in step 4.

## 5. Wavelet-Based Image Fusion

Image fusion is a process by which information from different observation images is incorporated into a single image. The importance of image fusion lies in the fact that each observation image contains complementary information. When this complementary information is integrated with that of another observation, an image with the maximum amount of information is obtained.
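The simplest such combination is the weighted average of Eqs. (13) and (14), and the variance gain predicted by Eq. (29) can be checked numerically. In this sketch the inverse-variance choice of the weights $w_k$ is one plausible reading of "chosen according to an estimation of the SNRs", and the variances are estimated from the known scene purely for illustration (a blind noise-variance estimator would be used in practice; blurring is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
K, shape, sigma = 8, (64, 64), 10.0

x = rng.uniform(0.0, 255.0, shape)                       # stand-in for the true scene
obs = [x + rng.normal(0.0, sigma, shape) for _ in range(K)]  # K noisy observations

# weights from estimated noise variances, normalized per Eq. (14);
# with equal variances this reduces to w_k = 1/K
var_est = np.array([np.var(y - x) for y in obs])         # illustration only: uses known x
w = (1.0 / var_est) / np.sum(1.0 / var_est)

y_comb = sum(wk * yk for wk, yk in zip(w, obs))          # combinational image, Eq. (13)

# Eq. (29): averaging K equal-variance observations cuts the noise variance ~K-fold
print(np.var(obs[0] - x), np.var(y_comb - x))
```

With `K = 8` and `sigma = 10`, the residual variance drops from roughly 100 to roughly 12.5, matching $\sigma^2/K$.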
Different approaches have been adopted for multisensor or multiple-observation image fusion, from a simple image averaging approach to the wavelet transform image fusion approach [29,30].

We adopt the wavelet transform image fusion approach to integrate the data from the multiple outputs of the multichannel GCD image restoration step. In the application of the simple wavelet image fusion scheme, the wavelet packet decomposition is calculated for each observation to obtain the multiresolution levels of the images to be fused. In the transform domain, the coefficients with the largest absolute values at each resolution level are chosen from among the available observations. This rule is known as the maximum frequency rule [29,30]. Using this method, fusion takes place on all resolution levels, and the dominant features at each scale are preserved.

An alternative to the maximum frequency rule is the area-based selection rule, also called the local variance or standard deviation rule. The local variance of the wavelet coefficients is calculated as a measure of the local activity level associated with each wavelet coefficient. If the measures of activity of the wavelet coefficients in the two images to be fused are close to each other, the average of the two wavelet coefficients is taken; otherwise, the coefficient with the maximum absolute value is chosen [29,30].

*Fig. 1. Schematic diagram of the wavelet image fusion process; DWT, discrete wavelet transform.*

*Fig. 2. Original undegraded building image and its maximum entropy interpolation: (a) original image and (b) maximum entropy interpolation of the original image.*

Generally, the process of wavelet packet image fus…
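The maximum frequency rule can be sketched with a minimal single-level 2-D Haar transform standing in for the paper's wavelet packet decomposition (a self-contained illustrative substitute; all function names are ours):

```python
import numpy as np

def haar2(a):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)   # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)   # row-wise detail
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2: exact reconstruction from the four subbands."""
    lo = np.zeros((2 * ll.shape[0], ll.shape[1]))
    hi = np.zeros_like(lo)
    lo[0::2, :], lo[1::2, :] = (ll + lh) / np.sqrt(2.0), (ll - lh) / np.sqrt(2.0)
    hi[0::2, :], hi[1::2, :] = (hl + hh) / np.sqrt(2.0), (hl - hh) / np.sqrt(2.0)
    a = np.zeros((lo.shape[0], 2 * lo.shape[1]))
    a[:, 0::2], a[:, 1::2] = (lo + hi) / np.sqrt(2.0), (lo - hi) / np.sqrt(2.0)
    return a

def fuse_max(img1, img2):
    """Maximum frequency rule: keep, per subband, the coefficient of larger magnitude."""
    c1, c2 = haar2(img1), haar2(img2)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(c1, c2)]
    return ihaar2(*fused)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, (8, 8))
# sanity check: identical inputs must fuse back to the input (transform round trip)
out = fuse_max(img, img)
print(np.allclose(out, img))   # True
```

The area-based (local variance) rule would replace the `np.where` selection with an average when the two coefficients' local activity measures are close; a multilevel wavelet packet decomposition would apply the same per-coefficient choice at every level.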