
A Sparsity-driven Approach for Joint SAR Imaging and Phase Error Correction

N. Özben Önhon and Müjdat Çetin
Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı, Tuzla, 34956 Istanbul, Turkey

Abstract—Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this paper is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in SAR data which cause defocusing of the reconstructed images. Particularly focusing on imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. Phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm, each iteration of which consists of consecutive steps of image formation and model error correction. Experimental results show the effectiveness of the approach for various types of phase errors, as well as the improvements it provides over existing techniques for model error compensation in SAR.

Index Terms—Synthetic aperture radar, phase errors, autofocus, regularization, sparsity.

I. INTRODUCTION

Synthetic aperture radar (SAR) has recently been and continues to be a sensor of great interest in a variety of remote sensing applications, in particular because it overcomes certain limitations of other sensing modalities. First, SAR is an active sensor using its own illumination. To illuminate a ground patch of interest, the SAR sensor uses microwave signals, which provide SAR with the capability of imaging day and night as well as in adverse weather conditions. Due to these features of SAR, SAR image formation has become an important research topic. The problem of SAR image formation is a typical example of inverse problems in imaging. Solution of inverse problems in imaging requires the use of a mathematical model of the observation process. However, such models often involve errors and uncertainties themselves. As a predominant example in SAR imaging, motion-induced errors are reasons for model uncertainties which may cause undesired artifacts in the formed imagery. This type of error causes phase errors in the SAR data which result in defocusing of the reconstructed

This work was partially supported by the Scientific and Technological Research Council of Turkey under Grant 105E090, and by a Turkish Academy of Sciences Distinguished Young Scientist Award.

Copyright (c) 2010 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].

The authors are with the Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı-Tuzla 34956 Istanbul, Turkey (e-mail: [email protected], [email protected]).

images [1]. Because of the defocusing effect of such errors, the techniques developed for removing phase errors are often called autofocus techniques.

Various studies have been presented on the SAR autofocus problem [2–17]. One of the most well known techniques, Phase Gradient Autofocus (PGA) [2], estimates phase errors using the data obtained by isolating many single defocused targets via center-shifting and windowing operations. It is based on the assumption that there is a single target at each range coordinate. Another well-known approach for autofocus is based on the optimization of a sharpness metric of the defocused image intensity [3–10]. These techniques aim to find the phase error estimate which minimizes or maximizes a sharpness function of the conventionally reconstructed image. Commonly used metrics are the entropy or the square of the image intensity. Techniques such as mapdrift autofocus [11] use subaperture data to estimate the phase errors. These techniques are suitable mostly for quadratic and slowly varying phase errors. A recently proposed autofocus technique, multichannel autofocus (MCA) [12], is based on a non-iterative algorithm which finds the focused image in terms of a basis formed from the defocused image, relying on a condition on the image support to obtain a unique solution. In particular, MCA estimates 1D phase error functions by directly solving a set of linear equations obtained through an assumption that there are zero-reflectivity regions in the scene to be imaged. When this is not precisely satisfied, the presence of a low-return region is exploited, and the phase error is estimated by minimizing the energy of the low-return region. When the desired conditions are satisfied, MCA performs very well. However, in scenarios involving low-quality data (e.g., due to low SNR) the performance of MCA degrades. A number of modifications to MCA have been proposed, including the incorporation of sharpness metric optimization into the framework [12], and the use of a semidefinite relaxation based optimization procedure [17] for better phase error estimation performance.

One common aspect of all autofocus techniques referred to above is that they perform post-processing, i.e., they use conventionally reconstructed (i.e., reconstructed by the polar-format algorithm [18, 19]) defocused images in the process of phase error estimation. Our starting point, however, is the observation that more advanced SAR image formation techniques have recently been developed. Of particular interest in this paper is regularization-based SAR imaging (see, e.g., [20–22]), which has been shown to offer certain improvements over conventional imaging. Regularization-based techniques can alleviate the problems in the case of incomplete data


or sparse apertures. Moreover, they produce images with increased resolution, reduced sidelobes, and reduced speckle by incorporating prior information about the features of interest and imposing various constraints (e.g., sparsity, smoothness) on the scene. However, existing regularization-based SAR imaging techniques rely on a perfect observation model, and do not involve any mechanism for addressing model uncertainties.

Motivated by these observations, and considering scenes that admit sparse representation in some dictionary, we propose a sparsity-driven technique for joint SAR imaging and phase error correction using a nonquadratic regularization-based framework. In the proposed sparsity-driven autofocus (SDA) method, phase errors are considered as model errors which are estimated and removed during image formation. The proposed method handles the problem as an optimization problem in which the cost function is composed of a data fidelity term (which exhibits a dependence on the model parameters) and a regularization term, which is the $\ell_1$-norm of the field. For simplicity we consider scenes that are spatially sparse; however, our approach can be applied to fields that are sparse in any given dictionary by using an $\ell_1$-norm penalty on the associated sparse representation coefficients. The cost function is iteratively minimized with respect to the field and the phase error using coordinate descent. In the first step of every iteration, the cost function is minimized with respect to the field, and in the second step the phase error is estimated given the field estimate. The phase error estimate is used to update the model matrix, and the algorithm passes to the next iteration. To the best of our knowledge, this work is the first in the context of providing a solution to the problem of model errors in sparsity-driven image reconstruction.

Sharpness-based autofocus techniques [3–10] share certain aspects of our perspective, but our approach is fundamentally different. In particular, our approach also involves a certain type of sharpness metric on the field, but as a side constraint (regularization term) inside a cost function, together with a data fidelity term which incorporates the system model and the data into the optimization problem for image formation. Hence our approach imposes the sharpness-like constraint during the process of image formation, rather than as post-processing. This enables our technique to correct for artifacts in the scene due to model errors effectively, at an early stage of the image formation process. Furthermore, unlike existing sharpness-based autofocus techniques, our model error correction approach is coupled with an advanced sparsity-driven image formation technique which has the capability of producing high resolution images with enhanced features, and as a result our approach is not limited by the constraints of conventional SAR imaging. In fact, our approach benefits from a dual use of sparsity, both for model error correction (autofocusing) and for improved imaging. Finally, our framework is not limited to sharpness metrics on the scene, but can in principle be used for model error correction in scenes that admit a sparse representation in any given dictionary.

We present results on synthetic scenes as well as on two public datasets. Qualitative as well as quantitative analysis of the experimental results shows the effectiveness of the proposed method and the improvements it provides over existing methods in terms of both scene reconstruction as well as phase error estimation.

The rest of this paper is organized as follows. In Section II, the observation model for a SAR imaging system is described. In Section III, a general view of phase errors and their effects on the SAR data is provided. In Section IV, the proposed method is described in detail, and in Section V the experimental results are presented. We conclude the paper in Section VI and provide some technical details in the Appendix.

    II. SAR OBSERVATION MODEL

In SAR systems, one of the most widely used signals in transmission is the chirp signal:

$s(t) = \mathrm{Re}\left\{ e^{j(\omega_0 t + \alpha t^2)} \right\} \qquad (1)$

Here, $\omega_0$ is the center frequency and $2\alpha$ is the so-called chirp rate. For spotlight-mode SAR, which is the modality of interest in this paper, the relationship between the observed data $\bar{r}_m(t)$ at the $m$-th aperture position, obtained after a pre-processing step, and the underlying field $F(x, y)$ is as follows:

$\bar{r}_m(t) = \iint_{x^2 + y^2 \leq L^2} F(x, y)\, e^{-jU(x \cos\theta + y \sin\theta)}\, dx\, dy \qquad (2)$

Here, $L$ is the radius of the circular patch to be imaged, $\theta$ is the observation angle at the $m$-th aperture position, and $U$ has the following form:

$U = \frac{2}{c}\left( \omega_0 + 2\alpha(t - \tau_0) \right) \qquad (3)$

Here, $\tau_0$ is the round-trip propagation time (demodulation time). All of the returned signals from all observation angles constitute a patch from the two-dimensional spatial Fourier transform of the corresponding field. These data are called phase histories and lie on a polar grid in the 2D frequency domain, as shown in Figure 1.

Fig. 1. Graphical representation of an annulus segment containing known samples of the phase history data in the 2D frequency domain.

Let the 2D discrete phase history data be denoted by a $K \times M$ matrix $R$. Column $m$ of $R$, denoted by the $K \times 1$ vector $\bar{r}_m$, is obtained by sampling $\bar{r}_m(t)$ (the returned signal at cross-range position $m$) in fast-time $t$ (range direction) at $K$ positions. In terms of this notation, the discrete observation model can be formulated as follows [20]:

$\underbrace{\begin{bmatrix} \bar{r}_1 \\ \bar{r}_2 \\ \vdots \\ \bar{r}_M \end{bmatrix}}_{r\,:\ MK \times 1} = \underbrace{\begin{bmatrix} \bar{C}_1 \\ \bar{C}_2 \\ \vdots \\ \bar{C}_M \end{bmatrix}}_{C\,:\ MK \times I} f_{I \times 1} \qquad (4)$

Here, the vector $r$ of observed samples is obtained just by concatenating the columns of the 2D phase history data $R$ under each other. $\bar{C}_m$ and $C$ are discretized approximations to the continuous observation kernel at the cross-range position $m$ and for all cross-range positions, respectively. $f$ is a vector representing the sampled and column-stacked version of the reflectivity image $F(x, y)$. Note that $K$ and $M$ are the total numbers of range and cross-range positions, respectively.
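To make the discrete model of (4) concrete, the following sketch constructs $C$ and simulates noise-free phase histories, under the simplifying assumption that the polar geometry is approximated by a rectangular segment of the 2D DFT grid (the same approximation used for the synthetic experiments in Section V); the function and variable names are ours:

```python
import numpy as np

def build_observation_matrix(N, K, M):
    """Discretized observation kernel C (MK x I, with I = N*N), assuming
    a band-limited 2D DFT approximation of the continuous kernel in (2)."""
    n = np.arange(N)
    Fk = np.exp(-2j * np.pi * np.outer(np.arange(K), n) / N)  # range DFT rows
    Fm = np.exp(-2j * np.pi * np.outer(np.arange(M), n) / N)  # cross-range rows
    # Stacking the columns of R under each other turns the 2D transform into
    # one matrix: vec(Fk @ F_img @ Fm.T) = kron(Fm, Fk) @ vec(F_img).
    return np.kron(Fm, Fk)  # rows K*m ... K*(m+1)-1 form the block C_m

N, K, M = 32, 32, 32
f = np.zeros(N * N, dtype=complex)
f[(N // 2) + N * (N // 2)] = 1.0        # a single point scatterer
C = build_observation_matrix(N, K, M)
r = C @ f                               # stacked phase-history vector, Eq. (4)
R = r.reshape(K, M, order="F")          # K x M phase-history matrix
```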

    III. PHASE ERRORS

During the pre-processing of the SAR data (mentioned in Section II), the demodulation time $\tau_0$ needs to be known. When this time is known imperfectly, the SAR data obtained after pre-processing contain phase errors. The inexact knowledge of the demodulation time occurs when the distance between the SAR sensor and the scene center cannot be determined perfectly due to SAR platform position uncertainties, or when the signal is delayed due to atmospheric effects. Since uncertainties on, e.g., the position of the platform are constant over a signal received at one aperture position but are different at each aperture position, phase errors caused by such uncertainties vary only along the cross-range direction in the frequency domain. The implication of such an error in the image domain is the convolution of (each range line of) the image with a 1D blurring kernel in the cross-range direction. Hence, such phase errors cause defocusing of the image in the cross-range direction. An example of SAR platform position uncertainties arises from errors in measuring the aircraft velocity. A constant error on aircraft velocity induces a quadratic phase error function in the data [1]. Usually, phase errors arising due to SAR platform position uncertainties are slowly-varying (e.g., quadratic, polynomial) phase errors, whereas phase errors induced by propagation effects are much more irregular (e.g., random) phase errors [1]. While most phase errors encountered are 1D cross-range varying functions, it is possible to encounter both range and cross-range varying 2D phase errors as well. For instance, in low frequency UWB SAR systems, severe propagation effects may appear through the ionosphere, including Faraday rotation, dispersion, and scintillation [23], which cause 2D phase errors, defocusing the reconstructed image in both range and cross-range directions. Moreover, waveform errors such as frequency jitter from pulse to pulse, transmission line reflections, and waveguide dispersion effects may cause defocus in both range and cross-range directions [18]. 2D phase errors can in principle be handled in two sub-categories as separable and non-separable errors, but it is not common to encounter 2D separable phase errors in practice.

For these three types of phase error functions, let us investigate the relationship between the phase-corrupted and error-free phase history data in terms of the observation model.

    A. 2D Non-separable Phase Errors

In the presence of 2D non-separable phase errors, all sample points of the $K \times M$ phase history data are perturbed with different and potentially independent phase errors. Let $\Phi_{2D\text{-}ns}$ be a 2D non-separable phase error function. The relationship between the phase-corrupted and error-free phase histories is as follows:

$R_\epsilon(k, m) = R(k, m)\, e^{j\Phi_{2D\text{-}ns}(k, m)} \qquad (5)$

Here, $R_\epsilon$ denotes the phase-corrupted phase history data. To express this relationship in terms of the observation model, first we define the vector $\phi_{2D\text{-}ns}$:

$\phi_{2D\text{-}ns} = \left[ \phi_{2D\text{-}ns}(1),\ \phi_{2D\text{-}ns}(2),\ \ldots,\ \phi_{2D\text{-}ns}(S) \right]^T \qquad (6)$

which is created by concatenating the columns of the phase error matrix $\Phi_{2D\text{-}ns}$ under each other. Here, $S$ is the total number of data samples and is equal to the product $MK$. Using the corresponding vector forms, the relationship in (5) becomes

$r_\epsilon = D_{2D\text{-}ns}\, r \qquad (7)$

where $D_{2D\text{-}ns}$ is a diagonal matrix:

$D_{2D\text{-}ns} = \mathrm{diag}\left[ e^{j\phi_{2D\text{-}ns}(1)},\ \ldots,\ e^{j\phi_{2D\text{-}ns}(S)} \right] \qquad (8)$

In terms of observation model matrices, the relationship in (7) is as follows:

$C(\phi_{2D\text{-}ns})\, f = D_{2D\text{-}ns}\, C f \qquad (9)$

where $C$ is the model matrix initially assumed by the imaging system and $C(\phi_{2D\text{-}ns})$ is the model matrix that takes the phase errors into account. Equations (7) and (9) can be expressed in the following form as well:

$r_\epsilon(s) = e^{j\phi_{2D\text{-}ns}(s)}\, r(s), \qquad C_s(\phi_{2D\text{-}ns})\, f = e^{j\phi_{2D\text{-}ns}(s)}\, C_s f, \qquad s = 1, 2, \ldots, S \qquad (10)$

Here, $r(s)$ denotes the $s$-th element of the vector $r$ and $C_s$ denotes the $s$-th row of the model matrix $C$.
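Since $D_{2D\text{-}ns}$ is diagonal, the corruption in (7) reduces to the elementwise products in (10); a minimal sketch (variable names ours):

```python
import numpy as np

K, M = 32, 32
S = K * M
rng = np.random.default_rng(0)
r = rng.standard_normal(S) + 1j * rng.standard_normal(S)  # stand-in for the data

# 2D non-separable error: an independent phase for every data sample
phi_2d_ns = rng.uniform(-np.pi, np.pi, size=S)

# Eq. (7)/(10): r_eps = D r with D = diag(exp(j*phi)); applied elementwise,
# so the S x S diagonal matrix never needs to be formed explicitly.
r_eps = np.exp(1j * phi_2d_ns) * r
```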

    B. 2D Separable Phase Errors

A 2D separable phase error function is composed of range varying and cross-range varying 1D phase error functions as follows:

$\Phi_{2D\text{-}s}(k, m) = \xi(k) + \gamma(m) \qquad (11)$

Here, $\xi$, representing the range varying phase error, is a $K \times 1$ vector, and $\gamma$, representing the cross-range varying phase error, is an $M \times 1$ vector. The $S \times 1$ vector for 2D separable phase errors, $\phi_{2D\text{-}s}$, is obtained by concatenating the columns of $\Phi_{2D\text{-}s}$ as follows:

$\phi_{2D\text{-}s} = \big[\, \underbrace{\xi(1)+\gamma(1)}_{\phi_{2D\text{-}s}(1)},\ \ldots,\ \underbrace{\xi(K)+\gamma(1)}_{\phi_{2D\text{-}s}(K)},\ \underbrace{\xi(1)+\gamma(2)}_{\phi_{2D\text{-}s}(K+1)},\ \ldots,\ \underbrace{\xi(1)+\gamma(M)}_{\phi_{2D\text{-}s}((M-1)K+1)},\ \ldots,\ \underbrace{\xi(K)+\gamma(M)}_{\phi_{2D\text{-}s}(S)} \,\big]^T \qquad (12)$

A 2D separable phase error function affects the observation model matrix in the following manner:

$r_\epsilon = D_{2D\text{-}s}\, r, \qquad C(\phi_{2D\text{-}s})\, f = D_{2D\text{-}s}\, C f \qquad (13)$

Here, $D_{2D\text{-}s}$ is a diagonal matrix:

$D_{2D\text{-}s} = \mathrm{diag}\left[ e^{j\phi_{2D\text{-}s}(1)},\ e^{j\phi_{2D\text{-}s}(2)},\ \ldots,\ e^{j\phi_{2D\text{-}s}(S)} \right] \qquad (14)$

    C. 1D Phase Errors

We mentioned before that most encountered phase errors are functions of cross-range only. In other words, for a particular cross-range position the phase error is the same at all range positions. Let $\phi_{1D}$ be the 1D cross-range varying phase error; $\phi_{1D}$ is a vector of length $M$:

$\phi_{1D} = \left[ \phi_{1D}(1),\ \phi_{1D}(2),\ \ldots,\ \phi_{1D}(M) \right]^T \qquad (15)$

In the case of 1D phase errors, the relationship between the error-free and the phase-corrupted data can be expressed as:

$r_\epsilon = D_{1D}\, r, \qquad C(\phi_{1D})\, f = D_{1D}\, C f \qquad (16)$

Here, $D_{1D}$ is an $S \times S$ diagonal matrix defined as:

$D_{1D} = \mathrm{diag}\big[\, \underbrace{e^{j\phi_{1D}(1)}, \ldots, e^{j\phi_{1D}(1)}}_{K},\ \underbrace{e^{j\phi_{1D}(2)}, \ldots, e^{j\phi_{1D}(2)}}_{K},\ \ldots,\ \underbrace{e^{j\phi_{1D}(M)}, \ldots, e^{j\phi_{1D}(M)}}_{K} \,\big] \qquad (17)$

    These relationships can also be stated as follows:

$\bar{r}_{\epsilon,m} = e^{j\phi_{1D}(m)}\, \bar{r}_m, \qquad \bar{C}_m(\phi_{1D})\, f = e^{j\phi_{1D}(m)}\, \bar{C}_m f, \qquad m = 1, 2, \ldots, M \qquad (18)$

Here, $\bar{r}_m$ and $\bar{C}_m$ are the error-free phase history data and the assumed model matrix for the $m$-th cross-range position. Note that in a 1D cross-range phase error case there are $M$ unknowns, in a 2D separable phase error case there are $M + K$ unknowns, and in a 2D non-separable phase error case there are $S = MK$ unknowns. Hence, correcting for 2D non-separable phase errors is a much more difficult problem than the others.
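The block structure of (17) means $D_{1D}$ never has to be stored as an $S \times S$ matrix; its diagonal simply repeats each cross-range phase $K$ times (a minimal sketch, names ours):

```python
import numpy as np

K, M = 32, 32
rng = np.random.default_rng(2)
phi_1d = rng.uniform(-np.pi, np.pi, size=M)

# Eq. (17): along the stacked data vector, the phase for cross-range
# position m repeats K times (once per range sample).
d_1d = np.exp(1j * np.repeat(phi_1d, K))  # diagonal of D_1D, length S = M*K
# r_eps = d_1d * r then applies Eq. (16) elementwise
```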

IV. PROPOSED METHOD

Sparsity-driven radar imaging has already found use in a number of contexts [24–36]. In SAR applications, there is widespread use of sparsity-based imaging due to the advantages, such as super-resolution and artifact suppression, that it provides. Such techniques assume that the observation model is known exactly. However, it is common to encounter model errors. In the presence of phase errors and additive measurement noise induced by the SAR system, the observation model becomes

$g = C(\phi)\, f + v \qquad (19)$

where $v$ stands for measurement noise, which is assumed to be white Gaussian noise, and $g$ is the noisy phase-corrupted observation data. Here, $\phi$ refers to one of the three types of phase errors introduced in Section III.

Based on these observations, we propose a nonquadratic regularization-based method for joint imaging and phase error correction. While existing sparsity-driven SAR imaging methods assume that the data contain no phase errors, our approach jointly estimates and compensates such errors in the data, while performing sparsity-driven image formation. In particular, we pose the joint imaging and phase error estimation problem as the problem of minimizing the following cost function:

$J(f, \phi) = \left\| g - C(\phi)\, f \right\|_2^2 + \lambda \left\| f \right\|_1 \qquad (20)$

Here, $\lambda$ is the regularization parameter, which specifies the strength of the contribution of the regularization term to the solution. The given cost function is minimized jointly with respect to $f$ and $\phi$ using the coordinate descent technique. The algorithm is an iterative algorithm, which cycles through steps of image formation and phase error estimation and compensation. Every iteration involves two steps. In the first step, the cost function is minimized with respect to the field, and in the second step the phase error is estimated given the field estimate. Before the algorithm passes to the next iteration, the model matrix is updated using the estimated phase error. This flow is outlined in Algorithm 1.

In Algorithm 1, $n$ denotes the iteration number. $\hat{f}^{(n)}$ and $\hat{\phi}^{(n)}$ are the image and phase error estimates at iteration $n$, respectively. Note that the knowns in this algorithm are the noisy phase-corrupted data $g$ and the initially assumed model matrix $C$. The unknowns are the field $f$ and the phase error $\phi$, together with the associated model matrix $C(\phi)$ that takes the phase errors into account.

Algorithm 1 Algorithm for the Proposed SDA Method

Initialize: $n = 0$, $\hat{f}^{(0)} = C^H g$, and $C(\hat{\phi}^{(0)}) = C$.
1. $\hat{f}^{(n+1)} = \arg\min_f J(f, \hat{\phi}^{(n)})$
2. $\hat{\phi}^{(n+1)} = \arg\min_\phi J(\hat{f}^{(n+1)}, \phi)$
3. Update $C(\hat{\phi}^{(n+1)})$ using $\hat{\phi}^{(n+1)}$ and $C$.
4. Let $n = n + 1$ and return to 1.
Stop when $\|\hat{f}^{(n+1)} - \hat{f}^{(n)}\|_2^2 \,/\, \|\hat{f}^{(n)}\|_2^2$ is less than a predetermined threshold. In this paper, the value of the threshold is chosen as $10^{-3}$.

It is worth noting here that the use of the nonquadratic regularization-based framework contributes to the accurate estimation of the phase errors as well. Although nonquadratic regularization by itself cannot completely handle the kinds of phase errors considered in this work, it exhibits some robustness to small perturbations on the observation model matrix [37]. In the context of our approach, the nonquadratic regularization term in the cost function provides a small amount of focusing of the estimated field in each iteration. This focusing then enables better estimation of the phase error. This in turn results in a more accurate observation model matrix, which provides better data fidelity and leads to a better field estimate in the next iteration.
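The outer loop of Algorithm 1 can be summarized in a few lines; the sketch below is a skeleton only, with the two minimization steps passed in as functions (all names are ours, and the stopping rule is the relative-change test of Algorithm 1):

```python
import numpy as np

def sda(C0, g, field_update, phase_update, apply_phase, tol=1e-3, max_iter=50):
    """Coordinate-descent skeleton of Algorithm 1.

    C0           : initially assumed model matrix C
    field_update : step 1, solves Eq. (21) given the current C(phi)
    phase_update : step 2, closed-form phase estimate given the field
    apply_phase  : step 3, builds C(phi) from C0 and the estimate
    """
    f = C0.conj().T @ g                    # f_hat^(0) = C^H g
    C = C0                                 # C(phi_hat^(0)) = C
    for _ in range(max_iter):
        f_new = field_update(C, g)         # step 1: image formation
        phi = phase_update(C0, f_new, g)   # step 2: phase error estimation
        C = apply_phase(C0, phi)           # step 3: model matrix update
        if np.linalg.norm(f_new - f)**2 / np.linalg.norm(f)**2 < tol:
            break                          # stopping rule (threshold 1e-3)
        f = f_new                          # step 4: next iteration
    return f_new, phi
```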

Next, we provide the details of the algorithm for the three classes of phase errors described in Section III.

A. Algorithm for 1D Phase Errors

In the algorithm for 1D phase errors, in the first step of every iteration the following cost function is minimized with respect to $f$:

$\hat{f}^{(n+1)} = \arg\min_f \left\| g - C(\hat{\phi}_{1D}^{(n)})\, f \right\|_2^2 + \lambda \left\| f \right\|_1 \qquad (21)$

This is the image formation step and is the same for all types of phase errors. To avoid problems due to the nondifferentiability of the $\ell_1$-norm at the origin, a smooth approximation is used [20]:

$\left\| f \right\|_1 \approx \sum_{i=1}^{I} \left( |f_i|^2 + \beta \right)^{1/2} \qquad (22)$

where $\beta$ is a small nonnegative constant. In each iteration, the field estimate is obtained as

$\hat{f}^{(n+1)} = \left( C(\hat{\phi}_{1D}^{(n)})^H C(\hat{\phi}_{1D}^{(n)}) + \lambda W(\hat{f}^{(n)}) \right)^{-1} C(\hat{\phi}_{1D}^{(n)})^H g \qquad (23)$

where $W(\hat{f}^{(n)})$ is a diagonal matrix:

$W(\hat{f}^{(n)}) = \mathrm{diag}\left[ 1 \Big/ \left( \big|\hat{f}_1^{(n)}\big|^2 + \beta \right)^{1/2},\ \ldots,\ 1 \Big/ \left( \big|\hat{f}_I^{(n)}\big|^2 + \beta \right)^{1/2} \right] \qquad (24)$

The matrix inversion in (23) is not carried out explicitly, but rather numerically through the conjugate gradient algorithm. Note that this algorithm has been used in a variety of settings for sparsity-driven radar imaging, and has been shown to be a descent algorithm [38].
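A compact sketch of the update in (23)-(24) is given below, assuming an explicit (small) model matrix and SciPy's conjugate gradient solver in place of the explicit inverse; the outer re-weighting loop count and all names are our own choices:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def field_update(C, g, lam=1.0, beta=1e-6, outer_iters=5):
    """Fixed-point field update of Eqs. (23)-(24): each pass rebuilds the
    diagonal weight matrix W(f) and solves the normal equations with CG."""
    I = C.shape[1]
    rhs = C.conj().T @ g
    f = rhs.copy()                                   # start from C^H g
    for _ in range(outer_iters):
        w = 1.0 / np.sqrt(np.abs(f)**2 + beta)       # diagonal of W(f), Eq. (24)
        A = LinearOperator(
            (I, I),
            matvec=lambda x, w=w: C.conj().T @ (C @ x) + lam * (w * x),
            dtype=complex,
        )
        f, _ = cg(A, rhs, x0=f, maxiter=100)         # Hermitian system, Eq. (23)
    return f
```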

The second step involves phase error estimation, in which a different procedure is implemented for each type of phase error. For 1D cross-range varying phase errors, given the field estimate, the following cost function is minimized for every cross-range position [39]:

$\hat{\phi}_{1D}^{(n+1)}(m) = \arg\min_{\phi_{1D}(m)} \left\| \bar{g}_m - e^{j\phi_{1D}(m)}\, \bar{C}_m \hat{f}^{(n+1)} \right\|_2^2 \quad \forall m \qquad (25)$

where $\hat{\phi}_{1D}^{(n+1)}(m)$ denotes the phase error estimate for the cross-range position $m$ in iteration $(n+1)$. In (25), the $K \times 1$ vector $\bar{g}_m$ is the noisy SAR data at the $m$-th cross-range position. After evaluating the norm expression in (25) (see the Appendix for details), we obtain

$\hat{\phi}_{1D}^{(n+1)}(m) = \arg\min_{\phi_{1D}(m)} \left( \bar{g}_m^H \bar{g}_m - 2\sqrt{\Re^2 + \Im^2}\, \cos\!\left( \phi_{1D}(m) - \tan^{-1}(\Im / \Re) \right) + \hat{f}^{(n+1)H} \bar{C}_m^H \bar{C}_m \hat{f}^{(n+1)} \right) \qquad (26)$

with $\Re = \mathrm{Re}\{\hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\}$ and $\Im = \mathrm{Im}\{\hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\}$, so that the minimizer is available in closed form:

$\hat{\phi}_{1D}^{(n+1)}(m) = \tan^{-1}(\Im / \Re) \qquad (27)$

For 2D phase errors, the same derivation applies sample by sample; in the 2D non-separable case, the corresponding quantities are

$\Re = \mathrm{Re}\left\{ \hat{f}^{(n+1)H} C_s^H g(s) \right\}, \qquad \Im = \mathrm{Im}\left\{ \hat{f}^{(n+1)H} C_s^H g(s) \right\} \qquad (38)$

Using the phase error estimate, the model matrix is updated through:

$C_s(\hat{\phi}_{2D\text{-}ns}^{(n+1)}(s)) = e^{j\hat{\phi}_{2D\text{-}ns}^{(n+1)}(s)}\, C_s \quad \forall s \qquad (39)$

If the phase error type (i.e., 1D, 2D separable, or 2D non-separable) is known, then it is natural to use the corresponding version of the proposed algorithm for best phase error estimation performance. If the phase error type is not known a priori, then the version of our algorithm for the 2D non-separable case can be used, since this is the most general scenario. For any of these three types of phase errors, our algorithm does not require any knowledge about how the phase error function varies (randomly, quadratically, polynomially, etc.) along the range (in 2D cases) or cross-range (in 1D and 2D cases) directions. We demonstrate the effectiveness of our approach on data corrupted by various phase error functions.

    V. EXPERIMENTAL RESULTS

We have applied the proposed SDA method in a number of scenarios and present our results in the following two subsections. In Section V.A we present our results on various types of data and demonstrate the improvements in visual image quality as compared to the uncompensated case. In Section V.B we provide a quantitative comparison of our approach with existing state-of-the-art autofocus techniques.

A. Qualitative Results and Comparison to the Uncompensated Case

To present qualitative results for the proposed method in comparison to the uncompensated case, several experiments have been performed on various synthetic scenes as well as on two public SAR data sets provided by the U.S. Air Force Research Laboratory (AFRL): the Slicy data, part of the MSTAR dataset [41], and the Backhoe data [42].

To generate synthetic SAR data for a $32 \times 32$ scene, we have used a SAR system model with the parameters given in Table I. The resulting phase history data lie on a polar grid.

TABLE I
SAR SYSTEM PARAMETERS USED IN THE SYNTHETIC SCENE EXPERIMENT WHOSE RESULTS ARE SHOWN IN FIGURES 2 AND 3.

carrier frequency ($\omega_0$)    $2\pi \times 10^{10}$ rad/s
chirp rate ($2\alpha$)            $2\pi \times 10^{12}$ rad/s$^2$
pulse duration ($T_p$)            $4 \times 10^{-4}$ s
angular range ($\Delta\theta$)    $2.3^\circ$

As observation noise, complex white Gaussian noise is added to the data so that the SNR is 30 dB. We have performed experiments for four different types of phase errors. The original synthetic image is shown in Figure 2(a). For the data without phase errors, conventional and sparsity-driven reconstructions are given in Figure 2(b) and (c), respectively. In this paper, in all of the experiments, the polar-format algorithm is used for conventional imaging. Results by conventional imaging and by the proposed method for different types of phase errors are shown in Figure 3. Conventionally reconstructed images suffer from degradation due to phase errors. The results show the effectiveness of the proposed method. As seen in Figure 3, it is not possible to visually distinguish the images formed by the proposed method from the original scene.

To demonstrate the performance of SDA in the presence of speckle, we present some results on a $128 \times 128$ synthetic scene in Figure 4. The scene consists of six point-like targets and a spatially extended target with the shape of a square frame. To create speckle, random phase is added to the reflectivities of the underlying scene. The corresponding SAR data are simulated by taking a $32 \times 32$ band-limited segment from the 2D Fourier transform of the scene. Then a 1D cross-range varying random phase error, uniformly distributed in $[-\pi, \pi]$, has been added to the data. Speckle is clearly visible in the conventional image reconstructed from the data without phase errors in Figure 4(a). The images reconstructed by conventional imaging and sparsity-driven imaging when the data are corrupted by phase errors are shown in Figures 4(b) and (c), respectively. The result in Figure 4(d) demonstrates that SDA can effectively perform imaging and phase error compensation in the presence of speckle.
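The speckled-scene simulation described above can be reproduced in a few lines; a minimal sketch (scene contents simplified to a single scatterer, names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, M = 128, 32, 32
scene = np.zeros((N, N))
scene[40, 40] = 1.0                      # stand-in for the targets in Figure 4

# Speckle: random phase added to the reflectivities of the underlying scene
speckled = scene * np.exp(1j * rng.uniform(-np.pi, np.pi, size=scene.shape))

# SAR data: a 32 x 32 band-limited segment of the scene's 2D Fourier transform
spectrum = np.fft.fftshift(np.fft.fft2(speckled))
k0, m0 = (N - K) // 2, (N - M) // 2
data = spectrum[k0:k0 + K, m0:m0 + M]
```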

In Figure 5 we present the images reconstructed from the Slicy data without phase errors. The Slicy target is a precisely designed and machined engineering test target containing multiple simple geometric radar reflector static shapes. Figure 5(a) shows the image reconstructed conventionally and Figure 5(b) shows the result of sparsity-driven imaging. As seen in the figures, sparsity-driven imaging provides high resolution images with enhanced features (in this particular example, this means locations of dominant point scatterers). Figures 6(a) and (b) show the results on the Slicy data for a 1D quadratic and a 1D random phase error uniformly distributed in $[-\pi, \pi]$. The images in the middle column correspond to direct application of the sparsity-driven imaging technique of [20] without model error compensation. The significant degradation in the reconstructions shows that sparsity-driven imaging without model error compensation cannot handle phase errors. From the images presented in the right column we can see clearly that the images formed by the proposed SDA method inherit and exhibit the advantages of sparsity-driven imaging (see Figure 5(b)), and in the meantime the phase errors are removed as well. In Figures 6(c) and (d), the results for 2D separable and non-separable random phase errors are displayed. 2D phase errors cause a dramatic degradation in the reconstructed images. However, the proposed SDA method successfully corrects the 2D phase errors as well, and produces images that exhibit accurate localization of the true scatterers and significant artifact suppression.

Another dataset on which we present results is the Backhoe dataset. We present 2D image reconstruction experiments based on the AFRL 'Backhoe Data Dome, Version 1.0', which consists of simulated wideband (7–13 GHz), full polarization, complex backscatter data from a backhoe vehicle in free space. The backscatter data are available over a full upper $2\pi$ steradian viewing hemisphere.



Fig. 2. (a) The original scene. (b) Conventional imaging from the data without phase errors. (c) Sparsity-driven imaging from the data without phase errors.


Fig. 3. Left- Phase error (in radians, versus aperture position). Middle- Images reconstructed by conventional imaging. Right- Images reconstructed by the proposed SDA method. (a) Results for a quadratic phase error. (b) Results for an 8th order polynomial phase error. (c) Results for a phase error uniformly distributed in $[-\pi/2, \pi/2]$.

Fig. 4. Experimental results on a speckled scene. (a) Conventional image reconstructed from noisy data without phase error. (b) Conventional image reconstructed from noisy data with phase error. (c) Image reconstructed by sparsity-driven imaging from noisy data with phase error. (d) Image reconstructed by the proposed SDA method.


Fig. 5. (a) Conventional imaging from the data without phase error. (b) Sparsity-driven imaging from the data without phase error.

Fig. 6. Left- Images reconstructed by conventional imaging. Middle- Images reconstructed by sparsity-driven imaging. Right- Images reconstructed by the proposed SDA method. (a) Results for a 1D quadratic phase error. (b) Results for a 1D phase error uniformly distributed in $[-\pi, \pi]$. (c) Results for a 2D separable phase error composed of two 1D phase errors uniformly distributed in $[-3\pi/4, 3\pi/4]$. (d) Results for a 2D non-separable phase error uniformly distributed in $[-\pi, \pi]$.


Fig. 7. (a) Conventional imaging from the data without phase error. (b) Sparsity-driven imaging from the data without phase error.

Fig. 8. Left- Images reconstructed by conventional imaging. Middle- Images reconstructed by sparsity-driven imaging. Right- Images reconstructed by the proposed SDA method. (a) Results for a 1D phase error uniformly distributed in $[-\pi/2, \pi/2]$. (b) Results for a 2D separable phase error composed of two 1D phase errors uniformly distributed in $[-3\pi/4, 3\pi/4]$.

Fig. 9. Experiments on the Slicy data with 70% frequency band omissions: (a) Conventional imaging from the data without phase error. (b) Sparsity-driven imaging from the data without phase error.


Fig. 10. Experiments on the Slicy data with 70% frequency band omissions and a 1D quadratic phase error: (a) Conventional imaging. (b) Sparsity-driven imaging. (c) Proposed SDA method.

Fig. 11. Results of the experiment for testing the effect of the nonquadratic regularization term in the proposed SDA method on phase error compensation. (a) The original scene. (b) Conventional imaging from the data with phase error. (c) Image reconstructed in the case of replacing the $\ell_1$-norm in our approach with an $\ell_2$-norm without changing the phase error estimation piece. (d) Image reconstructed by the proposed SDA method.

In our experiments, we use VV polarization data, centered at 10 GHz and with an azimuthal span of $110^\circ$. The data we use in our experiments have a bandwidth of 1 GHz. To deal with the wide-angle observation in the Backhoe dataset, we incorporate the subaperture-based composite imaging approach of [43] into our framework. The composite image is formed by combining the subaperture images so that each pixel value of the composite image is determined by selecting the maximum value for that pixel across the subaperture images, as sketched below. For this experiment, phase error estimation and correction are performed for every subaperture image. In Figure 7 we show the conventionally and the sparsity-driven reconstructed images for the data without phase error. The results on the Backhoe data for 1D and 2D separable random phase errors are presented in Figure 8. In the left and middle columns of Figure 8, the artifacts due to phase errors are clearly seen in the images reconstructed by conventional imaging and sparsity-driven imaging, respectively. However, both 1D and 2D phase errors are compensated effectively by the proposed method. From the examples given so far, we see that the proposed SDA method corrects the phase errors effectively and provides images with high resolution and reduced sidelobes, thanks to the nonquadratic regularization-based framework.
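The maximum-combination just described is a one-liner; the sketch below assumes the subaperture images have already been formed and registered to a common grid:

```python
import numpy as np

def composite_image(subaperture_images):
    """Composite imaging in the spirit of [43]: each pixel of the composite
    takes the maximum magnitude of that pixel across subaperture images."""
    stack = np.abs(np.stack(subaperture_images))  # (n_subapertures, H, W)
    return stack.max(axis=0)
```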

We mentioned that regularization-based imaging gives satisfying results in cases of incomplete data as well. We explore this aspect in the presence of phase errors by performing an experiment on the Slicy data with frequency band omissions. In this experiment, data from randomly selected contiguous frequency bands corresponding to 70% of the frequencies have been set to zero, i.e., only 30% of the spectral data within the radar's bandwidth are available. A detailed explanation of this spectral masking procedure can be found in [43]. Then, a 1D quadratic phase error function was applied to the data. The results of this experiment are presented in Figure 9 and Figure 10. As seen from the reconstructions, the proposed method produces feature-enhanced images and removes phase errors effectively even when the data are partially available. Finally, we demonstrate how the nonquadratic regularization functional in our framework supports phase error compensation, through a simple experiment in which we compare the results of our approach with a quadratic regularization scheme. In this experiment, we have applied a 1D cross-range varying random phase error, uniformly distributed in $[-\pi, \pi]$, to the data from a synthetic scene, simulated just by taking its 2D Fourier transform. To construct a quadratic regularization-based scheme, we have replaced the $\ell_1$-norm in our approach with an $\ell_2$-norm without changing the phase error estimation piece. We present the results of this experiment in Figure 11. As seen from the images, with the quadratic regularization approach it is not possible to correct phase errors, whereas the image reconstructed by our nonquadratic regularization-based SDA algorithm is perfectly focused.

B. Quantitative Results in Comparison to State-of-the-art Autofocus Methods

In the second part of the experimental study, we present results comparing the proposed technique with existing autofocus techniques. In Figure 12 we show comparative results for a $64 \times 64$ synthetic scene. The SAR data are simulated by taking a band-limited segment on a rectangular grid from the 2D discrete Fourier transform (DFT) of the scene. Then complex white Gaussian noise is added to the data so that the input SNR is 10.85 dB. Then a 1D cross-range varying random phase error, uniformly distributed in $[-\pi, \pi]$, is added to the data.


Fig. 12. (a) The original scene. (b) Conventional imaging from noisy data without phase error. (c) Conventional imaging from noisy data with phase error. (d) Result of PGA. (e) Result of entropy minimization. (f) Result of the proposed SDA method.

The performance of the proposed technique is compared to the performance of PGA [2] and entropy minimization techniques [3, 5–7]. For entropy minimization we have used the procedure given in [5]. For this particular experiment, the results suggest that all three methods do a good job in estimating the phase error. However, in terms of image quality, while PGA and entropy minimization are limited by conventional imaging, the proposed SDA method demonstrates the advantage of joint sparsity-driven imaging and phase error correction, and produces a scene that appears to provide a very accurate representation of the original scene. For the same synthetic scene we have also performed experiments with different input SNRs. For each SNR value we have applied 20 different random 1D phase errors, all of them uniformly distributed in $[-\pi, \pi]$. For each experiment we compute three different metrics: the MSE between the original image and the image resulting from the application of the autofocus technique considered, the target-to-background ratio, and metrics for the phase error estimation error. These metrics are computed as follows:

$\mathrm{MSE} = \frac{1}{I} \left\| \, |f| - |\hat{f}| \, \right\|_2^2 \qquad (40)$

Here, $f$ and $\hat{f}$ denote the original and the reconstructed images, respectively, and $I$ is the total number of pixels.

The target-to-background ratio is used to determine the accentuation of the target pixels with respect to the background:

$\mathrm{TBR} = 20 \log_{10} \frac{\max_{i \in T} \big| \hat{f}_i \big|}{\frac{1}{I_B} \sum_{j \in B} \big| \hat{f}_j \big|} \qquad (41)$

Here, $T$ and $B$ denote the pixel indices for the target and the background regions, respectively, and $I_B$ is the number of background pixels.
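The two image-quality metrics of (40) and (41) are straightforward to compute; in the sketch below the target region is given as a boolean mask (our own convention):

```python
import numpy as np

def mse(f, f_hat):
    """Eq. (40): mean squared error between the image magnitudes."""
    return np.sum((np.abs(f) - np.abs(f_hat))**2) / f.size

def tbr(f_hat, target_mask):
    """Eq. (41): target-to-background ratio in dB; target_mask is True on T."""
    peak_target = np.abs(f_hat[target_mask]).max()
    mean_background = np.abs(f_hat[~target_mask]).mean()
    return 20.0 * np.log10(peak_target / mean_background)
```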

To compare the phase error estimation performance of the proposed method to other techniques, we first compute the estimation error for phase errors:

$\phi_e = \phi - \hat{\phi} \qquad (42)$

Here, $\phi_e$ is effectively the phase error that remains in the problem after correction of the data or the model using the estimated phase error. To evaluate various techniques based on their phase error estimation performance, it makes sense to first remove the components in $\phi_e$ that either have no effect on the reconstructed image, or that can be easily dealt with, and then perform the evaluation based on the remaining error. We first note that a constant (as a function of the aperture position) phase shift has no effect on the reconstructed image [1]. Second, a linear phase shift does not cause blurring, but rather a spatial shift in the reconstructed image. Such a phase error can be compensated by appropriate spatial operations on the scene [4], which we perform prior to quantitative evaluation. To disregard the effect of any constant phase shift in our evaluation, and also noting that the amount of variation of the phase error across the aperture is closely related to the degree of degradation of the formed imagery, we propose using evaluation metrics based on the total variation (TV) of $\phi_e$ and on the $\ell_2$-norm of the gradient of $\phi_e$:

$\mathrm{TV_{PE}} = \frac{1}{M-1} \left\| \nabla \phi_e \right\|_1, \qquad \mathrm{MSE_{PE}} = \frac{1}{M-1} \left\| \nabla \phi_e \right\|_2^2 \qquad (43)$

Here, $\nabla\phi_e$ is the $(M-1) \times 1$ vector obtained by taking first-order differences between successive elements of $\phi_e$, and $M$ is the total number of cross-range positions.
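A sketch of the phase-error metrics in (43) is given below; subtracting the mean of the first-order differences is our simplified stand-in for the linear-shift compensation discussed above (a constant shift is discarded automatically by the differencing):

```python
import numpy as np

def phase_error_metrics(phi, phi_hat):
    """TV_PE and MSE_PE of Eq. (43) on the residual error phi_e."""
    phi_e = phi - phi_hat
    grad = np.diff(phi_e)          # first-order differences, (M-1)-vector
    grad = grad - grad.mean()      # discard the linear (spatial-shift) part
    M = phi.size
    tv_pe = np.abs(grad).sum() / (M - 1)
    mse_pe = np.sum(grad**2) / (M - 1)
    return tv_pe, mse_pe
```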

Now we get back to the quantitative evaluation of the reconstruction of the scene in Figure 12(a) for various SNRs. We present the comparison results for these three metrics in Figure 13. Since the $\mathrm{TV_{PE}}$ and $\mathrm{MSE_{PE}}$ values are similar for these particular experiments, for the sake of space we show the results for $\mathrm{MSE_{PE}}$ only. From the plots presented, it is clearly seen that the proposed method performs better than the other techniques, especially for low SNR values. We also note in Figure 13(a) that the proposed SDA method yields much better performance in terms of the MSE between the original and the reconstructed images even at high SNRs. This is due to the fact that SDA benefits from the advantages of sparsity-driven imaging over conventional imaging (unlike the other techniques; see Figure 12), in addition to successfully correcting the phase errors (like the other techniques) at high SNRs.

Fig. 13. Quantitative evaluation of the reconstruction of the scene in Figure 12(a) for various SNRs. Each point on the curves corresponds to an average over 20 experiments with different random 1D phase errors uniformly distributed in $[-\pi, \pi]$. (a) MSE versus SNR. (b) Target-to-background ratio versus SNR. (c) MSE for phase error estimation versus SNR.

All three algorithms were implemented using non-optimized MATLAB code on an Intel Celeron 2.13 GHz CPU. In the experiment of Figure 12, the computation times required by PGA, entropy minimization, and the proposed SDA method are 0.6240 s, 1.1076 s, and 2.1216 s, respectively. For the experiments of Figure 13, the average computation times for PGA, entropy minimization, and SDA are 0.3095 s, 0.4719 s, and 3.4961 s, respectively. The computational load of SDA is higher than that of the other methods, but this can be justified by the benefits provided by the sparsity-driven imaging framework underlying SDA, as demonstrated in our experiments.

In Figure 14, we display some comparative results on the Backhoe data as well. For this experiment, the applied 1D phase error is a random error with a uniform distribution in $[-\pi, \pi]$. In this example, for quantitative comparison, we use the MSE for the phase error. The $\mathrm{MSE_{PE}}$ values are shown in Table II.

Fig. 14. Experiments on the Backhoe data for a 1D random phase error with a uniform distribution in $[-\pi, \pi]$. (a) Conventional imaging with phase error. (b) Result of PGA. (c) Result of entropy minimization. (d) Result of the proposed SDA method.

The results show that the proposed method performs phase error estimation more accurately than the PGA and entropy minimization techniques. Furthermore, the proposed method also exhibits superiority over existing autofocus techniques in terms of the quality of the reconstructed scene. In particular, the proposed method results in a finer and more detailed visualization through noise and sidelobe suppression as well as resolution improvements. The reconstructed images and quantitative comparison show the effectiveness of the proposed approach.

Finally, we compare our method with the recently proposed multichannel autofocus (MCA) technique [12]. We have generated a $64 \times 64$ synthetic scene that satisfies the requirements of MCA, involving a condition on the rank of the image, as well as the presence of a low-return region in the scene. The SAR data used in these experiments are corrupted by a 1D cross-range varying random phase error, uniformly distributed in $[-\pi, \pi]$. We show the results of the experiments performed for various input SNR levels in Figure 15. We observe that both MCA and SDA perform successful phase error compensation at the relatively high SNR of 27 dB (see Figure 15(c) and (d)). However, when the SNR is reduced to 10 dB, MCA is not able to correct the phase error, as shown in Figure 15(f). On the other hand, SDA compensates phase errors, and suppresses noise and clutter effectively even for this relatively low SNR case, as shown in Figure 15(g). Figure 15(h) contains a plot of the MSEs for phase error estimation achieved by MCA and SDA on this scene for various SNR levels. This plot demonstrates the robustness of SDA to noise.

Average computation times required by MCA and the proposed SDA method for the experiments displayed in Figure 15 are 0.1629 s and 2.5151 s, respectively (using non-optimized MATLAB code on an Intel Celeron 2.13 GHz CPU). The results of these experiments show that although MCA is a fast algorithm, working very well in scenarios involving high-quality data, its performance degrades significantly as the SNR decreases.

Fig. 15. (a) The original scene. (b) Conventional imaging from noisy phase-corrupted data for input SNR of 27 dB. (c) Result of MCA for input SNR of 27 dB. (d) Result of the proposed SDA method for input SNR of 27 dB. (e) Conventional imaging from noisy phase-corrupted data for input SNR of 10 dB. (f) Result of MCA for input SNR of 10 dB. (g) Result of the proposed SDA method for input SNR of 10 dB. (h) MSEs for phase error estimation versus SNR.

TABLE II
MSE ACHIEVED BY VARIOUS METHODS IN ESTIMATING THE PHASE ERROR FOR THE BACKHOE EXPERIMENT IN FIGURE 14.

                        PGA       Entropy Minimization    Proposed SDA Method
$\mathrm{MSE_{PE}}$     3.3267    2.1715                  2.1382

    VI. CONCLUSION

We have proposed and demonstrated a sparsity-driven technique for joint SAR imaging and phase error correction. The method corrects the phase errors during the image formation process while it produces high resolution, focused SAR images, thanks to its sparsity enforcing nature resulting from the use of a nonquadratic regularization-based framework. While the proposed SDA method requires more computation compared to existing autofocus techniques due to its sparsity-driven image formation part, its overall computational load is not significantly more than that of sparsity-driven imaging without phase error compensation, since image formation and phase error estimation are performed simultaneously in the proposed method. The method can handle 1D as well as 2D phase errors. Experimental results on various scenarios demonstrate the effectiveness of the proposed approach as well as the improvements it provides over existing methods for phase error correction.

In this work we considered SAR, but our approach is applicable in other areas where similar types of model errors are encountered as well. Since the proposed method has a sparsity-driven structure, it is applicable only to radar imaging scenarios in which the underlying scene admits a sparse representation in a particular domain. Other potential extensions may be the formulation of the problem for scenarios involving sparse representations of the field in various spatial dictionaries, or the incorporation of prior information or some constraints on the phase error. For future work, model errors in multistatic scenarios and target motion induced phase errors would be of interest as well.

APPENDIX

In this appendix, we describe how we get from Eqn. (25) to Eqn. (26). The cost function in (25) for phase error estimation is as follows:

$\hat{\phi}_{1D}^{(n+1)}(m) = \arg\min_{\phi_{1D}(m)} \left\| \bar{g}_m - e^{j\phi_{1D}(m)}\, \bar{C}_m \hat{f}^{(n+1)} \right\|_2^2 \quad \forall m$


Here, $M$ denotes the total number of cross-range positions. When we evaluate the norm expression we get:

$\left\| \bar{g}_m - e^{j\phi_{1D}(m)}\, \bar{C}_m \hat{f}^{(n+1)} \right\|_2^2 = \left( \bar{g}_m - e^{j\phi_{1D}(m)}\, \bar{C}_m \hat{f}^{(n+1)} \right)^H \left( \bar{g}_m - e^{j\phi_{1D}(m)}\, \bar{C}_m \hat{f}^{(n+1)} \right)$


$= \bar{g}_m^H \bar{g}_m - \bar{g}_m^H e^{j\phi_{1D}(m)} \bar{C}_m \hat{f}^{(n+1)} - \hat{f}^{(n+1)H} \bar{C}_m^H \underbrace{\left( e^{j\phi_{1D}(m)} \right)^H}_{e^{-j\phi_{1D}(m)}} \bar{g}_m + \hat{f}^{(n+1)H} \bar{C}_m^H \bar{C}_m \hat{f}^{(n+1)}$

$= \bar{g}_m^H \bar{g}_m - \bar{g}_m^H \left[ \cos(\phi_{1D}(m)) + j \sin(\phi_{1D}(m)) \right] \bar{C}_m \hat{f}^{(n+1)} - \hat{f}^{(n+1)H} \bar{C}_m^H \left[ \cos(\phi_{1D}(m)) - j \sin(\phi_{1D}(m)) \right] \bar{g}_m + \hat{f}^{(n+1)H} \bar{C}_m^H \bar{C}_m \hat{f}^{(n+1)}$

$= \bar{g}_m^H \bar{g}_m - 2\,\mathrm{Re}\{\cos(\phi_{1D}(m))\, \hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\} + 2\,\mathrm{Re}\{j \sin(\phi_{1D}(m))\, \hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\} + \hat{f}^{(n+1)H} \bar{C}_m^H \bar{C}_m \hat{f}^{(n+1)}$

$= \bar{g}_m^H \bar{g}_m - 2 \cos(\phi_{1D}(m))\, \mathrm{Re}\{\hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\} - 2 \sin(\phi_{1D}(m))\, \mathrm{Im}\{\hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\} + \hat{f}^{(n+1)H} \bar{C}_m^H \bar{C}_m \hat{f}^{(n+1)}$

Let $\Re = \mathrm{Re}\{\hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\}$ and $\Im = \mathrm{Im}\{\hat{f}^{(n+1)H} \bar{C}_m^H \bar{g}_m\}$. Since we can write $\sin(\phi_{1D}(m))$ as $\cos(\phi_{1D}(m) - \frac{\pi}{2})$, the equation becomes

$\left\| \bar{g}_m - e^{j\phi_{1D}(m)}\, \bar{C}_m \hat{f}^{(n+1)} \right\|_2^2 = \bar{g}_m^H \bar{g}_m - 2\left[ \Re \cos(\phi_{1D}(m)) + \Im \cos\!\left(\phi_{1D}(m) - \tfrac{\pi}{2}\right) \right] + \hat{f}^{(n+1)H} \bar{C}_m^H \bar{C}_m \hat{f}^{(n+1)}$

The cosines in the previous equation can be added with the phasor addition rule into a single cosine. The phasors for the terms $\Re \cos(\phi_{1D}(m))$ and $\Im \cos(\phi_{1D}(m) - \frac{\pi}{2})$ are

$P_1 = \Re e^{j0} = \Re, \qquad P_2 = \Im e^{-j\frac{\pi}{2}} = -j\Im$

If we add the phasors,

$P_1 + P_2 = \Re + (-j\Im) = \Re - j\Im$

we can find the magnitude and the phase of the new cosine as

$\text{magnitude} = \sqrt{\Re^2 + \Im^2}, \qquad \text{phase} = -\tan^{-1}(\Im/\Re)$

so that $\Re \cos(\phi_{1D}(m)) + \Im \cos(\phi_{1D}(m) - \frac{\pi}{2}) = \sqrt{\Re^2 + \Im^2}\, \cos\!\left( \phi_{1D}(m) - \tan^{-1}(\Im/\Re) \right)$. Substituting this expression into the cost function above yields Eqn. (26).

REFERENCES

[21] ... decomposition in electromagnetics," in Proc. Int. Symp. Antennas Propag. Soc., pp. 1984–1987, 1997.

[22] T. J. Kragh and A. A. Kharbouch, "Monotonic iterative algorithms for SAR image restoration," in Proc. IEEE Int. Conf. Image Processing, pp. 645–648, 2006.

[23] D. W. Warner, D. Ghiglia, A. FitzGerrell, and J. Beaver, "Two-dimensional phase gradient autofocus," Image Reconstruction from Incomplete Data, Proc. SPIE, vol. 4123, pp. 162–173, 2000.

[24] L. C. Potter, E. Ertin, J. T. Parker, and M. Çetin, "Sparsity and compressed sensing in radar imaging," Proc. of the IEEE, vol. 98, pp. 1006–1020, 2010.

[25] V. M. Patel, G. R. Easley, D. M. Healy, and R. Chellappa, "Compressed synthetic aperture radar," IEEE Journal of Selected Topics in Signal Processing, vol. 4, pp. 244–254, 2010.

[26] R. Baraniuk and P. Steeghs, "Compressive radar imaging," in Proc. IEEE 2007 Radar Conf., pp. 128–133, 2007.

[27] C. Gurbuz, J. McClellan, and R. Scott, Jr., "A compressive sensing data acquisition and imaging method for stepped frequency GPRs," IEEE Trans. Signal Processing, vol. 57, pp. 2640–2650, 2009.

[28] I. Stojanovic, W. C. Karl, and M. Çetin, "Compressed sensing of mono-static and multi-static SAR," Algorithms for Synthetic Aperture Radar Imagery XVI, Proc. SPIE, 2009.

[29] C. Berger, S. Zhou, and P. Willett, "Signal extraction using compressed sensing for passive radar with OFDM signals," in Proc. 11th Int. Conf. Inf. Fusion, p. 16, 2008.

[30] I. Stojanovic and W. C. Karl, "Imaging of moving targets with multi-static SAR using an overcomplete dictionary," IEEE Journal of Selected Topics in Signal Processing, vol. 4, pp. 164–176, 2010.

[31] M. Çetin and A. Lanterman, "Region-enhanced passive radar imaging," Proc. Inst. Elect. Eng. Radar, Sonar Navig., vol. 152, pp. 185–194, 2005.

[32] M. A. Herman and T. Strohmer, "High-resolution radar via compressed sensing," IEEE Trans. Signal Processing, vol. 57, pp. 2275–2284, 2009.

[33] K. R. Varshney, M. Çetin, J. W. Fisher, III, and A. S. Willsky, "Sparse signal representation in structured dictionaries with application to synthetic aperture radar," IEEE Trans. Signal Processing, vol. 56, pp. 3548–3561, 2008.

[34] C. D. Austin and R. L. Moses, "Wide-angle sparse 3D synthetic aperture radar imaging for nonlinear flight paths," in Proc. IEEE Nat. Aerosp. Electron. Conf., pp. 330–336, 2008.

[35] C. Austin, E. Ertin, and R. L. Moses, "Sparse multipass 3D imaging: Applications to the GOTCHA data set," Defense Security Symp. Algorithms for Synthetic Aperture Radar Imagery XVI, Proc. SPIE, vol. 7337, pp. 733703-1–733703-12, 2009.

[36] M. Ferrara, J. Jackson, and M. Stuff, "Three-dimensional sparse-aperture moving-target imaging," Algorithms for Synthetic Aperture Radar Imagery XIV, Proc. SPIE, vol. 6970, 2008.

[37] M. Herman and T. Strohmer, "General deviants: An analysis of perturbations in compressed sensing," IEEE Journal of Selected Topics in Signal Processing: Special Issue on Compressive Sensing, vol. 4(2), pp. 342–349, 2010.

[38] M. Çetin, W. C. Karl, and A. S. Willsky, "Feature-preserving regularization method for complex-valued inverse problems with application to coherent imaging," Opt. Eng., vol. 45, 2006.

[39] N. Ö. Önhon and M. Çetin, "A nonquadratic regularization based technique for joint SAR imaging and model error correction," Algorithms for Synthetic Aperture Radar Imagery XVI, Proc. SPIE, vol. 7337, 2009.

[40] N. Ö. Önhon and M. Çetin, "Joint sparsity-driven inversion and model error correction for radar imaging," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, pp. 1206–1209, 2010.

[41] MSTAR, Air Force Research Laboratory, Sensor Data Management System, https://www.sdms.afrl.af.mil/index.php?collection=mstar.

[42] Backhoe Data Sample & Visual-D Challenge Problem, Air Force Research Laboratory, Sensor Data Management System, https://www.sdms.afrl.af.mil.

[43] M. Çetin and R. L. Moses, "SAR imaging from partial-aperture data with frequency-band omissions," Algorithms for Synthetic Aperture Radar Imagery XII, Proc. SPIE, vol. 5808, pp. 32–43, 2005.

N. Özben Önhon (S'09) received the B.S. degree in electronics and telecommunication engineering and the M.S. degree in telecommunication engineering from Istanbul Technical University, in 2003 and 2006, respectively. She is currently pursuing the Ph.D. degree at Sabancı University, Istanbul, Turkey. Her research interests are in the areas of signal and image processing.

Müjdat Çetin (S'98–M'02) received the Ph.D. degree in electrical engineering from Boston University, Boston, MA, USA, in 2001. From 2001 to 2005, he was with the Laboratory for Information and Decision Systems, M.I.T., Cambridge, MA, USA. Since September 2005 he has been a faculty member at Sabancı University, Istanbul, Turkey. Dr. Çetin has served as the Technical Program Co-chair for the 2010 International Conference on Pattern Recognition and for the 2006 IEEE Turkish Conference on Signal Processing, Communications, and their Applications. He is currently an area editor for the Journal of Advances in Information Fusion, a guest editor for Pattern Recognition Letters, and a EURASIP Liaison Officer for Turkey. His research interests include statistical signal and image processing, inverse problems, radar imaging, brain-computer interfaces, machine learning, computer vision, data fusion, wireless sensor networks, biomedical information processing, and sensor array signal processing. Dr. Çetin has received several awards including the 2010 IEEE Signal Processing Society Best Paper Award, the 2010 METU Parlar Foundation Research Incentive Award, the 2008 Turkish Academy of Sciences Distinguished Young Scientist Award, the 2007 Elsevier Signal Processing Journal Best Paper Award, and the 2006 TÜBİTAK Career Award.