Transcript
  • Alard & Lupton 1998, ApJ, 503, 325-331: A Method for Optimal Image Subtraction

    method that will also solve the least-squares problem. This method must perform best for the very crowded fields of the microlensing surveys. The challenge will be to achieve a reasonable computing time for processing the huge amount of data from the microlensing experiments. It is essential that the computing time be reduced by at least a factor of 100.

    2. THE METHOD

    2.1. Preliminaries

    Before looking for the optimal subtraction, we need to perform some basic operations to register the frames to a reference frame. Usually the frames have slightly different centers and orientation (and possibly scale), and we need to perform an astrometric transform to match the coordinates of the reference frame. We determine this transform by fitting a two-dimensional polynomial using 500 stars on the reference frame, and the same number on the other frame. Using this transform, we then resample the frame on the grid defined by the reference frame. This resampling is performed by interpolation using bicubic splines, which gives excellent accuracy. All the frames are then on the same coordinate system, and we can proceed to match the seeing.
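    As a rough illustration of this registration step (not the authors' code), the following Python sketch fits a two-dimensional polynomial transform to matched star positions and resamples a frame onto the reference grid with cubic-spline interpolation; the function names, the polynomial degree, and the use of scipy's map_coordinates are assumptions.

```python
# Sketch of the registration step: fit a 2-D polynomial astrometric
# transform from matched star lists, then resample onto the reference grid.
# Assumed tools: numpy + scipy; a degree-2 polynomial is an arbitrary choice.
import numpy as np
from scipy.ndimage import map_coordinates

def fit_poly_transform(x_ref, y_ref, x_img, y_img, deg=2):
    """Least-squares fit of (x_img, y_img) as polynomials in (x_ref, y_ref)."""
    terms = [x_ref**i * y_ref**j
             for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack(terms)
    cx, *_ = np.linalg.lstsq(A, x_img, rcond=None)
    cy, *_ = np.linalg.lstsq(A, y_img, rcond=None)
    return cx, cy, deg

def resample_to_reference(image, coeffs, shape):
    """Evaluate the transform on the reference grid and interpolate (cubic spline)."""
    cx, cy, deg = coeffs
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    terms = [xx**i * yy**j
             for i in range(deg + 1) for j in range(deg + 1 - i)]
    T = np.stack(terms)
    x_src = np.tensordot(cx, T, axes=1)
    y_src = np.tensordot(cy, T, axes=1)
    # map_coordinates expects (row, col) = (y, x); order=3 gives cubic splines
    return map_coordinates(image, [y_src, x_src], order=3, mode="nearest")
```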

    2.2. The Reference Frame

    Here we emphasize that we choose to take the best seeing frame as the reference. We do not wish to degrade the frame to the worst seeing frame, as this will clearly lower the signal-to-noise ratio. Later we will match the seeing in our frames by convolving the reference to the seeing of each other frame. Matching the image quality to that in good seeing frames is more difficult, but we are looking for an optimal result.

    2.3. Seeing Alignment to the Reference

    We now arrive at the fundamental problem of matching the seeing of two frames with different PSFs. We do not want to make any assumption concerning the PSF on the frame, and we plan to use all the pixels. The important point is that most of the stars on a given frame do not have large amplitude variations, varying by at most 1% or 2%. This allows us to say that most of the pixels on any two frames of the same field should be very similar if the seeing were the same. Consequently, one possibility is to try to find the kernel by finding the least-squares solution of the equation

    Ref(x, y) ⊗ Kernel(u, v) = I(x, y) ,   (1)

    where Ref is the reference image and I is the image to be aligned. The symbol ⊗ denotes convolution.

    In principle, solving this equation is a nonlinear problem. An attempt to solve this nonlinear problem was made by Kochanski et al. (1996) in order to look for variability of active galactic nuclei. However, the computing time of such methods is prohibitive, and they cannot be applied to large data sets, such as those of the microlensing surveys. We need to find a more tractable solution to this problem.

    An important consideration is that if we decompose our kernel using some basis of functions, the problem becomes a standard linear least-squares problem. If we use the kernel decomposition of the form

    Kernel(u, v) = Σ_i a_i B_i(u, v) ,   (2)

    solving the least-squares problem gives the following matrix equation for the coefficients a_i:

    M a = V ,

    where

    M_ij = ∫ C_i(x, y) C_j(x, y) / σ(x, y)² dx dy ,
    V_i = ∫ Ref(x, y) C_i(x, y) / σ(x, y)² dx dy ,

    C_i(x, y) = I(x, y) ⊗ B_i(x, y) .

    In choosing to solve the problem by least squares, we have implicitly approximated the images' Poisson statistics by Gaussian distributions with variance σ(x, y)², such that

    σ(x, y) = k √I(x, y) .

    We set the constant k by taking into account the gain of the detector (the ratio of photons detected to the analog-to-digital converter unit [ADU]). We also assume the noise in the reference image to be negligible. Note that the matrix M is just the scalar product of the set of vectors C_i, and the vector V is the scalar product of the vectors C_i with I. All we have to do now is to look for a suitable basis of functions to model the kernel. The functions must have finite sums, and must drop rapidly beyond a given distance (the size of an isolated star's image). To solve this problem, we start with a set of Gaussian functions, which we modify by multiplying with a polynomial. These basis functions allow us to model the kernel, even if its shape is extremely complicated. We adopt the following decomposition:

    Kernel(u, v) = Σ_n Σ_{d_nx} Σ_{d_ny} a_k e^(-(u² + v²)/(2σ_n²)) u^(d_nx) v^(d_ny) ,

    where 0 ≤ d_nx ≤ D_n, 0 ≤ d_ny + d_nx ≤ D_n, and D_n is the degree of the polynomial corresponding to the nth Gaussian component. There are a total of (D_n + 1)(D_n + 2)/2 terms for each value of n. The value of k is implicit in the values of the other indexes.

    In the notation of equation (2),

    B(u, v) ≡ e^(-(u² + v²)/(2σ_n²)) u^(d_nx) v^(d_ny) .

    In practice, it seems that three Gaussian components with associated polynomial degrees in the range of 2 to 6 can give subtracted images with residuals comparable to √2 × photon noise.
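    To make the linear formulation concrete, here is a minimal Python sketch of the fit. It follows the convention of equation (1), i.e. the reference is convolved with each basis function and the result is fitted to I with weights 1/σ(x, y)²; the helper names, the kernel half-size, and the use of FFT convolution are assumptions rather than the authors' implementation.

```python
# Sketch of the linear kernel fit: build Gaussian x polynomial basis images,
# form the weighted normal equations M a = V, and solve for the coefficients a_i.
# Assumptions: the reference is the image being convolved (eq. [1]); FFT
# convolution via scipy; sigma^2 is taken as image/gain (Poisson noise).
import numpy as np
from scipy.signal import fftconvolve

def gauss_poly_basis(half=10, widths=(1.0, 3.0, 9.0), degrees=(6, 4, 2)):
    """Kernel basis B_i(u, v) = exp(-(u^2+v^2)/2 sigma_n^2) u^dx v^dy."""
    u, v = np.meshgrid(np.arange(-half, half + 1.0), np.arange(-half, half + 1.0))
    basis = []
    for sigma, deg in zip(widths, degrees):
        g = np.exp(-(u**2 + v**2) / (2.0 * sigma**2))
        for dx in range(deg + 1):
            for dy in range(deg + 1 - dx):
                basis.append(g * u**dx * v**dy)
    return basis                      # sum over n of (D_n+1)(D_n+2)/2 terms

def fit_kernel(ref, img, basis, gain=1.0):
    """Solve M a = V for the kernel coefficients (constant kernel, no background)."""
    sigma2 = np.clip(img, 1.0, None) / gain       # Gaussian approx. of Poisson noise
    w = 1.0 / sigma2
    C = [fftconvolve(ref, B, mode="same") for B in basis]   # C_i = Ref (x) B_i
    M = np.array([[np.sum(Ci * Cj * w) for Cj in C] for Ci in C])
    V = np.array([np.sum(img * Ci * w) for Ci in C])
    a = np.linalg.solve(M, V)
    model = sum(ai * Ci for ai, Ci in zip(a, C))
    return a, img - model                          # coefficients and residual image
```

    With three Gaussians of widths 1, 3, and 9 pixels and polynomial degrees 6, 4, and 2 (the configuration quoted in § 4), this basis has 28 + 15 + 6 = 49 functions, so M is only a 49 × 49 matrix.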

    2.4. Differential Background Subtraction

    Another important issue is that the differential background variation between the frames can be fitted simultaneously with the kernel. In equation (1) we did not consider any background variations between the two frames; to do so, let us modify equation (1) in the following way:

    Ref(x, y) ⊗ Kernel(x, y) = I(x, y) + bg(x, y) .   (3)

    We use the following polynomial expression for bg(x, y):

    bg(x, y) = Σ_i Σ_j a_k x^i y^j ,

    where 0 ≤ i ≤ D_bg, 0 ≤ i + j ≤ D_bg, and D_bg is the degree of the polynomial used to model the differential background variation. The least-squares solution of equation (3) will lead to a matrix equation similar to the previous one, except that we must increase the number of vectors C_i; our definitions of the matrix M and vector V relative to the C_i remain the same as in § 2.3. We then have

    C_i(x, y) = x^j y^k                 for i = 0 ... n_bg - 1 ,
    C_i(x, y) = I(x, y) ⊗ B_i(x, y)     for i = n_bg ... n_bg + n ,

    where n_bg = (D_bg + 1)(D_bg + 2)/2 - 1 and n = Σ_j (D_j + 1)(D_j + 2)/2; note that for i ≥ n_bg, the C_i are identical to our previous results.
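    As a sketch of how the background terms enlarge the same linear system (the ordering of the vectors, the coordinate scaling, and the helper names are assumptions), the kernel basis here is the list of 2-D arrays produced by a routine like gauss_poly_basis in the previous sketch:

```python
# Sketch: append the polynomial background terms x^j y^k to the list of
# vectors C_i and solve the enlarged system, so kernel and differential
# background are fitted simultaneously (eq. [3]).
import numpy as np
from scipy.signal import fftconvolve

def fit_kernel_and_background(ref, img, basis, d_bg=1, gain=1.0):
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx] / max(ny, nx)              # scaled coordinates
    C = [xx**j * yy**k                                        # background vectors
         for j in range(d_bg + 1) for k in range(d_bg + 1 - j)]
    C += [fftconvolve(ref, B, mode="same") for B in basis]    # kernel vectors
    w = gain / np.clip(img, 1.0, None)                        # 1 / sigma^2
    M = np.array([[np.sum(Ci * Cj * w) for Cj in C] for Ci in C])
    V = np.array([np.sum(img * Ci * w) for Ci in C])
    a = np.linalg.solve(M, V)
    model = sum(ai * Ci for ai, Ci in zip(a, C))              # (Ref x Kernel) + bg
    return a, img - model
```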

    3. TAKING INTO ACCOUNT THE PSF VARIATIONS

    There are two ways to handle the problem of PSF variations. First, most of the time the field is so dense that a transformation kernel can be determined in small areas, small enough that we can ignore the PSF variation. Of course, this would not apply to less dense fields, such as those in a supernova search. This is the great advantage of a method that does not require any bright isolated stars to determine the kernel, but can be used on any portion of an image, provided that the signal-to-noise ratio is large enough to determine the kernel. Indeed, the more crowded the field, the easier it is to model variations of the PSF. A second possibility is to make an analytical model of the kernel variations. We take the kernel model

    Kernel(x, y, u, v) = Σ_n Σ_{d_nx} Σ_{d_ny} Σ_{d_x} Σ_{d_y} a_k x^(d_x) y^(d_y) e^(-(u² + v²)/(2σ_n²)) u^(d_nx) v^(d_ny) ,

    where 0 ≤ d_x ≤ D_k, 0 ≤ d_y + d_x ≤ D_k, and D_k is the degree of the polynomial transform that we use to fit the kernel variations. Provided that the kernel variations with x and y are small enough compared to the u, v variations, we can easily calculate new expressions for C_i:

    C_i(x, y) = x^j y^k                                   for i = 0 ... n_bg - 1 ,
    C_i(x, y) = [I(x, y) ⊗ B_k(u, v)] x^(d_x) y^(d_y)     for i = n_bg ... n_bg + n ,

    where the values of d_x, d_y, and k are implicit in the index i, and now n = Σ_j (D_j + 1)(D_j + 2)/2 × (D_k + 1)(D_k + 2)/2. Unfortunately, these equations do not guarantee the conservation of flux. Consequently, we need to add the condition that the sum of the kernel must be constant. To simplify the equation, we also normalize the functions B_i so that each sums to one. We can then rewrite the kernel decomposition as

    Kernel(x, y, u, v) = Σ_{k=0}^{n-1} a_k x^(d_x) y^(d_y) [B_i(u, v) - B_n(u, v)] + norm × B_n .

    We can calculate the norm (the sum of the kernel) by making a constant PSF fit in several small areas. The different values will then be averaged to get the constant norm.

    The solution of the system for the coefficients a_n is very similar to the previous case of a constant PSF. We do not bother to give all the details here.
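    The sketch below shows one schematic reading of this flux-conserving construction: every normalized basis function except the last enters as the difference B_i - B_n, multiplied by spatial monomials, while B_n carries the fixed total flux. The names and normalization choices are assumptions, not the authors' code.

```python
# Sketch of a flux-conserving, spatially varying kernel basis.
# Each basis function is normalized to unit sum; the differences B_i - B_n
# (which sum to zero) are free to vary with position, while B_n carries the
# overall normalization ("norm"), so the kernel sum stays constant.
import numpy as np
from scipy.signal import fftconvolve

def varying_kernel_vectors(ref, basis, d_k=2, norm=1.0):
    basis = [B / B.sum() for B in basis]                   # each B_i sums to one
    Bn = basis[-1]
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx] / max(ny, nx)
    conv_diff = [fftconvolve(ref, B - Bn, mode="same") for B in basis[:-1]]
    C = [cd * xx**dx * yy**dy                              # free terms
         for cd in conv_diff
         for dx in range(d_k + 1) for dy in range(d_k + 1 - dx)]
    fixed = norm * fftconvolve(ref, Bn, mode="same")       # constant-flux part
    return C, fixed
# The least-squares fit is then done on (img - fixed) against the vectors C,
# exactly as in the constant-kernel case.
```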

    4. APPLICATION OF THE METHOD TO OGLE DATA

    The OGLE team has kindly provided us with a stack of images of a field situated 2° from the Galactic Center, in order to experiment with our method. In these particular images the optimal kernel has a complicated shape, which would probably be very difficult to compute reliably with a simple Fourier division; we consider this field an excellent test of our method. The data were taken in drift-scan mode (TDI), so the form of the PSF can vary rapidly with row number on the CCD. We extracted a small (500 × 1000) subframe from the 2048 × 8192 original images. One of the images has quite outstanding seeing, and we took it as a reference. All frames were resampled to the reference grid using the method described above. To model the kernel, we took three Gaussian components with associated polynomials. We took σ = 1 pixel for the first Gaussian, and σ = 3 and σ = 9 for the two others. The degrees of the associated polynomials were 6, 4, and 2, respectively. We divided the subframe into 128 × 256 pixel regions. We applied our method to each of these regions, providing us with one subtracted image per region. We reconstructed the subtracted image of our whole subframe by mosaicking the subtracted images obtained for each region. In this set of 86 images, the seeing varies from 0".7 to 2".5, and some of the frames have elongated stellar images. We started by making an initial residual image using all unsaturated pixels. We then made a 3σ rejection of the pixel list, to get rid of the variables. We usually used four iterations of the method, to be completely unbiased by large-amplitude variables. We found that for all images, the final residual calculated from the subtracted image was very close to that expected from Poisson statistics. To illustrate this result, we plot in Figure 1 the initial images and the subtracted image for a small field containing a variable star at its center.

    The stellar images are sharply peaked on the reference, while they look quite fuzzy and asymmetric on the other image. This is confirmed by the shape of the best convolution kernel, which looks elongated and has a complicated shape. This example clearly illustrates the ability of our method to deal with any kernel shape. We can imagine that in this case, any Gaussian approximation of the kernel itself or of its Fourier transform would not be satisfactory. For illustrative purposes, we also normalized the subtracted image by the sum of the photon noise expected from the two images (see Fig. 2). Once this normalization is applied, we see that the larger deviations visible at the location of the bright stars disappear, suggesting that the subtraction errors correspond to Poisson noise. This is confirmed by calculating the reduced χ², for which we find χ²/ν = 1.05 (before doing this calculation, we removed a small area around the variable star at the center of the image). We also plot the histogram of the normalized deviations in Figure 2. This histogram is very close to a Gaussian with zero mean and unit variance [i.e., N(0, 1)]. We observe deviations significantly larger than the Poisson expectations only for very bright stars (about 5 to 10 times brighter than the brightest stars in the small field we present). We believe that these residuals are due to seeing variations (see § 6 for more details); the number of such bright stars in an image is very small.
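    A minimal sketch of this normalization and χ² check, assuming the gain is known and ignoring the smoothing of the reference noise by the kernel:

```python
# Sketch: normalize the subtracted image by the expected photon noise of both
# frames and compute the reduced chi-square of the pixels (gain assumed known).
import numpy as np

def normalized_residual(diff, ref, img, gain=1.0, n_params=0):
    sigma = np.sqrt((np.clip(ref, 0, None) + np.clip(img, 0, None)) / gain)
    norm = diff / sigma
    chi2_nu = np.sum(norm**2) / (norm.size - n_params)
    return norm, chi2_nu
```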

    To spot the variable stars, we created a deviation image by coadding the squares of the subtracted images. We normalized the deviation image by the pixels' standard deviations. We found many variables at very significant levels. Most of these seem to be bright giants with small amplitudes. Some of the variables appeared to be periodic; we found a few RR Lyraes and some eclipsing variables. We computed the flux variations for these stars by making simple aperture photometry.
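    A minimal sketch of the deviation image, assuming the subtracted images and their per-pixel noise maps have already been stacked into arrays:

```python
# Sketch of the variability-detection step: coadd the squared, noise-normalized
# subtracted images into a single "deviation image"; variables show up as peaks.
import numpy as np

def deviation_image(subtracted_stack, sigma_stack):
    """subtracted_stack, sigma_stack: arrays of shape (n_images, ny, nx)."""
    norm = subtracted_stack / sigma_stack       # per-pixel standard deviations
    return np.sum(norm**2, axis=0)              # large values flag variable stars
```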


    FIG. 1. Example of subtracted image. The two bottom figures of the panel are the original images. On the right is the reference image, and on the left is the image to be fitted by kernel convolution. The two upper figures show the best kernel solution on the right, and the subtracted image on the left. Note the complicated shape of the kernel.

    In Figures 3 and 4 we give an illustration of the result we have obtained.

    5. COMPUTING TIME

    One might think that a method that fits all the pixels in an image (even if the fit is linear) would be much more time consuming than conventional methods. But the actual cost of the calculations is much lighter than might appear at first glance. Most of the computing time is taken by the calculation of the matrix we define in § 2.3, an N² process (where N is the number of basis functions we used). The rest of the calculation is an N process. However, the matrix could be calculated once and then used to fit the kernel solution for all images. One problem with this approach is that we reject different pixels on each frame (due to new saturated pixels or variable stars), so consequently the matrix elements change. But in practice, we find that we reject no more than 1% of the total number of pixels, so all that we have to do is to calculate the matrix elements for the rejected pixels and subtract them from the original values. This process costs very little CPU, and once the original matrix has been built, the kernel solution can be fitted very quickly even if we use several clipping passes. The rest of the operation requires about the same computing time. By applying this method, we can process a 1024 × 1024 frame in about 1 minute with a 200 MHz PC; this could certainly be improved further by using better numerical algorithms for the solution of the linear system.
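    A sketch of this bookkeeping, assuming the weighted sums over pixels can simply be corrected for the small set of pixels rejected on each frame (whether M is cached across frames or rebuilt per frame is glossed over here):

```python
# Sketch of the matrix-update trick: build M and V once from all pixels, then,
# for each frame's newly rejected pixels, subtract only those pixels'
# contributions instead of rebuilding the whole N^2 matrix.
import numpy as np

def build_normal_equations(C, data, w):
    """C: list of basis images C_i; data: image to fit; w: 1/sigma^2 weights."""
    M = np.array([[np.sum(Ci * Cj * w) for Cj in C] for Ci in C])
    V = np.array([np.sum(data * Ci * w) for Ci in C])
    return M, V

def subtract_rejected(M, V, C, data, w, mask):
    """mask: boolean image, True where pixels are rejected for this frame."""
    Cm = np.array([Ci[mask] for Ci in C])        # basis values at rejected pixels
    wm, dm = w[mask], data[mask]
    M_frame = M - (Cm * wm) @ Cm.T               # remove their contribution to M
    V_frame = V - (Cm * wm) @ dm                 # ... and to V
    return M_frame, V_frame
```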

    6. SOURCES OF NOISE IN THE RESIDUAL IMAGE

    As discussed above, the variance of the residual image is approximately equal to the sum of the variances of the input images. If we created a reference image by coadding a large number of images with good seeing, we could remove the contribution of the noise in the reference; we would, of course, have to be careful about variability between the different reference frames.


    FIG. 2. Noise in the subtracted image. The image on the left is the subtracted image normalized by the Poisson deviations of both the reference and the image, for the small field presented in the previous figure. On the right we show the histogram of the pixels in this image. We superimposed on this histogram a Gaussian of variance 1 (dashed line). Note that the deviations due to the bright stars are no longer visible. The variable star at center clearly stands out at a very significant level. The dark pixel in the image is a cosmic ray.

    Upon inspection of the residual images, however, some showed significantly larger residuals than expected from Poisson statistics near the position of bright, but nonsaturated, stars (representing less than 1% of the stars visible on the frame). These showed the characteristic signature of centering errors, with equal positive and negative residuals even for stars that show no evidence of variability (i.e., the sum of all residuals within a few arcseconds is zero). We believe that this effect is produced by the turbulent atmosphere modulating our kernel on the scale of our subregions.

    Shao & Colavita (1992) quote the variance in the angle between two stars separated by θ as

    σ_d² ≈ 5.25 (θ/rad)^(2/3) (t/s)^(-1) ∫ C_n²(h) h^(2/3) V^(-1)(h) dh

    for the regime in which we are interested (their eq. [2]). In this equation, C_n is the structure function of the refractive index, h is the height above the telescope, V is the wind speed, and t is the integration time. They evaluate the integral using data from Roddier et al. (1990) for a night on Mauna Kea with seeing ≈ 0".5 to give

    σ_d ≈ 1.1 (θ/rad)^(1/3) (t/s)^(-1/2) arcsec .

    FIG. 3. Examples of bright giant light curves. The x-axes are days, the y-axes are percentage variation (with respect to the reference frame). Error bars are derived from the Poisson deviations associated with each image; we do not include the deviations associated with the reference image.


    FIG. 4. Periodic variables. The x-axes represent the phase, the y-axes are percentage variation (with respect to the reference frame).

    If we assume that the integral over the atmosphere scales with seeing in a way similar to the integral

    ∫ C_n²(h) dh ,

    which enters into the definition of the Fried parameter r_0, we may expect that this result will scale as (r_0 λ^(-6/5))^(-5/6) (a result that is independent of λ as a result of the wavelength dependence of r_0). In 1" seeing, therefore, we may expect

    σ_d ≈ 2 (θ/rad)^(1/3) (t/s)^(-1/2) arcsec .

    On the typical scale of our 128 × 256 pixel regions, and for 128 s exposures, this corresponds to an rms image motion of about 0".011. If we model the PSF as a Gaussian with width parameter a (a ≈ 0".424 for 1" FWHM images), this would produce a maximum residual of (σ_d/a) exp(-1/2), or 1.6%. This is on the same order as the residuals that we see in our frames.
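    For concreteness, a small numeric check of this estimate; the pixel scale used to convert 128 pixels into an angle (roughly 0.4 arcsec per pixel) is an assumption, while the other numbers are taken from the text:

```python
# Numeric check of the image-motion estimate: sigma_d ~ 2 (theta/rad)^(1/3)
# (t/s)^(-1/2) arcsec, and the peak residual (sigma_d / a) exp(-1/2) for a
# Gaussian PSF of width a.  The 0.4"/pixel scale is an assumed value.
import numpy as np

theta = 128 * 0.4 / 206265.0          # region short side in radians (~51")
t = 128.0                             # exposure time in seconds
sigma_d = 2.0 * theta**(1.0 / 3.0) / np.sqrt(t)   # arcsec; about 0.011"
a = 0.424                             # Gaussian width for 1" FWHM, in arcsec
residual = (sigma_d / a) * np.exp(-0.5)           # about 0.016, i.e. ~1.6%
print(f"sigma_d = {sigma_d:.4f} arcsec, peak residual = {residual:.1%}")
```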

    7. HARMONIC FITTING TO THE PERIODIC VARIABLES

    We expect the periodic variables' light curves to be well approximated by truncated Fourier series. We calculate the period using the Renson method (Renson 1978), and we fit Fourier series with different numbers of harmonics. The errors are calculated from the photon noise in each image. We do not include the noise associated with the reference image, because it only produces an error in the total magnitude, and to first order does not affect the variable part of the object's flux. We estimate each time the χ² per degree of freedom (χ²_d), and we look for the best χ² with the minimum number of harmonics.
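    As an illustration of this procedure (the Renson period search is not reproduced; the period is taken as given, and the weighted linear Fourier fit below is a sketch, not the authors' code):

```python
# Sketch of the truncated-Fourier-series fit: for a given period, fit an
# increasing number of harmonics by weighted linear least squares and keep
# the smallest number of harmonics that gives the best chi^2 per dof.
import numpy as np

def fourier_design(t, period, n_harm):
    phase = 2.0 * np.pi * t / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(k * phase), np.sin(k * phase)]
    return np.column_stack(cols)

def fit_harmonics(t, flux, err, period, max_harm=6):
    """t, flux, err: 1-D numpy arrays of epochs, fluxes, and their errors."""
    best = None
    for n_harm in range(1, max_harm + 1):
        A = fourier_design(t, period, n_harm) / err[:, None]
        coeffs, *_ = np.linalg.lstsq(A, flux / err, rcond=None)
        resid = flux / err - A @ coeffs
        dof = len(t) - (2 * n_harm + 1)
        chi2_d = np.sum(resid**2) / dof
        if best is None or chi2_d < best[0] - 1e-3:   # keep the fewest harmonics
            best = (chi2_d, n_harm, coeffs)
    return best    # (chi^2 per dof, number of harmonics, Fourier coefficients)
```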


    TABLE 1
    HARMONICS FITTING TO THE PERIODIC VARIABLES

    Variable    χ²_d    Mean Residual (%)    Δmag
    P1 ......   2.01          1.0          -1.117
    P2 ......   1.43          1.1          -0.4402
    P3 ......   1.55          1.6           0
    P4 ......   1.46          1.2           0
    P5 ......   1.17          1.3           0
    P6 ......   1.1           0.6          -0.4348

    NOTE. In the third column we give the value of the mean residual to the fit. It is useful to compare this residual to possible flat-fielding errors. We also give an estimate of the star magnitude difference to the RR Lyrae, Δmag. We assume that the RR Lyrae all have the same mean magnitude. A crude estimate for the RR Lyrae mean magnitude in this field is I ≈ 17.

    The results are given in Table 1, where we see that the resulting value of χ²_d is close to unity for most variables. Except for variable P1, our mean error is at most only 25% larger than the Poisson expectation (i.e., χ²_d = 1.56); of course, this excess is significant. In the case of the variable P1, the χ²_d is very inconsistent with the Poisson expectation. This variable has about the same brightness as P6. We checked the quality of the subtracted images, but could not identify any defects. The quality of the image subtraction is as good for P1 as for P6, and they have about the same brightness; so what is wrong? Considering that the mean error is fairly small (about 1%), we might suspect some residual error due to flat-fielding. However, we get a mean residual of only 0.6% for P6 and χ²_d = 1.1, showing that the flat-fielding errors are much smaller than 0.6%. This is not surprising, because these images were taken in drift-scan mode, and consequently we average the sensitivity of many pixels. We conclude that there must be some intrinsic reason for P1's bad χ²_d. It is possible that variables do not repeat perfectly from cycle to cycle. This kind of variable star is well known to have spots that are likely to induce variability at the subpercent level. It is also possible that the RR Lyraes do not repeat perfectly; they are known to show the Blazhko effect, and we can explain some of the χ²_d as an effect of cycle-to-cycle variations. Although estimating the χ²_d of periodic variable stars is not an absolute test, we conclude that on average we are only about 20% above the Poisson error, and consequently there is not much to be gained from improving our method. However, we must note that the errors attributable to the reference frame are the same for the integrated flux of a star on each image only at the first order of approximation. By convolving the reference each time to fit the seeing variations, we slightly change the noise distribution around the star. Especially for the case in which a bright star is close to our object, convolving with the kernel might spread some noise into our photometric aperture.

    TABLE 2
    HARMONICS FITTING TO THE LIGHT CURVES OBTAINED WITH THE NEW REFERENCE

    Variable    χ²_d    Mean Residual (%)
    P1 ......   2.0           1.0
    P2 ......   1.16          1.0
    P3 ......   1.27          1.45
    P4 ......   1.45          1.2
    P5 ......   1.15          1.3
    P6 ......   1.03          0.6

    NOTE. In the third column we give the value of the mean residual to the fit. It is useful to compare this residual to possible flat-fielding errors. We also give an estimate of the star magnitude difference to the RR Lyrae, Δmag. We assume that the RR Lyrae all have the same mean magnitude. A crude estimate for the RR Lyrae mean magnitude in this field is I ≈ 17.

    This effect will be negligible for good seeing frames, but noticeable when the seeing is bad. An obvious solution is to construct a reference with as good a signal-to-noise ratio as possible by stacking the best seeing images (see § 8).

    Another approach with potential for improving the signal-to-noise ratio would be to use a matched filter to measure the variability of our stars. Unfortunately, simply applying the usual PSF filter leads to problems with aperture corrections, and we do not investigate this approach here.

    8. IMPROVING THE REFERENCE FRAME

    We averaged the 20 best seeing images to build a reference frame with an excellent signal-to-noise ratio. The resulting seeing is of course not as good as it was in our previous reference, which was the best image. But the seeing variations are much reduced, as is the noise amplitude. All the images were reprocessed using this new reference. We found that all the subtracted frames were improved. Even for the good seeing frames, where the seeing quality of the reference is critical, we found some improvements. The light curves of the variable stars were also improved; we give the results of harmonic fitting in Table 2.
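    A minimal sketch of building such a stacked reference, assuming the frames are already registered and their seeing estimates are known:

```python
# Sketch: stack the 20 best seeing (already registered) frames into a new
# reference with higher signal-to-noise; a plain average is used here, though
# a weighted or clipped mean would also be reasonable.
import numpy as np

def build_reference(frames, seeing_fwhm, n_best=20):
    order = np.argsort(seeing_fwhm)               # best seeing first
    best = np.array([frames[i] for i in order[:n_best]])
    return best.mean(axis=0)
```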

    We would like to thank the OGLE team for providing the CCD images we presented in our article. In particular, we would like to thank A. Udalski and M. Szymański for helping with the data. We are especially indebted to B. Paczyński for supporting our project, and for many interesting discussions. C. Alard would like to acknowledge support from NSF grant AST 95-30478 during his stay in Princeton, where most of the research was done. We thank M. Strauss for reading the draft carefully.

    REFERENCES

    Alard, C., & Guibert, J. 1997, A&A, 326, 1
    Alcock, C., et al. 1993, Nature, 365, 621
    Aubourg, E., et al. 1993, Nature, 365, 623
    Crotts, A., & Tomaney, A. 1996, ApJ, 473, L87
    Kochanski, G. P., Tyson, J. A., & Fischer, P. 1996, AJ, 111, 1444
    Phillips, A. C., & Davis, L. E. 1995, in ASP Conf. Ser. 77, Astronomical Data Analysis Software and Systems IV, ed. R. A. Shaw, H. E. Payne, & J. J. E. Hayes (San Francisco: ASP), 297
    Renson, P. 1978, A&A, 63, 125
    Roddier, F., Cowie, L., Graves, J. E., & Songaila, A. 1990, Proc. SPIE, 1236, 485
    Schechter, P., & Mateo, M. 1993, PASP, 105, 1342
    Shao, M., & Colavita, M. M. 1992, A&A, 262, 353
    Tomaney, A., & Crotts, A. 1996, AJ, 112, 2872 (TC)
    Udalski, A., et al. 1994, Acta Astron., 44, 165

