
Instructions for Using the Alpha-Release of the WFC3/UVIS Pixel-based CTE Correction

Jay Anderson

STScI

February 19, 2013

1. Abstract

The current version of the pixel-based CTE correction for WFC3/UVIS is available only as a stand-alone FORTRAN program. This routine is currently an “alpha” release, meaning that we reserve the right to make major changes to it. The routine does work reasonably well in the domain where it can: where the source is bright or where the background is moderate. Faint sources on low backgrounds will always be very difficult to correct for, as they tend to experience losses that are a large fraction of the initial counts. It will always be impossible to reconstruct something from nothing.

This routine is not yet part of the pipeline, and it will probably be several months before it is included. Also, at this point, there are no additional reference files (such as de-trailed darks) that can be used in conjunction with it.

2. Construction of the model

A comprehensive description of the construction of the model will eventually follow in a separate ISR, but it is worthwhile to provide a brief history here. The model is generally based on the pixel-based correction developed by Anderson & Bedin (2010) for ACS, which itself was an extension of the model constructed by Massey et al. (2010) for their reduction of the COSMOS data.

There are two aspects to the CTE model: (1) how charge is lost from one pixel and (2) how it is released into the upstream pixels. The basic model assumes that charge “traps” are distributed throughout the detector, and each trap is able to grab a particular electron out of the packets that pass through it (either the first, second, fifth, five-hundredth, etc.). Since it is impossible to know exactly what traps can be found in which pixels, we make the assumption that each of the 2070 pixels up the column has an identical spectrum of fractional traps. The current UVIS detector has, on average, about 500 traps in each column. Most of these traps will affect only large pixel packets, but unfortunately there are enough traps that grab the first few electrons to make CTE losses pathological when sources are faint on a low background. An iterative approach was used to construct the model.
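To make the trap bookkeeping concrete, here is a minimal sketch (in Python; the trap thresholds are invented for illustration and do not come from the calibrated model) of the assumption that each trap grabs one particular electron from any packet large enough to reach it:

```python
# Illustrative sketch only: the trap thresholds below are invented.
# Under the model's assumption, a trap that grabs the "n-th" electron captures
# one electron from any packet containing at least n electrons when the packet
# passes through that trap's pixel.
def electrons_captured(packet_size, trap_thresholds):
    """Number of a column's traps that a packet of this size can feed."""
    return sum(1 for n in trap_thresholds if packet_size >= n)

# Example: three traps that grab the 1st, 5th, and 500th electrons, respectively.
traps = [1, 5, 500]
print(electrons_captured(80, traps))    # 2: the 500-electron trap is never reached
print(electrons_captured(1000, traps))  # 3
```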


2.1 The initial model. The first iteration was similar to what was described in Anderson & Bedin (2010) for ACS. Warm pixels (WPs) serve as convenient delta-function sources for probing how charge is transferred down the detector. We examined the trails behind the few relatively bright WPs that UVIS has. The area under each trail told us the number of traps that impacted the WP during its journey (the losses), and the shape of the trail told us how charge gets released. This provided the initial version of the model.

2.2 Probing losses in the smallest packets. Unfortunately, it is very hard to measure the trails behind WPs that have less than 50 electrons. This is partly because the losses are large (and no longer perturbations) and partly because the trail gets lost in the noise.

In an effort to pin the model down for the smaller charge packets — where CTE losses are most critical — we took a set of short-dark and long-dark exposures. The short-dark exposures were 100s and the long-dark exposures were the standard 900s. To study the CTE losses from small packets, we first apply our initial CTE correction to the long-dark stacks. This gives us a pretty good idea of how much flux started out in each of the medium-bright WPs (say, those with 75 to 1,000 electrons). We can then multiply these images by 0.1111 (100/900, the ratio of exposure times) to get the expected number of counts in each short-dark exposure. We then compare the observed number of counts in the short dark with the expected number to get a direct picture of CTE losses as a function of the number of transfers each WP experienced. This allows us to study electron packets from 10 to 125 electrons in size.
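Schematically, the comparison amounts to scaling the de-trailed long-dark stack by the exposure-time ratio and differencing it against the observed short dark. The sketch below (Python, with zero-filled placeholder arrays standing in for the real image stacks) is only meant to illustrate that bookkeeping:

```python
import numpy as np

T_SHORT, T_LONG = 100.0, 900.0        # exposure times in seconds
scale = T_SHORT / T_LONG              # = 0.1111

# Placeholder arrays standing in for the real stacks (units: electrons).
long_dark_corrected = np.zeros((2070, 2048))   # CTE-corrected long-dark stack
short_dark_observed = np.zeros((2070, 2048))   # observed (uncorrected) short dark

# Expected short-dark counts in each pixel, and the implied CTE loss.
expected = long_dark_corrected * scale
loss = expected - short_dark_observed

# Restrict attention to medium-bright warm pixels (75 to 1000 e- in the long dark).
# The row index of each WP is the number of parallel transfers it experienced,
# so `loss` can be studied as a function of `transfers`.
wp_mask = (long_dark_corrected > 75) & (long_dark_corrected < 1000)
rows, cols = np.nonzero(wp_mask)
transfers = rows
```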

The above procedure allowed us to see how many electrons a charge packet loses on (essentially) zero background. We found the losses to be even greater than expected. A pixel packet that started out with 80 electrons at the top of the chip ended up with less than 30 by the time it reached the readout register; and a packet with 50 ended up with less than 10. We examined the trends for charge packets of different size and found that packets with between 1 and 10 electrons continued to lose more and more electrons as the packet got larger. But once packets reached the size of 12 e−, their losses appeared to be largely independent of packet size. This was true until they reached the size of 50 e− or so, at which point they slowly started losing more again.

These trends told us that there aren’t many traps that grab electrons #13 through #50. Thus, if an image has a background of about 12 electrons, then perhaps that could keep many of the traps filled and significantly mitigate the CTE losses.

2.3 The efficacy of some background. To test this optimistic hypothesis, we took a series of observations of the center of the globular cluster Omega Centauri, with pairs of short and deep exposures (10s and 700s) through F336W. This is not unlike what was done above to study WPs in the dark exposures. The reason for this target/filter choice was that the field contains a nice, flat distribution of stars. In the short exposures, there are not many stars brighter than S/N ~ 50, but below this (which is the location of the turnoff) there is a relatively flat distribution with magnitude. These image pairs allowed us to assess losses directly by comparing the observed counts in the short exposures against the predictions from the scaled-down deep exposures. We also varied the background using the post-flash option.


Figure 1: This figure shows the observed trends that were used to constrain the model, and the model itself. Each of the seven panels corresponds to a set of WPs that the long darks tell us should contain 10, 20, … 80 electrons in the short darks. Each WP is then observed after a different number of transfers (250, 750, 1250, and 1750), and the total number of electrons (background + WP) is shown. The black trends show the behavior on very low background, the green on a background of ~2, and the blue on a background of 12. The data were taken in September 2012.

This gave us a direct assessment of how various background levels shield sources from CTE losses. This study confirmed the efficacy of a 12-electron background (see Anderson et al. 2013).

Finally, we took a series of short and long darks with various backgrounds to help us pin down the model in the context of background mitigation. Again, we constructed a model of the dark current in each pixel from the de-trailed long-dark exposures (using our initial readout model). Then we examined how the various WPs lost flux as a function of the number of transfers and the background level. Figure 1 shows the WP trends that were observed in the short-dark exposures taken with three different levels of background (0 e−, 2 e−, and 12 e−). We actually fit the model to 10 different levels of background, but for clarity we show only three here.

The black curves in the panels from left to right show what happens to WPs that start out with 10, 20, 30, 40, 50, 60, and 80 electrons on a detector with no background. Losses are 70% or greater for packets that are transferred all the way down the chip, even for WPs that start with 80 electrons. The green curves show the results for backgrounds of about 2 electrons. Losses are still large, but are down by perhaps a factor of two from the pathological zero-background case. The blue curves show the losses for WPs on a background of 12. Losses are less than 20% for all WPs.

The model, indicated by the dashed curves, does a nice job describing most of these trends. At the very lowest end (a small WP on a low background), the model over-predicts the losses. The model has only a single monotonic one-dimensional function (the number of traps encountered per 2048 transfers as a function of packet size) with which to describe this two-dimensional distribution of losses versus WP intensity and background. The errors at the very low end indicate some inadequacy in the model algorithm, but in fairness, it is very hard to reconstruct sources in this regime anyway.

2.4 The current UVIS model. The current UVIS model algorithm is somewhat different from the algorithm that is operating in the ACS pipeline. Whereas the ACS model allowed traps to affect fractional pixel levels and worked on real-number pixel arrays, the new model explicitly deals only with integer numbers of electrons. The new model is also specified in a somewhat simpler manner than the ACS model. Whereas the ACS model specified the “trap density” at various electron levels, the current model is simply specified by the cumulative number of traps as a function of packet size in electrons. We show the cumulative trap distribution in Figure 2, below.
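As an illustration of how a model specified this way might be evaluated (the tabulated numbers below are placeholders, not the calibrated curve plotted in Figure 2):

```python
import numpy as np

# Placeholder tabulation of the cumulative trap curve: packet size in electrons
# versus the cumulative number of accessible traps per 2048 parallel transfers.
# These values are invented; the real curve is the one shown in Figure 2.
packet_size_e    = np.array([1,   2,   5,   10,  12,  50,   100,  1000, 10000])
cumulative_traps = np.array([1.0, 2.0, 5.0, 8.5, 9.0, 10.0, 14.0, 70.0, 350.0])

def traps_encountered(packet_e, n_transfers):
    """Traps a packet of packet_e electrons meets over n_transfers parallel
    shifts, assuming every pixel up the column carries the same trap spectrum."""
    per_full_column = np.interp(packet_e, packet_size_e, cumulative_traps)
    return per_full_column * (n_transfers / 2048.0)

# Example: an 80 e- packet read out from row 1750.
print(traps_encountered(80, 1750))
```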

Figure 2: From the current model: the cumulative number of accessible traps per 2048 pixels-up-the-column, as a function of packet size. The left plot shows the small-packet region with linear scaling, and the right plot shows the full range of packet sizes with log-log scaling. The marginal losses are essentially the slope of the curve.


It turns out that the trail profile for WFC3/UVIS does not appear to be a perceptible function of packet size. We found that 20% of the trapped electrons are released after the first transfer, 8.5% in the second transfer, 6.75% in the third, about 1.5% in the tenth, and 0.1% in the fiftieth, and the trail goes to zero after about 60 pixels. There are not a lot of bright warm pixels in the UVIS detector, so it is not trivial to follow the trail out to large distances.
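A sketch of the release profile implied by those numbers (linear interpolation between the quoted points is an assumption made here for illustration; the calibrated profile lives inside the program):

```python
import numpy as np

# Release fractions quoted in the text at a few trail positions.
quoted_pixel    = np.array([1,    2,     3,      10,    50,    60])
quoted_fraction = np.array([0.20, 0.085, 0.0675, 0.015, 0.001, 0.0])

def release_profile(n_pixels=60):
    """Fraction of the trapped charge released at each trail pixel (1-indexed),
    assumed independent of packet size as described above."""
    x = np.arange(1, n_pixels + 1)
    return np.interp(x, quoted_pixel, quoted_fraction)

trail = release_profile()
print(trail[:3])   # ~ [0.20, 0.085, 0.0675]
```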

2.5 Dealing with “readout CRs”. It often happens that cosmic rays (CRs) strike the detector during readout. A full-chip readout takes about 90s, so an exposure that is 810s will have 10% of its CRs hit during readout. CRs that hit during readout do not undergo the same number of transfers as the electron packet they arrive at the readout register with; they undergo fewer. For instance, if a CR hits the detector in pixel [200,200] at the time when pixel [200,2000] is passing through it, then the CR-added electrons will undergo one tenth the implied number of parallel transfers. It is clear that if we treat all electrons as having undergone the number of transfers implied by their vertical pixel location, then we will overestimate the amount of trailing suffered by these CRs, and hence will over-subtract their trails.
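Put another way, the fraction of the assumed transfers that such CR electrons actually experience is simply the physical row of the hit divided by the apparent row in the read-out image, as the small sketch below illustrates:

```python
def transfer_fraction(cr_physical_row, apparent_image_row):
    """Fraction of the assumed parallel transfers that CR electrons deposited
    during readout actually experience: they only have cr_physical_row shifts
    left to make, but the correction assumes apparent_image_row of them."""
    return cr_physical_row / apparent_image_row

# The example from the text: a CR hits physical row 200 while the packet from
# image row 2000 is passing through, so its charge sees one tenth the transfers.
print(transfer_fraction(200, 2000))   # 0.1
```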

Like the ACS model, the current UVIS model also searches for over-subtracted trails. We define an over-subtracted trail as any of the following: a single pixel value below −10 e−, two consecutive pixels totaling below −12 e−, or three consecutive pixels totaling below −15 e−. When we detect such an over-subtracted trail, we iteratively reduce the local CTE scaling by 25% until the trail is no longer negative. This does not identify all readout CRs, but it does deal with many of them. For images that have backgrounds greater than 10 or so, this will still end up over-subtracting CRs a bit, since we allow their trails to be subtracted down to −10, rather than to 0. It would be possible to have the algorithm use the background sky value, rather than zero, as the baseline below which it looks for over-subtracted trails.
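In outline, the check and the back-off might look like the following sketch (Python; `correct_trail` is a stand-in for the routine's internal readout-model correction evaluated at a given local scaling, not an actual function in the program):

```python
import numpy as np

def trail_oversubtracted(trail):
    """Criteria from the text: one pixel below -10 e-, two consecutive pixels
    totaling below -12 e-, or three consecutive pixels totaling below -15 e-."""
    t = np.asarray(trail, dtype=float)
    if np.any(t < -10.0):
        return True
    if t.size >= 2 and np.any(t[:-1] + t[1:] < -12.0):
        return True
    if t.size >= 3 and np.any(t[:-2] + t[1:-1] + t[2:] < -15.0):
        return True
    return False

def descale_until_ok(pixels, correct_trail, max_iter=40):
    """Back off the local CTE scaling by 25% per iteration until the corrected
    trail is no longer flagged as over-subtracted."""
    scale = 1.0
    corrected = correct_trail(pixels, scale)
    for _ in range(max_iter):
        if not trail_oversubtracted(corrected):
            break
        scale *= 0.75
        corrected = correct_trail(pixels, scale)
    return corrected, scale
```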

2.6 CTE reconstruction with very low backgrounds. The fact that many of UVIS’s observations have been taken with low background makes CTE reconstruction particularly challenging. This is a regime that ACS has not had to face, since even “bias” exposures with no integration time collect ~5 electrons of dark current in their top-row pixels by the time those pixels complete their 2048 parallel transfers. UVIS pixels currently get about 0.5 electron of dark current during readout; they got even less in the past.

We mentioned above that a source with only a few electrons loses almost all of them during the 2000 parallel transfers down the detector. This makes it very hard to reconstruct faint sources on low backgrounds, as there is not much evidence in the read-out image that there was anything there. This is particularly true when we fold in the contribution of the 3 e− readnoise. A source that loses so many electrons that it cannot stand out above the readnoise will be impossible to reconstruct. A corollary of this is that if we ignore the fact that our observed images have readnoise and try to determine what original image could get pushed through the read-out model to produce the observed pixel distribution, we will end up with an image that is perhaps 10× noisier than the original observation¹. It is clear that readnoise mitigation will be even more important for UVIS than it was for ACS.

A further complication with low backgrounds is that the exact number of electrons in the background can make a big difference in the CTE losses. A 25 e− WP on no background will lose more than 20 electrons (80%), while the same WP on a background of 3 will lose perhaps 10 electrons (40%). It is therefore critical to estimate the background accurately so that sources can be reconstructed as accurately as possible. It is clear that this estimate of the background must be considerably more precise than the readnoise (±3 electrons) allows us to know the number of electrons in any given pixel.

For all the above reasons, the new model includes an improved readnoise-mitigation algorithm. In brief, the goal of the algorithm is to identify the smoothest possible image that is consistent with being the observed image plus readnoise. The reconstruction algorithm acts on this smooth image to make a conservative estimate of how charge may have been transferred from one pixel to another in the real image during readout. The negative of this transfer is then added to the original image.
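One way to read that description is the high-level sketch below (Python; `smooth_within_readnoise` and `forward_readout_model` are placeholders for the routine's internal pieces, and the decomposition is a paraphrase of the text rather than the actual implementation):

```python
def cte_correct(observed, smooth_within_readnoise, forward_readout_model):
    """Sketch of the correction step described above."""
    # 1. Smoothest image consistent with being the observed image plus readnoise.
    smooth = smooth_within_readnoise(observed)
    # 2. Conservative estimate of the charge transfer the readout imposed:
    #    the smooth image pushed through the readout model, minus the smooth
    #    image itself.
    transfer = forward_readout_model(smooth) - smooth
    # 3. Add the negative of that transfer to the original (unsmoothed) image.
    return observed - transfer
```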

3. Download

The routine can be downloaded by visiting the following website:

http://www.stsci.edu/~jayander/X/EXPORT_WFC3UV_CTE

There are two FORTRAN programs available for download from this directory. Instructions on compiling the programs and on their parameters are given in the README file and in comments at the top of the files. We will just give a very brief description here.

The first program is the correction routine itself, named wfc3uv_ctereverse. It takes a _raw.fits file and generates what is being called a _rac.fits file. This output file is as similar as possible to the original raw file, except that it is real*4 instead of unsigned integer*2, and has had its electrons re-arranged in accordance with a model of how charge likely got redistributed during readout. These rac files should be able to be run through the CALWFC3 pipeline as if they were normal raw files. The routine also has the option (use “FLC+”) of taking a raw file and an flt file. It will determine from the raw file how the electrons need to be redistributed and will apply this to the flt image, producing an flc file, which can be used for drizzle or other standard exposure-level image analysis. The routine has several flags, and the comments at the beginning of the file should help you use it. It is worth noting that the routine cannot operate directly on the flt file, since that file may have had post-flash electrons subtracted, and the reconstruction routine needs to know about them, since they lessen the CTE blurring. Depending on the background, it can take 15 minutes to an hour to construct the correction for a raw image.

¹ It is easy to see where this comes from. If an observed image is full of empty pixels and has a single pixel, far from the readout, containing one electron, then in order to read out such a distribution we would need to start with (say) 10 electrons, 9 of which would get lost along the way. This would allow us to read out a value of “1” for the target pixel, but the model would also have to account for the fact that (say) 2 electrons would have been added to the first upstream pixel, and single electrons to seven other upstream pixels. The original image would have to look something like (0 +10 −2 −1 −1 −1 −1 −1 −1 −1) to be read out as (0 1 0 0 0 0 0 0 0 0). Generalizing this to an image full of readnoise, where each pixel can vary by ±3, it is clear that the de-blurred result will be very noisy.

The second program is named wfc3uv_forward, and it applies the forward CTE modeling. It takes a file in what I call my “z” format (8412×2070, real*4, with each 2103×2070 amplifier arranged with its parallel readout direction down and its serial readout direction to the left) as an original distribution of pixels. The routine then simulates the image that would result if this distribution were read out (with or without readnoise added afterward). Instead of starting with a pre-readout image, this routine is also able to take a list of sources (x, y, and total flux) and a sky background and simulate both the initial image and the image that would be read out. There are comments at the beginning of the program to provide more information on how to run it. We should note that Figure 1 shows us that the current model over-predicts losses when the background is below 5 electrons and the source has less than 20 electrons per pixel. As such, this forward model should not be used to predict in detail what happens to very faint sources on very low backgrounds. We are working on ways to adjust the model to accommodate this inadequacy. It is not simply a matter of modifying the parameters; rather, it will require some fine-tuning of the algorithm itself.
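For orientation, here is a sketch of assembling such a “z”-format array from four per-amplifier sub-images (Python; the orientation handling is an assumption based on the description above, so check the comments in the program itself before relying on it):

```python
import numpy as np

NX_AMP, NY = 2103, 2070          # per-amplifier width and height; 4 * 2103 = 8412

def to_z_format(amp_images):
    """amp_images: four (NY, NX_AMP) arrays, each already oriented so that its
    parallel readout direction is down and its serial readout direction is to
    the left, as described in the text.  Returns a 2070 x 8412 real*4 array."""
    z = np.empty((NY, 4 * NX_AMP), dtype=np.float32)
    for i, amp in enumerate(amp_images):
        z[:, i * NX_AMP:(i + 1) * NX_AMP] = np.asarray(amp, dtype=np.float32)
    return z

# Usage sketch with blank amplifier images:
amps = [np.zeros((NY, NX_AMP), dtype=np.float32) for _ in range(4)]
z_image = to_z_format(amps)      # shape (2070, 8412)
```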

4. How accurate are the corrections under various circumstances?

We have run the pixel-reconstruction algorithm on the short/deep pairs of Omega Cen images (from §2.3 above) and done aperture photometry and fit PSFs for positions. The results are shown in Figure 3 for photometry and Figure 4 for astrometry.


Figure 3: Results of simple 5×5-pixel aperture photometry on the short and long images of Omega Cen, for stars more than 1500 pixels from the readout register. We used the long exposures to predict the number of counts in the short exposures. The pairs of exposures were taken at the same pointing, so no aperture correction is needed to compare the numbers of counts. Each point represents a single star. The median trend is shown by the middle line, and the interquartile points are shown above and below. The top row of panels shows the losses in the uncorrected exposures. The magnitudes are given in instrumental units, −2.5 log10(number of electrons), such that −10 is S/N ~ 100 and −5 is S/N ~ 10. When the sky background is zero, losses go from 0.1 magnitude at −10 to more than 0.5 magnitude at −7. When the background is 12 electrons, losses are never more than 0.15 magnitude. The photometry on the corrected images is shown in the bottom panels.

Figure 3 shows that the pixel-based correction does a very nice job on images with some amount of sky, but when the sky goes to zero, the correction becomes increasingly inadequate as the source gets fainter. This is not surprising. In order to avoid readnoise amplification, we had to be conservative in terms of what could be a source and what could be noise. Even when the cores of faint sources could be distinguished from readnoise (a source with 100 total counts, instrumental magnitude of −5, should have about 20 counts in its central pixel, which is over 6 times the readnoise), the surrounding pixels would not necessarily be identified as having significant flux, and as such would not have the full correction applied to them.
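For reference, the instrumental-magnitude scale quoted in the figure caption can be checked with a couple of lines (the S/N values assume pure Poisson statistics on the source counts):

```python
import math

def instrumental_mag(electrons):
    """Instrumental magnitude as defined in the Figure 3 caption."""
    return -2.5 * math.log10(electrons)

# Poisson-limited S/N for the two reference points quoted in the caption:
print(instrumental_mag(1e4), math.sqrt(1e4))   # -10.0, S/N ~ 100
print(instrumental_mag(1e2), math.sqrt(1e2))   # -5.0,  S/N ~ 10
```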

Figure 4 shows the same results, but for astrometry. Again, the correction is quite good. This gives us the hope that if the correction restores total flux and position for most stars, it might also do a good job preserving shape. This, however, remains to be tested.


Figure 4: Same as the previous figure, but for astrometry. Positions were fit with an empirical library PSF for F336W. Astrometry is clearly restored for most stars on most backgrounds.

5. Remaining issues

One issue that has come to light during our post-flash investigation is that there are quite a few pixels that appear to have several traps in them. We know this because the electrons that get added during the post-flash (or background electrons in a science exposure) get trapped and do not make it out of the pixel during the first transfer. For some reason, these pixels often appear to be paired with WPs below them, which makes it somewhat hard to interpret the trails behind WPs on images with background. The UVIS team is investigating this phenomenon.

Figure 5: This is a close-up of a post-flashed dark stack with a background of ~12 electrons. The left panel shows a part of the image near the readout register, and the right panel shows a part far from the readout register. Black corresponds to higher values. Clearly, the WPs are CTE-blurred on the right but not on the left. The white spots are pixels significantly below the background, with values of ~2 electrons; these also are sharp on the left and more blurred on the right.


The fact that these “holes” appear capable of holding several electrons hints that traps may not occur one at a time, randomly distributed across the detector, but may well be bunched up.

6. Anticipated improvements

The correction described here treats every pixel as having exactly the same distribution of traps. The ACS algorithm is a bit more sophisticated than this. Ogaz et al. (in prep.) studied the parallel overscan pixels for each ACS/WFC column to determine a rough scaling for how many traps each column had relative to the average.

We plan to do a similar exercise for WFC3/UVIS, but we can go even further. We can use the charge-injection procedure to add ~15,000 electrons to lines separated by 10, 17, or 25 pixels. Examining the trails behind these lines will allow us to estimate not only how many traps can be found in each column, but where along the column the traps can be found. Of course, with such a high injection level we are not able to probe exactly which electrons (first, hundredth, etc.) each trap impacts; we can only estimate the total number of traps. It might be possible to use the scan mode to move stars of different brightness across the detector at a variety of rows to estimate the loss in each column. The WFC3/UVIS team is exploring these options for specifying the CTE model more locally.

References

Anderson, J., et al. 2013, “The Efficacy of Post-Flash for Mitigating CTE Losses in WFC3/UVIS Images”, http://www.stsci.edu/hst/wfc3/ins_performance/CTE/ANDERSON_UVIS_POSTFLASH_EFFICACY.pdf

Anderson, J., & Bedin, L. R. 2010, PASP, 122, 1035, “An Empirical Pixel-Based Correction for Imperfect CTE. I. HST’s Advanced Camera for Surveys”

Massey, R., et al. 2010, MNRAS, 401, 371

MacKenty & Smith 2013, “CTE White Paper”, http://www.stsci.edu/hst/wfc3/ins_performance/CTE/CTE_White_Paper.pdf

Ogaz, S., et al., ISR in prep.