
Visual image quality assessment with sensor motion: effect of recording and presentation velocity

Piet Bijl
TNO Defense, Security, and Safety: Human Factors, P.O. Box 23, 3769 ZG Soesterberg, The Netherlands ([email protected])

Received 29 July 2009; revised 25 November 2009; accepted 26 November 2009; posted 30 November 2009 (Doc. ID 114996); published 12 January 2010

To assess the effect of motion on observer performance with an undersampled uncooled thermal imager, moving imagery from a static scene was recorded at nine different angular velocities ranging from 0 (static) to 1 pixel/frame by use of a tilted rotating mirror. The scene contained a thermal acuity test chart with triangular test patterns based on the triangle orientation discrimination test method. Visual acuity with the sensor was determined in two playback modes: normal speed and slow motion. In both playback conditions, a slow angular velocity of the test pattern over the sensor focal plane (up to 0.25 pixel/frame) results in a large acuity increase (+50%) in comparison with the static condition because the observer is able to utilize more phases of the same test pattern. At higher sensor velocities the benefit rapidly decreases due to sensor smear, and above 0.50 pixel/frame the difference with the static condition is marginal. Up to 0.75 pixel/frame, the results for the two playback conditions are similar, indicating that temporal display characteristics and human dynamic acuity are not responsible for the reduction. The results obtained with this laboratory test method correspond well with earlier perception studies on real targets for low and medium camera motion. © 2010 Optical Society of America

OCIS codes: 110.3000, 110.3080, 330.1070.

1. Introduction

It is a well-known fact that dynamic imaging of a static scene, for example, by slowly moving the camera, can drastically improve human performance at identifying objects with an undersampled sensor system [1–3]. This can be explained by the fact that different frames in such a series contain different information about the objects in the scene, and apparently the human observer is able to integrate these by using the different phases to construct a sharper image. In this way the human observer is able to obtain a 30–50% higher acuity than with a static image or, equivalently, to identify or recognize objects in the field at a 30–50% longer range [2,3]. The information present in a series of frames is also used in a family of dynamic signal-processing techniques such as dynamic superresolution reconstruction and scene-based nonuniformity correction; see, e.g., [4–6].

The maximum benefit of these algorithms depends on the amount of undersampling and noise generated in the imager [1,7,8]. Hence, they are most successfully applied to low-cost, highly undersampled and uncooled thermal sensor systems such as microbolometers. For these sensors, identification-range performance increases of up to 70% have been reported [3,7]. These empirical findings are quantitatively supported by predictions from models that simply assume a more densely sampled focal plane and optimal use of all phases, such as NVThermIP (Sensiac, Atlanta, Georgia) [9].

Until now, the effect of sensor motion on performance has not been a subject of systematic investigation. In most studies a velocity of approximately 0.25 pixel/frame with a frame rate of 50 or 60 Hz is used [1,2,10,11], and it is assumed that this is neither too slow nor too fast for both the human observer and the signal-processing algorithms to make use of the different phases. In one study [2], a cooled midwave infrared (MWIR) thermal imager (frame rate of 60 Hz) is moved with a diagonal shift of 0, 0.25, 0.50,

0003-6935/10/030343-07$15.00/0
© 2010 Optical Society of America

20 January 2010 / Vol. 49, No. 3 / APPLIED OPTICS 343


and approximately 1.0 pixel/frame. A velocity of 0.25 or 0.50 pixel/frame yields a 50% increase in comparison with the static condition, but the 1.0 pixel/frame condition results in a significantly lower performance. It was concluded that motion magnitude has little effect on the amount of performance improvement; the result for the 1 pixel/frame condition is explained by the fact that a shift of exactly 1 pixel/frame yields a new frame without any new information. The study does not mention other possible factors such as smear generated by sensor and display or reduced visual performance. In all the studies mentioned above, except for the study by Fanning et al. [11], a triangle pattern was used as the test object. The use of this test pattern from the triangle orientation discrimination (TOD) method [12] is explained in Section 2. Fanning et al. [11] measured identification performance for military targets in the field with an uncooled long-wave infrared (LWIR) microbolometer thermal imager. To study the performance under dynamic conditions and with a superresolution reconstruction algorithm, a circular motion with a velocity of approximately 0.17 pixel/frame was applied. The range improvement due to motion was much less than that reported in the other studies mentioned above (approximately 20%), but, according to the authors, this had to be ascribed to the considerable blur of the objective resulting in a camera that was only moderately undersampled. In a new study [13] with a more undersampled sensor, they report much larger range improvements with superresolution versus the static condition, but unfortunately they did not include the condition with motion.

A recent study on the identification of hand-held objects with an uncooled LWIR thermal imager that moves at a velocity of 0 and 0.57 pixel/frame by Beintema et al. [14] reports no performance improvement in motion at all. This is puzzling, given the results of the other studies. There are several possible causes: the improvement in performance is target specific, sensor performance with an uncooled thermal imager is more sensitive to sensor velocity than with a cooled sensor, or other system properties such as temporal or spatial display characteristics play a role.

To better understand the process I performed the following experiment. With the setup and thermal camera used by Beintema et al. [14] and Bijl et al. [3], TOD test patterns were recorded at a range of different sensor velocities. In a perception experiment, the video was presented to observers in two different ways: at normal playback speed (i.e., with the normal sensor frame rate) and in slow motion. In this way I was able to disentangle effects of the recording process (e.g., motion blur and smear due to integration in the sensor detector) from display effects (smear) and reduced human visual acuity for moving objects [15].

Section 2 provides a brief description of the TOD method and its applications. The experimental method is described in Section 3. The results are presented in Section 4 and discussed in Section 5.

2. Triangle Orientation Discrimination Method

The TOD test was developed by Bijl and Valeton [12] as a method to assess the quality of imaging systems with the human-in-the-loop. Basically, a human observer judges the orientation of equilateral triangle test patterns of various sizes and contrasts on a uniform background using the sensor system under test. The test patterns can have four possible orientations: apex up, down, left, or right (see Fig. 1). The probability of a correct judgment is a function of test pattern size and contrast and increases with stimulus strength from chance (25%) up to 100% according to an s-shaped curve called the psychometric function. The threshold is defined at the 75% correct level and is obtained by fitting a Weibull function through the data. Image degradations induced by the sensor (such as blur, noise, and sampling) make the judgments more difficult and shift the 75% correct thresholds toward a larger test pattern size S or to a higher contrast. In such a way, the method evaluates the combined effect of all the image degradations within the sensor system including the observer. The method yields a threshold curve of contrast versus reciprocal size S−1 (in mrad−1) and has been shown to have a close relationship to real object recognition performance [16–18]. Using reciprocal angular size is convenient for several reasons: (1) a higher value means higher acuity or better performance, (2) range performance is proportional to this value, and (3) the effects of atmospheric losses on field performance are easily included in the TOD plots. More details on the test procedure are provided by Bijl and Valeton [19].
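The threshold extraction described above can be sketched in a few lines. The sketch below is mine, not the study's actual analysis code: the grid-search maximum-likelihood fit and the parameter ranges are illustrative assumptions. It fits a Weibull psychometric function rising from the 25% chance level (four-alternative forced choice) to 100% and reads off the test pattern size at 75% correct.

```python
import math

GUESS = 0.25  # four-alternative forced choice: chance level

def p_correct(size, alpha, beta):
    """Weibull psychometric function rising from chance (25%) to 100%."""
    return GUESS + (1.0 - GUESS) * (1.0 - math.exp(-(size / alpha) ** beta))

def neg_log_likelihood(data, alpha, beta):
    """data: list of (test pattern size in mrad, n_trials, n_correct)."""
    nll = 0.0
    for size, n, k in data:
        p = min(max(p_correct(size, alpha, beta), 1e-9), 1.0 - 1e-9)
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

def fit_threshold(data):
    """Crude grid-search ML fit; returns the size at 75% correct."""
    best = None
    for alpha in (a / 100.0 for a in range(50, 400)):
        for beta in (1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0):
            nll = neg_log_likelihood(data, alpha, beta)
            if best is None or nll < best[0]:
                best = (nll, alpha, beta)
    _, alpha, beta = best
    # P = 0.75 where 1 - exp(-(s/alpha)**beta) = 2/3, i.e. s = alpha * ln(3)**(1/beta)
    return alpha * math.log(3.0) ** (1.0 / beta)
```

In practice one would use a proper optimizer and bootstrap the standard error; the grid search only keeps the sketch dependency-free.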

The TOD method has a wide application area. It can be applied to well-sampled and undersampled systems. Test equipment has been developed to characterize sensors from different spectral ranges such as visual, thermal, and x-ray [20,21], and effects of motion and image enhancement techniques have been quantified [1–3,7]. In addition to the TOD test method, a vision model was developed to replace the human observer and to enable automated measurement and automatic characterization of image enhancement methods [22,23]. The method has also been applied to quantify the effects of image enhancement on automated systems and has been compared with mean-square error (MSE) methods [24].

Fig. 1. Test pattern or stimulus in the TOD method is an equilateral triangle with one of four possible orientations: apex up, down, left, or right. The observer must indicate its orientation. Task difficulty depends on test pattern size and contrast. See Bijl and Valeton [12].


3. Methods

The setup is similar to the one described by Bijl et al. [3] and Beintema et al. [14].

A FLIR SC2000 undersampled, uncooled microbolometer sensor with a focal plane array of 320 by 240 pixels [see Fig. 2(a)] was used. The camera field of view (FOV) is 24 by 18 deg. The camera gives a calibrated output of the temperatures in the scene and is regularly calibrated according to the TNO laboratory quality program. Digital output data (14 bits) is recorded on a computer at a frame rate of 50 Hz. The detector time constant is 12 ms.

A surface mirror was placed in front of the camera objective at an angle of approximately 45 deg [see Fig. 2(a)] so that an image was obtained with the test pattern in the center (see below). The mirror was mounted on the axis of an electric motor and slightly tilted with respect to the rotation axis to produce a circular motion of the image. Diameter and speed are set by changing the tilt angle and rotation frequency. A circular motion is convenient and has several advantages over a translation: the magnitude of the velocity is constant and the target remains within a limited area of the sensor FOV.

Thermal test patterns were generated with the thermal camera acuity tester (TCAT) [see [19] and Fig. 2(b)]. The test plate consists of five lines with four thermal triangle test patterns of arbitrary orientation on each line. The test patterns at the top line are the largest (a triangle base of 2 cm or a triangle square-root area of S = 1.32 cm), and with each next line the test pattern size decreases by a factor of 2^(1/4) (the fourth root of 2), so that the test pattern size decreases by a factor of 2 over the plate. The test plate was placed in the apparatus in four different orientations, which enhances the number of possible test pattern presentations and makes learning by heart more difficult. The TCAT was placed at D = 4.86 m from the sensor. At this distance, the maximum test pattern size is S = 2.7 mrad (S−1 = 0.37 mrad−1), and the minimum test pattern size is S = 1.35 mrad (S−1 = 0.74 mrad−1); the thermal contrast was ΔT = 1.93 K, which was determined by use of a thermal imager. In this region camera acuity is largely insensitive to thermal contrast because it is high in comparison with the camera noise (see also [3]). Note that the measured contrast includes possible losses due to the surface mirror.
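The chart geometry fixes the angular sizes quoted above; a quick check using the plate dimensions from the text (a sketch of the arithmetic, not part of the original apparatus software):

```python
D = 4.86             # distance from TCAT to sensor, in metres
S_MAX_CM = 1.32      # square-root area of the largest triangles, in cm
STEP = 2 ** 0.25     # size ratio between successive lines (fourth root of 2)

# Five lines on the plate, shrinking by STEP per line (factor 2 overall).
sizes_cm = [S_MAX_CM / STEP ** i for i in range(5)]
sizes_mrad = [s / 100.0 / D * 1000.0 for s in sizes_cm]  # small-angle approximation
recip = [1.0 / s for s in sizes_mrad]                    # acuity axis, in mrad^-1
```

This reproduces the maximum of about 2.7 mrad (S−1 ≈ 0.37 mrad−1) and the minimum of about 1.35 mrad (S−1 ≈ 0.74 mrad−1).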

Both static (50 frames) and dynamic recordings (250 frames) were made. For the dynamic recordings, the radius of the circular motion was approximately 4 pixels or 5.2 mrad (this is slightly different from the experiments by Bijl et al. [3] and Beintema et al. [14]), and the rotation frequency was varied in such a way that the velocity of the TCAT over the sensor focal plane array was 0, 0.125, 0.25, 0.375, 0.50, 0.625, 0.75, 0.875, and 1.0 pixel/frame. Example recordings are shown in Fig. 3. Figure 3(a) shows a static recording of the TCAT test apparatus; Fig. 3(b) shows the same scene but now recorded at a speed of 0.75 pixel/frame. The latter image clearly shows a smear behind the triangle test patterns, and even the top triangles are difficult to judge.
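For a circular motion of radius r pixels, one mirror revolution moves the image over a circumference of 2πr pixels, so the rotation frequency needed for a given image velocity follows directly. The sketch below shows this relation under the setup parameters given in the text; the actual motor control is not described in the paper and this is only my reading of the geometry.

```python
import math

R_PIX = 4.0        # radius of the circular image motion, in pixels
FRAME_RATE = 50.0  # sensor frame rate, in Hz

def rotation_freq_hz(v_pix_per_frame):
    """Mirror rotation frequency yielding the requested image velocity.

    One revolution displaces the image over 2*pi*R_PIX pixels, so
    frequency = (pixels per second) / (pixels per revolution)."""
    return v_pix_per_frame * FRAME_RATE / (2.0 * math.pi * R_PIX)

velocities = [0.125 * i for i in range(9)]   # 0 .. 1.0 pixel/frame
freqs = [rotation_freq_hz(v) for v in velocities]
```

For the commonly used 0.25 pixel/frame this gives a rotation of roughly 0.5 Hz.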

The observer experiments were carried out in a dimly lit room. The image sequences were presented on a 17 in. (43 cm) IIYama Prolite LCD display. Black-to-black transition time of the display is 16 ms. The image size was magnified two times using bilinear interpolation, and the display was set at a resolution of 1280 by 1024 pixels to ensure that the pattern size on the display is not a primary limiting factor. In [3] it was shown that the effect of image size on the display on performance was negligible under these conditions. Image size on the display was 16.8 cm by 12.6 cm. To obtain a good test pattern contrast on the display, the 8 bit gray values shown on the display were assigned to a linear temperature range between 19.0 °C (approximately 25% below the TCAT background temperature) and 23.1 °C (25% above the test pattern temperature). In addition, display contrast and brightness were optimized in advance by the experimenter such that a zero gray level was just visible and a maximum gray level was just not saturated. The observers were not allowed to touch the display controls, but they were

Fig. 2. (Color online) (a) Camera and the rotating tilted mirror used to generate a dynamic image and (b) the TCAT used to generate the thermal test patterns.

Fig. 3. Examples of the triangle test pattern images: (a) static image and (b) the same scene but now recorded at a speed of 0.75 pixel/frame. The image in (b) clearly shows a smear behind the triangle test patterns, and even the top triangles are difficult to judge.


free to choose the optimum distance from the display (most of the time the distance was approximately 50 cm). The dynamic image sequences were played back in two ways: (1) with the normal playback speed (sequence frame rate identical to the recorded frame rate) and (2) in slow motion. For all the sequences, the playback speed of the test patterns was equal to that of the sequence recorded at 0.125 pixel/frame. This was achieved by adding one to seven identical frames to the recordings at 0.25–1 pixel/frame behind each original frame. To divide learning effects equally over the different conditions, the image sequences were divided among four blocks. Each block contained the same set of sequences except that the test plate orientations of the TCAT were different (these were also randomly divided among the blocks). The blocks were presented in different order to the observers according to a 4 by 4 Latin square design [25], and finally the order inside the blocks was shuffled. For each image sequence, the observers had to indicate the orientation of all 20 triangles on the test chart.

With normal playback there are nine conditions that correspond to the recording velocities reported above. The reduced-speed playback adds seven conditions: the static condition cannot be played back at 0.125 pixel/frame, whereas the 0.125 pixel/frame condition is trivially identical to the corresponding normal playback condition. Eighty responses (five test pattern sizes with 16 targets per size) were collected for each condition and for each observer. A Weibull function was fitted through these data using a maximum-likelihood procedure, resulting in a 75% correct threshold triangle size (in milliradians) and standard error [19]. Finally, a weighted average was calculated across observers. The maximum of the internal error (due to the uncertainty in the individual threshold estimates) and the external error (due to the differences in observers) was used as the error in the resulting thresholds.

Four observers between 30 and 47 years old participated in the experiment. All the observers had normal or corrected-to-normal vision. The total experiment took approximately 2 h per observer (three sessions).
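The slow-motion sequences were built by frame replication, as described above; a minimal sketch of that step (the function name and the list-of-frames representation are my own, not from the study):

```python
BASE_SPEED = 0.125  # slowest recorded velocity, in pixel/frame

def replicate(frames, v_pix_per_frame):
    """Slow-motion playback: repeat each frame so the on-screen pattern
    velocity matches that of the 0.125 pixel/frame sequence."""
    factor = max(1, round(v_pix_per_frame / BASE_SPEED))  # 2..8 for 0.25..1.0
    out = []
    for f in frames:
        out.extend([f] * factor)
    return out
```

For the 0.25–1.0 pixel/frame recordings the replication factor is 2–8, i.e., one to seven added copies behind each original frame, matching the description in the text.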

4. Results

As an example, Fig. 4 shows the fraction of correct responses versus triangle test pattern size (in milliradians) for two conditions for observer AMB. Filled triangles represent the static condition; open triangles represent a dynamic condition with a recording velocity of 0.25 pixel/frame played back at normal speed. Maximum-likelihood fits of the Weibull function are indicated by the continuous curves. The corresponding 75% correct threshold sizes are S = 2.37 ± 0.15 mrad (S−1 = 0.42 ± 0.03) and S = 1.53 ± 0.08 mrad (S−1 = 0.65 ± 0.03).

Figure 5 shows the results for the individual observers, and Fig. 6 gives the weighted average over all observers. The reciprocal S−1 of the 75% correct threshold angular size (in mrad−1) is plotted as a function of the camera angular velocity (in pixels/frame). Filled circles represent the data for the condition that video is played back at normal speed. Open circles represent the data for the slow-motion condition. For example, the filled circles at 0 and 0.25 pixel/frame velocity in Fig. 5(a) (observer AMB) are thresholds obtained from the data shown in Fig. 4. The error bars in Fig. 5 represent the standard errors obtained with the maximum-likelihood fit procedure. In Fig. 6, the maximum of the internal and external standard errors is shown. Errors in the results for individual observers are typically of the order of 5–10%. Some thresholds are determined by extrapolation because they fall outside the range of available test pattern sizes in this experiment, and this rapidly reduces their accuracy. Errors in the weighted averages in Fig. 6 are between 3% and 7% except for two data points. The overall picture is similar for all observers although there are quantitative differences.
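The averaging across observers, with the larger of the internal and external error reported, can be sketched as follows. This is a standard inverse-variance construction, not code from the study; in particular, the exact external-error formula used by the author is not given in the text and the one below is a common convention.

```python
import math

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean across observers.

    Returns (mean, error) where the error is the larger of the internal
    error (propagated from the individual fit uncertainties) and the
    external error (from the scatter between observers)."""
    w = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(w)
    mean = sum(wi * x for wi, x in zip(w, values)) / wsum
    internal = math.sqrt(1.0 / wsum)
    n = len(values)
    external = math.sqrt(
        sum(wi * (x - mean) ** 2 for wi, x in zip(w, values)) / ((n - 1) * wsum)
    )
    return mean, max(internal, external)
```

When the observers agree well, the internal error dominates; when they scatter more than their individual uncertainties allow, the external error takes over.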

The results are as follows:

1. A slow movement of the image (up to 0.25 pixel/frame) greatly improves performance (+50% on average) in comparison with the static condition.

2. With higher speeds the benefit of motion rapidly decreases back to the static performance level.

3. Up to a recording velocity of 0.75 pixel/frame, the results for the two presentation conditions are similar, showing that human performance is largely insensitive to the playback speed and that temporal display characteristics are not responsible for the acuity decrease between 0.25 and 0.75 pixel/frame. The normal playback mode yields approximately 6% higher acuity values than slow-motion playback.

Fig. 4. Probability of correct responses versus test pattern size for two conditions: static (filled triangles) and with the scene moving over the sensor focal plane at 0.25 pixel/frame (open triangles); the observer was AMB. Maximum-likelihood fits are indicated by the continuous curves. A sensor motion of 0.25 pixel/frame yields a performance that is approximately 50% better (smaller triangle sizes) than with a static image.


No significant dependence of this difference on sensor velocity is found.

4. Above 0.75 pixel/frame, human performance further drops when the video is presented at original speed. When the sequences are played back in slow motion, the acuity reduction is less pronounced.

The results are discussed in Section 5.

5. Discussion

A. Dynamic Visual Acuity and Sensor Characteristics

A considerable improvement in visual acuity with an undersampled sensor system can be obtained by moving the scene over the focal plane [1–3,7,10]. We systematically investigated the amount of improvement as a function of the angular velocity of the scene over the focal plane array for an uncooled microbolometer thermal imager. In our study, a slow movement (0.125–0.25 pixel/frame) results in a 50% acuity increase, which is in good quantitative agreement with earlier studies performed with the TOD test pattern [1–3,7]. The increase can be ascribed to the ability of the human observer to use the extra information that is available from the different phases of the test patterns on the focal plane. At higher velocities, however, the benefit rapidly decreases, and, above 0.50 pixel/frame, it is only marginal or zero (see Fig. 6). This reduction can be attributed to the considerable smear that appears with uncooled thermal imagers. Similar experiments performed with a cooled system with minor smear [2] show no performance reduction at 0.50 pixel/frame. In our experiment, the sensor detectors act as a leaky integrator with a time constant of 12 ms. At a sensor angular velocity of 0.50 pixel/frame, this corresponds to 0.3 times the detector pitch, resulting in a considerable increase in the effective detector size. Thus

Fig. 5. TOD acuity (in mrad−1) as a function of the velocity of the sensor over the test patterns (in pixels/frame) for the four observers in the experiment: (a) observer AMB, (b) PB, (c) SL, (d) NL. Filled circles: video displayed at normal speed; open circles: video displayed in slow motion. The error bars represent the standard error obtained with the maximum-likelihood procedure.

Fig. 6. TOD acuity (in mrad−1) as a function of the velocity of the sensor over the test patterns (in pixels/frame). The weighted average over all the observers is shown. For an explanation of the symbols, see Fig. 5.


sensor smear effects are indeed of the order of the acuity reduction shown in Fig. 6. Current imager performance models do not include smear and do not predict a dependency on sensor velocity.
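The smear figure quoted above follows directly from the detector time constant and the frame rate given in Section 3; a back-of-the-envelope check (my arithmetic, using the stated sensor parameters):

```python
FRAME_RATE = 50.0  # sensor frame rate, in Hz
TAU = 0.012        # detector (leaky integrator) time constant, in s

def smear_pixels(v_pix_per_frame):
    """Image displacement during one detector time constant, in units of
    the detector pitch -- a rough measure of motion smear."""
    pixels_per_second = v_pix_per_frame * FRAME_RATE
    return pixels_per_second * TAU

# At 0.50 pixel/frame: 0.5 * 50 * 0.012 = 0.3 times the detector pitch,
# consistent with the value quoted in the discussion.
```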

B. Display and Human Visual System Characteristics

An additional condition in our experiments with slow-motion playback of the same image sequences showed that temporal display characteristics and human dynamic acuity are not responsible for the observed performance reduction with speed between 0.25 and 0.75 pixel/frame: in this region we find a small but negative effect of slow-motion playback on the acuity. Above a sensor angular velocity of 0.75 pixel/frame performance drops further for the normal playback speed, but with slow-motion playback the acuity reduction is much less pronounced. Significant effects of the display temporal characteristics were not expected. The display black-to-black transition time is 16 ms, which means that the response to a dark-to-bright or a bright-to-dark transition on the display is much faster than the response of the detectors, which act as a leaky integrator with a time constant of 12 ms.

The small negative effect of slow-motion playback in the range between 0.25 and 0.75 pixel/frame could be a result of the pixel replication, which could hinder the human visual system in effectively integrating information from subsequent frames. Above 0.75 pixel/frame and normal playback the observer task becomes complicated because of the speed at which the patterns are moved over the display, and the results might be related to dynamic visual acuity loss. A quantitative comparison with human dynamic visual acuity results from earlier studies, however, is difficult to make because these were all performed on linearly moving targets (mostly Landolt-C). In our configuration, the highest target velocity is around 2.5 deg/s. With linear motion, acuity loss at this speed is negligible [26–28].

C. Choice of the Test Pattern

In most human visual acuity studies, the Landolt-C optotype has been used as a test pattern. This optotype was developed to quantify reading performance with the human fovea, which is a well-sampled system. The TOD test was developed specifically for the characterization of sensor systems (which are often undersampled) with the human-in-the-loop, in particular, in relation to real object discrimination. The triangle test pattern is considered representative for a feature of a real object.

For the unaided eye the two tests yield essentially the same results: at threshold a simple spatial scale factor of 2.83 exists between the triangle square-root area and the Landolt-C gap size regardless of test pattern contrast [29]. As a rule of thumb, this corresponds to a triangle that just fits within the outer circle of the Landolt-C. As pointed out in [30], this rule does not hold in general. For undersampled imagers, discrimination between real objects covering only a few pixels depends on the phase between the object features and the focal plane. This applies to the triangle test pattern in a similar manner, resulting in a good correspondence between triangle and real object discrimination in all the comparison studies performed so far; see, e.g., [14,16–18]. With an imaged Landolt-C, however, the gap position relative to the ring varies with phase, but the gaps from the four orientations are relatively too far apart to lead to confusion. At the same time, the visibility of the Landolt-C gap is sensitive to the relative position. So with the Landolt-C the task is related to gap detection rather than discrimination. In addition, due to the sensitivity to phase, more trials are required for an accurate threshold estimate; over the years we found that testing an undersampled imager with TOD test patterns is more convenient for the observer than with Landolt-C rings. Finally, a minor practical advantage of the triangle is the ease of manufacturing thermal patterns.

D. Agreement with Other Studies

The results obtained with this experiment for the TOD test pattern quantitatively agree with the findings for the two hand-held targets [14] obtained at 0.57 pixel/frame. For the TOD test patterns, the predicted benefit of motion at this velocity relative to the static condition is 6% and not significantly different from 0 (see Fig. 6). The results at low angular velocities qualitatively agree with the results obtained for tank targets [11]: both experiments show an improvement in comparison with the static case, and the largest improvement is found for the most undersampled system. A quantitative agreement cannot be expected since the amount of undersampling for the sensors used in the two experiments is too different. In conclusion, the most probable explanation of the results of Beintema et al. [14] is the occurrence of smear; the TOD data are in agreement with the limited amount of field data collected so far, and there is no immediate reason to assume a target-specific performance increase.

E. Applications of the Presented Method

Since signal-processing algorithms are particularly attractive for use in inexpensive imaging systems, the robustness of techniques such as superresolution against sensor smear might be of interest. According to the results of Beintema et al. [14], the specific algorithm used yields an improvement of up to 17%, but this might be algorithm dependent. Moreover, the flow estimation used in superresolution algorithms (if accurate) potentially enables calculation of and compensation for the amount of smear. The success of such a correction can be determined using the test method applied in this study.

Dynamic electro-optic sensor performance assessment becomes increasingly important owing to the need to characterize camera observation systems under dynamic conditions, the increased use of undersampled sensor systems, and the application of

348 APPLIED OPTICS / Vol. 49, No. 3 / 20 January 2010

Page 7: Visual image quality assessment with sensor motion: effect of recording and presentation velocity

dynamic signal processing techniques such assuperresolution. The method applied for this study,based on the TOD method but slightly extended tobe able to disentangle sensor effects from displayand human observer effects, can contribute to betterquantify electro-optic system performance and to un-derstand theunderlyingprocesses.Anumber of appli-cations have already beenmentioned in Section 2 andsubsequent paragraphs. Additional examples are themeasurement of HDTV display quality (including aconsiderable amount of signal processing to obtaina smooth television image) using either synthetic orrecorded dynamic TOD test imagery versus a uniformor structured background or the quality of photo andvideo cameras and displays in cell phones [31].

References

1. R. G. Driggers, K. Krapels, S. Murrill, S. S. Young, M. Thielke, and J. Schuler, “Superresolution performance for undersampled imagers,” Opt. Eng. 44, 014001 (2004).

2. K. Krapels, R. G. Driggers, and B. Teaney, “Target-acquisition performance in undersampled infrared imagers: static imagery to motion video,” Appl. Opt. 44, 7055–7061 (2005).

3. P. Bijl, K. Schutte, and M. A. Hogervorst, “Applicability of TOD, MRT, DMRT and MTDP for dynamic image enhancement techniques,” Proc. SPIE 6207, 154–165 (2006).

4. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20, 21–36 (2003).

5. K. Schutte, D. J. de Lange, and S. P. van den Broek, “Signal conditioning algorithms for enhanced tactical sensor imagery,” Proc. SPIE 5076, 92–100 (2003).

6. S. S. Young and R. G. Driggers, “Super-resolution image reconstruction from a sequence of aliased imagery,” Proc. SPIE 5784, 51–62 (2005).

7. K. Krapels, R. G. Driggers, E. Jacobs, S. Burks, and S. Young, “Characteristics of infrared imaging systems that benefit from superresolution reconstruction,” Appl. Opt. 46, 4594–4603 (2007).

8. G. Holst, “Imaging system performance based upon Fλ/d,” Opt. Eng. 46, 103204 (2007).

9. E. Jacobs and R. G. Driggers, “NVThermIP modeling of super-resolution algorithms,” Proc. SPIE 5784, 125–135 (2005).

10. M. A. Hogervorst, A. Toet, and P. Bijl, “The TOD method for dynamic image quality assessment,” Report TNO-DV 2006 C423 (TNO Defense, Security, and Safety, Soesterberg, The Netherlands, 2006).

11. J. Fanning, J. Miller, J. Park, G. Tener, J. Reynolds, P. O’Shea, C. Halford, and R. Driggers, “IR system field performance with superresolution,” Proc. SPIE 6543, 65430Z (2007).

12. P. Bijl and J. M. Valeton, “TOD, the alternative to MRTD and MRC,” Opt. Eng. 37, 1976–1983 (1998).

13. J. Fanning and J. Reynolds, “Target identification performance of superresolution versus dither,” Proc. SPIE 6941, 69410N (2008).

14. J. A. Beintema, P. Bijl, M. A. Hogervorst, and J. Dijk, “Target acquisition performance: effects of target aspect angle, dynamic imaging and signal processing,” Proc. SPIE 6941, 69410C (2008).

15. J. E. Bos, M. A. Hogervorst, K. Munnoch, and D. Perrault, “Human performance at sea assessed by dynamic visual acuity,” in Proceedings of the Pacific 2008 International Maritime Conference (Arinex, 2008).

16. P. Bijl and J. M. Valeton, “Validation of the new TOD method and ACQUIRE model predictions using observer performance data for ship targets,” Opt. Eng. 37, 1984–1994 (1998).

17. P. Bijl, J. M. Valeton, and A. N. de Jong, “TOD predicts target acquisition performance for staring and scanning thermal imagers,” Proc. SPIE 4030, 96–103 (2000).

18. P. Bijl, M. A. Hogervorst, and A. Toet, “Identification of military targets and simple laboratory test patterns in band-limited noise,” Proc. SPIE 5407, 104–115 (2004).

19. P. Bijl and J. M. Valeton, “Guidelines for accurate TOD measurement,” Proc. SPIE 3701, 14–25 (1999).

20. J. M. Valeton, P. Bijl, E. Agterhuis, and S. Kriekaard, “T-CAT, a new thermal camera acuity tester,” Proc. SPIE 4030, 232–238 (2000).

21. P. Bijl, M. A. Hogervorst, J. M. Valeton, and C. J. de Ruiter, “BAXSTER: an image quality tester for x-ray baggage screening systems,” Proc. SPIE 5071, 341–352 (2003).

22. D. J. De Lange, J. M. Valeton, and P. Bijl, “Automatic characterization of electro-optical sensors with image processing, using the triangle orientation discrimination (TOD) method,” Proc. SPIE 3701, 104–111 (2000).

23. M. A. Hogervorst, P. Bijl, and J. M. Valeton, “Capturing the sampling effects: a TOD sensor performance model,” Proc. SPIE 4372, 62–73 (2001).

24. A. W. M. van Eekeren, K. Schutte, O. R. Oudegeest, and L. J. van Vliet, “Performance evaluation of super-resolution reconstruction methods on real-world data,” EURASIP J. Adv. Signal Process. 2007, 43953 (2007).

25. W. A. Wagenaar, “Note on the construction of diagram-balanced Latin squares,” Psychol. Bull. 72, 384–386 (1969).

26. E. Ludvigh and J. W. Miller, “Study of visual acuity during the ocular pursuit of moving test objects. I. Introduction,” J. Opt. Soc. Am. 48, 799–802 (1958).

27. J. L. Demer and F. Amjadi, “Dynamic visual acuity of normal subjects during vertical optotype and head motion,” Invest. Ophthalmol. Visual Sci. 34, 1894–1906 (1993).

28. V. M. Reading, “Visual resolution as measured by dynamic and static tests,” Pflügers Arch. 333, 17–26 (1972).

29. M. A. Hogervorst, P. Bijl, and J. M. Valeton, “Visual sensitivity to different test patterns used in system/human performance tests,” Report TNO-TM-02-B007 (TNO Human Factors, Soesterberg, The Netherlands, 2002).

30. P. Bijl, J. M. Valeton, and M. A. Hogervorst, “A critical evaluation of test patterns for EO system characterization,” Proc. SPIE 4372, 27–38 (2001).

31. ITU standard G.1070: opinion model for video-telephony applications (International Telecommunication Union, Geneva, Switzerland, 2007).

20 January 2010 / Vol. 49, No. 3 / APPLIED OPTICS 349