
MOTION ARTIFACT-FREE HDR IMAGING UNDER DYNAMIC ENVIRONMENTS

Sung-Chan Park, Hyun-Hwa Oh, Jae-Hyun Kwon, Wonhee Choe, Seong-Deok Lee

Advanced Multimedia Lab, Samsung Advanced Institute of Technology, Samsung Electronics

San #14-1 Nongseo-dong, Giheung-gu, Yongin-si, Gyeonggi-do, Korea 446-712

ABSTRACT

High dynamic range (HDR) imaging is one of the most important emerging fields for next-generation digital cameras. Methods that fuse a set of differently exposed images struggle with the so-called ghosting artifacts caused by camera shake and/or object motion. In particular, object motion in or around under- or over-saturated regions still produces severe artifacts because of the reference image's limited dynamic range, and for commercial products this problem must be solved completely. We analyze this problem and propose a new HDR deghosting scheme capable of dealing with various motions. To avoid ghosting artifacts, we capture only two uncompressed Bayer raw images with different exposures, select the image with the wider dynamic range as the reference, and process both images in the Bayer domain. Experimental results show that the proposed method produces motion artifact-free HDR images under dynamic environments with various moving objects.

Index Terms—HDR Imaging, De-ghosting

1. INTRODUCTION

Many researchers have proposed extending the dynamic range by combining multiple low dynamic range (LDR) images of the same scene taken with different exposure times [1, 2]. When these images are captured sequentially by a digital camera, moving objects and camera shake cause misalignments between the images and produce several artifacts when they are blended into a high dynamic range (HDR) image. To eliminate these artifacts, the misaligned regions, so-called ghosts, must be detected with respect to a reference image or region among the captured images and then filled with reference image data, while in the aligned regions the multiple images are blended together for the HDR representation. However, if a misaligned region lies near a saturated area, the misaligned and aligned regions cannot be separated exactly. For example, if an over-saturated region partially covers a walking person and the background, the true ghost boundary of the person cannot be determined. Several papers simply handle saturated regions by assigning them to the ghost region [3] or to the aligned region [4, 5, 6]. In these cases, ghost segmentation errors produce artifacts in which detail and color differ across the ghost region boundary [4, 6]. After HDR compression, the color information in the saturated part of the ghost region is clipped and produces grey artifacts [5, 6]. Previous methods therefore do not handle the deghosting problem around saturated regions well and frequently suffer from ghost artifacts. In this paper, we analyze this problem and propose a new HDR deghosting scheme capable of dealing with various motions.

2. PROBLEM ANALYSIS

Fig. 1. Irradiance range limitations of each image in the RGB domain: (a) ISP LUT curve (8-bit output), (b) irradiance range of each image.

Fig. 2. A saturated (sat.) area inside the ghost region of the reference image produces artifacts: (a) no saturated area in the ghost region, (b) HDR blending result, (c) saturated area in the ghost region of the reference image, (d) ideal HDR case, (e) HDR ghost artifacts (ghost boundary region artifact and clipping region artifact).

The incoming light is attenuated by the lens aperture and produces the sensor irradiance, which is converted to a digital value by the image sensor to obtain the Bayer raw image. As shown in Fig. 1(a), the Bayer raw data, which have more than 12 bits, are suppressed in the noise floor region and compressed to 8-bit RGB data by the Image Signal Processor (ISP) [7]. Therefore, each image can capture only a limited irradiance range [2], as shown in Fig. 1(b), and part of the scene irradiance remains uncovered by the reference image.

As shown in Fig. 2, assume that a saturated region exists in the reference image because of its limited observable irradiance range. If the saturated region is completely separated from the ghost region, as in Fig. 2(a), the multiple images can be blended in the registered region without artifacts. However, over- or under-saturated regions may lie on or around a moving object, so the saturated region extends into the ghost area, as in Fig. 2(c). A misaligned ghost region is detectable only if its boundary is discernible in the reference image; since the saturated area contains no pattern, the exact ghost boundary cannot be found, which produces the ghost boundary region artifact. As explained above, saturation means that the real irradiance lies in the range uncovered by the image, so the ghost detection ability of the reference image is limited to its unsaturated range. If the reference image had a wide enough irradiance range, this ghost region could be detected, and we could decide correctly where to blend the aligned images and where to output reference image data, as in Fig. 2(d). Another problem is the clipping artifact shown in Fig. 2(e): the saturated region inside the ghost area cannot be blended with the other, unsaturated image, so its color and details appear clipped to grey in the HDR image [5]. At the saturated ghost boundary, [4] tries to reduce color differences using a Poisson solver, but since the true ghost boundary cannot be detected, a discontinuity in the detail pattern is unavoidable.

3. OUR DEGHOSTING SCHEME

Based on our analysis, we propose a new deghosting scheme that fuses only two Bayer raw images taken with a large exposure difference. After lens shading correction, and neglecting sensor noise, the image value I is linear in the irradiance E [8]:

I = α · E · Δt,   (1)

where Δt is the integration time. In the sensor, E is quantized uniformly to N levels by the A/D converter, and the SNR is approximately proportional to I because of photon noise [8]. Using this linear camera model, we can build a simple relation between the short exposure time (SET) image I_SET and the long exposure time (LET) image I_LET, where k denotes the exposure time ratio:

I_LET = k · I_SET,   k = Δt_LET / Δt_SET.   (2)

Fig. 3(a) shows an example result of the brightness alignment (histogram matching) between the LET and SET Bayer raw images. The non-overlapping region between the two images is mainly the over-saturated region of the LET image. Neglecting sensor noise, the SET image covers almost the entire irradiance range of the LET image.
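As a minimal sketch of the linear brightness alignment in Eq. (2), the snippet below scales a SET Bayer raw array by the exposure time ratio. The NumPy array input, the hypothetical loader, and the 12-bit white level are assumptions for illustration; the actual pipeline uses histogram matching in the Bayer domain (Section 3.1).

```python
import numpy as np

def align_brightness_linear(bayer_set, dt_set, dt_let, white_level=4095):
    """Scale the SET Bayer raw image to the LET exposure using Eq. (2).

    bayer_set : 2-D array of linear Bayer raw values (after lens shading
                correction and black-level subtraction).
    dt_set, dt_let : integration times of the SET and LET captures.
    white_level : assumed sensor saturation value used to clip the result.
    """
    k = dt_let / dt_set                          # exposure time ratio in Eq. (2)
    aligned = bayer_set.astype(np.float32) * k
    return np.clip(aligned, 0, white_level)

# Example: a 3-stop exposure difference (k = 8), as used in our experiments.
# bayer_set = load_bayer_raw("set.dng")          # hypothetical loader
# set_as_let = align_brightness_linear(bayer_set, dt_set=1/400, dt_let=1/50)
```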

Fig. 3. The SET image has the larger irradiance range, although it suffers from photon noise and quantization error: (a) Bayer raw data matching result, (b) SET irradiance range.

From the viewpoint of this irradiance range condition, we present a new deghosting scheme. First, we select the SET raw image as the reference and do not clip or suppress its low-SNR region; instead, we preserve as much detail as possible using wavelet noise reduction (NR) [9]. The uncovered range is thereby almost eliminated, and with simple histogram matching the SET irradiance image shows details similar to the LET one, whereas the SET image in the RGB domain fails, as shown in Fig. 7(b) and (d). To reduce ghost regions in the low-SNR area, we use spatial registration weighted toward the dark region. Because the uncovered range is minimized, there are almost no ghost boundary or clipping region artifacts, although the details of the ghost region are slightly degraded by the Bayer wavelet NR. Furthermore, we present an adaptive spatial blending technique that covers ghost detection errors as well as brightness alignment errors, and the aligned area is increased by our multiple motion alignment scheme. There are still deghosting limitations: as the exposure difference grows, the brightness alignment error increases due to quantization and sensor noise in the SET image. In our experiments, an exposure time difference of 3 stops is suitable.

3.1. Image Brightness Alignment and Image Registration

We align the brightness of the SET image to that of the LET image using histogram matching, executed in the Bayer domain without severe data loss. The converted data are therefore more accurate and preserve more detail than data converted in the 8-bit RGB domain after the camera ISP. We then align the two images spatially with a simple hierarchical block matching that is weighted toward the dark region.
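A minimal sketch of per-channel histogram matching in the Bayer domain is given below. It assumes an RGGB mosaic and classic CDF-based histogram specification; the paper does not specify its exact matching implementation, so this is only an illustration of the idea.

```python
import numpy as np

def match_channel(source, reference):
    """Map `source` values so its histogram matches `reference`
    (CDF-based histogram specification on one Bayer color plane)."""
    src_vals, src_idx, src_cnt = np.unique(source.ravel(),
                                           return_inverse=True,
                                           return_counts=True)
    ref_vals, ref_cnt = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_cnt) / source.size
    ref_cdf = np.cumsum(ref_cnt) / reference.size
    # For each source quantile, pick the reference value at the same quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)

def match_bayer(set_raw, let_raw):
    """Histogram-match the SET Bayer raw image to the LET one,
    one color plane at a time (assumes an RGGB mosaic)."""
    out = np.empty_like(set_raw, dtype=np.float32)
    for dy in (0, 1):
        for dx in (0, 1):
            out[dy::2, dx::2] = match_channel(set_raw[dy::2, dx::2],
                                              let_raw[dy::2, dx::2])
    return out
```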

3.2. Ghost Region Detection

Even after image registration, unexpected motions still induce ghosting artifacts. To make ghost detection robust to noise, we use a double thresholding method. We first detect two kinds of candidate ghost pixels by applying a high and a low threshold to the intensity differences. Low-threshold pixels are connected to each other within n×n neighborhoods, and if a high-threshold pixel is included in such a chain, the low-threshold pixels are assigned to the ghost region together with the high-threshold pixel. Our ghost detection is performed in the 12-bit Bayer domain, so its performance is much better than detection in the 8-bit RGB domain.
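The double thresholding above is essentially hysteresis thresholding on the absolute difference image. A minimal sketch under that reading follows, using scipy.ndimage connected-component labeling in place of the explicit n×n chaining; the threshold values are placeholders.

```python
import numpy as np
from scipy import ndimage

def detect_ghost(let_raw, set_aligned, t_low=64, t_high=256):
    """Hysteresis-style ghost detection on 12-bit Bayer differences.

    A pixel is marked as ghost if its difference exceeds t_high, or
    exceeds t_low and is connected to a pixel that exceeds t_high.
    """
    diff = np.abs(let_raw.astype(np.int32) - set_aligned.astype(np.int32))
    weak = diff > t_low
    strong = diff > t_high
    # Label connected regions of weak pixels (8-connectivity), then keep
    # only the regions that contain at least one strong pixel.
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                      # background label is never a ghost
    return keep[labels]
```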

3.3. De-ghosting

Fig. 4. Eliminating artifacts caused by ghost detection errors and by the brightness error around the ghost region boundary with our adaptive spatial blending technique: (a) ghost detection error, (b) after spatial blending, (c) deghost weight, (d) brightness difference, (e) after spatial blending, (f) deghost weight. Using the weights in (c) and (f), the brightness-aligned SET image is stitched to the LET image.

To remove ghost artifacts, we replace the pixel values belonging to the ghost region of the spatially aligned LET image with those of the SET image whose brightness has been converted to the LET level as described in Section 3.1. Brightness alignment errors between the images and ghost detection errors can remain around the boundary of the ghost region, so we apply a spatial feathering technique [10] around this region, which removes the resulting seams by spatial blending. To keep the blended boundary region small, we adaptively control the size of the blending weight, using a strong attenuation when the brightness alignment error is small. This eliminates both artifacts, as shown in Fig. 4. The weight is computed recursively with linear complexity O(N) using the distance transform [11].
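A minimal sketch of a distance-transform-based feathering weight is shown below, using SciPy's Euclidean distance transform rather than the linear-time algorithm of [11]; the feather width and the rule that adapts it to the brightness alignment error are placeholder assumptions.

```python
import numpy as np
from scipy import ndimage

def feather_weight(ghost_mask, feather_px=16):
    """Blending weight for stitching the brightness-aligned SET image
    into the LET image: 1 inside the ghost region, falling to 0
    `feather_px` pixels outside it."""
    # Distance of each non-ghost pixel to the nearest ghost pixel.
    dist_out = ndimage.distance_transform_edt(~ghost_mask)
    return np.clip(1.0 - dist_out / feather_px, 0.0, 1.0)

def stitch(let_aligned, set_as_let, ghost_mask, align_err, max_feather=16):
    """Adaptive feathering: shrink the feather when the brightness
    alignment error is small (placeholder adaptation rule, with
    `align_err` assumed normalized to [0, 1])."""
    feather_px = max(1, int(max_feather * min(1.0, align_err)))
    w = feather_weight(ghost_mask, feather_px)
    return w * set_as_let + (1.0 - w) * let_aligned
```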

Fig. 5. Our deghosting method, based on two simple motion alignments that focus on the dark background region (Level 1: background alignment) and on the residual foreground ghost regions (Level 2: foreground alignment). Each level performs ROI-based alignment, ghost detection, and adaptive feathering. R0: background in the dark region; G1: ghost region in the foreground; R_user: user ROI; I_F: deghosted image (foreground aligned); I_Deghost: deghosted image (fore/background aligned).

To align active motion regions with this simple registration, we run our registration algorithm twice: a background-centered pass for camera shake and a moving-object-centered pass, as described in Fig. 5. At level 1, we register the LET image I_LET to the brightness-aligned SET image I_SET2, focusing on the background part of the shadow region R0. At level 2, we register it again to the residual ghost regions, or to a user-specified region R_user within the not-yet-aligned ghost area G1; for example, the user can select a human face or any other object in the dark area. During deghosting, at level 2 we stitch the aligned object region of I1_LET to I_SET2 to obtain the intermediate image I_F, and then add the aligned background region of I2_LET to I_F in a second stitching step, yielding the final deghosted image I_Deghost. As shown in Fig. 6(f), the more exact registration recovers more detail in the moving face region.
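To make the data flow of Fig. 5 concrete, a schematic sketch of the two-level scheme is given below. It reuses the detect_ghost() sketch from Section 3.2, while register() and paste() are hypothetical stand-ins for the dark-weighted hierarchical block matching of Section 3.1 and the adaptive feathering of Section 3.3; the mask bookkeeping is our own simplification, not the paper's exact formulation.

```python
import numpy as np

def register(moving, fixed, roi=None):
    """Stand-in for the dark-weighted hierarchical block matching of
    Section 3.1; a real implementation would estimate and compensate
    the motion inside `roi`. Here it simply returns its input."""
    return moving

def paste(src, base, mask):
    """Copy `src` pixels into `base` where `mask` is True (the adaptive
    feathering of Section 3.3 would soften this hard boundary)."""
    out = base.copy()
    out[mask] = src[mask]
    return out

def deghost_two_level(i_let, i_set2, r0_mask, r_user=None):
    """Schematic two-level deghosting of Fig. 5 (simplified)."""
    # Level 1: background-centered registration for camera shake,
    # weighted toward the dark/shadow region R0.
    let_bg = register(i_let, i_set2, roi=r0_mask)
    g1 = detect_ghost(let_bg, i_set2)              # residual foreground ghosts

    # Level 2: register again to the residual ghost area G1, or to a
    # user-selected ROI inside it (e.g. a face in the dark region).
    let_fg = register(i_let, i_set2, roi=(r_user if r_user is not None else g1))
    g2 = detect_ghost(let_fg, i_set2)              # ghosts still left afterwards

    # Stitch the now-aligned object region into the SET image (I_F), then
    # add the aligned background region for the final I_Deghost.
    i_f = paste(let_fg, i_set2, g1 & ~g2)
    return paste(let_bg, i_f, ~g1)
```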

3.4. HDR Blending

After ISP processing, the SET image and the deghosted LET image are blended to create the high dynamic range image. In our algorithm, the luminance of each pixel of the LET image is used as the HDR blending weight for the SET image, so that the details of bright regions are represented, while the inverse of the luminance is used as the weight for the LET image, so that the dark-region details of the deghosted LET image appear simultaneously in the HDR result [12].
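A minimal sketch of this luminance-weighted blending is given below, assuming both images have already been processed by the ISP into linear RGB in [0, 1]; the exact weighting of [12] is not specified here, so the Rec. 709 luma and the simple per-pixel weights are assumptions.

```python
import numpy as np

def hdr_blend(set_rgb, let_deghosted_rgb):
    """Blend the SET image and the deghosted LET image with per-pixel
    weights derived from the LET luminance: bright LET pixels trust the
    SET image, dark LET pixels trust the deghosted LET image."""
    # Rec. 709 luma of the deghosted LET image, in [0, 1] (assumed).
    luma = (0.2126 * let_deghosted_rgb[..., 0]
            + 0.7152 * let_deghosted_rgb[..., 1]
            + 0.0722 * let_deghosted_rgb[..., 2])
    w_set = luma[..., None]              # weight for the SET image
    w_let = 1.0 - w_set                  # inverse weight for the LET image
    return w_set * set_rgb + w_let * let_deghosted_rgb
```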

4. EXPERIMENTAL RESULTS

To verify the performance of the proposed HDR deghosting method, we tested it on a variety of dynamic scenes with multiple moving objects. We used a Samsung GX-20 camera without a tripod to capture images with an exposure time difference of 3 stops on average.

Fig. 6. Comparison of detail representation between the RGB and Bayer raw domains: (a) LET image, (b) projection to SET, (c) SET image, (d) projection to LET, (e) brightness alignment from SET to LET in Bayer, (f) deghosted LET after registration, (g) HDR blending result.

As shown in Fig. 6, in the RGB case (a-d) the image data are clipped and compressed, so artifacts appear after brightness alignment, whereas the Bayer raw SET case retains finer details. After registering the LET image, its details can be added to (e) to obtain the deghosted LET image and the final HDR result. Figs. 7(d) and 8(d) show the ghost artifacts around the saturated (sat.) region of the auto-exposure (AE) images (a, c) caused by moving objects or camera movement. Our method completely removes the duplication of moving objects and shows no ghost artifacts even in saturated cases, as shown in Figs. 7(e) and 8(e). We tested 30 different HDR scenes and found no ghost artifacts, including in saturated regions. As shown in Fig. 9, our result is also artifact-free compared with the Sony A550 HDR camera.

Fig. 7. Resulting HDR image under dynamic environment I: (a) AE image, (b) our HDR image, (c) saturated/dark region in AE, (d) ghost artifact, (e) our HDR result.

Fig. 8. Resulting HDR image under dynamic environment II: (a) AE image, (b) our HDR image, (c) saturated/dark region in AE, (d) grey ghost artifact, (e) our HDR result.

Fig. 9. Comparison between the Sony HDR result and ours: (a) Sony A550 HDR, (b) our HDR system, (c) magnified Sony result, (d) our magnified result.

5. DISCUSSIONS

With a moderate exposure time difference of 3 stops, our method removes ghosting artifacts almost perfectly compared with other commercial HDR products. Since it has proven successful on a variety of dynamic scenes, including those with saturated ghost regions, it can be applied in commercial digital cameras to create ghost-free HDR images of real scenes.

6. REFERENCES

[1] P. E. Debevec et al., "Recovering high dynamic range radiance maps from photographs," SIGGRAPH, pp. 369-378, 1997.
[2] S. Mann and R. Picard, "Being undigital with digital cameras: Extending dynamic range by combining differently exposed pictures," Technical Report 323, M.I.T. Media Lab, 1994.
[3] S. B. Kang et al., "High dynamic range video," ACM Trans. on Graphics, vol. 22, no. 3, pp. 319-325, 2003.
[4] O. Gallo et al., "Artifact-free high dynamic range imaging," IEEE Int'l Conf. on Computational Photography, 2009.
[5] A. Eden et al., "Seamless image stitching of scenes with large motions and exposure differences," CVPR, pp. 2498-2505, 2006.
[6] N. Menzel et al., "Freehand HDR photography with motion compensation," Vision, Modeling and Visualization 07, pp. 127-134, 2007.
[7] JAI tech. note TH-1086, AccuPiXEL™ LUT Function, 2000.
[8] J. Nakamura, Image Sensors and Signal Processing for Digital Still Cameras, Taylor and Francis Group, New York, 2006.
[9] Youngjin Yoo et al., "A digital ISO expansion technique for digital cameras," Proc. of Electronic Imaging, vol. 7537, 2010.
[10] R. Szeliski, "Video mosaics for virtual environments," IEEE Computer Graphics and Applications, pp. 22-30, 1996.
[11] P. Felzenszwalb and D. Huttenlocher, "Distance transforms of sampled functions," Cornell Computing and Information Science Technical Report TR2004-1963, 2004.
[12] K. E. Lee et al., "Locally adaptive high dynamic range image reproduction inspired by human visual system," Proc. of Electronic Imaging, vol. 7241, 72410T, 2009.
