


IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 20, NO. 5, MAY 2011 1249

A Generalized Unsharp Masking Algorithm

Guang Deng

Abstract—Enhancement of contrast and sharpness of an image is required in many applications. Unsharp masking is a classical tool for sharpness enhancement. We propose a generalized unsharp masking algorithm using the exploratory data model as a unified framework. The proposed algorithm is designed to address three issues: 1) simultaneously enhancing contrast and sharpness by means of individual treatment of the model component and the residual, 2) reducing the halo effect by means of an edge-preserving filter, and 3) solving the out-of-range problem by means of log-ratio and tangent operations. We also present a study of the properties of the log-ratio operations and reveal a new connection between the Bregman divergence and the generalized linear systems. This connection not only provides a novel insight into the geometrical property of such systems, but also opens a new pathway for system development. We present a new system called the tangent system which is based upon a specific Bregman divergence. Experimental results, which are comparable to recently published results, show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. In the proposed algorithm, the user can adjust the two parameters controlling the contrast and sharpness to produce the desired results. This makes the proposed algorithm practically useful.

Index Terms—Bregman divergence, exploratory data model, generalized linear system, image enhancement, unsharp masking.

I. INTRODUCTION

ENHANCING the sharpness and contrast of images has many practical applications. There has been continuous research into the development of new algorithms. In this section, we first briefly review previous works which are directly related to our work. These related works include unsharp masking and its variants, histogram equalization, retinex and dehazing algorithms, and generalized linear systems. We then describe the motivation and contribution of this paper.

A. Related Works

1) Sharpness and Contrast Enhancement: The classical unsharp masking algorithm can be described by the equation

v = y + γ(x − y)

where x is the input image, y is the result of a linear low-pass filter, and the gain γ is a real scaling factor. The signal d = x − y is usually amplified (γ > 1) to increase the sharpness. However, the signal d contains 1) details of the image, 2) noise, and 3) over-shoots and under-shoots

Manuscript received May 27, 2010; revised August 31, 2010; accepted November 01, 2010. Date of publication November 15, 2010; date of current version April 15, 2011. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Yongyi Yang.

The author is with the Department of Electronic Engineering, La Trobe University, Bundoora, Victoria 3086, Australia (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIP.2010.2092441

in areas of sharp edges due to the smoothing of edges. While the enhancement of noise is clearly undesirable, the enhancement of the under-shoot and over-shoot creates the visually unpleasant halo effect. Ideally, the algorithm should only enhance the image details. This requires that the filter is not sensitive to noise and does not smooth sharp edges. These issues have been studied by many researchers. For example, the cubic filter [1] and the edge-preserving filters [2]–[4] have been used to replace the linear low-pass filter. The former is less sensitive to noise and the latter does not smooth sharp edges. Adaptive gain control has also been studied [5].
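To make the classical algorithm and its over/under-shoot behavior concrete, here is a minimal 1-D pure-Python sketch. The 3-tap moving average standing in for the linear low-pass filter and the gain value are illustrative assumptions, not choices from the paper.

```python
# A minimal 1-D sketch of classical unsharp masking: v = y + gain*(x - y).

def low_pass(x):
    """3-tap moving average with edge replication (an illustrative LPF)."""
    p = [x[0]] + list(x) + [x[-1]]
    return [(p[i - 1] + p[i] + p[i + 1]) / 3.0 for i in range(1, len(p) - 1)]

def unsharp_mask(x, gain):
    """v = y + gain * (x - y), where y is the low-pass output."""
    y = low_pass(x)
    return [yi + gain * (xi - yi) for xi, yi in zip(x, y)]

x = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]   # a step edge
v = unsharp_mask(x, 2.0)             # under/over-shoots appear at the edge
```

Raising the gain pushes the shoots outside the valid gray scale range (here, below 0 for a gain of 3), which is precisely the out-of-range problem discussed in this section.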

Contrast is a basic perceptual attribute of an image [6]. It is difficult to see the details in a low-contrast image. Adaptive histogram equalization [7], [8] is frequently used for contrast enhancement. The retinex algorithm, first proposed by Land [9], has been recently studied by many researchers for manipulating contrast, sharpness, and dynamic range of digital images. The retinex algorithm is based upon the imaging model in which the observed image is formed by the product of scene reflectance and illuminance. The task is to estimate the reflectance from the observation. Many algorithms use the assumption that the illuminance is spatially smooth. The illuminance is estimated by using a low-pass filter or multiresolution [10], or by formulating the estimation problem as a constrained optimization problem [11]. To reduce the halo effect, edge-preserving filters such as the adaptive Gaussian filter [12], weighted least-squares based filters [13], and bilateral filters [11], [14] are used.

Recently, novel algorithms for contrast enhancement in dehazing applications have been published [15], [16]. These algorithms are based upon a physical imaging model. Excellent results have been demonstrated.

An important issue associated with the unsharp masking and retinex types of algorithms is that the result is usually out of the range of the image [12], [17]–[19]. For example, for an 8-bit image, the range is [0, 255]. A careful rescaling process is usually needed for each image. A histogram-based rescaling process and a number of internal scaling processes are used in the retinex algorithm presented in [19].

2) Generalized Linear System and the Log-Ratio Approach: In his classic book, Marr [20] has pointed out that to develop an effective computer vision technique one must consider: 1) why the particular operations are used, 2) how the signal can be represented, and 3) what implementation structure can be used. Myers [21] has also pointed out that there is no reason to persist with particular operations such as the usual addition and multiplication if, via abstract analysis, more easily implemented and more generalized or abstract versions of mathematical operations can be created for digital signal processing. Consequently, abstract analysis may show new ways of creating systems with desirable properties. Following these ideas, the generalized linear system, shown in Fig. 1, is developed.

Fig. 1. Block diagram of a generalized linear system, where φ is usually a nonlinear function.

The generalized addition and scalar multiplication operations, denoted by ⊕ and ⊗, are defined as follows:

x ⊕ y = φ⁻¹[φ(x) + φ(y)]  (1)

and

α ⊗ x = φ⁻¹[αφ(x)]  (2)

where x and y are signal samples, α is usually a real scalar, and φ is a nonlinear function.

The log-ratio approach [17] was proposed to systematically tackle the out-of-range problem in image restoration. The log-ratio can be understood from a generalized linear system point of view, since its operations are implicitly defined by using (1) and (2). A remarkable property of the log-ratio approach is that the gray scale set (0, 1) is closed under the new operations. Deng [22] used the log-ratio in a generalized linear system context for image enhancement. In their review papers, Cahill and Deng [23] and Pinoli [24] compared the log-ratio with other generalized linear system-based image processing techniques such as the multiplicative homomorphic filters [25] and the logarithmic image processing (LIP) model [18], [26].
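The pattern of (1) and (2) can be sketched generically in a few lines of Python; the factory function `make_ops` is our name, and φ = log is simply the multiplicative homomorphic example mentioned above.

```python
import math

def make_ops(phi, phi_inv):
    """Build the generalized addition (1) and scalar multiplication (2)
    induced by a nonlinear function phi and its inverse."""
    def add(x, y):
        return phi_inv(phi(x) + phi(y))
    def mul(alpha, x):
        return phi_inv(alpha * phi(x))
    return add, mul

# Multiplicative homomorphic choice: phi = log turns generalized
# addition into ordinary multiplication.
m_add, m_mul = make_ops(math.log, math.exp)
```

Any invertible φ plugged into `make_ops` yields a different generalized linear system; the log-ratio and tangent systems developed later fit the same template.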

B. Issues Addressed, Motivation, and Contributions

In this paper we address the following issues related to contrast and sharpness enhancement. 1) Enhancement of the overall contrast and the sharpness of the image are two related tasks. However, contrast enhancement does not necessarily lead to sharpness enhancement. 2) Classical unsharp masking techniques aiming at enhancing the sharpness of the image usually suffer from the halo effect. 3) When enhancing the sharpness of an image, minute details are usually enhanced at the cost of enhancing noise as well. Part of the difficulty in dealing with this issue is rooted in the definition of image details. 4) Recently published algorithms, although very successful in contrast and sharpness enhancement, have a rescaling process which must be performed very carefully for each image to ensure the best result. In this paper, we address issues 1)–3) using an exploratory data model and issue 4) using the log-ratio operations and a new generalized linear system proposed in this paper.

This work is partly motivated by a classic work in unsharp masking [1], an excellent analysis of the halo effect [27], and the requirement of the rescaling process [12], [19]. In [17], only the log-ratio operations were defined. Motivated by the elegant theory of the LIP model [24], we study the properties of these operations as well as the generalized linear system.

The major contributions and the organization of this paper are as follows. In Section II, we first present an exploratory data model as a unified framework for developing a generalized unsharp masking algorithm. We then outline the proposed algorithm and show how the previous issues are dealt with in terms of the model. In Section III, the log-ratio operations are defined and their properties are discussed. This is followed by a study of the connection between the generalized linear systems (including the log-ratio, the multiplicative homomorphic system (MHS), and the LIP model) and the Bregman divergence. This connection not only provides us with a new insight into the generalized linear system, but also opens a new pathway for developing such systems from different Bregman divergences. As an example, we propose a new generalized linear system called the tangent system based upon a specific Bregman divergence. We show that the new system can also be used to systematically solve the out-of-range problem. To the best of our knowledge, the connection between the Bregman divergence and the generalized linear system has not been published before. Thus, the study reported in this section constitutes a novel contribution of this paper. In Section IV, we describe the details of each building block of the proposed algorithm, which includes the iterative median filter (IMF), the adaptive gain control, and the adaptive histogram equalization. In Section V, we present simulation results which are compared to recently published results. Conclusions and future work are presented in Section VI. In Appendix I, we discuss the log-ratio and tangent operations from a vector space point of view.

II. EXPLORATORY DATA ANALYSIS MODEL FOR IMAGE ENHANCEMENT

A. Image Model and Generalized Unsharp Masking

A well-known idea in exploratory data analysis is to decompose a signal into two parts. One part fits a particular model, while the other part is the residual. In Tukey's own words, the data model is: "fit PLUS residuals" ([28], p. 208). From this point of view, the output of the filtering process, denoted y = f(x), can be regarded as the part of the image that fits the model. Thus, we can represent an image using the generalized operations (not limited to the log-ratio operations) as follows:

x = y ⊕ d  (3)

where d is called the detail signal (the residual). The detail signal is defined as d = x ⊖ y, where ⊖ is the generalized subtraction operation.

Although this model is simple, it provides us with a unified framework to study unsharp masking algorithms. A general form of the unsharp masking algorithm can be written as

v = h(y) ⊕ g(d)  (4)

where v is the output of the algorithm and both h(·) and g(·) could be linear or nonlinear functions. This model explicitly states that the part of the image being sharpened is the model residual. This will force the algorithm developer to carefully select an appropriate model and avoid models such as linear filters. In addition, this model permits the incorporation of contrast enhancement by means of a suitable processing function h(·) such as adaptive histogram equalization. As such, the generalized algorithm can enhance the overall contrast and sharpness of the image.
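A tiny numerical sketch of the decomposition (3) and recombination (4), using φ = log for concreteness and identity placeholders for h and g (both our illustrative choices); with these placeholders the recombination simply reproduces the input.

```python
import math

def g_add(a, b):
    """Generalized addition with phi = log: a (+) b = a * b."""
    return math.exp(math.log(a) + math.log(b))

def g_sub(a, b):
    """Generalized subtraction with phi = log: a (-) b = a / b."""
    return math.exp(math.log(a) - math.log(b))

x, y = 0.6, 0.5      # input sample and "model" (filtered) sample
d = g_sub(x, y)      # detail signal, so that x = y (+) d, as in (3)
v = g_add(y, d)      # (4) with h and g both identity gives back x
```

Replacing the identity placeholders with a contrast-enhancement function h and a gain function g is what turns this decomposition into the generalized unsharp masking algorithm.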

B. Outline of the Proposed Algorithm

The proposed algorithm, shown in Fig. 2, is based upon the previous image model and generalizes the classical unsharp


Fig. 2. Block diagram of the proposed generalized unsharp masking algorithm.

TABLE I
COMPARISON OF THE CLASSICAL UNSHARP MASKING (UM) ALGORITHM WITH THE PROPOSED GENERALIZED UNSHARP MASKING (GUM) ALGORITHM. LPF: LOW-PASS FILTER. EPF: EDGE-PRESERVING FILTER. ACE: ADAPTIVE CONTRAST ENHANCEMENT

masking algorithm by addressing the issues stated in Section I-B. We address the issue of the halo effect by using an edge-preserving filter, the IMF, to generate the signal y. The choice of the IMF is due to its relative simplicity and well-studied properties such as the root signals. Other more advanced edge-preserving filters such as the nonlocal means filter and wavelet-based denoising filters can also be used. We address the issue of the need for a careful rescaling process by using new operations defined according to the log-ratio and the new generalized linear system. Since the gray scale set is closed under these new operations (addition ⊕ and scalar multiplication ⊗, formally defined in Section III), the out-of-range problem is systematically solved and no rescaling is needed. We address the issue of contrast enhancement and sharpening by using two different processes. The image y is processed by adaptive histogram equalization and the output is called h(y). The detail image d is processed by g(d) = γ(d) ⊗ d, where the adaptive gain γ(d) is a function of the amplitude of the detail signal d. The final output of the algorithm is then given by

v = h(y) ⊕ [γ(d) ⊗ d]  (5)

We can see that the proposed algorithm is a generalization of the classical unsharp masking algorithm in several ways, which are summarized in Table I. In the following, we present details of the new operations and the enhancement of the two images y and d.

III. LOG-RATIO, GENERALIZED LINEAR SYSTEMS AND BREGMAN DIVERGENCE

In this section, we first define the new operations using the generalized linear system approach. We use (1) and (2) to simplify presentation. Note that these operations can be defined from the vector space point of view, which is similar to that of the development of the LIP model [26]. The vector space related definitions are presented in Appendix I. We then study properties of these new operations from an image processing perspective. We show the connection between the log-ratio, generalized linear systems and the Bregman divergence. As a result, we not only show novel interpretations of two existing generalized linear systems, but also develop a new system.

A. Definitions and Properties of Log-Ratio Operations

1) Nonlinear Function: We consider the pixel gray scale of an image 0 < x < 1. For an N-bit image, we can first add a very small positive constant to the pixel gray value and then scale it by 2^(−N) such that it is in the range (0, 1). The nonlinear function is defined as follows:

φ(x) = log[x/(1 − x)]  (6)

To simplify notation, we define the ratio of the negative image to the original image as follows:

X̃ = (1 − x)/x  (7)

2) Addition and Scalar Multiplication: Using (1), the addition of two gray scales x and y is defined as

x ⊕ y = 1/(1 + X̃Ỹ)  (8)

where X̃ = (1 − x)/x and Ỹ = (1 − y)/y. The multiplication of a gray scale x by a real scalar α is defined by using (2) as follows:

α ⊗ x = 1/(1 + X̃^α)  (9)

This operation is called scalar multiplication, which is a terminology derived from a vector space point of view [29]. We can define a new zero gray scale, denoted e, as follows:

e ⊕ x = x  (10)

It is easy to show that e = 1/2. This definition is consistent with the definition of scalar multiplication in that 0 ⊗ x = e. As a result, we can regard the intervals (0, 1/2) and (1/2, 1) as the new definitions of negative and positive numbers, respectively. The absolute value, denoted by |x|, can be defined in a similar way as the absolute value of a real number as follows:

|x| = x for x ∈ [1/2, 1),  |x| = 1 − x for x ∈ (0, 1/2)  (11)
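The log-ratio operations (7)–(10) are easy to check numerically. A pure-Python sketch (function names are ours):

```python
def ratio(x):
    """Eq. (7): the negative-to-original ratio (1 - x) / x."""
    return (1.0 - x) / x

def lr_add(x, y):
    """Log-ratio addition, eq. (8): 1 / (1 + X~ Y~); closed on (0, 1)."""
    return 1.0 / (1.0 + ratio(x) * ratio(y))

def lr_mul(alpha, x):
    """Log-ratio scalar multiplication, eq. (9): 1 / (1 + X~ ** alpha)."""
    return 1.0 / (1.0 + ratio(x) ** alpha)
```

Adding the new zero 1/2 leaves a gray scale unchanged, and the results always stay inside (0, 1), which is the closure property that removes the need for rescaling.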

3) Negative Image and Subtraction Operation: A natural extension is to define the negative of the gray scale. Although this can be defined in a similar way as those described in (8) and (9), we take another approach to gain deeper insights into the operation. The negative of the gray scale x, denoted by ⊖x, is obtained by solving

x ⊕ v = 1/2  (12)

The result is v = 1 − x, which is consistent with the classical definition of the negative image. Indeed, this definition is also consistent with the scalar multiplication in that (−1) ⊗ x = 1 − x.


Fig. 3. Effects of the log-ratio addition x ⊕ α (top row) and the scalar multiplication α ⊗ x (bottom row).

Following the notation of the classical definition of negative numbers, we can define the negative gray scale as ⊖x = 1 − x. We can now define the subtraction operation using the addition operation defined in (8) as follows:

x ⊖ y = x ⊕ (⊖y) = 1/(1 + X̃/Ỹ)  (13)

where we can easily see that x ⊖ x = 1/2. Using the definition of the negative gray scale, we also have a clear understanding of the scalar multiplication for α < 0

α ⊗ x = (−α) ⊗ (1 − x)  (14)

Here we have used (−1) ⊗ x = 1 − x and the distributive law for two real scalars α and β

(αβ) ⊗ x = α ⊗ (β ⊗ x)  (15)

The distributive law can be easily verified by using (9). Other laws such as the commutative and associative laws can be easily proved and are presented in Appendix I.
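The negative image, the subtraction operation, and the composition of scalar factors can also be verified numerically. A standalone sketch (the basic operations are repeated for self-containment; names are ours):

```python
def ratio(x):
    return (1.0 - x) / x

def lr_add(x, y):
    return 1.0 / (1.0 + ratio(x) * ratio(y))

def lr_mul(alpha, x):
    return 1.0 / (1.0 + ratio(x) ** alpha)

def lr_neg(x):
    """Negative gray scale: the classical negative image 1 - x."""
    return 1.0 - x

def lr_sub(x, y):
    """Generalized subtraction: x (-) y = x (+) neg(y)."""
    return lr_add(x, lr_neg(y))
```

The checks below confirm that x ⊖ x gives the new zero 1/2, that (−1) ⊗ x equals the classical negative 1 − x, and that scalar factors compose, (αβ) ⊗ x = α ⊗ (β ⊗ x).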

4) Properties: What would be the result if we add a constant α to an image x such that v = x ⊕ α? Since the zero is 1/2, we only consider two cases: α > 1/2 and α < 1/2. The order relations for the addition operation are shown in Table II.

When α is fixed, the result is a function of x. In the top row of Fig. 3, we show several cases in order to visualize the

TABLE II
ORDER RELATIONS OF THE LOG-RATIO ADDITION

Fig. 4. Enhancement of the dark areas of an image. Images from left to right are: original, enhanced by gamma correction, and enhanced by log-ratio addition. Only the center part of the image is shown due to space limitation.

effects. We can see that when α < 1/2 the dynamic range of the higher gray scale values is expanded at the cost of compressing that of the lower gray scale values. When α > 1/2, the effect is just the reverse of that of the previous case. Therefore, the operation x ⊕ α has a similar effect as that of the gamma correction operation defined as x^γ. For example, to enhance the dark areas of the cameraman image (8-bit/pixel), we could use α > 1/2 for the log-ratio and γ < 1 for gamma correction. The results are shown in Fig. 4.

For the subtraction operation, we can derive similar results as those of the addition operation by considering x ⊖ α = x ⊕ (1 − α).

For scalar multiplication α ⊗ x, we already know three special cases: α = 0, 1, and −1. In Table III, we list the order


TABLE III
ORDER RELATIONS OF THE LOG-RATIO SCALAR MULTIPLICATION

relations for α > 0. The corresponding relationships for the case of α < 0 can be easily derived by using (14). In the bottom row of Fig. 3, we show several cases which help us to understand the characteristics of the scalar multiplication. We can see from the figure that when 0 < α < 1, the dynamic ranges of the gray scale values close to 0 or 1 are expanded and that near the middle is compressed. When α > 1, the dynamic range of the gray scale values near 1/2 is expanded and those near 0 or 1 are compressed. In fact, when α is set to a large number, say 10, the effect is just like a thresholding process, i.e., α ⊗ x → 1 for x > 1/2 and α ⊗ x → 0 for x < 1/2.

5) Computation: Computations can be directly performed

using the new operations. For example, for any real numbers α and β, the weighted summation (α ⊗ x) ⊕ (β ⊗ y) is given by

(α ⊗ x) ⊕ (β ⊗ y) = 1/(1 + X̃^α Ỹ^β)  (16)

The subtraction operation can be regarded as a weighted summation with α = 1 and β = −1. In image processing, the classical weighted average of a block of N × N pixels {xᵢ} is given by

x̄ = Σᵢ wᵢxᵢ  (17)

where Σᵢ wᵢ = 1. Similarly, the generalized weighted averaging operation is defined as

x̄ = (w₁ ⊗ x₁) ⊕ (w₂ ⊗ x₂) ⊕ ··· = P_x/(P_x + P_{1−x})  (18)

where P_x = Πᵢ xᵢ^{wᵢ} and P_{1−x} = Πᵢ (1 − xᵢ)^{wᵢ} are the weighted geometric means of the original and the negative images, respectively.

An indirect computational method is through the nonlinear function (6). For example, the generalized weighted average can be calculated as follows:

x̄ = φ⁻¹[Σᵢ wᵢφ(xᵢ)]  (19)

Although this computational method may be more efficient than the direct method [see (18)] in certain applications, the direct method provides us with more insight into the operation. For example, (18) clearly shows the connection between the generalized weighted average and the geometric mean.
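The equivalence of the direct form (18) and the indirect form (19) is easy to confirm numerically. A sketch (function names are ours; φ here is the logit, which matches the log-ratio function up to a sign convention that does not affect the average):

```python
import math

def phi(x):
    return math.log(x / (1.0 - x))

def phi_inv(t):
    return 1.0 / (1.0 + math.exp(-t))

def gen_average_indirect(xs, ws):
    """Eq. (19): phi_inv of the weighted sum of phi(x_i)."""
    return phi_inv(sum(w * phi(x) for w, x in zip(ws, xs)))

def gen_average_direct(xs, ws):
    """Eq. (18): weighted geometric means of the image and its negative."""
    p_pos = math.prod(x ** w for x, w in zip(xs, ws))
    p_neg = math.prod((1.0 - x) ** w for x, w in zip(xs, ws))
    return p_pos / (p_pos + p_neg)

xs, ws = [0.2, 0.6, 0.7], [0.2, 0.3, 0.5]   # weights sum to 1
```

Both routes give the same average, and the direct route makes the geometric-mean structure explicit.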

B. Log-Ratio, the Generalized Linear System and the Bregman Divergence

We study the connection between the log-ratio and the Bregman divergence. This connection not only provides new insights and a geometrical interpretation of the log-ratio, but also suggests a new way to develop generalized linear systems.

1) Log-Ratio and the Bregman Divergence: To motivate our discussion, we study the following problem: since the classical weighted average can be regarded as the solution of the following optimization problem

x̄ = arg min_y Σᵢ wᵢ(y − xᵢ)²  (20)

what is the corresponding optimization problem (if it exists) that leads to the generalized weighted average stated in (19)?

To study this problem, we need to recall some recent results in the Bregman divergence [30], [31]. The Bregman divergence of two vectors x and y, denoted D_F(x, y), is defined as follows:

D_F(x, y) = F(x) − F(y) − ⟨x − y, ∇F(y)⟩  (21)

where F(x) is a strictly convex and differentiable function defined over an open convex domain and ∇F(y) is the gradient of F evaluated at the point y. The centroid of a set of vectors {xᵢ}, defined in terms of minimizing the sum of the Bregman divergence, is studied in a recent paper [31]. The weighted left-sided centroid¹ is given by

c_L = (∇F)⁻¹[Σᵢ wᵢ∇F(xᵢ)]  (22)

Comparing (19) with (22), we can see that when each xᵢ is a scalar, the generalized weighted average of the log-ratio is a special case of the weighted left-sided centroid with φ(x) = ∇F(x). In fact, it is easy to show that

F(x) = ∫ log[x/(1 − x)] dx = x log x + (1 − x) log(1 − x)  (23)

where the constant of the indefinite integral is omitted. The previous function F(x) is called the bit entropy and the corresponding Bregman divergence is defined as

D_F(x, y) = x log(x/y) + (1 − x) log[(1 − x)/(1 − y)]  (24)

which is called the logistic loss.
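A numerical check that the Bregman divergence (21) generated by the bit entropy reproduces the logistic loss (24). This is a pure-Python sketch; function names are ours.

```python
import math

def bit_entropy(x):
    """F(x) = x log x + (1 - x) log(1 - x)."""
    return x * math.log(x) + (1.0 - x) * math.log(1.0 - x)

def grad_F(x):
    """Closed-form gradient of F: the logit log(x / (1 - x))."""
    return math.log(x / (1.0 - x))

def bregman(x, y):
    """Eq. (21) in the scalar case: F(x) - F(y) - (x - y) grad_F(y)."""
    return bit_entropy(x) - bit_entropy(y) - (x - y) * grad_F(y)

def logistic_loss(x, y):
    """Eq. (24)."""
    return x * math.log(x / y) + (1.0 - x) * math.log((1.0 - x) / (1.0 - y))
```

The loss vanishes only at x = y and is otherwise positive, which is the geometrical "distance" role it plays for the log-ratio system.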

¹Since the Bregman divergence is not symmetric, i.e., D_F(x, y) ≠ D_F(y, x), there is also a right-sided centroid c_R = Σᵢ wᵢxᵢ, where Σᵢ wᵢ = 1.


Therefore, the log-ratio has an intrinsic connection with the Bregman divergence through the generalized weighted average. This connection reveals a geometrical property of the log-ratio, which uses a particular Bregman divergence to measure the generalized distance between two points. This can be compared with the classical weighted average, which uses the Euclidean distance. In terms of the loss function, while the log-ratio approach uses the logistic loss function, the classical weighted average uses the squared loss function.

2) Generalized Linear Systems and the Bregman Divergence: Following this approach, we can derive the connections between the Bregman divergence and other well-established generalized linear systems, such as the MHS with φ(x) = log(x), where x ∈ (0, ∞), and the LIP model [26] with φ(x) = −log(1 − x), where x ∈ (−∞, 1). The corresponding Bregman divergences are the Kullback–Leibler (KL) divergences for the MHS [31]

D_F(x, y) = x log(x/y) − x + y  (25)

and the LIP

D_F(x, y) = (1 − x) log[(1 − x)/(1 − y)] + x − y  (26)

respectively. In our past work [32], we demonstrated that the LIP model has an information-theoretic interpretation. The relationship between the KL divergence and the LIP model reveals a novel insight into its geometrical property.
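The MHS case can be verified the same way: the Bregman divergence generated by F(x) = x log x − x, whose gradient is log x, is the generalized KL divergence. A sketch under that assumption:

```python
import math

def F(t):
    """Convex generator whose gradient, log t, is the MHS nonlinearity."""
    return t * math.log(t) - t

def bregman_mhs(x, y):
    """D_F(x, y) = F(x) - F(y) - (x - y) F'(y), with F'(y) = log y."""
    return F(x) - F(y) - (x - y) * math.log(y)

def gen_kl(x, y):
    """Generalized (unnormalized) KL divergence: x log(x/y) - x + y."""
    return x * math.log(x / y) - x + y
```

The same two-line check, with the generator swapped, works for the LIP divergence as well.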

3) A New Generalized Linear System: Because for a specific Bregman divergence there is a corresponding generalized weighted average (the left-sided centroid), we can define a new generalized linear system (see Fig. 1) by letting φ(x) = ∇F(x). For example, the log-ratio, MHS, and LIP can be developed from their respective Bregman divergences. This approach of defining a generalized linear system is based upon using the Bregman divergence as a measure of the distance of two signal samples. The measure can be related to the geometrical properties of the signal space. A new generalized linear system for solving the out-of-range problem can be developed by using the following Bregman divergence (called the "Hellinger-like" divergence in Table I of [31]):

D_F(x, y) = (1 − xy)/√(1 − y²) − √(1 − x²)  (27)

which is generated by the convex function F(x) = −√(1 − x²), whose domain is (−1, 1). The nonlinear function for the corresponding generalized linear system is as follows:

φ(x) = x/√(1 − x²)  (28)

In this paper, the generalized linear system is called the tangent system² and the new addition and scalar multiplication operations [defined in (1) and (2) with φ(x) given in (28)] are called tangent operations.

For image processing applications, we can first linearly map the pixel value from the interval (0, 1) to a new interval (−1, 1).

²The term tangent comes directly from a geometrical interpretation of the function φ(x) = x/√(1 − x²) as the tangent of an angle, i.e., φ(x) = tan θ, where θ is an angle determined uniquely by x.

Fig. 5. Properties of the tangent addition operation x ⊕ α, where α is a constant.

Then the image is processed by using the tangent operations. The result is then mapped back to the interval (0, 1) through the inverse mapping. We can easily verify that the signal set (−1, 1) is closed under the tangent operations. As such, the tangent system can be used as an alternative to the log-ratio to solve the out-of-range problem. The properties and applications of the tangent operations can be studied in a similar way as those presented in Section III-A. We can define the negative image and the subtraction operation, and study the order relations for the tangent operations. As an example, we show in Fig. 5 the results of adding a constant α to an N-bit image using the tangent addition. In this simulation, we use a simple linear function to map the image from (0, 1) to (−1, 1). We can see that the effect is similar to that of the log-ratio addition (see Fig. 4).
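The tangent operations follow directly from (28); inverting φ gives φ⁻¹(t) = t/√(1 + t²). A sketch (function names are ours):

```python
import math

def phi(x):
    """Tangent-system nonlinearity, eq. (28): x / sqrt(1 - x^2)."""
    return x / math.sqrt(1.0 - x * x)

def phi_inv(t):
    """Inverse of phi: t / sqrt(1 + t^2), mapping back into (-1, 1)."""
    return t / math.sqrt(1.0 + t * t)

def t_add(x, y):
    """Tangent addition; the result always stays inside (-1, 1)."""
    return phi_inv(phi(x) + phi(y))

def t_mul(alpha, x):
    """Tangent scalar multiplication."""
    return phi_inv(alpha * phi(x))
```

Since φ⁻¹ maps the whole real line into (−1, 1), closure, and hence freedom from the out-of-range problem, is built into the operations.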

4) Summary: In this section, we show a novel connection between the generalized linear systems and the Bregman divergence. We also propose a new system. In Table IV, we summarize the key components of some generalized linear systems which can be developed by using the Bregman divergence. However, a detailed comparison of these systems is beyond the scope of this paper. Pinoli [24] presented a comparison of some generalized linear systems from a vector space point of view.

IV. PROPOSED ALGORITHM

A. Dealing With Color Images

We first convert a color image from the RGB color space to the HSI or the LAB color space. The chrominance components, such as the H and S components, are not processed. After the luminance component is processed, the inverse conversion is performed, yielding an enhanced color image in the RGB color space. The rationale for processing only the luminance component is to avoid a potential problem of altering the white balance of the image, which can occur when the RGB components are processed individually.
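The luminance-only strategy described above can be sketched as follows. This is a minimal illustration using Python's standard colorsys module, with HSV's V channel standing in for the intensity channel of HSI; the enhancement function passed in is a placeholder for the processing described in the rest of this section.

```python
import colorsys

def enhance_luminance_only(rgb, enhance):
    # rgb: (r, g, b) tuple with components in [0, 1].
    # enhance: a function applied to the luminance channel only.
    # Sketch of the paper's strategy: convert to a luminance/chrominance
    # space, process only the luminance, then convert back. The chrominance
    # (hue, saturation) is left untouched, so the white balance is preserved.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v = min(max(enhance(v), 0.0), 1.0)   # enhanced luminance, clipped to [0, 1]
    return colorsys.hsv_to_rgb(h, s, v)

# Example: a simple gamma-like brightening of the luminance.
out = enhance_luminance_only((0.2, 0.4, 0.6), lambda v: v ** 0.8)
```

With an identity enhancement the round trip returns the original pixel, and with any enhancement the hue component is unchanged, which is the point of the design choice discussed above.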

B. Enhancement of the Detail Signal

1) The Root Signal and the Detail Signal: Let us denote the median filtering operation as a function y = f_MF(x), which maps the input x to the output y. An IMF operation can be represented as: y_{k+1} = f_MF(y_k), where k is the iteration index and y_0 = x. The signal y_k is usually called the root signal of


DENG: A GENERALIZED UNSHARP MASKING ALGORITHM 1255

TABLE IV
KEY COMPONENTS OF SOME GENERALIZED LINEAR SYSTEMS MOTIVATED BY THE BREGMAN DIVERGENCE. THE DOMAIN OF THE LIP MODEL IS (0, M). IN THIS TABLE, IT IS NORMALIZED BY M TO SIMPLIFY NOTATION

Fig. 6. Mean squared difference of images between two iterations, (1/N) Σ_i (y_{k+1}(i) − y_k(i))², where N is the number of pixels in the image. Results using two settings of the median filter and using the “cameraman” image are shown.

the filtering process if y_{k+1} = y_k. It is convenient to define a root signal as follows:

y_R = y_K,  K = min{k}  subject to  d(y_{k+1}, y_k) < T   (29)

where d(y_{k+1}, y_k) is a suitable measure of the difference between the two images and T is a user-defined threshold. For natural images, it is usually the case that the mean squared difference, defined as d(y_{k+1}, y_k) = (1/N) Σ_i (y_{k+1}(i) − y_k(i))² (N is the number of pixels), is a monotonically decreasing function of k. An example is shown in Fig. 6.

It can be easily seen that the definition of the root signal depends upon the threshold T. For example, it is possible to set a large enough value of T such that even the first iterate is the root signal. Indeed, after about five iterations the difference d(y_{k+1}, y_k) changes only very slightly. As such, we can regard the signal obtained after about five iterations as the root signal.
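A minimal sketch of the IMF and the threshold-based root-signal definition, shown here in one dimension for brevity (the paper operates on 2-D images); the window size, stopping threshold, and function names are illustrative.

```python
from statistics import median

def median_filter(x, half=1):
    # 1-D median filter with window size 2*half + 1 (edges replicated).
    n = len(x)
    out = []
    for i in range(n):
        win = [x[min(max(j, 0), n - 1)] for j in range(i - half, i + half + 1)]
        out.append(median(win))
    return out

def msd(a, b):
    # Mean squared difference between two signals, used as the stopping
    # criterion for the iterative median filter (see Fig. 6).
    return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)

def root_signal(x, threshold=1e-4, max_iter=20):
    # Iterative median filtering (IMF): y_{k+1} = MF(y_k), with y_0 = x.
    # Stop when the mean squared difference falls below the threshold.
    y = list(x)
    for _ in range(max_iter):
        y_next = median_filter(y)
        if msd(y_next, y) <= threshold:
            return y_next
        y = y_next
    return y

signal = [0, 0, 1, 0, 0, 5, 5, 5, 0, 0]   # an isolated spike plus a step edge
root = root_signal(signal)
# The isolated spike is removed while the step edge is preserved.
```

This edge-preserving behavior of the root signal is what allows the detail signal (input minus root) to contain the fine details without the under- and over-shoots that a linear low-pass filter would introduce around edges.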

Of course, the number of iterations and the size and shape of the filter mask have certain impacts on the root signal. The properties of the root signal have been extensively studied [33]. Here we use an example to illustrate the advantage of the proposed algorithm over the classical unsharp masking algorithm.

The original signal, shown in Fig. 7, is the 100th row of the “cameraman” image. The root signal is produced by an IMF filter with a (3 × 3) mask and three iterations. The signal for the classical algorithm is produced by a linear low-pass filter with a uniform mask of (5 × 5). The gain for both algorithms is three. Comparing the enhanced signals (last row in the figure), we can clearly see that while the result of the classical unsharp masking algorithm suffers from the out-of-range problem and the halo effect (under-shoots and over-shoots), the result of the proposed algorithm is free of such problems.

2) Adaptive Gain Control: We can see from Fig. 3 that to enhance the detail signal the gain must be greater than one. Using a universal gain for the whole image does not lead to good results, because enhancing the small details requires a relatively large gain. However, a large gain can lead to the saturation of the detail signal at values larger than a certain threshold. Saturation is undesirable because different amplitudes of the detail signal are mapped to the same amplitude of either 1 or 0. This leads to loss of information. Therefore, the gain must be adaptively controlled.

In the following, we only describe the gain control algorithm for use with the log-ratio operations. A similar algorithm can be easily developed for use with the tangent operations. To control the gain, we first perform a linear mapping of the detail signal d to a new signal c

c = 2d − 1   (30)

such that the dynamic range of c is (−1, 1). A simple idea is to set the gain as a function of the signal c and to gradually decrease the gain from its maximum value γ_MAX when |c| = 0 to its minimum value γ_MIN when |c| = 1. More specifically, we propose the following adaptive gain control function

γ(c) = α + β exp(−|c|^η)   (31)

where η is a parameter that controls the rate of decrease. The two parameters α and β are obtained by solving the equations: γ(0) = γ_MAX



Fig. 7. Illustration of the difference between the proposed generalized unsharp masking algorithm (left panel) and the classical unsharp masking algorithm (right panel). Comparing the two enhanced signals shown in the bottom row of the figure, we can clearly see that while the classical unsharp masking algorithm (right) produces over- and under-shoots around sharp edges (halo effects), the proposed algorithm (left) is almost free of these effects.

and γ(1) = γ_MIN. For a fixed η, we can easily determine the two parameters as follows:

β = (γ_MAX − γ_MIN)/(1 − e^{−1})   (32)

and

α = γ_MAX − β.   (33)

Although both γ_MIN and γ_MAX could be chosen based upon each individual image processing task, in general it is reasonable to set γ_MIN = 1. This setting follows the intuition that when the amplitude of the detail signal is large enough, it does not need further amplification. For example, we can see that

1 ⊗ d = d   (34)

As such, the scalar multiplication has little effect. We now study the effects of γ_MAX and η by setting γ_MIN = 1.
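The adaptive gain control described above can be sketched as follows. The exponential form of the gain function and the closed-form constants α and β are assumptions chosen to satisfy the stated boundary conditions γ(0) = γ_MAX and γ(±1) = γ_MIN; all names are illustrative.

```python
import math

def make_gain(gamma_max, gamma_min=1.0, eta=2.0):
    # Gain law gamma(c) = alpha + beta * exp(-|c|**eta), with alpha and beta
    # chosen so that gamma(0) = gamma_max and gamma(1) = gamma_min:
    #   beta  = (gamma_max - gamma_min) / (1 - exp(-1))
    #   alpha = gamma_max - beta
    beta = (gamma_max - gamma_min) / (1.0 - math.exp(-1.0))
    alpha = gamma_max - beta

    def gain(c):
        # c is the detail signal linearly mapped into [-1, 1]; the gain
        # decreases smoothly as the detail amplitude |c| grows.
        return alpha + beta * math.exp(-abs(c) ** eta)

    return gain

gain = make_gain(gamma_max=5.0, gamma_min=1.0, eta=2.0)
# gain(0.0) is gamma_max (maximum amplification for the smallest details);
# gain(1.0) is gamma_min (no further amplification for large details).
```

Small-amplitude details (c near 0) thus receive the largest gain, while large-amplitude details receive a gain near one, which avoids the saturation problem of a fixed large gain.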

In Fig. 8, we show the nonlinear mapping function obtained by using adaptive gain control with γ_MIN = 1 for the four combinations of γ_MAX and η. We also compare these functions to that obtained using a fixed gain. We can see that while the fixed gain leads to saturation when the amplitude of the detail signal is large, the adaptive gain does not suffer from this problem.

For the adaptive gain control (dashed lines), a larger value of γ_MAX leads to a larger amplification of the amplitude of the

Fig. 8. Illustrations of adaptive gain control for four combinations of γ_MAX and η. The results of using a fixed gain are also shown.

detail signal around 1/2. This is evident when we compare the gradients of those dashed lines around 1/2. Recall that 1/2 is the zero (smallest amplitude, i.e., the log-ratio equivalent of 0) of the detail signal. Therefore, a large value of γ_MAX helps enhance the minute details of the image. The role of the parameter η is to control the amplification behavior of the nonlinear mapping function when |c| is relatively large. Comparing the two plots in the bottom row of Fig. 8, we can see that for the larger η the nonlinear mapping function is close to saturation when |c| is large. As such, the setting in the bottom left is better than that in the bottom right. We also observe that when γ_MAX is relatively small, the effect of the adaptive gain control is not significant. This is also observed in our experiments with images. As such, adaptive gain control is useful when a relatively large γ_MAX must be used to enhance the minute details and to avoid the saturation effect.

C. Contrast Enhancement of the Root Signal

For contrast enhancement, we use adaptive histogram equalization implemented by a Matlab function in the Image Processing Toolbox. The function, called “adapthisteq,” has a parameter controlling the contrast. This parameter is determined by the user through experiments to obtain the most visually pleasing result. In our simulations, we use default values for the other parameters of the function.
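To make the role of the contrast (clip-limit) parameter concrete, here is a simplified single-tile sketch of contrast-limited histogram equalization. Matlab's adapthisteq additionally partitions the image into tiles and interpolates between per-tile mappings; the global version below only illustrates how the clip limit caps the contrast stretch, and all names are illustrative.

```python
def clahe_global(pixels, bins=256, clip_limit=0.005):
    # pixels: flat list of gray values in [0, 1].
    # clip_limit plays the role of adapthisteq's contrast parameter: it caps
    # each histogram bin (as a fraction of all pixels) before the CDF is
    # built, which limits how strongly the contrast can be stretched.
    n = len(pixels)
    hist = [0] * bins
    for p in pixels:
        hist[min(int(p * bins), bins - 1)] += 1
    cap = max(1, int(clip_limit * n))
    excess = sum(max(h - cap, 0) for h in hist)
    # Clip each bin and redistribute the excess uniformly.
    hist = [min(h, cap) + excess // bins for h in hist]
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = cdf[-1]
    # Map each pixel through the (clipped) cumulative histogram.
    return [cdf[min(int(p * bins), bins - 1)] / scale for p in pixels]

# Example: a mostly-flat image with a minority bright region.
levels = [0.2] * 90 + [0.8] * 10
stretched = clahe_global(levels, clip_limit=0.5)
```

A small clip limit leaves the gray-level mapping close to linear (little contrast change), while a large clip limit approaches ordinary histogram equalization; this is the trade-off the user tunes visually.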

V. RESULTS AND COMPARISON

All test images and Farbman’s results (called combined.png for each test image) are downloaded from the Internet: www.cs.huji.ac.il/~danix/epd/. We use the canyon image called Hunt’s Mesa (shown in the top left of Fig. 12) to study the proposed algorithms. We first show the effects of the two contributing parts: contrast enhancement and detail enhancement. As shown in Fig. 9, contrast enhancement by adaptive histogram equalization does remove the haze-like effect of the original image, and the contrast of the cloud is also greatly enhanced.



Fig. 9. Comparison of individual effects of contrast enhancement and detail enhancement. Images from left to right: original, only with contrast enhancement using adaptive histogram equalization, only with detail enhancement, and with both enhancements.

Fig. 10. Results of the proposed algorithm using a (3 × 3) mask with different shapes. Top left: square shape. Top right: diagonal cross. Bottom left: horizontal-vertical cross. To illustrate the halo effect, a linear filter with a (3 × 3) uniform mask is used to replace the median filter in the proposed algorithm. The result is shown in the bottom right. The halo effects are marked by red ellipses.

However, the minute details on the rocks are not sharpened. On the other hand, using only detail enhancement does sharpen the image but does not improve the overall contrast. When we combine both operations, both contrast and details are improved.

Next, we study the impact of the shape of the filter mask of the median filter. In Fig. 10, we show results of the proposed algorithm using a (3 × 3) filter mask with shapes of square, diagonal cross, and horizontal-vertical cross. For comparison, we also show the result of replacing the median filter with a linear filter having a (3 × 3) uniform mask. As we can observe from these results, the use of a linear filter leads to the halo effect, which appears as a bright line surrounding the relatively dark mountains (for an example, see the bottom right figure in Fig. 10). Using a median filter, the halo effect is mostly avoided, although for the square and diagonal cross masks there are still a number of spots with very mild halo effects. However, the result from the horizontal-vertical cross mask is almost free of any halo effect. In order to completely remove the halo effect, adaptive filter mask selection could be implemented: the horizontal-vertical cross mask for strong vertical/horizontal edges, the diagonal cross mask for strong diagonal edges, and the square mask for the rest of the image. However, in practical applications, it may be sufficient to use a fixed mask for the whole image to reduce the computational time.

We have also performed experiments by replacing the log-ratio operations with the tangent operations while keeping the same parameter settings. We observed that there is no visually significant difference between the results. The results shown in Fig. 11 were obtained with the following settings: 1) a median filter having a (3 × 3) uniform mask and three iterations, 2) a fixed gain, and 3) a contrast enhancement factor of 0.005.



Fig. 11. The original image is shown on the left. Results of the proposed algorithm using the log-ratio (middle) and the tangent (right) operations. We can see there is no visually significant difference between the middle and right images.

Fig. 12. Comparison with other published results. Top left: original image. Top right: produced by the proposed algorithm. Bottom left: Farbman’s result (combined.png). Bottom right: produced by Meylan’s algorithm using the author’s source code.

Finally, we compare the proposed algorithm with two other published results. In the proposed algorithm, the following settings are used: 1) a median filter having a (3 × 3) horizontal-vertical cross mask and five iterations, 2) adaptive gain, and 3) a contrast enhancement factor of 0.005. The source code for Meylan’s paper [19] on a retinex-based adaptive filter is available at: ivrgwww.epfl.ch/supplementary_material. We compare our result with Farbman’s result [13] because it is a multiresolution tone manipulation method which identified the problem of the halo effect and proposed to use edge-preserving filters to replace linear filters. The use of an IMF in our work is inspired by Farbman’s work. We compare our result with

Meylan’s result because it is a state-of-the-art retinex-type algorithm which requires a sophisticated rescaling process with carefully tuned parameters for every image. The use of the log-ratio approach in our method to systematically eliminate the rescaling process is clearly an advantage. Comparing these images, we can see that the processed images are free of the haze-like effect present in the original image. They are also free of halo effects. The contrast and details are significantly enhanced in our result and in Farbman’s result. This can be clearly seen in the sky part of the image, where the contrast of the clouds is revealed. The contrast of Meylan’s result is not as good as that of our result and Farbman’s result. We believe the reason is that the default setting of parameters for the rescaling process is not optimal for this image. When these parameters are tuned to this image, a comparable result could be obtained. This highlights the importance of the rescaling process in retinex-type algorithms in particular, and in other image enhancement algorithms using classical addition and scalar multiplication operations in general.

Fig. 13. Comparison using the “badger,” “door,” and “rock” images. Images from top to bottom are: original (first row), Farbman’s results (second row), results of the proposed algorithm using a contrast parameter of 0.001 (third row), and using a contrast parameter of 0.02 (fourth row).

In Fig. 13, we present more examples showing that the performance of the proposed algorithm (third row) is similar to that of Farbman’s algorithm (second row). We do not present results for Meylan’s algorithm because, using the default parameter settings, the results are not as good as those of Farbman’s.

In this paper, we do not attempt to quantitatively measure the performance of the proposed algorithm. This is a difficult task. Part of the difficulty comes from the fact that image enhancement results, such as those reported in this paper, are a subject of human evaluation. For example, it is a matter of subjective assessment to compare the result of our algorithm (top right of Fig. 12) and that of Farbman’s (bottom left of Fig. 12). We believe that there is no definite answer to the question of which one is better than the other. We note that objective assessment of image enhancement is a very important topic for further research. Interested readers can find some recent works in [34], [35].

In the following, we present some results obtained in an attempt to understand the contribution of the contrast parameter to the average contrast of the processed image. There are a number of ways to measure the average contrast [6]. We use the following measure due to its simplicity:

C = (1/N) Σ_{i=1}^{N} |x_i − x̄| / x̄   (35)

where x_i is the normalized gray scale of the ith pixel and x̄ = (1/N) Σ_{i=1}^{N} x_i is the mean. In Fig. 14, we

show the average contrast as a function of the contrast parameter for the four test images. The average contrast is calculated using the I component (in the HSI color space) of the processed color image. We can clearly see that as the parameter increases, the average contrast increases and then saturates. However, when the average contrast reaches its saturation value, the image appears to be overly enhanced. This is shown in the fourth row of Fig. 13. For example, the smooth and out-of-focus background of the original “badger” image has been greatly enhanced. This is not a desirable outcome for emphasizing the subject, which is the animal. The color of the door in the “door” image appears unnaturally bright. In the “rock” image, a black dot appears in the middle of the image. The black dot, which may be due to a tiny speck of dust on the camera sensor, is hardly visible in the original image. We find that setting the contrast parameter such that the average contrast is not saturated generally produces visually good results. For this purpose, the contrast parameter is between 0.001 and 0.005. Results are shown in the third row of Fig. 13.
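For illustration, such an average contrast measure can be computed in a few lines. The Weber-like form below (mean absolute deviation from the mean, normalized by the mean) is an assumption for illustration, since the exact symbols of (35) are not reproduced here.

```python
def average_contrast(pixels):
    # pixels: flat list of normalized gray values in (0, 1].
    # A simple global contrast measure: the mean absolute deviation from
    # the mean, normalized by the mean (a Weber-like definition).
    n = len(pixels)
    mu = sum(pixels) / n
    return sum(abs(p - mu) for p in pixels) / (n * mu)

low = average_contrast([0.45, 0.5, 0.55] * 10)   # narrow gray-level range
high = average_contrast([0.1, 0.5, 0.9] * 10)    # stretched gray-level range
# Stretching the gray levels increases the measured average contrast.
```

Sweeping the contrast parameter and plotting such a measure reproduces the qualitative behavior of Fig. 14: the curve rises with the parameter and then levels off at a saturation value.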

Page 12: A Generalized Unsharp Masking Algorithm


Fig. 14. Average contrast of the processed image as a function of the contrast parameter.

It is interesting to note that the default setting of the contrast parameter is 0.005 for the Matlab Image Processing Toolbox function “adapthisteq.”

The other major parameter of the proposed algorithm is the maximum gain γ_MAX, which controls the sharpness of the image. However, the sharpness of the image is difficult to measure. For example, when we use the average absolute gradient value as a measure, we face at least two problems: (a) it does not necessarily correlate with human perception, and (b) it seems to increase with the increase of the gain γ_MAX. Therefore, a practical approach, taken by developers of image processing software, is to let the user experiment with different settings of the parameter and choose the one that produces the desired result.

We now comment on the computational complexity of the proposed algorithm, which is implemented using Matlab. On a personal computer with an Intel Core2 CPU (2.40 GHz), it takes about 0.88 s for the proposed algorithm to process a (600 × 800) color image. It takes about 98 s for Meylan’s software (using default parameter settings) to process the same image. The running time for Farbman’s algorithm is not available. However, according to [13], the running time for the weighted least squares (WLS) filter in Farbman’s algorithm is about 3.5 s per megapixel. Accordingly, an estimate of the running time of the WLS filter for a (600 × 800) image is about 1.7 s. Therefore, the proposed algorithm is relatively fast compared to these two algorithms. We note that the previous discussion can only provide a very rough comparison of the computational complexity. The running time of an algorithm depends upon factors such as the hardware platform and the level of software optimization.

VI. CONCLUSION AND FURTHER WORK

In this paper, we use an exploratory data model as a unified framework for developing generalized unsharp masking algorithms. Using this framework, we propose a new algorithm to address three issues associated with classical unsharp masking algorithms: 1) simultaneously enhancing contrast and sharpness by means of individual treatment of the model component and the residual, 2) reducing the halo effect by means of an edge-preserving filter, and 3) eliminating the rescaling process by means of log-ratio and tangent operations. We present a study of the log-ratio operations and their properties. In particular, we reveal a new connection between the Bregman divergence and the generalized linear systems (including the log-ratio, MHS, and the LIP model). This connection not only provides us with a novel insight into the geometrical property of generalized linear systems, but also opens a new pathway to develop such systems. We present a new system, called the tangent system, which is based upon a specific Bregman divergence.

Experimental results, which are comparable to recently published results, show that the proposed algorithm is able to significantly improve the contrast and sharpness of an image. In the proposed algorithm, the user can adjust the two parameters controlling the contrast and sharpness to produce the desired results. This makes the proposed algorithm practically useful.

Extensions of this work can be carried out in a number of directions. In this work, we only test the IMF as a computationally inexpensive edge-preserving filter. It is expected that other more advanced edge-preserving filters, such as the bilateral filter, the non-local means filter, the least squares filters, and wavelet-based denoising, can produce similar or even better results. The proposed algorithm can be easily extended to multiresolution processing. This will allow the user to have better control of the detail signal enhancement. For contrast enhancement, we only use adaptive histogram equalization. It is expected that using advanced algorithms, such as recently published retinex and dehazing algorithms, can improve the quality of enhancement. We do not consider the problem of avoiding the enhancement of noise. This problem can be tackled by designing an edge-preserving filter which is not sensitive to noise; the idea of the cubic filter can be useful here. It can also be tackled by designing a smart adaptive gain control process such that the gain for noise pixels is set to a small value. We have shown that the proposed tangent system is an alternative to the log-ratio for systematically solving the out-of-range problem. It is an interesting task to study the applications of the tangent system and compare it with other generalized systems.

APPENDIX I

We present the log-ratio and the tangent operations from a vector space point of view [29]. These operations are defined in Table IV. We denote S as a set of real numbers in the interval I. The interval is I = (0, 1) for the log-ratio operations and I = (−1, 1) for the tangent operations. In the following, we use the general notations ⊕ and ⊗ with the understanding that they can be applied to both the log-ratio and the tangent cases.

We can verify that the set S, together with the binary operation ⊕ on S, is an abelian group, such that the following axioms are satisfied:
• the group is closed under the ⊕ operation, i.e., x ⊕ y ∈ S for all x, y ∈ S;
• the ⊕ operation is associative and commutative, i.e., x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z and x ⊕ y = y ⊕ x for all x, y, z ∈ S;
• there is an identity element e ∈ S such that x ⊕ e = x for all x ∈ S;
• there is an inverse element ⊖x of x such that x ⊕ (⊖x) = e for all x ∈ S.



For the log-ratio operations, the identity element is e = 1/2, which is the new definition of zero [see (10)], and the inverse element [see (12)] is given by the inverse image ⊖x = 1 − x. For the tangent operations, the identity element is e = 0 and the inverse element is given by the inverse image ⊖x = −x.

We can also verify that the previously defined abelian group S and a field F (the real numbers), together with an operation of scalar multiplication ⊗ of each element of S by each element of F on the left, form a vector space, such that for all α, β ∈ F and x, y ∈ S the following conditions are satisfied:
• α ⊗ x ∈ S;
• α ⊗ (x ⊕ y) = (α ⊗ x) ⊕ (α ⊗ y);
• (α + β) ⊗ x = (α ⊗ x) ⊕ (β ⊗ x);
• (αβ) ⊗ x = α ⊗ (β ⊗ x);
• there is an element 1 ∈ F such that 1 ⊗ x = x for all x ∈ S.

For both the log-ratio and the tangent operations, it is easy to see that 1 ⊗ x = x. It is now clear that the set S is closed under the log-ratio and the tangent operations.
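The axioms above can also be checked numerically. The sketch below instantiates the log-ratio operations, assuming φ(x) = log(x/(1 − x)) on (0, 1) with its logistic inverse, and verifies the group and vector space identities at sample points; the tangent case can be checked the same way with its own φ.

```python
import math

# Log-ratio operations on S = (0, 1): phi(x) = log(x / (1 - x)).
phi = lambda x: math.log(x / (1.0 - x))
phi_inv = lambda t: 1.0 / (1.0 + math.exp(-t))   # logistic function
add = lambda x, y: phi_inv(phi(x) + phi(y))      # generalized addition
mul = lambda a, x: phi_inv(a * phi(x))           # generalized scalar mult.
neg = lambda x: 1.0 - x                          # inverse element
ZERO = 0.5                                       # identity element

x, y, z, a, b = 0.3, 0.8, 0.6, 2.0, -1.5
eq = lambda u, v: abs(u - v) < 1e-12

# Abelian group axioms.
assert eq(add(x, y), add(y, x))                      # commutativity
assert eq(add(add(x, y), z), add(x, add(y, z)))      # associativity
assert eq(add(x, ZERO), x)                           # identity
assert eq(add(x, neg(x)), ZERO)                      # inverse

# Vector space axioms.
assert eq(mul(a, add(x, y)), add(mul(a, x), mul(a, y)))
assert eq(mul(a + b, x), add(mul(a, x), mul(b, x)))
assert eq(mul(a * b, x), mul(a, mul(b, x)))
assert eq(mul(1.0, x), x)
```

Because φ⁻¹ maps the whole real line into (0, 1), every result of ⊕ and ⊗ stays in S, which is the closure property stated above.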

REFERENCES

[1] G. Ramponi, “A cubic unsharp masking technique for contrast en-hancement,” Signal Process., pp. 211–222, 1998.

[2] S. J. Ko and Y. H. Lee, “Center weighted median filters and their ap-plications to image enhancement,” IEEE Trans. Circuits Syst., vol. 38,no. 9, pp. 984–993, Sep. 1991.

[3] M. Fischer, J. L. Paredes, and G. R. Arce, “Weighted median imagesharpeners for the world wide web,” IEEE Trans. Image Process., vol.11, no. 7, pp. 717–727, Jul. 2002.

[4] R. Lukac, B. Smolka, and K. N. Plataniotis, “Sharpening vector medianfilters,” Signal Process., vol. 87, pp. 2085–2099, 2007.

[5] A. Polesel, G. Ramponi, and V. Mathews, “Image enhancement viaadaptive unsharp masking,” IEEE Trans. Image Process., vol. 9, no. 3,pp. 505–510, Mar. 2000.

[6] E. Peli, “Contrast in complex images,” J. Opt. Soc. Amer., vol. 7, no.10, pp. 2032–2040, 1990.

[7] S. Pizer, E. Amburn, J. Austin, R. Cromartie, A. Geselowitz, T. Greer,B. Romeny, J. Zimmerman, and K. Zuiderveld, “Adaptive histogramequalization and its variations,” Comput. Vis. Graph. Image Process.,vol. 39, no. 3, pp. 355–368, Sep. 1987.

[8] J. Stark, “Adaptive image contrast enhancement using generalizationsof histogram equalization,” IEEE Trans. Image Process., vol. 9, no. 5,pp. 889–896, May 2000.

[9] E. Land and J. McCann, “Lightness and retinex theory,” J. Opt. Soc.Amer., vol. 61, no. 1, pp. 1–11, 1971.

[10] B. Funt, F. Ciurea, and J. McCann, “Retinex in MATLAB™,” J. Elec-tron. Imag., pp. 48–57, Jan. 2004.

[11] M. Elad, “Retinex by two bilateral filters,” in Proc. Scale Space, 2005,pp. 217–229.

[12] J. Zhang and S. Kamata, “Adaptive local contrast enhancement for thevisualization of high dynamic range images,” in Proc. Int. Conf. PatternRecognit., 2008, pp. 1–4.

[13] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preservingdecompositions for multi-scale tone and detail manipulation,” ACMTrans. Graph., vol. 27, no. 3, pp. 1–10, Aug. 2008.

[14] F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high-dynamic-range images,” in Proc. 29th Annu. Conf. Comput. Graph.Interactive Tech., 2002, pp. 257–266.

[15] R. Fattal, “Single image dehazing,” ACM Trans. Graph., vol. 27, no. 3,pp. 1–9, 2008.

[16] K. M. He, J. Sun, and X. O. Tang, “Single image haze removalusing dark channel prior,” in Proc. IEEE Conf. Comput. Vis. PatternRecognit., Jun. 2009, pp. 1956–1963.

[17] H. Shvaytser and S. Peleg, “Inversion of picture operators,” PatternRecognit. Lett., vol. 5, no. 1, pp. 49–61, 1987.

[18] G. Deng, L. W. Cahill, and G. R. Tobin, “The study of logarithmicimage processing model and its application to image enhancement,”IEEE Trans. Image Process., vol. 4, no. 4, pp. 506–512, Apr. 1995.

[19] L. Meylan and S. Süsstrunk, “High dynamic range image renderingusing a retinex-based adaptive filter,” IEEE Trans. Image Process., vol.15, no. 9, pp. 2820–2830, Sep. 2006.

[20] D. Marr, Vision: A Computational Investigation into the Human Repre-sentation and Processing of Visual Information. San Francisco, CA:Freeman, 1982.

[21] D. G. Myers, Digital Signal Processing Efficient Convolution andFourier Transform Technique. Upper Saddle River, NJ: Pren-tice-Hall, 1990.

[22] G. Deng and L. W. Cahill, “Image enhancement using the log-ratio approach,” in Proc. 28th Asilomar Conf. Signals, Syst. Comput., 1994, vol. 1, pp. 198–202.

[23] L. W. Cahill and G. Deng, “An overview of logarithm-based imageprocessing techniques for biomedical applications,” in Proc. 13th Int.Conf. Digital Signal Process., 1997, vol. 1, pp. 93–96.

[24] J. Pinoli, “A general comparative study of the multiplicative homomor-phic log-ratio and logarithmic image processing approaches,” SignalProcess., vol. 58, no. 1, pp. 11–45, 1997.

[25] A. V. Oppenheim, R. W. Schafer, and T. G. Stockham, Jr., “Nonlinearfiltering of multiplied and convolved signals,” Proc. IEEE, vol. 56, no.8, pp. 1264–1291, Aug. 1969.

[26] M. Jourlin and J.-C. Pinoli, “A model for logarithmic image pro-cessing,” J. Microsc., vol. 149, pp. 21–35, 1988.

[27] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preservingdecompositions for multi-scale tone and detail manipulation,” ACMTrans. Graph., vol. 27, no. 3, pp. 67:1–67:10, 2008.

[28] J. W. Tukey, Exploratory Data Analysis. Reading, MA: Addison-Wesley, 1977.

[29] J. B. Fraleigh, A First Course in Abstract Algebra. Reading, MA:Addison-Wesley, 1978.

[30] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with Bregman divergences,” J. Mach. Learn. Res., vol. 6, pp. 1705–1749, Oct. 2005.

[31] F. Nielsen and R. Nock, “Sided and symmetrized Bregman centroids,” IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2882–2904, Jun. 2009.

[32] G. Deng, “An entropy interpretation of the logarithmic image pro-cessing model with application to contrast enhancement,” IEEE Trans.Image Process., vol. 18, no. 5, pp. 1135–1140, May 2009.

[33] J. P. Fitch, E. J. Coyle, and N. C. Gallagher, Jr., “Root properties andconvergence rates of median filters,” IEEE Trans. Acoust. Speech,Signal Process., vol. 33, no. 1, pp. 230–240, Feb. 1985.

[34] S. Agaian, B. Silver, and K. Panetta, “Transform coefficient histogram-based image enhancement algorithms using contrast entropy,” IEEETrans. Image Process., vol. 16, no. 3, pp. 741–758, Mar. 2007.

[35] M. Cadík, M. Wimmer, L. Neumann, and A. Artusi, “Evaluation ofHDR tone mapping methods using essential perceptual attributes,”Comput. Graph., vol. 32, no. 3, pp. 330–349, Jun. 2008.

Guang Deng is a Reader and Associate Professorin electronic engineering at La Trobe University,Victoria, Australia. His current research interestsinclude Bayesian signal processing and generalizedlinear systems.