IMAGE FUSION USING BLUR ESTIMATION
Seyfollah Soleimani1,2, Filip Rooms1, Wilfried Philips1, Linda Tessens1
1 TELIN - IPI - IBBT - Ghent University, St-Pietersnieuwstraat 41, B-9000 Gent, Belgium, tel: +32 9 264 34 12, fax: +32 9 264 42 95
2 Arak University, Shahid Beheshti Street, Arak, Iran
{Seyfollah.Soleimani, Filip.Rooms, Wilfried.Philips, Linda.Tessens}@telin.ugent.be
ABSTRACT
In this paper, a new wavelet-based image fusion method is proposed. In this method, the blur levels of the edge points are estimated for every slice in the stack of images. Then, from the corresponding edge points in the different slices, the sharpest one is brought to the final image and the others are eliminated. The intensity of each non-edge pixel is taken from the slice of its nearest edge pixel. Results are promising and outperform the reference methods in most of the tested cases.
Index Terms— image fusion, blur estimation, local maxima, wavelet transform
1. INTRODUCTION
Image fusion is an important technique in image processing. It is needed when we image non-flat objects and the depth of field of the optics is not sufficient to image the whole object sharply at once. The solution is to acquire several images, each focused on a different part of the object, and to fuse them into a single image with as many sharp parts as possible.
In the literature, many methods have been proposed for image fusion; a survey is given in [1]. Some of them use variance-based fusion, real and complex wavelets [2, 3] and curvelets [4].
In existing wavelet-based methods, the fusion step is done in the transform domain by keeping the coefficients with the larger amplitude, because the assumption is that larger coefficients come from the in-focus parts. As explained in [3, 4], after the inverse transform the fused image may contain intensities that are not present in any of the slices in the stack, so a post-processing step is needed to overcome this problem.
Here we propose a new wavelet-based method in which the fusion step is done in the spatial domain, so no post-processing is needed. In this method, first the edge pixels are detected and their blur levels are estimated using Ducottet's method [5]. Besides some improvements to Ducottet's method, we limit the edge detection to the sharp parts by setting a scale-dependent threshold. Then, from every corresponding set of edge pixels in the stack, the sharpest one is kept and the others are eliminated. Suppose the slices are f1, f2, ..., fn. To find the edge pixels corresponding to an edge pixel in slice k located at (xk, yk), we look in the neighborhood of that position in the other slices (fi, i ≠ k). The radius of the inspected neighborhood is set equal to the blur level of (xk, yk). Now, for every detected edge pixel, we know from which slice its intensity should be assigned. To assign the intensity of a non-edge pixel, we use the slice of its nearest edge pixel.
In Section 2 we present an overview of Ducottet's method. In Section 3 the changes that we have made to Ducottet's method are explained. In Section 4 the new fusion method is presented. In Section 5, the results for synthetic and real images are shown and compared with other methods. Finally, a conclusion is given in Section 6.
2. EDGE DETECTION AND BLUR ESTIMATION
In Ducottet’s method, singularities of images are modeled as
transitions, lines or peaks.
Transitions are modeled as the convolution of a Heaviside function (H) and a two-dimensional Gaussian (G) with variance σ² and amplitude A:

$$T_\sigma(x, y) = A\,H(x, y) * G_\sigma(x, y) = \frac{A}{2}\left(1 + \mathrm{erf}\left(\frac{x}{\sigma\sqrt{2}}\right)\right)$$

(∗ denotes convolution). The line edge model is the convolution of a Dirac line function and the Gaussian function:

$$L_\sigma(x, y) = 2\pi\sigma^2 A\,G_\sigma(x, 0)$$

The peak edge model is the convolution of a Dirac point function with the Gaussian function:

$$P_\sigma(x, y) = 2\pi\sigma^2 A\,G_\sigma(x, y)$$

In the above equations, σ is the blur level of every edge model.
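As a numerical illustration (a sketch of ours, not the authors' code), the three edge models can be evaluated directly from the formulas above; note that the 2πσ²A normalization makes the line and peak models Gaussians of amplitude A:

```python
import numpy as np
from scipy.special import erf

def transition(x, sigma, A=1.0):
    # T_sigma: Heaviside step blurred by a Gaussian of standard deviation sigma
    return (A / 2.0) * (1.0 + erf(x / (sigma * np.sqrt(2.0))))

def line(x, sigma, A=1.0):
    # L_sigma: Dirac line blurred by a Gaussian -> Gaussian ridge of amplitude A
    return A * np.exp(-x**2 / (2.0 * sigma**2))

def peak(x, y, sigma, A=1.0):
    # P_sigma: Dirac point blurred by a Gaussian -> 2-D Gaussian blob of amplitude A
    return A * np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
```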
Ducottet’s method can be summarized as follows:
[978-1-4244-7994-8/10/$26.00 ©2010 IEEE. Proceedings of 2010 IEEE 17th International Conference on Image Processing (ICIP 2010), September 26-29, 2010, Hong Kong]
1. The undecimated wavelet transform of the input image is calculated for scales ranging from 1 to a selected maximum scale with a scale step of at most 0.5, using the following complex wavelet:
$$\psi^s = \psi^s_1 + i\,\psi^s_2,$$

where

$$\psi^s_1(x, y) = s\,\frac{\partial G_s}{\partial x}(x, y), \qquad \psi^s_2(x, y) = s\,\frac{\partial G_s}{\partial y}(x, y)$$

and

$$G_s(x, y) = \frac{1}{2\pi s^2}\,e^{-(x^2 + y^2)/(2s^2)}.$$
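A minimal sketch of this transform at one scale, using scipy's `gaussian_filter` as a stand-in for the exact undecimated implementation (since (∂G_s/∂x) ∗ f = ∂/∂x (G_s ∗ f), a Gaussian derivative filter computes each component):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_wavelet_response(image, s):
    """Response of the complex wavelet psi^s = psi^s_1 + i psi^s_2 at scale s.

    gaussian_filter with order=1 along one axis returns the derivative of
    the Gaussian-smoothed image; multiplying by s gives the s-normalization.
    Axes are (row, col), so order=(0, 1) differentiates along x (columns).
    """
    wx = s * gaussian_filter(image, sigma=s, order=(0, 1))  # s * d/dx (G_s * f)
    wy = s * gaussian_filter(image, sigma=s, order=(1, 0))  # s * d/dy (G_s * f)
    return wx + 1j * wy

def modulus(w):
    # modulus of the complex coefficients, used for local-maxima detection
    return np.abs(w)

def argument(w):
    # argument (gradient direction), used for the directional maximum test
    return np.angle(w)
```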
2. In every scale of the wavelet domain, the local maxima
of the wavelet coefficients are found.
3. For every local maximum in the finest scale, its candidate corresponding local maxima are found in the next coarser scale. This procedure is repeated until the coarsest scale is reached. For every local maximum in the finest scale, the maxima function m(s) is defined as the values of the corresponding maxima at scale s. Setting the scale step to at most 0.5 guarantees that the local maximum in the next scale does not move by more than one pixel with respect to the corresponding local maximum in the current scale, so for finding correspondences only the 8 neighbors of every location in the coarser scale are considered.
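The cross-scale linking of step 3 can be sketched as follows. This is a simplified illustration of ours, not the implementation of [5]; `maxima_per_scale` is an assumed input holding, per scale (finest first), the list of (row, col) positions of the local modulus maxima:

```python
def link_across_scales(maxima_per_scale):
    """Build maxima functions by chaining each finest-scale maximum
    through its 8-neighborhood in every coarser scale."""
    chains = []
    for pos in maxima_per_scale[0]:          # start from the finest scale
        chain, cur = [pos], pos
        for coarser in maxima_per_scale[1:]:
            # scale step <= 0.5 => the match lies within the 8-neighborhood
            cand = [q for q in coarser
                    if abs(q[0] - cur[0]) <= 1 and abs(q[1] - cur[1]) <= 1]
            if not cand:
                break                        # the chain stops in this scale
            cur = min(cand, key=lambda q: (q[0] - cur[0])**2 + (q[1] - cur[1])**2)
            chain.append(cur)
        chains.append(chain)
    return chains
```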
4. Every extracted maxima function is compared with the maxima functions of the three edge models (transition, line and peak) that have been derived analytically, and the best fitting model is selected.
The maxima functions of the edge models are respectively [5]:

$$M_{T_\sigma}(s) = \frac{A}{\sqrt{2\pi}}\,\frac{s}{\sqrt{s^2 + \sigma^2}}$$

$$M_{L_\sigma}(s) = \frac{A}{\sqrt{e}}\,\frac{s\sigma}{s^2 + \sigma^2}$$

$$M_{P_\sigma}(s) = \frac{A}{\sqrt{e}}\,\frac{s\sigma^2}{(s^2 + \sigma^2)^{3/2}}$$

These maxima functions are shown in Figure 1 for σ = 4 and A = 1. When the type of the extracted maxima function has been identified, the blur level and the amplitude for that maxima function are calculated by curve fitting. For more details of this method, we refer to [5].
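The curve-fitting step can be sketched with scipy's `curve_fit`; here we fit the transition model M_T to a noiselessly sampled maxima function and recover A and σ (an illustrative sketch, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def M_T(s, A, sigma):
    # maxima function of the transition model
    return (A / np.sqrt(2.0 * np.pi)) * s / np.sqrt(s**2 + sigma**2)

# sample a maxima function for A = 1, sigma = 4 (the case shown in Fig. 1)
s = np.arange(1.0, 12.0, 0.1)
m = M_T(s, 1.0, 4.0)

# recover amplitude and blur level by least-squares fitting
(A_hat, sigma_hat), _ = curve_fit(M_T, s, m, p0=(0.5, 1.0))
```

In practice the extracted maxima function would be fitted against all three models and the best-fitting one selected, as in step 4 above.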
Fig. 1. Maxima functions for σ = 4, A = 1 (moduli of the transition, line and peak maxima functions plotted against scale).
3. IMPROVEMENTS TO DUCOTTET’S METHOD
3.1. Thresholding
To decrease the computational cost of the edge detection and blur estimation steps, we reduce the number of local maxima in every scale by removing the weak edges. The wavelet used here is the derivative of a Gaussian function, so the moduli of the complex wavelet coefficients correspond to intensity differences between adjacent pixels of the input image; in blurred parts these differences are small. Consequently, the moduli of the wavelet coefficients are smaller in the blurred parts than in the sharp parts.
To remove weak local maxima, we set a threshold in the local-maxima finding step. We should emphasize that Ducottet does not use any threshold: he proposed this method for segmentation, which is why he keeps all local maxima. Here, however, for fusion we can eliminate the weak maxima that represent blurred parts.
If we set the threshold to a fixed value for all scales, it would remove some pixels in one scale while keeping their correspondences in other scales; as a result, the construction of the maxima functions would fail. The threshold should instead be proportional to the values of the wavelet coefficients in every scale, so for every scale we set the threshold to a factor of the average of the moduli of all wavelet coefficients in that scale. This way of setting the threshold is important because it preserves the parent-child links across scales.
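A sketch of this scale-dependent threshold (the factor `c` plays the role of the threshold factor, which is set to 1 or 2 in the experiments of Section 5; the function names are ours):

```python
import numpy as np

def threshold_maxima(moduli, maxima_mask, c=1.0):
    """Keep only local maxima whose modulus exceeds c times the mean
    modulus of the wavelet coefficients at this scale.

    moduli      : 2-D array of wavelet-coefficient moduli at one scale
    maxima_mask : boolean array marking the detected local maxima
    """
    t = c * moduli.mean()          # threshold proportional to this scale
    return maxima_mask & (moduli > t)
```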
3.2. Rounding the arguments
In finding the local maxima, we round the arguments of the wavelet coefficients to one of the following values:

0, ±π/4, ±π/2, ±3π/4, ±π

This decreases the time complexity of finding the local maxima even further. As explained in [5, 6], to check whether a point is a local maximum, we should compare its modulus with the moduli of two points in the image grid along the direction of the argument of that point. When the argument is not one of the above values, we should interpolate the points' moduli. Ducottet's method uses two-nearest-neighbor linear interpolation, but by rounding the arguments to one of the above values, we in fact use nearest-neighbor interpolation.
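The rounding and the choice of the two comparison pixels can be sketched as follows (our illustration; after rounding to a multiple of π/4, the two neighbors along the gradient direction lie exactly on the grid, so no interpolation is needed):

```python
import numpy as np

def round_argument(theta):
    """Round an argument (radians) to the nearest multiple of pi/4."""
    return np.round(theta / (np.pi / 4.0)) * (np.pi / 4.0)

def neighbor_offsets(theta):
    """Grid offsets (drow, dcol) of the two comparison pixels along the
    rounded gradient direction."""
    t = round_argument(theta)
    dx = int(np.rint(np.cos(t)))   # -1, 0 or 1
    dy = int(np.rint(np.sin(t)))   # -1, 0 or 1
    return (dy, dx), (-dy, -dx)
```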
3.3. Edge Localization
If we connect the pixels of a given maxima function across scales, they form a three-dimensional curve, because they do not lie at the same location in the different scales. A difficulty is therefore at which location an edge should be reported. In our work, we have selected the third scale for edge localization for all maxima functions, because the finer scales are more sensitive to noise and, in coarser scales, edges are affected by adjacent edges. Another problem is that a maxima function may not include a coefficient in the coarser scales, because the process of finding correspondences may stop there. Since we only take into account maxima functions with at least three values, every maxima function always has an edge pixel at the third scale.
4. PROPOSED FUSION METHOD
In image fusion, we have a stack of slices in which some parts are sharp, and we want to combine all slices into a fused image. We apply edge detection and blur estimation to every slice in the stack. Then, for every slice, we have the edge locations and their blur levels. Now we combine all this information to compose the fused image. A major difficulty that arises here is that corresponding lines and peaks in different slices are reported at different locations. This problem is illustrated in Figure 2. Figure 2(a) shows the profiles of two lines (one dashed and one solid) with different blur levels, and Figure 2(b) shows the positions and amplitudes of the local maxima of their wavelet coefficients (edge pixels). For every line, one pair of peaks (local maxima) is reported: one pair represents the sharper line and the other one represents the blurrier line. The sharper pair should come into the fused image and the blurrier pair should be eliminated. Because the locations of corresponding peaks are different, we cannot simply compare the blur levels of edge pixels at the same positions in different slices and select the sharpest one. The reason for this displacement is that the wavelet used (the gradient of the Gaussian) is an odd function, so the locations of corresponding lines and peaks will not coincide. Figure 2 suggests one possible solution.
The lines are Gaussians at the same location with different blur levels. In our models, the blur level is the σ of the Gaussians. Suppose that the variances of the lines are σ₁² and σ₂², with σ₁ > σ₂, and that the Gaussians are centered at 0. We know that the peaks (local maxima here) of the moduli of the first derivatives of the Gaussians are located at x = ±σ₁ and x = ±σ₂, so the distance between corresponding peaks is σ₁ − σ₂. If we assume that the estimated blur level is at least 0, then σ₂ is at least 0 and the distance between two corresponding peaks is at most σ₁. So if there is a sharper local maximum, it should lie within a neighborhood of size σ₁.
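The claim that the modulus of the first derivative of a Gaussian peaks at x = ±σ is easy to verify numerically (a small sketch of ours):

```python
import numpy as np

sigma = 3.0
x = np.linspace(-10.0, 10.0, 20001)          # grid step 0.001
g = np.exp(-x**2 / (2.0 * sigma**2))         # Gaussian line profile, A = 1
dg = np.abs(-x / sigma**2 * g)               # modulus of its first derivative

# restrict to x > 0: the modulus maximum there should sit at x = sigma
x_pos = x[x > 0]
x_peak = x_pos[np.argmax(dg[x > 0])]
```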
To solve the problem of the displacement of corresponding local maxima, we look, for every reported local maximum, within a neighborhood of the size of its blur level in all other slices; if there is a maximum with the same argument and less blur, we infer that the two come from the same feature and only one of them should be kept, so we eliminate the more blurred one.
After this elimination step, we inspect all slices in the stack for every location. If an edge pixel has been reported for a location in only one slice, we assign the label of that slice to that location in the pre-final image; otherwise we leave the location unassigned.
Now we have a labeled image that indicates, for some locations, from which slice the intensity should be taken. Every location that has not been labeled yet is assigned the label of its closest labeled pixel.
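This label-propagation step can be sketched with a Euclidean distance transform: scipy's `distance_transform_edt` can return, for every unlabeled pixel, the indices of its nearest labeled pixel. The function names and the 1-based label convention below are our assumptions, not from the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def propagate_labels(labels):
    """Fill unlabeled pixels (0) with the label of the nearest labeled pixel."""
    # for each pixel, get the coordinates of the closest non-zero (labeled) pixel
    _, (iy, ix) = distance_transform_edt(labels == 0, return_indices=True)
    return labels[iy, ix]

def fuse(stack, labels):
    """Compose the fused image by taking each pixel from its labeled slice."""
    full = propagate_labels(labels)
    iy, ix = np.indices(full.shape)
    return stack[full - 1, iy, ix]   # labels are 1-based slice indices
```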
Fig. 2. (a) Profiles of two blurred lines with blur levels (σ) of 2 and 3. (b) Modulus maxima of the wavelet coefficients of (a).
5. RESULTS
5.1. Application to synthetic images
We tested the method on synthetic images and compared the results with several existing methods. The synthetic images are the same as the ones used in [4], which is one of the reference methods we compare with. From every ground-truth image, three partially blurred slices are created; every location is sharp in only one slice. One test image and its derived stack are shown in Figure 3. We then apply the method to the created stack, and the resulting image is compared with the initial ground-truth image using the PSNR (peak signal-to-noise ratio). The resulting PSNRs for the different methods are shown in Table 1 (all numbers are in dB). The results for the new method have been calculated with a scale step of 0.1 and a maximum scale of 4. The threshold factor is set to 1 or 2, and the best result in every case is shown. The best result for every stack is set in bold. Our method outperforms the others on 3 stacks. For the Clouds stack, our method outperforms the curvelet method but not the variance method. The proposed method performs worse than the curvelet method for Algae and Eggs. This can be explained by the failure of the thresholding to remove enough of the blurred parts, because in these two images the smoothness of the blurred and the sharp parts is very similar.
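The PSNR used for the comparison is the standard definition (a sketch; `peak` is the maximum possible intensity, 255 for 8-bit images):

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB between ground truth and fused image."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64))**2)
    if mse == 0:
        return np.inf                      # identical images
    return 10.0 * np.log10(peak**2 / mse)
```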
5.2. Application to real images
As a real-world test, we applied the method to a stack of color microscopic images of Peyer plaques from the intestine of a mouse 1. The stack size is 15. The fused image resulting from the curvelet method is shown in Figure 4(a) and that of our method in Figure 4(b). We can clearly see that in
1 The images are courtesy of Jelena Mitic, Laboratoire d'Optique Biomédicale at EPF Lausanne, Zeiss and MIM at ISREC Lausanne.
Fig. 3. Fabric ground-truth image (a) and the slices (b-d) derived from it.
Table 1. Results of different methods in dB

         Variance   Complex Db6   Complex Db6 with checks   Curvelet   New Method
Leaves    28.75        39.20              34.97               41.27       45.35
Metal     32.50        41.24              36.62               44.18       45.43
Fabric    41.47        41.25              35.50               43.14       47.15
Eggs      47.76        59.80              59.73               65.82       46.16
Algae     53.34        62.17              58.77               63.92       52.98
Clouds    54.79        49.26              49.21               52.73       53.40
some parts, our proposed method works better. One of these parts is highlighted in both output images. These parts, together with the corresponding slice in the stack, are enlarged and shown in Figure 5. The curvelet method failed in this part, but our proposed method worked well.
6. CONCLUSION
The new method outperforms the other methods in half of the test cases and has the second-best result in one case. For the other two cases, the results are still acceptable. One advantage of the new method is that it is based only on the edge pixels of the images, while the other methods use all of the image information. Another advantage is that the fusion step is done in the spatial domain, so no post-processing is needed to check whether the fused image contains intensities that do not occur in any slice of the stack.
(a) Curvelet method (b) Proposed method
Fig. 4. Fused images.
(a) Real Image (b) Curvelet Result (c) Our Result
Fig. 5. Enlarged corresponding parts (to enhance the contrast, we applied the Stretch Contrast operation of the GIMP editor to these images).

7. REFERENCES
[1] A.G. Valdecasas, D. Marshall, J.M. Becerra, and J.J. Terrero, "On the extended depth of focus algorithms for bright field microscopy," Micron, vol. 32, pp. 559–569, 2001.
[2] H. Li, B.S. Manjunath, and S.K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
[3] B. Forster, D. Van De Ville, J. Berent, D. Sage, and M. Unser, "Complex wavelets for extended depth-of-field: A new method for the fusion of multichannel microscopy images," Microscopy Research and Technique, vol. 65, no. 1-2, pp. 33–42, 2004.
[4] L. Tessens, A. Ledda, A. Pizurica, and W. Philips, "Extending the depth of field in microscopy through curvelet-based frequency-adaptive image fusion," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, Hawaii, USA, 2007, pp. 861–864.
[5] C. Ducottet, T. Fournel, and C. Barat, "Scale-adaptive detection and local characterization of edges based on wavelet transform," Signal Processing, vol. 84, pp. 2115–2137, 2004.
[6] C.L. Tu and W.L. Hwang, "Analysis of singularities from modulus maxima of complex wavelets," IEEE Transactions on Information Theory, vol. 51, no. 3, pp. 1049–1062, 2005.