
Enhanced Recognition using Iterative Learning Fusion in Remote Sensing Images

Qianwen Yang, Fuchun Sun, Huaping Liu Department of Computer Science

State Key Laboratory of Intelligent Technology and System Tsinghua University, Beijing, P.R. China, 100084

E-mail: [email protected]

Abstract

This article focuses on remote sensing image fusion for improving target recognition performance. Current fusion algorithms are mostly designed for a specific purpose and have exponential complexity. We propose a fast and robust image fusion algorithm, the iterative learning fusion (ILF) algorithm, to improve the quality of images. The algorithm combines iterative learning from control theory with Multi-scale Geometric Analysis (MGA) image fusion; we also apply color transfer to preserve color features and combine the result with an SVM to improve recognition. Through iterative learning, the fusion parameters converge to the optimum in the MGA fusion process. Theoretical analysis and experiments demonstrate the improvement in visual and quantitative performance achieved by the proposed algorithm.

1. Introduction

Remote sensing imaging technologies differ in principle, and they can provide complementary information. The data fusion contests of the IEEE GRSS show that fusing data helps to raise the recognition rate. However, the scarcity of data drives the requirement to exploit remote sensing images as much as possible. Thus, in order to improve the quality of single-source images, images from various sources are fused. We focus on producing higher-quality images for recognition using remote sensing image fusion.

In remote sensing image fusion, the frequency-domain wavelet algorithm [1] was first used by Li (1995). Multi-resolution and multi-scale geometric analysis (MRA/MGA) methods were then applied to image fusion, including the ridgelet, the curvelet [2], the contourlet [3], and many other improved methods.

However, these algorithms are usually one-step techniques which cannot ensure optimal fusion. Thus, many optimization algorithms have been applied to improve fusion results, such as the pulse coupled neural network (PCNN) [4], maximum likelihood estimation (MLE) [5], and convex optimization [6]. Some novel learning algorithms have also been introduced into image fusion, including Markov Random Fields (MRF) [7] and sparse coding [8]. But many of these algorithms show exponential complexity in the size of the image. In addition, most of the optimization algorithms cannot make full use of feature information in the fusion process.

Thus, we propose a novel algorithm, iterative learning fusion, whose theory comes from optimization and learning control. We apply it to remote sensing image fusion by formulating the process as a fusion system. Iterative learning image fusion (ILF) is developed to perform mixed-level fusion, i.e. pixel- and feature-level fusion, by presetting the learning parameters. It is combined with contourlet analysis and color transfer techniques. Also, an SVM is incorporated with image fusion to perform recognition.

The rest of this article is structured as follows. Section 2 introduces the fundamentals of iterative learning theory. Section 3 establishes the model for ILF and provides an analysis of its convergence, complexity and robustness. Section 4 gives experimental results on multi-focus images and remote sensing images to verify its effectiveness. Section 5 summarizes the advantages and weaknesses of our work.

2. Fundamentals of iterative learning

Iterative learning control (ILC) is a real-time control technique, first proposed by S. Arimoto and S. Kawamura [9]. Here we introduce ILC briefly: denote


that the plant is $g(t)$, the control input is $u(t)$, the ideal output is $y_d(t)$, and the output is $y(t)$.

With a repeated input signal, the iterative learning control unit produces the control signal

$u_{i+1}(t) = u_i(t) + q\,\Delta y_i(t)$  (1)

where $i \in \mathbb{N}^+$ is the iteration epoch and $q$ is the learning gain.

In ILC, convergence as $i \to \infty$, i.e. $\Delta y_i(t) \to 0$ and $\Delta u_i(t) \to 0$, can be realized, since the error satisfies $|\Delta y_{i+1}| \le |1 - qg| \cdot |\Delta y_i|$, which gives

$\lim_{i\to\infty} |\Delta y_i| \le \lim_{i\to\infty} |1 - qg|^{i+1}\, |\Delta y_0| \to 0.$  (2)

Similarly, we have

$\Delta u_{i+1} = (1 - qg)\,\Delta u_i.$  (3)

Functional analysis shows that $u_i$ will converge to $u_d$ and $y_i$ to $y_d$ under the same condition.

Mathematical analysis validates its convergence and several properties. In this paper, we try to analyze the effectiveness of iterative learning in image fusion.
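For concreteness, the following minimal Python sketch (illustrative only and not part of the original formulation; the plant gain $g$, learning gain $q$, and target $y_d$ are example values) iterates the update law (1) on a scalar static plant and exhibits the geometric error decay behind (2).

```python
# Illustrative-only sketch of the ILC update law (1) on a scalar static plant
# y = g * u. The plant gain g, learning gain q, and target y_d are example
# values chosen so that |1 - q*g| < 1, the convergence condition behind (2).

def ilc_iterations(g=2.0, q=0.4, y_d=1.0, epochs=20):
    """Iterate u_{i+1} = u_i + q * (y_d - y_i) and record |Delta y_i|."""
    u, errors = 0.0, []
    for _ in range(epochs):
        y = g * u                 # plant response to the current input
        delta_y = y_d - y         # tracking error Delta y_i
        errors.append(abs(delta_y))
        u += q * delta_y          # update law (1)
    return u, errors

if __name__ == "__main__":
    u_final, errs = ilc_iterations()
    print(u_final, errs[:5])      # errors shrink by |1 - q*g| = 0.2 per epoch
```

With these example values the input settles at $y_d/g = 0.5$ and the error shrinks by a factor of $0.2$ per epoch, which is exactly the contraction used in (2) and (3).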

3. Iterative Learning Fusion

3.1. Algorithm formulation

Iterative Learning Control is developed for optimal control by producing the desired output, and we borrow the idea of iterative learning for image fusion.

Iterative learning image fusion (ILF) incorporates ILC with image fusion. In order to enhance the recognition rate, we first focus on improving the quantitative fusion evaluation index to optimize fusion performance.

We set the evaluation index as the control feedback, aiming at the optimization of image fusion, and an exponentially decreasing rate is set as the reference input. Similar to ILC, which is used in discrete systems, our algorithm is proposed as a discrete optimization algorithm for the problem of image fusion.

Figure 1. Iterative learning fusion algorithm (block diagram with differential feedback)

Iteration techniques are widely used in image fusion; however, ILF differs in principle from simple iteration. Former iteration strategies only perform the fusion step by step. By adapting iterative learning to image fusion, ILF regards the fusion process as a control system.

For the image priors, the contourlet edge feature is used for presetting. The color transfer technique is combined with a Lab color decomposition; color transfer is chosen for multi-sensor images in order to extract color information. The lightness (l) component is used to preset the weight matrix, and contourlet analysis is used to set the $K_1$, $K_2$ matrices (step-size matrices). Once the learning parameters are set, the optimally fused image can be obtained via iterative learning.
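As a rough illustration of this presetting step (a sketch under stated assumptions, not the exact procedure used here: scikit-image's rgb2lab stands in for the Lab decomposition, and a Sobel edge map stands in for the contourlet edge feature), the weight matrix and the step-size matrices could be initialized as follows.

```python
from skimage import color, filters

# Rough illustration of the parameter presetting: the lightness component of a
# Lab decomposition initializes the weight matrix, and an edge map (Sobel here,
# contourlet coefficients in the actual algorithm) sets the step-size matrices.

def preset_parameters(rgb_a, rgb_b):
    """rgb_a, rgb_b: HxWx3 float arrays in [0, 1], the two source images."""
    l_a = color.rgb2lab(rgb_a)[..., 0] / 100.0    # lightness scaled to [0, 1]
    l_b = color.rgb2lab(rgb_b)[..., 0] / 100.0
    w0 = l_a / (l_a + l_b + 1e-6)                 # initial weight matrix
    e_a, e_b = filters.sobel(l_a), filters.sobel(l_b)
    k1 = e_a / (e_a.max() + 1e-6)                 # step-size matrix K1
    k2 = e_b / (e_b.max() + 1e-6)                 # step-size matrix K2
    return w0, k1, k2
```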

In the control system, let $Y$ and $U$ represent the output and input functional spaces, and let $r$ be the reference input; then

$\lim_{k\to\infty} y_k = r, \qquad \lim_{k\to\infty} u_k = u_\infty.$  (4)

This process is equivalent to a discrete system with time-constant state variables. In iterative learning, the learning machine produces the input of the $(k+1)$-th iteration.

To analyze the fusion convergence property, we focus on optimizing the fusion index $Q_E$ (entropy) through a weight sequence $w_k$. We can set

$G(w_k) = \|e_k\|_2^2 + \|w_k - w_{k-1}\|_2^2$  (5)

This multi-objective problem balances both the error and the evaluation index.

Thus, it requires a decline of the L2-norm, $G(w_{k+1}) \le G(w_k)$. By enforcing a constraint on the control signal, we can require the input to satisfy $\|w_{k+1} - w_k\|_2^2 < \|w_k - w_{k-1}\|_2^2$ and the error to satisfy $\|e_{k+1}\|_2^2 \le \|e_k\|_2^2$; thus $|\Delta Q_{E,k+1}| \le |\Delta Q_{E,k}|$ will be convergent. Denote the error as

$e_{k+1} = i_{k+1} - \Delta Q_{E,k} = i_{k+1} - G w_k$  (6)

where $G$ is a fusion function whose inputs are the images to be fused and the control input $w$.

Analysis of the input and output space shows that the weight updating function has the following form

$w_{k+1} = (I + G^T G)^{-1}(w_k + G^T i_{k+1})$  (7)

which is a Levenberg-Marquardt (Newton-type) iteration. Choosing the reference input $i$ is another problem. In ILF, a decreasing function is chosen as

$i_k = \Delta Q_{E,0}\, e^{-k}$  (8)

As $k$ increases, the output indicator value decreases from $\Delta Q_{E,0}$ to $\Delta Q_{E,0}\, e^{-k}$, so when $k \to \infty$ the feedback error gradually approaches zero.
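The following schematic Python sketch ties (6)-(8) together. It assumes, purely for illustration, that the fusion map is linearized as a matrix $G$ acting on the weight vector $w$, so that $\Delta Q_E \approx Gw$; the dimensions and example values are invented.

```python
import numpy as np

def ilf_iterate(G, w0, delta_qe0, epochs=30):
    """Iterate w_{k+1} = (I + G^T G)^{-1} (w_k + G^T i_{k+1}) from (7)."""
    n = G.shape[1]
    A = np.linalg.inv(np.eye(n) + G.T @ G)        # (I + G^T G)^{-1}, precomputed
    w = np.array(w0, dtype=float)
    e = None
    for k in range(epochs):
        i_ref = delta_qe0 * np.exp(-(k + 1))      # (8): decreasing reference input
        e = i_ref - G @ w                         # (6): feedback error
        w = A @ (w + G.T @ np.atleast_1d(i_ref))  # (7): Levenberg-Marquardt step
    return w, e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.normal(size=(1, 4))                   # one scalar index, four weights
    w, e = ilf_iterate(G, w0=np.zeros(4), delta_qe0=0.5)
    print("final weights:", w, "final feedback error:", e)
```

Because the reference input decays as $e^{-k}$, the update drives both the weight increments and the feedback error toward zero, mirroring the convergence argument of Section 3.2.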

3.2. Analysis of the algorithm

3.2.1. Convergence

Once the input and the learning algorithm are set, we can test the convergence of the problem.

If $\lim_{k\to\infty} \|\Delta Q_{E,k}\|_2^2 = 0$, we have $\lim_{k\to\infty} \|\Delta w_k\|_2^2 = 0$; we will show that it converges to the optimal parameter of the fusion process.


It can be proved that the difference of $Q_E$ converges to 0 in the L2-norm. However, when a difference occurs, the fusion index may not be optimal. The following theorem indicates the technique for approximating the optimal value.

Theorem 1. In the iterative learning optimization of the fusion index, the feedback differential produces entropy that converges to the maximum value when $\Delta Q_E \to 0$ and $dQ_E/dw \to 0$. Assume we have the entropy function

$Q_E = -\sum_l P(l)\log P(l) \approx -\int P(l)\log P(l)\,dl.$

Proof. Between the minimum $Q_E = 0$ and $Q_E = \max$, we can study

$\frac{dQ_E}{dw} = -\int_0^L \frac{\log_2 P(l)\,dP(l) + P(l)\,d\log_2 P(l)}{dw}\,dl$  (9)

where $(\log_2 P(l))' = -P(l)$ has no feasible solution. Hence $Q_E$ may converge to its maximum as the weight converges; in this case, however, once a perturbation emerges the state will differ.

Once the weight converges, we can denote that

$\frac{d^2 Q_E(l)}{dw^2} = \left(\frac{2}{\log_2 P(l)} - \frac{P(l)}{(\log_2 P(l))^3}\right)\frac{d^2 P(l)}{dw^2} \le 0,$  (10)

so a stable solution is attainable. $\square$

As the functional analysis shows, the strategy can approximate the global maximum as closely as possible.

3.2.2. Complexity and robustness

After proving the convergence of the algorithm, we analyze its complexity and robustness. The complexity involves two algorithms, ILF and the contourlet transform; the ILF complexity is proportional to the image size $m \cdot n$. By the exponential descent rate, we have

$g(mn) = \exp^{-1}(s/\Delta Q_{E,0}) < \log(mn)/s$  (11)

approximated by the Maclaurin formula. Thus, the complexity of ILF is

$O(mn \log(mn)/s).$  (12)

The contourlet complexity is based on the image size $N$ and the number of decomposition layers $L$. Let $N = mn$, $L = 3$, $C = [c_1, c_2, c_3]$; then

$g(I(x)) = c_1 mn/4 + c_2 mn/8 + c_3 mn/16 = C'mn = O(mn).$

Thus the system complexity is the product of the two algorithms, i.e.

$O(m \cdot n \cdot \log(I(x))/s).$  (13)

Secondly, robustness when the input $w$ is impaired is analyzed as follows.

Theorem 2 (Robustness with noise). In a noise-polluted environment, the updating equation is

$w_{k+1} = a w_k + G^T e_{k+1} = a w_k + G^T\big(i - D(Q_E)\big)$  (14)

where $D(Q_E)$ is the differential feedback and $|a| < 1$ is the disturbance coefficient (assuming the signal strength does not increase). The L2-norm of $w$ has a non-zero limit, and the error converges to zero when $a$ is infinitely close to one.

Since $w_{k+1} = (I + G^T G)^{-1}(a w_k + G^T C e^{-k})$ differs from the update above only in the disturbance coefficient $a$, convergence in the control input space requires that $w$ converges in $W$. Then $w_\infty \approx \big((1-a)I + G^T G\big)^{-1} G^T C e^{-k}$ converges to the optimal solution. Thus, when $|a|$ is close enough to one, it approaches the optimal fusion parameter.

4. Experiments

4.1 Image fusion

We apply the proposed algorithm to both visual and remote sensing images, comparing it with mean-value fusion, NSCT, PCNN [4], and MRF [7]. The evaluation indicators are the entropy $Q_E$, the clarity $Q_C$, and the cross-entropy $Q_{CE}$ proposed in [10] and [11]. Experiments are run on a Core 2 processor with 2 GB of memory in Matlab R2010b.
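As a concrete example of such an index, the sketch below computes the entropy $Q_E$ from the gray-level histogram of a fused image; the clarity and cross-entropy indices of [10] and [11] are built analogously from gradients and joint histograms and are omitted here.

```python
import numpy as np

def entropy_index(image_u8):
    """Entropy Q_E = -sum_l p(l) log2 p(l) of an 8-bit gray-scale image."""
    hist = np.bincount(image_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```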

4.1.1. Multi-focus image fusion

The fusion results for multi-focus images are shown in Fig. 2 and Table 1.

Table 1. Image fusion results

Index               Q_E      Q_C      Q_CE
Original image 1    7.4343   5.8616   --
Original image 2    7.2900   2.9124   --
ILF                 7.4430   4.9787   0.0504
Mean                7.3540   4.0866   0.1569
NSCT                7.4376   5.8799   0.0516
PCNN                7.4404   6.4963   0.0560
MRF                 7.3351   3.5723   0.2608

(a) Original images

(b) Fused color image (From left to right: ILF, Mean, NSCT, PCNN, MRF fusion images)

Figure 2. Fusion of multi-focus images

Compared with other algorithms, ILF shows better visual clarity with generally higher indices.

4.1.2. Multi-sensor image fusion

Image fusion results for multi-sensor images are shown in Table 2 and Fig. 3.

In terms of execution time, ILF is compared with other fusion algorithms in Table 3. Although other algorithms exceed ILF on some indices, ILF is better in terms of the trade-off between complexity and effectiveness. Furthermore, experiments are performed on fused remote sensing images to verify the enhanced recognition.

(a) Original images


(b) Fused color image (from left to right: ILF, Mean, NSCT, PCNN, MRF fusion images)

Figure 3. Fusion of multi-sensor images

Table 2. Image fusion results

Index               Q_E      Q_C      Q_CE
Original image 1    7.0013   9.9491   --
Original image 2    6.6060   8.9539   --
ILF                 7.2967   9.9756   0.9079
Mean                6.7413   7.1094   0.2022
NSCT                6.9598   9.8929   0.0982
PCNN                6.9708   6.4112   1.4284
MRF                 6.9655   9.8880   0.0373

Table 3. Time required for different image sizes

Time / s    256x256       512x512
ILF         1.763090      4.757003
PCNN        2.998417      12.672633
MRF         821.107232    4302.34223
NSCT        113.627892    444.878189

In image fusion, multi-focus images complement each other and improve both the visual quality and the evaluation indices, with ILF showing the better overall performance. Multi-sensor images with different resolutions can also be fused to improve image quality; ILF shows the best entropy and clarity in the above image fusion examples.

4.2. Classification

Classification is performed on the original and fused remote sensing images. With texture and spectral features employed, an SVM performs the classification. The training set (China online lake database) is made up of 100 samples, including 60 pairs of lake images and 40 non-lake samples; the test set has 30 positive pairs and 30 negative samples.
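A minimal sketch of this classification stage is given below; it is not the exact experimental pipeline. The spectral and texture features are simplified stand-ins, loading of the lake samples is assumed to happen elsewhere, and scikit-learn provides the SVM.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def patch_features(patch):
    """patch: HxWxB array; returns simple spectral + texture features."""
    spectral = patch.reshape(-1, patch.shape[-1]).mean(axis=0)      # band means
    gray = patch.mean(axis=-1)
    texture = [gray.std(),
               np.abs(np.diff(gray, axis=0)).mean(),                # vertical detail
               np.abs(np.diff(gray, axis=1)).mean()]                # horizontal detail
    return np.concatenate([spectral, texture])

def train_lake_classifier(train_patches, train_labels):
    """train_patches: list of HxWxB arrays; train_labels: 1 = lake, 0 = non-lake."""
    X = np.stack([patch_features(p) for p in train_patches])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, np.asarray(train_labels))
    return clf
```

Recognition accuracy on the held-out test set is then obtained by applying the trained classifier's score method to the test-patch features.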

Figure 4. Recognition accuracy versus fusion iteration epochs for ILF, PCNN, and MRF

Fig. 4 shows the recognition accuracy for multi-step fusion processes, and Table 4 illustrates the highest recognition rate of the different algorithms.

Compared with the other algorithms, ILF improves the recognition rate on remote sensing lake images more significantly, along with higher entropy and clarity.

5. Conclusion

In this paper we propose the Iterative Learning Image Fusion (ILF) algorithm. Both theoretical analysis and experiments show its effectiveness in improving the quality of images compared with other new techniques. However, further work should be done to realize real-time application in remote sensing images.

References

[1] H. Li, B.S. Manjunath, S.K. Mitra. Multisensor Image Fusion Using the Wavelet Transform. Graphical Models and Image Processing, 1995, Vol. 57(3): pp. 235-245.

[2] F. Nencini, A. Garzelli, S. Baronti, L. Alparone. Remote sensing image fusion using the curvelet transform. Information Fusion, 2007, Vol. 8: pp. 143-156.

[3] Y. Zheng, C. Zhu, J. Song, X. Zhao. Fusion of Multiband SAR Images Based on Contourlet Transform. Proceedings of the IEEE International Conference on Information Acquisition, 2006.

[4] S. Yang, M. Wang, X. Yan, W. Qi, L. Jiao. Fusion of multi-parametric SAR images based on SW-nonsubsampled contourlet and PCNN. Signal Processing, 2009, Vol. 89: pp. 2596-2608.

[5] S. Chen, Q. Guo, H. Leung, E. Bosse. A Maximum Likelihood Approach to Joint Image Registration and Fusion. IEEE Trans. on Image Processing, 2011, Vol. 20(5): pp. 1363-1372.

[6] J. Yuan, J. Shi, X. Tai, Y. Boykov. A Study on Convex Optimization Approaches to Image Fusion. In A.M. Bruckstein et al. (Eds.), SSVM 2011: pp. 122-133.

[7] M. Xu, H. Chen, P.K. Varshney. An Image Fusion Approach Based on Markov Random Fields. IEEE Trans. on Geoscience and Remote Sensing, 2011, Vol. 49(12): pp. 5116-5127.

[8] H. Yin, S. Li, L. Fang. Simultaneous image fusion and super-resolution using sparse representation. Information Fusion, Jan. 2012 (to be published).

[9] S. Arimoto, S. Kawamura, F. Miyazaki. Bettering operation of robots by learning. Journal of Robotic Systems, 1984, Vol. 1(2): pp. 123-140.

[10] G. Qu, D. Zhang, P. Yan. Information measure for performance of image fusion. Electronics Letters, 2002, Vol. 38(7): pp. 313-316.

[11] G. Piella, H. Heijmans. A new quality metric for image fusion. Proceedings of the 2003 International Conference on Image Processing (ICIP 2003), Sept. 2003, Vol. 2: pp. 173-176.