Research Article: The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector
Wu Qidi, Li Yibing, Lin Yun, and Yang Xiaodong
College of Information and Communication Engineering, Harbin Engineering University, Harbin, China
Correspondence should be addressed to Lin Yun: linyun@hrbeu.edu.cn
Received 24 August 2013; Revised 7 January 2014; Accepted 19 January 2014; Published 4 March 2014

Academic Editor: Yi-Hung Liu

Copyright © 2014 Wu Qidi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Due to the limited accuracy of conventional image restoration methods, this paper presents a nonlocal sparse reconstruction algorithm with similarity measurement. To improve the restoration results, we propose two schemes, for dictionary learning and for sparse coding, respectively. In the dictionary learning part, we measure the similarity between patches from the degraded image by constructing a Shearlet feature vector. We then classify the patches into classes according to this similarity and train a cluster dictionary for each class; by cascading the cluster dictionaries we obtain the universal dictionary. In the sparse coding part, we propose a novel objective function with a coding residual term, which suppresses the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter for the optimization under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure present in nonlocal patches and of coding residual suppression, the proposed method outperforms conventional methods both in visual perception and in PSNR.
1. Introduction
A reasonable representation is the basis of many tasks in image processing. Here "reasonable" means expressing the important information in an image with few coefficients, which is called sparse representation [1]. We always want to express a signal in a cost-effective way that reduces the cost of signal processing, and the emergence of sparse representation satisfied this requirement exactly. The wavelet transform is the landmark work and is optimal for representing one-dimensional signals, but it is weak at representing high-dimensional signals because of its limited directionality. To solve this problem, Donoho and others proposed multiscale geometric analysis, which mainly comprises the Ridgelet, Curvelet [2], Bandlet, and Contourlet [3] transforms, and applied it to image restoration with good results.
For sparse representation in image restoration there are two main approaches: multiscale geometric analysis (MGA) and dictionary learning. Within MGA, Donoho proposed the pioneering wavelet threshold shrinking method. After that, Yin designed a non-Gaussian bivariate shrinkage function [4] under the MAP criterion, which achieved better performance. In 2006, because of the limited directions of wavelets, the NSCT transform [5, 6] was proposed and applied to the restoration area under a Bayesian framework. However, these conventional restoration models may not be accurate enough when the image is seriously degraded, because their fixed overcomplete dictionaries are not sufficient to represent the abundant structures of natural images. Therefore a new concept called dictionary learning was proposed [7-9]. In addition, many sparse coding methods were proposed, such as MP [10], OMP [11], and GP [12]. Further, Aharon proposed K-SVD [13, 14], which updates the dictionary by applying an SVD decomposition to the residual signal and is used for high-precision image restoration.
The goal of this paper is to study sparse reconstruction algorithms for image restoration based on the dictionary learning framework. We propose a nonlocal block sparse reconstruction scheme based on a Shearlet feature vector, with which we measure the similarity between image patches and classify them in order to learn cluster dictionaries. Then we propose
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 586014, 8 pages, http://dx.doi.org/10.1155/2014/586014
a new objective function for sparse coding to realize high-accuracy image restoration.
2. Image Restoration with Dictionary Learning
Suppose that the model relating the observation signal $y$, the original signal $x$, and the noise $n$ is

$$y = x + n. \qquad (1)$$
According to the sparse representation theory in [1], sparse coding amounts to solving the optimization problem

$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|y - D\alpha\|_2^2 \le \varepsilon. \qquad (2)$$
Here $D$ is the sparse dictionary of $x$ and $\alpha$ is the coding vector; the original signal is then recovered as $x = D\alpha$. $\varepsilon$ is a constant tied to the noise standard deviation, $\varepsilon = (C\sigma)^2$.
In general restoration algorithms based on sparse theory, the dictionary $D$ is fixed. A new design for dictionary learning, called K-SVD, was therefore proposed in [13] and has achieved good results in image restoration. In K-SVD, sparse coding and dictionary learning are carried out simultaneously, and the optimization task in (2) becomes
$$(\hat{x}, \hat{D}) = \arg\min_{\alpha, D} \|y - D\alpha\|_2^2 + \lambda\|\alpha\|_0, \qquad \hat{x} = \hat{D}\hat{\alpha}. \qquad (3)$$
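The $\ell_0$-constrained coding step in (2)-(3) is typically solved greedily, for example with OMP [11]. A minimal scikit-learn sketch, not the authors' code: the toy dictionary, sizes, and coefficients are ours, and the dictionary is deliberately orthonormal so that greedy recovery is exact.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Toy orthonormal dictionary: with D = I, OMP simply picks the
# largest-magnitude entries, so exact recovery is guaranteed.
D = np.eye(8)
alpha_true = np.zeros(8)
alpha_true[[2, 5]] = [3.0, -1.0]          # 2-sparse true coding
y = D @ alpha_true                        # noise-free observation

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2)
alpha = omp.fit(D, y).coef_               # greedy l0 sparse coding
assert np.allclose(alpha, alpha_true)
```

With a learned, overcomplete dictionary the same call applies; recovery is then only approximate and depends on the dictionary's coherence.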
To avoid increasing the dictionary dimension, Elad et al. divided the image into many patches and obtained the block-based sparse coding $\hat{\alpha}_{ij}$ with

$$\hat{\alpha}_{ij} = \arg\min_{\alpha} \|y_{ij} - D\alpha\|_2^2 + \lambda\|\alpha\|_0. \qquad (4)$$
Here $y_{ij}$ denotes the patch centered at $(i, j)$; the patch size is $7 \times 7$. The patches are obtained by sliding a square window over the image, and the restored image is obtained by averaging the overlapping patches.
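The sliding-window extraction and overlap-averaging just described can be sketched as follows; a minimal NumPy illustration with hypothetical helper names of our own choosing.

```python
import numpy as np

def extract_patches(img, size=7, step=1):
    """Collect all size x size patches by sliding a square window."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - size + 1, step):
        for j in range(0, W - size + 1, step):
            patches.append(img[i:i + size, j:j + size].ravel())
            coords.append((i, j))
    return np.array(patches), coords

def average_patches(patches, coords, shape, size=7):
    """Rebuild the image by averaging overlapping (restored) patches."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (i, j) in zip(patches, coords):
        acc[i:i + size, j:j + size] += p.reshape(size, size)
        cnt[i:i + size, j:j + size] += 1
    return acc / cnt

img = np.arange(64, dtype=float).reshape(8, 8)
P, C = extract_patches(img, size=7)       # 4 overlapping 7x7 patches
rec = average_patches(P, C, img.shape, size=7)
assert np.allclose(rec, img)  # averaging unmodified patches reproduces the image
```

In the actual algorithm each row of `P` would be sparse-coded and reconstructed before averaging.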
Generally, K-SVD learns the dictionary from random training examples taken from the image, so the learned atoms are weak at representing certain local structures, which makes the coding insufficiently sparse. This paper therefore makes two contributions. First, we propose a clustering-based dictionary learning scheme that trains a subdictionary for each cluster and then produces the universal dictionary from them. Second, we propose a novel objective function for sparse coding that adds a coding residual term to the traditional formulation. The details of the two contributions are introduced below.
3. A Novel Scheme for Image Restoration
3.1. Nonsubsampled Shearlet Transform. The nonsubsampled Shearlet transform [15] is a multiscale geometric analysis method advanced by Lim and Kutyniok. It consists of a Laplacian pyramid and directional filter banks, providing multiple directions and shift invariance. Owing to its parabolic scaling principle, the nonsubsampled Shearlet transform has excellent performance in capturing geometrical features and high-dimensional singularities, which makes it widely used in image restoration.
3.2. Similarity Measurement. Given the many advantages of the Shearlet transform, we propose a similarity measurement method. We implement the Shearlet transform with $l$ levels and construct a feature vector for each pixel as
$$V = (sv_{1,0}, \ldots, sv_{1,\varepsilon_1}, \ldots, sv_{l,0}, \ldots, sv_{l,\varepsilon_l})^T. \qquad (5)$$
Here $\varepsilon_l$ is the number of subbands in the $l$th level and $sv_{l,\varepsilon}$ is the corresponding Shearlet coefficient. Because of the anisotropy of the transform, the coefficient of the signal is large when the local geometric structure is similar to the basis function. Conversely, since noise is isotropic, its coefficients are small and uniform across the basis functions. For these two reasons the vector in (5) has good antinoise ability, so we use it to measure the similarity between two patches. Suppose the patch size is $5 \times 5$; we compute the vector in (5) for all 25 pixels. Then we construct subvectors $V_{\mathrm{sub},k}$ generated by the $k$th dimension of $V$:

$$V_{\mathrm{sub},k} = (V_1(k), V_2(k), \ldots, V_{25}(k))^T. \qquad (6)$$
Here $V_i(k)$ denotes the $k$th element of $V_i$ ($i$ is the index of the pixel within the patch, $i = 1, \ldots, 25$). Suppose $V^1_{\mathrm{sub},k}$ and $V^2_{\mathrm{sub},k}$ come from the two patches to be compared; we compute the similarity index $S_i$ as follows:
$$S_i = \exp\left( -\sum_k \frac{\| V^1_{\mathrm{sub},k} - V^2_{\mathrm{sub},k} \|_2^2}{v_k^2} \right). \qquad (7)$$
Here $v_k$ is called the Shearlet wave coefficient, $v_k = \gamma \sigma_k$ ($\sigma_k$ is the standard deviation of the coefficients in the $k$th subband). With the formulation in (7), we can measure the similarity between two patches better than with the Euclidean distance of gray values in the spatial domain.
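Assuming the per-pixel Shearlet feature vectors have already been computed, the similarity index of (7) is straightforward to evaluate. A minimal sketch; the array shapes, names, and toy weights are our assumptions, not the paper's code.

```python
import numpy as np

def similarity_index(V1, V2, v):
    """
    Similarity between two patches, per Eq. (7).
    V1, V2 : (25, K) arrays -- Shearlet feature vectors of the 25 pixels
             of each 5x5 patch (one K-dimensional vector per pixel).
    v      : (K,) array -- per-subband weights v_k = gamma * sigma_k.
    """
    # ||V1_sub,k - V2_sub,k||_2^2 for each subband dimension k
    diff = np.sum((V1 - V2) ** 2, axis=0)          # shape (K,)
    return np.exp(-np.sum(diff / v ** 2))

rng = np.random.default_rng(0)
V1 = rng.normal(size=(25, 8))      # 8 toy subband dimensions
V2 = rng.normal(size=(25, 8))
v = np.full(8, 2.0)                # toy weights v_k
assert np.isclose(similarity_index(V1, V1, v), 1.0)   # identical patches -> 1
assert similarity_index(V1, V2, v) < 1.0              # dissimilar patches -> < 1
```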
3.3. Nonlocal Dictionary Learning
3.3.1. Nonlocal Dictionary Learning. The restoration algorithm consists of two main parts: dictionary learning and sparse coding. In this paper we improve the algorithm with the similarity measurement method of Section 3.2. Unlike conventional methods, we do not simply select training patches from different example images to train the dictionary; instead, we select nonlocal similar patches from the degraded image we want to restore. We then cluster these patches to train cluster subdictionaries, which makes the universal dictionary adapt to local geometrical features better than global K-SVD. Moreover, the training data within a cluster are similar to each other, so there is no need to produce an overcomplete dictionary; each subdictionary size is
Figure 1: Restoration performance comparison on House ($\sigma = 15$): (a) Noisy, (b) Original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) Proposed.
half the size of the corresponding cluster. We now present the concrete description of the learning scheme.
Step 1. Implement the Shearlet transform on the image $Y$.

Step 2. Construct the Shearlet feature vector according to formula (5) (the patch size is $5 \times 5$).

Step 3. Calculate the standard deviation of the $k$th subband, $\sigma_k = \sqrt{(1/N^2)\sum_{i=1}^{N^2} (y_i - \bar{y})^2}$, and obtain the wave coefficient $v_k$; here $y_i$ and $\bar{y}$ are a Shearlet coefficient and the mean of the coefficients, and $N^2$ is the total number of coefficients in the $k$th subband.

Step 4. Cluster the patches by the method in [16] using the index $S_i$ of formula (7), producing clusters $\Omega_i$ ($i = 1, 2, \ldots, K$ is the cluster indicator).

Step 5. Take the patches in $\Omega_i$ as training data and learn each cluster subdictionary with K-SVD.

Step 6. By cascading all the cluster subdictionaries, we obtain the universal dictionary for the whole image.
Thanks to the transform's ability to capture local geometrical features, the patches in each set are highly similar in geometric structure, which gives the atoms of each subdictionary strong adaptability to local structure, so they can sufficiently represent the local geometrical features. Cascaded from all the subdictionaries, the universal dictionary achieves the goal of representing all kinds of features in the whole image.
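The cluster-then-train pipeline of Steps 4-6 can be sketched as below. This is a hedged stand-in, not the authors' implementation: scikit-learn has no K-SVD, so `MiniBatchDictionaryLearning` substitutes for the per-cluster K-SVD step, plain k-means plays the role of the clustering in [16], and all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def cluster_dictionary(patches, n_clusters=3, atoms_per_cluster=4, seed=0):
    """Cluster patches, learn one sub-dictionary per cluster, cascade them."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(patches)
    subdicts = []
    for c in range(n_clusters):
        members = patches[labels == c]
        dl = MiniBatchDictionaryLearning(n_components=atoms_per_cluster,
                                         random_state=seed)
        dl.fit(members)
        subdicts.append(dl.components_)   # (atoms_per_cluster, patch_dim)
    return np.vstack(subdicts)            # cascaded "universal" dictionary

rng = np.random.default_rng(1)
patches = rng.normal(size=(90, 25))       # 90 flattened 5x5 patches (toy data)
D = cluster_dictionary(patches)
assert D.shape == (12, 25)                # 3 clusters x 4 atoms each
```

Each subdictionary is kept deliberately small (here 4 atoms), reflecting the paper's point that within-cluster similarity removes the need for an overcomplete dictionary.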
We now give a simple example of the advantage of clustering-based dictionary learning. Consider the following set of training examples:

$$[100, 200, 0]^T, \quad [110\,\mathrm{randn}(1), 120\,\mathrm{randn}(1), 0]^T, \quad [100\,\mathrm{randn}(1), 80\,\mathrm{randn}(1), 0]^T, \quad [0, 150\,\mathrm{randn}(1), 200\,\mathrm{randn}(1)]^T. \qquad (8)$$

Here "randn" produces a Gaussian random variable. We take these training data and learn a dictionary with two atoms, using the proposed algorithm and K-SVD, respectively. With the maximum number of K-SVD iterations set to 10, the learned dictionaries are shown in Table 1.
As can be seen in Table 1, the atoms learned by the proposed algorithm have a structure more similar to that of the training data.
3.3.2. Sparse Coding. To improve the coding accuracy, we propose a novel optimization called coding residual
Figure 2: Restoration performance comparison on Barbara ($\sigma = 15$): (a) Noisy, (b) Original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) Proposed.
Table 1: Learned dictionaries with different methods.

KSVD:
[ -0.5630  -0.0459 ]
[ -0.7995   0.2835 ]
[ -0.2096  -0.9579 ]

Proposed algorithm:
[ -0.5375   0      ]
[ -0.8432   0.3294 ]
[  0.0000  -0.9442 ]
suppression optimization, which is more effective than plain $\ell_1$-norm sparse coding. When the dictionary is given, the $\ell_1$-norm sparse coding of the original signal can be realized as

$$\hat{\alpha}_x = \arg\min_{\alpha} \|x - D\alpha\|_2^2 + \lambda\|\alpha\|_1. \qquad (9)$$
In the restoration problem we can only observe $y$, so the sparse coding is rewritten as

$$\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \lambda\|\alpha\|_1. \qquad (10)$$
Comparing (9) and (10), we see that if we want to reconstruct $x$ with high accuracy, the coding $\hat{\alpha}_y$ should be as close as possible to the true sparse coding $\hat{\alpha}_x$. We therefore introduce the coding residual $\alpha_\delta$:

$$\alpha_\delta = \alpha_y - \alpha_x. \qquad (11)$$
We then change the objective function (10) into the residual suppression form

$$\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \lambda\sum_k \|\alpha_k\|_1 + \mu\sum_k \|\alpha_\delta^k\|_1, \qquad \alpha_\delta^k = \alpha_k - \eta_i, \qquad (12)$$
where $k$ is the index of the $k$th patch. Since the true sparse coding cannot be obtained in practice, $\eta_i$ serves as a good estimate of it and is calculated as a weighted average of the sparse codings of the patches in the same cluster; $\lambda$ and $\mu$ are regularization parameters. If the $k$th patch belongs to $\Omega_i$, we calculate $\eta_i$ as

$$\eta_i = \sum_p \omega_{ip}\,\hat{\alpha}_{ip}, \qquad \omega_{ip} = c\,\exp\left\{ -\|\hat{x}_i - \hat{x}_{ip}\|_2^2 \right\} \qquad (13)$$

($c$ is the normalization parameter).
The second term in the optimization (12) ensures local sparsity, that is, that only part of the dictionary's atoms are selected. But in our scheme each patch in a certain cluster selects one subdictionary, which
Figure 3: Restoration performance comparison on Bridge ($\sigma = 20$): (a) Noisy, (b) Original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) Proposed.
means that the coding over the other subdictionaries is zero. Our scheme therefore guarantees sparsity naturally; that is, we can remove the second term in (12) and rewrite it as

$$\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \mu\sum_k \|\alpha_\delta^k\|_1. \qquad (14)$$
With (14) we can compact the optimization to the coding residual constraint alone, which means that the coding obtained from the observed signal by (14) is close to that obtained from the original signal by (9).
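For a single patch, (14) is an $\ell_1$-regularized least-squares problem whose penalty is centered at $\eta$ rather than at zero, so an ISTA-style iteration [17] applies with the soft-thresholding step shifted to $\eta$. A hedged sketch, not the authors' solver; $D$, $y$, $\eta$, and $\mu$ are toy values chosen so the answer is easy to verify.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def residual_ista(y, D, eta, mu, n_iter=200):
    """
    Minimize ||y - D a||_2^2 + mu * ||a - eta||_1  (Eq. (14), one patch)
    by ISTA; the prox of the shifted l1 term soft-thresholds around eta.
    """
    L = 2 * np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = eta.copy()
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ a - y)
        a = eta + soft((a - grad / L) - eta, mu / L)
    return a

D = np.eye(3)
y = np.array([1.0, 0.1, -1.0])
a = residual_ista(y, D, eta=np.zeros(3), mu=0.4)
# with D = I and eta = 0 the minimizer is soft(y, mu/2) = [0.8, 0, -0.8]
assert np.allclose(a, [0.8, 0.0, -0.8])
```

A nonzero $\eta$ pulls the solution toward the cluster estimate instead of toward zero, which is exactly the residual-suppression effect the objective is designed for.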
3.3.3. Scheme Summary. The whole scheme consists of a two-level iterative algorithm; its brief steps are as follows.

Initialization: set the initial image $X = Y$ ($Y$ is the degraded image) and let $D^{(0)}$ be the complete DCT dictionary; calculate the initial sparse coding of each patch with any pursuit algorithm.

Outer loop:

(a) on the OL-th iteration, learn the dictionary $D^{(\mathrm{OL})}$ from $X$ with the algorithm proposed in Section 3.3.1 ($L$ is the maximum number of iterations, $\mathrm{OL} = 1, 2, \ldots, L$);

(b) compute the estimate set $\{\eta_i\}_{i=1}^{K}$ for all the clusters under $D^{(\mathrm{OL})}$;

(c) inner loop:
(i) for each patch, obtain its coding $\hat{\alpha}_y$ by (14), which can be solved with the method in [17];
(ii) repeat (i) until all patches are processed;

(d) estimate all the restored patches by $\hat{x} = D^{(\mathrm{OL})}\hat{\alpha}_y$;

(e) set $\mathrm{OL} = \mathrm{OL} + 1$ and repeat.
On one hand, $\eta_i$ approaches the true sparse coding more and more closely, which makes the residual suppression more effective. On the other hand, by alternating sparse coding and dictionary learning, the coding and the dictionary both improve and promote each other.
3.3.4. Parameter Selection. In conventional models the regularization parameter is generally a constant. To make (14) more effective, we derive a self-adaptive regularization parameter $\mu$ under the Bayesian framework.
For notational convenience we write $\delta = \alpha_k - \eta_i$ for the residual coding vector. Under the Bayesian framework, the MAP estimate of $\delta$ is

$$\hat{\delta} = \arg\max_{\delta} \log p(\delta \mid y) = \arg\max_{\delta} \left[ \log p(y \mid \delta) + \log p(\delta) \right]. \qquad (15)$$
Figure 4: Restoration performance comparison on Butterfly ($\sigma = 20$): (a) Noisy, (b) Original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) Proposed.
The likelihood $p(y \mid \delta)$ is

$$p(y \mid \delta) = p(y \mid \alpha, \eta) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left\{ -\frac{1}{2\sigma_n^2}\|y - D\alpha\|_2^2 \right\}, \qquad (16)$$

where $\sigma_n$ is the standard deviation of the noise.
As for $p(\delta)$, statistical experiments lead to an empirical model with an i.i.d. Laplacian distribution:

$$p(\delta) = \prod_k \prod_i p(\delta_k(i)) = \prod_k \prod_i \frac{1}{2\sigma_{k,i}} \exp\left\{ -\frac{|\delta_k(i)|}{\sigma_{k,i}} \right\}, \qquad (17)$$

where $\delta_k(i)$ is the $i$th element of the residual vector of the $k$th patch and $\sigma_{k,i}$ is the standard deviation of $\delta_k(i)$.
Combining with (15), we obtain

$$\hat{\alpha}_y = \arg\min_{\alpha} \frac{1}{2\sigma_n^2}\|y - D\alpha\|_2^2 + \sum_k \sum_i \frac{1}{\sigma_{k,i}} |\delta_k(i)|. \qquad (18)$$
For a given $\eta_i$ ($\alpha_k = \eta_i + \delta_k$), the optimal sparse coding $\alpha$ can be obtained as

$$\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \sum_k \sum_i \frac{2\sigma_n^2}{\sigma_{k,i}} |\delta_k(i)|. \qquad (19)$$
Compared with the regularization in (14), we can therefore set $\mu$ to the self-adaptive form

$$\mu_{k,i} = \frac{2\sigma_n^2}{\sigma_{k,i}}. \qquad (20)$$
4. Experimental Results and Analysis
To verify the performance of the proposed algorithm, we present comparative restoration experiments on image denoising. The comparison algorithms are the NSCT method of [6], SSBM3D [18], MK-SVD [14], and the algorithm proposed in this paper. The noisy images are generated by adding Gaussian noise with different standard deviations ($\sigma = 15, 20$). All test images are of size $256 \times 256$, and we set the parameters as follows: $\varepsilon = 3$ and $l = 3$ (Section 3.2), $K = 50$ (Section 3.3.1), and $p = 5$ (Section 3.3.2).
To evaluate the restored images objectively, we take PSNR as the indicator for the different algorithms. Meanwhile, we show part of the experimental results for visual inspection of the denoising performance. Suppose $x$ and $\hat{x}$ are the original
Table 2: The PSNR (dB) indicator with $\sigma = 15$.

Image      Noisy   NSCT    BM3D    K-SVD   Proposed
House      24.89   33.11   34.67   33.92   34.22
Barbara    24.63   30.49   32.33   31.40   31.97
Peppers    24.61   31.51   32.70   31.77   32.16
Bridge     24.70   27.71   28.83   28.55   28.62
Butterfly  24.63   29.17   30.64   29.89   30.18
Table 3: The PSNR (dB) indicator with $\sigma = 20$.

Image      Noisy   NSCT    BM3D    K-SVD   Proposed
House      22.10   31.78   33.76   32.67   32.89
Barbara    22.05   28.95   30.68   29.75   30.19
Peppers    22.09   30.12   31.36   30.46   31.12
Bridge     22.13   26.41   27.25   26.87   27.02
Butterfly  21.11   27.65   29.46   28.32   28.96
image and the restored image, respectively; the definition of PSNR is

$$\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\frac{1}{N}\|x - \hat{x}\|_2^2}, \qquad (21)$$

where $N$ is the number of pixels.
Figures 1 and 2 show the restoration performance with standard deviation $\sigma = 15$, while Figures 3 and 4 show the performance with $\sigma = 20$. For the other images we report only the PSNR values, given in Tables 2 and 3.
From the local detail of each group we can draw some conclusions. Although all four methods accomplish the denoising task, there are differences between them. For the NSCT method, the ability to capture structure does not adapt to different images because of its fixed dictionary; moreover, some scratches appear in the restoration results that do not exist in the original image, and its PSNR is the lowest of the four methods. BM3D shows an advantage in PSNR over the other methods, but its results are smoothed excessively, which loses much detail texture. Because its dictionary is learned from random example patches taken from different images, the K-SVD algorithm generates some visual artifacts and cannot recover local geometric features sufficiently, so its results are better only than NSCT's among the four methods. The method proposed in this paper shows advantages in both restoration quality and PSNR: although its PSNR is lower than BM3D's, its ability to recover detail is stronger than that of the other three methods.
5. Conclusion
Since abundant geometric information and self-similarity are important properties to exploit in image restoration by sparse representation, this paper proposed a novel scheme that restores the image by nonlocal sparse reconstruction with similarity measurement. We cluster the patches of the degraded image using a similarity measure built on the Shearlet feature vector, which is good at capturing local geometric structure, and then use the clusters to train cluster dictionaries and to obtain a good estimate of the true sparse coding of the original image. Additionally, we derived the regularization parameter under the Bayesian framework. Through cluster dictionary learning and coding residual suppression, the proposed scheme shows advantages in both visual quality and PSNR compared with leading denoising methods. Beyond image denoising, the proposed model can be extended to other restoration tasks such as deblurring and superresolution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (nos. 61201237 and 61301095), the Natural Science Foundation of Heilongjiang Province of China (no. QC2012C069), and the Fundamental Research Funds for the Central Universities (no. HEUCFZ1129). This paper is also funded by the Defense Pre-research Foundation Project of the shipbuilding industry (no. 10J316).
References
[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34-81, 2009.
[2] G. G. Bhutada, R. S. Anand, and S. C. Saxena, "Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform," Digital Signal Processing, vol. 21, no. 1, pp. 118-130, 2011.
[3] L. Shang, P.-G. Su, and T. Liu, "Denoising MMW image using the combination method of contourlet and KSC shrinkage," Neurocomputing, vol. 83, no. 15, pp. 229-233, 2012.
[4] S. Yin, L. Cao, Y. Ling, and G. Jin, "Image denoising with anisotropic bivariate shrinkage," Signal Processing, vol. 91, no. 8, pp. 2078-2090, 2011.
[5] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089-3101, 2006.
[6] J. Jia and L. Chen, "Using normal inverse Gaussian model for image denoising in NSCT domain," Acta Electronica Sinica, vol. 39, no. 7, pp. 1563-1568, 2011.
[7] R. Rubinstein, A. M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045-1057, 2010.
[8] Y. He, T. Gan, W. Chen, and H. Wang, "Multi-stage image denoising based on correlation coefficient matching and sparse dictionary pruning," Signal Processing, vol. 92, no. 1, pp. 139-149, 2012.
[9] K. Labusch, E. Barth, and T. Martinetz, "Soft-competitive learning of sparse codes and its application to image reconstruction," Neurocomputing, vol. 74, no. 9, pp. 1418-1428, 2011.
[10] Z. Xue, D. Han, and J. Tian, "Fast and robust reconstruction approach for sparse fluorescence tomography based on adaptive matching pursuit," in Proceedings of the IEEE Conference on Asia Communications and Photonics, pp. 1-6, November 2011.
[11] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310-316, 2010.
[12] T. Blumensath and M. E. Davies, "Stagewise weak gradient pursuits," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333-4346, 2009.
[13] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, 2006.
[14] J. Mairal, G. Sapiro, and M. Elad, "Learning multiscale sparse representations for image and video restoration," Multiscale Modeling and Simulation, vol. 7, no. 1, pp. 214-241, 2008.
[15] W.-Q. Lim, "The discrete shearlet transform: a new directional transform and compactly supported shearlet frames," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166-1180, 2010.
[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881-892, 2002.
[17] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413-1457, 2004.
[18] M. Poderico, S. Parrilli, G. Poggi, and L. Verdoliva, "Sigmoid shrinkage for BM3D denoising algorithm," in Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP '10), pp. 423-426, October 2010.
2 Mathematical Problems in Engineering
a new objective function for sparse coding to realize the highaccuracy image restoration
2 Image Restoration with Dictionary Learning
Suppose that the model among the observation signal 119910original signal 119909 and noise 119899 satisfied
119910 = 119909 + 119899 (1)
According to the sparse representation theory in [1] thesparse coding is equal to solving the optimization problem in(2)
min 1205720
st 1003817100381710038171003817119910 minus 11986312057210038171003817100381710038172
2le 120576
(2)
In which 119863 is the sparse dictionary of 119909 and 120572 is thecoding vectorThen we can get the original signal by 119909 = 119863120576 is a constant with the noise standard deviation 120576 = (119862120590)2
For the general restoration algorithm based on the sparsetheory dictionary 119863 is fixed So a new design for dictionarylearning called K-SVD is proposed in reference [13] andgood results have been achieved in image restoration Inthe K-SVD we implement the sparse coding and dictionarylearning simultaneously And the optimization task in (2) canbe changed to
(119909 119863) = arg min
120572119863
1003817100381710038171003817119910 minus 119863120572
10038171003817100381710038172
2+ 1205821205720 119909 = 119863
119909 (3)
Elad et al divided the image into many patches for notincreasing the dictionary dimension and got the block-basedsparse coding
119894119895with
119894119895= arg min 10038171003817100381710038171003817119910119894119895 minus 119863120572
10038171003817100381710038171003817
2
2+ 1205821205720 (4)
Inwhich119910119894119895presents the patchwhose central is (119894 119895)The
size of patch is 7 times 7 They are gained by sliding a squaredwindows in the image Then we can get the restored imageby averaging these patches
Generally K-SVD learns the dictionary with some ran-dom training examples from the image so the learned atomsshow some weakness in presenting certain local structure inimage which makes the coding not sparsity enough So thepaper has two contributions The first is that we proposeda dictionary learning scheme based on clustering whichtrained subdictionary for each cluster and then producedthe universal dictionary with them Secondly we proposeda novel objective function for sparse coding which added acoding residual item compared to the traditional ones Thedetail for the two contributions will be introduced as follows
3 A Novel Scheme for Image Restoration
31 Nonsubsampling Shearlet Transform NonsubsamplingShearlet [15] is a multiscale geometric analytical methodadvanced by Lim and Kutyniok It consisted of laplacian
pyramid and direction filter bands with multidirections andshift invariant For its parabola scale principle NonsamplingShearlet has excellent performances on capturing geometricalfeature and singularity in high-dimension which makes itwidely used in image restoration
32 Similarity Measurement As to the many advantageswith Shearlet we propose a similarity measurement methodImplement the Shearlet transform with 119897 levels and constructthe feature vector for each pixel as
119881 = (119904V10 119904V11205761 119904V1198970 119904V119897120576119897)119879
(5)
Here 120576119897is the number of subband in 119897th level and
119904V119897120576 is the Shearlet coefficient With the anisotropy featurethe coefficient for signal is large while the local geometricstructure is similar to the basis function Conversely thecoefficient is small while the noise has the isotropy featureso the coefficients are uniform in each primary function Dueto the above two reasons the vector in (5) has the betterantinoise ability according to which we take (5) to measurethe similarity between the two patches Suppose the size ofpatch is 5 times 5 we get the vector in (5) for all the 25 pixelsThenwe construct some subvector119881sub119896 generated by the 119896thdimension of 119881
\[
V_{\mathrm{sub},k} = \left( V_1(k), V_2(k), \ldots, V_{25}(k) \right)^{T},
\tag{6}
\]

in which V_i(k) denotes the k-th element of V_i (i = 1, …, 25 is the order number of the pixel within the patch). Suppose V¹_sub,k and V²_sub,k come from the two patches to be measured; we compute the index S_i as
\[
S_i = \exp\left( - \sum_{k} \frac{\left\| V^{1}_{\mathrm{sub},k} - V^{2}_{\mathrm{sub},k} \right\|_2^2}{v_k^2} \right).
\tag{7}
\]
Here v_k is called the Shearlet wave coefficient, v_k = γσ_k, where σ_k is the standard deviation of the coefficients in the k-th subband. With the index in (7) we can measure the similarity between two patches better than with the Euclidean distance of gray values in the spatial domain.
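As an illustration, the similarity index (7) can be sketched in Python as follows. The per-pixel feature vectors are assumed to be precomputed by some Shearlet toolbox, and γ is treated as a tuning constant; both are assumptions here, not fixed by the text.

```python
import numpy as np

def similarity_index(feats1, feats2, sigma_k, gamma=1.0):
    """Similarity S_i of Eq. (7) between two 5x5 patches.

    feats1, feats2: arrays of shape (25, K), one K-dimensional Shearlet
    feature vector (Eq. (5)) per pixel, so column k is the subvector
    V_sub,k of Eq. (6).
    sigma_k: shape (K,), coefficient standard deviation per subband.
    gamma is a hypothetical tuning constant in v_k = gamma * sigma_k.
    """
    v_k = gamma * np.asarray(sigma_k, dtype=float)
    diff = feats1 - feats2                          # (25, K)
    # Sum over k of ||V1_sub,k - V2_sub,k||_2^2 / v_k^2
    s = np.sum(np.sum(diff ** 2, axis=0) / v_k ** 2)
    return float(np.exp(-s))
```

Identical patches give S_i = 1, and the index decays toward 0 as the Shearlet features of the two patches diverge.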
3.3. Nonlocal Dictionary Learning

3.3.1. Nonlocal Dictionary Learning. The restoration algorithm consists of two main parts, dictionary learning and sparse coding. In this paper we improve the algorithm with the similarity measurement method of Section 3.2. Unlike conventional methods, we do not simply select training patches from different example images to train the dictionary; instead, we select nonlocal similar patches from the degraded image that we want to restore. We then cluster these patches and train a subdictionary for each cluster, which makes the universal dictionary adapt to local geometric features better than global K-SVD. Moreover, since the training data within a cluster are similar to each other, there is no need to produce an overcomplete dictionary, so each subdictionary size is
Mathematical Problems in Engineering 3
Figure 1: Restoration performance comparison on House (σ = 15). (a) Noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
half of the size of the corresponding cluster. We now present the concrete description of the learning scheme as follows.
Step 1. Implement the Shearlet transform on the image Y.
Step 2. Construct the Shearlet feature vector according to formula (5). (The patch size is 5 × 5.)
Step 3. Calculate the standard deviation of the k-th subband, \( \sigma_k = \sqrt{(1/N^2) \sum_{i=1}^{N^2} (y_i - \bar{y})^2} \), and obtain the wave coefficient v_k = γσ_k. Here y_i and ȳ are the Shearlet coefficients and their mean value, and N² is the total number of coefficients in the k-th subband.
Step 4. Cluster the patches by the method in [16] with the index S_i of formula (7), producing the clusters Ω_i (i = 1, 2, …, K is the cluster indicator).
Step 5. Take the patches in Ω_i as the training data and learn the cluster subdictionary with K-SVD.
Step 6. By cascading all the cluster subdictionaries, obtain the universal dictionary for the whole image.
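A minimal sketch of Steps 4–6, assuming vectorized patches and per-patch feature summaries are already available. A plain k-means stands in for the clustering method of [16], and an SVD of each cluster's patch matrix stands in for the per-cluster K-SVD of Step 5; both substitutions are simplifications, not the authors' exact procedure.

```python
import numpy as np

def simple_kmeans(X, k, n_iter=20, seed=0):
    """Minimal k-means (a stand-in for the clustering method of [16])."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                     # nearest-center assignment
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels

def learn_universal_dictionary(features, patches, n_clusters=4):
    """Clustering-based dictionary learning (Steps 4-6), sketched.

    features: (n_patches, d) per-patch feature summaries for clustering.
    patches:  (n_patches, m) vectorized image patches.
    Each subdictionary keeps at most half as many atoms as its cluster
    has members, as stated in the text.
    """
    labels = simple_kmeans(features, n_clusters)
    sub_dicts = []
    for c in range(n_clusters):
        cluster = patches[labels == c]
        if len(cluster) == 0:
            continue
        n_atoms = max(1, len(cluster) // 2)      # half the cluster size
        # Orthonormal atoms spanning the cluster's dominant directions
        u, _, _ = np.linalg.svd(cluster.T, full_matrices=False)
        sub_dicts.append(u[:, :min(n_atoms, u.shape[1])])
    # Step 6: cascade the subdictionaries into the universal dictionary
    return np.hstack(sub_dicts), labels
```

Because the atoms come from SVDs of geometrically coherent clusters, each subdictionary is tuned to one type of local structure, which is the property the cascading step relies on.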
Owing to this ability to capture local geometric features, the patches in the same set are highly similar in their geometric structure, which gives the atoms of each subdictionary strong adaptability to local structure, so they can sufficiently represent the local geometric features. Cascaded from all the subdictionaries, the universal dictionary achieves the goal of representing all kinds of features in the whole image.
We now give a simple example to show the advantage of clustering-based dictionary learning. Consider the following set of training examples:
\[
\begin{aligned}
&[100,\ 200,\ 0]^{T}, \\
&[110 \cdot \mathrm{randn}(1),\ 120 \cdot \mathrm{randn}(1),\ 0]^{T}, \\
&[100 \cdot \mathrm{randn}(1),\ 80 \cdot \mathrm{randn}(1),\ 0]^{T}, \\
&[0,\ 150 \cdot \mathrm{randn}(1),\ 200 \cdot \mathrm{randn}(1)]^{T}.
\end{aligned}
\tag{8}
\]
Here "randn" produces a Gaussian random variable. We take the above two groups of training data to learn a dictionary with two atoms, adopting the proposed algorithm and K-SVD, respectively. With the maximum number of K-SVD iterations set to 10, the learned dictionaries are shown in Table 1.
As can be seen in Table 1, the atoms learned by the proposed algorithm have structure more similar to that of the training data.
3.3.2. Sparse Coding. To improve the coding accuracy, we propose a novel optimization, called coding residual
Figure 2: Restoration performance comparison on Barbara (σ = 15). (a) Noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
Table 1: Learned dictionaries with different methods.

Method               Learned dictionary
K-SVD                [ −0.5630  −0.0459 ]
                     [ −0.7995   0.2835 ]
                     [ −0.2096  −0.9579 ]
Proposed algorithm   [ −0.5375   0      ]
                     [ −0.8432   0.3294 ]
                     [  0.0000  −0.9442 ]
suppression optimization, which is more effective than l1-norm sparse coding. When the dictionary is given, l1-norm sparse coding of the original signal can be written as

\[
\hat{\alpha}_x = \arg\min_{\alpha} \| x - D\alpha \|_2^2 + \lambda \|\alpha\|_1.
\tag{9}
\]
In the restoration problem we can only obtain the observation y, so the sparse coding must be rewritten as

\[
\hat{\alpha}_y = \arg\min_{\alpha} \| y - D\alpha \|_2^2 + \lambda \|\alpha\|_1.
\tag{10}
\]
Comparing (9) and (10), we see that if we want to reconstruct x with high accuracy, the coding α̂_y should be as close as possible to the true sparse coding α_x. We therefore introduce the coding residual α_δ:

\[
\alpha_\delta = \alpha_y - \alpha_x.
\tag{11}
\]
Then we change the objective function (10) into the residual-suppression form

\[
\hat{\alpha}_y = \arg\min_{\alpha} \| y - D\alpha \|_2^2
+ \lambda \sum_{k} \|\alpha_k\|_1
+ \mu \sum_{k} \|\alpha_\delta^{k}\|_1,
\qquad
\alpha_\delta^{k} = \alpha_k - \eta_i,
\tag{12}
\]
where k indexes the k-th patch. In practice the true sparse coding cannot be obtained, so η_i serves as a good estimate of it, computed as a weighted average of the sparse codings of the patches in the same cluster; λ and μ are regularization parameters. If the k-th patch belongs to Ω_i, we calculate η_i as

\[
\eta_i = \sum_{p} \omega_{i,p}\,\alpha_{i,p},
\qquad
\omega_{i,p} = c \exp\left( - \left\| x_i - x_{i,p} \right\|_2^2 \right),
\tag{13}
\]

where c is a normalization parameter.
The second term in the optimization (12) is used to ensure local sparsity, that is, that only part of the atoms of the dictionary are selected. But in our scheme we select one subdictionary for each patch in a certain cluster, which
Figure 3: Restoration performance comparison on Bridge (σ = 20). (a) Noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
means that the coding over the other subdictionaries is zero. So our scheme guarantees sparsity naturally; that is, we can remove the second term in (12) and rewrite it as

\[
\hat{\alpha}_y = \arg\min_{\alpha} \| y - D\alpha \|_2^2
+ \mu \sum_{k} \|\alpha_\delta^{k}\|_1.
\tag{14}
\]
With (14) the optimization is compacted to the coding-residual constraint alone, which means that the coding obtained from the observed signal by (14) is close to that obtained from the original signal by (9).
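Problem (14) is an l1-regularized least-squares problem per patch; with the substitution b = α − η_i it reduces to a standard form that the iterative soft thresholding of [17] solves. A minimal sketch follows; the step size and iteration count are illustrative choices, not taken from the text.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_residual_coding(D, y, eta, mu, n_iter=200):
    """Minimize ||y - D a||_2^2 + mu * ||a - eta||_1 (Eq. (14), one patch)
    by iterative soft thresholding [17] on the shifted variable b = a - eta."""
    r = y - D @ eta                     # residual target for b
    L = np.linalg.norm(D, 2) ** 2       # spectral norm squared, controls step
    b = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ b - r)        # half-gradient of ||r - D b||_2^2
        b = soft_threshold(b - grad / L, mu / (2 * L))
    return eta + b                      # back-substitute a = eta + b
```

When η_i already codes the patch exactly, the residual variable b stays at zero and the solver returns η_i unchanged; otherwise it trades data fit against the l1 penalty on the coding residual.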
3.3.3. Scheme Summary. The whole scheme consists of a two-level iterative algorithm; we present its brief steps as follows.
Initialization: set the initial image X = Y (Y is the degraded image), let D^(0) be the complete DCT dictionary, and calculate the initial sparse coding for each patch with any pursuit algorithm.
Outer Loop

(a) On the OL-th iteration, learn the dictionary D^(OL) from X with the algorithm proposed in Section 3.3.1 (L is the maximum number of iterations, OL = 1, 2, …, L).

(b) Compute the estimate set {η_i}_{i=1}^{K} for all the clusters under D^(OL).

(c) Inner Loop:

(i) for each patch, obtain its coding α̂_y from (14), which can be solved with the method in [17];
(ii) repeat (i) until all the patches are processed.

(d) Estimate all the restored patches by x̂ = D^(OL) α̂_y.

(e) Set OL = OL + 1 and repeat.
On the one hand, η_i approaches the true sparse coding more and more closely, which makes the residual suppression more effective. On the other hand, by alternating sparse coding and dictionary learning, the coding and the dictionary are both improved and promote each other.
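The control flow above can be sketched as a small driver. The three callables are hypothetical stand-ins for the dictionary learning of Section 3.3.1, the cluster estimates of Eq. (13), and the per-patch solver of Eq. (14); only the outer/inner loop structure is taken from the text.

```python
import numpy as np

def two_level_restore(Y, learn_dict, estimate_eta, code_patch, n_outer=3):
    """Skeleton of the two-level iterative scheme of Section 3.3.3.

    Y: (n_patches, m) vectorized degraded patches.
    learn_dict(X) -> D of shape (m, n_atoms): stand-in for Section 3.3.1.
    estimate_eta(X, D) -> cluster estimates: stand-in for Eq. (13).
    code_patch(D, y, eta) -> coding alpha: stand-in for Eq. (14) / [17].
    """
    X = Y.copy()                                   # initialization: X = Y
    for _ in range(n_outer):                       # outer loop, steps (a)-(e)
        D = learn_dict(X)                          # (a) learn D^(OL) from X
        eta = estimate_eta(X, D)                   # (b) cluster estimates
        codes = np.stack([code_patch(D, y, eta)    # (c) inner loop over
                          for y in X])             #     all patches
        X = codes @ D.T                            # (d) restore x = D * alpha
    return X, D
```

With trivial stand-ins (an identity dictionary and pass-through coding), the patches are left unchanged, which makes the skeleton easy to unit-test before plugging in the real components.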
3.3.4. Parameter Selection. In conventional models the regularization parameter is generally a constant. To make (14) more effective, we derive a self-adaptive regularization parameter μ under the Bayesian framework.
For convenience, write δ = α_k − η_i for the residual coding vector. Under the Bayesian framework, the MAP estimate of δ is

\[
\hat{\delta} = \arg\max_{\delta} \log p(\delta \mid y)
= \arg\max_{\delta} \left[ \log p(y \mid \delta) + \log p(\delta) \right].
\tag{15}
\]
Figure 4: Restoration performance comparison on Butterfly (σ = 20). (a) Noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
The likelihood p(y | δ) is

\[
p(y \mid \delta) = p(y \mid \alpha, \eta)
= \frac{1}{\sqrt{2\pi}\,\sigma_n}
\exp\left( - \frac{1}{2\sigma_n^2} \| y - D\alpha \|_2^2 \right),
\tag{16}
\]

where σ_n is the standard deviation of the noise.
As for p(δ), statistical experiments suggest the empirical model of i.i.d. Laplacian distributions

\[
p(\delta) = \prod_{k} \prod_{i} p(\delta_k(i))
= \prod_{k} \prod_{i} \frac{1}{2\sigma_{k,i}}
\exp\left( - \frac{|\delta_k(i)|}{\sigma_{k,i}} \right),
\tag{17}
\]

in which δ_k(i) is the i-th element of the residual vector of the k-th patch and σ_{k,i} is the standard deviation of δ_k(i).
Combining with (15), we obtain

\[
\hat{\alpha}_y = \arg\min_{\alpha}
\frac{1}{2\sigma_n^2} \| y - D\alpha \|_2^2
+ \sum_{k} \sum_{i} \frac{1}{\sigma_{k,i}} \left| \delta_k(i) \right|.
\tag{18}
\]
For a given η_i (α_k = η_i + δ_k), the optimal sparse coding can be obtained from

\[
\hat{\alpha}_y = \arg\min_{\alpha}
\| y - D\alpha \|_2^2
+ \sum_{k} \sum_{i} \frac{2\sigma_n^2}{\sigma_{k,i}} \left| \delta_k(i) \right|.
\tag{19}
\]
Compared with the regularization in (14), we can therefore set μ to the self-adaptive form

\[
\mu_{k,i} = \frac{2\sigma_n^2}{\sigma_{k,i}}.
\tag{20}
\]
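A small numerical sketch of (20). The text does not specify how σ_{k,i} is estimated, so taking the per-component standard deviation of the current residuals over a cluster's patches is an assumption made here for illustration.

```python
import numpy as np

def adaptive_mu(delta, sigma_n, eps=1e-8):
    """Self-adaptive regularization of Eq. (20).

    delta: (n_patches, n_atoms) current coding residuals alpha_k - eta_i
           for the patches of one cluster.
    sigma_n: noise standard deviation.
    sigma_{k,i} is estimated (an assumption) as the standard deviation of
    each residual component over the cluster's patches.
    """
    sigma_ki = np.std(delta, axis=0, keepdims=True)
    return 2.0 * sigma_n ** 2 / (sigma_ki + eps)   # eps guards against 0
```

Components whose residuals vary little receive a large μ (strong suppression), while strongly varying components are penalized less, which is exactly the adaptivity (20) is meant to provide.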
4. Experimental Results and Analysis
To verify the performance of the algorithm proposed in this paper, we present comparative restoration experiments on image denoising. The compared algorithms are the NSCT method in [6], SSBM3D in [18], MK-SVD in [14], and the proposed algorithm. The noisy images are generated by adding Gaussian noise with different standard deviations (σ = 15, 20). All test images are 256 × 256, and the parameters are set as follows: ε = 3 and l = 3 (mentioned in Section 3.2), K = 50 (mentioned in Section 3.3.1), and p = 5 (mentioned in Section 3.3.2).
For an objective evaluation of the restored images, we take the PSNR as the indicator for the different algorithms; meanwhile, we show part of the experimental results for visual comparison of denoising performance. Let x and x̂ be the original
Table 2: The PSNR (dB) indicator with σ = 15.

Image     | Noisy | NSCT  | BM3D  | K-SVD | Proposed
House     | 24.89 | 33.11 | 34.67 | 33.92 | 34.22
Barbara   | 24.63 | 30.49 | 32.33 | 31.40 | 31.97
Peppers   | 24.61 | 31.51 | 32.70 | 31.77 | 32.16
Bridge    | 24.70 | 27.71 | 28.83 | 28.55 | 28.62
Butterfly | 24.63 | 29.17 | 30.64 | 29.89 | 30.18
Table 3: The PSNR (dB) indicator with σ = 20.

Image     | Noisy | NSCT  | BM3D  | K-SVD | Proposed
House     | 22.10 | 31.78 | 33.76 | 32.67 | 32.89
Barbara   | 22.05 | 28.95 | 30.68 | 29.75 | 30.19
Peppers   | 22.09 | 30.12 | 31.36 | 30.46 | 31.12
Bridge    | 22.13 | 26.41 | 27.25 | 26.87 | 27.02
Butterfly | 21.11 | 27.65 | 29.46 | 28.32 | 28.96
image and the restored image, respectively; then the PSNR is defined as

\[
\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{(1/N)\,\| x - \hat{x} \|_2^2},
\tag{21}
\]

where N is the number of pixels.
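The PSNR definition above can be implemented in a few lines; the 1/N mean-squared-error normalization is the standard convention for 8-bit images.

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR of Eq. (21) for 8-bit images (peak value 255)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    mse = np.mean((x - x_hat) ** 2)        # (1/N) * ||x - x_hat||_2^2
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For example, a uniform error of 15 gray levels (the σ = 15 noise level) gives about 24.6 dB, close to the "Noisy" column of Table 2.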
Figures 1 and 2 show the restoration performance at standard deviation σ = 15, while Figures 3 and 4 show the performance at σ = 20. For the other images we report only the PSNR values, in Tables 2 and 3.
From the local detail in each group, some conclusions can be drawn. Though all four methods accomplish the denoising task, there are differences among them. For the NSCT method, the ability to capture structure does not adapt to different images because of its fixed dictionary; additionally, some scratches appear in the restoration results that do not exist in the original image, and its PSNR is the lowest among the four methods. Compared with the other methods, BM3D shows an advantage in PSNR value, but its restoration results are excessively smoothed, which loses much detail texture. Because its dictionary is learned from random example patches of different images, the K-SVD algorithm generates some visual artifacts and cannot sufficiently recover local geometric features, so its restoration results are better only than those of NSCT among the four methods. The method proposed in this paper shows advantages in both restoration performance and PSNR value: though its PSNR is lower than that of BM3D, its ability to recover detail is stronger than that of the other three methods.
5. Conclusion
Since abundant geometric information and self-similarity are important properties to exploit in image restoration by sparse representation, this paper proposed a novel scheme that restores the image by nonlocal sparse reconstruction with similarity measurement. We cluster the patches of the degraded image by a similarity measurement based on the Shearlet feature vector, which captures local geometric structure well, then use them to train the cluster dictionaries and to obtain a good estimate of the true sparse coding of the original image. Additionally, we derived the regularization parameter under the Bayesian framework. Through the cluster dictionary learning and the coding residual suppression, the proposed scheme shows advantages in both visual performance and PSNR value compared with the leading denoising methods. Besides image denoising, the proposed model can be extended to other restoration tasks such as deblurring and superresolution.
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work is supported by the National Natural Science Foundation of China (nos. 61201237 and 61301095), the Natural Science Foundation of Heilongjiang Province of China (no. QC2012C069), and the Fundamental Research Funds for the Central Universities (no. HEUCFZ1129). This paper is also funded by the Defense Pre-Research Foundation Project of Shipbuilding Industry Science (no. 10J316).
References
[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
[2] G. G. Bhutada, R. S. Anand, and S. C. Saxena, "Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform," Digital Signal Processing, vol. 21, no. 1, pp. 118–130, 2011.
[3] L. Shang, P.-G. Su, and T. Liu, "Denoising MMW image using the combination method of contourlet and KSC shrinkage," Neurocomputing, vol. 83, no. 15, pp. 229–233, 2012.
[4] S. Yin, L. Cao, Y. Ling, and G. Jin, "Image denoising with anisotropic bivariate shrinkage," Signal Processing, vol. 91, no. 8, pp. 2078–2090, 2011.
[5] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[6] J. Jia and L. Chen, "Using normal inverse Gaussian model for image denoising in NSCT domain," Acta Electronica Sinica, vol. 39, no. 7, pp. 1563–1568, 2011.
[7] R. Rubinstein, A. M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045–1057, 2010.
[8] Y. He, T. Gan, W. Chen, and H. Wang, "Multi-stage image denoising based on correlation coefficient matching and sparse dictionary pruning," Signal Processing, vol. 92, no. 1, pp. 139–149, 2012.
[9] K. Labusch, E. Barth, and T. Martinetz, "Soft-competitive learning of sparse codes and its application to image reconstruction," Neurocomputing, vol. 74, no. 9, pp. 1418–1428, 2011.
[10] Z. Xue, D. Han, and J. Tian, "Fast and robust reconstruction approach for sparse fluorescence tomography based on adaptive matching pursuit," in Proceedings of the IEEE Conference on Asia Communications and Photonics, pp. 1–6, November 2011.
[11] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310–316, 2010.
[12] T. Blumensath and M. E. Davies, "Stagewise weak gradient pursuits," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333–4346, 2009.
[13] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[14] J. Mairal, G. Sapiro, and M. Elad, "Learning multiscale sparse representations for image and video restoration," Multiscale Modeling and Simulation, vol. 7, no. 1, pp. 214–241, 2008.
[15] W.-Q. Lim, "The discrete shearlet transform: a new directional transform and compactly supported shearlet frames," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166–1180, 2010.
[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881–892, 2002.
[17] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
[18] M. Poderico, S. Parrilli, G. Poggi, and L. Verdoliva, "Sigmoid shrinkage for BM3D denoising algorithm," in Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP '10), pp. 423–426, October 2010.
Mathematical Problems in Engineering 3
(a) Noisy (b) Original (c) NSCT
(d) SSBM3D (e) MK-SVD (f) Proposed
Figure 1 Restoration performance comparison on house (120590 = 15)
half of the size of corresponding cluster Now we present theconcrete description for learning scheme as follows
Step 1 Implement the Shearlet transform to the image 119884
Step 2 Construct the Shearlet feature vector according to theformula (5) (The patch size is 5 times 5)
Step 3 Calculate the standard deviation of 119896th subband (120590119896=
(11198732) sum1198732
119894=1(119910119894minus 119910)2) and gain the wave coefficient V
119896 119910119894and
119910 is the shearlet coefficient and themean value of coefficients119873 is the total number of coefficients in 119896th subband
Step 4 Cluster the patches by method in [16] with the index119878119894 in formula (6) and then produce the cluster Ω
119894(119894 =
1 2 119870 is the cluster indicator)
Step 5 Take the patches in Ω119894as the training data and
learning the cluster subdictionary with K-SVD
Step 6 By cascading all the cluster dictionaries we can getthe universal dictionary for the whole image
With the ability of capturing the local geometrical featurethe patches in the same set are highly similar by theirgeometry structure which makes atoms in subdictionaryhave the strong adaptability to local structure So they can
sufficiently present the local geometrical feature Cascaded byall the subdictionaries the universal dictionary realizes thegoal that presents all kinds of features in the whole image
Now we show a simple example to show the advantage ofclustering-based dictionary learning We list a set of trainingexamples
[100 200 0]119879
[110lowastrandn (1) 120lowastrandn (1) 0]119879
[100lowastrandn (1) 80lowastrandn (1) 0]119879
[0 150lowastrandn (1) 200lowastrandn (1)]119879
(8)
Here ldquorandnrdquo means produce a Gaussian stochasticvariable and then we take the above two groups of trainingdata to learn a dictionary with two atoms We adopt theproposed algorithm and K-SVD respectively Set the Maxi-mum iteration number with KSVD to be 10 and we show thelearned dictionary in Table 1
As can be seen in Table 1 the atoms by proposedalgorithm have the more similar structure with the trainingdata
332 Sparse Coding For improving the coding accuracywe proposed a novel optimization called coding residual
4 Mathematical Problems in Engineering
(a) Noisy (b) Original (c) NSCT
(d) SSBM3D (e) MK-SVD (f) Proposed
Figure 2 Restoration performance comparison on Barbara (120590 = 15)
Table 1 Learned Dictionary with different methods
Method Learned Dictionary
KSVD [[
[
minus05630 minus00459
minus07995 02835
minus02096 minus09579
]]
]
Proposed algorithm [[
[
minus05375 0
minus08432 03294
00000 minus09442
]]
]
suppression optimization which is more sufficient than 1198971-
norm sparse coding When dictionary is given the 1198971-norm
sparse coding for original signal can be realized as follows
119909= arg min 119909 minus 1198631205722 + 1205821205721 (9)
In the restoration problem we can only get the observa-tion 119910 So the sparse coding can be rewritten as
119910= arg min 1003817100381710038171003817119910 minus 119863120572
10038171003817100381710038172+ 1205821205721 (10)
Comparing (9) and (10) we can see that if we want toreconstruct 119909 with high accuracy the coding
119910is expected
to be as close as possible to the true sparse coding 119909 So we
introduce the residual coding 120572120575as follows
120572120575= 120572119910minus 120572119909 (11)
Then we change the object function (10) into the residualsuppression form
119910= arg min1003817100381710038171003817119910 minus 119863120572
10038171003817100381710038172+ 120582sum
119896
100381710038171003817100381712057211989610038171003817100381710038171 + 120583sum
119896
10038171003817100381710038171003817120572119896
120575
100381710038171003817100381710038171
120572119896
120575= 120572119896minus 120578119894
(12)
where 119896 is the indicator of 119896th patch In addition we cannotget the true sparse coding practically so the 120578
119894is a good
estimation of the true sparse coding and can be calculatedfromweighted average of sparse coding of patches in the samecluster 120582 and 120583 are the regularization parameters If 119896th patchis fromΩ
119894 we calculate 120578
119894
120578119894= sum
119901
120596119894119901119894119901
120596119894119901= 119888 exp minus10038171003817100381710038171003817119909119894 minus 119909119894119901
10038171003817100381710038171003817
2
2
(119888 is the normalization parameter)
(13)
The second term in optimization equation (12) is usedto ensure the local sparsity that only part of atoms areselected for the dictionary But in our scheme we selectone subdictionary for each patch in a certain cluster which
Mathematical Problems in Engineering 5
(a) Noisy (b) Original (c) NSCT
(d) SSBM3D (e) MK-SVD (f) Proposed
Figure 3 Restoration performance comparison on Bridge (120590 = 20)
means that the coding of another subdictionary is zero Soour scheme guarantees the sparsity naturally that is we canmove the second term in (12) and rewrite it as follows
119910= arg min1003817100381710038171003817119910 minus 119863120572
10038171003817100381710038172+ 120583sum
119896
10038171003817100381710038171003817120572119896
120575
100381710038171003817100381710038171 (14)
With (14) we can compact the optimization only withcoding residual constraint whichmeans that the coding fromthe observed signal by (14) is close to that from the originalsignal by (9)
333 Scheme Summary Thewhole scheme consisted of two-level iterative algorithm now we present the brief steps asfollows
Initial set the initial image 119883 = 119884 (119884 is the degradedimage) 119863(0) is the DCT complete dictionary and calculatethe initial sparse coding for each patch with any pursuitalgorithm
Outer Loop
(a) on the (OL)th iteration learn the dictionary119863(OL) by119883 with the algorithm proposed in Section 331 (119871 isthe maximum iterative number OL = 1 2 119871)
(b) compute the good estimation set 120578119894119894=1119870
for all theclusters under the119863(OL)
(c) Inner Loop
(i) for each patch we get its coding 119910by (14)
which can be solved with the method in [17](ii) repeat (i) until all the patches are processed
(d) estimate all the restoration patches by 119909 = 119863(OL)119910
(e) OL = OL + 1 repeat
On one hand the 120578119894is more and more approach to the
true sparse coding which makes the residual suppressionmore sufficient On the other hand by alternating the sparsecoding and dictionary learning coding and dictionary are allimproved and promote each other
334 Parameter Selection In conventional models the regu-larization parameter is generally a constant Hence for mak-ing (14)more sufficient we give the derivation of self-adaptiveregularization parameter 120583 under the Bayes framework
For written convenience we take 120575 = 120572119896minus 120578119894as the
residual coding vector Under the Bayes framework theMAPestimation of 120575 can be computed as
120575 = arg max log (119901 (120575 | 119910))
= arg max log (119901 (119910 | 120575)) + log (119901 (120575)) (15)
6 Mathematical Problems in Engineering
(a) Noisy (b) Original (c) NSCT
(d) SSBM3D (e) MK-SVD (f) Proposed
Figure 4 Restoration performance comparison on Butterfly (120590 = 20)
And the 119901(119910 | 120575) is
119901 (119910 | 120575) = 119901 (119910 | 120572 120578) =1
radic2120587120590119899
expminus 121205902119899
1003817100381710038171003817119910 minus 11986312057210038171003817100381710038172
2
(16)
where 120590119899is the standard deviation of noise
As for 119901(120575) with some statistics experiments we gain theexperience model with the iid Laplacian distribution
119901 (120575) = prod
119896
prod
119894
119901 (120575119896 (119894))
= prod
119896
prod
119894
1
2120590119896119894
expminus 1120590119896119894
1003816100381610038161003816120575119896 (119894)1003816100381610038161003816
(17)
In which 120575119896(119894) is the 119894th element of the residual vector of
the 119896th patch and 120590119896119894is the standard deviation of 120575
119896(119894)
Combining (15) we can obtain the following
119910= arg min 1
21205902119899
1003817100381710038171003817119910 minus 11986312057210038171003817100381710038172
2+1
120590119896119894
sum
119896
sum
119894
1003816100381610038161003816120575119896 (119894)1003816100381610038161003816 (18)
For a given 120578119894(120572119896= 120578119894+ 120575119896) the optimal sparse coding 120572
can be obtained as follows
119910= arg min1003817100381710038171003817119910 minus 119863120572
10038171003817100381710038172
2+21205902
119899
120590119896119894
sum
119896
sum
119894
1003816100381610038161003816120575119896 (119894)1003816100381610038161003816 (19)
Compared to the regularization in (14) we can set the 120583to be the following self-adaptive form
120583119896119894=21205902
119899
120590119896119894
(20)
4 Experimental Results and Analysis
To verify the performance of the algorithm proposed in thepaper we show some contrast restoration experiments forimage denoising The contrast algorithms are respectivelyNSCT method in [6] SSBM3D in [18] MK-SVD in [14]and the proposed algorithm in this paper The noisy imageis generated by adding the Gaussian noise with differentstandard deviation (120590 = 15 20) The size of test images is all256 times 256 and we set the parameters as follows120576 = 3 119897 = 3 (mentioned in Section 32) 119870 =
50 (mentioned in Section 331) 119901 = 5 (mentioned inSection 332)
To show the objective evaluation for restoration imagewe take the PSNR as the indicator for different algorithmsMeanwhile we show part of the experiment results to viewthe denoising performance Suppose 119909 and 119909 are the original
Mathematical Problems in Engineering 7
Table 2 The PSNR (dB) indicator with 120590 = 15
Noisy NSCT BM3D K-SVD ProposedHouse 2489 3311 3467 3392 3422Barbara 2463 3049 3233 3140 3197Peppers 2461 3151 3270 3177 3216Bridge 2470 2771 2883 2855 2862Butterfly 2463 2917 3064 2989 3018
Table 3 The PSNR (dB) indicator with 120590 = 20
Noisy NSCT BM3D K-SVD ProposedHouse 2210 3178 3376 3267 3289Barbara 2205 2895 3068 2975 3019Peppers 2209 3012 3136 3046 3112Bridge 2213 2641 2725 2687 2702Butterfly 2111 2765 2946 2832 2896
image and restoration image respectively the definition ofPSNR is
PSNR = 10 log 2552
119909 minus 1199092
2
(21)
Figures 1 and 2 are the restoration performance with thestandard deviation 120590 = 15 while Figures 3 and 4 are theperformance with 120590 = 20 As for the other images we onlyshow the PSNR value reported in Tables 2 and 3
For each group by the local detail image we can get someconclusion Though all the four methods can achieve thedenoising task there are somedifferences between each otherFor the NSCT method the ability of capturing structure isnot adaptive to different image due to its fixed dictionaryAdditionally some scratches appeared in the restorationresults which did not exist in the original image and thePSNR is lowest among the four methods Compared to theother method BM3D shows advantage on PSNR value Butits restoration results are smoothed excessively which leadsto lose much detail texture Owing to the dictionary thatis learned by the random example patches from differentimages the K-SVD algorithm generates some visual artifactsand cannot recover the local geometry feature sufficientlySo the restoration results of K-SVD are only better than theNSCT in the four methods The proposed method in thispaper shows advantage both on the restoration performanceand PSNR value Though its PSNR is lower than the BM3Dthe detail recovered ability is stronger than the other threemethods
5. Conclusion

Since abundant geometric information and self-similarity are important properties to exploit in image restoration by sparse representation, this paper proposed a novel scheme that restores the image by nonlocal sparse reconstruction with similarity measurement. We cluster the patches of the degraded image by a similarity measurement based on the Shearlet feature vector, which is good at capturing the local geometric structure of an image, then use the clusters to train the cluster dictionaries and obtain a good estimate of the true sparse coding of the original image. Additionally, we derived the regularization parameter under the Bayesian framework. Through cluster dictionary learning and coding residual suppression, the proposed scheme shows advantages in both visual quality and PSNR value compared to the leading denoising methods. Besides image denoising, the proposed model can be extended to other restoration tasks such as deblurring and superresolution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61201237 and 61301095), the Natural Science Foundation of Heilongjiang Province of China (no. QC2012C069), and the Fundamental Research Funds for the Central Universities (no. HEUCFZ1129). Meanwhile, this paper is funded by the Defense Pre-Research Foundation Project of Shipbuilding Industry Science (no. 10J316).
References

[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
[2] G. G. Bhutada, R. S. Anand, and S. C. Saxena, "Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform," Digital Signal Processing, vol. 21, no. 1, pp. 118–130, 2011.
[3] L. Shang, P.-G. Su, and T. Liu, "Denoising MMW image using the combination method of contourlet and KSC shrinkage," Neurocomputing, vol. 83, no. 15, pp. 229–233, 2012.
[4] S. Yin, L. Cao, Y. Ling, and G. Jin, "Image denoising with anisotropic bivariate shrinkage," Signal Processing, vol. 91, no. 8, pp. 2078–2090, 2011.
[5] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[6] J. Jia and L. Chen, "Using normal inverse Gaussian model for image denoising in NSCT domain," Acta Electronica Sinica, vol. 39, no. 7, pp. 1563–1568, 2011.
[7] R. Rubinstein, A. M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045–1057, 2010.
[8] Y. He, T. Gan, W. Chen, and H. Wang, "Multi-stage image denoising based on correlation coefficient matching and sparse dictionary pruning," Signal Processing, vol. 92, no. 1, pp. 139–149, 2012.
[9] K. Labusch, E. Barth, and T. Martinetz, "Soft-competitive learning of sparse codes and its application to image reconstruction," Neurocomputing, vol. 74, no. 9, pp. 1418–1428, 2011.
[10] Z. Xue, D. Han, and J. Tian, "Fast and robust reconstruction approach for sparse fluorescence tomography based on adaptive matching pursuit," in Proceedings of the IEEE Conference on Asia Communications and Photonics, pp. 1–6, November 2011.
[11] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310–316, 2010.
[12] T. Blumensath and M. E. Davies, "Stagewise weak gradient pursuits," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333–4346, 2009.
[13] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[14] J. Mairal, G. Sapiro, and M. Elad, "Learning multiscale sparse representations for image and video restoration," Multiscale Modeling and Simulation, vol. 7, no. 1, pp. 214–241, 2008.
[15] W.-Q. Lim, "The discrete shearlet transform: a new directional transform and compactly supported shearlet frames," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166–1180, 2010.
[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881–892, 2002.
[17] I. Daubechies, M. Defrise, and C. de Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
[18] M. Poderico, S. Parrilli, G. Poggi, and L. Verdoliva, "Sigmoid shrinkage for BM3D denoising algorithm," in Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP '10), pp. 423–426, October 2010.
Figure 2: Restoration performance comparison on Barbara ($\sigma = 15$). Panels: (a) noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
Table 1: Learned dictionary (sample atoms) with different methods.

K-SVD:
    [ -0.5630  -0.0459
      -0.7995   0.2835
      -0.2096  -0.9579 ]

Proposed algorithm:
    [ -0.5375   0.0000
      -0.8432   0.3294
       0.0000  -0.9442 ]
suppression optimization, which is more sufficient than $\ell_1$-norm sparse coding. When the dictionary is given, the $\ell_1$-norm sparse coding of the original signal can be realized as follows:

    $\hat{\alpha}_x = \arg\min_{\alpha} \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_1.$    (9)

In the restoration problem we can only obtain the observation $y$, so the sparse coding can be rewritten as

    $\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1.$    (10)
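For reference, the $\ell_1$-norm coding in (10) can be solved with the iterative soft-thresholding method of [17]; the sketch below is a minimal hedged version (the step size and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=200):
    """Minimize ||y - D a||_2^2 + lam * ||a||_1 by iterative thresholding."""
    L = np.linalg.norm(D, 2) ** 2   # ||D||_2^2; 2L bounds the gradient Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a + D.T @ (y - D @ a) / L, lam / (2.0 * L))
    return a
```

When $D$ is orthonormal this collapses to a single soft-threshold of $D^{\mathsf T} y$, which is a convenient sanity check.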
Comparing (9) and (10), we can see that if we want to reconstruct $x$ with high accuracy, the coding $\hat{\alpha}_y$ is expected to be as close as possible to the true sparse coding $\alpha_x$. So we introduce the coding residual $\alpha_\delta$ as follows:

    $\alpha_\delta = \alpha_y - \alpha_x.$    (11)
Then we change the objective function (10) into the residual suppression form

    $\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \lambda \sum_{k} \|\alpha_k\|_1 + \mu \sum_{k} \|\alpha_\delta^k\|_1, \qquad \alpha_\delta^k = \alpha_k - \eta_i,$    (12)
where $k$ is the index of the $k$th patch. In addition, we cannot obtain the true sparse coding in practice, so $\eta_i$ serves as a good estimate of the true sparse coding; it can be calculated as a weighted average of the sparse codings of the patches in the same cluster. $\lambda$ and $\mu$ are the regularization parameters. If the $k$th patch is from $\Omega_i$, we calculate $\eta_i$ as

    $\eta_i = \sum_{p} \omega_{i,p}\, \alpha_{i,p}, \qquad \omega_{i,p} = c \exp\left(-\|x_i - x_{i,p}\|_2^2\right)$    (13)

($c$ is the normalization parameter).
The second term in the optimization (12) is used to ensure the local sparsity, that is, that only part of the atoms of the dictionary are selected. But in our scheme we select one subdictionary for each patch in a certain cluster, which means that the coding over the other subdictionaries is zero. So our scheme guarantees the sparsity naturally; that is, we can remove the second term in (12) and rewrite it as follows:

    $\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \mu \sum_{k} \|\alpha_\delta^k\|_1.$    (14)

Figure 3: Restoration performance comparison on Bridge ($\sigma = 20$). Panels: (a) noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
With (14) we compact the optimization to use only the coding residual constraint, which means that the coding obtained from the observed signal by (14) is close to that obtained from the original signal by (9).
3.3.3. Scheme Summary. The whole scheme consists of a two-level iterative algorithm; we now present its brief steps as follows.

Initialization: set the initial image $X = Y$ ($Y$ is the degraded image), set $D^{(0)}$ to the overcomplete DCT dictionary, and calculate the initial sparse coding of each patch with any pursuit algorithm.

Outer Loop:

(a) on the OL-th iteration, learn the dictionary $D^{(\mathrm{OL})}$ from $X$ with the algorithm proposed in Section 3.3.1 ($L$ is the maximum number of iterations, OL = 1, 2, ..., $L$);
(b) compute the estimate set $\{\eta_i\}_{i=1}^{K}$ for all the clusters under $D^{(\mathrm{OL})}$;
(c) Inner Loop:
    (i) for each patch, obtain its coding $\hat{\alpha}_y$ by (14), which can be solved with the method in [17];
    (ii) repeat (i) until all the patches are processed;
(d) estimate all the restored patches by $\hat{x} = D^{(\mathrm{OL})} \hat{\alpha}_y$;
(e) set OL = OL + 1 and repeat.
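The two-level iteration above can be condensed into a skeleton; `learn_dictionary`, `cluster_estimates`, and `code_patch` are hypothetical stand-ins (passed as callables) for the routines of Section 3.3.1, equation (13), and equation (14), and the rows of the array stand in for image patches for simplicity:

```python
import numpy as np

def restore(Y, D0, learn_dictionary, cluster_estimates, code_patch, L=3):
    """Two-level scheme: outer dictionary updates, inner per-patch coding."""
    X, D = Y.copy(), D0
    for _ in range(L):                        # outer loop (a)-(e), OL = 1..L
        D = learn_dictionary(X)               # (a) cluster dictionary learning
        etas = cluster_estimates(X, D)        # (b) estimates {eta_i}
        codes = [code_patch(p, D, etas)       # (c) inner loop: solve (14)
                 for p in X]                  #     for every patch
        X = np.array([D @ a for a in codes])  # (d) restored patches
    return X
```

Plugging in trivial stand-ins (identity dictionary, exact coding) leaves the input unchanged, which is a quick sanity check of the loop structure.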
On the one hand, $\eta_i$ approaches the true sparse coding more and more closely, which makes the residual suppression more sufficient. On the other hand, by alternating the sparse coding and the dictionary learning, the coding and the dictionary are both improved and promote each other.
3.3.4. Parameter Selection. In conventional models the regularization parameter is generally a constant. Hence, to make (14) more effective, we derive a self-adaptive regularization parameter $\mu$ under the Bayesian framework.
For convenience, we write $\delta = \alpha_k - \eta_i$ for the residual coding vector. Under the Bayesian framework, the MAP estimate of $\delta$ can be computed as

    $\hat{\delta} = \arg\max_{\delta} \log p(\delta \mid y) = \arg\max_{\delta} \left[ \log p(y \mid \delta) + \log p(\delta) \right].$    (15)
Figure 4: Restoration performance comparison on Butterfly ($\sigma = 20$). Panels: (a) noisy, (b) original, (c) NSCT, (d) SSBM3D, (e) MK-SVD, (f) proposed.
The likelihood $p(y \mid \delta)$ is

    $p(y \mid \delta) = p(y \mid \alpha, \eta) = \dfrac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left(-\dfrac{1}{2\sigma_n^2} \|y - D\alpha\|_2^2\right),$    (16)

where $\sigma_n$ is the standard deviation of the noise.
As for $p(\delta)$, from some statistical experiments we adopt the empirical model of an i.i.d. Laplacian distribution:

    $p(\delta) = \prod_{k} \prod_{i} p(\delta_k(i)) = \prod_{k} \prod_{i} \dfrac{1}{2\sigma_{k,i}} \exp\left(-\dfrac{|\delta_k(i)|}{\sigma_{k,i}}\right),$    (17)

in which $\delta_k(i)$ is the $i$th element of the residual vector of the $k$th patch and $\sigma_{k,i}$ is the standard deviation of $\delta_k(i)$.
Combining (15), we obtain the following:

    $\hat{\alpha}_y = \arg\min_{\alpha} \dfrac{1}{2\sigma_n^2} \|y - D\alpha\|_2^2 + \sum_{k} \sum_{i} \dfrac{1}{\sigma_{k,i}} |\delta_k(i)|.$    (18)
For a given $\eta_i$ ($\alpha_k = \eta_i + \delta_k$), the optimal sparse coding $\alpha$ can be obtained as follows:

    $\hat{\alpha}_y = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \sum_{k} \sum_{i} \dfrac{2\sigma_n^2}{\sigma_{k,i}} |\delta_k(i)|.$    (19)
Compared to the regularization in (14), we can set $\mu$ to the following self-adaptive form:

    $\mu_{k,i} = \dfrac{2\sigma_n^2}{\sigma_{k,i}}.$    (20)
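With (20), every residual coefficient receives its own threshold. One proximal-gradient step on (19) can then be sketched as follows (a hedged illustration: $\sigma_n$ and the per-coefficient $\sigma_{k,i}$ are assumed to be estimated beforehand, and the step size is the standard Lipschitz choice, not specified by the paper):

```python
import numpy as np

def adaptive_mu(sigma_n, sigma_ki):
    """Self-adaptive regularization from (20): mu_ki = 2*sigma_n^2 / sigma_ki."""
    return 2.0 * sigma_n ** 2 / np.asarray(sigma_ki, float)

def residual_threshold_step(alpha, eta, D, y, sigma_n, sigma_ki):
    """One proximal-gradient step on (19): a gradient step on the data term
    ||y - D a||_2^2, then soft-threshold the residual delta = alpha - eta."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    g = alpha - 2.0 * D.T @ (D @ alpha - y) / L  # gradient step on the data term
    t = adaptive_mu(sigma_n, sigma_ki) / L       # per-coefficient threshold
    delta = g - eta
    return eta + np.sign(delta) * np.maximum(np.abs(delta) - t, 0.0)
```

Coefficients whose residuals have a small empirical spread $\sigma_{k,i}$ get a large threshold and are pulled strongly toward the nonlocal estimate $\eta_i$, which is exactly the residual suppression that (19) encodes.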
4. Experimental Results and Analysis

To verify the performance of the algorithm proposed in this paper, we present some comparative restoration experiments on image denoising. The compared algorithms are the NSCT method in [6], SSBM3D in [18], MK-SVD in [14], and the algorithm proposed in this paper. The noisy images are generated by adding Gaussian noise with different standard deviations ($\sigma = 15, 20$). All test images are of size 256 × 256, and we set the parameters as follows: $\epsilon = 3$, $l = 3$ (mentioned in Section 3.2), $K = 50$ (mentioned in Section 3.3.1), and $p = 5$ (mentioned in Section 3.3.2).
[14] J Mairal G Sapiro and M Elad ldquoLearning multiscale sparserepresentations for image and video restorationrdquo MultiscaleModeling and Simulation vol 7 no 1 pp 214ndash241 2008
[15] W-Q Lim ldquoThe discrete shearlet transform a new directionaltransform and compactly supported shearlet framesrdquo IEEETransactions on Image Processing vol 19 no 5 pp 1166ndash11802010
[16] T Kanungo D M Mount N S Netanyahu C D PiatkoR Silverman and A Y Wu ldquoAn efficient k-means clusteringalgorithms analysis and implementationrdquo IEEETransactions onPattern Analysis andMachine Intelligence vol 24 no 7 pp 881ndash892 2002
[17] I Daubechies M Defrise and C de Mol ldquoAn iterative thresh-olding algorithm for linear inverse problems with a sparsityconstraintrdquo Communications on Pure and Applied Mathematicsvol 57 no 11 pp 1413ndash1457 2004
[18] M Poderico S Parrilli G Poggi and L Verdoliva ldquoSigmoidshrinkage for BM3D denoising algorithmrdquo in Proceedings of theIEEE International Workshop on Multimedia Signal Processing(MMSP rsquo10) pp 423ndash426 October 2010
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
6 Mathematical Problems in Engineering
Figure 4: Restoration performance comparison on Butterfly (σ = 20): (a) Noisy; (b) Original; (c) NSCT; (d) SSBM3D; (e) MK-SVD; (f) Proposed.
And \(p(y \mid \delta)\) is

\[
p(y \mid \delta) = p(y \mid \alpha, \eta)
= \frac{1}{\sqrt{2\pi}\,\sigma_n}
\exp\!\left\{ -\frac{1}{2\sigma_n^2} \,\bigl\| y - D\alpha \bigr\|_2^2 \right\}
\tag{16}
\]

where \(\sigma_n\) is the standard deviation of the noise.
As for \(p(\delta)\), based on statistical experiments we adopt an empirical model with i.i.d. Laplacian distributions:

\[
p(\delta) = \prod_{k} \prod_{i} p\bigl(\delta_k(i)\bigr)
= \prod_{k} \prod_{i} \frac{1}{2\sigma_{k,i}}
\exp\!\left\{ -\frac{1}{\sigma_{k,i}} \,\bigl|\delta_k(i)\bigr| \right\}
\tag{17}
\]

where \(\delta_k(i)\) is the \(i\)th element of the residual vector of the \(k\)th patch and \(\sigma_{k,i}\) is the standard deviation of \(\delta_k(i)\).
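As an illustration of how the per-element scales in (17) can be obtained in practice: the maximum-likelihood estimate of a Laplacian scale parameter is the mean absolute value of the samples. The numpy sketch below is illustrative only; the paper does not specify its estimator, and note that the standard deviation of a Laplacian is √2 times its scale parameter.

```python
import numpy as np

def laplacian_scale(residuals):
    """MLE of the scale b of a zero-mean Laplacian p(d) = exp(-|d|/b)/(2b):
    the mean absolute value of the samples."""
    return float(np.mean(np.abs(residuals)))

# Toy residuals drawn from a Laplacian with known scale 0.5
rng = np.random.default_rng(0)
delta = rng.laplace(loc=0.0, scale=0.5, size=100_000)
b_hat = laplacian_scale(delta)   # close to 0.5
```

In a patch-based scheme such as this one, the estimate would be computed per coefficient index over the residuals of similar patches, giving one scale \(\sigma_{k,i}\) per element.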
Combining (15), we obtain

\[
\hat{\alpha} = \arg\min_{\alpha}\;
\frac{1}{2\sigma_n^2} \,\bigl\| y - D\alpha \bigr\|_2^2
+ \sum_{k} \sum_{i} \frac{1}{\sigma_{k,i}} \,\bigl|\delta_k(i)\bigr|
\tag{18}
\]
For a given \(\eta_i\) (with \(\alpha_k = \eta_i + \delta_k\)), the optimal sparse coding \(\hat{\alpha}\) can be obtained as

\[
\hat{\alpha} = \arg\min_{\alpha}\;
\bigl\| y - D\alpha \bigr\|_2^2
+ \sum_{k} \sum_{i} \frac{2\sigma_n^2}{\sigma_{k,i}} \,\bigl|\delta_k(i)\bigr|
\tag{19}
\]
Compared with the regularization in (14), we can therefore set \(\mu\) to the self-adaptive form

\[
\mu_{k,i} = \frac{2\sigma_n^2}{\sigma_{k,i}}
\tag{20}
\]
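Minimizing an elementwise ℓ1 penalty of the form in (19) reduces to soft-thresholding, with the adaptive weight of (20) controlling how strongly each coefficient is shrunk: elements with a large residual scale σ_{k,i} (structured, trustworthy detail) receive a small μ and are shrunk little, while noise-dominated elements are suppressed. A minimal numpy sketch under these assumptions (function names are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * |.|: argmin_d 0.5*(d - v)**2 + tau*|d|."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def adaptive_mu(sigma_n, sigma_ki):
    """Self-adaptive regularization weight of Eq. (20): 2*sigma_n**2 / sigma_ki."""
    return 2.0 * sigma_n ** 2 / np.asarray(sigma_ki, dtype=float)

sigma_n = 20.0 / 255.0                       # noise standard deviation
sigma_ki = np.array([0.02, 0.10, 1.00])      # per-element residual scales
mu = adaptive_mu(sigma_n, sigma_ki)          # large sigma_ki -> small mu
v = np.array([0.30, 0.30, 0.30])             # coding residual estimates
d = soft_threshold(v, mu)                    # first element is zeroed, last barely shrunk
```

The design point is that the threshold is no longer a single global constant: each coefficient gets its own shrinkage strength derived from the noise level and its own residual statistics.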
4 Experimental Results and Analysis
To verify the performance of the algorithm proposed in this paper, we present comparative restoration experiments on image denoising. The compared algorithms are the NSCT method of [6], SSBM3D [18], MK-SVD [14], and the proposed algorithm. The noisy images are generated by adding Gaussian noise with different standard deviations (σ = 15, 20). All test images are of size 256 × 256, and the parameters are set as follows: ε = 3 and l = 3 (mentioned in Section 3.2), K = 50 (mentioned in Section 3.3.1), and p = 5 (mentioned in Section 3.3.2).
For an objective evaluation of the restored images, we take the PSNR as the indicator for the different algorithms; we also show part of the experimental results to assess the visual denoising performance. Suppose \(x\) and \(\hat{x}\) are the original image and the restored image, respectively; the PSNR is defined as

\[
\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\| x - \hat{x} \|_2^2}
\tag{21}
\]

Table 2: The PSNR (dB) indicator with σ = 15.

Image      Noisy   NSCT    BM3D    K-SVD   Proposed
House      24.89   33.11   34.67   33.92   34.22
Barbara    24.63   30.49   32.33   31.40   31.97
Peppers    24.61   31.51   32.70   31.77   32.16
Bridge     24.70   27.71   28.83   28.55   28.62
Butterfly  24.63   29.17   30.64   29.89   30.18

Table 3: The PSNR (dB) indicator with σ = 20.

Image      Noisy   NSCT    BM3D    K-SVD   Proposed
House      22.10   31.78   33.76   32.67   32.89
Barbara    22.05   28.95   30.68   29.75   30.19
Peppers    22.09   30.12   31.36   30.46   31.12
Bridge     22.13   26.41   27.25   26.87   27.02
Butterfly  21.11   27.65   29.46   28.32   28.96
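The PSNR definition in (21) is straightforward to compute. A small numpy helper, written to match the formula as printed (the squared error is summed over all pixels; many toolboxes instead divide by the pixel count, i.e. use the MSE, which shifts the value by 10·log₁₀N dB for N pixels):

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR (dB) as in Eq. (21): 10*log10(255^2 / ||x - x_hat||_2^2)."""
    err = np.sum((np.asarray(x, dtype=float) - np.asarray(x_hat, dtype=float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / err)

x = np.full((2, 2), 100.0)   # toy 2x2 "image"
y = x + 5.0                  # uniform error of 5 gray levels
# squared error = 4 * 25 = 100, so PSNR = 10*log10(65025 / 100) ≈ 28.13 dB
```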
Figures 1 and 2 show the restoration performance with standard deviation σ = 15, while Figures 3 and 4 show the performance with σ = 20. For the other images, only the PSNR values are reported, in Tables 2 and 3.
From the local detail images of each group we can draw some conclusions. Although all four methods accomplish the denoising task, there are differences among them. For the NSCT method, the ability to capture structure does not adapt to different images because of its fixed dictionary; in addition, some scratches appear in the restoration results that do not exist in the original image, and its PSNR is the lowest of the four methods. BM3D shows an advantage in PSNR over the other methods, but its restoration results are excessively smoothed, which loses much of the detailed texture. Because its dictionary is learned from random example patches drawn from different images, the K-SVD algorithm generates some visual artifacts and cannot recover the local geometric features sufficiently, so its restoration results are better only than those of NSCT. The method proposed in this paper shows an advantage in both restoration quality and PSNR: although its PSNR is lower than that of BM3D, its ability to recover detail is stronger than that of the other three methods.
5 Conclusion
Because abundant geometric information and self-similarity are important properties to exploit in image restoration by sparse representation, this paper proposed a novel scheme that restores the image by nonlocal sparse reconstruction with similarity measurement. We cluster the patches from the degraded image by similarity measurement with the Shearlet feature vector, which is good at capturing local geometric structure, and then use them to train the cluster dictionaries and obtain a good estimate of the true sparse coding of the original image. We also derived the regularization parameter under the Bayesian framework. Through cluster dictionary learning and coding residual suppression, the proposed scheme shows advantages in both visual quality and PSNR compared with leading denoising methods. Besides image denoising, the proposed model can be extended to other restoration tasks such as deblurring and superresolution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (nos. 61201237 and 61301095), the Natural Science Foundation of Heilongjiang Province of China (no. QC2012C069), and the Fundamental Research Funds for the Central Universities (no. HEUCFZ1129). Meanwhile, this paper is funded by the Defense Pre-Research Foundation Project of Shipbuilding Industry Science (no. 10J316).
References
[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
[2] G. G. Bhutada, R. S. Anand, and S. C. Saxena, "Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform," Digital Signal Processing, vol. 21, no. 1, pp. 118–130, 2011.
[3] L. Shang, P.-G. Su, and T. Liu, "Denoising MMW image using the combination method of contourlet and KSC shrinkage," Neurocomputing, vol. 83, no. 15, pp. 229–233, 2012.
[4] S. Yin, L. Cao, Y. Ling, and G. Jin, "Image denoising with anisotropic bivariate shrinkage," Signal Processing, vol. 91, no. 8, pp. 2078–2090, 2011.
[5] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[6] J. Jia and L. Chen, "Using normal inverse Gaussian model for image denoising in NSCT domain," Acta Electronica Sinica, vol. 39, no. 7, pp. 1563–1568, 2011.
[7] R. Rubinstein, A. M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045–1057, 2010.
[8] Y. He, T. Gan, W. Chen, and H. Wang, "Multi-stage image denoising based on correlation coefficient matching and sparse dictionary pruning," Signal Processing, vol. 92, no. 1, pp. 139–149, 2012.
[9] K. Labusch, E. Barth, and T. Martinetz, "Soft-competitive learning of sparse codes and its application to image reconstruction," Neurocomputing, vol. 74, no. 9, pp. 1418–1428, 2011.
[10] Z. Xue, D. Han, and J. Tian, "Fast and robust reconstruction approach for sparse fluorescence tomography based on adaptive matching pursuit," in Proceedings of the IEEE Conference on Asia Communications and Photonics, pp. 1–6, November 2011.
[11] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310–316, 2010.
[12] T. Blumensath and M. E. Davies, "Stagewise weak gradient pursuits," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333–4346, 2009.
[13] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[14] J. Mairal, G. Sapiro, and M. Elad, "Learning multiscale sparse representations for image and video restoration," Multiscale Modeling and Simulation, vol. 7, no. 1, pp. 214–241, 2008.
[15] W.-Q. Lim, "The discrete shearlet transform: a new directional transform and compactly supported shearlet frames," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166–1180, 2010.
[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881–892, 2002.
[17] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
[18] M. Poderico, S. Parrilli, G. Poggi, and L. Verdoliva, "Sigmoid shrinkage for BM3D denoising algorithm," in Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP '10), pp. 423–426, October 2010.