sparse signal recovery by ${\ell_q}$ minimization under restricted isometry property
TRANSCRIPT
1154 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 9, SEPTEMBER 2014
Sparse Signal Recovery by MinimizationUnder Restricted Isometry Property
Chao-Bing Song and Shu-Tao Xia
Abstract—In the context of compressed sensing, the nonconvexminimization with has been studied in recent years.
In this letter, by generalizing the sharp bound for minimiza-tion of Cai and Zhang, we show that the condition
in terms of restricted isometry constant (RIC) can guar-
antee the exact recovery of -sparse signals in the noiseless caseand the stable recovery of approximately -sparse signals in thenoisy case by minimization. This result is more general than thesharp bound for minimization when the order of RIC is greaterthan and illustrates the fact that a better approximation tominimization is provided by minimization than that providedby minimization.
Index Terms—Compressed sensing, minimization, restrictedisometry property, sparse signal recovery.
I. INTRODUCTION
A S a new paradigm for signal sampling, compressedsensing (CS) [1], [2], [3] has attracted a lot of
attention in recent years. Consider a -sparse signalwhich has at most nonzero
entries. Let be a measurement matrix with andbe a measurement vector. CS deals with recovering the
original signal from the measurement vector by finding thesparsest solution to the underdetermined linear system ,i.e., solving the following minimization problem:
(1)
where denotes the -norm of . Unfortu-nately, as a typical combinatorial optimization problem, this op-timal recovery algorithm is NP-hard [2]. One popular strategyis to relax the minimization problem to an minimizationproblem:
(2)
Manuscript received March 30, 2014; revised May 03, 2014; accepted May04, 2014. Date of publication May 13, 2014; date of current version June 03,2014. This work was supported in part by theMajor State Basic Research Devel-opment Program of China (973 Program) under Grant 2012CB315803, the Na-tional Natural Science Foundation of China under Grant 61371078, and by theResearch Fund for the Doctoral Program of Higher Education of China underGrant 20130002110051. The associate editor coordinating the review of thismanuscript and approving it for publication was Prof. Jie Liang.The authors are with the Graduate School at ShenZhen, Tsinghua University,
Shenzhen, Guangdong 518055, China (e-mail: [email protected];[email protected]).Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.Digital Object Identifier 10.1109/LSP.2014.2323238
Due to the convex essence of minimization, we can solve itin polynomial time [2].In order to describe the equivalence condition between recon-
struction algorithms with polynomial time and minimization,restricted isometry property (RIP) is introduced in Candès andTao [2], which has been one of the most popular properties ofmeasurement matrix in CS. We can rewrite the definition of RIPas follows.Definition 1: The measurement matrix is said to
satisfy the -order RIP if for any -sparse signal ,
(3)
where . The infimum of , denoted by , is called the-order restricted isometry constant (RIC) of . When is notan integer, we define as , where denotes the ceilingfunction.There are a lot of papers to discuss the equivalence condition
between minimization and minimization in terms of RIC,such as in Candès and Tao [2],in Candès [4], in Foucart [5], in Cai andZhang [6], and in Cai and Zhang [7]. Inthese conditions, is the first RIC condition,while and are sharp bounds inthe sense that we can find counterexample that minimizationcan’t find exactly if these conditions don’t hold [6], [7].Instead of minimization, from the fact that, solving an ( ) minimization problem
(4)
may provide a better approximation to minimization. Theadvantages of minimization can be found in [8]. Althoughfinding a global minimizer of (4) is NP-hard, a lot of algorithmswith polynomial time have been proposed to find a local mini-mizer of (4), such as the algorithms in [8], [9], [10].In practical applications, there often exist noises in measure-
ments and the original signal may be not exact sparse. In thenoisy case, we can relax the constraint in (4) as follows,
(5)
where denotes some noise structure. In this setting, we needto recover with bounded errors, i.e., recover stably.Several RIC bounds of minimization are given in the liter-
ature, such as in Foucart and Lai [11],in Hsia and Sheu [12]. Other similar results can be found in Saab,Chartrand and Yilmaz [13], Lai and Liu [14], Zhou Kong, LuoandXiu [15]. In this letter, wemainly focus on the RIC condition
1070-9908 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
SONG AND XIA: SPARSE SIGNAL RECOVERY BY MINIMIZATION 1155
of minimization. We show that if
, minimization can recover -sparse signals exactly in thenoiseless case and recover approximately -sparse signals stablyin the noisy case. From this condition, we show that as a relax-tion way closer to minimization, minimization can guar-antee sparse signal recovery in a more general condition in termsof RIC.The remainder of the letter is organized as follows. In
Section II, we introduce related notations and lemmas. InSection III, we give our main results in both noiseless and noisysettings. In Section IV, unified proofs of the main results inSection III are given. Finally, conclusion is given in Section V.
II. PRELIMINARIES
Let ’s are different unit vectors with entry 1 or inposition and entries 0 in other positions, whichCai and Zhang [6] call indicator vectors. Let bean arbitrary vector in , where . Let
denote the support of or the set of indices of nonzeroentries in . Let be the vector with all but the largestentries in absolute values set to zeros and
. For , let -norm of a vector be. In addition, let and
be the number of nonzero entries in . Letbe “the power of the vector ”. In addition,
let denote the spectral norm of .Then we introduce direct consequences of the Hölder in-
equality as follows.Lemma 1: If and ,
Moreover, if is -sparse, then
The following lemma introduced in Cai and Zhang [7] is crucialto get the proposal results on .Lemma 2 (Sparse Representation of a Polytope): For a pos-
itive number and a positive integer , define the polytopeby
For any , define the set of sparse vectorsby
(6)
Then if and only if is in the convex hull of. In particular, any can be expressed as
(7)
III. MAIN RESULTS
In the noiseless case, we have the following result.
Theorem 1: Assume that is a -sparse signal andwith . Then if the -order
RIC of the measurement matrix satisfies
(8)
the minimizer of (4) will recover exactly.In the noisy case, two types of bounded noise settings• ,• ,are of particular interest. The first bounded noise setting was
introduced in [16]. The second one was motivated by DantzigSelector in [17]. The corresponding results in the two noisycases are given in Theorems 2 and 3, respectively.Theorem 2: Assume that is an approximately -sparse
signal, with , andwith in (5). Then if the
-order RIC of the measurement matrix satisfies
the minimizer of (5) will recover x stably as follows:
(9)
Theorem 3: Assume that is an approxi-mately -sparse signal, with
, and within (5). Then if the -order RIC of
the measurement matrix satisfies
the minimizer of (5) will recover x stably as follows:
(10)
The proposed RIC condition is a natural generalization of thesharp result in Cai and
Zhang [7]. Rewrite for (8), and it is easy
to find that if and
. Therefore, in terms of RIC with order more than ,the condition of the measurement matrix is relaxed if we use
minimization instead of minimization. In addi-tion, in Theorems 2 and 3, we use a relatively stricter condition
and respectively thanused in Cai and Zhang [7]. In our proofs, in order to get an
analytic upper bound of , the stricter condition may benecessary. Finally, although the proposed bound is better than
1156 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 9, SEPTEMBER 2014
the existing results, a further research is still needed to verifywhether it is sharp or not.
IV. PROOFS
Assume that is an approximately -sparse signal. Let de-note the support of the largest entries of and denote thecomplement of . Let denote the vector that sets all en-tries of but the entry in to zero. Let , andwe have . Assume that and is theminimizer of (5). Let , and we have
Immediately,
(11)
Note that from the definitions in Section II and the beginningof the proof, is equivalent to , intro-ducing the symbol is just for distinguishing from
.Then, assume that is an integer. Let , where’s are indicator vectors.Without loss of generality, assume that
. Set . We divideinto two parts with disjoint supports,
, where
Then , ;besides, all non-zero entries of has magnitude larger than
, so is -sparse. Let , then
(12)
We now apply Lemma 2. Then can be expressed as a convexcombination of sparse vectors: , where is
-sparse. Now we suppose are to be deter-mined. Denote , then
(13)
and , are all -sparse vectors.Define . Then
.We can check the following identity in norm,
(14)
Since and (13), we have
Assume that
(15)
where is some constant specified by the noisy setting (seethe concrete value of in the discussion at the end of the proofs).
Set , . For notational convenience, wewrite for . Let the left-hand side of (14) minus theright-hand side, we get
(16)
SONG AND XIA: SPARSE SIGNAL RECOVERY BY MINIMIZATION 1157
(17)
Consider as the independent variable in theinequality (17) . If we want the solution about
is upper bounded, the coefficient of the second-order termshould be less than zero. Therefore, we have
(18)
and
(19)
In (16), we used the fact that
(20)
(21)
where (20) is from (12) and (21) is from Lemma 1.If is not an integer, note , then, is an integer, from the above derivations, we know thatif
(19) holds. While
so if is not an integer, the conditionis still enough to guarantee that the solution about
of the inequality (17) is upper-bounded.From [6, Lemma 5.4] and (11), we have
. So
Then
(22)
Next, we discuss the noiseless case and the two noisy cases re-spectively.1) The noiseless case: If is -sparse, then
. Therefore in (15), let ,then in (22), we have , i.e., recoversexactly. This completes the proof of Theorem 1.
2) The noisy case }: If is approximately-sparse, , and the spectral norm of is
, then
(23)
In this case, the assumption holds if. Therefore, in (15), let, then we have (9) from (22). This
proves Theorem 2.3) The noisy case : If is approxi-mately -sparse, , the spectral normof is , then
In this case, the assumption holds if. Therefore, in (15), let
, then we have (10) from(22). This finishes the proof of Theorem 3.
V. CONCLUSION
We improved the RIC bound of minimization by general-izing the result in Cai and Zhang [7]. Under the more generalRIC bound, minimization can recover sparse signals exactlyand approximately sparse signals stably. Although it is a stepforward for the RIC study of minimization, whether the pro-posed bound is sharp or not needs further research.
ACKNOWLEDGMENT
The authors would like to thank the two anonymous reviewersand Mr. Xin-Ji Liu for the valuable suggestions that improvedthe presentation of the letter.
1158 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 9, SEPTEMBER 2014
REFERENCES[1] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol.
52, no. 4, pp. 1289–1306, 2006.[2] E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE
Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.[3] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles:
Exact signal reconstruction from highly incomplete frequency infor-mation,” IEEE Trans. Inf. Theory, no. 2, pp. 489–509, 2006.
[4] E. J. Candès, “The restricted isometry property and its implicationsfor compressed sensing,” Compt. Rend. Math., vol. 346, no. 9, pp.589–592, 2008.
[5] S. Foucart, “A note on guaranteed sparse recovery via -minimiza-tion,” Appl. Comput. Harmon. Anal., vol. 29, no. 1, pp. 97–103, 2010.
[6] T. T. Cai and A. Zhang, “Sharp RIP bound for sparse signal andlow-rank matrix recovery,” Appl. Comput. Harmon. Anal., vol. 35, pp.74–93, 2013.
[7] T. T. Cai and A. Zhang, “Sparse representation of a polytope andrecovery of sparse signals and low-rank matrices,” IEEE Trans. Inf.Theory, vol. 60, no. 1, pp. 122–132, Jan 2014.
[8] M.-J. Lai and J. Wang, “An unconstrained minimization withfor sparse solution of underdetermined linear systems,” SIAM
J. Optim., vol. 21, no. 1, pp. 82–101, 2011.[9] I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, “Iteratively
reweighted least squares minimization for sparse recovery,” Commun.Pure Appl. Math., vol. 63, no. 1, pp. 1–38, 2010.
[10] R. Chartrand and W. Yin, “Iteratively reweighted algorithms for com-pressive sensing,” in IEEE Int. Conf. Acoustics, Speech and Signal Pro-cessing, 2008 ICASSP 2008, 2008, pp. 3869–3872, IEEE.
[11] S. Foucart andM.-J. Lai, “Sparsest solutions of underdetermined linearsystems via -minimization for ,” Appl. Comput. Harmon.Anal., vol. 26, no. 3, pp. 395–407, 2009.
[12] Y. Hsia and R.-L. Sheu, “On RIC bounds of compressed sensingmatrices for approximating sparse solutions using lq quasi norms,”[Online]. Available: http://www.optimization-online.org/DB_FILE/2012/09/3610.pdf.
[13] R. Saab, R. Chartrand, and O. Yilmaz, “Stable sparse approximationsvia nonconvex optimization,” in IEEE Int. Conf. Acoustics, Speech andSignal Processing, 2008 ICASSP 2008, 2008, pp. 3885–3888, IEEE.
[14] M.-J. Lai and L. Y. Liu, “A new estimate of restricted isometry con-stants for sparse solutions,” Appl. Comput. Harmon. Anal., vol. 30, pp.402–406, 2011.
[15] S. Zhou, L. Kong, Z. Luo, and N. Xiu, “New RIC bounds via -min-imization with in compressed sensing,” arXiv preprintarXiv:1308.0455, 2013.
[16] D. L. Donoho, M. Elad, and V. N. Temlyakov, “Stable recovery ofsparse overcomplete representations in the presence of noise,” IEEETrans. Inf. Theory, vol. 52, no. 1, pp. 6–18, 2006.
[17] E. Candès and T. Tao, “The dantzig selector: Statistical estimationwhen p is much larger than n,” Ann. Statist., pp. 2313–2351, 2007.