
[IEEE 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP) - Xi'an, China (2014.7.9-2014.7.13)]

DETERMINISTIC COMPLEX-VALUED MEASUREMENT MATRICES BASED ON BERLEKAMP-JUSTESEN CODES

Lu Liu Xin-Ji Liu Shu-Tao Xia

Graduate School at Shenzhen, Tsinghua University, Shenzhen, Guangdong 518055, P. R. China

{lu-liu12@mails, liuxj11@mails, xiast@sz}.tsinghua.edu.cn

ABSTRACT

Deterministic construction of measurement matrices is currently a hot topic in compressed sensing. In this paper, we propose two classes of deterministic complex-valued measurement matrices based on Berlekamp-Justesen codes. The row and column numbers of these matrices are tunable through row and column puncturing. Moreover, the proposed matrices are obtained from cyclic matrices, so their storage costs are relatively low and both the sampling and recovery processes are simpler. Simulation results show that the proposed matrices perform better than Gaussian random matrices and some other deterministic measurement matrices under OMP recovery, especially for image reconstruction.

Index Terms— compressed sensing, measurement matrices, Berlekamp-Justesen, LDPC, complex-valued matrices

1. INTRODUCTION

Since Compressed Sensing (CS) was proposed in 2006 [2, 3], it has drawn a great deal of attention. The process of compressed sensing can be divided into two parts: sampling and reconstruction [4]. The sampling process mainly concerns how to construct an efficient and robust measurement matrix, which is the focus of this paper. For the second part, pioneering works on reliable reconstruction can be found in [2, 3, 5].

For the construction of measurement matrices in CS, it has been proved that many random matrices are good measurement matrices [6]. Random matrices are easy to realize and can recover sparse signals perfectly with high probability. However, their storage cost is relatively high. As a result, it is of great value to find deterministic measurement matrices that perform as well as random matrices.

There have already been some deterministic constructions of measurement matrices. For example, some m × m²

This work is supported in part by the Major State Basic Research Development Program of China (973 Program, 2012CB315803) and the open research fund of the National Mobile Communications Research Laboratory of Southeast University (2011D11). The authors would like to thank Dr. Arash Amini and Prof. Marvasti for patiently explaining details of [1] by email.

complex-valued matrices based on chirp functions have been shown in [7]. [8] and [9] used the Delsarte-Goethals frame to construct a large class of matrices with small coherence. In [10], Amini et al. constructed deterministic binary, bipolar, and ternary compressed sensing matrices via BCH codes. Considering that almost all existing deterministic matrices place strict restrictions on the number of rows, some constructions in [10] were later extended to matrix designs based on p-ary BCH codes to obtain matrices with more flexible sizes [1]. However, the sizes of these matrices are still not flexible enough, and their punctured versions (i.e., submatrices obtained by removing some rows and columns) perform worse than random matrices.

Recently, close relations between LDPC codes and CS have been revealed in [11], where it was proved that parity-check matrices of good LDPC codes are also good measurement matrices for CS. Moreover, a class of optimal deterministic compressed sensing matrices based on LDPC codes was constructed in [12].

In this paper, we propose two classes of deterministic measurement matrices with flexible sizes via Berlekamp-Justesen (B-J) based LDPC codes [13]. Binary matrices of this kind over R have already been investigated in [14]; in this paper, we generalize them to C for the measurement of complex-valued signals. Through numerous experiments, we show that they perform well under the OMP algorithm [5] in CS.

2. BINARY B-J MATRIX CONSTRUCTION

In this section, we briefly introduce the construction of q-ary B-J codes, binary matrices from B-J codes, and methods to construct matrices with flexible rows and columns from these binary matrices. For more details, please refer to [13, 15].

2.1. The q-ary B-J Codes

Let GF(q) be a Galois field with primitive element α and GF(q²) be the extension field of GF(q) with primitive element α₀. Obviously, β = α₀^{q−1} is a primitive (q+1)-th root of unity,

978-1-4799-5403-2/14/$31.00 ©2014 IEEE — ChinaSIP 2014


and β^{−i} + β^i ∈ GF(q) for any i. Hence, x^{q+1} − 1 can be factored as follows:

x^{q+1} − 1 = (x − 1) f_1(x) f_2(x) ··· f_{q/2}(x)  (1)

where f_i(x) = (x − β^{−i})(x − β^i) = x² − (β^{−i} + β^i)x + 1. It is easily seen that the f_i(x) are irreducible quadratics over GF(q).

Let q = 2^m and let C_BJ be the q-ary [q + 1, 2, q] B-J code with length q + 1, dimension 2, and minimum distance q. By (1), the generator polynomial of C_BJ can be written as:

g(x) = (x − 1) f_1(x) f_2(x) ··· f_{q/2−1}(x) = 1 + g_1 x + ··· + g_{q−1} x^{q−1},  g_i ≠ 0 ∈ GF(q)  (2)

We also call c(x) = c_1 + c_2 x + ··· + c_{q+1} x^q a codeword if c = (c_1, c_2, ···, c_{q+1}) ∈ C_BJ. All the codewords of C_BJ can be written as

C_BJ = {g(x)(λ + μx) : λ, μ ∈ GF(q)},  (3)

and the set of all non-zero codewords of C_BJ, say C*_BJ, is

C*_BJ = {λ g(x) x^i : λ ≠ 0 ∈ GF(q), i = 1, ···, q + 1}.  (4)

2.2. Binary Matrices via (q−1)-tuple Substitution to B-J Codes

Recall that GF(q) = {0, 1, α, α², ···, α^{q−2}}. Define the location vector of α^j in GF(q) as y(α^j) = (y_0, y_1, ···, y_{q−2}) for j = 0, 1, ···, q − 2, where y_j = 1 and all the other elements are 0. The location vector of 0 in GF(q) is the all-zero (q−1)-tuple. Thus we associate every element of GF(q) with a binary (q−1)-tuple over GF(2).
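As a minimal sketch of this substitution, the snippet below maps field elements to their location vectors. It assumes a hypothetical encoding in which a nonzero element α^j is represented by its exponent j and the zero element by `None`; this encoding is illustrative, not part of the paper's construction.

```python
# Hypothetical sketch of the (q-1)-tuple location-vector substitution.
# A nonzero element alpha^j of GF(q) is represented by its exponent j;
# the zero element is represented by None.

def location_vector(elem, q):
    """Return the binary (q-1)-tuple for an element of GF(q).

    elem: None for the zero element, or an exponent j in 0..q-2 for alpha^j.
    """
    y = [0] * (q - 1)
    if elem is not None:
        y[elem] = 1  # single 1 at position j for alpha^j
    return y

q = 8
print(location_vector(2, q))     # alpha^2 -> 1 in the third position
print(location_vector(None, q))  # zero element -> all-zero tuple
```

Applying this map entrywise to a matrix over GF(q) expands each element into a (q−1)-bit column group, which is exactly how the binary matrices H_i below arise.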

Define a codeword subset C_i of C*_BJ as follows:

C_i = {λ g(x) x^i : λ ≠ 0 ∈ GF(q)}.  (5)

Obviously, each C_i has q − 1 codewords, and all the C_i together form the q² − 1 codewords of C*_BJ. Let B_i be a (q − 1) × (q + 1) matrix over GF(q) whose rows are the codewords in C_i. The elements of the i-th column of B_i are all zero, and every other column consists of the q − 1 different non-zero elements of GF(q).

For i = 1, 2, ···, q + 1, substitute all elements of B_i with their (q−1)-tuples and denote the resulting binary matrix by H_i = (P_{i,1} P_{i,2} ··· P_{i,q+1}). Let H = (H_1^T, H_2^T, ···, H_{q+1}^T)^T. It is easily seen that any two rows of H_i have no common 1 and any two rows of H have at most one common 1.

2.3. Binary Matrices Resizing

For some deterministic matrices, the resized (sub)matrices perform poorly, while the resized (sub)matrices of binary matrices from B-J codes perform well [14]. The resizing method is presented as follows.

For 2 ≤ γ ≤ ρ ≤ q + 1 and 0 ≤ u ≤ q − 2, let H(γ, ρ, u) be the following submatrix of H:

⎛   0       P_{1,2}  ···  P_{1,γ}  ···  P_{1,ρ}   L_{1,u} ⎞
⎜ P_{2,1}     0      ···  P_{2,γ}  ···  P_{2,ρ}   L_{2,u} ⎟
⎜    ⋮        ⋮             ⋮             ⋮          ⋮    ⎟      (6)
⎝ P_{γ,1}  P_{γ,2}   ···    0      ···  P_{γ,ρ}   L_{γ,u} ⎠

where, for i = 1, 2, ···, γ, L_{i,u} is a (q − 1) × u matrix obtained by removing the last q − 1 − u columns of P_{i,ρ+1}.

Theorem 1 [13, Theorem 2] H(γ, ρ, u) has girth greater than 4 for any 2 ≤ γ ≤ ρ ≤ q + 1 and 0 ≤ u ≤ q − 2, where u = 0 for ρ = q + 1. Each of the first γ(q − 1) columns of H(γ, ρ, u) has weight γ − 1, and every other column has weight γ.

3. COMPLEX-VALUED MATRIX CONSTRUCTION

In this section, based on B-J codes, we introduce two ways to construct complex-valued matrices to measure complex-valued signals.

3.1. Sparse Complex Matrix Construction

By (6), a sparse binary matrix H(γ, ρ, u) is obtained, and we can turn it into a complex matrix directly. In order to preserve the sparsity of the complex matrix, we substitute each 1 of H(γ, ρ, u) by 1 + i, where i is the imaginary unit. Thus, we obtain the desired sparse complex matrix A^{(1)}(γ, ρ, u) as follows:

A^{(1)}(γ, ρ, u) = (1 + i) H(γ, ρ, u).  (7)

Obviously, A^{(1)}(γ, ρ, u) can be stored in a small space due to its sparsity.
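Eq. (7) amounts to a single elementwise substitution, which can be sketched as follows. The small binary matrix `H` below is a hypothetical placeholder, not an actual B-J construction.

```python
# Minimal sketch of Eq. (7): replace each 1 of the binary matrix H by 1 + i.
import numpy as np

def sparse_complex(H):
    """Turn a 0/1 matrix into the sparse complex matrix (1 + i) * H."""
    return (1 + 1j) * np.asarray(H, dtype=complex)

# Hypothetical small binary matrix standing in for H(gamma, rho, u).
H = np.array([[1, 0, 1],
              [0, 1, 0]])
A1 = sparse_complex(H)
print(A1)  # nonzeros become 1+1j, zeros stay 0
```

Because the zero pattern is unchanged, the result keeps exactly the sparsity of H and can be stored as a sparse matrix.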

3.2. q-ary Complex Matrix Construction and Resizing

The sparse complex matrix construction is easy, but it affects the recovery performance; see Section 4 for details. In the following, we introduce another way to construct complex-valued matrices, which perform comparably to, or even better than, random complex-valued matrices.

Define a codeword subset C(λ) of C*_BJ as follows:

C(λ) = {λ g(x) x^i : i = 1, 2, ···, q + 1},  (8)

where g_i ≠ 0 ∈ GF(q) and x^{q+1} = 1. Let A(λ) be a (q + 1) × (q + 1) matrix over GF(q) whose rows are the codewords in C(λ). Recall that GF(q) = {0, α, α², ···, α^{q−2}, α^{q−1} = 1}; we can then substitute the integer i for every element α^i in A(λ), where i ∈ [1, q − 1]. The element 0 of A(λ) is replaced by the integer 0. Clearly, A(λ) is cyclic and thus can be stored efficiently in practice.



By selecting the first m rows and the first n columns of A(λ), we can get a submatrix A(m, n, λ), where m and n can be chosen according to our requirements. Finally, substituting every integer k ∈ [0, q − 1] in A(m, n, λ) by the corresponding point e^{i(2π/q)k} on the unit circle of the complex plane and normalizing the resulting matrix, we obtain the desired q-ary complex matrix A^{(2)}(m, n, λ) with complex-valued elements:

A^{(2)}(m, n, λ) = (1/√m) [e^{i(2π/q) a_{kj}}],  (9)

where a_{kj} denotes the element in the k-th row and j-th column of A(m, n, λ) and i is the imaginary unit.

The construction algorithm is shown in Algorithm 1.

Algorithm 1 q-ary complex matrix construction steps

Input: the size of the measurement matrix, m and n, and λ.
Output: the measurement matrix A^{(2)}(m, n, λ).

1: m ⇐ ⌈log(n)⌉
2: q ⇐ 2^m
3: let g(x) be the generator polynomial of C_BJ: g(x) ⇐ 1 + g_1 x + ··· + g_{q−1} x^{q−1}, g_i ≠ 0 ∈ GF(q)
4: C(λ) ⇐ {λ g(x) x^i : i = 1, 2, ···, q + 1}, with x^{q+1} ⇐ 1
5: A(λ) ⇐ the matrix whose rows are all the codewords in C(λ); a_{k,j} ⇐ the element in the k-th row and j-th column of A(λ); α ⇐ a primitive element of GF(q)
6: if a_{k,j} = α^i then
7:   a_{k,j} ⇐ i
8: else if a_{k,j} = 0 then
9:   a_{k,j} ⇐ 0
10: end if
11: select the first m rows and the first n columns of A(λ)
12: A^{(2)}(m, n, λ) ⇐ (1/√m) [e^{i(2π/q) a_{kj}}]

Its computational complexity is somewhat higher, but deterministic matrices are generated only once, and their storage cost is relatively low owing to the cyclic property.
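As a concrete sketch of the final substitution and normalization steps of Algorithm 1 (the if/else mapping followed by line 12), the snippet below maps a small integer exponent matrix onto the unit circle and scales by 1/√m. The exponent matrix here is a hypothetical example, not a true B-J codeword matrix.

```python
# Sketch of the final steps of Algorithm 1: given an integer matrix with
# entries a_kj in [0, q-1] (a hypothetical stand-in for A(lambda)),
# substitute e^{i(2*pi/q) a_kj} for each entry and normalize by 1/sqrt(m).
import numpy as np

def exponents_to_measurement(A_int, q):
    A_int = np.asarray(A_int)
    m = A_int.shape[0]  # number of rows of the measurement matrix
    return np.exp(1j * 2 * np.pi * A_int / q) / np.sqrt(m)

A_int = np.array([[0, 1, 2],
                  [3, 0, 1]])
A2 = exponents_to_measurement(A_int, q=4)
# every entry has magnitude 1/sqrt(m), so each column has unit norm
print(np.linalg.norm(A2, axis=0))
```

The unit-norm columns are what make (9) a well-scaled measurement matrix: each entry lies on the unit circle before the 1/√m normalization.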

4. SIMULATION RESULTS

In this section, the performance of the proposed two classes of complex-valued matrices is shown. We compare them with various types of sampling matrices, including deterministic matrices based on chirp functions [7] and p-ary BCH codes [1], as well as random matrices (random rows of DFT matrices and Gaussian random matrices).

Fig. 1: The perfect recovery percentage vs. sparsity order k of the sparse complex matrix A^{(1)}(8, 33, 0) and the corresponding deterministic and random matrices (m = 248, n = 1023; curves: Rnddft, Sparse Complex, BCH, Chirp, Complex Rnd).

Fig. 2: The perfect recovery percentage vs. sparsity order k of the q-ary complex matrix A^{(2)}(50, 257, 1) and the corresponding deterministic and random matrices (m = 50, n = 257; curves: Rnddft, q-ary Complex, BCH, Chirp, Complex Rnd).

4.1. Experimental Configurations

All the simulations are performed under the same conditions; see [16]. When SNR_rec ≥ 100 dB, the signal is said to be perfectly recovered. We show the percentage of perfect recovery for signals of different sparsity. The OMP algorithm is used to reconstruct the signals for all measurement matrices. The results are averaged over 5000 runs for each sparsity to obtain smooth curves (500 runs for image reconstruction). For image reconstruction, the reconstructed images and their Peak Signal to Noise Ratios (PSNR) are shown.
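To make this protocol concrete, here is a minimal sketch of one trial, assuming a textbook OMP implementation and the 100 dB perfect-recovery threshold stated above; the matrix size and the random ensemble are illustrative, not the paper's exact setup.

```python
# One trial of the evaluation loop: sample a k-sparse complex signal,
# measure it, recover with OMP, and test the SNR_rec >= 100 dB criterion.
# This is a standard textbook OMP, not the authors' exact code.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x via orthogonal matching pursuit."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n, dtype=complex)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

def recovery_snr_db(x_true, x_hat):
    return 10 * np.log10(np.sum(np.abs(x_true) ** 2) /
                         (np.sum(np.abs(x_true - x_hat) ** 2) + 1e-300))

rng = np.random.default_rng(0)
m, n, k = 50, 128, 5
# complex Gaussian measurement matrix (illustrative ensemble)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x = np.zeros(n, dtype=complex)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x_hat = omp(A, A @ x, k)
print(recovery_snr_db(x, x_hat) >= 100)  # "perfect recovery" criterion
```

Averaging this success indicator over many random signals per sparsity level yields the perfect-recovery curves shown in the figures.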

4.2. Experimental Results

Let q = 2^5, γ = 8, ρ = 33, u = 0; the size of the sparse complex matrix A^{(1)}(8, 33, 0) is 248 × 1023. For a thorough performance evaluation, we have implemented four other classes of complex matrices with the same size: a 5-ary BCH-based matrix, a chirp-based matrix, a complex random Gaussian matrix (with real and imaginary parts of the elements chosen i.i.d. from a Gaussian distribution), and a matrix formed by random rows of the DFT. The perfect recovery percentage comparison of these matrices is shown in Fig. 1 (A^{(1)} is denoted as "Sparse Complex"). Fig. 1 indicates that the sparse complex matrix performs comparably to, and better



Fig. 3: The perfect recovery percentage vs. sparsity order k of the q-ary complex matrix A^{(2)}(248, 1023, 1) and the corresponding deterministic and random matrices (m = 248, n = 1023; curves: Rnddft, q-ary Complex, BCH, Chirp, Complex Rnd).

than the other matrices, except the DFT matrix.

Let q = 2^8, m = 50, n = 257, λ = 1; the q-ary complex matrix A^{(2)}(50, 257, 1) is compared with four classes of complex matrices of the same size 50 × 257, including a 3-ary BCH-based matrix, a chirp-based matrix, a complex random Gaussian matrix, and a matrix formed by random rows of the DFT; see Fig. 2 (the q-ary complex matrix is denoted as "q-ary Complex"). Let q = 2^10, m = 248, n = 1023, λ = 1; the q-ary complex matrix A^{(2)}(248, 1023, 1) is compared with four classes of complex matrices of the same size 248 × 1023, including a 5-ary BCH-based matrix; see Fig. 3. The q-ary complex matrix performs almost the same as, and sometimes even better than, the DFT matrix and all the other matrices.

(a) 25% Sparse Image (b) SBJ, PSNR=65.4 (c) RDFT, PSNR=63.1

(d) CRnd, PSNR=61.6 (e) BCH, PSNR=61.3 (f) Chirp, PSNR=41.5

Fig. 4: Performance of different sampling matrices used to compress a 62 × 62 Lena image with sparsity of 25%.

In Fig. 4, we use the Lena image of size 62 × 62. The original signal has been made sparse (k/n = 0.25) by discarding 75% of the Haar wavelet coefficients. In this scenario, we use the sparse complex matrix. All the employed matrices are 2400 × 3907. The second image is the reconstructed image using the sparse complex matrix (called "SBJ" in the figure), selected from the 4095 × 4095 matrix based on GF(2^6). The other matrices are random rows of the DFT matrix (called "RDFT" in the figure), a complex-valued random matrix (called "CRnd" in the figure), a 7-ary BCH matrix, and a chirp matrix. Obviously, the sparse complex matrix performs better than the other matrices in the sense of PSNR.

Finally, we use the pepper image of size 512 × 512 in Fig. 5. The original signal has been made sparse (k/n = 0.2) by block DCT (the block compressed sensing algorithm for natural images can be found in [17]). In this scenario, we use the q-ary complex matrix (called "qBJ" in Fig. 5) with q = 2^10. All the employed matrices are of size 205 × 1024, and the image is reconstructed in 16 × 16 blocks. Fig. 5 shows that the q-ary complex matrix performs better than the other matrices in the sense of PSNR.

(a) Original Image (b) qBJ, PSNR=30.7

(c) RDFT, PSNR=29.4 (d) CRnd, PSNR=30

Fig. 5: Comparison of different sampling matrices used to compress a 512 × 512 Pepper image.

5. CONCLUSION

In this paper, we proposed two classes of deterministic complex-valued measurement matrices based on B-J codes: the sparse complex-valued matrix and the q-ary complex-valued matrix. They come from cyclic matrices, which means that the storage cost of such matrices is relatively low, leading to simplicity in both sampling and reconstruction. Moreover, they can be resized flexibly by puncturing rows and columns. Finally, numerous simulations were conducted under the OMP algorithm. The simulation results show that the constructed matrices perform comparably to, and sometimes even better than, Gaussian random matrices and some other deterministic matrices, especially for image reconstruction.



6. REFERENCES

[1] A. Amini, V. Montazerhodjat, and F. Marvasti, "Matrices with small coherence using p-ary block codes," Signal Processing, IEEE Transactions on, vol. 60, no. 1, pp. 172–181, 2012.

[2] D. L. Donoho, "Compressed sensing," Information Theory, IEEE Transactions on, vol. 52, no. 4, pp. 1289–1306, 2006.

[3] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," Information Theory, IEEE Transactions on, vol. 52, no. 2, pp. 489–509, 2006.

[4] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," Signal Processing Magazine, IEEE, vol. 25, no. 2, pp. 21–30, 2008.

[5] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," Information Theory, IEEE Transactions on, vol. 53, no. 12, pp. 4655–4666, 2007.

[6] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.

[7] L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery," Applied and Computational Harmonic Analysis, vol. 26, no. 2, pp. 283–290, 2009.

[8] R. Calderbank, S. Howard, and S. Jafarpour, "Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property," Selected Topics in Signal Processing, IEEE Journal of, vol. 4, no. 2, pp. 358–374, 2010.

[9] S. Jafarpour, Deterministic compressed sensing. Princeton University, 2011.

[10] A. Amini and F. Marvasti, "Deterministic construction of binary, bipolar, and ternary compressed sensing matrices," Information Theory, IEEE Transactions on, vol. 57, no. 4, pp. 2360–2370, 2011.

[11] A. G. Dimakis, R. Smarandache, and P. O. Vontobel, "LDPC codes for compressed sensing," Information Theory, IEEE Transactions on, vol. 58, no. 5, pp. 3093–3114, 2012.

[12] A. S. Tehrani, A. Dimakis, and G. Caire, "Optimal deterministic compressed sensing matrices," in 2013 International Conference on Acoustics, Speech, and Signal Processing. IEEE, 2013, pp. 5895–5899.

[13] X. Ge and S.-T. Xia, "LDPC codes based on Berlekamp-Justesen codes with large stopping distances," in Information Theory Workshop, 2006. ITW'06 Chengdu. IEEE, 2006, pp. 214–218.

[14] L. Dandan, L. Xinji, X. Shutao, and J. Yong, "A class of deterministic construction of binary compressed sensing matrices," Journal of Electronics (China), vol. 29, no. 6, pp. 493–500, 2012.

[15] E. Berlekamp and J. Justesen, "Some long cyclic linear binary codes are not so bad," Information Theory, IEEE Transactions on, vol. 20, no. 3, pp. 351–356, 1974.

[16] X.-J. Liu and S.-T. Xia, "Constructions of quasi-cyclic measurement matrices based on array codes," in Information Theory Proceedings (ISIT), 2013 IEEE International Symposium on. IEEE, 2013, pp. 479–483.

[17] L. Gan, "Block compressed sensing of natural images," in Digital Signal Processing, 2007 15th International Conference on. IEEE, 2007, pp. 403–406.
