
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 64, NO. 8, APRIL 15, 2016 2051

Sequence Design to Minimize the Weighted Integrated and Peak Sidelobe Levels

Junxiao Song, Prabhu Babu, and Daniel P. Palomar, Fellow, IEEE

Abstract—Sequences with low aperiodic autocorrelation sidelobes are well known to have extensive applications in active sensing and communication systems. In this paper, we first consider the problem of minimizing the weighted integrated sidelobe level (WISL), which can be used to design sequences with impulse-like autocorrelation and a zero (or low) correlation zone. Two algorithms based on the general majorization-minimization method are developed to tackle the WISL minimization problem with guaranteed convergence to a stationary point. The proposed methods are then extended to optimize the $\ell_p$-norm of the autocorrelation sidelobes, which leads to a way to minimize the peak sidelobe level (PSL) criterion. All the proposed algorithms can be implemented via the fast Fourier transform (FFT) and thus are computationally efficient. An acceleration scheme is considered to further accelerate the algorithms. Numerical experiments show that the proposed algorithms can efficiently generate sequences with virtually zero autocorrelation sidelobes in a specified lag interval and can also produce very long sequences with much smaller PSL compared with some well-known analytical sequences.

Index Terms—Autocorrelation, majorization-minimization, peak sidelobe level, unit-modulus sequences, weighted integrated sidelobe level.

I. INTRODUCTION

SEQUENCES with good autocorrelation properties lie at the heart of many active sensing and communication systems. Important applications include synchronization of digital communication systems (e.g., GPS receivers or CDMA cellular systems), pilot sequences for channel estimation, coded sonar and radar systems, and even cryptography for secure systems [1]–[5]. In practice, due to the limitations of sequence generation hardware components (such as the maximum signal amplitude clip of analog-to-digital converters and power amplifiers), unit-modulus sequences (also known as polyphase sequences) are of special interest because of their maximum energy efficiency [5].

Let $\{x_n\}_{n=1}^{N}$ denote a complex unit-modulus sequence of length $N$; then the aperiodic autocorrelations of $\{x_n\}$ are defined as

$$r_k = \sum_{n=1}^{N-k} x_{n+k}\, x_n^* = r_{-k}^*, \quad k = 0, \dots, N-1. \qquad (1)$$

Manuscript received June 13, 2015; revised October 25, 2015; accepted December 01, 2015. Date of publication December 22, 2015; date of current version March 03, 2016. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Ian Clarkson. This work was supported by the Hong Kong RGC 16206315 research grant.
The authors are with the Hong Kong University of Science and Technology (HKUST), Clear Water Bay, Kowloon, Hong Kong (e-mail: [email protected]; [email protected]; [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSP.2015.2510982

The problem of sequence design for good autocorrelation properties usually arises when small autocorrelation sidelobes (i.e., small $|r_k|$ for $k \neq 0$) are required. To measure the goodness of the autocorrelation property of a sequence, two commonly used metrics are the integrated sidelobe level (ISL)

$$\mathrm{ISL} = \sum_{k=1}^{N-1} |r_k|^2 \qquad (2)$$

and the peak sidelobe level (PSL)

$$\mathrm{PSL} = \max_{k = 1, \dots, N-1} |r_k|. \qquad (3)$$
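For concreteness, the quantities in (1)–(3) can be computed directly from their definitions; the following sketch assumes the autocorrelation convention $r_k = \sum_n x_{n+k} x_n^*$ used above:

```python
import numpy as np

def autocorr_direct(x):
    """Aperiodic autocorrelations r_k = sum_n x_{n+k} * conj(x_n), k = 0..N-1."""
    N = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:N - k])) for k in range(N)])

def isl(x):
    """Integrated sidelobe level: sum of |r_k|^2 over the nonzero lags."""
    r = autocorr_direct(x)
    return np.sum(np.abs(r[1:]) ** 2)

def psl(x):
    """Peak sidelobe level: max of |r_k| over the nonzero lags."""
    r = autocorr_direct(x)
    return np.max(np.abs(r[1:]))

# Example: a random unit-modulus (polyphase) sequence.
rng = np.random.default_rng(0)
x = np.exp(1j * 2 * np.pi * rng.random(64))
```

Note that for any unit-modulus sequence $r_0 = N$; only the lags $k \neq 0$ enter the ISL and PSL.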

Owing to the practical importance of sequences with low autocorrelation sidelobes, a lot of effort has been devoted to identifying such sequences. Binary Barker sequences, with their peak sidelobe level (PSL) no greater than 1, are perhaps the most well-known such sequences [6]. However, it is generally accepted that they do not exist for lengths greater than 13. In 1965, Golomb and Scholtz [7] started to investigate more general sequences called generalized Barker sequences, which obey the same PSL bound but may have complex (polyphase) elements. Since then, a lot of work has been done to extend the list of polyphase Barker sequences [8]–[12], and the longest one ever found is of length 77. It is still unknown whether there exist longer polyphase Barker sequences. Apart from searching for longer polyphase Barker sequences, some families of polyphase sequences with good autocorrelation properties that can be constructed in closed form have also been proposed in the literature, such as the Frank sequences [13], the Chu sequences [14], and the Golomb sequences [15]. It has been shown that the PSLs of these sequences grow almost linearly with the square root of the length of the sequences [16], [17].

In recent years, several optimization-based approaches have been proposed to tackle sequence design problems; see [18]–[22]. Among them, [22] and [18] proposed to design unit-modulus sequences with low autocorrelation by directly minimizing the true ISL metric or a simpler criterion that is "almost equivalent" to the ISL metric. Efficient algorithms based on fast Fourier transform (FFT) operations were developed and shown to be capable of producing very long sequences with much lower autocorrelation compared with the Frank sequences and Golomb sequences. Why was the ISL metric chosen as the objective in these optimization approaches? It is probably because the ISL metric is more tractable than the PSL metric from an optimization point of view. But, as in the definition of Barker sequences, PSL seems to be the preferred metric in many cases. So it is of particular interest to also develop efficient optimization

1053-587X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


algorithms that can minimize the PSL metric. Additionally, in some applications, instead of sequences with impulse-like autocorrelation, zero correlation zone (ZCZ) sequences (with zero correlations over a smaller range) would suffice [23], [24]. In [18], an algorithm named WeCAN (weighted cyclic algorithm new) was proposed to design sequences with zero or low correlation zone.

In this paper, we consider the problem of minimizing the weighted ISL metric, which includes the ISL minimization problem as a special case and can be used to design zero or low correlation zone sequences by properly choosing the weights. Two efficient algorithms are developed based on the general majorization-minimization (MM) method by constructing two different majorization functions. The proposed algorithms can be implemented by means of FFT operations and are thus very efficient in practice. The convergence of the algorithms to a stationary point is proved. An acceleration scheme is also introduced to further accelerate the proposed algorithms. We also extend the proposed algorithms to minimize the $\ell_p$-norm of the autocorrelation sidelobes. The resulting algorithm can be adopted to minimize the PSL metric of unit-modulus sequences.

The remaining sections of the paper are organized as follows. In Section II, the problem formulation is presented. In Sections III and IV, we first give a brief review of the MM method and then two MM algorithms are derived, followed by the convergence analysis and an acceleration scheme in Section V. In Section VI, the algorithms are extended to minimize the $\ell_p$-norm of the autocorrelation sidelobes. Finally, Section VII presents some numerical results and the conclusions are given in Section VIII.

Notation

Boldface upper-case letters denote matrices, boldface lower-case letters denote column vectors, and italics denote scalars. $\mathbb{R}$ and $\mathbb{C}$ denote the real field and the complex field, respectively. $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$ denote the real and imaginary part, respectively. $\arg(\cdot)$ denotes the phase of a complex number. The superscripts $(\cdot)^T$, $(\cdot)^*$, and $(\cdot)^H$ denote transpose, complex conjugate, and conjugate transpose, respectively. $\odot$ denotes the Hadamard product. $A_{i,j}$ denotes the $(i,j)$th element of matrix $\mathbf{A}$ and $a_i$ denotes the $i$th element of vector $\mathbf{a}$. $\mathbf{A}_{i,:}$ denotes the $i$th row of matrix $\mathbf{A}$, $\mathbf{A}_{:,j}$ denotes the $j$th column of $\mathbf{A}$, and $\mathbf{A}_{i:j,\,k:l}$ denotes the submatrix of $\mathbf{A}$ from $A_{i,k}$ to $A_{j,l}$. $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix. $\mathrm{diag}(\mathbf{A})$ is a column vector consisting of all the diagonal elements of $\mathbf{A}$. $\mathrm{Diag}(\mathbf{a})$ is a diagonal matrix formed with $\mathbf{a}$ as its principal diagonal. $\mathrm{vec}(\mathbf{A})$ is a column vector consisting of all the columns of $\mathbf{A}$ stacked. $\mathbf{I}$ denotes an identity matrix.

II. PROBLEM FORMULATION

Let $\{x_n\}_{n=1}^{N}$ denote the complex unit-modulus sequence to be designed and $\{r_k\}$ be the aperiodic autocorrelations of $\{x_n\}$ as defined in (1); then we define the weighted integrated sidelobe level (WISL) as

$$\mathrm{WISL} = \sum_{k=1}^{N-1} w_k |r_k|^2, \qquad (4)$$

where $w_k \ge 0$, $k = 1, \dots, N-1$. It is easy to see that the WISL metric includes the ISL metric as a special case by simply taking $w_k = 1$ for all $k$.

The problem of interest in this paper is the following WISL minimization problem:

$$\begin{array}{ll} \underset{\{x_n\}}{\text{minimize}} & \displaystyle\sum_{k=1}^{N-1} w_k |r_k|^2 \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N, \end{array} \qquad (5)$$

which includes the ISL minimization problem considered in [22] as a special case and can be used to design zero (or low) correlation zone sequences by assigning larger weights to the sidelobes that we want to minimize.

An algorithm named WeCAN was proposed in [18] to tackle this problem. However, instead of directly minimizing the WISL metric as in (5), WeCAN tries to minimize an "almost equivalent" criterion. Moreover, the WeCAN algorithm requires computing the square root of an $N \times N$ matrix at the beginning and a number of FFTs at each iteration, which could be costly for large $N$. In the next section, we will develop algorithms that directly minimize the original WISL metric and, at the same time, are computationally much more efficient than the WeCAN algorithm.

III. SEQUENCE DESIGN VIA MAJORIZATION-MINIMIZATION

In this section, we first briefly introduce the general majorization-minimization (MM) method and then apply it to derive simple algorithms to solve the problem (5).

A. The MM Method

The MM method refers to the majorization-minimization method, which is an approach to solve optimization problems that are too difficult to solve directly. The principle behind the MM method is to transform a difficult problem into a series of simple problems. Interested readers may refer to [25], [26] and references therein for more details (recent generalizations include [27], [28]).

Suppose we want to minimize $f(\mathbf{x})$ over $\mathcal{X}$. Instead of minimizing the cost function $f(\mathbf{x})$ directly, the MM approach optimizes a sequence of approximate objective functions that majorize $f(\mathbf{x})$. More specifically, starting from a feasible point $\mathbf{x}^{(0)}$, the algorithm produces a sequence $\{\mathbf{x}^{(t)}\}$ according to the following update rule:

$$\mathbf{x}^{(t+1)} \in \arg\min_{\mathbf{x} \in \mathcal{X}} \; u\left(\mathbf{x}, \mathbf{x}^{(t)}\right), \qquad (6)$$

where $\mathbf{x}^{(t)}$ is the point generated by the algorithm at iteration $t$ and $u(\mathbf{x}, \mathbf{x}^{(t)})$ is the majorization function of $f(\mathbf{x})$ at $\mathbf{x}^{(t)}$.

Formally, the function $u(\cdot, \mathbf{x}^{(t)})$ is said to majorize the function $f(\cdot)$ at the point $\mathbf{x}^{(t)}$ if

$$u\left(\mathbf{x}, \mathbf{x}^{(t)}\right) \ge f(\mathbf{x}), \quad \forall \mathbf{x} \in \mathcal{X}, \qquad (7)$$
$$u\left(\mathbf{x}^{(t)}, \mathbf{x}^{(t)}\right) = f\left(\mathbf{x}^{(t)}\right). \qquad (8)$$

In other words, the function $u(\cdot, \mathbf{x}^{(t)})$ is an upper bound of $f(\cdot)$ over $\mathcal{X}$ and coincides with $f(\cdot)$ at $\mathbf{x}^{(t)}$.


To summarize, to minimize $f(\mathbf{x})$ over $\mathcal{X}$, the main steps of the majorization-minimization scheme are:
1) Find a feasible point $\mathbf{x}^{(0)}$ and set $t = 0$.
2) Construct a function $u(\mathbf{x}, \mathbf{x}^{(t)})$ that majorizes $f(\mathbf{x})$ at $\mathbf{x}^{(t)}$.
3) Let $\mathbf{x}^{(t+1)} \in \arg\min_{\mathbf{x} \in \mathcal{X}} u(\mathbf{x}, \mathbf{x}^{(t)})$.
4) If some convergence criterion is met, exit; otherwise, set $t = t+1$ and go to step 2).

It is easy to show that with this scheme the objective value is monotonically nonincreasing at every iteration, i.e.,

$$f\left(\mathbf{x}^{(t+1)}\right) \le u\left(\mathbf{x}^{(t+1)}, \mathbf{x}^{(t)}\right) \le u\left(\mathbf{x}^{(t)}, \mathbf{x}^{(t)}\right) = f\left(\mathbf{x}^{(t)}\right). \qquad (9)$$

The first inequality and the last equality follow from the properties of the majorization function, namely (7) and (8), respectively, and the second inequality follows from (6). The monotonicity makes MM algorithms very stable in practice.
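As a toy illustration of steps 1)–4) that is not taken from the paper: minimizing $f(x) = \sum_i |x - a_i|$ (whose minimizer is the median of the $a_i$) with the quadratic majorizer $|u| \le u^2/(2|u_0|) + |u_0|/2$, which is tight at $u = u_0$. Each majorized problem has a closed-form solution (a weighted average), and the objective decreases monotonically, exactly as guaranteed by (9):

```python
import numpy as np

def mm_median(a, x0, iters=300, eps=1e-12):
    """MM for f(x) = sum_i |x - a_i| using the majorizer
    |u| <= u^2 / (2|u0|) + |u0| / 2 (equality at u = u0)."""
    f = lambda x: np.sum(np.abs(x - a))
    x, history = x0, [f(x0)]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - a), eps)  # reweighting from the majorizer
        x = np.sum(w * a) / np.sum(w)             # closed-form minimizer of the majorizer
        history.append(f(x))
    return x, history

a = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
x_star, hist = mm_median(a, x0=np.mean(a))        # converges to the median, 3.0
```

The small `eps` guards against division by zero when an iterate lands exactly on a data point.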

B. WISL Minimization via MM

To solve the problem (5) via majorization-minimization, the key step is to find a majorization function of the objective such that the majorized problem is easy to solve. For that purpose, we first present a simple result that will be useful later.

Lemma 1 [22]: Let $\mathbf{L}$ be an $n \times n$ Hermitian matrix and $\mathbf{M}$ be another $n \times n$ Hermitian matrix such that $\mathbf{M} \succeq \mathbf{L}$. Then for any point $\mathbf{x}^{(t)} \in \mathbb{C}^n$, the quadratic function $\mathbf{x}^H \mathbf{L} \mathbf{x}$ is majorized by $\mathbf{x}^H \mathbf{M} \mathbf{x} + 2\mathrm{Re}\{\mathbf{x}^H (\mathbf{L} - \mathbf{M}) \mathbf{x}^{(t)}\} + (\mathbf{x}^{(t)})^H (\mathbf{M} - \mathbf{L}) \mathbf{x}^{(t)}$ at $\mathbf{x}^{(t)}$.

Let us define $\mathbf{U}_k$, $k = 1, \dots, N-1$, to be the $N \times N$ Toeplitz matrices with the $k$th diagonal elements being 1 and 0 elsewhere. Noticing that

$$r_k = \mathbf{x}^H \mathbf{U}_k \mathbf{x}, \quad k = 1, \dots, N-1, \qquad (10)$$

we can rewrite the problem (5) as

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \displaystyle\sum_{k=1}^{N-1} w_k \left| \mathbf{x}^H \mathbf{U}_k \mathbf{x} \right|^2 \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (11)$$
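The representation (10) can be checked numerically; the snippet below assumes $\mathbf{U}_k$ has its ones on the $k$th superdiagonal, which pairs with the convention $r_k = \sum_n x_{n+k} x_n^*$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = np.exp(1j * 2 * np.pi * rng.random(N))        # unit-modulus sequence

for k in range(1, N):
    U_k = np.eye(N, k=k)                          # ones on the k-th superdiagonal
    r_k = np.sum(x[k:] * np.conj(x[:N - k]))      # r_k by direct summation
    assert np.isclose(x.conj() @ U_k @ x, r_k)    # the identity (10)
```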

Let $\mathbf{U}_{-k} = \mathbf{U}_k^H$ and $w_{-k} = w_k$; it is easy to see that $r_{-k} = \mathbf{x}^H \mathbf{U}_{-k} \mathbf{x}$, so we can rewrite the problem (11) in a more symmetric way:

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \displaystyle\frac{1}{2} \sum_{k=1-N, k \neq 0}^{N-1} w_k \left| \mathbf{x}^H \mathbf{U}_k \mathbf{x} \right|^2 \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (12)$$

Since $\mathbf{x}^H \mathbf{U}_k \mathbf{x} = \mathrm{Tr}(\mathbf{U}_k \mathbf{x}\mathbf{x}^H)$, so that $|\mathbf{x}^H \mathbf{U}_k \mathbf{x}|^2 = |\mathrm{vec}(\mathbf{U}_k)^H \mathrm{vec}(\mathbf{X})|^2$ with $\mathbf{X} = \mathbf{x}\mathbf{x}^H$, the problem (12) can be further rewritten using the notation $\mathbf{X} = \mathbf{x}\mathbf{x}^H$ as (the constant factor $\frac{1}{2}$ is ignored):

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \displaystyle\sum_{k=1-N, k \neq 0}^{N-1} w_k \left| \mathrm{vec}(\mathbf{U}_k)^H \mathrm{vec}(\mathbf{X}) \right|^2 \\ \text{subject to} & \mathbf{X} = \mathbf{x}\mathbf{x}^H, \; |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (13)$$

Let us define

$$\mathbf{\Phi} = \sum_{k=1-N, k \neq 0}^{N-1} w_k \, \mathrm{vec}(\mathbf{U}_k) \, \mathrm{vec}(\mathbf{U}_k)^H \qquad (14)$$

and denote the objective function of (13) by $f$; then $f = \mathrm{vec}(\mathbf{X})^H \mathbf{\Phi} \, \mathrm{vec}(\mathbf{X})$, which is clearly a quadratic function in $\mathrm{vec}(\mathbf{X})$. According to Lemma 1, we can construct a majorization function of $f$ by simply choosing a matrix $\mathbf{M}$ such that $\mathbf{M} \succeq \mathbf{\Phi}$. A simple choice of $\mathbf{M}$ can be $\mathbf{M} = \lambda_{\max}(\mathbf{\Phi}) \mathbf{I}$, where $\lambda_{\max}(\mathbf{\Phi})$ is the maximum eigenvalue of $\mathbf{\Phi}$. Owing to the special structure of $\mathbf{\Phi}$, it can be shown that $\lambda_{\max}(\mathbf{\Phi})$ can be computed efficiently in closed form.

Lemma 2: Let $\mathbf{\Phi}$ be the matrix defined in (14). Then the maximum eigenvalue of $\mathbf{\Phi}$ is given by $\lambda_{\max}(\mathbf{\Phi}) = \max_{1 \le k \le N-1} \{ w_k (N-k) \}$.

Proof: It is easy to see that the set of vectors $\{\mathrm{vec}(\mathbf{U}_k)\}_{k=1-N, k \neq 0}^{N-1}$ are mutually orthogonal. For $k = \pm 1, \dots, \pm(N-1)$, we have

$$\mathbf{\Phi} \, \mathrm{vec}(\mathbf{U}_k) = w_k \left\| \mathrm{vec}(\mathbf{U}_k) \right\|^2 \mathrm{vec}(\mathbf{U}_k) = w_k (N - |k|) \, \mathrm{vec}(\mathbf{U}_k). \qquad (15)$$

Thus $w_k (N - |k|)$, $k = \pm 1, \dots, \pm(N-1)$, are the nonzero eigenvalues of $\mathbf{\Phi}$ with corresponding eigenvectors $\mathrm{vec}(\mathbf{U}_k)$. Since $w_{-k} = w_k$ and $w_k \ge 0$, the maximum eigenvalue of $\mathbf{\Phi}$ is given by $\lambda_{\max}(\mathbf{\Phi}) = \max_{1 \le k \le N-1} \{ w_k (N-k) \}$.

Then, given $\mathbf{x}^{(t)}$ at iteration $t$, by choosing $\mathbf{M} = \lambda_u \mathbf{I}$ with $\lambda_u = \lambda_{\max}(\mathbf{\Phi})$ in Lemma 1, we know that the objective of (13) is majorized by the following function at $\mathbf{X}^{(t)} = \mathbf{x}^{(t)}(\mathbf{x}^{(t)})^H$:

$$\lambda_u \, \mathrm{vec}(\mathbf{X})^H \mathrm{vec}(\mathbf{X}) + 2 \mathrm{Re}\left\{ \mathrm{vec}(\mathbf{X})^H (\mathbf{\Phi} - \lambda_u \mathbf{I}) \, \mathrm{vec}(\mathbf{X}^{(t)}) \right\} + \mathrm{vec}(\mathbf{X}^{(t)})^H (\lambda_u \mathbf{I} - \mathbf{\Phi}) \, \mathrm{vec}(\mathbf{X}^{(t)}). \qquad (16)$$

Since $\mathrm{vec}(\mathbf{X})^H \mathrm{vec}(\mathbf{X}) = \|\mathbf{X}\|_F^2 = N^2$, the first term of (16) is just a constant. After ignoring the constant terms, the majorized problem of (13) is given by

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathrm{Re}\left\{ \mathrm{vec}(\mathbf{X})^H (\mathbf{\Phi} - \lambda_u \mathbf{I}) \, \mathrm{vec}(\mathbf{X}^{(t)}) \right\} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (17)$$

Substituting $\mathbf{\Phi}$ in (14) back into (17), the problem becomes

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathrm{Re}\left\{ \displaystyle\sum_{k=1-N, k \neq 0}^{N-1} w_k \left( r_k^{(t)} \right)^{\!*} \mathbf{x}^H \mathbf{U}_k \mathbf{x} - \lambda_u \, \mathrm{vec}(\mathbf{X})^H \mathrm{vec}(\mathbf{X}^{(t)}) \right\} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (18)$$


Since $\mathrm{vec}(\mathbf{X})^H \mathrm{vec}(\mathbf{X}^{(t)}) = \mathrm{Tr}(\mathbf{X} \mathbf{X}^{(t)}) = |\mathbf{x}^H \mathbf{x}^{(t)}|^2 = \mathbf{x}^H \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H \mathbf{x}$, the problem (18) can be rewritten as

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathbf{x}^H \mathbf{R} \mathbf{x} - \lambda_u \, \mathbf{x}^H \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H \mathbf{x} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N, \end{array} \qquad (19)$$

which can be further simplified as

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathbf{x}^H \left( \mathbf{R} - \lambda_u \, \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H \right) \mathbf{x} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N, \end{array} \qquad (20)$$

where

$$\mathbf{R} = \sum_{k=1-N, k \neq 0}^{N-1} w_k \left( r_k^{(t)} \right)^{\!*} \mathbf{U}_k \qquad (21)$$

is a Hermitian Toeplitz matrix and $r_k^{(t)}$, $k = 1-N, \dots, N-1$, are the autocorrelations of the sequence $\{x_n^{(t)}\}$.

It is clear that the objective function in (20) is quadratic in $\mathbf{x}$, but the problem (20) is still hard to solve directly. So we propose to majorize the objective function of problem (20) at $\mathbf{x}^{(t)}$ again to further simplify the problem that we need to solve at each iteration. Similarly, to construct a majorization function of the objective, we need to find a matrix $\mathbf{M}$ such that $\mathbf{M} \succeq \mathbf{R} - \lambda_u \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H$. As one choice, one may choose $\mathbf{M} = \lambda_{\max}(\mathbf{R} - \lambda_u \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H) \mathbf{I}$, as in the first majorization step. But in this case, to compute the maximum eigenvalue of the matrix $\mathbf{R} - \lambda_u \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H$, some iterative algorithms are needed, in contrast to the simple closed-form expression in the first majorization step. To maintain the simplicity and the computational efficiency of the algorithm, here we propose instead to use some easily computed upper bound of $\lambda_{\max}(\mathbf{R} - \lambda_u \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H)$. To derive the upper bound, we first introduce a useful result regarding the bounds of the extreme eigenvalues of Hermitian Toeplitz matrices [29].

Lemma 3: Let $\mathbf{T}$ be an $n \times n$ Hermitian Toeplitz matrix defined by its first column $\mathbf{t} = (t_0, t_1, \dots, t_{n-1})^T$ as follows:

$$\mathbf{T} = \begin{bmatrix} t_0 & t_1^* & \cdots & t_{n-1}^* \\ t_1 & t_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & t_1^* \\ t_{n-1} & \cdots & t_1 & t_0 \end{bmatrix}$$

and let $\mathbf{F}$ be the $2n \times 2n$ FFT matrix with $F_{k,l} = e^{-j\frac{2\pi k l}{2n}}$, $0 \le k, l \le 2n-1$. Let $\mathbf{c} = (t_0, t_1, \dots, t_{n-1}, 0, t_{n-1}^*, \dots, t_1^*)^T$ and $\boldsymbol{\mu} = \mathbf{F}\mathbf{c}$ be the discrete Fourier transform of $\mathbf{c}$ (which is real-valued). Then

$$\lambda_{\max}(\mathbf{T}) \le \max_{1 \le i \le 2n} \mu_i, \qquad (22)$$

$$\lambda_{\min}(\mathbf{T}) \ge \min_{1 \le i \le 2n} \mu_i. \qquad (23)$$

Proof: See [29].

Since the matrix $\mathbf{R}$ is Hermitian Toeplitz, according to Lemma 3, we know that

$$\lambda_{\max}(\mathbf{R}) \le \max_{1 \le i \le 2N} \mu_i, \qquad (24)$$

where $\boldsymbol{\mu} = \mathbf{F}\mathbf{c}$ and

$$\mathbf{c} = \left( 0, w_1 r_1^{(t)}, \dots, w_{N-1} r_{N-1}^{(t)}, 0, w_{N-1} \left(r_{N-1}^{(t)}\right)^{\!*}, \dots, w_1 \left(r_1^{(t)}\right)^{\!*} \right)^T. \qquad (25)$$

Let us denote the upper bound of $\lambda_{\max}(\mathbf{R})$ at the right-hand side of (24) by $\lambda_R$, i.e.,

$$\lambda_R = \max_{1 \le i \le 2N} \mu_i. \qquad (26)$$

Since $\lambda_u \, \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H \succeq \mathbf{0}$, it is easy to see that

$$\lambda_R \mathbf{I} \succeq \mathbf{R} \succeq \mathbf{R} - \lambda_u \, \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H. \qquad (27)$$

Thus, we may choose $\mathbf{M} = \lambda_R \mathbf{I}$ in Lemma 1 and the objective of (20) is majorized at $\mathbf{x}^{(t)}$ by

$$\lambda_R \, \mathbf{x}^H \mathbf{x} + 2 \mathrm{Re}\left\{ \mathbf{x}^H \left( \mathbf{R} - \lambda_u \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H - \lambda_R \mathbf{I} \right) \mathbf{x}^{(t)} \right\} + \mathrm{const}. \qquad (28)$$

Since $\mathbf{x}^H \mathbf{x} = N$, the first term of (28) is a constant. Again by ignoring the constant terms, we have the majorized problem of (20):

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathrm{Re}\left\{ \mathbf{x}^H \left( \mathbf{R} - \lambda_u \mathbf{x}^{(t)} (\mathbf{x}^{(t)})^H - \lambda_R \mathbf{I} \right) \mathbf{x}^{(t)} \right\} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N, \end{array} \qquad (29)$$

which can be rewritten as

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \left\| \mathbf{x} - \mathbf{y} \right\|_2 \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N, \end{array} \qquad (30)$$

where

$$\mathbf{y} = \left( \lambda_u N + \lambda_R \right) \mathbf{x}^{(t)} - \mathbf{R} \mathbf{x}^{(t)}. \qquad (31)$$

It is easy to see that the problem (30) has a closed-form solution, which is given by

$$x_n^{(t+1)} = e^{j \arg(y_n)}, \quad n = 1, \dots, N. \qquad (32)$$
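The closed-form solution is simply the entrywise phase projection $x_n = e^{j\arg(y_n)}$ (equivalently $y_n/|y_n|$), since the problem decouples across coordinates. A quick sanity check against randomly drawn feasible points:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x_star = np.exp(1j * np.angle(y))           # phase projection, i.e., y / |y|
best = np.linalg.norm(x_star - y)

# No random unit-modulus point should come closer to y.
for _ in range(1000):
    x_rand = np.exp(1j * 2 * np.pi * rng.random(N))
    assert best <= np.linalg.norm(x_rand - y) + 1e-12
```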

Note that although we have applied the majorization-minimization scheme twice at the point $\mathbf{x}^{(t)}$, it can be viewed as directly majorizing the objective function of (5) at $\mathbf{x}^{(t)}$ by the following function:

(33)


and the minimizer of $u(\mathbf{x}, \mathbf{x}^{(t)})$ over the constraint set is given by (32).

According to the steps of the majorization-minimization scheme described in Section III-A, we can now readily have a straightforward implementation of the algorithm, which at each iteration computes $\mathbf{y}$ according to (31) and updates $\mathbf{x}$ via (32). It is easy to see that the main cost is the computation of $\mathbf{y}$ in (31). To obtain an efficient implementation of the algorithm, here we further explore the special structure of the matrices involved in the computation of $\mathbf{y}$.

We first notice that to compute $\mathbf{y}$ we need the FFT of the vector $\mathbf{c}$ in (25), and the autocorrelations $\{r_k^{(t)}\}$ of $\{x_n^{(t)}\}$ are needed to form the vector $\mathbf{c}$. It is well known that the autocorrelations can be computed efficiently via FFT (IFFT) operations, i.e.,

$$\left( r_0, r_1, \dots, r_{N-1}, 0, r_{N-1}^*, \dots, r_1^* \right)^T = \frac{1}{2N} \mathbf{F}^H \left| \mathbf{F} \begin{bmatrix} \mathbf{x} \\ \mathbf{0}_N \end{bmatrix} \right|^{\odot 2}, \qquad (34)$$

where $\mathbf{F}$ is the $2N \times 2N$ FFT matrix and $|\cdot|^{\odot 2}$ denotes the element-wise absolute-squared value. Next we present another simple result regarding Hermitian Toeplitz matrices that can be used to compute the matrix-vector multiplication $\mathbf{R}\mathbf{x}$ efficiently via FFT (IFFT).

Lemma 4: Let $\mathbf{T}$ be an $n \times n$ Hermitian Toeplitz matrix

defined by its first column $\mathbf{t} = (t_0, t_1, \dots, t_{n-1})^T$ as in Lemma 3, and let $\mathbf{F}$ be the $2n \times 2n$ FFT matrix with $F_{k,l} = e^{-j\frac{2\pi k l}{2n}}$. Then $\mathbf{T}$ can be decomposed as

$$\mathbf{T} = \frac{1}{2n} \, \mathbf{F}_{:,1:n}^H \, \mathrm{Diag}(\boldsymbol{\mu}) \, \mathbf{F}_{:,1:n},$$

where $\boldsymbol{\mu} = \mathbf{F}\mathbf{c}$ with $\mathbf{c} = (t_0, t_1, \dots, t_{n-1}, 0, t_{n-1}^*, \dots, t_1^*)^T$.

Proof: The Hermitian Toeplitz matrix $\mathbf{T}$ can be embedded in a circulant matrix $\mathbf{C}$ of dimension $2n \times 2n$ as follows:

$$\mathbf{C} = \begin{bmatrix} \mathbf{T} & \mathbf{B} \\ \mathbf{B} & \mathbf{T} \end{bmatrix}, \qquad (35)$$

where

$$\mathbf{B} = \begin{bmatrix} 0 & t_{n-1} & \cdots & t_1 \\ t_{n-1}^* & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & t_{n-1} \\ t_1^* & \cdots & t_{n-1}^* & 0 \end{bmatrix}. \qquad (36)$$

The circulant matrix $\mathbf{C}$ can be diagonalized by the FFT matrix [30], i.e.,

$$\mathbf{C} = \frac{1}{2n} \, \mathbf{F}^H \, \mathrm{Diag}(\mathbf{F}\mathbf{c}) \, \mathbf{F}, \qquad (37)$$

where $\mathbf{c}$ is the first column of $\mathbf{C}$, i.e., $\mathbf{c} = (t_0, t_1, \dots, t_{n-1}, 0, t_{n-1}^*, \dots, t_1^*)^T$. Since the matrix $\mathbf{T}$ is just the upper-left $n \times n$ block of $\mathbf{C}$, we can easily obtain $\mathbf{T} = \frac{1}{2n} \mathbf{F}_{:,1:n}^H \mathrm{Diag}(\boldsymbol{\mu}) \mathbf{F}_{:,1:n}$.
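In code, (34) and Lemma 4 together supply the per-iteration computations: the autocorrelations come from one zero-padded FFT/IFFT pair, and the Hermitian Toeplitz product $\mathbf{T}\mathbf{v}$ comes from the circulant embedding with first column $\mathbf{c} = (t_0, \dots, t_{n-1}, 0, t_{n-1}^*, \dots, t_1^*)^T$. A sketch under the conventions assumed above, checked against direct summation and a dense multiply:

```python
import numpy as np

def autocorr_fft(x):
    """All aperiodic autocorrelations r_0..r_{N-1}, cf. (34)."""
    N = len(x)
    X = np.fft.fft(x, 2 * N)                 # FFT of the zero-padded sequence
    return np.fft.ifft(np.abs(X) ** 2)[:N]   # IFFT of the squared magnitudes

def toeplitz_matvec(t, v):
    """T v for the Hermitian Toeplitz matrix with first column t, cf. Lemma 4."""
    n = len(t)
    c = np.concatenate((t, [0], np.conj(t[1:][::-1])))  # circulant first column
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(v, 2 * n))[:n]

rng = np.random.default_rng(4)
N = 32
x = np.exp(1j * 2 * np.pi * rng.random(N))

# Autocorrelations: FFT route vs. direct summation.
r_direct = np.array([np.sum(x[k:] * np.conj(x[:N - k])) for k in range(N)])
assert np.allclose(autocorr_fft(x), r_direct)

# Toeplitz product: FFT route vs. a dense Hermitian Toeplitz multiply.
t = rng.standard_normal(N) + 1j * rng.standard_normal(N)
t[0] = t[0].real                             # Hermitian => real diagonal
T = np.array([[t[i - j] if i >= j else np.conj(t[j - i]) for j in range(N)]
              for i in range(N)])
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
assert np.allclose(toeplitz_matvec(t, v), T @ v)
```

The eigenvalue bounds of Lemma 3 fall out of the same embedding: the real vector `np.fft.fft(c).real` brackets the spectrum of `T`.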

Since the matrix $\mathbf{R}$ is Hermitian Toeplitz (with first column given by the first $N$ elements of $\mathbf{c}$ in (25)), from Lemma 4 we easily have

$$\mathbf{R} = \frac{1}{2N} \, \mathbf{F}_{:,1:N}^H \, \mathrm{Diag}(\boldsymbol{\mu}) \, \mathbf{F}_{:,1:N}, \qquad (38)$$

where $\boldsymbol{\mu}$ is the same as the one defined in (25), which can be reused here. With the decomposition of $\mathbf{R}$ given in (38), it is easy to see that the matrix-vector multiplication $\mathbf{R}\mathbf{x}$ can be performed by means of FFT (IFFT) operations.

Now we are ready to summarize the overall algorithm, which is given in Algorithm 1. The algorithm falls into the general framework of MM algorithms and thus preserves the monotonicity of such algorithms. It is also worth noting that the per-iteration computation of the algorithm is dominated by four FFT (IFFT) operations and is thus computationally very efficient.

given in Algorithm 1. The algorithm falls into the general frame-work of MM algorithms and thus preserves the monotonicity ofsuch algorithms. It is also worth noting that the per iterationcomputation of the algorithm is dominated by four FFT (IFFT)operations and thus computationally very efficient.

Algorithm 1: MWISL—Monotonic minimizer for WeightedISL

Require: sequence length , weights1: Set , initialize .2:3: repeat4:5:6:7:8:

9:

10:11:12: until convergence
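To make the derivation concrete, the following is a compact sketch of the MWISL iteration as reconstructed above. It assumes the conventions used in this section (the superdiagonal $\mathbf{U}_k$, the autocorrelation definition of (1), $\lambda_u = \max_k w_k(N-k)$, and the FFT bound $\lambda_R$); it is a sketch rather than the authors' reference implementation, and the helper names `autocorr`, `wisl`, and `mwisl` are ours. The MM guarantee shows up numerically as a nonincreasing WISL sequence:

```python
import numpy as np

def autocorr(x):
    """Aperiodic autocorrelations r_k = sum_n x_{n+k} conj(x_n) via a 2N-FFT."""
    N = len(x)
    return np.fft.ifft(np.abs(np.fft.fft(x, 2 * N)) ** 2)[:N]

def wisl(x, w):
    """Weighted ISL: sum_{k=1}^{N-1} w_k |r_k|^2."""
    return np.sum(w * np.abs(autocorr(x)[1:]) ** 2)

def mwisl(w, x0, iters=200):
    N = len(x0)
    lam_u = np.max(w * (N - np.arange(1, N)))   # lambda_max(Phi), cf. Lemma 2
    x, hist = x0, [wisl(x0, w)]
    for _ in range(iters):
        r = autocorr(x)
        t = np.concatenate(([0], w * r[1:]))    # first column of R
        c = np.concatenate((t, [0], np.conj(t[1:][::-1])))
        mu = np.fft.fft(c)                      # real; eigenvalue bound for R
        lam_R = np.max(mu.real)
        Rx = np.fft.ifft(mu * np.fft.fft(x, 2 * N))[:N]  # R x via Lemma 4
        y = (lam_u * N + lam_R) * x - Rx        # cf. (31)
        x = y / np.abs(y)                       # closed-form solution (32)
        hist.append(wisl(x, w))
    return x, hist

rng = np.random.default_rng(5)
N = 64
w = np.ones(N - 1)                              # plain ISL weights
x0 = np.exp(1j * 2 * np.pi * rng.random(N))
x_opt, hist = mwisl(w, x0)
```

Setting all weights to 1 recovers plain ISL minimization; a zero (or low) correlation zone is obtained by putting large weights on the lags that should vanish.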

IV. WISL MINIMIZATION WITH AN IMPROVED MAJORIZATION FUNCTION

As described in the previous section, the proposed algorithm is based on the majorization-minimization principle, and the nature of the majorization functions usually dictates the performance of the algorithm. In this section, we further explore the special structure of the problem and construct a different majorization function.

Notice that, to obtain the simple algorithm in the previous section, a key point is that the first term of the majorization function in (16) is just a constant and can be ignored, which removes the higher-order term in the objective of the majorized problem (17). In the previous section, we have chosen $\mathbf{M} = \lambda_{\max}(\mathbf{\Phi}) \mathbf{I}$ such that the resulting first term $\mathrm{vec}(\mathbf{X})^H \mathbf{M} \, \mathrm{vec}(\mathbf{X})$ is a constant over the constraint set. But it is easy to see that, to ensure this term being constant, it is sufficient to require $\mathbf{M}$ to be diagonal (every entry of $\mathbf{X} = \mathbf{x}\mathbf{x}^H$ has unit modulus on the constraint set), i.e., choosing a diagonal $\mathbf{D} \succeq \mathbf{\Phi}$. To construct a tight majorization function, here we consider the following problem:

$$\begin{array}{ll} \underset{\mathbf{D}}{\text{minimize}} & \mathrm{Tr}(\mathbf{D} - \mathbf{\Phi}) \\ \text{subject to} & \mathbf{D} \succeq \mathbf{\Phi}, \; \mathbf{D} \text{ diagonal,} \end{array} \qquad (39)$$


i.e., we choose the diagonal matrix $\mathbf{D}$ that minimizes the sum of the eigenvalues of the difference $\mathbf{D} - \mathbf{\Phi}$. Since $\mathrm{Tr}(\mathbf{\Phi})$ is a constant, problem (39) can be written as

$$\begin{array}{ll} \underset{\mathbf{D}}{\text{minimize}} & \mathrm{Tr}(\mathbf{D}) \\ \text{subject to} & \mathbf{D} \succeq \mathbf{\Phi}, \; \mathbf{D} \text{ diagonal.} \end{array} \qquad (40)$$

Problem (40) is an SDP (semidefinite program), and there is no closed-form solution in general. However, due to the special properties (i.e., symmetry and nonnegativity) of the matrix $\mathbf{\Phi}$ in (14), it can be shown that problem (40) can be solved in closed form, as given in the following lemma.

Lemma 5: Let $\mathbf{L}$ be an $n \times n$ real symmetric elementwise nonnegative matrix. Then the problem

$$\begin{array}{ll} \underset{\mathbf{D}}{\text{minimize}} & \mathrm{Tr}(\mathbf{D}) \\ \text{subject to} & \mathbf{D} \succeq \mathbf{L}, \; \mathbf{D} \text{ diagonal} \end{array} \qquad (41)$$

admits the following optimal solution:

$$\mathbf{D}^\star = \mathrm{Diag}(\mathbf{L}\mathbf{1}). \qquad (42)$$
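The claimed solution, the row-sum diagonal $\mathrm{Diag}(\mathbf{L}\mathbf{1})$, can be checked numerically: $\mathrm{Diag}(\mathbf{L}\mathbf{1}) - \mathbf{L}$ is diagonally dominant with nonnegative diagonal, hence positive semidefinite, and its trace matches the lower bound $\mathbf{1}^T \mathbf{L} \mathbf{1}$ implied by feasibility:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 12
A = rng.random((n, n))
L = (A + A.T) / 2                      # real symmetric, entrywise nonnegative

d = L @ np.ones(n)                     # row sums: the claimed optimal diagonal
D = np.diag(d)

# Feasibility: D - L is positive semidefinite.
assert np.linalg.eigvalsh(D - L).min() >= -1e-10
# Optimality: any feasible diagonal D' satisfies 1^T D' 1 >= 1^T L 1 = Tr(D).
assert np.isclose(np.trace(D), np.ones(n) @ L @ np.ones(n))
```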

Proof: Since $\mathbf{D} \succeq \mathbf{L}$, we have $\mathbf{1}^T \mathbf{D} \mathbf{1} \ge \mathbf{1}^T \mathbf{L} \mathbf{1}$, i.e., $\mathrm{Tr}(\mathbf{D}) \ge \mathbf{1}^T \mathbf{L} \mathbf{1}$ for any feasible $\mathbf{D}$, with the equality achieved by $\mathbf{D}^\star = \mathrm{Diag}(\mathbf{L}\mathbf{1})$. So it remains to show that $\mathbf{D}^\star$ is feasible, i.e., $\mathrm{Diag}(\mathbf{L}\mathbf{1}) \succeq \mathbf{L}$. Since, for any $\mathbf{v} \in \mathbb{R}^n$, we have

$$\mathbf{v}^T \left( \mathrm{Diag}(\mathbf{L}\mathbf{1}) - \mathbf{L} \right) \mathbf{v} = \sum_{i,j} L_{ij} v_i^2 - \sum_{i,j} L_{ij} v_i v_j = \frac{1}{2} \sum_{i,j} L_{ij} (v_i - v_j)^2 \ge 0, \qquad (43)$$

where the second equality follows from the symmetry of $\mathbf{L}$ and the last inequality follows from the fact that $\mathbf{L}$ is nonnegative, the proof is complete.

Then, given $\mathbf{x}^{(t)}$ at iteration $t$, by choosing $\mathbf{M} = \mathbf{D} = \mathrm{Diag}(\mathbf{\Phi}\mathbf{1})$ in Lemma 1, the objective of (13) is majorized by the following function at $\mathbf{X}^{(t)}$:

$$\mathrm{vec}(\mathbf{X})^H \mathbf{D} \, \mathrm{vec}(\mathbf{X}) + 2\mathrm{Re}\left\{ \mathrm{vec}(\mathbf{X})^H (\mathbf{\Phi} - \mathbf{D}) \, \mathrm{vec}(\mathbf{X}^{(t)}) \right\} + \mathrm{vec}(\mathbf{X}^{(t)})^H (\mathbf{D} - \mathbf{\Phi}) \, \mathrm{vec}(\mathbf{X}^{(t)}). \qquad (44)$$

As discussed above, the first term of (44) is a constant, and by ignoring the constant terms we have the majorized problem given as follows:

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathrm{Re}\left\{ \mathrm{vec}(\mathbf{X})^H (\mathbf{\Phi} - \mathbf{D}) \, \mathrm{vec}(\mathbf{X}^{(t)}) \right\} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (45)$$

Since

$$\mathrm{vec}(\mathbf{X})^H \mathbf{D} \, \mathrm{vec}(\mathbf{X}^{(t)}) = \mathbf{x}^H \, \mathrm{vec}^{-1}\!\left( \mathrm{diag}(\mathbf{D}) \odot \mathrm{vec}(\mathbf{X}^{(t)}) \right) \mathbf{x}, \qquad (46)$$

where $\mathrm{vec}^{-1}(\cdot)$ is the inverse operation of $\mathrm{vec}(\cdot)$, following the derivation in the previous section we can simplify (45) to

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathbf{x}^H \left( \mathbf{R} - \mathrm{vec}^{-1}\!\left( \mathrm{diag}(\mathbf{D}) \odot \mathrm{vec}(\mathbf{X}^{(t)}) \right) \right) \mathbf{x} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N, \end{array} \qquad (47)$$

where $\mathbf{R}$ is defined in (21).

The objective in (47) is quadratic in $\mathbf{x}$, and similarly as before we would like to find a matrix $\mathbf{M}$ such that $\mathbf{M} \succeq \mathbf{R} - \mathrm{vec}^{-1}\!\left( \mathrm{diag}(\mathbf{D}) \odot \mathrm{vec}(\mathbf{X}^{(t)}) \right)$. For the same reason as in the previous section, here we use some easily computed upper bound of the maximum eigenvalue of this matrix to construct $\mathbf{M}$. We first present two results that will be used to derive the upper bound. The first result indicates some relations between Hadamard and conventional matrix multiplication [31].

Lemma 6: Let $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{n \times n}$, and let $\mathbf{c} \in \mathbb{C}^n$. Then the $i$th diagonal entry of the matrix $\mathbf{A}\,\mathrm{Diag}(\mathbf{c})\,\mathbf{B}$ coincides with the $i$th entry of the vector $(\mathbf{A} \odot \mathbf{B}^T)\,\mathbf{c}$, $1 \le i \le n$, i.e.,

$$\mathrm{diag}\left( \mathbf{A}\,\mathrm{Diag}(\mathbf{c})\,\mathbf{B} \right) = \left( \mathbf{A} \odot \mathbf{B}^T \right) \mathbf{c}. \qquad (48)$$

Then the second result follows, which reveals a fact regarding the eigenvalues of the Hadamard product of a matrix with $\mathbf{x}\mathbf{x}^H$ for unit-modulus $\mathbf{x}$.

Lemma 7: Let $\mathbf{A}$ be an $n \times n$ matrix and $\mathbf{x} \in \mathbb{C}^n$ with $|x_i| = 1$, $i = 1, \dots, n$. Then $\mathbf{A}$ and $\mathbf{A} \odot (\mathbf{x}\mathbf{x}^H)$ share the same set of eigenvalues.

Proof: Suppose $\lambda$ is an eigenvalue of $\mathbf{A}$ and $\mathbf{v}$ is the corresponding eigenvector, i.e., $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$; then

$$\left( \mathbf{A} \odot \mathbf{x}\mathbf{x}^H \right)(\mathbf{x} \odot \mathbf{v}) = \mathrm{Diag}(\mathbf{x})\,\mathbf{A}\,\mathrm{Diag}(\mathbf{x})^H (\mathbf{x} \odot \mathbf{v}) = \mathrm{Diag}(\mathbf{x})\,\mathbf{A}\mathbf{v} = \lambda (\mathbf{x} \odot \mathbf{v}), \qquad (49)$$

which means $\lambda$ is also an eigenvalue of the matrix $\mathbf{A} \odot \mathbf{x}\mathbf{x}^H$, with the corresponding eigenvector given by $\mathbf{x} \odot \mathbf{v}$.
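Assuming the lemma refers to the identity $\mathbf{A} \odot (\mathbf{x}\mathbf{x}^H) = \mathrm{Diag}(\mathbf{x})\,\mathbf{A}\,\mathrm{Diag}(\mathbf{x})^H$, a unitary similarity of $\mathbf{A}$ when $\mathbf{x}$ is unit-modulus, the eigenvalue claim can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = np.exp(1j * 2 * np.pi * rng.random(n))     # unit-modulus vector

B = A * np.outer(x, np.conj(x))                # A o (x x^H) = Diag(x) A Diag(x)^H
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eig_A, eig_B)               # identical spectra
```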


With Lemma 7, we have

$$\lambda_{\max}\left( \mathbf{R} - \mathbf{T} \odot \mathbf{X}^{(t)} \right) \le \lambda_{\max}(\mathbf{R}) - \lambda_{\min}\left( \mathbf{T} \odot \mathbf{X}^{(t)} \right) = \lambda_{\max}(\mathbf{R}) - \lambda_{\min}(\mathbf{T}), \qquad (50)$$

where $\mathbf{T} = \sum_{k=1-N, k \neq 0}^{N-1} w_k (N - |k|) \mathbf{U}_k$, so that $\mathrm{vec}(\mathbf{T}) = \mathrm{diag}(\mathbf{D})$ and $\mathbf{T} \odot \mathbf{X}^{(t)} = \mathrm{vec}^{-1}\!\left( \mathrm{diag}(\mathbf{D}) \odot \mathrm{vec}(\mathbf{X}^{(t)}) \right)$. Noticing that the matrix $\mathbf{T}$ is symmetric Toeplitz, according to Lemma 3, we know that

$$\lambda_{\min}(\mathbf{T}) \ge \min_{1 \le i \le 2N} \gamma_i, \qquad (51)$$

where $\boldsymbol{\gamma} = \mathbf{F}\mathbf{c}_T$ and

$$\mathbf{c}_T = \left( 0, w_1 (N-1), \dots, w_{N-1} \cdot 1, 0, w_{N-1} \cdot 1, \dots, w_1 (N-1) \right)^T. \qquad (52)$$

By defining

$$\lambda_T = \min_{1 \le i \le 2N} \gamma_i, \qquad (53)$$

we now have

$$\lambda_{\max}\left( \mathbf{R} - \mathbf{T} \odot \mathbf{X}^{(t)} \right) \le \lambda_R - \lambda_T, \qquad (54)$$

where $\lambda_R$ is defined in (26); by choosing $\mathbf{M} = (\lambda_R - \lambda_T)\mathbf{I}$ in Lemma 1, we know that the objective of (47) is majorized at $\mathbf{x}^{(t)}$ accordingly.

By ignoring the constant terms, the majorized problem of (47) is given by

$$\begin{array}{ll} \underset{\mathbf{x}}{\text{minimize}} & \mathrm{Re}\left\{ \mathbf{x}^H \left( \mathbf{R} - \mathbf{T} \odot \mathbf{X}^{(t)} - (\lambda_R - \lambda_T)\mathbf{I} \right) \mathbf{x}^{(t)} \right\} \\ \text{subject to} & |x_n| = 1, \quad n = 1, \dots, N. \end{array} \qquad (55)$$

Similar to the problem (29) in the previous section, (55) also admits a closed-form solution given by

$$x_n^{(t+1)} = e^{j \arg(y_n)}, \quad n = 1, \dots, N, \qquad (56)$$

where

$$\mathbf{y} = (\lambda_R - \lambda_T)\, \mathbf{x}^{(t)} + \left( \mathbf{T} \odot \mathbf{X}^{(t)} \right) \mathbf{x}^{(t)} - \mathbf{R}\mathbf{x}^{(t)}. \qquad (57)$$

It is worth noting that the $\mathbf{y}$ in (57) can be computed efficiently via FFT operations. To see this, we first note that $\mathbf{R}\mathbf{x}^{(t)}$ can be computed by means of FFT as described in the previous section. According to Lemma 6 (writing $\mathbf{T} \odot \mathbf{X}^{(t)} = \mathrm{Diag}(\mathbf{x}^{(t)})\,\mathbf{T}\,\mathrm{Diag}(\mathbf{x}^{(t)})^H$), we have

$$\left( \mathbf{T} \odot \mathbf{X}^{(t)} \right) \mathbf{x}^{(t)} = \mathrm{Diag}(\mathbf{x}^{(t)})\,\mathbf{T}\,\mathrm{Diag}(\mathbf{x}^{(t)})^H \mathbf{x}^{(t)} = \mathbf{x}^{(t)} \odot (\mathbf{T}\mathbf{1}), \qquad (58)$$

since $\mathrm{Diag}(\mathbf{x}^{(t)})^H \mathbf{x}^{(t)} = \mathbf{1}$ for unit-modulus $\mathbf{x}^{(t)}$. Since $\mathbf{T}$ is symmetric Toeplitz, by Lemma 4 we can decompose $\mathbf{T}$ as

$$\mathbf{T} = \frac{1}{2N} \, \mathbf{F}_{:,1:N}^H \, \mathrm{Diag}(\boldsymbol{\gamma}) \, \mathbf{F}_{:,1:N}, \qquad (59)$$

and thus $\mathbf{T}\mathbf{1}$ can also be computed efficiently via FFT operations. We further note that we only need to compute $\mathbf{T}\mathbf{1}$ once. The overall algorithm is then summarized in Algorithm 2, for which the main computation of each iteration is still just four FFT (IFFT) operations and is thus of order $O(N \log N)$.

Algorithm 2: MWISL-Diag—Monotonic minimizer for Weighted ISL

Require: sequence length $N$, weights $\{w_k \ge 0\}_{k=1}^{N-1}$
1: Set $t = 0$, initialize $\mathbf{x}^{(0)}$.
2: $\mathbf{c}_T = (0, w_1(N-1), \dots, w_{N-1} \cdot 1, 0, w_{N-1} \cdot 1, \dots, w_1(N-1))^T$
3: $\boldsymbol{\gamma} = \mathbf{F}\mathbf{c}_T$
4: $\lambda_T = \min_{1 \le i \le 2N} \gamma_i$
5: $\mathbf{T}\mathbf{1} = \frac{1}{2N} \mathbf{F}_{:,1:N}^H \left( \boldsymbol{\gamma} \odot \mathbf{F}_{:,1:N} \mathbf{1} \right)$
6: repeat
7: $\quad$ compute the autocorrelations $r_1, \dots, r_{N-1}$ of $\mathbf{x}^{(t)}$ via (34)
8: $\quad$ $\mathbf{c} = (0, w_1 r_1, \dots, w_{N-1} r_{N-1}, 0, w_{N-1} r_{N-1}^*, \dots, w_1 r_1^*)^T$
9: $\quad$ $\boldsymbol{\mu} = \mathbf{F}\mathbf{c}$
10: $\quad$ $\lambda_R = \max_{1 \le i \le 2N} \mu_i$
11: $\quad$ $\mathbf{R}\mathbf{x}^{(t)} = \frac{1}{2N} \mathbf{F}_{:,1:N}^H \left( \boldsymbol{\mu} \odot \mathbf{F}_{:,1:N} \mathbf{x}^{(t)} \right)$
12: $\quad$ $\mathbf{y} = (\lambda_R - \lambda_T)\,\mathbf{x}^{(t)} + \mathbf{x}^{(t)} \odot (\mathbf{T}\mathbf{1}) - \mathbf{R}\mathbf{x}^{(t)}$
13: $\quad$ $x_n^{(t+1)} = e^{j\arg(y_n)}$, $n = 1, \dots, N$
14: $\quad$ $t \leftarrow t + 1$
15: until convergence

V. CONVERGENCE ANALYSIS AND ACCELERATION SCHEME

A. Convergence Analysis

The MWISL and MWISL-Diag algorithms given in Algorithms 1 and 2 are based on the general majorization-minimization framework; thus, according to Section III-A, we know that the sequence of objective values (i.e., weighted ISL) evaluated at $\{\mathbf{x}^{(t)}\}$ generated by the algorithms is nonincreasing. And it is easy to see that the weighted ISL metric in (4) is bounded below by 0; thus the sequence of objective values is guaranteed to converge to a finite value.

Now we further analyze the convergence property of the sequence $\{\mathbf{x}^{(t)}\}$ itself. In the following, we will focus on the sequence generated by the MWISL algorithm (i.e., Algorithm 1) and prove its convergence to a stationary point. The same result can be proved similarly for the MWISL-Diag algorithm (i.e., Algorithm 2).

To make clear what a stationary point is in our case, we first introduce a first-order optimality condition for minimizing a smooth function over an arbitrary constraint set, which follows from [32].

Proposition 8: Let $f: \mathbb{R}^n \to \mathbb{R}$ be a smooth function, and let $\mathbf{u}^\star$ be a local minimum of $f$ over a subset $\mathcal{U}$ of $\mathbb{R}^n$. Then

$$\nabla f(\mathbf{u}^\star)^T \mathbf{d} \ge 0, \quad \forall \mathbf{d} \in T_{\mathcal{U}}(\mathbf{u}^\star), \qquad (60)$$


where $T_{\mathcal{U}}(\mathbf{u}^\star)$ denotes the tangent cone of $\mathcal{U}$ at $\mathbf{u}^\star$. A point is said to be a stationary point of the problem if it satisfies the first-order optimality condition (60).

To facilitate the analysis, we further note that, upon defining

$$\mathbf{u} = \left[ \mathrm{Re}(\mathbf{x})^T, \; \mathrm{Im}(\mathbf{x})^T \right]^T \in \mathbb{R}^{2N}, \qquad (61)$$

$$\tilde{r}_k(\mathbf{u}) = \mathrm{Re}(r_k), \quad k = 1, \dots, N-1, \qquad (62)$$

$$\hat{r}_k(\mathbf{u}) = \mathrm{Im}(r_k), \quad k = 1, \dots, N-1, \qquad (63)$$

and based on the expression of $r_k$ in (10), it is straightforward to show that the complex WISL minimization problem (5) is equivalent to the following real one:

$$\begin{array}{ll} \underset{\mathbf{u}}{\text{minimize}} & \displaystyle\sum_{k=1}^{N-1} w_k \left( \tilde{r}_k(\mathbf{u})^2 + \hat{r}_k(\mathbf{u})^2 \right) \\ \text{subject to} & u_n^2 + u_{N+n}^2 = 1, \quad n = 1, \dots, N. \end{array} \qquad (64)$$

We are now ready to state the convergence properties of MWISL.

Theorem 9: Let $\{\mathbf{x}^{(t)}\}$ be the sequence generated by the MWISL algorithm in Algorithm 1. Then every limit point of the sequence is a stationary point of the problem (5).

Proof: Denote the objective functions of the problem (5) and its real equivalent (64) by $f(\mathbf{x})$ and $\tilde{f}(\mathbf{u})$, respectively. Denote the constraint sets of the problems (5) and (64) by $\mathcal{X}$ and $\mathcal{U}$, respectively, i.e., $\mathcal{X} = \{\mathbf{x} \in \mathbb{C}^N : |x_n| = 1, \; n = 1, \dots, N\}$ and $\mathcal{U} = \{\mathbf{u} \in \mathbb{R}^{2N} : u_n^2 + u_{N+n}^2 = 1, \; n = 1, \dots, N\}$. From the derivation of MWISL in Section III-B, we know that, at iteration $t$, $f(\mathbf{x})$ is majorized by the function $u(\mathbf{x}, \mathbf{x}^{(t)})$ in (33) at $\mathbf{x}^{(t)}$ over $\mathcal{X}$. Then, according to the general MM scheme described in Section III-A, we have

$$f\left(\mathbf{x}^{(t+1)}\right) \le u\left(\mathbf{x}^{(t+1)}, \mathbf{x}^{(t)}\right) \le u\left(\mathbf{x}^{(t)}, \mathbf{x}^{(t)}\right) = f\left(\mathbf{x}^{(t)}\right),$$

which means $\{f(\mathbf{x}^{(t)})\}$ is a nonincreasing sequence.

Since the sequence $\{\mathbf{x}^{(t)}\}$ is bounded, we know that it has at least one limit point. Consider a limit point $\mathbf{x}^{(\infty)}$ and a subsequence $\{\mathbf{x}^{(t_j)}\}$ that converges to $\mathbf{x}^{(\infty)}$; for any $\mathbf{x} \in \mathcal{X}$ we have

$$u\left(\mathbf{x}^{(t_{j+1})}, \mathbf{x}^{(t_{j+1})}\right) = f\left(\mathbf{x}^{(t_{j+1})}\right) \le f\left(\mathbf{x}^{(t_j + 1)}\right) \le u\left(\mathbf{x}^{(t_j + 1)}, \mathbf{x}^{(t_j)}\right) \le u\left(\mathbf{x}, \mathbf{x}^{(t_j)}\right).$$

Letting $j \to +\infty$, we obtain

$$u\left(\mathbf{x}^{(\infty)}, \mathbf{x}^{(\infty)}\right) \le u\left(\mathbf{x}, \mathbf{x}^{(\infty)}\right), \quad \forall \mathbf{x} \in \mathcal{X}, \qquad (65)$$

i.e., the limit point is a global minimizer of the majorization function over the constraint set. With the definitions given in (61), (62), and (63), and by ignoring the constant terms, it is easy to show that this minimization is equivalent to the following real problem:

(66)

where

(67)

Since the limit point minimizes the majorization function over the constraint set, it is a global minimizer of (66), and since the remaining term is just a constant over the constraint set, the limit point is also a global minimizer of

(68)

Then as a necessary condition, we have

(69)

where the symbol denotes the objective function of (68). It is easy to check that

Thus we have

(70)

implying that the limit point is a stationary point of problem (64). By the equivalence of problems (64) and (5), the proof is complete.
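To make the descent mechanism used in this proof concrete, the following toy example (our own illustration, not from the paper) applies the MM principle to a simple scalar problem and checks that the objective values are nonincreasing and that the iterates approach the minimizer:

```python
def f(x):
    """Toy objective: f(x) = |x| + (x - 1)^2, minimized at x* = 1/2."""
    return abs(x) + (x - 1.0) ** 2

def mm_step(x0):
    """One MM step: minimize the surrogate built from the standard quadratic
    majorizer |x| <= x^2/(2|x0|) + |x0|/2 (valid for x0 != 0); the surrogate
    x^2/(2|x0|) + |x0|/2 + (x - 1)^2 has the closed-form minimizer below."""
    return 2.0 * abs(x0) / (1.0 + 2.0 * abs(x0))

x = 5.0
values = [f(x)]
for _ in range(50):
    x = mm_step(x)
    values.append(f(x))

# Objective values are nonincreasing and bounded below, hence convergent,
# and here the iterates also approach the minimizer x* = 1/2.
assert all(values[i + 1] <= values[i] + 1e-12 for i in range(50))
assert abs(x - 0.5) < 1e-6
```

The assertions mirror the two steps of the argument above: monotone descent of the objective, then convergence of the iterates to a stationary point.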

B. Acceleration Scheme

In MM algorithms, the convergence speed is usually dictated by the nature of the majorization functions. Due to the successive majorization steps carried out in the previous sections to construct the majorization functions, the convergence of MWISL and MWISL-Diag can be slow. In this subsection, we briefly introduce an acceleration scheme that can be applied to speed up the proposed MM algorithms: the squared iterative method (SQUAREM), originally proposed in [33] to accelerate any expectation-maximization (EM) algorithm. SQUAREM adapts the idea of the Cauchy-Barzilai-Borwein (CBB) method [34], which combines the classical steepest descent method with the two-point step size gradient method [35], to solve the nonlinear fixed-point problem of EM. It requires only the EM updating scheme and can be readily used as an off-the-shelf accelerator. Since MM is a generalization of EM and its update rule is also just a fixed-point iteration, SQUAREM can be applied to MM algorithms after some minor modifications.

Suppose we have derived an MM algorithm to minimize an objective over a constraint set, and let the map denote the nonlinear fixed-point iteration of the MM algorithm:

(71)

For example, the iteration map of the MWISL algorithm is given by (32). The steps of the accelerated MM algorithm based on SQUAREM are given in Algorithm 3. One problem of the general SQUAREM scheme is that it may violate the nonlinear constraints, so in Algorithm 3 we need to project wayward points back to the feasible region. For the unit-modulus constraints in the problem under consideration, the projection can be done


SONG et al.: SEQUENCE DESIGN TO MINIMIZE THE WEIGHTED INTEGRATED AND PEAK SIDELOBE LEVELS 2059

by simply applying the element-wise phase projection to the solution vectors. A second problem of SQUAREM is that it can violate the descent property of the original MM algorithm. To ensure the descent property, a backtracking strategy is adopted in Algorithm 3, which repeatedly halves the distance between the step-length and -1 until the descent property is maintained. To see why this works, first note that the accelerated update reduces to two plain MM steps when the step-length equals -1. In addition, since two MM steps cannot increase the objective, the descent property is guaranteed to hold as the step-length approaches -1. It is worth mentioning that, in practice, usually only a few backtracking steps are needed to maintain the monotonicity of the algorithm.

Algorithm 3: The acceleration scheme for MM algorithms

Require: parameters
1: Set , initialize .
2: repeat
3:
4:
5:
6:
7: Compute the step-length
8:
9: while do
10:
11:
12: end while
13:
14:
15: until convergence
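A minimal sketch of the SQUAREM scheme of Algorithm 3, under our own naming (`squarem`, `fp_map`, and `proj` are not from the paper): two MM steps define the extrapolation, the step-length is backtracked toward -1 until descent holds, and `proj` maps wayward points back to the feasible set (for unit-modulus sequences it would be the element-wise phase projection x/|x|; the toy problem below is unconstrained):

```python
import numpy as np

def squarem(x, fp_map, obj, proj=lambda z: z, n_iter=20):
    """SQUAREM-accelerated MM iteration (sketch following Algorithm 3)."""
    for _ in range(n_iter):
        x1 = fp_map(x)
        x2 = fp_map(x1)
        r = x1 - x                       # first difference
        v = (x2 - x1) - r                # second difference
        if np.linalg.norm(np.atleast_1d(v)) == 0.0:
            return x2                    # already at a fixed point
        alpha = (-np.linalg.norm(np.atleast_1d(r))
                 / np.linalg.norm(np.atleast_1d(v)))
        x_new = proj(x - 2 * alpha * r + alpha ** 2 * v)
        # Backtracking: halve the distance of alpha to -1 until descent holds.
        # At alpha = -1 the update reduces to the plain double MM step x2,
        # whose objective cannot exceed obj(x), so this terminates.
        n_back = 0
        while obj(x_new) > obj(x) and n_back < 60:
            alpha = (alpha - 1.0) / 2.0
            x_new = proj(x - 2 * alpha * r + alpha ** 2 * v)
            n_back += 1
        x = x_new
    return x

# Toy fixed-point map: the MM step for f(x) = |x| + (x - 1)^2 obtained from
# the quadratic majorizer |x| <= x^2/(2|x0|) + |x0|/2.
f = lambda x: abs(x) + (x - 1.0) ** 2
fp = lambda x: 2.0 * abs(x) / (1.0 + 2.0 * abs(x))

x_acc = squarem(5.0, fp, f)
assert abs(x_acc - 0.5) < 1e-8       # converges to the minimizer x* = 1/2
```

Only the fixed-point map, the objective, and the projection are needed, which is what makes SQUAREM usable as an off-the-shelf accelerator for any MM algorithm.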

VI. MINIMIZING THE ℓp-NORM OF AUTOCORRELATION SIDELOBES

In previous sections, we have developed algorithms to minimize the weighted ISL metric of a unit-modulus sequence. It is clear that the (unweighted) ISL metric is just the squared ℓ2-norm of the autocorrelation sidelobes, and in this section we would like to consider the more general ℓp-norm metric of the autocorrelation sidelobes, defined as

(72)

with p ≥ 2. The motivation is that by choosing different values of p we may obtain different metrics of particular interest. For instance, as p tends to infinity, the ℓp-norm metric tends to the ℓ∞-norm of the autocorrelation sidelobes, which is known as the peak sidelobe level (PSL). So it is well motivated to consider the more general ℓp-norm metric minimization problem

(73)

which is equivalent to

(74)
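As a concrete illustration of the metric just defined (our own helper functions, not from the paper's code), the aperiodic autocorrelations can be computed with a zero-padded FFT, and the ℓp metric and its PSL limit evaluated directly:

```python
import numpy as np

def autocorr(x):
    """All aperiodic autocorrelation lags r_k, k = 0, ..., N-1, via FFT."""
    N = len(x)
    F = np.fft.fft(x, 2 * N)             # zero-padding avoids wrap-around
    return np.fft.ifft(np.abs(F) ** 2)[:N]

def lp_sidelobe_metric(x, p):
    """The l_p metric of the autocorrelation sidelobes, as in (72)."""
    r = autocorr(x)
    return np.sum(np.abs(r[1:]) ** p) ** (1.0 / p)

def psl(x):
    """Peak sidelobe level: the l_inf norm of the sidelobes."""
    return np.max(np.abs(autocorr(x)[1:]))

rng = np.random.default_rng(0)
x = np.exp(1j * 2 * np.pi * rng.random(64))   # random unit-modulus sequence

# p = 2 recovers the (unweighted) ISL, and larger p approaches the PSL.
isl = np.sum(np.abs(autocorr(x)[1:]) ** 2)
assert abs(lp_sidelobe_metric(x, 2) ** 2 - isl) < 1e-6 * isl
assert lp_sidelobe_metric(x, 50) >= psl(x) - 1e-9
```

Note that for large p the raw sums can overflow in floating point, which is one reason the algorithm derived below works with normalized quantities.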

It is easy to see that by choosing p = 2, problem (74) becomes the ISL minimization problem. It is also worth noting that weights can easily be incorporated into the objective (i.e., a weighted ℓp-norm) as in (4).

To tackle problem (74) via majorization-minimization, we need to construct a majorization function of the objective, and the idea is to majorize each sidelobe power term by a quadratic function. It is clear that when p > 2 it is impossible to construct a global quadratic majorization function of such a term. However, we can still majorize it locally by a quadratic function, based on the following lemma.

Lemma 10: Consider the power function with exponent p ≥ 2 on a bounded interval [0, t]. Then, for any given point of this interval, the function is majorized there over [0, t] by the following quadratic function

(75)

where

(76)

Proof: See Appendix A.

Given the autocorrelations of the current iterate, according to Lemma 10 we know that each sidelobe power term is majorized at the current value over the corresponding interval by

(77)

where

(78)

(79)

Since the objective decreases at every iteration of the MM framework, at the current iteration it is sufficient to majorize the objective over the set on which the objective value is smaller than the current one, which implies a bound on the magnitudes of the autocorrelation sidelobes. Hence we can choose the interval endpoints accordingly in (78). The majorized problem of (74) in this case is then given by (ignoring the constant terms)

(80)

We can see that the first term in the objective is just the weighted ISL metric with appropriate weights, and thus can be further majorized as in Section III-B. Following the steps



in Section III-B until (20) (i.e., just the first majorization), we know that it is majorized by (with constant terms ignored)

(81)

where the corresponding quantities are defined in (21) and (14). For the second term, it can be shown that we have

(82)

(83)

(84)

(85)

where the new variables are defined accordingly. By adding the two majorization functions, i.e., (81) and (85), and defining

(86)

we have the majorized problem of (80) given by

(87)

where

(88)

We can see that problem (87) has the same form as (20); then, by following similar steps as in Section III-B, we can perform one more majorization step and get the majorized problem

(89)

where

(90)

and this time the first quantity should be computed based on the weights in (78) and the second on the weights in (86). As in previous cases, we only need to solve (89) in closed form at every iteration, and the overall algorithm is summarized in Algorithm 4. Note that, to avoid numerical issues, we have used normalized weights in Algorithm 4, which is equivalent to dividing the objective in (80) by a constant during the derivation. It is also worth noting that the algorithm can be accelerated by the scheme described in Section V-B.

1 http://www.sal.ufl.edu/book/

Algorithm 4: Monotonic minimizer for the ℓp-metric of autocorrelation sidelobes

Require: sequence length , parameter
1: Set , initialize .
2: repeat
3:
4:
5:
6:
7:
8:
9:
10:
11:
12:
13:
14:
15: until convergence

VII. NUMERICAL EXPERIMENTS

To compare the performance of the proposed MWISL algorithm and its variants with existing algorithms, and to show the potential of the proposed algorithms in designing sequences for various scenarios, we present some experimental results in this section. All experiments were performed on a PC with a 3.20 GHz i5-3470 CPU and 8 GB of RAM.

A. Weighted ISL Minimization

In this subsection, we give an example of applying the proposed MWISL and MWISL-Diag algorithms (with and without acceleration) to design sequences with low correlation sidelobes only at required lags, and we compare the performance with the WeCAN algorithm [18]. The Matlab code of the benchmark algorithm, i.e., WeCAN, was downloaded from the website1 of the book [5].

Suppose we want to design a sequence of a given length with small correlations only at certain required lags. To achieve this, we set the weights as follows:

(unit weight at the required lags and 0 otherwise)   (91)

such that only the autocorrelations at the required lags will be minimized. Note that in this example the degrees of freedom, i.e., the free phases of the sequence (the initial phase does not matter), suffice to match the 80 real numbers given by the real and imaginary parts of the required correlations. As there are enough degrees of freedom, the weighted ISL can be driven to 0 in principle. So in this experiment we allow enough iterations for the algorithms to run and do not stop until the weighted ISL goes below the target threshold. The algorithms are initialized with the same randomly



Fig. 1. Evolution of the weighted ISL with respect to the number of iterations.

generated sequence. The evolution curves of the weighted ISL with respect to the number of iterations and to time are shown in Figs. 1 and 2, respectively. From Fig. 1, we can see that all the algorithms can drive the weighted ISL to the target level when enough iterations are allowed, but the proposed algorithms (especially the accelerated ones) require far fewer iterations than the WeCAN algorithm. In addition, we can see that the MWISL-Diag algorithms (with and without acceleration) converge somewhat faster than the corresponding MWISL algorithms, which suggests that the majorization function proposed in Section IV is better than the one in Section III-B. One may also notice that the slope of convergence varies a lot for the two accelerated methods. This is due to the backtracking strategy adopted in the acceleration scheme to ensure monotonicity, which means the algorithms may occasionally fall back to slow pure MM steps. From Fig. 2, we can see that in terms of computational time the superiority of the proposed algorithms is even more significant: the accelerated MWISL and MWISL-Diag algorithms take only 0.07 and 0.06 seconds, respectively, while the WeCAN algorithm takes more than 1000 seconds. This is because the proposed MWISL (and MWISL-Diag) algorithms require only four FFT operations per iteration, while each iteration of WeCAN requires the computation of a large number of FFTs. The correlation level of the output sequence of the accelerated MWISL-Diag algorithm is shown in Fig. 3, where the correlation level is defined as the autocorrelation magnitude at each lag normalized by the zero-lag value (in dB).

We can see in Fig. 3 that the autocorrelation sidelobes are suppressed to almost zero at the required lags.
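The quantities used in this experiment can be sketched in a few lines (a sketch with a hypothetical lag set and our own function names; the correlation level is taken here as 20 log10 |r_k / r_0|, a common normalization that should be checked against the paper's exact definition):

```python
import numpy as np

def autocorr(x):
    N = len(x)
    return np.fft.ifft(np.abs(np.fft.fft(x, 2 * N)) ** 2)[:N]

def weighted_isl(x, w):
    """Weighted ISL: sum_k w_k |r_k|^2 over lags k = 1, ..., N-1."""
    r = autocorr(x)
    return float(np.sum(w * np.abs(r[1:]) ** 2))

def correlation_level_db(x):
    """Sidelobe magnitude normalized by the zero-lag peak, in dB."""
    r = autocorr(x)
    return 20.0 * np.log10(np.abs(r[1:]) / np.abs(r[0]))

N = 100
required_lags = np.arange(1, 21)        # hypothetical set of required lags
w = np.zeros(N - 1)
w[required_lags - 1] = 1.0              # unit weight only at required lags

rng = np.random.default_rng(1)
x = np.exp(1j * 2 * np.pi * rng.random(N))
assert weighted_isl(x, w) > 0.0         # random start: lags not yet suppressed
assert np.all(correlation_level_db(x) < 0.0)   # sidelobes below r(0) = N
```

A WISL minimizer would drive `weighted_isl(x, w)` toward zero while leaving the unweighted lags unconstrained.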

B. PSL Minimization

In this subsection, we test the performance of the proposed Algorithm 4 of Section VI in minimizing the peak sidelobe level (PSL), which is of particular interest. To apply the algorithm, we need to choose the parameter p. To examine the effect of this parameter, we first apply the accelerated version of Algorithm 4 (denoted MM-PSL) with

Fig. 2. Evolution of the weighted ISL with respect to time (in seconds). The plot within the initial time interval is zoomed in and shown in the upper right corner.

Fig. 3. Correlation level of the sequence designed by the accelerated MWISL-Diag algorithm with the weights in (91).

four different p values (the largest being 10000) to design a sequence of a perfect-square length. Frank sequences [13], which are known to have good autocorrelation properties, are used to initialize the algorithm. More specifically, Frank sequences are defined for lengths that are perfect squares, and the Frank sequence is given by

(92)
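For reference, a Frank sequence generator can be sketched as follows (our own implementation of the standard construction x_{pm+q} = exp(j 2 pi p q / m), which may differ from the exact indexing in (92) by an inconsequential convention):

```python
import numpy as np

def frank(N):
    """Frank sequence of perfect-square length N = m^2."""
    m = int(round(np.sqrt(N)))
    assert m * m == N, "Frank sequences exist only for perfect-square lengths"
    p, q = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    return np.exp(2j * np.pi * p * q / m).ravel()

x = frank(64)
assert np.allclose(np.abs(x), 1.0)      # unit modulus
# Frank sequences have perfect *periodic* autocorrelation, one reason they
# make good initializations for aperiodic sidelobe minimization.
R = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2)
assert np.allclose(np.abs(R[1:]), 0.0, atol=1e-9)
```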

For all p values, we stop the algorithm after the same number of iterations, and the evolution curves of the PSL are shown in Fig. 4. From the figure, we can see that smaller p values lead to faster convergence. However, if p is too small, the PSL may stop decreasing at a later stage; indeed, a larger p finally gives a smaller PSL than a smaller one. This may be explained by the fact that the ℓp-norm with larger p approximates the ℓ∞-norm better. So in practice, gradually increasing the value of p is probably a better approach.

In the second experiment, we consider both an increasing scheme for p (denoted MM-PSL-adaptive) and the fixed



Fig. 4. The evolution curves of the peak sidelobe level (PSL).

scheme. For the increasing scheme, we apply the MM-PSL algorithm with a sequence of increasing p values. For each p value, the stopping criterion was chosen to be a sufficiently small relative change of the objective in (73), and the maximum allowed number of iterations was capped. For the smallest p, the algorithm is initialized by the Frank sequence, and for each larger p it is initialized by the solution obtained at the previous p. For the fixed scheme, the same kind of stopping criterion was used, with a larger cap on the number of iterations. In this case, in addition to the Frank sequence, the Golomb sequence [15], which is also known for its good autocorrelation properties, was used as the initial sequence. In contrast to Frank sequences, Golomb sequences are defined for any positive integer length, and a Golomb sequence is given by

(93)
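A generator for this initialization can be sketched as follows (our own code; we use the common form x_n = exp(j pi (n-1) n / N), n = 1, ..., N, which should be checked against the exact expression in (93) and [15]):

```python
import numpy as np

def golomb(N):
    """Golomb polyphase sequence; defined for any positive integer length N."""
    n = np.arange(1, N + 1)
    return np.exp(1j * np.pi * (n - 1) * n / N)

x = golomb(100)
assert np.allclose(np.abs(x), 1.0)      # unit modulus for any length
assert abs(x[0] - 1.0) < 1e-12          # first element has zero phase
```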

The two schemes are applied to design sequences of a range of lengths, and the PSLs of the resulting sequences are shown in Fig. 5. From the figure, we can see that for all lengths the MM-PSL(G) and MM-PSL(F) sequences give nearly the same PSL; both are much smaller than the PSLs of the Golomb and Frank sequences, while a bit larger than the PSL of the MM-PSL-adaptive sequences. For example, at one of the tested lengths, the PSL values of the MM-PSL(F) and MM-PSL-adaptive sequences are 4.36 and 3.48, while the PSL values of the Golomb and Frank sequences are 48.03 and 31.84, respectively. The correlation levels of the Golomb, Frank, and MM-PSL-adaptive sequences are shown in Fig. 6. We can notice that the autocorrelation sidelobes of the Golomb and Frank sequences are relatively large for lags close to 0 and to the sequence length, while the MM-PSL-adaptive sequence has much more uniform autocorrelation sidelobes across all lags.

VIII. CONCLUSION

We have developed two efficient algorithms for the minimization of the weighted integrated sidelobe level (WISL) metric

Fig. 5. Peak sidelobe level (PSL) versus sequence length. MM-PSL(G) and MM-PSL(F) denote the MM-PSL algorithm initialized by Golomb and Frank sequences, respectively.

Fig. 6. Correlation level of the Golomb, Frank, and MM-PSL-adaptive sequences.

of unit-modulus sequences. The proposed algorithms are derived by applying two successive majorization steps, and we have proved that they converge to a stationary point of the original WISL minimization problem. By performing one more majorization step in the derivations, we have extended the proposed algorithms to tackle the problem of minimizing the ℓp-norm of the autocorrelation sidelobes. All the algorithms can be implemented by means of FFT operations and thus are computationally very efficient in practice. An acceleration scheme that can further speed up the proposed algorithms has also been considered. Through numerical examples, we



have shown that the proposed WISL minimization algorithms can generate sequences with virtually zero autocorrelation sidelobes in specified lag intervals at much lower computational cost than the state-of-the-art. It has also been observed that the proposed ℓp-metric minimization algorithm can produce long sequences with much more uniform autocorrelation sidelobes and much smaller PSL than the Frank and Golomb sequences, which are known for their good autocorrelation properties.

APPENDIX
PROOF OF LEMMA 10

Proof: For any given point of the interval, let us consider a quadratic function of the following form

(94)

where the curvature coefficient is to be determined. It is easy to check that the quadratic touches the power function at the given point. So to make it a majorization function over the interval, we further need it to dominate the power function everywhere on the interval. Equivalently, we must have

(95)

for all points of the interval other than the touching point. Let us define the function

(96)

on this set. Its derivative is given by

Since the power function is convex on the interval when p ≥ 2, we have

which implies that the derivative is nonnegative. Thus, the function is increasing on the interval and its maximum is achieved at the right endpoint t. The smallest coefficient we may choose is therefore

(97)

By substituting this coefficient into (94) and appropriately rearranging terms, we obtain the function in (75).
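The construction can be sanity-checked numerically (our own sketch; it assumes the coefficient in (97) takes the form a = (t^p - x0^p - p x0^(p-1) (t - x0)) / (t - x0)^2, which is how the maximization at x = t resolves):

```python
import numpy as np

def majorizer(x, x0, p, t):
    """Quadratic local majorizer of f(x) = x^p on [0, t], touching at x0."""
    a = (t ** p - x0 ** p - p * x0 ** (p - 1) * (t - x0)) / (t - x0) ** 2
    return a * (x - x0) ** 2 + p * x0 ** (p - 1) * (x - x0) + x0 ** p

p, t, x0 = 3.0, 2.0, 0.7
xs = np.linspace(0.0, t, 1001)
g = majorizer(xs, x0, p, t)
assert np.all(g >= xs ** p - 1e-12)                     # dominates on [0, t]
assert abs(majorizer(x0, x0, p, t) - x0 ** p) < 1e-12   # touches f at x0
assert abs(majorizer(t, x0, p, t) - t ** p) < 1e-12     # and again at t
```

The third assertion reflects the fact that the smallest admissible coefficient makes the quadratic tight at the right endpoint, exactly as in the proof.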

REFERENCES
[1] R. Turyn, "Sequences with small correlation," Error Correcting Codes, pp. 195–228, 1968.
[2] P. Spasojevic and C. Georghiades, "Complementary sequences for ISI channel estimation," IEEE Trans. Inf. Theory, vol. 47, no. 3, pp. 1145–1152, Mar. 2001.
[3] N. Levanon and E. Mozeson, Radar Signals. New York, NY, USA: Wiley, 2004.
[4] S. W. Golomb and G. Gong, Signal Design for Good Correlation: For Wireless Communication, Cryptography, and Radar. Cambridge, U.K.: Cambridge Univ. Press, 2005.
[5] H. He, J. Li, and P. Stoica, Waveform Design for Active Sensing Systems: A Computational Approach. Cambridge, U.K.: Cambridge Univ. Press, 2012.
[6] R. Barker, "Group synchronizing of binary digital systems," Communication Theory, pp. 273–287, 1953.
[7] S. W. Golomb and R. A. Scholtz, "Generalized Barker sequences," IEEE Trans. Inf. Theory, vol. 11, no. 4, pp. 533–537, 1965.
[8] N. Zhang and S. W. Golomb, "Sixty-phase generalized Barker sequences," IEEE Trans. Inf. Theory, vol. 35, no. 4, pp. 911–912, 1989.
[9] M. Friese and H. Zottmann, "Polyphase Barker sequences up to length 31," Electron. Lett., vol. 30, no. 23, pp. 1930–1931, 1994.
[10] A. Brenner, "Polyphase Barker sequences up to length 45 with small alphabets," Electron. Lett., vol. 34, no. 16, pp. 1576–1577, 1998.
[11] P. Borwein and R. Ferguson, "Polyphase sequences with low autocorrelation," IEEE Trans. Inf. Theory, vol. 51, no. 4, pp. 1564–1567, 2005.
[12] C. Nunn and G. Coxson, "Polyphase pulse compression codes with optimal peak and integrated sidelobes," IEEE Trans. Aerosp. Electron. Syst., vol. 45, no. 2, pp. 775–781, Apr. 2009.
[13] R. L. Frank, "Polyphase codes with good nonperiodic correlation properties," IEEE Trans. Inf. Theory, vol. 9, no. 1, pp. 43–45, Jan. 1963.
[14] D. Chu, "Polyphase codes with good periodic correlation properties (corresp.)," IEEE Trans. Inf. Theory, vol. 18, no. 4, pp. 531–532, 1972.
[15] N. Zhang and S. W. Golomb, "Polyphase sequence with low autocorrelations," IEEE Trans. Inf. Theory, vol. 39, no. 3, pp. 1085–1089, 1993.
[16] R. Turyn, "The correlation function of a sequence of roots of 1," IEEE Trans. Inf. Theory, vol. 13, no. 3, pp. 524–525, 1967.
[17] W. H. Mow, S.-Y. Li, and J. Li, "Aperiodic autocorrelation and cross-correlation of polyphase sequences," IEEE Trans. Inf. Theory, vol. 43, no. 3, pp. 1000–1007, May 1997.
[18] P. Stoica, H. He, and J. Li, "New algorithms for designing unimodular sequences with good correlation properties," IEEE Trans. Signal Process., vol. 57, no. 4, pp. 1415–1425, 2009.
[19] M. Soltanalian and P. Stoica, "Computational design of sequences with good correlation properties," IEEE Trans. Signal Process., vol. 60, no. 5, pp. 2180–2193, May 2012.
[20] M. M. Naghsh, M. Modarres-Hashemi, S. ShahbazPanahi, M. Soltanalian, and P. Stoica, "Unified optimization framework for multi-static radar code design using information-theoretic criteria," IEEE Trans. Signal Process., vol. 61, no. 21, pp. 5401–5416, 2013.
[21] M. Soltanalian and P. Stoica, "Designing unimodular codes via quadratic optimization," IEEE Trans. Signal Process., vol. 62, no. 5, pp. 1221–1234, Mar. 2014.
[22] J. Song, P. Babu, and D. P. Palomar, "Optimization methods for designing sequences with low autocorrelation sidelobes," IEEE Trans. Signal Process., vol. 63, no. 15, pp. 3998–4009, Aug. 2015.
[23] H. Torii, M. Nakamura, and N. Suehiro, "A new class of zero-correlation zone sequences," IEEE Trans. Inf. Theory, vol. 50, no. 3, pp. 559–565, 2004.
[24] X. Tang and W. H. Mow, "A new systematic construction of zero correlation zone sequences based on interleaved perfect sequences," IEEE Trans. Inf. Theory, vol. 54, no. 12, pp. 5729–5734, 2008.
[25] D. R. Hunter and K. Lange, "A tutorial on MM algorithms," Amer. Statistician, vol. 58, no. 1, pp. 30–37, 2004.
[26] P. Stoica and Y. Selen, "Cyclic minimizers, majorization techniques, and the expectation-maximization algorithm: A refresher," IEEE Signal Process. Mag., vol. 21, no. 1, pp. 112–114, Jan. 2004.
[27] M. Razaviyayn, M. Hong, and Z.-Q. Luo, "A unified convergence analysis of block successive minimization methods for nonsmooth optimization," SIAM J. Optim., vol. 23, no. 2, pp. 1126–1153, 2013.
[28] G. Scutari, F. Facchinei, P. Song, D. P. Palomar, and J.-S. Pang, "Decomposition by partial linearization: Parallel optimization of multi-agent systems," IEEE Trans. Signal Process., vol. 62, no. 3, pp. 641–656, Feb. 2014.
[29] P. J. S. G. Ferreira, "Localization of the eigenvalues of Toeplitz matrices using additive decomposition, embedding in circulants, and the Fourier transform," in Proc. 10th IFAC Symp. Syst. Identif., Copenhagen, Denmark, Jul. 1994, pp. 271–276.
[30] R. M. Gray, Toeplitz and Circulant Matrices: A Review. Norwell, MA, USA: Now Publishers, 2006, vol. 2.
[31] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1994.
[32] D. P. Bertsekas, A. Nedic, and A. E. Ozdaglar, Convex Analysis and Optimization. Belmont, MA, USA: Athena Scientific, 2003.
[33] R. Varadhan and C. Roland, "Simple and globally convergent methods for accelerating the convergence of any EM algorithm," Scand. J. Statist., vol. 35, no. 2, pp. 335–353, 2008.
[34] M. Raydan and B. F. Svaiter, "Relaxed steepest descent and Cauchy-Barzilai-Borwein method," Comput. Optim. Appl., vol. 21, no. 2, pp. 155–167, 2002.
[35] J. Barzilai and J. M. Borwein, "Two-point step size gradient methods," IMA J. Numer. Anal., vol. 8, no. 1, pp. 141–148, 1988.

Junxiao Song received the B.Sc. degree in control science and engineering from Zhejiang University (ZJU), Hangzhou, China, in 2011, and the Ph.D. degree in electronic and computer engineering from the Hong Kong University of Science and Technology (HKUST), Hong Kong, in 2015.

His research interests are in convex optimization and efficient algorithms, with applications in big data, signal processing, and financial engineering.

Prabhu Babu received the Ph.D. degree in electrical engineering from Uppsala University, Sweden, in 2012.

He is currently a Research Associate with the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST), Hong Kong.

Daniel P. Palomar (S'99–M'03–SM'08–F'12) received the electrical engineering and Ph.D. degrees from the Technical University of Catalonia (UPC), Barcelona, Spain, in 1998 and 2003, respectively.

He is a Professor in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST), Hong Kong, which he joined in 2006. Since 2013, he has been a Fellow of the Institute for Advanced Study (IAS) at HKUST. He previously held several research appointments, namely, at King's College London (KCL), London, U.K.; Stanford University, Stanford, CA; the Telecommunications Technological Center of Catalonia (CTTC), Barcelona; the Royal Institute of Technology (KTH), Stockholm, Sweden; the University of Rome "La Sapienza", Rome, Italy; and Princeton University, Princeton, NJ. His current research interests include applications of convex optimization theory, game theory, and variational inequality theory to financial systems, big data systems, and communication systems.

Dr. Palomar is an IEEE Fellow, a recipient of a 2004/06 Fulbright Research Fellowship, the 2004 Young Author Best Paper Award of the IEEE Signal Processing Society, the 2002/03 best Ph.D. prize in information technologies and communications of the Technical University of Catalonia (UPC), the 2002/03 Rosina Ribalta first prize for the best doctoral thesis in information technologies and communications of the Epson Foundation, and the 2004 prize for the best doctoral thesis in advanced mobile communications of the Vodafone Foundation.

He is a Guest Editor of the IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING 2016 Special Issue on "Financial Signal Processing and Machine Learning for Electronic Trading" and has been an Associate Editor of the IEEE TRANSACTIONS ON INFORMATION THEORY and of the IEEE TRANSACTIONS ON SIGNAL PROCESSING, a Guest Editor of the IEEE SIGNAL PROCESSING MAGAZINE 2010 Special Issue on "Convex Optimization for Signal Processing," the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS 2008 Special Issue on "Game Theory in Communication Systems," and the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS 2007 Special Issue on "Optimization of MIMO Transceivers for Realistic Communication Networks."