
Sparsity Methods for Systems and Control
Greedy Algorithms

Masaaki Nagahara¹

¹The University of Kitakyushu
[email protected]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 1 / 41

Table of Contents

1 ℓ0 Optimization

2 Orthogonal Matching Pursuit

3 Thresholding algorithm

4 Numerical Example

5 Conclusion

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 2 / 41

ℓ0 Optimization

ℓ0 optimization

minimize_{x ∈ R^n} ∥x∥0 subject to y = Φx.

Here we directly solve the ℓ0 optimization problem without any convex relaxations.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 4 / 41

Mutual coherence

Mutual coherence
For a matrix Φ = [ϕ1, ϕ2, . . . , ϕn] ∈ R^{m×n} with ϕi ∈ R^m, i = 1, 2, . . . , n, we define the mutual coherence µ(Φ) by

µ(Φ) ≜ max_{i,j=1,...,n; i≠j} |⟨ϕi, ϕj⟩| / (∥ϕi∥2 ∥ϕj∥2).

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 5 / 41
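The mutual coherence is easy to compute from the normalized Gram matrix of Φ. Below is a minimal NumPy sketch (my own illustrative code, not part of the slides; the 10 × 20 random test matrix is an assumption):

import numpy as np

def mutual_coherence(Phi):
    """Mutual coherence of the columns of Phi (columns assumed nonzero)."""
    G = Phi / np.linalg.norm(Phi, axis=0)   # normalize each column
    C = np.abs(G.T @ G)                     # |cos(theta_ij)| for all pairs
    np.fill_diagonal(C, 0.0)                # exclude i = j
    return C.max()

rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 20))
print(mutual_coherence(Phi))                # some value in (0, 1]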

Mutual coherence

Mutual coherence

µ(Φ) ≜ max_{i,j=1,...,n; i≠j} |⟨ϕi, ϕj⟩| / (∥ϕi∥2 ∥ϕj∥2).

The cosine of the angle θij between ϕi and ϕj:

cos θij = ⟨ϕi, ϕj⟩ / (∥ϕi∥2 ∥ϕj∥2).

θij ≈ 0° (|cos θij| ≈ 1) ⇒ ϕi and ϕj are coherent
θij ≈ 90° (|cos θij| ≈ 0) ⇒ ϕi and ϕj are incoherent

[Figure: two pairs of vectors ϕi, ϕj, one with a small angle θij (coherent) and one with an angle near 90° (incoherent)]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 6 / 41

Mutual coherence

For any Φ, we have 0 ≤ µ(Φ) ≤ 1.
Some of ϕ1, . . . , ϕn are similar ⇒ µ(Φ) is large (µ(Φ) ≈ 1)
ϕ1, . . . , ϕn are uniformly spread ⇒ µ(Φ) is small

[Figure: two sets of vectors ϕ1, ϕ2, ϕ3, one clustered (large µ) and one uniformly spread (small µ)]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 7 / 41

Characterization of ℓ0 solution

ℓ0 optimization

minimize_{x ∈ R^n} ∥x∥0 subject to y = Φx.

Theorem
If there exists a vector x ∈ R^n that satisfies the linear equation Φx = y, and

∥x∥0 < (1/2)(1 + 1/µ(Φ)),

then x is the sparsest solution of the linear equation (i.e. the solution of the ℓ0 optimization).

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 8 / 41

Characterization of ℓ0 solution

Suppose µ(Φ) < 1. Then,

(1/2)(1 + 1/µ(Φ)) > 1.

From the theorem, if there exists a 1-sparse vector x (i.e. ∥x∥0 = 1) satisfying Φx = y, this is ℓ0 optimal.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 9 / 41

Finding 1-sparse vector

Assume x is 1-sparse (∥x∥0 = 1). The equation y = Φx becomes

y = Φx = x1ϕ1 + x2ϕ2 + · · · + xnϕn.

Define the error

e(i) ≜ min_{x∈R} ∥xϕi − y∥2².

A 1-sparse vector satisfying y = Φx is found by searching for the index i that minimizes e(i).
Actually, if a 1-sparse solution exists, e(i) = 0 for some i.
It needs O(n) computations.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 10 / 41
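As a concrete illustration of this O(n) search, here is a short NumPy sketch (my own code, not from the slides; the small test matrix is made up). For each column the optimal scalar is ⟨ϕi, y⟩/∥ϕi∥2², and the reported error e(i) is zero exactly when a 1-sparse solution uses column i:

import numpy as np

def best_1_sparse(Phi, y):
    """Return (index, coefficient) minimizing e(i) = min_x ||x*phi_i - y||_2^2."""
    norms2 = np.sum(Phi**2, axis=0)                 # ||phi_i||_2^2
    corr = Phi.T @ y                                # <phi_i, y>
    coeffs = corr / norms2                          # optimal scalar per column
    errors = np.sum(y**2) - corr**2 / norms2        # e(i)
    i = int(np.argmin(errors))
    return i, coeffs[i]

Phi = np.array([[1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0]])
y = 3.0 * Phi[:, 2]                                 # y = 3 * phi_3, a 1-sparse solution exists
print(best_1_sparse(Phi, y))                        # (2, 3.0)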

Characterization of ℓ0 solution

Suppose µ(Φ) < 1/3. Then,

(1/2)(1 + 1/µ(Φ)) > 2.

From the theorem, if there exists a 2-sparse vector x (i.e. ∥x∥0 ≤ 2) satisfying Φx = y, this is ℓ0 optimal.
Finding a 2-sparse vector needs O(n²) computations.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 11 / 41

Characterization of ℓ0 solution

Suppose µ(Φ) < 1/(2k − 1). Then,

(1/2)(1 + 1/µ(Φ)) > k.

From the theorem, if there exists a k-sparse vector x (i.e. ∥x∥0 ≤ k) satisfying Φx = y, this is ℓ0 optimal.
Finding a k-sparse vector x satisfying Φx = y is almost impossible when k is large, since it needs O(n^k) computations.
In this chapter, we learn greedy methods for this problem.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 12 / 41
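To see concretely why the exhaustive search scales as O(n^k), here is a hedged brute-force sketch (illustration only, not a recommended method; the random test problem is made up). It enumerates every support of size at most k and solves a small least-squares problem on each:

import itertools
import numpy as np

def l0_brute_force(Phi, y, k, tol=1e-10):
    """Search all supports of size <= k for an exact solution of y = Phi x."""
    m, n = Phi.shape
    for s in range(1, k + 1):
        for support in itertools.combinations(range(n), s):     # O(n^s) subsets
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            if np.linalg.norm(sub @ coef - y) < tol:
                x = np.zeros(n)
                x[list(support)] = coef
                return x                                         # sparsest solution found
    return None

rng = np.random.default_rng(1)
Phi = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
print(np.nonzero(l0_brute_force(Phi, Phi @ x_true, k=2))[0])     # [ 3 11 ]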

Table of Contents

1 ℓ0 Optimization

2 Orthogonal Matching Pursuit

3 Thresholding algorithm

4 Numerical Example

5 Conclusion

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 13 / 41

ℓ0 optimization

ℓ0 optimization

minimize_{x ∈ R^n} ∥x∥0 subject to y = Φx.

To solve this, we employ greedy methods.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 14 / 41

Matching pursuit

Finding a 1-sparse solution of y = Φx is of O(n).
This is done by searching for an index i that minimizes

e(i) ≜ min_{x∈R} ∥xϕi − y∥2².

If there is no 1-sparse solution, e(i) never becomes 0, but we iterate this minimization.

Matching Pursuit (MP)
1 Find a 1-sparse vector x[1] that minimizes ∥Φx − y∥2.
2 For k = 1, 2, 3, . . . do
    Compute the residue r[k] = y − Φx[k]
    Find a 1-sparse vector x∗ that minimizes ∥Φx − r[k]∥2
    Set x[k + 1] = x[k] + x∗

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 15 / 41
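A minimal NumPy sketch of the MP iteration above (my own illustrative code; the variable names and the random test problem are assumptions, not from the slides). Each step picks the column most correlated with the current residue and adds the corresponding 1-sparse update:

import numpy as np

def matching_pursuit(Phi, y, num_iter):
    """Matching pursuit; returns the coefficient vector and the residue norms."""
    n = Phi.shape[1]
    norms2 = np.sum(Phi**2, axis=0)
    x = np.zeros(n)
    r = y.copy()                                       # r[0] = y
    res_norms = [np.linalg.norm(r)]
    for _ in range(num_iter):
        i = int(np.argmax((Phi.T @ r)**2 / norms2))    # best single column
        c = (Phi[:, i] @ r) / norms2[i]                # optimal scalar coefficient
        x[i] += c                                      # x[k+1] = x[k] + x*
        r -= c * Phi[:, i]                             # new residue
        res_norms.append(np.linalg.norm(r))
    return x, res_norms

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[4, 17, 30]] = [1.0, -2.0, 0.5]
x_hat, res = matching_pursuit(Phi, Phi @ x_true, num_iter=30)
print(res[:5])    # the residue norm shrinks at every step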

Matching pursuit

[Figure: vectors ϕ1, ϕ2, ϕ3 and y; y is decomposed into x[1]ϕ2 and the residue r[1]]

k = 1
i[1] = 2 minimizes e(i) = min_{x∈R} ∥xϕi − y∥2².
y = x[1]ϕi[1] + r[1]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 16 / 41

Matching pursuit

[Figure: vectors ϕ1, ϕ2, ϕ3 and y; the residue r[1] is decomposed into x[2]ϕ3 and the new residue r[2]]

k = 2
i[2] = 3 minimizes min_{x∈R} ∥xϕi − r[1]∥2².
r[1] = x[2]ϕi[2] + r[2]
y = x[1]ϕi[1] + x[2]ϕi[2] + r[2]
We can continue this for k = 3, 4, 5, . . ..

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 17 / 41

Matching pursuit

After k steps, we have

y = x[1]ϕi[1] + x[2]ϕi[2] + · · · + x[k]ϕi[k] + r[k].

The vector y is approximated by

y[k] ≜ x[1]ϕi[1] + x[2]ϕi[2] + · · · + x[k]ϕi[k] = Φx[k]

with the k-sparse vector

x[k] ≜ x[1]e_i[1] + x[2]e_i[2] + · · · + x[k]e_i[k].

One needs O(nk) computations to obtain a k-sparse vector that approximates a solution of y = Φx.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 18 / 41

Finding 1-sparse vector

For MP, we need to obtain a 1-sparse vector that minimizes ∥Φx − r∥2².
Since Φ = [ϕ1, . . . , ϕn] and x = [x1, . . . , xn]⊤,

Φx = x1ϕ1 + · · · + xnϕn.

Since x is 1-sparse, we just need to obtain the index i∗ and the coefficient xi∗ that minimize ∥xiϕi − r∥2².
We have

e(i) = min_x ∥xϕi − r∥2² = ∥r∥2² − ⟨ϕi, r⟩² / ∥ϕi∥2².

From this,

i∗ = arg min_i e(i) = arg max_i ⟨ϕi, r⟩² / ∥ϕi∥2².

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 19 / 41
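A quick numerical check of the closed form above (illustrative code, not from the slides; the random vectors are made up). The explicit expression for e(i) agrees with direct minimization over the scalar x, whose optimizer is ⟨ϕi, r⟩/∥ϕi∥2²:

import numpy as np

rng = np.random.default_rng(0)
phi = rng.standard_normal(5)                      # one column phi_i
r = rng.standard_normal(5)                        # current residue

e_closed = r @ r - (phi @ r)**2 / (phi @ phi)     # closed-form e(i)
x_opt = (phi @ r) / (phi @ phi)                   # minimizing scalar
e_direct = np.linalg.norm(x_opt * phi - r)**2     # direct evaluation

print(np.isclose(e_closed, e_direct))             # True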

Matching pursuit: convergence

Theorem
Assume that the dictionary {ϕ1, ϕ2, . . . , ϕn} has m linearly independent vectors (i.e. rank Φ = m). Then there exists a constant c ∈ (0, 1) such that

∥r[k]∥2² ≤ c^k ∥y∥2², k = 0, 1, 2, . . . . (1)

The residue r[k] monotonically decreases and

lim_{k→∞} r[k] = 0.

The convergence rate is first order; the residue decreases exponentially, that is, O(c^k).
Much faster than FISTA with O(1/k²).

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 20 / 41

Orthogonal Matching Pursuit (OMP)

MP cannot always achieve r[k] = 0 with finite k.
This is because MP may choose an index i[k] that was already chosen in previous steps.
OMP (Orthogonal Matching Pursuit) can achieve r[k] = 0 in a finite number of iterations.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 21 / 41

Orthogonal Matching Pursuit (OMP)

Choose the index i[k] of a 1-sparse vector that minimizes ∥Φx − r[k − 1]∥2:

i[k] = arg max_{i∈{1,...,n}} ⟨ϕi, r[k − 1]⟩² / ∥ϕi∥2², r[0] = y, k = 1, 2, . . .

Store the index in the chosen index set:

Sk = Sk−1 ∪ {i[k]}, S0 = ∅, k = 1, 2, . . .

Approximate y by a vector y[k] in Ck = span{ϕi : i ∈ Sk}, that is,

y[k] = arg min_{v∈Ck} (1/2)∥v − y∥2² = ΠCk(y) : projection onto Ck.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 22 / 41

Orthogonal Matching Pursuit (OMP)

[Figure: y, its projection y[k] onto the subspace Ck = span{ϕi : i ∈ Sk}, and the residue r[k]]

The coefficient vector x̃[k] that satisfies

y[k] = Σ_{i∈Sk} x̃i[k] ϕi = ΦSk x̃[k]

is given by

x̃[k] = (ΦSk⊤ ΦSk)⁻¹ ΦSk⊤ y.

Finally, the k-sparse vector x[k] is computed by

(x[k])Sk = x̃[k], and (x[k])i = 0 for i ∉ Sk.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 23 / 41
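Putting the index selection, the support update, and the projection together gives the following hedged NumPy sketch of OMP (my own code; the slides do not prescribe an implementation). The support coefficients are obtained with a least-squares solve rather than an explicit matrix inverse:

import numpy as np

def omp(Phi, y, max_iter, tol=1e-10):
    """Orthogonal matching pursuit; returns the sparse estimate x."""
    n = Phi.shape[1]
    norms2 = np.sum(Phi**2, axis=0)
    S = []                                                # chosen index set S_k
    r = y.copy()                                          # r[0] = y
    x = np.zeros(n)
    coef = np.array([])
    for _ in range(max_iter):
        i = int(np.argmax((Phi.T @ r)**2 / norms2))       # index selection
        if i not in S:
            S.append(i)
        # Project y onto C_k = span{phi_i : i in S_k} via least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ coef                          # residue, orthogonal to C_k
        if np.linalg.norm(r) < tol:
            break
    x[S] = coef
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[4, 17, 30]] = [1.0, -2.0, 0.5]
print(np.allclose(omp(Phi, Phi @ x_true, max_iter=10), x_true))   # True for this example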

Orthogonal Matching Pursuit (OMP)

[Figure: y, its projection y[k] onto Ck = span{ϕi : i ∈ Sk}, and the residue r[k] orthogonal to Ck]

The residue vector r[k] is orthogonal to the linear subspace Ck. From this,

⟨v, r[k]⟩ = 0, ∀v ∈ Ck.

Any vector ϕi in Ck will never be chosen at the next step:

i[k + 1] = arg max_{i∈{1,2,...,n}} ⟨ϕi, r[k]⟩² / ∥ϕi∥2² = arg max_{i∈{1,2,...,n}, ϕi ∉ Ck} ⟨ϕi, r[k]⟩² / ∥ϕi∥2².

This is why the algorithm is called the orthogonal MP (OMP).

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 24 / 41

OMP: convergence

Theorem
Assume that rank(Φ) = m. Assume also that there exists a vector x ∈ R^n such that y = Φx and

∥x∥0 < (1/2)(1 + 1/µ(Φ)).

Then, this vector x is the unique solution of the ℓ0 optimization, and OMP gives it in k = ∥x∥0 steps.

At each step of OMP, we need to compute the matrix inversion in

(ΦSk⊤ ΦSk)⁻¹ ΦSk⊤ y.

If the number k = ∥x∥0 is very large, then this inversion may impose a heavy computational burden.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 25 / 41
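In practice the explicit inverse is usually avoided; the support coefficients can be obtained with a least-squares routine instead. A hedged one-liner in NumPy (illustrative only, not from the slides; the test data are made up):

import numpy as np

# Instead of forming inv(Phi_S.T @ Phi_S) explicitly, solve the
# least-squares problem min_x ||Phi_S x - y||_2 directly.
rng = np.random.default_rng(0)
Phi_S = rng.standard_normal((20, 3))     # columns of Phi indexed by S_k
y = rng.standard_normal(20)
x_S, *_ = np.linalg.lstsq(Phi_S, y, rcond=None)
print(x_S)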

Table of Contents

1 ℓ0 Optimization

2 Orthogonal Matching Pursuit

3 Thresholding algorithm

4 Numerical Example

5 Conclusion

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 26 / 41

Optimization problems

In this section, we consider the following two optimization problems:

ℓ0 regularization

minimize_{x ∈ R^n} (1/2)∥Φx − y∥2² + λ∥x∥0

s-sparse approximation

minimize_{x ∈ R^n} (1/2)∥Φx − y∥2² subject to ∥x∥0 ≤ s

We introduce greedy algorithms called thresholding algorithms for these ℓ0 optimization problems.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 27 / 41

ℓ0 regularization and proximal gradient algorithm

ℓ0 regularization

minimize_{x ∈ R^n} (1/2)∥Φx − y∥2² + λ∥x∥0

Consider the optimization problem:

minimize_{x ∈ R^n} f1(x) + f2(x),

f1: differentiable and convex, dom(f1) = R^n
f2: proper, closed, and convex

The proximal gradient algorithm:

x[k + 1] = prox_{γ f2}(x[k] − γ∇f1(x[k])).

For the ℓ0 regularization,

f1(x) ≜ (1/2)∥Φx − y∥2², f2(x) ≜ λ∥x∥0.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 28 / 41

ℓ0 regularization and proximal gradient algorithm

The function f₂(x) = λ∥x∥₀ is not convex.
The proximal operator of λ∥x∥₀ is given by the hard-thresholding operator Hθ(v) with θ = √(2γλ), where

    [Hθ(v)]ᵢ ≜ { vᵢ,  |vᵢ| ≥ θ,
                 0,   |vᵢ| < θ,      i = 1, 2, . . . , n.

[Figure: graph of the scalar hard-thresholding function Hθ(v), which equals 0 on (−θ, θ) and v outside it]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 29 / 41
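The value of this threshold can be checked coordinate-wise (a short derivation added here for clarity): the proximal problem min_x (1/2)∥x − v∥₂² + γλ∥x∥₀ separates over the components, and each component has only two candidates,

    xᵢ = vᵢ  (cost γλ)    or    xᵢ = 0  (cost vᵢ²/2),

so keeping vᵢ is optimal exactly when vᵢ²/2 ≥ γλ, that is, when |vᵢ| ≥ √(2γλ) = θ.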

Hard-thresholding operator

Proximal operator of f₂(x) = λ∥x∥₀:

    prox_{γλ∥·∥₀}(v) = H_{√(2γλ)}(v).

The hard-thresholding operator Hθ(v) rounds small elements (|vᵢ| < θ) to 0, where θ = √(2γλ).

[Figure: a vector v and its hard-thresholded version Hθ(v); entries with |vᵢ| < θ are set to zero, the others are kept unchanged]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 30 / 41
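A minimal NumPy sketch of this operator (the function name and the example values are mine, not from the slides):

```python
import numpy as np

def hard_threshold(v, theta):
    """Hard-thresholding H_theta: keep entries with |v_i| >= theta, zero the rest."""
    v = np.asarray(v, dtype=float)
    return np.where(np.abs(v) >= theta, v, 0.0)

# threshold sqrt(2*gamma*lambda), as in the proximal operator of lambda*||x||_0
gamma, lam = 0.5, 0.1
theta = np.sqrt(2.0 * gamma * lam)
print(hard_threshold([0.9, -0.2, 0.05, -1.3], theta))  # -> [ 0.9  0.   0.  -1.3]
```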

Iterative Hard-Thresholding (IHT) Algorithm

Iterative Hard-Thresholding (IHT)
Initialization: Give an initial vector x[0] and a positive number γ > 0.
Iteration: for k = 0, 1, 2, . . . do

    x[k + 1] = H_{√(2γλ)}(x[k] − γΦ⊤(Φx[k] − y)).

Theorem
Assume that

    γ < 1/∥Φ∥₂²

holds. Then the sequence {x[0], x[1], x[2], . . .} generated by IHT converges to a local minimizer of the ℓ0 regularization. Moreover, the convergence is first order:

    ∥x[k + 1] − x∗∥₂ ≤ c∥x[k] − x∗∥₂,   k = 0, 1, 2, . . . .

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 31 / 41
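A compact NumPy sketch of IHT under these assumptions; the step-size choice, the fixed iteration count, and the function name are illustrative, not prescribed by the slides:

```python
import numpy as np

def iht(Phi, y, lam, num_iters=100000):
    """Iterative hard-thresholding for min (1/2)||Phi x - y||_2^2 + lam*||x||_0."""
    m, n = Phi.shape
    gamma = 0.99 / np.linalg.norm(Phi, 2) ** 2    # step size below 1/||Phi||_2^2
    theta = np.sqrt(2.0 * gamma * lam)            # hard threshold sqrt(2*gamma*lam)
    x = np.zeros(n)
    for _ in range(num_iters):
        v = x - gamma * Phi.T @ (Phi @ x - y)     # gradient step on the quadratic term
        x = np.where(np.abs(v) >= theta, v, 0.0)  # hard-thresholding step
    return x
```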

s-sparse approximation

s-sparse approximation

    minimize_{x ∈ Rⁿ}  (1/2)∥Φx − y∥₂²  subject to  ∥x∥₀ ≤ s

The set of s-sparse vectors in Rⁿ:

    Σs ≜ {x ∈ Rⁿ : ∥x∥₀ ≤ s}.

This set is non-convex.
The indicator function:

    I_Σs(x) = { 0,  ∥x∥₀ ≤ s,
                ∞,  ∥x∥₀ > s.

The problem is equivalently described by

    minimize_{x ∈ Rⁿ}  (1/2)∥Φx − y∥₂² + I_Σs(x).

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 32 / 41

s-sparse operator

We apply the proximal gradient algorithm to

    minimize_{x ∈ Rⁿ}  (1/2)∥Φx − y∥₂² + I_Σs(x).

The proximal operator of I_Σs(x) is the projection onto Σs.
The projection is given by the s-sparse operator Hs(v), which sets all but the s largest (in magnitude) elements of v to 0.
Note that this projection is not unique.

[Figure: H₃(v) keeps the 3 largest-magnitude elements of a vector v and sets the remaining elements to zero]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 33 / 41
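One possible NumPy implementation of the s-sparse operator Hs (the function name and the tie-breaking behavior inherited from argpartition are my own choices):

```python
import numpy as np

def s_sparse_operator(v, s):
    """H_s: keep the s largest-magnitude entries of v, set the rest to zero."""
    v = np.asarray(v, dtype=float)
    x = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]   # indices of the s largest |v_i|
    x[idx] = v[idx]
    return x

print(s_sparse_operator([0.3, -2.0, 0.1, 1.5, -0.4], s=2))  # keeps -2.0 and 1.5
```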

Iterative s-sparse algorithm

Iterative s-sparse algorithm
Initialization: Give an initial vector x[0] and a positive number γ > 0.
Iteration: for k = 0, 1, 2, . . . do

    x[k + 1] = Hs(x[k] − γΦ⊤(Φx[k] − y)).

Theorem
Assume that rank(Φ) = m and that the column vectors ϕᵢ, i = 1, 2, . . . , n, are non-zero. Assume also that the constant γ > 0 satisfies

    γ < 1/∥Φ∥₂².

Then the sequence {x[0], x[1], x[2], . . .} generated by the s-sparse algorithm converges to a local minimizer of the s-sparse approximation. Moreover, the convergence is first order.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 34 / 41
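A sketch of this projected-gradient iteration in NumPy (iteration count and step size are again illustrative assumptions):

```python
import numpy as np

def iterative_s_sparse(Phi, y, s, num_iters=100000):
    """Iterate x <- H_s(x - gamma * Phi.T @ (Phi @ x - y)) for s-sparse approximation."""
    m, n = Phi.shape
    gamma = 0.99 / np.linalg.norm(Phi, 2) ** 2   # step size below 1/||Phi||_2^2
    x = np.zeros(n)
    for _ in range(num_iters):
        v = x - gamma * Phi.T @ (Phi @ x - y)    # gradient step
        x = np.zeros(n)
        idx = np.argpartition(np.abs(v), -s)[-s:]
        x[idx] = v[idx]                          # keep the s largest-magnitude entries
    return x
```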

Compressed Sampling Matching Pursuit (CoSaMP)

s-sparse approximation

    minimize_{x ∈ Rⁿ}  (1/2)∥Φx − y∥₂²  subject to  ∥x∥₀ ≤ s

OMP can be extended to solve the s-sparse approximation.
The extension is called Compressed Sampling Matching Pursuit (CoSaMP).

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 35 / 41

CoSaMP algorithm for s-sparse approximation

CoSaMP algorithm for s-sparse approximation
Initialization: x[0] = 0, r[0] = y, S₀ = ∅
Iteration: for k = 1, 2, . . . do

    I[k] := supp{ H_{2s}( ( ⟨ϕᵢ/∥ϕᵢ∥₂, r[k − 1]⟩² )ᵢ ) },
    Sₖ := Sₖ₋₁ ∪ I[k],
    x[k] := (Φ_{Sₖ}⊤ Φ_{Sₖ})⁻¹ Φ_{Sₖ}⊤ y,
    (z[k])_{Sₖ} := x[k],   (z[k])_{Sₖᶜ} := 0,
    x[k] := Hs(z[k]),
    Sₖ := supp{x[k]},
    r[k] := y − Φ_{Sₖ} x[k].

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 36 / 41
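A rough NumPy sketch following these steps (the iteration count, variable names, and the plain least-squares solve on the merged support are assumptions made for illustration):

```python
import numpy as np

def cosamp(Phi, y, s, num_iters=10):
    """CoSaMP-style iterations for the s-sparse approximation problem."""
    m, n = Phi.shape
    col_norms = np.linalg.norm(Phi, axis=0)
    x, r = np.zeros(n), np.asarray(y, dtype=float).copy()
    for _ in range(num_iters):
        # correlate the residual with the normalized columns; take the 2s largest
        corr = (Phi.T @ r) / col_norms
        I = np.argpartition(np.abs(corr), -2 * s)[-2 * s:]
        S = np.union1d(np.flatnonzero(x), I).astype(int)  # merged support
        # least-squares fit restricted to the merged support
        z = np.zeros(n)
        sol, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        z[S] = sol
        # prune back to the s largest-magnitude entries
        keep = np.argpartition(np.abs(z), -s)[-s:]
        x = np.zeros(n)
        x[keep] = z[keep]
        r = y - Phi @ x                                   # update the residual
    return x
```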

Table of Contents

1 ℓ0 Optimization

2 Orthogonal Matching Pursuit

3 Thresholding algorithm

4 Numerical Example

5 Conclusion

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 37 / 41

Sparse polynomial curve fitting

80th-order sparse polynomial: y = −t⁸⁰ + t.
Generate data from this polynomial:

    D = {(t₁, y₁), (t₂, y₂), . . . , (t₁₁, y₁₁)},   yᵢ = −tᵢ⁸⁰ + tᵢ,

on t₁ = 0, t₂ = 0.1, t₃ = 0.2, . . . , t₁₁ = 1.

[Figure: the curve y = −t⁸⁰ + t on [0, 1] with the 11 data points]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 38 / 41
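A small sketch of how the data and the monomial dictionary matrix could be set up for this example; the dictionary ordering and the calls at the end are assumptions, reusing the function sketches above:

```python
import numpy as np

# 11 sample points t = 0, 0.1, ..., 1 and data from the sparse polynomial y = -t^80 + t
t = np.linspace(0.0, 1.0, 11)
y = -t**80 + t

# dictionary of monomials t^80, t^79, ..., t, 1 (81 columns), so that y = Phi @ c
# with the true coefficient vector c = (-1, 0, ..., 0, 1, 0)
degrees = np.arange(80, -1, -1)
Phi = t[:, None] ** degrees[None, :]

# any of the algorithms sketched above can now be applied, e.g.
# x_iht = iht(Phi, y, lam=1e-4)
# x_cosamp = cosamp(Phi, y, s=2)
```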

Results

True coefficient vector:

    c = (−1, 0, . . . , 0, 1, 0)   (78 zeros between the −1 and the 1).

[Figure: estimated coefficient vectors obtained by L1-OPT, MP, OMP, IHT, ISS, and CoSaMP, plotted over the coefficient index 0–80 with values between −1 and 1]

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 39 / 41

Results

    Method       ℓ1 OPT        MP           OMP           IHT      ISS     CoSaMP
    Error        2.7 × 10⁻¹⁰   9.1 × 10⁻⁶   4.1 × 10⁻¹⁶   0.0017   0.83    4.1 × 10⁻¹¹
    Iterations   10            18           2             10⁵      10⁵     3

The error (the residual r = y − Φx∗) is smallest with OMP.
The number of iterations is also smallest with OMP.
IHT and ISS converged to local minimizers with large residuals.
OMP is best for this example, but this is not always true.
The performance depends on the problem and the data, so one should use trial and error to find the best algorithm.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 40 / 41

Conclusion

Greedy algorithms can directly solve ℓ0 optimization problems.
The greedy algorithms introduced in this chapter show linear convergence, which is much faster than the proximal splitting algorithms.
A greedy algorithm returns a locally optimal solution, which is not necessarily a global minimizer.

M. Nagahara (Univ of Kitakyushu) Sparsity Methods 41 / 41
