
COMPRESSIVE SENSING BASED IMAGE RECONSTRUCTION

FOR COMPUTED TOMOGRAPHY DOSE REDUCTION

BY

CHUANG MIAO

A Dissertation Submitted to the Graduate Faculty of

WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND

SCIENCES

in Partial Fulfillment of the Requirements

for the Degree of

DOCTOR OF PHILOSOPHY

Biomedical Engineering

AUGUST 2015

Winston-Salem, North Carolina

Approved by:

Hengyong Yu, Ph.D., Advisor, Chair

Ge Wang, Ph.D., Co-advisor

Robert J. Plemmons, Ph.D.

Guohua Cao, Ph.D.

King Chuen Li, M.D.


ACKNOWLEDGEMENTS

I would like to express my sincere gratitude to my advisor, Dr. Hengyong Yu, for

introducing me to the amazing world of medical imaging. His guidance and

valuable advice were the key to all the work that has been reported in this

dissertation. He has devoted a great amount of time to instruct me over the past

few years, not only in the CT image reconstruction field but also in helping me

develop many aspects beyond the scope of academics. His constant inspiration

and support both for my research and career are highly appreciated.

I would like to thank my co-advisor, Dr. Ge Wang. I learned a lot from his

great vision and strategic thinking. I am very grateful for his valuable suggestions

and constant support.

I would like to thank Drs. Robert J. Plemmons and Guohua Cao for their

great teaching, insightful comments, and thoughtful discussion regarding my

research.

I would like to thank all the previous and current lab members for a great

research environment and helpful feedback.

Finally, I would like to thank my parents, sisters and brothers-in-law for

their everlasting care, support, and love. You are all my eternal source of

happiness.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS ......................................................................................ii

ABSTRACT ........................................................................................................ xiii

1. Introduction ....................................................................................................... 1

1.1 Overview of X-ray Computed Tomography ................................................. 1

1.2 Current CT Technology ............................................................................... 1

1.2.1 First Generation CT Scanners .............................................................. 2

1.2.2 Second Generation CT Scanners ......................................................... 2

1.2.3 Third Generation CT Scanners ............................................................. 2

1.2.4 Fourth Generation CT Scanners .......................................................... 3

1.3 Urgency for Dose Reduction ....................................................................... 3

1.4 Thesis Organization .................................................................................... 4

2. System Models for CT Imaging ........................................................................ 8

2.1 Background ................................................................................................. 8

2.2 Discretization of a CT Imaging System ....................................................... 9

2.3 System Models ......................................................................................... 10

2.3.1 AIM ..................................................................................................... 10

2.3.2 DDM ................................................................................................... 10

2.3.3 IDDM .................................................................................................. 12


2.4 Results ...................................................................................................... 14

2.4.1 Theoretical Verification ....................................................................... 14

2.4.2 Real Data Experiment ........................................................................ 17

2.5 Discussion and Conclusion ....................................................................... 25

3. Thresholding Representation for the ℓ_p (0 < p < 1) Regularization .............. 26

3.1 Background ............................................................................................... 26

3.2 General Iterative Thresholding Representation ......................................... 28

3.2.1 Problem Description ........................................................................... 28

3.2.2 Threshold Filtering Algorithm for ℓ_p Regularization ............................ 29

3.2.3 Thresholding Representation for the ℓ_0, ℓ_{1/2} and ℓ_1 Regularizations 30

3.2.4 General Iterative Thresholding Representation for ℓ_p (0 < p < 1)

Regularization ............................................................................................. 31

3.3 General Analytic Thresholding Representation for ℓ_p (0 < p < 1)

Regularization ................................................................................................. 39

3.3.1 Approximate Analytic Formulas .......................................................... 39

3.4 Numerical Verification ............................................................................... 41

3.4.1 Numerical Analysis of the Iterative Thresholding Representation ...... 42

3.4.2 Analysis of Different Analytic Thresholding Formulas ........................ 46

4. General Thresholding Algorithm for ℓ_p (0 < p < 1) Regularization Based CT

Reconstruction .................................................................................................... 53


4.1 General Thresholding Algorithm ............................................................... 54

4.1.1 Algorithm Development ...................................................................... 54

4.1.2 Application and Evaluation ................................................................. 64

4.1.3 Conclusion and Discussion ................................................................ 81

4.2 Alternating Iteration Algorithm ................................................................... 82

4.2.1 Algorithm Development ...................................................................... 84

4.2.2 Evaluations and Results ..................................................................... 84

4.2.3 Conclusions and Discussions ............................................................. 99

5. Conclusions and Future Work ...................................................................... 101

References ....................................................................................................... 104

Appendix A ....................................................................................................... 114

A.1 Distance-driven Kernel ........................................................................... 114

Curriculum Vitae ............................................................................................... 116


LIST OF TABLES

4.1 Pseudo-codes of the Reweighted Algorithm ................................................. 58

4.2 Pseudo-codes of the General Thresholding Algorithm ................................. 61


LIST OF FIGURES

2.1 Area integral model assuming a 2D fan beam geometry .............................. 11

2.2 Distance-driven model assuming a 2D fan beam geometry ......................... 12

2.3 Improved distance-driven model assuming a 2D fan beam geometry .......... 13

2.4 Analysis of the area integral model assuming a 2D parallel geometry with a

one row image and one projection angle with one detector cell. (a), (b) and (c)

are the cases that the detector cell size is comparable, larger and smaller than

the image pixel size. ........................................................................................... 15

2.5 Same as Fig. 2.4 but for DDM .................................................. 16

2.6 Same as Fig. 2.4 but for IDDM ................................................................... 17

2.7 Reference images reconstructed from 984 views using OS-SART algorithm

after 50 iterations. The display window is [0, 600 HU] ........................................ 19

2.8 The running time of one iteration for the AIM-, DDM-, and IDDM-based methods
with different numbers of detector cells in the projection datasets. The reconstructed

image size is 512 × 512 ...................................................................................... 19

2.9 The image noise (HU) with respect to the iteration number. Upper left: 888

detector cells; Upper right: 444 detector cells; Lower left: 222 detector cells;

Lower right: 111 detector cells ............................................................................ 22

2.10 The measured spatial resolution (mm) with respect to the iteration number.

Upper left: 888 detector cells; Upper right: 444 detector cells; Lower left: 222

detector cells; Lower right: 111 detector cells ..................................................... 24


3.1 Numerical ℓ_p kernel functions (thin lines) for λ = 5 and the corresponding
known analytical ℓ_0, ℓ_{1/2} and ℓ_1 kernel functions ................................................. 41

3.2 Comparison of different initial starts for each analytic thresholding formula.
(a) shows plots of the real x(b) and different initial starts x_0(b) with respect to b for
p = 0.5 and λ = 5; and (b) shows plots of the absolute difference between the real x
and the initial starts x_0 for analytic thresholding formulas 4 and 5 .................. 43

3.3 Comparisons of the iterative process with different initial values x_0 for p = 0.5.
The iteration number is fixed at 3 and λ = 5. In (a), the solid line is the exact
analytic solution for ℓ_{1/2}; the dashed and dotted lines represent the iteration
process with a zero initial value and a nonzero initial value x_0, respectively. The regions
around b = 2.77 (b) and around b = 5 for p = 0.5 are magnified, and the iteration
numbers are marked .......................................................................... 45

3.4 Plots of f(x) with respect to x (dark solid line) assuming (a) b = 2.77 and (b)
b = 5, with p = 0.5 and λ = 5 ............................................................................... 46

3.5 Plots of error bounds of different analytic formulas with p = 0.5 and λ = 5 . 50

3.6 Plots of x(b) with respect to b for different analytic formulas with p = 0.5 and
λ = 5. In (a), the thickest solid line is the real x(b) and the other lines correspond to
different analytic formulas. Two smaller regions are magnified in (b) and (c) .... 51

4.1 Reconstructed Shepp-Logan phantom images from 9 views. The images in
the top row were reconstructed by the general thresholding algorithm, and the images
in the bottom row were reconstructed by the reweighted algorithm. The images from
the left to right columns correspond to p = 1.0, p = 0.9 and p = 0.1, respectively. For


the general thresholding algorithm, the iteration numbers are 5000 for p = 0.1,
p = 0.9 and p = 1.0. For the reweighted algorithm, the iteration numbers for p = 1.0,
p = 0.9 and p = 0.1 are 2 × 10^4, 6 × 10^4 and 1.3 × 10^5, respectively. The
display window is [0.1, 0.3] ................................................................................. 69

4.2 RMSE curves for (a) the general thresholding algorithm and (b) the

reweighted algorithm for 9 views with optimal regularization parameters ........... 70

4.3 Reference image for the simulated physical phantom experiment. The image

is first reconstructed from 984 views using the OS-SART method and then

filtered by the analytic formula (iii) using discrete gradient transform. The display

window is [-1000, -500]HU ................................................................................. 72

4.4 The cross-validation for the optimal parameter selection for 13 views.
(a) is the plot of the minimum RMSE with respect to different λ and (b) is the plot of the
maximum SSIM with respect to different λ ............................................................ 74

4.5 Physical phantom images reconstructed from 13 views after 2000 iterations.

The images in the top row were reconstructed by the general thresholding

algorithm, the images in the bottom row were reconstructed by the reweighted

algorithm. From left to right columns, the images were reconstructed with p = 0.5,
p = 0.9 and p = 1.0, respectively. The optimal regularization parameters were

selected by cross-validation for the reweighted algorithm and by Eq. (4.25) for

the general thresholding algorithm. The display window is [-1000, -500]HU ...... 75

4.6 The SSIM curves of the physical phantom reconstruction with respect to the

iteration number for 13 views.............................................................................. 76


4.7 Same as Fig. 4.5 but from 33 views ............................................................. 77

4.8 Same as Fig. 4.6 but from 33 views ............................................................. 77

4.9 Physical phantom images reconstructed from realistic projection data after

200 iterations with zero initializations. The images in the top row were

reconstructed from 33 views with p = 0.4, and the images in the bottom row were
reconstructed from 55 views with p = 0.7. The images in the left column were

reconstructed by the reweighted algorithm, the images in the right column were

reconstructed by the general thresholding algorithm. For both algorithms, the

optimal regularization parameters were selected by cross-validation. The display

window is [-1000, -500]HU ................................................................................. 78

4.10 The image in the first column is the reference image which was

reconstructed from 2200 views. The images in the second column are the

reconstructed images (top) by the reweighted algorithm and the difference image

(bottom) between the reference image and the reconstructed image. The images

in the third column are same with the second column but using the proposed

general threshold filtering algorithm. For the images in the second and third

columns, the view number was 385, p = 0.4 and the iteration number is 10. The
optimal λ was selected in a similar way as in the physical phantom experiment using cross-
validation, and the same optimal λ was used for both the reweighted algorithm and the
proposed general threshold filtering algorithm. The display window for the first

row image is [-300 1000] HU and for the second row images is [-5 5] HU ......... 80


4.11 The SSIM curve of the patient dataset experiment in Fig. 4.10 with respect to

the iteration number. The solid and dashed lines are for the reweighted algorithm

and the proposed general threshold filtering algorithm, respectively .................. 81

4.12 Flowchart of the proposed alternating iteration algorithm. N_1 and N_2
represent the iteration numbers for the ℓ_1 and ℓ_p regularizations, respectively ... 85

4.13 Reconstructed images from 8 views. The first row images are

reconstructions using single ℓ_p regularizations with the initialization selected as
the reconstruction from the ℓ_1 regularization. From left to right are the ℓ_1, ℓ_{0.3} and ℓ_{0.2}
regularizations. The images from the second row to the fifth row are
reconstructed using the proposed alternating iteration algorithms. For the second
and third rows, the iteration numbers for the ℓ_1 and ℓ_p regularizations are (N_1, N_2) = (5, 10). While
the initializations for the second row images are reconstructions from the ℓ_1
regularization, the initializations for the third row images are zeros. The fourth
and fifth rows are the same as the second and third rows but the iteration numbers
for the ℓ_1 and ℓ_p regularizations are (N_1, N_2) = (5, 15). All the initializations from the ℓ_1 regularization
are the same and were selected by cross-validation. The iteration number was selected as
10000. The iteration numbers for the images from the ℓ_1 initializations and zeros are
60000 and 70000, respectively. The display window is [0.1 0.4] ........................ 90

4.14 The RMSE curves for the experiment in Fig. 4.13 ...................................... 91

4.15 Reconstructed images from 7 views. The first row images are

reconstructions using single ℓ_p regularizations with the initialization selected as
the reconstruction from the ℓ_1 regularization. From left to right are the ℓ_1, ℓ_{0.3} and ℓ_{0.2}
regularizations. The images in the second and third rows are reconstructed using
the proposed alternating iteration algorithms. The iteration numbers for the ℓ_1 and ℓ_p regularizations
are (N_1, N_2) = (5, 15). While the initializations for the second row images are
reconstructions from the ℓ_1 regularization, the initializations for the third row images
are zeros. All the initializations from the ℓ_1 regularization are the same and were selected by
cross-validation. The iteration number was selected as 10000. The iteration
numbers for the images from the ℓ_1 initializations and zeros are 60000 and 70000,
respectively. The display window is [0.1 0.4] ...................................................... 92

4.16 The RMSE curves for the experiment in Fig. 4.15 ...................................... 93

4.17 The RMSE curves for different alternating iteration numbers (N_1, N_2) for 8
views for (a) p = 0.3 and (b) p = 0.2 ................................................................... 94

4.18 Same as Fig. 4.17 but for 7 views ............................................................ 95

4.19 Reconstructed images from 15 views after 2000 iterations. The first row

images are reconstructions using single ℓ_p regularizations. From left to right are the
ℓ_1, ℓ_{0.7} and ℓ_{0.5} regularizations, respectively. The second and third row images are
reconstructions using the proposed alternating iteration algorithms. From left to
right are p = 0.7 and p = 0.5. In the second and third rows, the alternating
iteration numbers for the ℓ_1 and ℓ_p regularizations are (N_1, N_2) = (5, 5) and (N_1, N_2) = (15, 5),
respectively. λ = 0.1 was selected by cross-validation and used for all the

algorithms ........................................................................................................... 98

4.20 The SSIM curves for the experiments in Fig. 4.19 ...................................... 99

A1 Distance-driven kernel ................................................................................ 115


ABSTRACT

Excessive radiation exposure is one of the major concerns in the computed

tomography (CT) field. Few-view reconstruction using iterative algorithms is an

important strategy to reduce the radiation dose. In the iterative CT reconstruction,

the projection / backprojection model plays an important role in the overall

computational cost, image quality, and reconstruction accuracy. In this

dissertation, we first propose an improved distance-driven model (IDDM) whose

computational cost is as low as the well-known distance-driven model (DDM) and

the accuracy is comparable to the accurate area integral model (AIM). Recently,

the ℓ_p (0 < p < 1) regularization has attracted great attention because it can
generate sparser solutions than the ℓ_1 regularization. We derive several analytic
thresholding representations for the ℓ_p (0 ≤ p ≤ 1) regularization and develop a
corresponding general thresholding algorithm which is adequate and efficient for
large-scale problems such as CT reconstruction. The analytic thresholding
representation for the ℓ_p regularization permits a fast solution similar to the
iterative hard thresholding algorithm for the ℓ_0 regularization and the iterative soft
thresholding algorithm for the ℓ_1 regularization. The ℓ_p (0 < p < 1) regularization

is very sensitive to noise and the initialization has a significant influence on the

performance. We finally propose an alternating iteration algorithm based on the

derived analytic thresholding representations. For the proposed alternating

iteration algorithm, the zero initialization works as well as the ℓ_1 initialization

and is robust to noise.


Chapter 1. Introduction

1.1 Overview of X-ray Computed Tomography

Computed Tomography (CT) is a method of generating a cross-sectional image

of the internal structures of a solid object, such as the human body. X-ray CT is the

most routinely used diagnostic imaging modality. The first commercial x-ray CT

prototype was developed in 1972 by Godfrey Hounsfield. The first clinically
available scanner was installed in 1974 for head scans only. The process of

generating the cross-sectional images from projections is the so-called image

reconstruction. The reconstructed images are gray-level representations of the

attenuation coefficients of each element of the scanned object. X-ray CT is

called transmission tomography since the x-ray source is located outside of the

scanning object. Other imaging modalities, such as Positron Emission

Tomography (PET) [1], [2], [3], [4], Single Photon Emission Computed

Tomography (SPECT) [5], [6], [7], are called emission tomography since the

radiation source is located inside the scanning object. Another important imaging

modality is Magnetic Resonance Imaging (MRI) [8], [9], [10] which uses a strong

magnetic field to image the internal structures of the human body. Below we

provide a concise review of the development of x-ray CT scanners.

1.2 Current CT Technology

CT technology has improved tremendously, especially over the past two
decades. The driving force is to achieve higher temporal resolution, higher

spatial resolution, faster image reconstruction and lower radiation dose. The


generations of CT scanners can be roughly categorized by the beam geometry

and the scanning trajectory.

1.2.1 First Generation CT Scanners

The first generation of CT scanners used a thin parallel x-ray beam. The x-ray

source and the detector are in a fixed relative position and move in a translate-

rotate motion mode to measure the projection data. Hounsfield's original
scanner, called the EMI scanner, was mainly used for head imaging, where motion is
limited, since the data acquisition over a total of 180 degrees took
about 4 minutes.

1.2.2 Second Generation CT Scanners

In the second generation design, the number of detector elements was increased

to allow the use of a narrow fan beam to cover larger areas. Although the

translate-rotate motion mode was still used in the second generation CT scanner,

the rotation increment of the gantry was increased from 1 degree to 30 degrees. Hence,
a significant temporal resolution improvement and reduced artifacts caused by
patient motion were achieved. The second generation design is still used in
some industrial CT scanners where a large field-of-view (FOV) is necessary.

1.2.3 Third Generation CT Scanners

The third generation CT scanners are the most widely used nowadays. The

coverage of the fan beam was increased and the FOV was large enough to cover

the whole object. The full projection data can be acquired simply by rotating
the source and detector; no translation is needed. The temporal


resolution was largely increased to allow the imaging of moving organs such as
the lungs and heart. Currently, the fastest rotation time is 0.28 seconds.

1.2.4 Fourth Generation CT Scanners

The fourth generation CT scanners have detector elements 360 degrees around

the gantry and only the tube rotates. Hence, the fourth generation geometry is

also called rotate-stationary motion mode. Although the temporal resolution was

not improved, the ring artifacts problem in the third generation CT scanners was

solved. The ring artifacts are due to faulty calibration of a detector element,
which produces erroneous readings at every angular position and thus rings
in the reconstructed image. The cost of the fourth generation CT
scanner is very high because the number of detector elements is much larger than
in the third generation.

1.3 Urgency for Dose Reduction

The x-ray CT has been extensively used in clinics as a primary noninvasive

diagnostic imaging modality. However, it has been recognized that frequent use

of x-ray CT can lead to an increase of the probability of cancerous, genetic and

other diseases due to excessive radiation exposure [11], [12], [13]. Studies have

shown that a single full-body CT scan at normal dose may increases the cancer

mortality risks by 0.08% [14]. This risk factor increases up to 1.9% when a total of

30 CT scans are considered. The estimated risk of cancer mortality for infants is

at least an order of magnitude higher than adults [11]. As a consequence, the

ALARA (as low as reasonably achievable) principle has been proposed to


minimize the patient radiation dose. Hence, it is highly desirable to reduce the

radiation dose while maintaining acceptable image quality.

There are two ways to decrease the radiation dose. One is to reduce the

x-ray photon flux, and the other is to reduce the number of projection

measurements across the whole imaging object. The former is usually
implemented by lowering the mAs or kVp levels and by tube current modulation,
which elevates the quantum noise level in the projections. The latter results in
an insufficient number of samples; as a result, the images
reconstructed by the conventional filtered backprojection (FBP) algorithm [15]
suffer from severe streak artifacts, since the FBP algorithm requires the sampling
rate of the projections to satisfy the Nyquist sampling theorem [16], [17]. Recently, the

compressive sensing (CS) based iterative reconstruction techniques show

potential to accurately reconstruct the signal from far fewer samples than what is

required by the Nyquist sampling theorem [16], [17]. In this dissertation, we will

study the thresholding representation theory for the ℓ_p (0 < p < 1) regularization

and develop the corresponding iterative algorithms for dose reduction.

1.4 Thesis Organization

Mathematically, a CT imaging system can be modeled by a linear matrix

equation AX = P, where X is the unknown image to be reconstructed, P is the
vector of projection measurements and A is the system matrix. Because the system
matrix is too large to be pre-stored in memory, it is usually computed
on-the-fly. In the iterative reconstruction algorithms, the projection and

backprojection operations are repeatedly applied to approximate the image that


best fits the measurements according to an appropriate objective function.

Therefore, the calculation of the system matrix plays an important role both for

the reconstruction accuracy and the efficiency of the iterative algorithm. Many

projection/backprojection models (system models) are available to calculate the

system matrix A. All the system models compromise between the computational

complexity and accuracy. In Chapter 2, we propose an improved distance-driven

model (IDDM) whose computational cost is as low as the distance-driven model

(DDM) and the accuracy is as high as the area integral model (AIM). Please refer

to my master's thesis [18] for a systematic analysis and detailed comparisons of the

IDDM with different system models. For the completeness of
the dissertation, here we only show representative theoretical
analyses and phantom experiments to demonstrate the validity of the IDDM.

We will employ the IDDM as the system model in all the experiments throughout this

dissertation.

Recently, the ℓ_p (0 < p < 1) regularization has attracted great attention
because it can generate sparser solutions than the ℓ_1 regularization. Because the
ℓ_p regularization is a non-convex, nonsmooth and non-Lipschitz problem, we can
only obtain local optima, yet it can generate sparser solutions than the ℓ_1
regularization [19], [20]. The classic Abel theorem and previous research have
demonstrated that, among all the ℓ_p regularizations with 0 < p < 1, only the
solutions of the ℓ_{1/2} and ℓ_{2/3} regularizations can be analytically represented in a
thresholding form [21]. Recently, the analytic thresholding representation formula
for the ℓ_{1/2} regularization was developed by Xu et al. [21]. This finding permits


a fast iterative thresholding algorithm, named the half thresholding algorithm, which is
similar to the hard thresholding algorithm for the ℓ_0 regularization [22], [23], [24],
[25], [26] and the soft thresholding algorithm for the ℓ_1 regularization [26], [27], [28],
[29]. Besides the analytic thresholding representations for the ℓ_0, ℓ_{1/2} and ℓ_1
regularizations, we are also interested in the analytic thresholding
representations for other ℓ_p regularizations, and we develop the corresponding
iterative thresholding algorithm. In Chapter 3, we first develop an iterative
procedure of the general thresholding representation for the ℓ_p regularization with
0 < p < 1. Then we show the convergence properties of the iterative general

thresholding representation procedure. We then derive several approximate yet

analytic general thresholding formulas for any ℓ_p (0 < p < 1) regularization by

assigning different initial values and iteration numbers. The error bounds of each

analytic formula are analyzed. Numerical experiments are performed to verify the

iterative thresholding representation and to compare the accuracy of each

analytic thresholding formula. In Chapter 4, we develop an iterative thresholding

algorithm for the ℓ_p (0 < p < 1) regularization and investigate strategies for
optimal parameter selection. We propose to call the developed algorithm the
iterative general thresholding algorithm for the ℓ_p regularization. Then we apply

the iterative general thresholding algorithm to CT image reconstruction and

evaluate its performance. For better convergence properties, we also propose an

alternating iteration algorithm in a thresholding form based on the derived

general analytic thresholding representations. The alternating iteration algorithm

alternately optimizes ℓ_1 and ℓ_p (0 < p < 1) regularized objective


functions. While the ℓ_p regularization can help to find a sparser solution, the ℓ_1
regularization can prevent the iteration from running away from the global minimum. Both

numerical simulations and physical phantom experiments are performed to

evaluate the alternating iteration algorithm. Finally, in Chapter 5, we conclude

this dissertation.


Chapter 2. System Models for CT Imaging

2.1 Background

In CT applications, the projection and/or backprojection model (system model) is

necessary for image reconstruction, artifact correction, and simulation purposes.

Particularly, in the iterative CT image reconstruction algorithms, the projection

and backprojection are repeatedly applied to approximate the image that best fits

the projection measurements according to an appropriate objective function. The

projection and/or backprojection operations play a critical role in the overall

computation and reconstructed image quality. There are many system models

available, and all of them compromise between computational
complexity and accuracy. To the best of our knowledge, the current system models can

be divided into three categories [30]. The first is the pixel-driven model (PDM),

which is widely used in the backprojection to update the image pixel values for

the FBP algorithm [31], [32], [33]. The PDM is rarely used in the projection

operation to update the detector values because it tends to introduce high-

frequency artifacts [30], [34]. The second is the ray-driven model (RDM), which is

often used in the projection operation to update the detector values. RDM is

rarely used in the backprojection operation because it tends to introduce artifacts

[30], [35]. The state-of-the-art is the distance-driven model (DDM) [35], which

combines the advantages of the PDM and RDM, and can be used both in

projection and backprojection with a relatively low computational cost. Recently, a

finite-detector-based system model called the area integral model (AIM) was

proposed for iterative CT image reconstruction [36]. This model, which requires no
interpolation, is different from all the aforementioned system models and can be
used in both projection and backprojection with high accuracy. However, the AIM is
very computationally intensive. Here, we propose an improved distance-driven
model (IDDM) which has a computational cost similar to that of the DDM and an
accuracy comparable to that of the AIM. We will introduce each model in detail below.

We evaluate each model using physical phantom CT data.

2.2 Discretization of a CT Imaging System

Assuming a 2D parallel-beam or fan-beam CT imaging geometry, X = {x_{i,j}} ∈ ℝ^I × ℝ^J
can be used to represent a digital image, where the indices 1 ≤ i ≤ I,
1 ≤ j ≤ J are integers. Define

x_n = x_{i,j},  n = (i − 1) × J + j,    (2.1)

with 1 ≤ n ≤ N and N = I × J; we can then re-arrange the digital image into a vector
X = [x_1, x_2, …, x_N]^T ∈ ℝ^N. For convenience, we may use either notation x_n or x_{i,j} to
denote the digital image. In practice, a CT imaging system can be modeled by
the following linear equation [37]:

AX = P,    (2.2)

where P ∈ ℝ^M represents the projection measurements, X ∈ ℝ^N represents the
unknown image, and A = {a_{mn}} ∈ ℝ^M × ℝ^N is the system matrix, in which each
element a_{mn} quantifies the contribution of the n-th pixel to the m-th ray path. Let
p_m be the m-th measured projection datum, which is the sum of the products
between x_n and the corresponding weighting coefficients a_{mn}. Eq. (2.2) can be
reformatted as

p_m = Σ_{n=1}^{N} a_{mn} x_n,  m = 1, 2, …, M.    (2.3)

In summary, we have a system matrix A = (a_{mn})_{M×N} and two vectors
X = [x_1, x_2, …, x_N]^T ∈ ℝ^N and P = [p_1, p_2, …, p_M]^T ∈ ℝ^M for the discrete-discrete
imaging model. The major difference between the system models is how they
calculate the system matrix A.
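To make Eqs. (2.1)-(2.3) concrete, the following is a minimal Python sketch of the discrete imaging model. The sparse matrix here is only a random stand-in for a real projector (AIM, DDM, or IDDM, described in the next section); the image size, ray count and density are illustrative.

```python
# Minimal sketch of the discrete imaging model of Eqs. (2.1)-(2.3).
# A real projector would fill the nonzeros of A from geometric overlap
# calculations; a random sparse matrix is used here as a stand-in.
import numpy as np
from scipy.sparse import random as sparse_random

I, J = 4, 4            # image dimensions
N = I * J              # number of pixels
M = 6                  # number of ray paths (detector readings)

# Vectorization of Eq. (2.1): pixel (i, j), 1-based, maps to n = (i - 1) * J + j
image = np.arange(1, N + 1, dtype=float).reshape(I, J)
x = image.reshape(N)   # row-major flattening realizes this indexing

# Stand-in system matrix A = (a_mn): each row holds the weights of one ray path
A = sparse_random(M, N, density=0.3, format="csr", random_state=0)

# Forward projection, Eq. (2.3): p_m = sum_n a_mn * x_n
p = A @ x
# Backprojection (transpose operation), used repeatedly by iterative algorithms
backprojected = A.T @ p

print(p.shape, backprojected.reshape(I, J).shape)
```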

2.3 System Models

2.3.1 AIM

The AIM takes its name from the fact that each weighting coefficient is quantified by
the overlapped area between each pixel and each ray path. As indicated in Fig.
2.1, the AIM considers an x-ray as a narrow fan-beam which covers the region formed by
connecting the x-ray source and the two endpoints of a detector cell [36]. The
weighting coefficient a_{mn} is defined as the normalized area, which is the ratio
between the overlapped area (S_{mn}) and the corresponding fan-arc length, which
is the product of the distance from the center of the n-th pixel to the x-ray source
and the narrow fan-beam angle γ. Please refer to [36] for the details of the
derivation.
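In symbols, the verbal definition above can be restated as follows, where S_{mn} is the overlapped area, d_{mn} is our label for the distance from the center of the n-th pixel to the x-ray source for the m-th ray, and γ is the narrow fan-beam angle (the exact normalization is derived in [36]):

```latex
a_{mn} \;=\; \frac{S_{mn}}{d_{mn}\,\gamma}.
```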

2.3.2 DDM

The state-of-the-art system model is the DDM, which considers that both the detector
cell and the image pixel have width. In order to quantify the weighting coefficient,

the DDM requires the overlapped length between each pixel and each detector

cell [30]. To calculate the overlapped length, we map all the detector cell

boundaries onto the centerline of the image row of interest (Fig. 2.2). One can

also map all the pixel boundaries in an image row of interest onto the detector, or
map both sets of boundaries onto a common line (i.e., the x axis).

Figure 2.1. Area integral model assuming a 2D fan beam geometry.

s_n and s_{n+1}
represent the two mapped boundary locations (sample locations) of the n-th pixel
on the centerline of the image row of interest. Similarly, d_m and d_{m+1} represent the
two mapped boundary locations (destination locations) of the m-th detector cell.

Based on these boundaries, we can calculate the overlapped length between

each detector cell and each image pixel. By applying the normalized distance-

driven kernel (see Appendix A), the weighting coefficient can be calculated as in Eq.
(A3). It is worth mentioning that the interpolation errors along directions near 45 degrees
are relatively large, so one needs to switch to mapping the detector cell boundaries onto
the image column of interest every 45 degrees. The 2D fan beam geometry DDM can be easily

extended to the 3D cone beam geometry by applying the distance-driven kernel

both in the in-plane direction and in the patient-bed translation direction, resulting

in two nested loops. Instead of calculating the overlapped length, the overlapped

area between each voxel and each detector cell is used to calculate the
weighting coefficient. For the 3D cone beam geometry, the computational cost is
much higher than in the 2D fan beam case.

Figure 2.2. Distance-driven model assuming a 2D fan beam geometry.
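As an illustration of the overlap computation described above, here is a minimal sketch for a single image row, assuming both sets of boundaries have already been mapped onto a common axis. The normalization by the detector-cell width stands in for the distance-driven kernel of Eq. (A3) and may differ from it in detail; the boundary values are hypothetical.

```python
# Minimal sketch of the distance-driven overlap computation for one image row,
# with pixel and detector-cell boundaries already mapped to a common axis.
import numpy as np

def dd_row_weights(pixel_bounds, det_bounds):
    """Return w[m, n]: normalized overlap of detector cell m with pixel n."""
    n_pix = len(pixel_bounds) - 1
    n_det = len(det_bounds) - 1
    w = np.zeros((n_det, n_pix))
    for m in range(n_det):
        d_lo, d_hi = det_bounds[m], det_bounds[m + 1]
        for n in range(n_pix):
            p_lo, p_hi = pixel_bounds[n], pixel_bounds[n + 1]
            overlap = max(0.0, min(d_hi, p_hi) - max(d_lo, p_lo))
            w[m, n] = overlap / (d_hi - d_lo)   # normalize by detector-cell width
    return w

# Example: 4 pixels of width 1 and 3 detector cells of width 1.3 on a common axis
pixel_bounds = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
det_bounds = np.array([0.05, 1.35, 2.65, 3.95])
print(dd_row_weights(pixel_bounds, det_bounds))
```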

2.3.3 IDDM

As shown in Fig. 2.3, instead of mapping the detector to the center line of the

image row of interest, the IDDM maps the detector to the lower and upper

boundaries of the image row of interest. At each image row boundary (lower or
upper), the weighting coefficient is computed similarly to that of the DDM using Eq. (A3).
The final weighting coefficient is the average of the weighting
coefficients calculated at the upper and lower boundaries of the image row of interest. As
in the DDM, the mapping direction changes every 45 degrees to minimize the
interpolation errors. The 2D IDDM can also be extended to the 3D cone beam
geometry by applying the same strategy as the DDM. Compared to the DDM, the
IDDM considers the contributions of all the related pixels in one row, which results
in a higher model accuracy.

Figure 2.3. Improved distance-driven model assuming a 2D fan beam geometry.
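A minimal sketch of the IDDM averaging step is given below, assuming two hypothetical sets of detector boundaries mapped onto the lower and upper boundaries of one image row; as in the DDM sketch above, the normalization is only a stand-in for Eq. (A3).

```python
# Minimal sketch of the IDDM weighting for one image row: the final weight is
# the average of two DDM-style normalized overlaps, one computed at the lower
# and one at the upper boundary of the row, mirroring the description above.
import numpy as np

def overlap_weights(pixel_bounds, det_bounds):
    """DDM-style normalized overlap of each detector cell with each pixel."""
    w = np.zeros((len(det_bounds) - 1, len(pixel_bounds) - 1))
    for m in range(w.shape[0]):
        d_lo, d_hi = det_bounds[m], det_bounds[m + 1]
        for n in range(w.shape[1]):
            p_lo, p_hi = pixel_bounds[n], pixel_bounds[n + 1]
            w[m, n] = max(0.0, min(d_hi, p_hi) - max(d_lo, p_lo)) / (d_hi - d_lo)
    return w

pixel_bounds = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
det_on_lower = np.array([0.10, 1.40, 2.70, 4.00])  # mapped onto the lower row boundary
det_on_upper = np.array([0.00, 1.30, 2.60, 3.90])  # mapped onto the upper row boundary

w_iddm = 0.5 * (overlap_weights(pixel_bounds, det_on_lower)
                + overlap_weights(pixel_bounds, det_on_upper))
print(w_iddm)
```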


2.4 Results

2.4.1 Theoretical Verification

We qualitatively analyze each model in terms of accuracy, high resolution

reconstruction ability, and the likelihood of introducing artifacts assuming a 2D

parallel beam geometry. The conclusions apply to the fan beam geometry and

the conclusions in the 2D cases can always be extended to the 3D cone beam
cases. We analyze the performance of each model in three cases, namely, where the
detector cell size is smaller than, comparable to, or larger than the image pixel size.
The case where the detector cell size is larger than the pixel size tests the
high resolution reconstruction ability, because a reconstructed image pixel size
smaller than the detector cell size can be regarded as high resolution
reconstruction relative to the detector resolution. For simplicity, we analyze

the performance of each model assuming a one row image and a one view

projection. The results can be easily extended to more general cases with

multiple image rows and multiple projection views. For each case, the image

pixel size is fixed while the detector cell size keeps changing.

2.4.1.1 AIM

Fig. 2.4 shows the AIM in three cases assuming a 2D parallel beam geometry.

Because the AIM considers that the detector cell has width, the dotted line represents

the virtual line from the focal spot to the detector cell center. The pixels that have

interactions with the ray path are highlighted with different colors. One can see
that the AIM considers the contribution from all the related pixels. Because the
weighting coefficient involves the calculation of the exact overlapped area
between each pixel and each ray path, the computational cost of the AIM is high;
in return, the AIM achieves a high accuracy.

Figure 2.4. Analysis of the area integral model assuming a 2D parallel geometry

with a one-row image and one projection angle with one detector cell. (a), (b) and
(c) are the cases where the detector cell size is comparable to, larger than, and smaller
than the image pixel size, respectively.

2.4.1.2 DDM

The DDM considers that the detector cell has width. Compared to the AIM, instead of
calculating the overlapped area, the DDM only needs to calculate the overlapped
length between each pixel and each detector cell. Fig. 2.5 shows the DDM in
three different cases. For each case, the DDM might omit the contribution from some
pixels. For example, in case (a), where the detector cell size is comparable
with the image pixel size, there are three pixels (p2, p3, p4) that have interactions
with the ray path. However, the DDM omits the contribution of p4 and considers p3
as a full contribution, which is not exactly the case. Similar behaviors may be
observed in cases (b) and (c). We will see that the IDDM overcomes this drawback
elegantly. Theoretically, compared to the AIM, the DDM has a lower computational cost
but also a lower accuracy.

Figure 2.5. Same as Fig. 2.4 but for the DDM.

2.4.1.3 IDDM

The IDDM needs to calculate the overlapped length between each ray path and

the lower and upper boundaries of the image row of interest. As indicated in Fig.

2.6, for each case, the IDDM considers all the contributions of the related pixels.

For example, in case (a), there are three pixels (p2, p3, p4) that have interactions
with the ray path. Unlike the DDM, the IDDM considers the contribution of pixel
p4 and does not regard p2 as a full contribution. The same holds for cases (b) and (c): the
IDDM overcomes the shortcomings of the DDM. Because the lower boundary of one
image row is the upper boundary of the next image row, the computational cost
of the IDDM is the same as that of the DDM. In summary, the IDDM has a computational cost
as low as the DDM, and its accuracy should be comparable to that of the AIM.


Figure 2.6. Same as Fig. 2.4 but for the IDDM.

2.4.2 Real Data Experiment

In order to verify and compare the performances of the AIM, DDM and IDDM in

practical applications, a physical phantom experiment was performed on a GE

Discovery CT750 HD scanner with a circular scanning trajectory at Wake Forest

University Health Sciences. After appropriate pre-processing, we extracted a

sinogram of the central slice in typical equiangular fan beam geometry with a

radius of the scanning trajectory of 538.5 mm. Over a 360° scanning range, 984
projection views were uniformly sampled. For each projection, 888 detector cells
were equiangularly distributed. The field of view was 249.2 mm in radius and the
iso-center spatial resolution was 584 µm. In addition to the original projection data with
888 detector cells, we combined two, four and eight detector cells into one and
obtained three other sets of projection data with 444, 222 and 111 detector
cells to simulate different levels of low-resolution projections, respectively. Then
the images were reconstructed from the four sets of projection data using the

AIM-, DDM- and IDDM-based methods.

We used the OS-SART [38] algorithm with different system models. The

parameters for the OS-SART algorithms were set to be the same. The subset

size of the OS-SART was set to 41 and the initial image was set to zero. All the

reconstructed image sizes were 512 × 512 to cover the whole field of view (FOV),
which defines the pixel size as 973.3 × 973.3 µm². In the original projection

dataset, the image pixel size and the detector cell size are comparable. For the

other three projection datasets, the image pixel size is smaller than the detector

cell size. As the detector cell number decreases from 444 to 111, the detector

cell sizes become larger and larger than the fixed image pixel size. Therefore, we

can test the capability of high resolution reconstructions using these three

projection datasets. In practical applications, the reconstructed image pixel size

is always comparable or smaller than the detector cell size, so we didn’t evaluate

the case that the detector cell size is smaller than the image pixel size. Fig. 2.7

shows the full view of the physical phantom.
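As an illustration of the update rule used here, the following is a minimal sketch of one OS-SART pass with a stand-in random sparse system matrix, using the standard SART row- and column-sum weighting; the matrix, subset count and relaxation factor are placeholders rather than the settings of this experiment (which used 41 subsets, zero initialization and the AIM/DDM/IDDM projectors).

```python
# Minimal sketch of one OS-SART pass (one sweep over all ordered subsets).
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
M, N = 120, 64                      # rays, pixels (illustrative sizes)
A = sparse_random(M, N, density=0.2, format="csr", random_state=1)
x_true = rng.random(N)
p = A @ x_true                      # noiseless projections for the example

x = np.zeros(N)                     # zero initialization, as in the experiments
n_subsets, relax = 4, 1.0
subsets = np.array_split(rng.permutation(M), n_subsets)

for s in subsets:                   # one iteration = one pass over all subsets
    A_s = A[s]
    row_sum = np.asarray(A_s.sum(axis=1)).ravel()     # per-ray normalization
    col_sum = np.asarray(A_s.sum(axis=0)).ravel()     # per-pixel normalization
    residual = (p[s] - A_s @ x) / np.maximum(row_sum, 1e-12)
    x += relax * (A_s.T @ residual) / np.maximum(col_sum, 1e-12)

print(np.linalg.norm(A @ x - p))    # data mismatch after one pass
```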

Using the aforementioned sinograms, the three system models were

evaluated and compared quantitatively in terms of computational cost, image

noise and the spatial resolution.

(A) Computational Cost: The computationally intensive part of the algorithm was
implemented in C++ and interfaced with Matlab. The code was run on a
PC platform (4.0 GB RAM, 3.2 GHz CPU). With the four projection datasets,
the running time of one iteration for the AIM-, DDM- and IDDM-based methods is
shown in Fig. 2.8.

Figure 2.7. Reference images reconstructed from 984 views using the OS-SART
algorithm after 50 iterations. The display window is [0, 600 HU].

Figure 2.8. The running time of one iteration for the AIM-, DDM-, and IDDM-based
methods with different numbers of detector cells in the projection datasets. The
reconstructed image size is 512 × 512.

The IDDM- (green line) and DDM-based (blue line) methods

have similar computational cost which is much less than the AIM-based (red line)

method. Specifically, in the 111 detector cells case, the computational cost of the

DDM-, IDDM- and AIM-based methods for one iteration are ~8.47, ~9.86 and

~112.74 seconds, respectively. The computational cost of the AIM-based method is
about 13 and 12 times higher than that of the DDM- and IDDM-based methods,

respectively.

(B) Image Noise: The standard deviation of a homogeneous region in the
reconstructed images was calculated as the image noise. The results from each
projection dataset are shown in Fig. 2.9. There is a common trend in the four
cases, namely, the curves all first decrease to a minimum, then increase, and finally
tend to level off. This is because, at the beginning, the low frequency components
are reconstructed, which results in smoother images. Then the high frequency
components (details) are reconstructed. Meanwhile, noise is
reconstructed from the noise in the projection data, and the noise level
finally levels off. For the case of 888 detector cells, the noise performance of the three
model-based methods is very similar before a certain iteration number (about 30
iterations). After that, the IDDM-based method gives slightly larger noise than
the DDM- and AIM-based methods. The behavior of the noise level is very similar
between the cases of 888 detector cells and 444 detector cells. In both cases, the
IDDM reconstructed slightly smaller image noise, and this becomes more


apparent as the detector cell size increases (e.g., 222 detector cells). We noticed
that the noise behavior of the 111 detector cells case is different from the others. This
might be due to two reasons. First, this case represents ultra-high spatial
resolution reconstruction compared to the detector cell size. Second, the
reconstructed image noise is composed of projection noise and model error.
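For reference, the noise metric used above can be sketched as follows; the region of interest and the HU conversion constant are illustrative placeholders rather than the values used in this experiment.

```python
# Minimal sketch of the image-noise metric: the standard deviation of a
# homogeneous region of interest, reported in HU. ROI and mu_water are placeholders.
import numpy as np

def roi_noise_hu(image_mu, mu_water=0.019, roi=(slice(200, 240), slice(200, 240))):
    """Return the standard deviation (in HU) of the attenuation image over the ROI."""
    hu = 1000.0 * (image_mu - mu_water) / mu_water   # linear attenuation -> HU
    return float(np.std(hu[roi]))

image = np.full((512, 512), 0.019) + 0.0005 * np.random.default_rng(0).standard_normal((512, 512))
print(round(roi_noise_hu(image), 1))
```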

(C) Spatial Resolution: The spatial resolution was calculated as the full-width-
at-half-maximum (FWHM) along the edge in the red square region indicated in
Fig. 2.7 [39]. The results for the different cases are shown in Fig. 2.10. For the case
of 888 detector cells (upper left), as the iteration number increases, the measured spatial
resolution of the three methods first improves, then worsens slightly, and finally
levels off. The worsening might be caused by the noise in the projection data.
Overall, the spatial resolution performance of the three methods is very similar

Figure 2.9. The image noise (HU) with respect to the iteration number. Upper left: 888
detector cells; upper right: 444 detector cells; lower left: 222 detector cells; lower
right: 111 detector cells.


and they all achieve the best spatial resolution at 19 iterations (DDM: 1.5530 mm;

IDDM: 1.4632 mm; AIM: 1.4985 mm).
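For reference, a minimal sketch of an FWHM-type resolution measurement from an edge profile is given below; the actual procedure of [39] may differ in how the edge profile is extracted and interpolated, and the numbers here are synthetic.

```python
# Minimal sketch of an FWHM-based spatial-resolution estimate from an edge
# spread function (ESF): differentiate to get the line spread function (LSF)
# and measure its width at half maximum.
import numpy as np

def fwhm_from_edge(esf, pixel_size_mm):
    """Return the FWHM (in mm) of the LSF obtained from the sampled ESF."""
    lsf = np.abs(np.gradient(esf))
    lsf = lsf / lsf.max()
    above = np.where(lsf >= 0.5)[0]          # samples above half maximum
    return (above[-1] - above[0]) * pixel_size_mm

n = 81
idx = np.arange(n)
esf = 0.5 * (1 + np.tanh((idx - n // 2) / 1.5))   # synthetic blurred edge
print(round(fwhm_from_edge(esf, 0.9733), 3))      # 0.9733 mm: the pixel size used here
```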

For the high resolution reconstruction case with 444 detector cells (upper

right), the behavior of the spatial resolution is similar to the 888 detector cells

case. The best spatial resolutions achieved by the DDM-, IDDM-, and AIM-based
methods all occur at around 23 iterations (DDM: 1.5574 mm, 21 iterations; IDDM:
1.4970 mm, 25 iterations; AIM: 1.5375 mm, 23 iterations). For the ultra-high
resolution reconstruction cases with 222 (lower left) and 111 (lower right)
detector cells, overall, the AIM-based method achieves better spatial
resolution than the DDM- and IDDM-based methods, and the IDDM-based
method achieves better spatial resolution than the DDM-based method. This
phenomenon becomes more apparent for the case of 111 detector cells. In all

the cases, the IDDM-based method can achieve the best spatial resolution at
certain iterations.

Figure 2.10. The measured spatial resolution (mm) with respect to the iteration
number. Upper left: 888 detector cells; upper right: 444 detector cells; lower left: 222
detector cells; lower right: 111 detector cells.

2.5 Discussion and Conclusion

In this chapter, we proposed the IDDM, which offers high accuracy and low
computational cost at the same time. While the three methods have similar
image noise performance, the AIM-based method can offer slightly higher
spatial resolution, especially in the high resolution reconstruction case (i.e., 111
detector cells). Among the three methods, the IDDM-based method always
achieves the highest spatial resolution at a certain iteration number. In summary,
the proposed IDDM is a promising system model since it satisfies the
requirements of high accuracy and low computational cost.

Chapter 3. Thresholding Representation for the $\ell_p$ $(0 < p < 1)$ Regularization

3.1 Background

For more than half a century, it has been a common belief, based on the Nyquist-Shannon theorem [40], that a signal has to be sampled at least at twice its highest frequency to be recovered accurately. Recently, the emerging theory of compressive sensing (CS) [16], [17], [19], [41], [42] shows that a sparse or compressible signal can be accurately reconstructed from far fewer samples than what is required by the Nyquist-Shannon theorem. According to the CS theory, most signals are sparse in some transform domain, and additional constraints can be applied to solve the sparsity-constrained optimization problems effectively. The sparsity of a signal $\mathbf{s}$ is defined as $\|\mathbf{s}\|_0$, officially called the $\ell_0$-norm, which is the number of nonzero elements of $\mathbf{s}$. The $\ell_0$-norm constrained optimization problem is called $\ell_0$ regularization. However, the $\ell_0$ regularization problem is NP-hard, especially when the component number is large [43]. As a consequence, the well-known $\ell_1$-norm has been widely used to relax the $\ell_0$-norm, which can also produce sparse solutions [44], [45]. The $\ell_1$ regularization problem can be transformed into an equivalent convex quadratic optimization problem, so various linear programming techniques can be applied to solve it efficiently [19], [46], [47], [48], [49]. Nevertheless, the $\ell_1$ regularization may yield inconsistent results and fail to recover the sparsest signal [50], [51], [52].

Recently, the $\ell_p$ $(0 < p < 1)$ regularization has attracted great attention since it can generate sparser solutions than the $\ell_1$ regularization. The $\ell_p$ regularization is a non-convex, non-smooth, and non-Lipschitz problem. Thus, we can only obtain local optimal solutions, yet it may produce sparser solutions than the $\ell_1$ regularization [19], [20]. It is nontrivial to perform a thorough theoretical analysis and to develop efficient algorithms for the solutions of the $\ell_p$ regularization. Through a phase diagram study [53], Xu et al. demonstrated that: (1) the $\ell_p$ regularization can produce solutions sparser than those of the $\ell_1$ regularization, and as the value of $p$ becomes smaller, the $\ell_p$ regularization generates sparser optimal solutions; (2) the $\ell_{1/2}$ regularization plays a representative role among all the $\ell_p$ regularizations: when $p \in [1/2, 1)$, the $\ell_{1/2}$ regularization always generates the best sparse solutions, and when $p \in (0, 1/2)$, there is no significant difference between the $\ell_p$ regularizations. What is more, Xu et al. have shown that the solution of the $\ell_{1/2}$ regularization can be analytically expressed in a thresholding form, distinguishing it from other $\ell_p$ $(p \neq 1/2)$ regularizations [54]. This finding permits a fast iterative thresholding algorithm (called the half-threshold filtering algorithm), similar to the iterative hard-threshold filtering algorithm for the $\ell_0$ regularization [22], [23], [24], [25], [26] and the iterative soft-threshold filtering algorithm for the $\ell_1$ regularization [26], [27], [28], [29]. Besides the iterative threshold filtering algorithms for the $\ell_0$, $\ell_{1/2}$ and $\ell_1$ regularizations, we would also like to find analytic thresholding representations for the other $\ell_p$ regularizations and to develop the corresponding iterative threshold filtering algorithms.

In this section, we derive analytic thresholding representations for any $\ell_p$ regularization. We first develop an iterative procedure for the general thresholding representation for any $\ell_p$ regularization with $0 < p < 1$. Then we show the convergence properties of the iterative procedure. By carefully assigning different initial values and iteration numbers, we derive several approximate analytic general thresholding representations for any $\ell_p$ regularization with $0 < p < 1$. The error bounds of each analytic formula are analyzed. Finally, we perform numerical experiments to verify the developed iterative thresholding representation and to analyze the accuracy of each analytic formula.

3.2 General Iterative Thresholding Representation

3.2.1 Problem Description

Mathematically, the problem can be described by the following linear equation:
$$\mathbf{y} = A\mathbf{s} + \boldsymbol{\eta}, \qquad (3.1)$$
where $A \in \mathbb{R}^{M \times N}$ is a known $M \times N$ system matrix, $\mathbf{y} = (y_1, \ldots, y_M)^T \in \mathbb{R}^M$ is a known observation, and $\boldsymbol{\eta} \in \mathbb{R}^M$ represents the noise. Usually, the linear system is ill-posed, and the goal is to solve Eq. (3.1) such that $\mathbf{s} = (s_1, \ldots, s_N)^T \in \mathbb{R}^N$ is of the sparsest structure, that is, $\mathbf{s}$ has the fewest nonzero elements. The corresponding sparsity-constrained problem can be described as
$$\min_{\mathbf{s} \in \mathbb{R}^N} \|\mathbf{s}\|_p \quad \text{s.t.} \quad \|\mathbf{y} - A\mathbf{s}\| < \sigma, \qquad (3.2)$$

where $0 < p < 1$, $\sigma$ is an error term related to the noise level, and $\|\mathbf{s}\|_p$ is the so-called $\ell_p$ norm on $\mathbb{R}^N$, defined as
$$\|\mathbf{s}\|_p = \Big(\sum_{i=1}^{N}|s_i|^p\Big)^{1/p}. \qquad (3.3)$$
Note that the $\ell_p$ expression in Eq. (3.3) is not a proper norm when $p < 1$. The constrained problem in Eq. (3.2) is frequently converted to the following unconstrained problem:
$$\min_{\mathbf{s} \in \mathbb{R}^N}\big\{\|\mathbf{y} - A\mathbf{s}\|^2 + \lambda\|\mathbf{s}\|_p^p\big\}, \qquad (3.4)$$
where $\lambda > 0$ is a weight parameter balancing the data fidelity term and the penalty term.

3.2.2 Threshold Filtering Algorithm for the $\ell_p$ Regularization

Any solution of the $\ell_p$ regularization problem defined by Eq. (3.4) is called an $\ell_p$ solution (including local minimizers). An $\ell_p$ regularization problem permits a thresholding representation if there exists a thresholding function $h$ such that an iterative threshold filtering algorithm can be defined for any of its $\ell_p$ solutions [54]. The general form of a thresholding function $h$ is
$$h(x) = \begin{cases} f(x), & |x| > \tau \\ 0, & \text{otherwise,} \end{cases} \qquad (3.5)$$
where $\tau$ represents the threshold value and $f$ is the defining function. In order to deal with the vector $\mathbf{s}$, a diagonally nonlinear mapping $H$ can be deduced from $h$:
$$H(\mathbf{s}) = \big(h(s_1), h(s_2), \ldots, h(s_N)\big)^T. \qquad (3.6)$$

$H$ is called a thresholding operator. When an $\ell_p$ regularization problem in Eq. (3.4) permits a thresholding representation, every one of its $\ell_p$ solutions can be considered as a fixed point [54]. Hence, an iterative thresholding algorithm can be naturally defined as
$$\mathbf{s}^{n+1} = H\big(B(\mathbf{s}^{n})\big), \qquad (3.7)$$
where $B$ is an affine (linear) transform defined as
$$B(\mathbf{s}) = \mathbf{s} + \mu A^T(\mathbf{y} - A\mathbf{s}), \qquad (3.8)$$
where $\mu$ is a positive parameter (e.g., $\mu < 1/\|A\|^2$) that controls the step length in each iteration. Once we know the thresholding function $h$, the corresponding iterative thresholding algorithm for the $\ell_p$ regularization can be defined.
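As an illustration of Eqs. (3.7) and (3.8), the following is a minimal Python sketch of the generic iterative thresholding loop. It is not the implementation used in this work; the Gaussian test matrix, the step-size choice, and the use of the soft-threshold as a placeholder for $h$ are assumptions for demonstration only.

```python
import numpy as np

def soft_threshold(x, lam):
    # Soft-thresholding (Eq. (3.11)), used here only as a placeholder for h.
    return np.sign(x) * np.maximum(np.abs(x) - lam / 2.0, 0.0)

def iterative_thresholding(A, y, lam, n_iter=200, h=soft_threshold):
    """Generic scheme s^{n+1} = H(B(s^n)) with B(s) = s + mu * A^T (y - A s)."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step length, mu < 1/||A||^2
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        b = s + mu * A.T @ (y - A @ s)            # affine transform B, Eq. (3.8)
        s = h(b, lam)                             # componentwise thresholding H
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 128))
    s_true = np.zeros(128)
    s_true[rng.choice(128, 6, replace=False)] = 1.0
    y = A @ s_true
    s_hat = iterative_thresholding(A, y, lam=0.02)   # illustrative run only
    print(np.round(s_hat, 2).nonzero()[0])
```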

3.2.3 Thresholding Representations for the $\ell_0$, $\ell_{1/2}$ and $\ell_1$ Regularizations

The well-known analytic thresholding representations for the $\ell_0$ and $\ell_1$ regularizations have been well established. Recently, a thorough theoretical analysis and the analytic thresholding representation for the $\ell_{1/2}$ regularization were fully developed by Xu et al. [54]. The corresponding iterative threshold filtering algorithm is called the half-threshold filtering algorithm. The thresholding function $h_{\lambda,1/2}$ for the $\ell_{1/2}$ regularization is defined as
$$h_{\lambda,1/2}(x) = \begin{cases} f_{\lambda,1/2}(x), & |x| > \dfrac{\sqrt[3]{54}}{4}\lambda^{2/3} \\ 0, & \text{otherwise,} \end{cases} \qquad (3.9)$$
where $f_{\lambda,1/2}$ is defined as

$$f_{\lambda,1/2}(x) = \frac{2}{3}x\left(1 + \cos\Big(\frac{2\pi}{3} - \frac{2}{3}\arccos\Big(\frac{\lambda}{8}\Big(\frac{|x|}{3}\Big)^{-3/2}\Big)\Big)\right). \qquad (3.10)$$
When $p = 1$, the thresholding function $h_{\lambda,1}$ for the $\ell_1$ regularization is defined as [27]
$$h_{\lambda,1}(x) = \begin{cases} x - \mathrm{sgn}(x)\,\lambda/2, & |x| > \lambda/2 \\ 0, & \text{otherwise.} \end{cases} \qquad (3.11)$$
The corresponding iterative thresholding algorithm is called the soft-threshold filtering algorithm.
When $p = 0$, the thresholding function $h_{\lambda,0}$ for the $\ell_0$ regularization is defined as
$$h_{\lambda,0}(x) = \begin{cases} x, & |x| > \lambda^{1/2} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.12)$$
The corresponding iterative threshold filtering algorithm is the well-known hard-threshold filtering algorithm [23], [37].
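For reference, the following Python sketch collects the three thresholding functions of Eqs. (3.9)-(3.12) in vectorized form; it is an illustration only, and the small numerical guard on $|x|$ is an added assumption to avoid division warnings.

```python
import numpy as np

def hard_threshold(x, lam):
    # Eq. (3.12): keep x when |x| > sqrt(lam), otherwise zero.
    return np.where(np.abs(x) > np.sqrt(lam), x, 0.0)

def soft_threshold(x, lam):
    # Eq. (3.11): shrink by lam/2 when |x| > lam/2, otherwise zero.
    return np.where(np.abs(x) > lam / 2.0, x - np.sign(x) * lam / 2.0, 0.0)

def half_threshold(x, lam):
    # Eqs. (3.9)-(3.10): half-thresholding of Xu et al. [54].
    x = np.asarray(x, dtype=float)
    tau = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)      # threshold value
    ax = np.maximum(np.abs(x), 1e-12)                            # numerical guard (assumption)
    phi = np.arccos(np.clip((lam / 8.0) * (ax / 3.0) ** (-1.5), -1.0, 1.0))
    f = (2.0 / 3.0) * x * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return np.where(np.abs(x) > tau, f, 0.0)
```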

3.2.4 General Iterative Thresholding Representation for the $\ell_p$ $(0 < p < 1)$ Regularization

The analytic thresholding formulas can easily be incorporated into an iterative threshold filtering algorithm for fast solutions. Besides the analytic thresholding representations for the $\ell_0$, $\ell_{1/2}$ and $\ell_1$ regularizations, we would like to develop a general thresholding representation for any $\ell_p$ $(0 < p < 1)$ regularization. The key is to find a thresholding function $h_{\lambda,p}$, which in turn is equivalent to finding the threshold value $\tau_{\lambda,p}$ and the defining function $f_{\lambda,p}$. Following the same derivation as in [54], we

consider the following problem to derive the threshold value $\tau_{\lambda,p}$ and the defining function $f_{\lambda,p}$. Given a fixed $\mathbf{x} = (x_1, x_2, \ldots, x_N)^T \in \mathbb{R}^N$ as the input, we consider
$$y_i = \arg\min_{y_i \in \mathbb{R}}\big\{(y_i - x_i)^2 + \lambda|y_i|^p\big\}, \quad \lambda > 0. \qquad (3.13)$$
It should be clarified that $y_i$ in Eq. (3.13) is the unknown output of the thresholding function with respect to the known input $x_i$, and it has no relationship with the observation $\mathbf{y}$ in Eq. (3.1). Noticing that all elements share the same form of thresholding function, let us omit the sub-index $i$ for brevity and consider the following general form of the problem:
$$y = \arg\min_{y}\big\{(y - x)^2 + \lambda|y|^p\big\}, \quad \lambda > 0,\; 0 < p < 1. \qquad (3.14)$$
It is equivalent to
$$\begin{cases} y = \arg\min_{y \ge 0}\big\{(y - x)^2 + \lambda y^p\big\}, & \lambda > 0,\; 0 < p < 1 \\ y = \arg\min_{y \le 0}\big\{(y - x)^2 + \lambda(-y)^p\big\}, & \lambda > 0,\; 0 < p < 1. \end{cases} \qquad (3.15)$$
Since the thresholding function is symmetric, we only consider the case $y \ge 0$ in the following derivations. The results can easily be extended to the case $y < 0$.

3.2.4.1 Threshold Value $\tau_{\lambda,p}$

We derive the threshold value $\tau_{\lambda,p}$ in a heuristic way. Let $F$ denote the objective function for the case $y \ge 0$:
$$F(y) = (y - x)^2 + \lambda y^p. \qquad (3.16)$$
When $x \ge \tau_{\lambda,p}$, by considering the first-order optimality condition of $F$, we obtain

$$2(y - x) + \lambda p y^{p-1} = 0. \qquad (3.17)$$
Reformatting Eq. (3.16) gives $\lambda y^p = F - (y - x)^2$; substituting this into Eq. (3.17) (after multiplying the equation by $y$), we arrive at the following equation:
$$(2 - p)y^2 - 2x(1 - p)y + p\big(F - x^2\big) = 0. \qquad (3.18)$$
Noticing that there exists a jump in the thresholding curve, the two roots of Eq. (3.18) should be $y = 0$ and the value of $y$ at the critical point.
Because $F = x^2$ when $y = 0$, $F$ should also be equal to $x^2$ at the critical point $\tau_{\lambda,p}$. Substituting $F = x^2$ into Eq. (3.18), we obtain the non-zero root
$$y = \frac{2x(1 - p)}{2 - p}. \qquad (3.19)$$
Letting $F = x^2$ and substituting Eq. (3.19) into Eq. (3.16), we arrive at
$$x^2 = \Big(x - \frac{2x(1 - p)}{2 - p}\Big)^2 + \lambda\Big(\frac{2x(1 - p)}{2 - p}\Big)^p, \qquad (3.20)$$
which yields the threshold value $\tau_{\lambda,p}$:
$$\tau_{\lambda,p} = \frac{2 - p}{2}\,(1 - p)^{\frac{p-1}{2-p}}\,\lambda^{\frac{1}{2-p}}. \qquad (3.21)$$
Some previous works have reported the same finding [55], [56], [57]. In [55], an alternative formula for the threshold $\tau_{\lambda,p}$ was derived by assuming that the $\ell_p$ regularization term satisfies a series of predefined assumptions in a generalized gradient projection method (GGPM). [56] and [57] also reported this finding in different settings. Different from the aforementioned previous works, we derive the threshold value $\tau_{\lambda,p}$ in a heuristic way, and the derivation is very clean.
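As a quick sanity check of Eq. (3.21), the following hedged Python snippet confirms that the general threshold reduces to the known half-threshold at $p = 1/2$, to $\lambda/2$ at $p = 1$, and to $\sqrt{\lambda}$ as $p \to 0$; the particular $\lambda$ value is arbitrary.

```python
import numpy as np

def threshold_value(lam, p):
    # Eq. (3.21): tau = (2-p)/2 * (1-p)^((p-1)/(2-p)) * lam^(1/(2-p))
    return (2 - p) / 2.0 * (1 - p) ** ((p - 1) / (2 - p)) * lam ** (1.0 / (2 - p))

lam = 5.0
print(np.isclose(threshold_value(lam, 0.5), 54 ** (1 / 3) / 4 * lam ** (2 / 3)))  # True
print(np.isclose(threshold_value(lam, 1.0), lam / 2))                              # True
print(np.isclose(threshold_value(lam, 1e-9), np.sqrt(lam)))                        # ~True as p -> 0
```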


3.2.4.2 General Iterative Thresholding Representation

Although the classic Abel theorem and previous research have shown that, among the $\ell_p$ regularizations with $0 < p < 1$, only the solutions of the $\ell_{1/2}$ and $\ell_{2/3}$ regularizations permit analytic thresholding representations [54], we are still interested in developing analytic thresholding representations for the other $\ell_p$ regularizations with $0 < p < 1$. Here, we first develop a general iterative recursive solution for any $\ell_p$ regularization with $0 \le p \le 1$, which can yield very good approximations after a few iterations. Then, we prove the convergence properties of the iterative procedure.
By reformatting Eq. (3.17), we have
$$y = x - \frac{\lambda p}{2}y^{p-1}, \qquad (3.22)$$
which can be considered as a typical fixed-point problem. Let $\varphi(y) = x - \frac{\lambda p}{2}y^{p-1}$; from Eq. (3.22), we immediately obtain the following iterative recursive formula:
$$y_{n+1} = \varphi(y_n), \quad \text{given } y_0,\; n = 0, 1, \ldots. \qquad (3.23)$$
Combining Eqs. (3.23) and (3.21), we immediately have the general iterative thresholding representation
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\,\dfrac{\lambda p}{2}\big(y_n(|x|)\big)^{p-1}, \; n \to \infty, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.24)$$
Letting $t = x - y$ and substituting it into Eq. (3.22), we obtain the following simplified fixed-point problem:
$$t = \frac{\lambda p}{2}(x - t)^{p-1}, \qquad (3.25)$$

which implies an iterative recursive solution for $t$:
$$t_{n+1}(x) = \frac{\lambda p}{2}\big(x - t_n\big)^{p-1}, \quad \text{given } t_0,\; n = 0, 1, \ldots. \qquad (3.26)$$
Combining Eqs. (3.26) and (3.21), we immediately have an alternative general thresholding representation
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\,t_n(|x|), \; n \to \infty, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.27)$$
Since $\lambda$ is usually very small, a few iteration steps can yield very good approximations. We will prove and analyze the convergence properties of the general iterative thresholding representation.
It is worth mentioning that the direct iterative thresholding representation in Eqs. (3.22) and (3.23) follows directly from the well-known fixed-point theory. While similar work has been reported by other researchers [55], [56], [57], they did not address the optimization of the analytic thresholding formulas. In this work, further analysis is performed to derive an alternative general iterative thresholding representation (Eqs. (3.26) and (3.27)) for any $\ell_p$ regularization, which makes it possible to optimize and derive different analytic thresholding representation formulas.
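A minimal Python sketch of the recursion in Eqs. (3.26)-(3.27) is given below. The fixed number of iterations, the zero initial start, and the small numerical guard are assumptions made only for illustration.

```python
import numpy as np

def general_threshold(x, lam, p, n_iter=30, t0=0.0):
    """Approximate h_{lam,p}(x) by iterating t_{n+1} = (lam*p/2) * (|x| - t_n)^(p-1)."""
    x = np.asarray(x, dtype=float)
    tau = (2 - p) / 2.0 * (1 - p) ** ((p - 1) / (2 - p)) * lam ** (1.0 / (2 - p))  # Eq. (3.21)
    ax = np.abs(x)
    t = np.full_like(ax, t0)
    for _ in range(n_iter):                                  # fixed-point recursion, Eq. (3.26)
        t = 0.5 * lam * p * np.maximum(ax - t, 1e-12) ** (p - 1)
    y = x - np.sign(x) * t                                   # Eq. (3.27)
    return np.where(ax > tau, y, 0.0)

if __name__ == "__main__":
    xs = np.linspace(0.0, 10.0, 6)
    print(general_threshold(xs, lam=5.0, p=0.5))   # closely follows the analytic half-threshold
```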

3.2.4.3 Proof of Convergence Properties

Here, we justify the convergence properties of the proposed general iterative thresholding formula. We first prove the convergence. Then we analyze the convergence speed for the case $x \ge 0$. It is straightforward to show that the properties also apply to the case $x \le 0$.

Theorem 1: The objective function $F$ has a unique global minimum $y^*$, which is the fixed point of $\varphi(y)$.

Proof: We first show that the objective function $F$ has no more than one global minimum. Then we prove the existence of such a global minimum, which is actually the fixed point of $\varphi(y)$. The first- and second-order derivatives of $F$ are
$$F'(y) = 2y + \lambda p y^{p-1} - 2x = 2\big(y - \varphi(y)\big), \qquad (3.28)$$
$$F''(y) = 2 - \lambda p(1 - p)y^{p-2} = 2 - 2\varphi'(y). \qquad (3.29)$$
Because $F'''(y) = \lambda p(1 - p)(2 - p)y^{p-3} > 0$ for any $0 < p < 1$ when $y > 0$, $F''$ is a monotonically increasing function with respect to $y$. Setting $F'' = 0$, we obtain the unique inflection point $\tilde{y} = \big(\frac{\lambda p(1-p)}{2}\big)^{\frac{1}{2-p}}$. Hence, $F'' < 0$ when $y < \tilde{y}$, and $F'' \ge 0$ when $y \ge \tilde{y}$. Since we are interested in $x \ge \tau_{\lambda,p} = \frac{2-p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}}$, the nonzero root in Eq. (3.19) satisfies
$$y \ge \bar{y} = \frac{2(1-p)}{2-p}\,\tau_{\lambda,p} = \frac{2(1-p)}{2-p}\cdot\frac{2-p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}} = \big[(1-p)\lambda\big]^{\frac{1}{2-p}}.$$
Moreover,
$$\frac{\bar{y}}{\tilde{y}} = \frac{\big[(1-p)\lambda\big]^{\frac{1}{2-p}}}{\big[\lambda p(1-p)/2\big]^{\frac{1}{2-p}}} = \Big(\frac{2}{p}\Big)^{\frac{1}{2-p}} > 1$$
(because $2 < 2/p < \infty$ and $\tfrac{1}{2} < \tfrac{1}{2-p} < 1$, so $(2/p)^{\frac{1}{2-p}} > \sqrt{2} > 1$), which implies that $\bar{y} > \tilde{y}$. Therefore, $F$ is a strictly convex function when $y \in [\bar{y}, \infty)$, and there is no more than one global minimum.

Now let us show that such a global minimum exists when $y \ge \bar{y}$ and is actually the fixed point of $\varphi(y)$. Because $F''(y) > 0$ when $y \ge \bar{y}$, $F'$ is monotonically increasing when $y \ge \bar{y}$. The minimum of $F'$ is
$$F'_{\min} = F'(\bar{y}) = 2\bar{y} + \lambda p\,\bar{y}^{\,p-1} - 2x \le 2\bar{y} + \lambda p\,\bar{y}^{\,p-1} - 2\tau_{\lambda,p} = 0. \qquad (3.30)$$
Eq. (3.30) shows that $F'(\bar{y}) \le 0$; since $F'(y) \to \infty$ as $y \to \infty$, $F'$ must also attain positive values. Because $F'$ is a continuous and monotonically increasing function when $y \in [\bar{y}, \infty)$, $F'$ must have one and only one zero point $y^* \in [\bar{y}, \infty)$, which is the fixed point of $\varphi(y)$ and the unique global minimum of $F$. This completes the proof. ∎

Theorem 2: The general iterative thresholding formula converges to $y^*$ from any starting point $y_0 \in (\tilde{y}, \infty)$.

We use the fixed-point theorem to prove Theorem 2, which amounts to proving that the following two conditions are satisfied:
(a) There exists a closed interval $[a, b]$ such that $\varphi(y) \in [a, b]$ for all $y \in [a, b]$.
(b) There exists a positive constant $\beta < 1$ such that $|\varphi'(y)| \le \beta$ for all $y \in [a, b]$.

First, let us consider the second condition. We have shown that when $y < \tilde{y}$, $F'' < 0$, that is, $\varphi'(y) > 1$; when $y \ge \tilde{y}$, $F'' \ge 0$, that is, $\varphi'(y) \le 1$. Because
$$\varphi'(y) = \frac{\lambda p(1-p)}{2}y^{p-2} > 0, \qquad (3.31)$$
$$\varphi''(y) = -\frac{\lambda p(1-p)(2-p)}{2}y^{p-3} < 0, \qquad (3.32)$$
for any $y \in (\tilde{y}, \infty)$ there exists $\beta < 1$ satisfying $0 < \varphi'(y) \le \beta < 1$. Hence, any $[a, b] \subset (\tilde{y}, \infty)$ satisfies $|\varphi'(y)| \le \beta < 1$.

Second, let us find $a$ and $b$ to satisfy the first condition. We have shown that $\bar{y} > \tilde{y}$ and $y^* \in [\bar{y}, \infty)$, which imply that $y^* \in (\tilde{y}, \infty)$. $F'$ is a monotonically increasing function when $y \in (\tilde{y}, \infty)$, and $y^*$ is the zero point of $F'$. For any $y \in (\tilde{y}, y^*]$, $F' \le 0$, that is, $y \le \varphi(y)$. For any $y \in (y^*, \infty)$, $F' > 0$, that is, $y > \varphi(y)$. Based on the above analysis, we conclude that $a$ and $b$ should satisfy $a \in (\tilde{y}, y^*]$ and $b \in [y^*, \infty)$. Since we are interested in $y \ge \bar{y}$, we can choose $a \in (\tilde{y}, y^*]$ and $b \in [y^*, \infty)$ to meet both conditions. Hence, for any starting point $y_0 \in (\tilde{y}, \infty)$, the developed general iterative thresholding representation converges to the fixed point. ∎

3.2.4.4 Convergence Analysis

Because the fixed point is $y^*$, we have $y^* = \varphi(y^*)$. Assume any initial point $y_0 \in [\bar{y}, \infty)$. Since $\varphi(y)$ is continuous, by the mean value theorem we have
$$|y_{n+1} - y^*| = |\varphi(y_n) - \varphi(y^*)| = |\varphi'(\xi_n)(y_n - y^*)|, \qquad (3.33)$$
where $\xi_n$ lies between $y_n$ and $y^*$. From Eqs. (3.31) and (3.32), the maximum of $\varphi'(y)$ over the relevant range is attained at $\bar{y}$:
$$\varphi'_{\max} = \varphi'(\bar{y}) = \frac{\lambda p(1-p)}{2}\Big[(1-p)^{\frac{1}{2-p}}\lambda^{\frac{1}{2-p}}\Big]^{p-2} = \frac{p}{2}. \qquad (3.34)$$
Because $\lim_{y \to \infty}\varphi(y) = x$ (and $x \ge \tau_{\lambda,p}$), the minimum of $\varphi'(y)$ over the relevant range is
$$\varphi'_{\min} = \varphi'(x) = \frac{\lambda p(1-p)}{2}x^{p-2}. \qquad (3.35)$$
Therefore, we have $\frac{\lambda p(1-p)}{2}x^{p-2} < \beta \le \frac{p}{2} < 1$. From Eq. (3.33), we can estimate the convergence speed as
$$|y_{n+1} - y^*| = |\varphi'(\xi_n)(y_n - y^*)| \le \beta|y_n - y^*| \le \beta^2|y_{n-1} - y^*| \le \cdots \le \beta^{n+1}|y_0 - y^*|, \qquad (3.36)$$
$$\Big(\frac{\lambda p(1-p)}{2}x^{p-2}\Big)^{n+1}|y_0 - y^*| \le \beta^{n+1}|y_0 - y^*| \le \Big(\frac{p}{2}\Big)^{n+1}|y_0 - y^*|. \qquad (3.37)$$

3.3 General Analytic Thresholding Representation for the $\ell_p$ $(0 < p < 1)$ Regularization

3.3.1 Approximate Analytic Formulas

Because a general analytic thresholding representation for the $\ell_p$ regularization is not available, the proposed iterative general thresholding procedure can provide approximate yet analytic solutions by using a few iterations and choosing appropriate initial starts. By selecting the iteration number and different initial starts, we can derive several analytic formulas that approximate the true thresholding functions. This directly leads to the iterative general threshold filtering algorithm. Here, we investigate several possible choices of initial values and iteration numbers, resulting in several analytic thresholding formulas. For convenience, we employ the iterative thresholding procedure in Eqs. (3.26) and (3.27), and we denote by $x_T = \tau_{\lambda,p}$ the threshold location and by $y_T = \frac{2(1-p)}{2-p}x_T = \big[(1-p)\lambda\big]^{\frac{1}{2-p}}$ the corresponding nonzero output at the threshold (cf. Eq. (3.19)), so that $x_T - y_T = \frac{p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}}$. The five analytic general thresholding representation formulas are as follows:

(i) Initial start $t_0(|x|) = 0$ and iteration number $n = 2$:
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\cdot\dfrac{\lambda p}{2}\Big(|x| - \dfrac{\lambda p}{2}|x|^{p-1}\Big)^{p-1}, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.38)$$
(ii) Initial start $t_0(|x|) = |x| - y_T$ and iteration number $n = 2$:
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\cdot\dfrac{\lambda p}{2}\Big(|x| - \dfrac{p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}}\Big)^{p-1}, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.39)$$

(iii) Initial start $t_0(|x|) = x_T - y_T$ and iteration number $n = 2$:
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\cdot\dfrac{\lambda p}{2}\bigg(|x| - \dfrac{\lambda p}{2}\Big(|x| - \dfrac{p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}}\Big)^{p-1}\bigg)^{p-1}, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.40)$$
(iv) Initial start $t_0(|x|) = \alpha|x|^{p-1}$ and iteration number $n = 1$.
Because $t_0(|x|)$ passes through $(x_T,\, x_T - y_T)$, we obtain $\alpha = \frac{\lambda p}{2^{2-p}}\big(\frac{2-p}{1-p}\big)^{1-p}$ and $t_0(|x|) = \frac{\lambda p}{2^{2-p}}\big(\frac{2-p}{1-p}\big)^{1-p}|x|^{p-1}$. After one iteration, the final approximate analytic thresholding formula is
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\cdot\dfrac{\lambda p}{2}\Big(|x| - \dfrac{\lambda p}{2^{2-p}}\big(\tfrac{2-p}{1-p}\big)^{1-p}|x|^{p-1}\Big)^{p-1}, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.41)$$
It should be mentioned that the substitution $t = x - y$ has been used to derive Eq. (3.41).

(v) Initial start $t_0(|x|) = \alpha\big(|x| - y_T\big)^{p-1}$ and iteration number $n = 1$.
Because $t_0(|x|)$ passes through $(x_T,\, x_T - y_T)$, we obtain $\alpha = \lambda\big(\frac{p}{2}\big)^{2-p}(1-p)^{p-1}$ and $t_0(|x|) = \lambda\big(\frac{p}{2}\big)^{2-p}(1-p)^{p-1}\big(|x| - y_T\big)^{p-1}$. After one iteration, the approximate analytic thresholding formula is
$$h_{\lambda,p}(x) = \begin{cases} x - \mathrm{sgn}(x)\cdot\dfrac{\lambda p}{2}\Big(|x| - \lambda\big(\tfrac{p}{2}\big)^{2-p}(1-p)^{p-1}\big(|x| - y_T\big)^{p-1}\Big)^{p-1}, & |x| > \tau_{\lambda,p} \\ 0, & \text{otherwise.} \end{cases} \qquad (3.42)$$
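For illustration, the following Python sketch implements the analytic formula (iii) of Eq. (3.40) together with the threshold of Eq. (3.21); it is a demonstration under the notation above, not production code, and the small numerical guard is an added assumption.

```python
import numpy as np

def analytic_threshold_iii(x, lam, p):
    """Approximate thresholding via Eq. (3.40): two recursion steps from t0 = x_T - y_T."""
    x = np.asarray(x, dtype=float)
    xT = (2 - p) / 2.0 * (1 - p) ** ((p - 1) / (2 - p)) * lam ** (1.0 / (2 - p))  # Eq. (3.21)
    yT = 2.0 * (1 - p) / (2 - p) * xT            # output value at the threshold, cf. Eq. (3.19)
    ax = np.abs(x)
    t1 = 0.5 * lam * p * np.maximum(ax - (xT - yT), 1e-12) ** (p - 1)
    t2 = 0.5 * lam * p * np.maximum(ax - t1, 1e-12) ** (p - 1)
    return np.where(ax > xT, x - np.sign(x) * t2, 0.0)

if __name__ == "__main__":
    print(analytic_threshold_iii(np.array([1.0, 3.0, 5.0, 8.0]), lam=5.0, p=0.5))
```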

3.4 Numerical Verification

We perform a numerical experiment to find the optimal solution of Eq. (3.13) and to verify our theoretical results for the derived analytic thresholding representations.

Figure 3.1. Numerical $\ell_p$ kernel functions (thin lines; from top to bottom, $p = 0, 0.1, \ldots, 1.0$) for $\lambda = 5$, together with the corresponding known analytic $\ell_0$, $\ell_{1/2}$ and $\ell_1$ kernel functions.

We considered a positive real-valued signal $x \in (0, 10)$ as the input and fixed $\lambda = 5$. Given any $p \in [0, 1]$, we numerically searched for the solution $y$ for each input $x$. Fig. 3.1 shows the numerical results, which are thresholding-like functions, and the numerical results for the $\ell_0$, $\ell_{1/2}$ and $\ell_1$ regularizations are consistent with the analytic solutions. Using this numerical method, we verify the correctness of the proposed iterative thresholding representation and illustrate the convergence properties and the convergence speed with respect to different initial starts. We also analyze and compare the accuracy of each approximate analytic thresholding representation.
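A hedged Python sketch of the brute-force search used to produce numerical kernels of the kind shown in Fig. 3.1 is given below; the grid resolution and range are assumptions for illustration.

```python
import numpy as np

def numerical_kernel(x, lam, p, grid=np.linspace(0.0, 12.0, 120001)):
    # Solve y = argmin_y (y - x)^2 + lam * |y|^p by exhaustive search on a grid (x >= 0).
    cost = (grid - x) ** 2 + lam * np.abs(grid) ** p
    return grid[np.argmin(cost)]

xs = np.linspace(0.01, 10.0, 200)
ys = [numerical_kernel(x, lam=5.0, p=0.5) for x in xs]   # thresholding-like curve vs. x
```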

3.4.1 Numerical Analysis of the Iterative Thresholding Representation

In order to verify the correctness of the general iterative thresholding procedure and to compare the convergence speed for different initial starts, we show how different initial starts affect the iteration process. Fig. 3.2 (a) shows the plots of the true $t$ and the different initial starts $t_0$ with respect to $x$, given $p = 0.5$ and $\lambda = 5$. With the increase of $x$, the initial start $t_0(x) = 0$ (the initial value for the analytic thresholding formula (i)) becomes closer and closer to the true value. Different from the initial start $t_0(x) = 0$, the initial start $t_0(x) = x - y_T$ (the initial value for the analytic thresholding formula (ii)) moves further and further from the true value as $x$ increases. We can therefore predict that the initial start $t_0(x) = x - y_T$ outperforms the initial start $t_0(x) = 0$ only when $x$ is smaller than a certain value. The initial start $t_0(x) = x_T - y_T$ (the initial value for the analytic thresholding formula (iii)) is good when $x$ is small, and it outperforms the initial value $t_0(x) = x - y_T$ for all $x$ $(x > x_T)$. Obviously, the initial starts $t_0(x) = \alpha x^{p-1}$ (the initial value for the analytic thresholding formula (iv)) and $t_0(x) = \alpha(x - y_T)^{p-1}$ (the initial value for the analytic thresholding formula (v)) are closer to the true value than the other three initial starts for all $x$ $(x > x_T)$.

Figure 3.2. Comparison of the different initial starts for the analytic thresholding formulas. (a) Plots of the true $t(x)$ and the different initial starts $t_0(x)$ with respect to $x$ for $p = 0.5$ and $\lambda = 5$; (b) plots of the absolute difference between the true $t$ and the initial starts $t_0$ for the analytic thresholding formulas (iv) and (v).

Fig. 3.2 (b) shows the plots of the absolute differences between the true $t(x)$ and the initial starts $t_0(x) = \alpha x^{p-1}$ and $t_0(x) = \alpha(x - y_T)^{p-1}$, respectively. We can see that the initial start $t_0(x) = \alpha x^{p-1}$ has the smaller absolute difference for all $x$, which implies that the initial start $t_0(x) = \alpha x^{p-1}$ might be the best choice.
In order to further confirm the above analysis, we show the iteration process with the initial starts $t_0(x) = 0$ and $t_0(x) = x - y_T$ for the analytic thresholding formulas (i) and (ii), respectively. Because the analytic thresholding representation for the $\ell_{1/2}$ regularization has been developed [54], we selected $p = 0.5$ with $\lambda = 5$ and used the analytic solution as the truth and reference. The iteration number was set to 3. From Fig. 3.3, we can see that the iteration process is convergent. As the iteration number increases, the iterative processes approach the truth. However, the initial starts significantly affect the iteration. Fig. 3.3 (b) shows that, for a smaller $x$ (i.e., $x = 2.77$), the iteration process with the initial start $t_0 = x - y_T$ approaches the true solution very well after 2 iterations, while more than 3 iterations are needed to reach a good approximation with the initial start $t_0 = 0$. Fig. 3.3 (c) shows that, for a greater $x$ (i.e., $x = 5$), the iteration process with the initial start $t_0 = 0$ converges faster than the one with the initial start $t_0 = x - y_T$. These results are consistent with the aforementioned theoretical analysis, that is, the initial value $t_0 = x - y_T$ is better than the initial value $t_0 = 0$ only when $x$ is smaller than a certain value. In fact, $t_0 = 0$ and $t_0 = x - y_T$ correspond to $y_0 = x$ and $y_0 = y_T$, respectively. In order to show this point clearly,

Figure 3.3. Comparisons of the iteration process with different initial values $t_0$ for $p = 0.5$. The iteration number is fixed at 3 and $\lambda = 5$. In (a), the solid line is the exact analytic solution for $\ell_{1/2}$; the dashed and dotted lines represent the iteration processes with the initial values $t_0 = 0$ and $t_0 = x - y_T$, respectively. The regions around $x = 2.77$ (b) and around $x = 5$ (c) are magnified, with the iteration numbers marked.

Fig. 3.4 shows the plots of $\varphi(y)$ with respect to $y$ for $x = 2.77$ and $x = 5$. For a smaller $x$ (i.e., $x = 2.77$), $y_0 = y_T$ is closer to the fixed point $y^*$ than $y_0 = x$, which confirms that the iteration started from $y_0 = y_T$ ($t_0 = x - y_T$) converges faster than that from $y_0 = x$ ($t_0 = 0$). For a greater $x$ (i.e., $x = 5$), $y_0 = x$ is closer to the fixed point $y^*$ than $y_0 = y_T$, which confirms that the iteration started from $y_0 = x$ ($t_0 = 0$) converges faster than that from $y_0 = y_T$ ($t_0 = x - y_T$).

Figure 3.4. Plots of $\varphi(y)$ with respect to $y$ (dark solid line) assuming (a) $x = 2.77$ and (b) $x = 5$, with $p = 0.5$ and $\lambda = 5$.

Therefore, it is possible to construct concise, approximate yet analytic thresholding formulas by choosing an appropriate initial start and iteration number.

3.4.2 Analysis of Different Analytic Thresholding Formulas

3.4.2.1 Error Boundaries of the General Analytic Thresholding Formulas

Here, we analyze the error bounds of each formula. For convenience, we only analyze the case $x \ge 0$ for each formula; the results can be directly extended to the case $x < 0$. Denoting the mapping in Eq. (3.26) by $\psi(t) = \frac{\lambda p}{2}(x - t)^{p-1}$ and applying the mean value theorem, we have
$$|t_{n+1} - t_n| = |\psi(t_n) - \psi(t_{n-1})| = |\psi'(\zeta_n)(t_n - t_{n-1})| \le \beta|t_n - t_{n-1}|. \qquad (3.43)$$
By Eqs. (3.43) and (3.36), we have
$$|t_{n+1} - t_n| = |(t_n - t^*) - (t_{n+1} - t^*)| \ge |t_n - t^*| - |t_{n+1} - t^*| \ge |t_n - t^*| - \beta|t_n - t^*| = (1 - \beta)|t_n - t^*|. \qquad (3.44)$$
Combining Eqs. (3.43) and (3.44), we arrive at
$$|t_n - t^*| \le \frac{1}{1-\beta}|t_{n+1} - t_n| \le \frac{\beta}{1-\beta}|t_n - t_{n-1}| \le \cdots \le \frac{\beta^{n}}{1-\beta}|t_1 - t_0|. \qquad (3.45)$$
Because $\frac{\lambda p(1-p)}{2}|x|^{p-2} < \beta \le \frac{p}{2} < 1$, for any admissible initial start $t_0$ and iteration number $n$, we can estimate the error bound incurred when using $t_n$ to approximate $t^*$.

In the following, $\beta_{\min} = \frac{\lambda p(1-p)}{2}x^{p-2}$ and $\beta_{\max} = \frac{p}{2}$ denote the two ends of the admissible range of $\beta$.

(i) Initial start $t_0(x) = 0$ and $n = 2$:
$$|t_1 - t_0| = \frac{\lambda p}{2}x^{p-1}, \qquad \frac{\beta_{\min}^2}{1-\beta_{\min}}\cdot\frac{\lambda p}{2}x^{p-1} < \frac{\beta^2}{1-\beta}|t_1 - t_0| \le \frac{(p/2)^2}{1-p/2}\cdot\frac{\lambda p}{2}x^{p-1} = \frac{\lambda p^3}{4(2-p)}x^{p-1}. \qquad (3.46)$$

(ii) Initial start $t_0(x) = x - y_T$ and $n = 2$:
$$|t_1 - t_0| = |x - x_T| = \Big|x - \tfrac{2-p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}}\Big|, \qquad \frac{\beta_{\min}^2}{1-\beta_{\min}}|t_1 - t_0| < \frac{\beta^2}{1-\beta}|t_1 - t_0| \le \frac{p^2}{2(2-p)}|t_1 - t_0|. \qquad (3.47)$$

(iii) Initial start $t_0(x) = x_T - y_T$ and $n = 2$:
$$|t_1 - t_0| = \frac{\lambda p}{2}\Big|y_T^{\,p-1} - \big(x - (x_T - y_T)\big)^{p-1}\Big|, \qquad \frac{\beta_{\min}^2}{1-\beta_{\min}}|t_1 - t_0| < \frac{\beta^2}{1-\beta}|t_1 - t_0| \le \frac{p^2}{2(2-p)}|t_1 - t_0|. \qquad (3.48)$$

(iv) Initial start $t_0(x) = \frac{\lambda p}{2^{2-p}}\big(\frac{2-p}{1-p}\big)^{1-p}x^{p-1}$ and $n = 1$:
$$|t_1 - t_0| = \Big|\frac{\lambda p}{2}\big(x - t_0(x)\big)^{p-1} - t_0(x)\Big|, \qquad \frac{\beta_{\min}}{1-\beta_{\min}}|t_1 - t_0| < \frac{\beta}{1-\beta}|t_1 - t_0| \le \frac{p}{2-p}|t_1 - t_0|. \qquad (3.49)$$

(v) Initial start $t_0(x) = \lambda\big(\frac{p}{2}\big)^{2-p}(1-p)^{p-1}\big(x - y_T\big)^{p-1}$ and $n = 1$:
$$|t_1 - t_0| = \Big|\frac{\lambda p}{2}\big(x - t_0(x)\big)^{p-1} - t_0(x)\Big|, \qquad \frac{\beta_{\min}}{1-\beta_{\min}}|t_1 - t_0| < \frac{\beta}{1-\beta}|t_1 - t_0| \le \frac{p}{2-p}|t_1 - t_0|. \qquad (3.50)$$

Fig. 3.5 shows the error bound curves of each analytic formula with $p = 0.5$ and $\lambda = 5$. While it is difficult to exactly predict the accuracy of each analytic formula from the error bound curves, they give some hints about the behavior of each formula. For example, as $x$ increases, the interval between the lower and upper bound curves of the analytic formula (i) becomes smaller and the lower bound curve finally approaches 0; formula (i) should therefore perform very well for large $x$ values. On the contrary, the analytic formula (ii) becomes worse as $x$ increases. The lower bound errors of the formulas (iii), (iv) and (v) are all small, so they may have similar accuracy.

3.4.2.2 Numerical Analysis of Different Analytic Formulas

Because it is difficult to compare the accuracy of each analytic formula using the error bound curves, we performed experiments to verify the correctness of the approximate analytic formulas and to compare the accuracy of each formula. Fig. 3.6 shows the plots of $t(x)$ with respect to $x$ for the different analytic formulas with $p = 0.5$ and $\lambda = 5$. For a smaller $x$ (Fig. 3.6(b)), the analytic formula (ii) is more accurate than the analytic formula (i). As $x$ increases, the analytic formula (i) becomes more accurate than the analytic formula (ii), and eventually more accurate than all the other three formulas. The accuracies of the analytic formulas (iv) and (v) are very similar, except that the analytic formula (iv) is slightly more accurate than the analytic formula (v) for smaller $x$ values.

Figure 3.5. Plots of the error bounds of the different analytic formulas with $p = 0.5$ and $\lambda = 5$ (upper and lower bound curves are shown for each of the formulas F1-F5, i.e., formulas (i)-(v)).

Figure 3.6. Plots of $t(x)$ with respect to $x$ for the different analytic formulas with $p = 0.5$ and $\lambda = 5$. In (a), the thickest solid line is the true $t(x)$ and the other lines correspond to the different analytic formulas. Two smaller regions are magnified in (b) and (c).

Overall, the analytic formula (iii) is the most accurate one. Therefore, the analytic formula (iii) is recommended when the signal values are unknown. The analytic formulas (iv) and (v) are very accurate over the whole range of $x$ values and can also be used for signals with unknown values. The analytic formula (ii) is not recommended for large-valued signals because it deviates too much from the true value when $x$ is large. If the signal values are extremely large, the analytic formula (i) is also recommended because it is then even slightly more accurate than the analytic formula (iii).

Chapter 4. General Thresholding Algorithm for $\ell_p$ $(0 < p < 1)$ Regularization Based CT Reconstruction

In Chapter 3, we derived several analytic general thresholding formulas for the $\ell_p$ regularization. In Chapter 4.1, we first develop the corresponding iterative general thresholding algorithm for the $\ell_p$ regularization. The general thresholding algorithms are adequate and efficient for large-scale CS-based applications, and the regularization parameters have specific meanings. Then we investigate strategies for selecting the optimal regularization parameters. Finally, we apply the proposed general thresholding algorithm to CT reconstruction. Because the $\ell_p$ regularization is a non-convex problem with many local optima, an iterative algorithm is more likely to converge to a local optimum instead of the global optimum. In Chapter 4.2, we propose an alternating iteration algorithm in a thresholding form, based on the general analytic thresholding representation, for better convergence properties. The proposed alternating iteration algorithm alternately optimizes one $\ell_1$ and one $\ell_p$ $(0 < p < 1)$ regularization problem. While the $\ell_p$ regularization helps to search for a sparser solution, the $\ell_1$ regularization helps to keep the solution from drifting away from the global optimum and prevents the iteration from being trapped in non-optimal local optima. Below we introduce each algorithm in detail and perform extensive experiments to evaluate the performance of each algorithm.

4.1 General Thresholding Algorithm

Similar to the $\ell_1$ regularization, there are two approaches to solve the $\ell_p$ regularization problems, namely the reweighted norm algorithm and the thresholding algorithm. The reweighted norm algorithm converts the non-convex $\ell_p$ regularization into a form of convex $\ell_1$ regularization by applying some weighting functions [58], [59], [60], [61]. As a result, the algorithms for solving the $\ell_1$ regularization can be applied immediately, such as the $\ell_1$-reweighted [58], $\ell_1$-greedy [62] and generalized $\ell_1$-greedy algorithms [63]. The thresholding-type algorithm is a simple iterative procedure followed by a threshold filtering operation. The derived general analytic thresholding representations for the $\ell_p$ $(0 < p < 1)$ regularization permit fast algorithms, similar to the iterative hard-thresholding algorithm [22], [23], [24], [25], [26] for the $\ell_0$ regularization, the iterative soft-thresholding algorithm [26], [27], [28], [29] for the $\ell_1$ regularization, and the iterative half-thresholding algorithm [54] for the $\ell_{1/2}$ regularization. Below we develop the general iterative thresholding algorithm, investigate strategies for optimal regularization parameter selection, apply the algorithm to CT reconstruction, and compare it with the reweighted algorithm.

4.1.1 Algorithm Development

4.1.1.1 SART-type Reconstruction

Here we use the same discrete imaging model as in Chapter 2.2 and consider the following linear equation:
$$A\mathbf{f} = \mathbf{g}. \qquad (4.1)$$

Here, $\mathbf{g} \in \mathbb{R}^M$ represents the projection measurements, $A = \{a_{ij}\} \in \mathbb{R}^{M \times N}$ represents the system matrix, and $\mathbf{f} \in \mathbb{R}^N$ still represents the unknown image. We use the improved distance-driven model (IDDM) [64] to calculate the system matrix. Here we rewrite Eq. (2.3) using $A$ and $\mathbf{g}$ as
$$g_i = \sum_{j=1}^{N} a_{ij}f_j, \quad i = 1, 2, \ldots, M. \qquad (4.2)$$
The problem in Eq. (4.1) can be solved by the SART algorithm [38]:
$$f_j^{k+1} = f_j^{k} + \omega_k\,\frac{1}{a_{+j}}\sum_{i=1}^{M}\frac{a_{ij}}{a_{i+}}\big(g_i - A_i\mathbf{f}^{k}\big), \qquad (4.3)$$
where $a_{+j} = \sum_{i=1}^{M}a_{ij} > 0$, $a_{i+} = \sum_{j=1}^{N}a_{ij} > 0$, $A_i$ represents the $i$th row of the system matrix $A$, $k$ represents the iteration index, and $0 < \omega_k < 2$ is a relaxation parameter that influences the convergence speed. In this work, $\omega_k$ is set to 1 for convenience. Let $\Lambda_{+N} \in \mathbb{R}^{N \times N}$ and $\Lambda_{M+} \in \mathbb{R}^{M \times M}$ be diagonal matrices with diagonal elements $\Lambda^{+N}_{j,j} = \frac{1}{a_{+j}}$ and $\Lambda^{M+}_{i,i} = \frac{1}{a_{i+}}$, respectively. Eq. (4.3) can then be rewritten in the following matrix form:
$$\mathbf{f}^{k+1} = \mathbf{f}^{k} + \omega_k\,\Lambda_{+N}A^T\Lambda_{M+}\big(\mathbf{g} - A\mathbf{f}^{k}\big). \qquad (4.4)$$
It is worth mentioning that the fast weighting strategy used in the fast iterative shrinkage-thresholding algorithm [65] can be applied to speed up the convergence of the SART-type algorithm.
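For clarity, a compact Python sketch of the SART update of Eqs. (4.3)-(4.4) for a dense system matrix is given below. Real CT implementations compute the projector on the fly and use ordered subsets, so this dense-matrix version is illustrative only, and the small epsilon guard is an added assumption.

```python
import numpy as np

def sart_update(A, g, f, omega=1.0, eps=1e-12):
    """One SART iteration, Eq. (4.4): f <- f + omega * Lambda_{+N} A^T Lambda_{M+} (g - A f)."""
    col_sums = A.sum(axis=0) + eps            # a_{+j}
    row_sums = A.sum(axis=1) + eps            # a_{i+}
    residual = (g - A @ f) / row_sums         # Lambda_{M+} (g - A f)
    return f + omega * (A.T @ residual) / col_sums

# usage sketch: repeat sart_update(...) for k = 1, 2, ... until the stopping criterion is met
```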

4.1.1.2 Reweighted Algorithm

Based on Eq. (4.1), we rewrite the $\ell_p$ regularization problem in Eq. (3.4) for CT reconstruction as
$$\min_{\mathbf{f} \in \mathbb{R}^N}\big\{\|\mathbf{g} - A\mathbf{f}\|^2 + \lambda\|\mathbf{f}\|_p^p\big\}. \qquad (4.5)$$

While most images are not sparse themselves, they can be sparsely represented in an appropriate transform domain. In the medical imaging and image processing fields, the discrete gradient transform (DGT) is widely used as the sparsifying transform, assuming a piecewise constant image model [29], [66], [67]. Assume the digital image $\mathbf{f} = (f_{i,j}) \in \mathbb{R}^{I \times J}$ satisfies the Neumann boundary conditions:
$$f_{0,j} = f_{1,j} \text{ and } f_{I+1,j} = f_{I,j} \text{ for } 1 \le j \le J, \qquad f_{i,0} = f_{i,1} \text{ and } f_{i,J+1} = f_{i,J} \text{ for } 1 \le i \le I. \qquad (4.6)$$
The standard isotropic DGT is
$$d_{i,j} = \sqrt{\big(f_{i,j} - f_{i+1,j}\big)^2 + \big(f_{i,j} - f_{i,j+1}\big)^2}. \qquad (4.7)$$
The $\ell_p$ $(0 \le p \le 1)$ regularization problem with the DGT as the sparsity constraint can be expressed as
$$\min_{\mathbf{f} \in \mathbb{R}^N}\big\{\|\mathbf{g} - A\mathbf{f}\|^2 + \lambda\,\big\||\nabla\mathbf{f}|\big\|_p^p\big\}, \qquad (4.8)$$
where $|\nabla\mathbf{f}| = (d_{i,j}) \in \mathbb{R}^{I \times J}$ represents the gradient magnitude image of $\mathbf{f}$. According to the definition of the $p$-norm, Eq. (4.8) can be expanded as
$$\min_{\mathbf{f} \in \mathbb{R}^N}\Big\{\|\mathbf{g} - A\mathbf{f}\|^2 + \lambda\sum_{i=1}^{I}\sum_{j=1}^{J}|d_{i,j}|^p\Big\}. \qquad (4.9)$$

When $p = 1$, Eq. (4.9) corresponds to the total variation (TV) optimization problem, which has been studied extensively in the CS field [68]. When $0 < p < 1$, Eq. (4.9) is a non-convex optimization problem. The idea of the reweighted algorithm is to approximate the $p$-norm regularization term by a weighted 1-norm regularization term. Hence, the non-convex problem in Eq. (4.9) can be converted to the following convex problem:
$$\min_{\mathbf{f} \in \mathbb{R}^N}\big\{\|\mathbf{g} - A\mathbf{f}\|^2 + \lambda\,\big\|W|\nabla\mathbf{f}|\big\|_1\big\}, \qquad (4.10)$$
where $W = (w_{i,j}) \in \mathbb{R}^{I \times J}$ is a non-negative weighting factor. Note that this weighting matrix has no relationship with the system matrix notation used in Chapter 2. Eq. (4.10) can be rewritten as
$$\min_{\mathbf{f} \in \mathbb{R}^N}\Big\{\|\mathbf{g} - A\mathbf{f}\|^2 + \lambda\sum_{i=1}^{I}\sum_{j=1}^{J}\big|w_{i,j}\cdot d_{i,j}\big|\Big\}. \qquad (4.11)$$

In this work, we select the weighting factor as
$$w_{i,j} = \Big(\sqrt{\varepsilon + d_{i,j}^2}\Big)^{p-1}, \qquad (4.12)$$
where $\varepsilon$ is a small constant to avoid singularity. The regularization term $\||\nabla\mathbf{f}|\|_p^p = \sum_{i=1}^{I}\sum_{j=1}^{J}|d_{i,j}|^p$ can also be approximated by other reweighted functions, such as $\sum_{i=1}^{I}\sum_{j=1}^{J}\big(d_{i,j}^2 + \varepsilon^2\big)^{p/2}$ [61]. It is straightforward to see that Eqs. (4.8) and (4.9) are equivalent to Eqs. (4.10) and (4.11) when the weighting factor in Eq. (4.12) is used. The weights can be treated as constants, and the convex regularization problem in Eqs. (4.10) and (4.11) can then be solved efficiently using line search or steepest-descent-type methods [63], [66], [67], [69] in the framework of SART. The steepest descent direction $\mathbf{v} = (v_{i,j}) \in \mathbb{R}^{I \times J}$ can be computed as $v_{i,j} = \frac{\partial\big(\sum_{i=1}^{I}\sum_{j=1}^{J}|d_{i,j}|\big)}{\partial f_{i,j}}$, and its unit vector can be taken as $\mathbf{v}/\|\mathbf{v}\|_2$. During

the iteration process, the weighting factor is applied to adjust the steepest descent direction, which can result in better convergence properties and improved image quality. One problem is how to calculate the weights $w_{i,j}$ without first knowing $d_{i,j}$. Thanks to the properties of iterative algorithms, we can alternately update $d_{i,j}$ and redefine the weights $w_{i,j}$. Table 4.1 shows the pseudo-code of the reweighted algorithm for Eq. (4.8). The four steps are applied iteratively until the stopping criteria are met, which can be a preset maximum iteration number or an error threshold in the projection domain.

TABLE 4.1 PSEUDO-CODE OF THE REWEIGHTED ALGORITHM

Choose the parameters $\varepsilon$ and $\lambda$. Initialization: $\hat{\mathbf{f}}^0 = \mathbf{f}^0 = \mathbf{0}$, $W^0 = \mathbf{1}$, $k = 1$, $t_1 = 1$.
While the stopping criteria are not satisfied:
  1. Update $\mathbf{f}^k$ with the SART (Eq. (4.4));
  2. Gradient descent:
     2.1 Calculate the gradient magnitude image $d$ of $\mathbf{f}^k$ and the steepest descent direction $\mathbf{v}$;
     2.2 Set the reweighted descent direction $\mathbf{v} = W^{k-1}\mathbf{v}$;
     2.3 $\beta = \|\mathbf{f}\|_2 / \|\mathbf{v}\|_2$;
     2.4 Update $\mathbf{f}^k = \mathbf{f}^k + \alpha\beta\mathbf{v}$;
     2.5 Scale the parameter $\alpha$ by a factor slightly smaller than 1 (e.g., 0.997);
  3. Fast weighting:
     3.1 $t_{k+1} = \big(1 + \sqrt{1 + 4t_k^2}\big)/2$;
     3.2 $\hat{\mathbf{f}}^k = \mathbf{f}^k + \big(\tfrac{t_k - 1}{t_{k+1}}\big)\big(\mathbf{f}^k - \mathbf{f}^{k-1}\big)$;
  4. Update the weights:
     4.1 Calculate the gradient magnitude image $d$ of $\hat{\mathbf{f}}^k$;
     4.2 Update $W^k$ by Eq. (4.12);
     4.3 Prepare for the next iteration: $\hat{\mathbf{f}}^k \to \mathbf{f}^k$, $k + 1 \to k$;
Output the final reconstructed image.
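A condensed Python sketch of the image-domain part of Table 4.1 (steps 2 and 4) is given below, assuming simple forward differences with replicated edges for the DGT. It is not the original implementation: the function names, the unit normalization of the descent direction, and the reading of "steepest descent direction" as the negative gradient of the TV term are assumptions for illustration.

```python
import numpy as np

def dgt(f):
    # Isotropic discrete gradient transform, Eq. (4.7), with replicated (Neumann) edges.
    dx = f - np.vstack([f[1:, :], f[-1:, :]])      # f_{i,j} - f_{i+1,j}
    dy = f - np.hstack([f[:, 1:], f[:, -1:]])      # f_{i,j} - f_{i,j+1}
    return dx, dy, np.sqrt(dx ** 2 + dy ** 2)

def tv_descent_direction(f, smooth=1e-8):
    # Descent (negative gradient) direction of sum_{i,j} d_{i,j}, epsilon-smoothed.
    dx, dy, d = dgt(f)
    d = np.maximum(d, smooth)
    grad = (dx + dy) / d
    grad[1:, :] -= (dx / d)[:-1, :]                # contribution of d_{i-1,j}
    grad[:, 1:] -= (dy / d)[:, :-1]                # contribution of d_{i,j-1}
    return -grad

def reweighted_tv_step(f, W_prev, p, alpha, eps=1e-4):
    """Steps 2 and 4 of Table 4.1 for an image f that has just been updated by SART."""
    v = W_prev * tv_descent_direction(f)           # 2.2 reweighted descent direction
    v = v / (np.linalg.norm(v) + 1e-12)
    beta = np.linalg.norm(f)                       # 2.3 step scale ||f|| / ||v|| with ||v|| = 1
    f = f + alpha * beta * v                       # 2.4 descent update (alpha decays, e.g., by 0.997)
    _, _, d = dgt(f)
    W_new = np.sqrt(eps + d ** 2) ** (p - 1)       # 4.2 weight update, Eq. (4.12)
    return f, W_new
```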


4.1.1.3 General Thresholding Algorithm

When the imaging object itself is sparse, Eq. (3.7) can be directly used as a general thresholding algorithm. When the imaging object is not sparse in its original domain, we can apply an appropriate sparsifying transform, such as the DGT, to sparsify the original image. Let $\Phi$ represent a sparsifying transform such as the DGT. Assume $\mathbf{f}$ can be sparsely represented in the transform domain as $\boldsymbol{\theta} = \Phi\mathbf{f}$. If $\Phi$ has an inverse transform $\Phi^{-1}$, we have $\mathbf{f} = \Phi^{-1}\boldsymbol{\theta}$. The general thresholding algorithm in Eq. (3.7) can then be modified as
$$\mathbf{f}^{k+1} = \Phi^{-1}H_{\lambda_k,p}\Big(\Phi\big(\mathbf{f}^{k} + \omega_k A^T(\mathbf{g} - A\mathbf{f}^{k})\big)\Big). \qquad (4.13)$$
The corresponding general thresholding algorithm in the framework of SART can be expressed as
$$\mathbf{f}^{k+1} = \Phi^{-1}H_{\lambda_k,p}\Big(\Phi\big(\mathbf{f}^{k} + \omega_k\,\Lambda_{+N}A^T\Lambda_{M+}(\mathbf{g} - A\mathbf{f}^{k})\big)\Big). \qquad (4.14)$$
If the thresholding operator $H$ is available, Eq. (4.14) is an algorithm for the following $\ell_p$ regularization problem with $0 \le p \le 1$:
$$\min_{\mathbf{f} \in \mathbb{R}^N}\big\{\|\mathbf{g} - A\mathbf{f}\|^2 + \lambda\|\Phi\mathbf{f}\|_p^p\big\}. \qquad (4.15)$$
In this work, we select the DGT as the sparsifying transform. As a result, Eq. (4.14) is a thresholding solution of Eq. (4.15) for any $0 \le p \le 1$. To apply the general thresholding solution in Eq. (4.14), we first need to know the inverse transform of the DGT. Unfortunately, the inverse transform of the DGT does not exist. Thanks to the

properties of iterative algorithms, by employing a technique similar to those in [29], [70], we can construct a pseudo-inverse transform of the DGT based on the derived analytic general thresholding formulas. Assume $\mathbf{f}^k$ is the intermediate image from the OS-SART iteration step. Based on the general thresholding operator in Eqs. (3.5) and (3.6) and the definition of the gradient magnitude in Eq. (4.7), when $d_{i,j}^k < \tau_{\lambda,p} = \frac{2-p}{2}(1-p)^{\frac{p-1}{2-p}}\lambda^{\frac{1}{2-p}}$, we can adjust the values of $f_{i,j}^k$, $f_{i+1,j}^k$ and $f_{i,j+1}^k$ to make $d_{i,j} = 0$. When $d_{i,j}^k \ge \tau_{\lambda,p}$, we can reduce the values of $\big(f_{i,j}^k - f_{i+1,j}^k\big)^2$ and $\big(f_{i,j}^k - f_{i,j+1}^k\big)^2$ to perform the general thresholding operation. One possible pseudo-inverse DGT transform is constructed as follows (see [29], [70] for details):

$$f_{i,j}^{k+1} = \frac{1}{4}\Big(2f_{i,j}^{k+1,a} + f_{i,j}^{k+1,b} + f_{i,j}^{k+1,c}\Big), \qquad (4.16)$$
$$f_{i,j}^{k+1,a} = \begin{cases} \dfrac{2f_{i,j}^{k} + f_{i+1,j}^{k} + f_{i,j+1}^{k}}{4}, & \text{if } d_{i,j}^{k} < \tau_{\lambda_k,p} \\[2mm] \dfrac{d_{i,j}^{k} + h_{\lambda_k,p}\big(d_{i,j}^{k}\big)}{2d_{i,j}^{k}}\,f_{i,j}^{k} + \dfrac{d_{i,j}^{k} - h_{\lambda_k,p}\big(d_{i,j}^{k}\big)}{4d_{i,j}^{k}}\big(f_{i+1,j}^{k} + f_{i,j+1}^{k}\big), & \text{if } d_{i,j}^{k} \ge \tau_{\lambda_k,p} \end{cases} \qquad (4.17)$$
$$f_{i,j}^{k+1,b} = \begin{cases} \dfrac{f_{i,j}^{k} + f_{i-1,j}^{k}}{2}, & \text{if } d_{i-1,j}^{k} < \tau_{\lambda_k,p} \\[2mm] \dfrac{d_{i-1,j}^{k} - h_{\lambda_k,p}\big(d_{i-1,j}^{k}\big)}{2d_{i-1,j}^{k}}\,f_{i-1,j}^{k} + \dfrac{d_{i-1,j}^{k} + h_{\lambda_k,p}\big(d_{i-1,j}^{k}\big)}{2d_{i-1,j}^{k}}\,f_{i,j}^{k}, & \text{if } d_{i-1,j}^{k} \ge \tau_{\lambda_k,p} \end{cases} \qquad (4.18)$$
$$f_{i,j}^{k+1,c} = \begin{cases} \dfrac{f_{i,j}^{k} + f_{i,j-1}^{k}}{2}, & \text{if } d_{i,j-1}^{k} < \tau_{\lambda_k,p} \\[2mm] \dfrac{d_{i,j-1}^{k} - h_{\lambda_k,p}\big(d_{i,j-1}^{k}\big)}{2d_{i,j-1}^{k}}\,f_{i,j-1}^{k} + \dfrac{d_{i,j-1}^{k} + h_{\lambda_k,p}\big(d_{i,j-1}^{k}\big)}{2d_{i,j-1}^{k}}\,f_{i,j}^{k}, & \text{if } d_{i,j-1}^{k} \ge \tau_{\lambda_k,p} \end{cases} \qquad (4.19)$$

Because the derived analytic thresholding formulas can be used for any $p$ $(0 \le p \le 1)$, Eq. (4.16) can serve as the inverse DGT in the general thresholding framework. It can be called the general-thresholding-based pseudo-inverse transform of the DGT. The pseudo-code of the general thresholding algorithm for the $\ell_p$ regularization problem in Eq. (4.15) is summarized in Table 4.2.
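A hedged Python sketch of the pseudo-inverse DGT of Eqs. (4.16)-(4.19) is given below; the edge replication, the small numerical guard, and the generic callable `h` (any thresholding function $h_{\lambda,p}$, e.g., one of the analytic formulas of Chapter 3) are assumptions for illustration.

```python
import numpy as np

def pseudo_inverse_dgt(f, h, tau):
    """One application of Eqs. (4.16)-(4.19): threshold the gradient magnitude d and
    map the result back to the image domain. `h` is a thresholding function h_{lam,p}."""
    down = np.vstack([f[1:, :], f[-1:, :]])            # f_{i+1,j} (replicated edge)
    right = np.hstack([f[:, 1:], f[:, -1:]])           # f_{i,j+1}
    up = np.vstack([f[:1, :], f[:-1, :]])              # f_{i-1,j}
    left = np.hstack([f[:, :1], f[:, :-1]])            # f_{i,j-1}
    d = np.sqrt((f - down) ** 2 + (f - right) ** 2)    # gradient magnitude, Eq. (4.7)
    hd = h(d)
    ds = np.maximum(d, 1e-12)

    fa = np.where(d < tau,                              # Eq. (4.17)
                  (2 * f + down + right) / 4.0,
                  (d + hd) / (2 * ds) * f + (d - hd) / (4 * ds) * (down + right))

    d_up = np.vstack([d[:1, :], d[:-1, :]])             # d_{i-1,j}
    hd_up = np.vstack([hd[:1, :], hd[:-1, :]])
    ds_up = np.maximum(d_up, 1e-12)
    fb = np.where(d_up < tau,                            # Eq. (4.18)
                  (f + up) / 2.0,
                  (d_up - hd_up) / (2 * ds_up) * up + (d_up + hd_up) / (2 * ds_up) * f)

    d_le = np.hstack([d[:, :1], d[:, :-1]])              # d_{i,j-1}
    hd_le = np.hstack([hd[:, :1], hd[:, :-1]])
    ds_le = np.maximum(d_le, 1e-12)
    fc = np.where(d_le < tau,                             # Eq. (4.19)
                  (f + left) / 2.0,
                  (d_le - hd_le) / (2 * ds_le) * left + (d_le + hd_le) / (2 * ds_le) * f)

    return (2 * fa + fb + fc) / 4.0                       # Eq. (4.16)
```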

The algorithm in Table 4.2 can serve as a general thresholding algorithm for any other sparse transform, such as the Fourier transform, the wavelet transform, the discrete difference transform (DDT), etc. Similar to the DGT, we can also

TABLE 4.2 PSEUDO-CODE OF THE GENERAL THRESHOLDING ALGORITHM

Set $\lambda$ and $p$. Initialization: $\mathbf{f}^0 = \hat{\mathbf{f}}^0 = \mathbf{0}$, $k = 1$, $t_1 = 1$.
While the stopping criteria are not met:
  1. Update $\mathbf{f}^k$ with SART (Eq. (4.4));
  2. General-threshold filtering:
     2.1 Perform the sparse transform (e.g., the DGT in Eq. (4.7));
     2.2 Filter with any analytic general-threshold function in Eqs. (3.38)-(3.42);
     2.3 Perform the inverse sparse transform (e.g., Eqs. (4.16)-(4.19));
  3. Fast weighting:
     3.1 $t_{k+1} = \big(1 + \sqrt{1 + 4t_k^2}\big)/2$;
     3.2 $\hat{\mathbf{f}}^k = \mathbf{f}^k + \big(\tfrac{t_k - 1}{t_{k+1}}\big)\big(\mathbf{f}^k - \mathbf{f}^{k-1}\big)$;
     3.3 Prepare for the next iteration: $\hat{\mathbf{f}}^k \to \mathbf{f}^k$, $k + 1 \to k$;
Output the final reconstruction result.

construct the pseudo-inverse transform of the DDT, which shares the same formulas as the DGT except for the definition of $d_{i,j}^k$.

4.1.1.4 Optimization of Regularization Parameters

It is well known that the regularization parameter $\lambda$ is critical to the performance of iterative reconstruction algorithms. Generally speaking, there is no golden rule for selecting the optimal regularization parameters; in most cases, cross-validation is the most favorable method. Nevertheless, when some prior information about the signal (e.g., its sparsity) is available, intelligent parameter selection strategies can be developed. Following the derivation of the optimal regularization parameters for the half-thresholding algorithm in [54], we can develop useful parameter selection strategies for the proposed general thresholding algorithm for any $\ell_p$ $(0 < p < 1)$ regularization. Assume the original signal we want to recover in Eq. (3.4) is $K$-sparse, that is, the solution of the $\ell_p$ $(0 < p < 1)$ regularization problem in Eq. (3.4) is restricted to the subspace $\Gamma_K = \{\mathbf{s} = (s_1, s_2, \ldots, s_N) : \|\mathbf{s}\|_0 = K\}$ of $\mathbb{R}^N$. Let $\mathbf{s}^*$ be the solution (original signal) of the $\ell_p$ $(0 < p < 1)$ regularization, which is a fixed point satisfying $\mathbf{s}^* = H\big(B(\mathbf{s}^*)\big)$. Without loss of generality, we assume $[B(\mathbf{s}^*)]_1 \ge [B(\mathbf{s}^*)]_2 \ge \cdots \ge [B(\mathbf{s}^*)]_N$, where $[B(\mathbf{s}^*)]_j$ represents the $j$th component of $B(\mathbf{s}^*)$. Because an $\ell_p$ regularization problem permits a thresholding representation, the following equivalences hold:
$$\big|[B(\mathbf{s}^*)]_i\big| > \tau_{\lambda^*,p} = \tfrac{2-p}{2}(1-p)^{\frac{p-1}{2-p}}(\lambda^*)^{\frac{1}{2-p}} \;\Leftrightarrow\; i \in \{1, 2, \ldots, K\},$$
$$\big|[B(\mathbf{s}^*)]_j\big| \le \tau_{\lambda^*,p} = \tfrac{2-p}{2}(1-p)^{\frac{p-1}{2-p}}(\lambda^*)^{\frac{1}{2-p}} \;\Leftrightarrow\; j \in \{K+1, \ldots, N\},$$

which imply
$$(1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big\{\big|[B(\mathbf{s}^*)]_{K+1}\big|\big\}^{2-p} \le \lambda^* < (1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big\{\big|[B(\mathbf{s}^*)]_{K}\big|\big\}^{2-p},$$
namely
$$\lambda^* \in \Big[(1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big|[B(\mathbf{s}^*)]_{K+1}\big|^{2-p},\; (1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big|[B(\mathbf{s}^*)]_{K}\big|^{2-p}\Big). \qquad (4.20)$$
Eq. (4.20) provides a narrow range in which the optimal parameter $\lambda^*$ should be located. Because $|[B(\mathbf{s}^*)]_K|$ is the $K$th largest component of $B(\mathbf{s}^*)$ in magnitude, the optimal parameter $\lambda^*$ can be expressed as
$$\lambda^* = (1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\Big[(1-\gamma)\big|[B(\mathbf{s}^*)]_{K+1}\big|^{2-p} + \gamma\big|[B(\mathbf{s}^*)]_{K}\big|^{2-p}\Big], \qquad (4.21)$$
with any $\gamma \in [0, 1)$. Since a larger $\lambda^*$ results in a larger threshold value $\tau_{\lambda^*,p}$ and a sparser solution from the general thresholding algorithm, the most reliable choice of $\lambda^*$ can be specified by taking $\gamma = 0$:
$$\lambda^* = (1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big|[B(\mathbf{s}^*)]_{K+1}\big|^{2-p}. \qquad (4.22)$$

In practical applications, to obtain the optimal parameters without first knowing $\mathbf{s}^*$, we may use the intermediate results $\mathbf{s}^k$ to approximate $\mathbf{s}^*$. As a result, we obtain a general thresholding algorithm with adaptive regularization parameters. By incorporating different regularization parameter setting strategies into Eq. (4.14), we can define several implementation schemes of the general thresholding algorithm. For example,

1) Scheme 1: $\lambda_k$ is chosen by cross-validation.
2) Scheme 2:
$$\lambda_k = (1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big\{\big|[B(\mathbf{s}^k)]_{K+1}\big|\big\}^{2-p}. \qquad (4.23)$$
3) Scheme 3:
$$\lambda_k = \min\Big\{\lambda_{k-1},\; (1-p)^{1-p}\Big(\tfrac{2}{2-p}\Big)^{2-p}\big\{\big|[B(\mathbf{s}^k)]_{K+1}\big|\big\}^{2-p}\Big\}. \qquad (4.24)$$

Scheme 3 is a variant of Scheme 2 in the sense that, in addition to the same setting as Scheme 2, the parameter $\lambda_k$ is forced to decrease monotonically. When no prior information about the sparsity is available, Scheme 1 is the most favorable one. Scheme 2 and Scheme 3 can be applied when the sparsity is known or when an upper bound on the sparsity of the problem can be estimated (e.g., $\|\mathbf{s}\|_0 \le K$). In our experiments, we run the general thresholding algorithm using Scheme 1 for a fair comparison with the reweighted algorithm.
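A small Python sketch of the adaptive parameter rule of Eqs. (4.23) and (4.24) is shown below. The argument `b_of_s` stands for the vector $B(\mathbf{s}^k)$, and the 0-based indexing convention noted in the comments is an implementation assumption.

```python
import numpy as np

def adaptive_lambda(b_of_s, K, p, lam_prev=None):
    """Schemes 2/3 (Eqs. (4.23)-(4.24)): set lambda_k from the (K+1)-th largest
    magnitude of B(s^k); pass lam_prev to enforce the monotone decrease of Scheme 3."""
    mags = np.sort(np.abs(np.ravel(b_of_s)))[::-1]       # descending magnitudes of [B(s^k)]_j
    c = (1 - p) ** (1 - p) * (2.0 / (2 - p)) ** (2 - p)
    lam = c * mags[K] ** (2 - p)                          # index K (0-based) is the (K+1)-th largest
    return lam if lam_prev is None else min(lam_prev, lam)
```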

4.1.2 Application and Evaluation

To evaluate the performance of the proposed SART-type general thresholding algorithm, we compare it with the well-known reweighted algorithm via numerical simulations and practical CT data. In the CT field, reducing the number of views used for image reconstruction is an effective method for dose reduction. Hence, we compare the performance of the two algorithms for sparse-view reconstruction quantitatively and qualitatively. In all experiments, the relaxation parameter $\omega_k$ of the SART algorithm in Eq. (4.3) is set to 1.0, and the $\varepsilon$ of the reweighted algorithm is set to $10^{-4}$ to avoid singularity.

We select two different approximate analytic thresholding formulas for the proposed SART-type general thresholding algorithm: one is the analytic formula (ii), which is the least accurate, and the other is the analytic formula (iii), which is the most accurate. To compare the performance with the reweighted algorithm, the analytic formula (ii) is used in the numerical simulations, and the analytic formula (iii) is used in the simulated and realistic phantom experiments.

4.1.2.1 Numerical Simulations

It has been shown that a smaller $p$ provides stronger sparsity enhancement than a larger one [54]. To verify the sparsity-enhancement property of the derived analytic thresholding formulas, a numerical simulation experiment was performed using a modified Shepp-Logan phantom. We assumed a typical fan-beam geometry and a circular scanning locus of radius 538.5 mm. Over a 360° scanning range, 984 views were uniformly acquired. For each view, 222 detector elements were equiangularly distributed, which defines a field of view (FOV) of 249.2 mm in radius and an iso-center spatial resolution of 2.3 mm. Using the simulated projection data and the DGT, we implemented the proposed SART-type general thresholding algorithm in Table 4.2. The analytic formula (ii), which is the least accurate one, was used in the thresholding step. The reconstructed image size is 128 × 128, with the pixel size comparable to the detector element size. In order to compare the performance of the different $\ell_p$ regularizations fairly, we developed a parameter selection strategy. An optimal regularization parameter $\lambda$ should balance the weight of the data fidelity term and the regularization term in

Page 79: COMPRESSIVE SENSING BASED IMAGE RECONSTRUCTION FOR ... · and the initial starts 0 for the analytic thresholding formula 4 and 5 ..... 43 3.3 Comparisons of iterative process with

66

Eq. (3.4). By assuming the data fidelity term (residual error) as a constant, we

can establish the following relationship of the regularization parameters between

different �� regularizations for fair comparisons,

��‖�‖�.�. = ��‖�‖�/�/, (4.25)

where � is the ground truth, and �� and �� are two different � values for ��

regularization. In our experiment, we varied � in (0, 1C and the view number from

5 to 15. To prevent the iteration process for the �� (0 < � < 1) regularization
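A direct way to apply the fair-comparison rule of Eq. (4.25) in code is sketched below; the random stand-in for the ground truth and the choice to apply the rule to the image itself rather than to its DGT coefficients are simplifications made for illustration.

```python
import numpy as np

def convert_lambda(x_true, lambda_1, p_1, p_2):
    """Given the regularization parameter lambda_1 used with an l_{p1} term,
    return the lambda_2 that gives the l_{p2} term the same weight on the
    ground-truth image x_true, i.e. lambda_1*||x||_{p1}^{p1} =
    lambda_2*||x||_{p2}^{p2} as in Eq. (4.25)."""
    x = np.abs(np.ravel(x_true))
    return lambda_1 * np.sum(x ** p_1) / np.sum(x ** p_2)

# Example: rescale an optimal soft-thresholding (p = 1) parameter to p = 0.5.
phantom = np.random.rand(128, 128)              # stand-in for the ground truth
lambda_05 = convert_lambda(phantom, 1e-4, 1.0, 0.5)
```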

To prevent the iteration process for the ℓp (0 < p < 1) regularization from being trapped in a non-optimal local optimum, the initialization was selected from the reconstructions obtained with ℓ1 regularization as follows: for each view number, we fixed the iteration number at 1 × 10^4 and reconstructed images with p = 1.0 using the soft thresholding algorithm, with λ ranging from 1 × 10^-6 to 1 × 10^-2. Specifically, λ was empirically selected as 10^-2, 10^-3, 10^-4, 10^-5 and 10^-6. The reconstruction with the smallest root-mean-square error (RMSE) was selected as the initialization for the ℓp regularizations with 0 < p < 1, and the corresponding optimal λ for p = 1.0 was used to compute the regularization parameter λ for the other p values by Eq. (4.25) for a fair comparison. With this initialization and regularization parameter selection strategy, for each p we ran the general thresholding algorithm for 5 × 10^3 iterations, by which point either an accurate reconstruction had been achieved (i.e., RMSE < 10^-3) or the RMSE curve had leveled off. As shown in Fig. 4.1, an accurate reconstruction (RMSE < 10^-3) is achieved at 14 views for p = 1.0. Even a slight decrease of the p value (e.g., p = 0.9) yields a dramatic drop in the number of views required for accurate reconstruction. While accurate reconstruction for all p < 1 can be obtained from 9 views, the smaller the p value, the smaller the RMSE to which the iteration converges.

For a fair comparison, we implemented the reweighted algorithm of Table 4.1 in the same SART framework using the DGT. The weighting factor was selected as w = 1/(|∇μ| + ε), where |∇μ| is the gradient magnitude image and ε is a small constant to avoid singularity. The view number was set to 9, for which the proposed general thresholding algorithm achieves accurate reconstruction for any 0 < p < 1. To select the optimal initialization from the reconstructions with p = 1.0, we varied λ from 10&{ to 10^-3 and fixed the iteration number at 20000, by which point the RMSE curve had either leveled off or the iteration was converging too slowly (i.e., at the smallest λ). Because λ = 10^-7 yielded the smallest RMSE, the corresponding reconstruction was chosen as the initialization. With the optimal initializations for the proposed general thresholding algorithm and the reweighted algorithm, images were reconstructed with p = 0.1, 0.9 and 1.0. For the general thresholding algorithm, the regularization parameter λ was selected by Eq. (4.25) and the iteration number was set to 5000. For the reweighted algorithm, the regularization parameter λ was selected by cross-validation as follows: we varied λ in [10&��, 10^-3] and fixed the iteration number at 2 × 10^5 for p = 0.1 and 0.9, where the RMSE curves either leveled off or the iteration converged too slowly; the λ yielding the smallest RMSE was then selected as the optimal regularization parameter. When λ > 10^-3, we found that the reweighted algorithm tends to diverge. The reconstructed images and the RMSE curves are shown in Figs. 4.1 and 4.2, respectively. For both algorithms, p = 1 cannot accurately reconstruct the Shepp-Logan phantom image. While the general thresholding algorithm achieves accurate reconstruction with p = 0.1 and 0.9 from 9 views, the reweighted algorithm fails to do so. For both algorithms, p = 0.1 achieves smaller RMSEs than p = 0.9, which is consistent with the earlier observation that a smaller p reconstructs a sparser solution.
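The stronger sparsity enhancement of a smaller p can also be seen from the scalar ℓp proximal map itself; the brute-force evaluation below is a didactic stand-in (with arbitrary λ and test values) rather than the approximate analytic thresholding formulas used in the algorithm.

```python
import numpy as np

def prox_lp(u, lam, p, n_grid=20001):
    """Numerically evaluate argmin_v 0.5*(v - u)**2 + lam*|v|**p by brute
    force; good enough to visualize the thresholding behaviour."""
    grid = np.linspace(-2.0 * abs(u) - 1.0, 2.0 * abs(u) + 1.0, n_grid)
    cost = 0.5 * (grid - u) ** 2 + lam * np.abs(grid) ** p
    return grid[np.argmin(cost)]

inputs = [0.05, 0.2, 1.0, 5.0]
for p in (1.0, 0.9, 0.1):
    print(p, [round(prox_lp(u, lam=0.1, p=p), 3) for u in inputs])
# Smaller p zeroes out the small inputs while leaving the large ones nearly
# untouched, i.e. it enhances sparsity more aggressively than p = 1.
```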

In the Shepp-Logan phantom experiments, the least accurate formula (ii) was used in the proposed general thresholding algorithm; the algorithm might perform even better if the most accurate analytic formula (iii) were used. Moreover, because the regularization parameters for the proposed general thresholding algorithm were selected by Eq. (4.25), the algorithm might achieve accurate reconstruction from fewer than 9 views if the regularization parameter were selected by cross-validation. During these experiments, we found that the reweighted algorithm was very sensitive to the regularization parameter λ: when λ becomes large, the reweighted algorithm diverges and fails to reconstruct the images, while the proposed general thresholding algorithm still works well. Because the reweighted algorithm employs the gradient descent method, which is a line search method, the regularization parameter λ controls the step length of the search. When λ is large, the step length is large, which may cause the iteration process to diverge. In the proposed general thresholding algorithm, by contrast, the parameter λ controls the threshold value: the larger λ is, the greater the threshold, which results in more aggressive filtering and a sparser solution. From the numerical simulation experiments, compared to the state-of-the-art reweighted algorithm, the


Figure 4.1. Reconstructed Shepp-Logan phantom images from 9 views. The images in the top row were reconstructed by the general thresholding algorithm; the images in the bottom row were reconstructed by the reweighted algorithm. The images from the left to the right columns correspond to p = 1.0, p = 0.9 and p = 0.1, respectively. For the general thresholding algorithm, the iteration number is 5000 for p = 0.1, p = 0.9 and p = 1.0. For the reweighted algorithm, the iteration numbers for p = 1.0, p = 0.9 and p = 0.1 are 2 × 10^4, 6 × 10^4 and 1.3 × 10^5, respectively. The display window is [0.1, 0.3].


proposed general thresholding algorithm can substantially reduce the necessary

number of views for accurate reconstruction of the phantom image. In addition,

the proposed general thresholding algorithm outperforms the state-of-the-art


Figure 4.2. RMSE curves for (a) the general thresholding algorithm and (b) the

reweighted algorithm for 9 views with optimal regularization parameters.



reweighted algorithm in terms of reconstruction accuracy, convergence speed,

image quality and parameter sensitivity.

4.1.2.2 Phantom Experiment

A physical phantom experiment was performed on a GE Discovery CT750 HD scanner. It was an axial scan with a circular scanning trajectory. After a series of pre-processing steps, we obtained a sinogram of the central slice in an equiangular fan-beam geometry. The scanning trajectory has a radius of 538.5 mm. Over a 360° full scan range, 984 views were uniformly acquired. For each view, 888 detector cells were equiangularly distributed, which define a FOV with a radius of 249.2 mm and an iso-center spatial resolution of 584 μm. All the reconstructed images are 512 × 512 to cover the whole FOV, and each pixel covers an area of 973.3 × 973.3 μm², which is comparable to the detector cell size.
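As with the simulation geometry, the quoted detector-limited resolution and pixel size can be recovered from the stated scan parameters; the short check below is added for this rewrite and is not part of the original experiment.

```python
import numpy as np

source_radius_mm = 538.5
fov_radius_mm = 249.2
n_detectors = 888            # detector cells per view
image_size = 512             # reconstructed image is 512 x 512 pixels

fan_angle_rad = 2.0 * np.arcsin(fov_radius_mm / source_radius_mm)
iso_resolution_um = 1e3 * source_radius_mm * fan_angle_rad / n_detectors
pixel_size_um = 1e3 * 2.0 * fov_radius_mm / image_size

print(round(iso_resolution_um), round(pixel_size_um, 1))  # ~584 um and ~973 um
```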

Because the ground truth is not available, we first reconstructed a reference image using the OS-SART method from 984 views after 30 iterations. Then, the reconstructed image was filtered by the analytic formula (iii) using the DGT for 400 iterations with p = 0.5 and λ = 20. Fig. 4.3 shows the resulting reference image, which was used as the ground truth to compute the RMSE and the Structural SIMilarity (SSIM) [71]. The SSIM is a well-established quantitative index that measures the similarity between a test image (the reconstructed image) and a reference image; if the reconstructed image is exactly the same as the reference image, the SSIM index equals 1.
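For reference, the two quantitative indices used in this chapter can be computed as sketched below; the use of scikit-image's structural_similarity here is an implementation convenience assumed for illustration, not necessarily the SSIM code used in the original experiments [71].

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(test, reference):
    """Root-mean-square error between a reconstruction and the reference."""
    return np.sqrt(np.mean((test - reference) ** 2))

def ssim(test, reference):
    """SSIM against the reference image; equals 1 for identical images."""
    data_range = float(reference.max() - reference.min())
    return structural_similarity(test, reference, data_range=data_range)
```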


To exclude the influence of various physical factors in the raw projection data, such as scatter and beam hardening, we generated a group of realistically simulated projection data using the reference image in Fig. 4.3 and the same GE CT system configuration. To simulate a sparse scan, we equiangularly downsampled 33 and 13 views over a 360° full scan range. Similar to the simulated Shepp-Logan phantom experiment, the initializations for both the general thresholding algorithm and the reweighted algorithm were selected from the reconstructions with p = 1. The optimal regularization parameters λ in the reweighted algorithm were selected by cross-validation for a fair comparison. The optimal regularization

Figure 4.3. Reference image for the simulated physical phantom experiment.

The image is first reconstructed from 984 views using the OS-SART method

and then filtered by the analytic formula (iii) using discrete gradient transform.

The display window is [-1000, -500]HU.


parameters for the general thresholding algorithm were selected by Eq. (4.25) from the optimal parameter for p = 1, which was itself selected by cross-validation. The SART-type general thresholding algorithm (Table 4.2) and the reweighted algorithm (Table 4.1) were implemented to reconstruct images, and the reconstructed images after the same number of iterations were compared. In the general thresholding algorithm, the analytic thresholding formula (iii) was selected as the thresholding formula. Fig. 4.4 shows the optimal parameter selection using cross-validation for the initializations for 33 views with p = 1.0. For both algorithms, we fixed the iteration number at 2000 and varied λ in [10^-2, 10] for the general thresholding algorithm and in [10^-6, 1] for the reweighted algorithm. Based on the minimum RMSE and the maximum SSIM values, the optimal regularization parameters for the reweighted and the general thresholding algorithms were selected as λ = 10^-5 and λ = 0.1, respectively. With the optimal initializations, we reconstructed images with p = 0.5, 0.9 and 1.0, and the iteration number was set to 2000. Figs. 4.5 and 4.6 show the reconstructed images from 13 views and the corresponding SSIM curves, respectively. For both algorithms, a smaller p yields better image quality in terms of SSIM and the visibility of details. For the general thresholding algorithm, as indicated by the red arrow in Fig. 4.5, a smaller p reconstructs more details than a larger one. For the reweighted algorithm, a smaller p yields better image quality in the region indicated by the green arrow. Given a fixed value of p, the proposed general thresholding algorithm consistently outperforms the reweighted algorithm in terms of both SSIM and the visibility of details. Figs. 4.7 and 4.8


show the reconstructed images from 33 views and the corresponding SSIM

curves, respectively. Similar conclusions can be made as in the case of 13 views.


Figure 4.4. Cross-validation for the optimal parameter selection for 13 views with p = 1.0. (a) The minimum RMSE with respect to different λ and (b) the maximum SSIM with respect to different λ.



Figure 4.5. Physical phantom images reconstructed from 13 views after 2000 iterations. The images in the top row were reconstructed by the general thresholding algorithm; the images in the bottom row were reconstructed by the reweighted algorithm. From the left to the right columns, the images were reconstructed with p = 0.5, p = 0.9 and p = 1.0, respectively. The optimal regularization parameters were selected by cross-validation for the reweighted algorithm and by Eq. (4.25) for the general thresholding algorithm. The display window is [-1000, -500] HU.


Both the general thresholding algorithm and the reweighted algorithm were also evaluated using realistic projection data. Because no ground truth is available, the RMSE and SSIM cannot be calculated in this experiment. For both algorithms, zero initializations were used, the iteration number was fixed at 200, and the optimal regularization parameters were selected by cross-validation. Fig. 4.9 shows the reconstructed images from 33 views with p = 0.4 and from 55 views with p = 0.7. Compared to the reweighted algorithm, the proposed general thresholding algorithm reconstructs more details, as indicated by the red arrows, and yields better image quality. The reweighted algorithm may be able to reach comparable image quality at the cost of an increased iteration number.


Figure 4.6. The SSIM curves of the physical phantom reconstruction with

respect to the iteration number for 13 views.



Figure 4.8. Same as Fig. 4.6 but from 33 views.

Figure 4.7. Same as Fig. 4.5 but from 33 views.


4.1.2.3 Clinical Data Experiment

The proposed general thresholding algorithm was also evaluated on a group of existing raw projections collected in a cardiac perfusion CT study for other

Figure 4.9. Physical phantom images reconstructed from realistic projection data after 200 iterations with zero initializations. The images in the top row were reconstructed from 33 views with p = 0.4; the images in the bottom row were reconstructed from 55 views with p = 0.7. The images in the left column were reconstructed by the reweighted algorithm; the images in the right column were reconstructed by the general thresholding algorithm. For both algorithms, the optimal regularization parameters were selected by cross-validation. The display window is [-1000, -500] HU.


purposes. A cardiac exam was performed on a state-of-the-art GE Discovery CT750 HD scanner to examine the coronary arteries. After appropriate pre-processing, the sinogram of the central slice in a fan-beam geometry was extracted. The scanning trajectory has a radius of 53.852 cm. Over a 360° full scanning range, 2200 views were uniformly acquired. For each view, 888 detector elements were equiangularly distributed, which defines an FOV with a radius of 24.92 cm. To simulate a sparse scan, 583 views were uniformly downsampled. The images were reconstructed using the proposed general thresholding algorithm and the reweighted algorithm in the framework of OS-SART using the DGT. The subset size of the OS-SART algorithm was set to 11, and zero initializations were used. The reconstructed images are 289 × 353 and each pixel covers an area of 457.7 × 457.7 μm², which is comparable to the detector element size.
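The ordered-subset grouping mentioned above (583 sparse views processed in subsets of size 11, i.e. 53 subsets) can be set up as in the sketch below; the interleaved ordering is a common choice assumed here, since the exact ordering used in the original experiments is not specified.

```python
import numpy as np

def make_ordered_subsets(view_indices, subset_size):
    """Partition projection views into ordered subsets for OS-SART, interleaving
    them so that each subset covers the angular range roughly uniformly."""
    n_subsets = len(view_indices) // subset_size
    return [view_indices[s::n_subsets] for s in range(n_subsets)]

views = np.arange(583)                          # the 583 down-sampled views
subsets = make_ordered_subsets(views, 11)       # 53 subsets of 11 views each
assert len(subsets) == 53 and all(len(s) == 11 for s in subsets)
```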

Similar to the physical phantom experiment, we first reconstructed a reference image from the full 2200 views for the calculation of the SSIM. Images were then reconstructed with p = 0.4 and λ = 10^-2 by both algorithms. The reconstructed images and the SSIM curves are shown in Figs. 4.10 and 4.11. From both the reconstructed images and the difference images, no significant visual difference can be observed. From the SSIM curves, we can see that the proposed general thresholding algorithm slightly outperforms the reweighted algorithm. One likely reason for the small difference between the two algorithms is that we used the DGT sparse transform, which assumes a piecewise constant model; the clinical data are not piecewise constant, which degrades the effectiveness of the DGT.


Figure 4.10. The image in the first column is the reference image, which was reconstructed from 2200 views. The images in the second column are the image reconstructed by the reweighted algorithm (top) and the difference image between the reference image and the reconstructed image (bottom). The images in the third column are the same as the second column but for the proposed general threshold filtering algorithm. For the images in the second and third columns, the view number was 385, p = 0.4 and the iteration number is 10. The optimal λ was selected by cross-validation in a similar way as in the physical phantom experiment; the optimal λ for both the reweighted algorithm and the proposed general threshold filtering algorithm was 10^-2. The display window for the first-row image is [-300, 1000] HU and for the second-row images is [-5, 5] HU.


4.1.3 Conclusion and Discussion

In conclusion, we have developed several approximate analytic thresholding representations for any ℓp regularization with 0 < p < 1. These approximate analytic formulas reduce to the exact hard thresholding and soft thresholding formulas when p = 0 and p = 1, respectively. Beyond the initial values and iteration numbers used in this work, other possible initial values and iteration numbers can be investigated to derive further approximate analytic thresholding formulas. In addition, the direct iteration method was used in the iterative recursive general thresholding procedure. Other iteration methods

Figure 4.11. The SSIM curves of the patient dataset experiment in Fig. 4.10 with respect to the iteration number. The solid and dashed lines are for the reweighted algorithm and the proposed general threshold filtering algorithm, respectively.



such as Gaussian and Newton iterations can also be employed for faster convergence and higher accuracy. However, for practical applications, the accuracy of the proposed approximate analytic thresholding formulas is sufficient.

Based on the derived approximate analytic thresholding formulas for any ℓp (0 < p < 1) regularization, we developed the general thresholding algorithms using the DGT as the sparse constraint in terms of TV. Because the TV regularization uniformly penalizes the image gradients and offers no advantage in distinguishing small structural information from noise and artifacts, other sparsity transforms, such as the wavelet transform and dictionary learning [72], can also be employed to explore more structures in the clinical dataset. Compared to the reweighted algorithm, the proposed general thresholding algorithm can substantially reduce the necessary number of views for accurate reconstruction of the Shepp-Logan phantom. In addition, the proposed general thresholding algorithm has advantages in terms of convergence speed, image quality and parameter sensitivity. An improved iterative reweighted algorithm was proposed by using a “continuation” method to prevent the iteration from being trapped in a non-optimal local optimum [61]. This method may be applied to our general thresholding algorithm to improve its performance.

4.2 Alternating Iteration Algorithm

Because an iterative algorithm seeks a solution that satisfies the first-order optimality condition and can therefore converge to local optima, the initialization plays a critical role for the ℓp regularizations. In our CT image reconstruction experiments in Chapter 4.1, we used zero initializations and reconstructions from ℓ1 regularization as the initializations. The algorithm converged to non-optimal optima and failed to accurately reconstruct the Shepp-Logan phantom image from 9 views [73]. Simply increasing the iteration number does not help find an optimal solution, and it also leads to intolerable reconstruction times for practical applications. Hence, more views have to be used to reconstruct satisfactory images, which results in an increased radiation dose. This is because the ℓp regularization is nonconvex and has many local optima. When an iterative algorithm for ℓp regularization starts from zeros or from reconstructions obtained with ℓ1 regularization, it is easily trapped by non-optimal local optima. As a consequence, we used the reconstruction from ℓ1 regularization as the initial value, and the ℓp regularization then found local minima that are optimal, or at least more optimal than those of the ℓ1 regularization. Here, in order to solve the initialization problem and to improve the performance of ℓp regularization, we propose an iterative strategy named the alternating iteration algorithm. In the proposed alternating iteration algorithm, instead of using a single ℓp (0 < p < 1) regularization, an ℓp (0 < p < 1) regularization and an ℓ1 regularization are applied alternately during the iteration procedure. Because the ℓ1 regularization is convex, it can prevent the ℓp regularization from being trapped in a non-optimal local minimum and help the iteration converge to a more optimal local minimum or even the global minimum. In the following sections, we present the alternating iteration algorithm for ℓp (0 ≤ p ≤ 1) regularization assuming the DGT sparse transform. Then we evaluate the performance of the proposed alternating iteration algorithm via numerically simulated phantom experiments and practical projection data. Finally, we conclude this section.

4.2.1 Algorithm Development

Mathematically, the objective function for the alternating iteration algorithm can still be written as

\min_{x \in \mathbb{R}^N} \left\{ \|b - Ax\|^2 + \lambda \|x\|_{p_k}^{p_k} \right\}.   (4.26)

However, the value of p_k is alternately changed between 1.0 and a smaller value in the range 0 < p < 1. Again, when the imaging object x is not sparse, we assume that x can be sparsely represented as s = Φx, and we rewrite Eq. (4.15) as

\min_{x \in \mathbb{R}^N} \left\{ \|b - Ax\|^2 + \lambda \|\Phi x\|_{p_k}^{p_k} \right\}.   (4.27)

Here we write p_k instead of p to indicate that the p value is alternately changed between 1.0 and a smaller value in the range 0 < p < 1. We still use the DGT as the sparse transform, and the pseudo-inverse DGT for any fixed p value has been developed in Eqs. (4.16)–(4.19). The pseudo-code for any fixed p value is given in Table 4.2. The flowchart of the proposed alternating iteration algorithm is summarized in Fig. 4.12. The stopping criterion for the whole algorithm can be a maximum iteration number or a residual error threshold in the projection and/or image domains.
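One way to transcribe the flowchart in Fig. 4.12 into code is sketched below; the routine run_thresholding, its signature, and the relative-change stopping rule are assumptions standing in for the algorithm of Table 4.2 and for the stopping criteria just described.

```python
import numpy as np

def alternating_iteration(x0, run_thresholding, p2, lambda_1, lambda_p,
                          n1, n_p, max_outer=100, tol=1e-4):
    """Alternate N1 iterations of the l1-regularized update with Np iterations
    of the l_p (0 < p < 1) update until the stopping criterion is met.
    run_thresholding(x, p, lam, n_iter) stands for the SART-type general
    thresholding routine of Table 4.2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        x_prev = x.copy()
        x = run_thresholding(x, p=1.0, lam=lambda_1, n_iter=n1)   # l1 stage
        x = run_thresholding(x, p=p2, lam=lambda_p, n_iter=n_p)   # l_p stage
        change = np.linalg.norm(x - x_prev)
        if change <= tol * max(np.linalg.norm(x_prev), 1e-12):
            break    # image-domain change is small enough; stop
    return x
```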

4.2.2 Evaluations and Results

To evaluate the performance of the proposed alternating iteration algorithm, we compare it with the single ℓp regularization via extensive numerically simulated Shepp-Logan phantom experiments and practical data experiments. In the CT imaging field, reducing the

[Flowchart of Fig. 4.12: initialize x0; set p1 = 1 with parameters λ1 and N1, and p (0 < p < 1) with parameters λp and Np; call the algorithm in Table 4.2 for p1 = 1 with parameters λ1 and N1; call the algorithm in Table 4.2 for p with parameters λp and Np; if the stopping criteria are not met, repeat the two calls; otherwise output the reconstruction.]

Figure 4.12. Flowchart of the proposed alternating iteration algorithm. N1 and Np represent the iteration numbers for the ℓ1 and ℓp regularizations, respectively.


number of views used for image reconstruction is an effective way to reduce the patient radiation dose. Hence, we evaluate the performance of the proposed alternating iteration algorithm for sparse-view reconstruction. For all the experiments, the relaxation parameter of the SART algorithm was set to 1.0, and the analytic thresholding formula (ii) in Eq. (3.38) was used.
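For completeness, the standard SART update that underlies all of these experiments (with relaxation parameter 1.0, as stated above) has the following form; this is the textbook update written with a dense system matrix for clarity, not the exact implementation of Eq. (4.3) in this work.

```python
import numpy as np

def sart_update(x, A, b, relaxation=1.0, eps=1e-12):
    """One SART sweep: correct the image x by the backprojection of the
    row-normalized residual, normalized by the column sums of A."""
    row_sums = A.sum(axis=1) + eps        # sum_j a_ij for each ray i
    col_sums = A.sum(axis=0) + eps        # sum_i a_ij for each pixel j
    residual = (b - A @ x) / row_sums     # normalized projection residual
    return x + relaxation * (A.T @ residual) / col_sums

# Tiny consistent toy system just to exercise the update.
rng = np.random.default_rng(0)
A = rng.random((30, 16))                  # 30 rays, 16 pixels
x_true = rng.random(16)
b = A @ x_true
x = np.zeros(16)
for _ in range(200):
    x = sart_update(x, A, b)
print(float(np.linalg.norm(A @ x - b)))   # data residual shrinks toward zero
```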

4.2.2.1 Numerical Simulations

We performed a numerical simulation with the modified Shepp-Logan phantom using the same experimental configuration as in Chapter 4.1.2.1. Because the ground truth is available, we first compare the proposed alternating iteration algorithm with the single ℓp regularization using sparse projection data. Since the alternating iteration numbers N1 and Np play an important role in the performance of the algorithm, we then investigate the optimal alternating iteration numbers.

4.2.2.1.1 Comparison with single ℓp regularization

From the experiment in Chapter 4.1.2.1, the single ℓp (0 < p < 1) regularization can accurately reconstruct the original Shepp-Logan phantom from 9 views [73]. Here we used 8 views to test whether the proposed alternating iteration algorithm can achieve accurate reconstruction, obtain a smaller RMSE and/or converge faster compared to the single ℓp regularization. To avoid redundant information, assuming the 984 views are numbered from 1 to 984, the 8 views were selected as views 1, 68, 151, 301, 451, 601, 751 and 901. To see whether zero initializations can achieve convergence properties similar to initializations from ℓ1 regularization, we used both zero and ℓ1 initializations.
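For concreteness, the 1-based view indices listed above map to projection angles as in the short sketch below (assuming, as stated earlier, 984 uniformly spaced views over 360°, with view 1 at 0°).

```python
import numpy as np

view_ids = np.array([1, 68, 151, 301, 451, 601, 751, 901])  # 8-view experiment
angles_deg = (view_ids - 1) * 360.0 / 984.0                  # view 1 -> 0 deg
print(np.round(angles_deg, 1))
# [  0.   24.5  54.9 109.8 164.6 219.5 274.4 329.3]
```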


The ℓ1 initializations were selected by cross-validation in a similar way as before. We first reconstructed images using the ℓ1 regularization after 10000 iterations with a series of different regularization parameters (i.e., λ = 10^-6, 10^-5, 10^-4, 10^-3, 10^-2 and 10^-1). Based on the cross-validation principle, the reconstructed image with the smallest RMSE was selected as the ℓ1 initialization, and the corresponding λ was used for all the algorithms (including those with zero initializations). For the 8-view experiment, λ = 10^-5 yielded the smallest RMSE and was used. For a fair comparison, the stopping criteria for the zero and ℓ1 initializations were set as maximum iteration numbers of 7 × 10^4 and 6 × 10^4, respectively; these iteration numbers are large enough for all the algorithms to converge. The ℓ1 and single ℓp regularizations were implemented using the general thresholding algorithm with the DGT sparse transform in Table 4.2 for comparison. Fig. 4.13 shows the images reconstructed using the single ℓp regularizations with p = 1.0, 0.3 and 0.2 and the corresponding proposed alternating iteration algorithms with zero and ℓ1 initializations. In the proposed alternating iteration algorithm, let us define the alternating iteration numbers for ℓ1 and ℓp2 as (N1, Np). For the second-row (ℓ1 initialization) and third-row (zero initialization) images in Fig. 4.13, (N1, Np) = (5,10), and for the fourth-row (ℓ1 initialization) and fifth-row (zero initialization) images in Fig. 4.13, (N1, Np) = (5,15). Fig. 4.14 shows the corresponding RMSE curves. Again, an exact reconstruction is defined as a reconstruction with an RMSE smaller than 10^-3. The images reconstructed by the proposed alternating iteration algorithms are much better than those of the single ℓp regularization. The proposed alternating iteration algorithm with


p2 = 0.3 (with either zero or ℓ1 initializations) can obtain an exact reconstruction with both (N1, Np) = (5,10) and (N1, Np) = (5,15). For p2 = 0.2 with either zero or ℓ1 initializations, the proposed alternating iteration algorithm can obtain an exact reconstruction with (N1, Np) = (5,15). All the single ℓp regularizations with p = 0.2 and p = 0.3 and the corresponding alternating iteration algorithms outperform the soft thresholding algorithm for ℓ1 regularization. From the RMSE curves, the proposed alternating iteration algorithm suddenly jumps to the optimal minimum (possibly the global minimum) after a certain number of iterations (about 40000). From both the RMSE curves and the reconstructed images, the proposed alternating algorithm with zero initialization shows a performance similar to that with ℓ1 initialization. Note that, for the single ℓp regularization, p = 0.3 slightly outperforms p = 0.2. This is because the regularization parameter λ = 10^-5 is not optimal for either of them. During the experiments, we also found that when the number of views is not sufficient for exact reconstruction, the performance of the single ℓp regularization may not follow the earlier finding that a smaller p provides stronger sparsity enhancement.

Since the proposed alternating iteration algorithm can obtain an exact reconstruction of the Shepp-Logan phantom from 8 views, we further decreased the number of views to 7, selected as views 1, 151, 301, 451, 601, 751 and 901. Similarly, the stopping criteria for the zero and ℓ1 initializations were set as maximum iteration numbers of 7 × 10^4 and 6 × 10^4, respectively. The ℓ1 initializations were selected in the same way as for 8 views. The corresponding regularization parameter λ = 10^-5, which was selected by cross-validation, was


Figure 4.13. Reconstructed images from 8 views. The first-row images are reconstructions using the single ℓp regularizations with the initialization selected as the reconstruction from ℓ1 regularization; from left to right are the ℓ1, ℓ0.3 and ℓ0.2 regularizations. The images from the second row to the fifth row are reconstructed using the proposed alternating iteration algorithms. For the second and third rows, the iteration numbers for ℓ1 and ℓp2 are (N1, Np) = (5,10); the initializations for the second-row images are reconstructions from ℓ1 regularization, while the initializations for the third-row images are zeros. The fourth and fifth rows are the same as the second and third rows but with (N1, Np) = (5,15). All the ℓ1 initializations are identical and were selected by cross-validation with an iteration number of 10000. The total iteration numbers for the images with ℓ1 and zero initializations are 60000 and 70000, respectively. The display window is [0.1, 0.4].


used for all the algorithms. The alternating iteration numbers for ℓ1 and ℓp2 were selected as (N1, Np) = (5,15). The reconstructed images and the corresponding RMSE curves are shown in Figs. 4.15 and 4.16. Similar conclusions can be drawn as for 8 views. While none of the algorithms can achieve an exact reconstruction from 7 views, the proposed alternating iteration algorithm outperforms the single ℓp regularization. Again, the proposed alternating iteration algorithm with zero initialization performs similarly to that with ℓ1 initialization.

4.2.2.1.2 Investigation of the optimal alternating iteration numbers (N1, Np)

From the aforementioned experiments, the proposed alternating iteration algorithm outperforms the single ℓp regularization. However, the alternating


Figure 4.14. The RMSE curves for the experiment in Fig. 4.13.


Figure 4.15. Reconstructed images from 7 views. The first-row images are reconstructions using the single ℓp regularizations with the initialization selected as the reconstruction from ℓ1 regularization; from left to right are the ℓ1, ℓ0.3 and ℓ0.2 regularizations. The images in the second and third rows are reconstructed using the proposed alternating iteration algorithms with iteration numbers (N1, Np) = (5,15) for ℓ1 and ℓp2. The initializations for the second-row images are reconstructions from ℓ1 regularization, while the initializations for the third-row images are zeros. All the ℓ1 initializations are identical and were selected by cross-validation with an iteration number of 10000. The total iteration numbers for the images with ℓ1 and zero initializations are 60000 and 70000, respectively. The display window is [0.1, 0.4].

iteration numbers (N1, Np) affect the performance of the proposed alternating iteration algorithm. We therefore performed additional numerical simulations to evaluate the significance of the selection of the alternating iteration numbers (N1, Np). Figs. 4.17 and 4.18 show the RMSE curves for different alternating iteration numbers (N1, Np) for 8 views and 7 views, respectively. p = 0.2 and p = 0.3 were used in these experiments. For both 8 and 7 views, λ was set to 10^-5, as selected by cross-validation. Because the zero and ℓ1 initializations have similar performance, here we used the ℓ1 initializations only. The iteration number was selected as 7 × 10^4, which is sufficiently large to show the convergence for all


Figure 4.16. The RMSE curves for the experiment in Fig. 4.15.


the experiments. We can see that, for all the experiments in Figs. 4.17 and 4.18, when N1 > Np the alternating iteration algorithm finally approaches the ℓ1 regularization. This is because the ℓ1 regularization dominates the iteration


Figure 4.17. The RMSE curves for different alternating iteration numbers (N1, Np) for 8 views with (a) p = 0.3 and (b) p = 0.2.



procedure. When N1 = Np, we have the transition point where the ℓ1 and ℓp (0 < p < 1) regularizations play an equal role in the iteration. For both 7 and 8 views, the proposed alternating iteration algorithm with p2 = 0.3 outperforms the ℓ1 regularization. For both 7 and 8 views, the proposed alternating iteration


Figure 4.18. Same as Fig. 4.17 but for 7 views.



algorithm with p2 = 0.2 finally approaches the ℓ1 regularization. In all cases with N1 < Np, the proposed alternating iteration algorithm outperforms the ℓ1 regularization. This is because the ℓp (0 < p < 1) regularization dominates the iteration procedure, while the convex ℓ1 regularization prevents the iteration from being trapped in non-optimal local optima and helps it find a more optimal local optimum or even the global optimum. However, if Np is much larger than N1 (e.g., (N1, Np) = (5, 20)), the ℓp regularization dominates the iteration too strongly and the performance of the alternating iteration algorithm eventually approaches that of the single ℓp regularization. Overall, the alternating iteration numbers (N1, Np) play an important role in the performance of the proposed alternating iteration algorithm. From the experiments, we suggest choosing Np two to three times larger than N1.
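This rule of thumb, together with the noisy-data recommendation made in the physical phantom experiments of the next subsection, can be summarized as a small configuration table; the concrete numbers simply restate the settings used in this chapter and are defaults rather than hard requirements.

```python
# Suggested alternating iteration numbers (N1, Np): let the l_p stage dominate
# for noise-free sparse-view data, and the convex l1 stage dominate for noisy
# measured data (see the physical phantom experiments below).
ALTERNATION_SETTINGS = {
    "noise_free_sparse_view": {"N1": 5, "Np": 15},   # Np about 2-3x N1
    "noisy_measured_data": {"N1": 15, "Np": 5},      # l1-dominated iteration
}
```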

4.2.2.2 Physical Phantom Experiments

A physical phantom experiment was performed using the same experimental configuration as in Chapter 4.1.2.2. Because the ground truth is not available, a reference image (see Fig. 2.7) was first reconstructed by the OS-SART algorithm using 984 views after 50 iterations. The reference image was used to compute the Structural SIMilarity (SSIM) [74]. To simulate a sparse scan, 15 views were uniformly downsampled over a 360° scanning range. The images were reconstructed by the single ℓp regularization and the proposed alternating algorithm. The initializations were selected as the image reconstructed by the ℓ1 regularization after 1000 iterations, in a similar way as in the numerical simulations. Instead of using the RMSE as the quantitative index, the SSIM was calculated for the regularization parameter selection by cross-validation. The corresponding parameter λ = 0.1 was selected for all the algorithms. A maximum iteration number of 2000 was used as the stopping criterion. Because practical measurements contain noise, we let the ℓ1 regularization dominate the iteration and chose the alternating iteration numbers such that N1 ≥ Np. Fig. 4.19 shows the images reconstructed using the single ℓp regularization (first-row images) with p = 1.0, 0.7 and 0.5, and by the corresponding proposed alternating iteration algorithm (second- and third-row images). For the second-row images, the alternating iteration numbers for ℓ1 and ℓp2 are (N1, Np) = (5, 5); for the third-row images, they are (N1, Np) = (15, 5). While the reconstructed images are visually similar, the SSIM curves in Fig. 4.20 show that the proposed alternating iteration algorithm outperforms the single ℓp regularization. For both p = 0.7 and p = 0.5, the single ℓp regularization reaches higher maximum SSIM values than the ℓ1 regularization; however, the single ℓp regularization diverges after a certain number of iterations. This is because the noise in the measurements can cause the iteration to be trapped in a non-optimal local minimum of the objective function. The proposed alternating iteration algorithm is more robust to the noisy measurements in terms of convergence behavior.


Figure 4.19. Reconstructed images from 15 views after 2000 iterations. The first-row images are reconstructions using the single ℓp regularizations; from left to right are the ℓ1, ℓ0.7 and ℓ0.5 regularizations, respectively. The second- and third-row images are reconstructions using the proposed alternating iteration algorithms; from left to right, p2 = 0.7 and p2 = 0.5. In the second and third rows, the alternating iteration numbers for ℓ1 and ℓp2 are (N1, Np) = (5,5) and (N1, Np) = (15,5), respectively. λ = 0.1 was selected by cross-validation and used for all the algorithms.


4.2.3 Conclusions and Discussions

It is well known that the initialization affects the performance of ℓp regularization, and that the ℓp regularization is also very sensitive to noisy data. To overcome these problems and obtain a sparser solution, we proposed an alternating iteration algorithm based on the general analytic formulas derived before. The noise-free modified Shepp-Logan phantom experiments show that the proposed alternating iteration algorithm provides stronger sparsity enhancement than the single ℓp regularization. In particular, the proposed alternating iteration algorithm achieves an exact reconstruction from 8 views. For the proposed alternating iteration algorithm, the zero initialization works about as well as the ℓ1 initialization. The realistic phantom experiments show that the proposed alternating algorithm is more robust to noise than the single ℓp regularization. Meanwhile, the alternating iteration numbers (N1, Np) for ℓ1 and ℓp2 affect the performance of


Figure 4.20. The SSIM curves for the experiments in Fig. 4.19.


the proposed alternating algorithm. For noise-free data, N1 < Np is recommended so that the ℓp regularization dominates the iteration and the ℓ1 regularization assists in finding a sparser solution. For realistic noisy projections, N1 > Np is recommended so that the convex ℓ1 regularization dominates the iteration and increases the robustness to noisy data.

While the convergence of the proposed algorithm has not been theoretically proved, our extensive numerical and phantom experiments have verified its convergence empirically. Moreover, it has been proved that the half thresholding algorithm converges to a stationary point under certain assumptions [75]. It is our hypothesis that the convergence of the proposed alternating iteration algorithm may be provable by combining the work of Zeng et al. [75] and the work of Jiang and Wang [76]. However, this is beyond the scope of this work.


Chapter 5. Conclusions and Future Work

In the x-ray computed tomography imaging field, it is highly desirable to reduce the radiation dose while maintaining acceptable image quality. Few-view reconstruction using iterative algorithms is an important strategy to reduce the radiation dose in CT imaging. The projection/backprojection models (system models) play an important role in iterative reconstruction algorithms in terms of the overall computational cost, image quality, reconstruction accuracy, etc. In this work, we first proposed an IDDM whose computational cost is as low as that of the DDM and whose accuracy is as high as that of the AIM. Theoretical analysis and physical phantom experiments have been performed to verify the validity of the proposed IDDM. One of the most important factors that limit the practical application of iterative reconstruction algorithms is the high computational cost. In the future, we will study the acceleration of different projection/backprojection models, for example using parallelization techniques and GPU-based methods.

The classic Abel theorem and previous research have shown that, among all the ℓp regularizations, only ℓ1/2 and ℓ2/3 permit analytic thresholding representations. In this work, we derived several quasi-exact analytic thresholding representations for any ℓp regularization with 0 ≤ p ≤ 1. All the derived quasi-exact analytic formulas are exact at p = 0 and p = 1. We numerically verified the correctness of the derived analytic thresholding representations. The error bounds of each analytic formula were analyzed, and the accuracy of each formula was compared through extensive numerical experiments. Using the derived analytic thresholding representations for any ℓp (0 ≤ p ≤ 1) regularization, we developed the general thresholding algorithms and applied them to CT reconstruction. The DGT was selected as the sparse transform to define a sparsity constraint in terms of TV. Since the TV constraint uniformly penalizes the image gradient to approach a piecewise constant image, and offers no advantage in distinguishing structural details from noise and artifacts, other sparsity constraints such as the wavelet transform and dictionary learning [72] can also be applied to explore more structures. The experimental results show that even a slight decrease of the p value (e.g., p = 0.9) can result in stronger sparsity enhancement than the soft thresholding algorithm for the ℓ1 regularization. Compared to the state-of-the-art reweighted algorithm, the proposed general thresholding algorithm can substantially reduce the necessary number of views for accurate reconstruction of the modified Shepp-Logan phantom. In addition, the proposed general thresholding algorithm has advantages in terms of convergence speed, image quality, and parameter sensitivity. An improved iterative reweighted algorithm was proposed by using a “continuation” method to prevent the iteration from being trapped in a non-optimal local optimum [61]. In the future, this method may be applied to our general thresholding algorithm to improve its performance.

It is well known that the initialization strategy affects the performance of the ℓp regularization. The ℓp (0 < p < 1) regularization with zero or ℓ1 initializations is easily trapped in a non-optimal local minimum. In addition, the ℓp regularization is also very sensitive to noisy data. To solve these problems and/or obtain a sparser solution, we proposed an alternating iteration algorithm based on the derived general analytic thresholding representations. For the proposed alternating iteration algorithm, the zero initialization works as well as the ℓ1 initialization. In addition, the proposed alternating iteration algorithm has stronger sparsity enhancement ability and is more robust to noise than the single ℓp regularization. While the convergence of the proposed algorithm has not been theoretically proved, our extensive numerical and phantom experiments have verified its convergence empirically. Recently, it has been proved that, under certain assumptions, the half thresholding algorithm for ℓ1/2 regularization converges to a stationary point [75]. It is our hypothesis that the convergence of the proposed thresholding algorithm and the proposed alternating iteration algorithm may be provable by combining the work of Zeng et al. [75] and the work of Jiang and Wang [76]. However, this is beyond the scope of this work and will be a future research topic in this direction.

It is well known that the regularization parameters significantly affect the performance of iterative algorithms. The selection of regularization parameters is an active research topic and is not easy. In this work, the regularization parameter λ was selected by cross-validation. In the future, it will be necessary to study advanced regularization parameter selection strategies for iterative algorithms. Because the results on the low-dose datasets and/or few-view projections are only exemplary applications, the proposed general thresholding algorithm and alternating iteration algorithm can also be applied to other CT applications such as limited-angle and interior tomography, to other imaging modalities such as PET and SPECT, and to other fields such as image processing.


References

[1] C. A. Burnham and G. L. Brownell, “A Multi-Crystal Positron Camera,”

Ieee Trans. Nucl. Sci., vol. 19, no. 3, pp. 201–205, Jun. 1972.

[2] M. M. Ter-Pogossian, M. E. Phelps, E. J. Hoffman, and N. A. Mullani, “A

positron-emission transaxial tomograph for nuclear imaging (PETT),” radiology,

vol. 114, no. 1, pp. 89–98, Jan. 1975.

[3] M. E. Phelps, E. J. Hoffman, N. A. Mullani, and M. M. Ter-Pogossian,

“Application of annihilation coincidence detection to transaxial reconstruction

tomography,” J. Nucl. Med. Off. Publ. Soc. Nucl. Med., vol. 16, no. 3, pp. 210–

224, Mar. 1975.

[4] A. H. Andersen, “Algebraic reconstruction in CT from limited views,” Ieee

Trans. Med. Imaging, vol. 8, no. 1, pp. 50–55, Mar. 1989.

[5] D. E. Kuhl and R. Q. Edwards, “Image Separation Radioisotope Scanning,”

radiology, vol. 80, no. 4, pp. 653–662, Apr. 1963.

[6] R. J. Jasczak, “Tomographic radiopharmaceutical imaging,” Proc. Ieee,

vol. 76, no. 9, pp. 1079–1094, Sep. 1988.

[7] D. R. Neumann, N. A. Obuchowski, and F. P. Difilippo, “Preoperative

123I/99mTc-sestamibi subtraction SPECT and SPECT/CT in primary

hyperparathyroidism,” J. Nucl. Med. Off. Publ. Soc. Nucl. Med., vol. 49, no. 12,

pp. 2012–2017, Dec. 2008.

[8] W. C. Sheil, “Magnetic Resonance Imaging (MRI Scan),” medicinenetcom,

Apr. 2015.


[9] G. T. Herman and A. Lent, “Iterative reconstruction algorithms,” Comput.

Biol. Med., vol. 6, no. 4, pp. 273–294, Oct. 1976.

[10] W. R. Hendee and C. J. Morgan, “Magnetic Resonance Imaging Part I—

Physical Principles,” West. J. Med., vol. 141, no. 4, pp. 491–500, Oct. 1984.

[11] D. Brenner, C. Elliston, E. Hall, and W. Berdon, “Estimated risks of

radiation-induced fatal cancer from pediatric CT,” Ajr Am. J. Roentgenol., vol.

176, no. 2, pp. 289–296, Feb. 2001.

[12] A. Berrington de González and S. Darby, “Risk of cancer from diagnostic

X-rays: estimates for the UK and 14 other countries,” lancet, vol. 363, no. 9406,

pp. 345–351, Jan. 2004.

[13] D. J. Brenner and E. J. Hall, “Computed Tomography — An Increasing

Source of Radiation Exposure,” N. Engl. J. Med., vol. 357, no. 22, pp. 2277–2284,

Nov. 2007.

[14] D. J. Brenner and C. D. Elliston, “Estimated radiation risks potentially

associated with full-body CT screening,” radiology, vol. 232, no. 3, pp. 735–738,

Sep. 2004.

[15] “The Radon Transform and Some of Its Applications.” [Online]. Available:

http://store.doverpublications.com/0486462412.html. [Accessed: 27-Apr-2015].

[16] D. L. Donoho, “Compressed sensing,” Ieee Trans. Inf. Theory, vol. 52, no.

4, pp. 1289–1306, Apr. 2006.


[17] E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles:

exact signal reconstruction from highly incomplete frequency information,” Ieee

Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[18] C. Miao, “COMPARATIVE STUDIES OF DIFFERENT SYSTEM MODELS

FOR ITERATIVE CT IMAGE RECONSTRUCTION,” Thesis, Wake Forest

University, 2013.

[19] E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from

incomplete and inaccurate measurements,” Commun. Pure Appl. Math., vol. 59,

no. 8, pp. 1207–1223, Aug. 2006.

[20] E. J. Candès and B. Recht, “Exact Matrix Completion via Convex

Optimization,” Found. Comput. Math., vol. 9, no. 6, pp. 717–772, Dec. 2009.

[21] Z. Xu, X. Chang, F. Xu, and H. Zhang, “L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 7, pp. 1013–1027, Jul. 2012.

[22] R. E. Carrillo, L. F. Polania, and K. E. Barner, “Iterative hard thresholding

for compressed sensing with partially known support,” in 2011 IEEE International

Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp.

4028–4031.

[23] T. Blumensath and M. E. Davies, “Iterative Thresholding for Sparse

Approximations,” J. Fourier Anal. Appl., vol. 14, no. 5–6, pp. 629–654, Dec. 2008.

[24] T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Appl. Comput. Harmon. Anal., vol. 27, no. 3, pp. 265–274, Nov. 2009.

[25] K. Bredies and D. Lorenz, “Iterated Hard Shrinkage for Minimization Problems with Sparsity Constraints,” SIAM J. Sci. Comput., vol. 30, no. 2, pp. 657–683, Jan. 2008.

[26] K. K. Herrity, A. Gilbert, and J. A. Tropp, “Sparse Approximation Via Iterative Thresholding,” in 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2006, vol. 3, pp. III–III.

[27] D. L. Donoho, “De-noising by Soft-thresholding,” IEEE Trans. Inf. Theory, vol. 41, no. 3, pp. 613–627, May 1995.

[28] I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. Pure Appl. Math., vol. 57, no. 11, pp. 1413–1457, Nov. 2004.

[29] H. Yu and G. Wang, “A soft-threshold filtering approach for reconstruction from a limited number of projections,” Phys. Med. Biol., vol. 55, no. 13, p. 3905, Jul. 2010.

[30] B. De Man and S. Basu, “Distance-driven projection and backprojection in three dimensions,” Phys. Med. Biol., vol. 49, no. 11, pp. 2463–2475, Jun. 2004.

[31] G. T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed. Dordrecht; New York: Springer, 2009.

[32] T. M. Peters, “Algorithms for Fast Back- and Re-Projection in Computed Tomography,” IEEE Trans. Nucl. Sci., vol. 28, no. 4, pp. 3641–3647, Aug. 1981.

[33] W. Zhuang, S. S. Gopal, and T. J. Hebert, “Numerical evaluation of methods for computing tomographic projections,” IEEE Trans. Nucl. Sci., vol. 41, no. 4, pp. 1660–1665, Aug. 1994.

[34] G. L. Zeng and G. T. Gullberg, “A Ray-driven Backprojector for Backprojection Filtering and Filtered Backprojection Algorithms,” in 1993 IEEE Nuclear Science Symposium and Medical Imaging Conference Record, 1993, pp. 1199–1201.

[35] B. De Man and S. Basu, “Distance-driven projection and backprojection,” in 2002 IEEE Nuclear Science Symposium Conference Record, 2002, vol. 3, pp. 1477–1480.

[36] H. Yu and G. Wang, “Finite detector based projection model for high spatial resolution,” J. X-Ray Sci. Technol., vol. 20, no. 2, pp. 229–238, 2012.

[37] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. Philadelphia: Society for Industrial and Applied Mathematics, 2001.

[38] G. Wang and M. Jiang, “Ordered-subset simultaneous algebraic reconstruction techniques (OS-SART),” J. X-Ray Sci. Technol., vol. 12, no. 3, pp. 169–177, Jan. 2004.

[39] F. J. Schlueter, G. Wang, P. S. Hsieh, J. A. Brink, D. M. Balfe, and M. W. Vannier, “Longitudinal image deblurring in spiral CT,” Radiology, vol. 193, no. 2, pp. 413–418, Nov. 1994.

[40] A. J. Jerri, “The Shannon sampling theorem—Its various extensions and applications: A tutorial review,” Proc. IEEE, vol. 65, no. 11, pp. 1565–1596, Nov. 1977.

[41] E. J. Candes and J. Romberg, “Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions,” Found. Comput. Math., vol. 6, no. 2, pp. 227–254, Apr. 2006.

[42] I. F. Gorodnitsky, J. S. George, and B. D. Rao, “Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm,” Electroencephalogr. Clin. Neurophysiol., vol. 95, no. 4, pp. 231–251, Oct. 1995.

[43] B. K. Natarajan, “Sparse Approximate Solutions to Linear Systems,” SIAM J. Comput., vol. 24, no. 2, pp. 227–234, Apr. 1995.

[44] D. L. Donoho, “Neighborly Polytopes and Sparse Solutions of Underdetermined Linear Equations,” 2005.

[45] D. L. Donoho, “High-dimensional centrally-symmetric polytopes with neighborliness proportional to dimension,” Comput. Geometry, (online), Dec. 2005.

[46] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani, “Pathwise coordinate optimization,” Ann. Appl. Stat., vol. 1, no. 2, pp. 302–332, Dec. 2007.

[47] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, “Least angle regression,” Ann. Stat., vol. 32, no. 2, pp. 407–499, Apr. 2004.

[48] J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Ann. Stat., vol. 29, no. 5, pp. 1189–1232, Oct. 2001.

[49] S. Chen, D. Donoho, and M. Saunders, “Atomic Decomposition by Basis Pursuit,” SIAM Rev., vol. 43, no. 1, pp. 129–159, Jan. 2001.

[50] Z.-B. Xu and H. Zhang, “L1/2 regularization,” Sci. China Inf. Sci., vol. 53, no. 6, 2010.

[51] H. Zou, “The Adaptive Lasso and Its Oracle Properties,” J. Am. Stat. Assoc., vol. 101, no. 476, pp. 1418–1429, Dec. 2006.

[52] R. Chartrand and V. Staneva, “Restricted isometry properties and nonconvex compressive sensing,” Inverse Probl., vol. 24, no. 3, p. 035020, Jun. 2008.

[53] Z.-B. Xu, H.-L. Guo, Y. Wang, and H. Zhang, “Representative of L1/2 Regularization among Lq (0 < q ≤ 1) Regularizations: an Experimental Study Based on Phase Diagram,” Acta Autom. Sin., vol. 38, no. 7, pp. 1225–1228, Jul. 2012.

[54] Z. Xu, X. Chang, F. Xu, and H. Zhang, “L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver,” IEEE Trans. Neural Networks Learn. Syst., vol. 23, no. 7, pp. 1013–1027, Jul. 2012.

[55] K. Bredies, D. A. Lorenz, and S. Reiterer, “Minimization of Non-smooth, Non-convex Functionals by Iterative Thresholding,” J. Optim. Theory Appl., pp. 1–35, Jul. 2014.

[56] G. Marjanovic and V. Solo, “On ℓq Optimization and Matrix Completion,” IEEE Trans. Signal Process., vol. 60, no. 11, pp. 5714–5724, Nov. 2012.

[57] G. Marjanovic and V. Solo, “Sparsity Penalized Linear Regression With Cyclic Descent,” IEEE Trans. Signal Process., vol. 62, no. 6, pp. 1464–1475, Mar. 2014.

[58] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing Sparsity by Reweighted ℓ1 Minimization,” J. Fourier Anal. Appl., vol. 14, no. 5–6, pp. 877–905, Dec. 2008.

[59] P. Rodriguez and B. Wohlberg, “An Iteratively Reweighted Norm Algorithm for Total Variation Regularization,” in Fortieth Asilomar Conference on Signals, Systems and Computers (ACSSC), 2006, pp. 892–896.

[60] B. Wohlberg and P. Rodriguez, “An Iteratively Reweighted Norm Algorithm for Minimization of Total Variation Functionals,” IEEE Signal Process. Lett., vol. 14, no. 12, pp. 948–951, Dec. 2007.

[61] M.-J. Lai, Y. Xu, and W. Yin, “Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed ℓq Minimization,” SIAM J. Numer. Anal., vol. 51, no. 2, pp. 927–957, Mar. 2013.

[62] I. Kozlov and A. Petukhov, “Sparse Solutions of Underdetermined Linear Systems,” in Handbook of Geomathematics, W. Freeden, M. Z. Nashed, and T. Sonar, Eds. Springer Berlin Heidelberg, 2010, pp. 1243–1259.

[63] J. Zhu and X. Li, “A generalized greedy algorithm for image reconstruction in CT,” Appl. Math. Comput., vol. 219, no. 10, pp. 5487–5494, Jan. 2013.

[64] C. Miao, B. Liu, Q. Xu, and H. Yu, “An improved distance-driven method for projection and backprojection,” J. X-Ray Sci. Technol., vol. 22, no. 1, pp. 1–18, 2014.

[65] A. Beck and M. Teboulle, “A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems,” SIAM J. Imaging Sci., vol. 2, no. 1, pp. 183–202, Jan. 2009.

[66] E. Y. Sidky, C.-M. Kao, and X. Pan, “Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT,” arXiv:0904.4495 [physics], Apr. 2009.

[67] E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol., vol. 53, no. 17, p. 4777, Sep. 2008.

[68] E. J. Candes and M. B. Wakin, “An Introduction To Compressive Sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, Mar. 2008.

[69] H. Yu and G. Wang, “Compressed sensing based interior tomography,” Phys. Med. Biol., vol. 54, no. 9, pp. 2791–2805, May 2009.

[70] H. Yu and G. Wang, “SART-Type Half-Threshold Filtering Approach for CT Reconstruction,” IEEE Access, vol. 2, pp. 602–613, 2014.

[71] Z. Wang, A. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.

[72] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-Dose X-ray CT Reconstruction via Dictionary Learning,” IEEE Trans. Med. Imaging, vol. 31, no. 9, pp. 1682–1697, Sep. 2012.

[73] C. Miao and H. Yu, “A general thresholding solution for lp (0 < p < 1) regularized CT reconstruction,” IEEE Trans. Image Process.

[74] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.

[75] J. Zeng, S. Lin, Y. Wang, and Z. Xu, “L1/2 Regularization: Convergence of Iterative Half Thresholding Algorithm,” IEEE Trans. Signal Process., vol. 62, no. 9, pp. 2317–2329, May 2014.

[76] M. Jiang and G. Wang, “Convergence of the simultaneous algebraic reconstruction technique (SART),” IEEE Trans. Image Process., vol. 12, no. 8, pp. 957–961, 2003.

Appendix A

A.1 Distance-driven Kernel

According to [30], the distance-driven kernel uses the overlap length between each source interval and each destination interval to quantify the weighting coefficient for the weighted sum of the source values. As indicated in Figure A.1, suppose that the source signal has sample values $s_1, s_2, \dots, s_N$ with boundary locations $x_1, x_2, \dots, x_N, x_{N+1}$, and that the destination signal has sample values $t_1, t_2, \dots, t_M$ with boundary locations $y_1, y_2, \dots, y_M, y_{M+1}$. In order to quantify the destination signal values, we need to calculate the weighting coefficients of the related source values. For a destination interval $[y_m, y_{m+1}]$ that overlaps the two source intervals $[x_n, x_{n+1}]$ and $[x_{n+1}, x_{n+2}]$, the distance-driven kernel operation gives the destination value as

$$ t_m = (x_{n+1} - y_m)\, s_n + (y_{m+1} - x_{n+1})\, s_{n+1}. \qquad (A.1) $$

The weights can be normalized by the length of the source or destination intervals. Normalizing by the destination interval length results in the normalized distance-driven kernel operation,

$$ t_m = \frac{x_{n+1} - y_m}{y_{m+1} - y_m}\, s_n + \frac{y_{m+1} - x_{n+1}}{y_{m+1} - y_m}\, s_{n+1}. \qquad (A.2) $$

Mapping this operation to the system matrix $A = \{a_{m,n}\}$ in CT reconstruction, we have

$$ a_{m,n} = \frac{x_{n+1} - y_m}{y_{m+1} - y_m}. \qquad (A.3) $$
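
To make the kernel operation concrete, the short Python listing below resamples a 1-D signal by accumulating the overlap length between every pair of source and destination intervals. This is only a minimal sketch under the 1-D boundary-array convention of Figure A.1: the function name, the brute-force double loop, and the small test case are illustrative assumptions rather than the implementation used in this work (the practical distance-driven algorithms of [30] and [64] sweep the two boundary sets in a single interleaved pass). When a destination interval overlaps exactly two source intervals, the normalized output reduces to Eq. (A.2).

import numpy as np

def distance_driven_resample(src_vals, src_bounds, dst_bounds, normalize=True):
    """Resample a 1-D signal with the distance-driven kernel.

    src_vals   : N source sample values s_1, ..., s_N
    src_bounds : N+1 increasing source boundary locations x_1, ..., x_{N+1}
    dst_bounds : M+1 increasing destination boundary locations y_1, ..., y_{M+1}
    normalize  : if True, divide each overlap weight by the destination interval
                 length as in Eq. (A.2); otherwise use raw overlaps as in Eq. (A.1).
    """
    src_vals = np.asarray(src_vals, dtype=float)
    src_bounds = np.asarray(src_bounds, dtype=float)
    dst_bounds = np.asarray(dst_bounds, dtype=float)

    dst_vals = np.zeros(len(dst_bounds) - 1)
    for m in range(len(dst_vals)):
        y_lo, y_hi = dst_bounds[m], dst_bounds[m + 1]
        acc = 0.0
        for n in range(len(src_vals)):
            x_lo, x_hi = src_bounds[n], src_bounds[n + 1]
            # Overlap length between source interval [x_lo, x_hi] and
            # destination interval [y_lo, y_hi]; zero if they do not intersect.
            overlap = min(x_hi, y_hi) - max(x_lo, y_lo)
            if overlap > 0.0:
                acc += overlap * src_vals[n]
        dst_vals[m] = acc / (y_hi - y_lo) if normalize else acc
    return dst_vals

if __name__ == "__main__":
    # One destination interval [0.5, 1.5] straddling two unit source intervals:
    # each source cell contributes an overlap of 0.5, so the normalized result
    # is the average of the two source values, as Eq. (A.2) predicts.
    print(distance_driven_resample([2.0, 4.0], [0.0, 1.0, 2.0], [0.5, 1.5]))  # [3.]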

Figure A.1. Distance-driven kernel: source sample values $s_n, s_{n+1}$ defined on boundary locations $x_n, x_{n+1}, x_{n+2}$ are mapped onto destination sample values $t_m, t_{m+1}$ defined on boundary locations $y_m, y_{m+1}, y_{m+2}$.

Curriculum Vitae

Chuang Miao was born in Jilin Province, China. In 2006, he received his high school diploma from Shiyan High School there. He subsequently attended Jilin University, China, where he began his undergraduate study in Electrical Engineering and received the Bachelor of Engineering degree in 2010. In 2011, he began his graduate studies in Biomedical Engineering at Wake Forest University, where he earned a Master of Science degree along the way in May 2013 and completed the Doctor of Philosophy degree in August 2015.

During his graduate academic career, he pursued research interests in medical imaging. In the CT lab, he worked on analytic CT image reconstruction theories and algorithms, iterative CT image reconstruction theories and algorithms for low-dose CT, and CT imaging models and material decomposition algorithms for spectral CT. He received travel awards to international meetings and presented his work at the International Conference on Image Formation in X-ray Computed Tomography. He also served as the vice president of the Chinese Students and Scholars Association (CSSA) at Wake Forest University for one year. He completed three internships in the medical CT department at GE Healthcare, and some of his work has been patented and/or implemented in products to improve patient outcomes. Below is a list of publications from his graduate studies that have been published or submitted:

Patent

1. B. Nett and C. Miao: Methods and Systems for Normalizing Contrast Across Multiple Acquisitions, US patent, 2014 (pending)

Peer Reviewed Journal

1. C. Miao, H.Y. Yu: Alternating Iteration for lp (0 < p ≤ 1) Regularized CT Reconstruction, Physics in Medicine and Biology, 2015 (submitted)

2. C. Miao, H.Y. Yu: General Thresholding Representation for lp (0 < p < 1) Regularization, IEEE Trans. on Image Processing, 2015 (to appear)

3. J. Li, C. Miao, Z.W. Shen, G. Wang and H.Y. Yu: Robust Frame Based X-Ray CT Reconstruction and Interior Tomography, IEEE Trans. on Medical Imaging, 2014 (submitted)

4. H. Gong, C. Miao, H.Y. Yu and G.H. Cao: Realistic Cone-Beam Micro-CT Datasets for Reproducible Evaluation of Reconstruction Algorithms, JSM Biomedical Imaging, 1(1), 2014

5. C. Miao, B.L. Liu, Q. Xu and H.Y. Yu: An Improved Distance-Driven Method for Projection and Backprojection, J. X-ray Sci. & Technol., 22(1), pp. 1-18, 2014

Proceedings and Abstracts

1. C. Miao, H.Y. Yu: An Optimized Thresholding Reconstruction Approach for the lp (0 < p < 1) Regularization Problem, BMES Meeting, 2014

2. C. Miao, H.Y. Yu: General Thresholding Representation for lp (0 < p < 1) Regularization and Its Application in Computed Tomography, 3rd CT Meeting, 2014

3. H.Y. Yu, C. Miao: General Thresholding Representation for the lp (0 < p < 1) Regularization Problem, IEEE ISBI, 2014

4. H. Gong, C. Miao, H.Y. Yu, G. Cao: Real Phantom Datasets for the Evaluation of Reconstruction Algorithms at Various Dose Conditions, IEEE ISBI, 2014

5. M. Wang, C. Miao, B. Liu and H.Y. Yu: Numerical Observer Based Quantitative Evaluation Method for CT Reconstruction, BMES Meeting, 2013

6. C. Miao, G. Wang and H.Y. Yu: Few-view CT Reconstruction Aided by Low-resolution Projections, Fully 3D, 2013

7. C. Miao, B.L. Liu, Q. Xu and H.Y. Yu: An Improved Distance-Driven Method for Projection and Backprojection, BMES Meeting, 2012

8. C. Miao, B.L. Liu, Q. Xu and H.Y. Yu: Comparative Studies on Distance-driven and Finite-detector-based Projection Models for Iterative Image Reconstruction, 2nd CT Meeting, 2012 (Oral)