Temporal Registration for Low-Quality Retinal Images of the Murine Eye
Lenos Andreou and Alin Achim
32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, August 31 - September 4, 2010
Abstract— This paper presents an investigation into
different approaches for segmentation-driven retinal
image registration. This constitutes an intermediate step
towards detecting changes occurring in the topography
of blood vessels, which are caused by disease progression.
A temporal dataset of retinal images was collected from
small animals (i.e. mice). The perceived low quality of the
dataset employed favoured the implementation of a
simple registration approach that can cope with rotation,
translation and scaling, in the presence of major vascular
dissimilarities, distortions, noise, and blurring effects.
The proposed approach uses a single control point, i.e.
the centroid of the optic disc, and achieves accurate
registration by matching points in the pair of input
images using mean squared error calculation. A number
of alternative, more sophisticated methods have been
explored alongside the proposed one. While these other
methods could prove valuable and perform reasonably
well when applied on good quality images, they generally
fail when using the dataset at hand.
I. INTRODUCTION
Segmentation is concerned with the partitioning of an
image into meaningful regions or objects with distinct
types of content, and is regarded as a necessary
step before further image analysis and information extraction.
In our application, blood vessels and the optic disc are the
objects of interest. Moreover, in retinal image processing,
segmentation can often be employed in order to guide
registration. Retinal image registration aims to achieve the spatial
alignment of two or more images of the same retina,
captured at different moments in time and from different
viewpoints. In general, image registration can be categorised
into four types of approaches: 1) elastic model-based; 2)
Fourier-domain based; 3) correlation-based; and 4) point
matching methods [1]. In retinal images, due to the types of
distortion that occur, the most common approach is point
matching [3]. The distortions that retinal images
suffer from are caused by a number of different reasons [3]:
1) change in subject’s sitting position (large horizontal
translation); 2) change in chin cup position (smaller vertical
translation); 3) head tilting and ocular torsion (rotation);
4) distance change between the eye and the camera (scaling);
5) the three-dimensional nature of the retinal surface (spherical
distortion); and 6) inherent aberrations of the eye and camera
optical systems.

(L. Andreou and A. Achim are with the Department of Electrical and Electronic Engineering, University of Bristol, Merchant Venturers Building, Woodland Road, Bristol BS8 1UB, UK; [email protected].)

When capturing retinal images from mice
optical systems. When capturing retinal images from mice
rather than humans these effects are more pronounced and
consequently cause more distortions and image degradation.
The main goal of this study was to develop an image
analysis application using animal models that would then be
translated into clinical practice for application on patients
suffering from diabetes and uveitis. These are diseases that
can progress unpredictably and very rapidly to threaten
sight.
The following sections progressively introduce the
different approaches attempted in order to achieve the
aforementioned goal. First, vascular segmentation
algorithms using wavelets and active contours are presented;
then, a registration technique that uses optic
disc detection to drive a point matching procedure based on
Mean Squared Error (MSE) calculation is introduced.
II. THEORETICAL PRELIMINARIES
In this section we provide a brief theoretical background on
the main concepts on which our algorithms are based.
1) The ‘a trous’ Wavelet Transform is a fast, shift-invariant wavelet transform. The wavelet is defined as [5]:

\frac{1}{2}\,\psi\!\left(\frac{x}{2}\right) = \phi(x) - \frac{1}{2}\,\phi\!\left(\frac{x}{2}\right)   (1)
where \psi(x) is the wavelet function and \phi(x) is the low-pass
scaling function. Thus, each pixel of the original signal can
be reproduced by the summation of all the wavelet scales
and the smoothed array c_J:

c_{0,k} = c_{J,k} + \sum_{j=1}^{J} w_{j,k}   (2)

where j is the scale and k is the pixel position.
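To make the decomposition in (2) concrete, the following 1-D sketch implements the undecimated transform with the B3-spline kernel usually paired with it in [5]. The border handling (edge clamping) is our own simplification; mirror extension is more common in practice.

```python
import numpy as np

def a_trous(signal, J):
    """Undecimated 'a trous' wavelet transform (1-D sketch).

    Returns the wavelet scales w_1..w_J and the smoothed array c_J,
    so that signal == c_J + sum of all w_j, as in Eq. (2).
    """
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline kernel
    c = signal.astype(float)
    scales = []
    for j in range(J):
        step = 2 ** j  # the "holes": kernel taps spread by 2^j at scale j
        n = len(c)
        smoothed = np.zeros(n)
        for tap, coeff in zip(range(-2, 3), h):
            # clamp indices at the borders (simplified edge handling)
            idx = np.clip(np.arange(n) + tap * step, 0, n - 1)
            smoothed += coeff * c[idx]
        scales.append(c - smoothed)  # wavelet coefficients w_j = c_{j-1} - c_j
        c = smoothed
    return scales, c
```

By construction the scales and the final smoothed array sum back to the input, which is exactly the reconstruction property stated by (2).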
2) K-Means Clustering is a classification algorithm that
groups data into K clusters by minimising the sum of squared
(Euclidean) distances between the data points and their cluster
centroids. The criterion to be iteratively minimised is ([5], ch. 9):

\frac{1}{|I|} \sum_{q \in Q} \sum_{\mathbf{i} \in q} \|\mathbf{i} - \bar{\mathbf{q}}\|^2   (3)

where I is the dataset, |\cdot| denotes the number of instances, q is a
cluster, Q is the partition, \mathbf{i} is a vector of the dataset I, and \bar{\mathbf{q}}
is the cluster mean and, equivalently, the centroid:
\bar{\mathbf{q}} = \frac{1}{|q|} \sum_{\mathbf{i} \in q} \mathbf{i}   (4)

At each iteration \bar{\mathbf{q}} is updated until no further minimisation is
possible [7].
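The two alternating steps described by (3) and (4) can be sketched as follows; the simple spread initialisation is our own choice for illustration (real implementations would use random restarts or k-means++).

```python
import numpy as np

def kmeans(I, K, iters=100):
    """Plain K-means on an (n, d) dataset I, minimising criterion (3)."""
    # deterministic spread initialisation over the (ordered) data
    idx = np.linspace(0, len(I) - 1, K).astype(int)
    centroids = I[idx].astype(float)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        d2 = ((I[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: each centroid becomes its cluster mean, Eq. (4)
        new = np.array([I[labels == k].mean(axis=0) if np.any(labels == k)
                        else centroids[k] for k in range(K)])
        if np.allclose(new, centroids):  # converged: no further minimisation
            break
        centroids = new
    return centroids, labels
```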
3) Multiscale Products (MSP) is simply the product of
the different scales into which an image is decomposed when
the ‘a trous’ transform is applied. The purpose of such an operation is
to enhance edge coefficients. Mathematically, this can be
represented as [10]:

P_J(x, y) = \prod_{i=1}^{J} W_i(x, y)   (5)

where P_J is the correlation image, J is the highest scale at
which the correlation is evaluated, and W_i(x, y) is the wavelet
coefficient at scale i and location (x, y).
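Since edges persist across scales while noise does not, the pointwise product in (5) amplifies the former and suppresses the latter. A minimal sketch (the function name is ours):

```python
import numpy as np

def multiscale_products(wavelet_scales):
    """Eq. (5): pointwise product of wavelet coefficients across scales.

    Edge coefficients are large at every scale, so their product stays
    large; noise is uncorrelated across scales and is damped.
    """
    P = np.ones_like(wavelet_scales[0], dtype=float)
    for W in wavelet_scales:
        P *= W
    return P
```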
4) Active Contours (snakes) represent an energy
minimisation process, in which an initial contour iteratively
deforms so as to minimise an energy functional given by (ch. 6.3, [7]):

E_{snake} = \int_{0}^{1} \left[ E_{int}(\mathbf{v}(s)) + E_{image}(\mathbf{v}(s)) + E_{con}(\mathbf{v}(s)) \right] ds   (6)

where E_{int}, E_{image} and E_{con} are the different energy
contributions, whose parameters provide the capability of
controlling the behaviour of the snake. These contributions
originate from functions that depend, among other factors, on
bending, elasticity, intensity, and edges.
Different snake variants have been developed with
different energy contributions, after M. Kass et al. [8]
proposed this approach. A particular snake algorithm that is
faster and more sensitive to edges and slopes is the Gradient
Vector Flow (GVF) Snake [9]. It computes an edge map in
order to produce an external force field that replaces the
potential force field in the general parametric active
contour formulation introduced by Kass et al.
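Because the functional in (6) is abstract, a minimal greedy-snake sketch may help fix ideas. This is our own didactic simplification, not the GVF snake of [9]: each contour point may hop to the neighbouring pixel that lowers a local energy combining an elasticity term (distance to the neighbours' midpoint) and an image term (rewarding strong gradients); the weights alpha and beta are illustrative.

```python
import numpy as np

def greedy_snake(image, pts, alpha=1.0, beta=1.0, iters=50):
    """Greedy active contour: a toy discretisation of Eq. (6).

    pts is a closed polygon given as (row, col) tuples; each point may
    move to the 8-neighbour that minimises its local energy.
    """
    gy, gx = np.gradient(image.astype(float))
    edge = -(gx ** 2 + gy ** 2)          # lower energy on strong edges
    h, w = image.shape
    pts = [tuple(p) for p in pts]
    for _ in range(iters):
        moved = False
        for i, (y, x) in enumerate(pts):
            (py, px), (ny, nx) = pts[i - 1], pts[(i + 1) % len(pts)]
            best, best_e = (y, x), None
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    cy, cx = y + dy, x + dx
                    if not (0 <= cy < h and 0 <= cx < w):
                        continue
                    # elasticity: squared distance to neighbours' midpoint
                    e_int = (cy - (py + ny) / 2) ** 2 + (cx - (px + nx) / 2) ** 2
                    e = alpha * e_int + beta * edge[cy, cx]
                    if best_e is None or e < best_e:
                        best, best_e = (cy, cx), e
            if best != (y, x):
                pts[i], moved = best, True
        if not moved:   # no point can lower its energy: converged
            break
    return np.array(pts)
```

On a featureless image the image term vanishes and the elasticity term alone makes the contour contract, which is the expected snake behaviour in the absence of edges.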
III. ALGORITHMIC DEVELOPMENT
The current section describes the different new techniques
developed in the attempt to compute an integrated solution
for automatic segmentation-driven retinal image registration.
A. Vessels Segmentation
1) Combined A Trous/K-Means algorithm: After a
careful review of segmentation techniques commonly
employed for retinal images [2][4], the A Trous (or Stationary
Wavelet Transform (SWT)) was selected as the basis for
developing our own algorithm. The other techniques
considered included thresholding, region growing, JSEG,
watershed and several edge detection techniques in the
spatial and frequency domain. The proposed algorithm is a
combination of K-Means Clustering and ‘A trous’ wavelet
transform as shown in the block diagram of Fig. 1.
We proceed by performing colour segmentation via K-
Means in the L*a*b colour space, both for the higher-scale
wavelet coefficients and for the original image.
Fig. 1: “A Trous/K-Means”
We then combine the colour segmentation with the edge
detection and denoising achieved by MSP, and hence
segment visible vessels better than all other techniques
considered.
The results were encouraging, but due to the low quality
of the images in our dataset the segmented vessels were
fragmented, and some small non-vascular regions were
falsely segmented as well.
2) GVF Snake: In order to address the above
fragmentation issue we applied an “area thresholding” first,
to remove the non-vascular regions, and then GVF Snake to
defragment the vessel tree as shown in Fig. 2.
Fig. 2: Defragmentation and cleaning process
We show example results of applying these techniques in the
Results section.
B. Optic disc detection and MSE-based Registration
1) Optic disc detection: The optic disc, sometimes also
referred to as the optic nerve head, is generally the brightest
region of the fundus, and there are many techniques for its
detection [2]. We performed optic disc detection
by employing a sequential process that enables the calculation
of its centroid. The optic disc detection procedure is shown
diagrammatically in Fig. 3.
Fig. 3: Optic disc localization
A description of each block in the diagram above is provided
in the following:
a) HSV represents the conversion of our RGB
images into the HSV colour space, which is more sensitive to
luminance than to chrominance, and more sensitive to high-contrast
regions than to low-contrast ones, being thus more
appropriate for optic disc localisation.
b) Thresholding is applied to the high-magnitude
image pixels, i.e. the largest-value ‘bin’ of the
equalised histogram.
c) Dilation-Erosion represents morphological
operations that connect neighbouring segmented pixels of high
luminance in order to form regions of high luminance.
d) Area Thresholding is needed since the optic
disc is the largest region of connected high luminance pixels.
Thus, all regions smaller than the largest one are discarded.
e) Centroid Calculation is used in order to
calculate the centroid of the largest area.
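The chain b)-e) can be sketched as follows. This is our own minimal reimplementation on a synthetic value channel `v` (step a, the HSV conversion, is omitted); the bin count and the single 3x3 dilation pass are illustrative choices rather than parameters from the paper, and the wrap-around dilation via `np.roll` is only safe when the bright regions lie away from the image borders.

```python
import numpy as np
from collections import deque

def optic_disc_centroid(v, n_bins=16):
    """Sketch of the optic-disc localisation chain (Fig. 3, steps b-e)."""
    # b) threshold: keep pixels falling in the top histogram bin
    edges = np.linspace(v.min(), v.max(), n_bins + 1)
    mask = v >= edges[-2]
    # c) one 3x3 dilation pass to connect neighbouring bright pixels
    d = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            d |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    # d) area thresholding: label 8-connected components, keep the largest
    labels = np.zeros(d.shape, dtype=int)
    sizes = {}
    nxt = 0
    for y, x in zip(*np.nonzero(d)):
        if labels[y, x]:
            continue
        nxt += 1
        q = deque([(y, x)])
        labels[y, x] = nxt
        sizes[nxt] = 0
        while q:
            cy, cx = q.popleft()
            sizes[nxt] += 1
            for ny in range(max(cy - 1, 0), min(cy + 2, d.shape[0])):
                for nx in range(max(cx - 1, 0), min(cx + 2, d.shape[1])):
                    if d[ny, nx] and not labels[ny, nx]:
                        labels[ny, nx] = nxt
                        q.append((ny, nx))
        # all regions smaller than the largest one are implicitly discarded
    best = max(sizes, key=sizes.get)
    # e) centroid of the largest bright region
    ys, xs = np.nonzero(labels == best)
    return ys.mean(), xs.mean()
```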
2) MSE-based Registration:
We start the description of this part of our proposed
algorithm by first depicting again its block diagram below.
Fig. 4: Block diagram of MSE-based Registration
As before, we now provide a succinct description of each
block:
a) Size Compensation: Using the two
segmented optic discs from the detection process, the size
difference between them is calculated and compensated for,
since any size difference can affect the registration of the two
retinal images.
b) Segment Blocks: The image block extracted
for each image contains the optic disc as well as some of the
surrounding area. Consequently, by confining the two
images to be matched to some region around the optic disc,
the vascular spherical distortion effects that would have
misled the matching process are minimised. Also, the
regions contain less non-vessel background that causes
unwanted averaging in the MSE.
c) Rotate Image 2: Rotate the ‘image 2’ block by
‘k’ degrees (‘k’ < 1°).
d) MSE: Compute the MSE, i.e. (7), between the
‘image 1’ block and the rotated ‘image 2’ block. Repeat steps c)
and d) over some degree range (e.g. 0°-360°), keeping track
of the lowest MSE and the degree at which it occurs (the
‘matching degree’):
MSE = \frac{1}{MN} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} (C_{ij} - R_{ij})^2   (7)
e) Register: Translate ‘image 1’ and ‘image 2’
so that the centroids coincide with their centres, and
rotate ‘image 2’ by the ‘matching degree’.
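Steps c)-e) amount to a brute-force rotational search under the MSE of (7). The following self-contained sketch uses nearest-neighbour rotation, which is our own simplification (any interpolating rotation would do), and function names of our choosing:

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre."""
    h, w = img.shape
    t = np.deg2rad(deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[:h, :w]
    # inverse mapping: for each output pixel, find the source pixel
    sy = np.cos(t) * (yy - cy) - np.sin(t) * (xx - cx) + cy
    sx = np.sin(t) * (yy - cy) + np.cos(t) * (xx - cx) + cx
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

def mse(C, R):
    """Eq. (7): mean squared error between two equal-sized blocks."""
    return float(((C.astype(float) - R.astype(float)) ** 2).mean())

def matching_degree(block1, block2, step=1.0):
    """Steps c)-d): rotate block2 over 0-360 degrees in `step` increments,
    tracking the angle giving the lowest MSE against block1."""
    best_deg, best_mse = 0.0, np.inf
    for deg in np.arange(0.0, 360.0, step):
        e = mse(block1, rotate_nn(block2, deg))
        if e < best_mse:
            best_deg, best_mse = deg, e
    return best_deg, best_mse
```

A coarser `step` trades accuracy for speed, which is why the paper reports timings for a full-circle search at a fixed step.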
IV. RESULTS
We performed extensive experiments in order to assess the
quality of our developed algorithms. Different qualitative
assessment measures have been proposed in the literature for
image segmentation/registration results [11]. These
include accuracy/precision, robustness, algorithm
complexity, verification of assumptions, and execution time.
For the purpose of this communication, we assessed our
results visually in light of the aforementioned measures,
as well as by reporting execution times.
A. Evaluation of Retinal Vessels Segmentation Algorithm
Fig. 5: (a) Original retinal image. (b) Focused version.
(c) Segmentation using ‘A Trous/K-Means’ algorithm. (d)
Defragmentation and cleaning process performed on (c).
We first applied the technique presented in Sections III.A.1)
and III.A.2) to the images in our database, one typical result
being shown in Fig. 5. Fig. 5(d) shows that the vessels that
were visible enough to be segmented are almost perfectly
outlined with almost unnoticeable extra segmentation. This
was a consistent result, only the degree of fragmentation
varying across the images in our database.
Nevertheless, the whole process is relatively complex
compared with the other techniques mentioned in Section
III.A. The execution time was 60.12s on an Intel Centrino
Core 2 @ 2GHz, for a 500x500 retinal image.
Although relatively good from a segmentation point of view,
this method did not allow any reasonable registration result.
This was largely due to the fact that the major assumption
made in implementing it was that the vessels must cross.
Any vessel-based point matching technique relies on finding
vascular crossing points to serve as anchor points in the
registration process, which, due to the low quality of our
images, was rarely the case.
B. Evaluation of Image Registration Algorithm
Fig. 6: (a) Image 1. (b) Image 2. (c) Image 1 block. (d) Image
2 block. (e) Reference Image 1. (f) Registered Image 2.
Fig. 7: (a) Image 1 segmented optic disc. (b) Image 2
segmented optic disc. (c) Ref. Image 1. (d) Reg. Image 2.
We first assessed the quality of the optic disc detection part
of our algorithm, both for the case of murine eye images
(i.e. Fig. 6) and for images of the human retina (available
online [12]) (i.e. Fig. 7). By observing Fig. 6 (c) and (d) it can
be seen that this has been accurately achieved. In terms of
MSE-based registration, clearly both the mice images and the
human images were registered with high accuracy.
The execution time of our algorithm for a full 0°-360°
search with a step of 1° was 16.42s on an Intel Centrino Core
2 @ 2GHz, for a 600x600 retinal image.
V. CONCLUSIONS AND FUTURE WORK
We presented the results of a study aimed at developing
temporal registration of retinal images based on a
preliminary segmentation of blood vessels or of the optic disc.
The registration approach that we devised is able to cope
with rotational, translational and size differences without
relying on the vessels for point matching. Thus, the method
can be used not only for typical good-quality retinal
images but, more importantly, for cases when registration is
not possible using the segmented vessels. Furthermore, the
optic disc detection on which the algorithm actually relies is
of theoretical importance in its own right, since it constitutes
the starting point of many other processing algorithms.
Our current work focuses on designing change detection
algorithms for the temporally pre-registered retinal images,
to enable automatic correlation of vessel topography
changes with specific disease progression. Results will be
presented in a future communication.
ACKNOWLEDGMENT
The authors would like to thank Dr Lindsay Nicholson from
the School of Medical Sciences at the University of Bristol
for providing the retinal images used in this study.
REFERENCES
[1] L. G. Brown: “A survey of image registration techniques,”
Computing Surveys, vol. 24, no. 4, pp. 325–376, 1992.[2] N. Patton et al.: “Retinal image analysis: Concepts,applications and
potential,” Progress in Retinal and Eye Research, vol.25, pp. 99
[3] F. Laliberté, L. Gagnon and Y. Sheng: “Registration and Fusion of Retinal Images—An Evaluation Study,” IEEE Trans. Med. Imag., vol. 22,
pp. 661–673, May 2003.
[4] M. S. Mabrouk, N. H. Solouma and Y. M. Kadah: “Survey of Retinal Image Segmentation and Registration,” GVIP Journal, vol. 6, issue 2, 2006.
[5] J.-L. Starck, F. Murtagh: “Astronomical Image and Data Analysis,”
2nd ed. Springer–Verlag Berlin Helderberg, 2006[6] Lloyd, P.: “Least squares quantization in PCM,” Technical report, Bell
Laboratories,1957.
[7] M. S. Nixon and A. S. Aguado: “Feature Extraction and Image Processing,” 1st ed. Butterworth-Heinemann, 2002.
[8] M. Kass, A. Witkin, and D. Terzopoulos: “Snakes: Active contour
models.” Int. J. Computer Vision, 1(4):321–331, 1987.[9] C. Xu and J. L. Prince: “Gradient Vector Flow: A New External Force
for Snakes,” IEEE Proc. Conf. on Comp. Vis. Patt. Recog.(CVPR' 97).
[10] J.-C. Olivo-Marin:“Extraction of spots in biological images using multiscale products,”Pattern Recognition, Vol. 35, No. 9, pp.1989
[11] J.B.A. Maintz and M.A. Viergever: “A Survey of medical image registration,” Oxford Uni. Press, Med. Imag. Analysis
[12] Image Sciences Institute. 2001-2009: “
Vessel Extraction: DRIVE database.” [Online] (Updated 29 December 2007). Available at: http://www.isi.uu.nl/Research/Databases/
November 2009].