
Annals of the BMVA Vol. 2012, No. 9, pp. 1–20 (2012)

Microvasculature Segmentation of Co-Registered Retinal Angiogram Sequences

Shuoying Cao 1, Anil Bharath 1, Kim Parker 1, Jeffrey Ng 1, John Arnold 2, Alison McGregor 3 and Adam Hill 1

1 Bioengineering Department, Imperial College London, London, UK
2 Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
3 Faculty of Medicine, Surgery and Cancer, Imperial College London, London, UK
〈[email protected]〉

Abstract

In this paper, we extend our published work [Cao et al., 2011] on methods to support the analysis of both vessel bed segmentation and microvascular centreline extraction in co-registered retinal angiogram sequences. These tools should allow researchers and clinicians to estimate and assess the (micro-)vessel diameter, capillary blood volume and microvascular topology for early stage disease detection, monitoring and treatment. We first discuss the problem of 2D+t intra- and inter-sequential registration of retinal angiograms. On the spatio-temporally registered sequential angiograms, we then take into account both the global and local context to enable vessel bed segmentation and microvascular centreline extraction. Global vessel bed segmentation is achieved by combining phase-invariant orientation fields with neighbourhood pixel intensities in a patch-based feature vector for supervised learning. This approach is evaluated against other benchmarks on the DRIVE database [Niemeijer et al., 2004]. Local microvascular centrelines within Regions-of-Interest (ROIs) are segmented by linking the phase-invariant orientation measures with phase-selective local structure features. For each individual frame in the sequence, the extracted microvascular centrelines can be compared either against an objective ground truth to assess the quality and accuracy of registration, or with other frames intra-sequentially and inter-sequentially for non-invasive clinical monitoring of the micro-circulation. The combination of registration and segmentation has the potential to detect the presence of both microemboli and pathological structural alterations in a longitudinal study.

1 Introduction

Microvascular blood flow in the retina can be observed non-invasively, allowing clinicians to infer from the ocular circulation some disease-related characteristics in human haemodynamics. The diameter of retinal arterioles typically ranges from 20 to 200 µm [Hughes, 2007].

© 2012. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.


Thus, sufficiently fine-resolution imaging and dependable image analysis are essential to resolve retinal microvasculature at the scale of 10 µm. Fluorescein angiography offers such resolution in clinical assessment of the retina. However, the nature of this image modality introduces technical challenges (elaborated in Section 2) in both registration and segmentation. We address the co-registration problem by pairing a ‘pairwise’ RANSAC-iterated-homography (Section 4.1) with a local-to-global hierarchical spatio-temporal ‘joint’ registration framework (Section 4.2). We address the segmentation problem by applying adaptive (steerable) filters to estimate the orientation of vessels in the global context, and hence constructing patch-based feature vectors (Section 5.1). Whilst previous demonstrations of steerable filters have been reported [Freeman and Adelson, 1991], no evaluation has yet been done in the context of retinal vasculature. These phase-invariant orientation features can be used in conjunction with supervised machine learning models to segment the global retinal vessel bed (Section 5.2); the phase of the steered filters also allows us to trace the microvascular centrelines (Section 5.3) in a local context. The chart illustrating the hierarchical stages of our algorithms can be found in Figure 1.

Our contributions are: (1) an automated pre-processing step that jointly registers inter- and intra-sequential angiograms to allow subsequent high-resolution image processing, (2) an automatic framework that relies on patch-based feature vectors to segment the global retinal vessel bed and to “link-while-extract” (sub)pixel-wise local microvascular centrelines, and (3) a quantitative objective measure that evaluates registration results from a clinical perspective.

Figure 1: Top: Hierarchical stages of processing for sequential registration; Middle: Hierarchical stages of processing for segmentation; Bottom: Hierarchical stages of processing for objective validation of registration. Note that the arrows in the validation scheme denote the temporal flow of each stage.


Our paper is organized as follows. In Section 2, we provide concise literature reviews of the medical context (Section 2.1) and summarize technical advancements in related registration, validation and segmentation methods. In Section 4, we first propose a local-to-global hierarchical spatio-temporal joint registration framework for inter- and intra-sequential angiogram registration. In Section 4.3, we present an evaluation method to assess the registration quality. To achieve this more objective and automated assessment, we construct, in Section 5.1, a phase-invariant orientation field using steerable wavelet filters that differentiate vessels from background. In Section 5.2, we combine the phase-invariant structural information with neighbourhood pixel intensity to segment the global retinal vessel bed. In Section 5.3, we combine these phase-invariant features with pixelwise phase-selective responses to identify and extract the local microvascular centreline. In Section 6, we present the comparison of our vessel segmentation algorithm with other benchmark algorithms, and the error of our registration results using our centreline evaluation method.

2 Literature Review

2.1 Ocular Haemodynamics and Retinal Angiography

Attempts to study the retinal vascular system date back to the origins of ophthalmoscopy in the 19th century [Weinreb and Harris, 2009] [Rechtman et al., 2003] [Harris et al., 1998]. Over the years, clinical and research assessment techniques have evolved from the physical description of the retinal arteries and veins to quantitative measurements of the associated ocular haemodynamic parameters [Hughes, 2007] [Arevalo, 2009] [Venkataraman et al., 2010]. These quantitative parameters, describing the characteristics of retinal vasculature or ocular blood perfusion, can be influenced by the presence of diseases such as hypertension, diabetes and glaucoma. [Stanton et al., 1995] suggested that hypertension reduces the average vessel diameter; [King et al., 1996] suggested that hypertension increases the arteriolar length-to-diameter ratio; [Grunwald et al., 1986] found that diabetes increases venous diameters and reduces blood velocity; [Sinclair, 1991] reported that diabetes increases clinically perceived leukocyte velocity; [Arend et al., 1991] suggested that diabetes prolongs arteriovenous passage time and reduces capillary blood flow velocities; [Langham et al., 1991] showed that glaucoma lowers pulsatile ocular blood flow.

Since its inception [Novotny and Alvis, 1961], fluorescein angiography has become a well-established technique for clinical assessment of the retina. The fluorescein dye concentration in the retinal vessels is quantified by the distribution and the variation of the fluorescence intensity. The passage of fluorescein dye through the retina thus reflects both the vessel structure and the retinal blood flow velocity. A fundus camera continuously photographs the retina from the onset of dye injection over a period of 3 to 5 minutes. This captures the filling and the subsequent elimination of dye in the retinal vessels. Determined by capture time from point injection and filling of the vessels, the angiogram sequences are roughly divided into arterial phase, arteriovenous phase, venous phase and recirculation phase [Hipwell et al., 1998]. The complete filling of large retinal veins in the venous phase results in the maximum fluorescence intensity of the sequence.

For early-stage detection, however, subtle changes in the retinal microvasculature require sufficiently fine-resolution imaging and dependable image analysis to diagnose vessels at the micrometer scale. Standard parameters such as “AVP time”, “mean dye velocity”, and “time to maximum image” [Hughes, 2007] [Hipwell et al., 1998] [Arend et al., 1991] were developed to investigate the vascular network in localized regions near the optic disc, but none of these analyses capture individual microvascular vessels.

[Blauth et al., 1986] first suggested that comparison of pre- and post-operative retinal fluorescein angiograms might indicate the existence of microemboli that could be associated with cognitive impairment or even morbidity. However, only the macula region of one pre-op and one post-op image were used. The analysis of sequential retinal angiograms has not been widely exploited; the dynamics of blood flow, the inter-individual variability of retinal vascular patterns, and the temporal diffusion of the dye are either difficult to reliably compensate for or computationally too demanding. However, with retinal arterioles typically ranging from approximately 20 to 200 µm in diameter [Hughes, 2007], fluorescein angiography allows visualization of the microvasculature down to 10 µm in diameter, which is not yet achievable by either funduscopy or transcranial Doppler ultrasound (TCD). By recruiting all frames in both sequences, it is possible to study retinal microvasculature dynamics and even identify small, but potentially significant, embolic events.

2.2 Retinal Angiogram Co-registration

The purpose of registration for our problem is to estimate and model the distortion between frames in order to map each angiogram onto one common coordinate system (“reference”). In the medical imaging context, it allows clinicians to review and compare spatially and temporally aligned medical scans for diagnosis and treatment. Various review papers [Crum et al., 2004] [Zitová and Flusser, 2003] [Lester and Arridge, 1999] provide in-depth coverage of the subject.

In retinal angiogram analysis, feature-based methods rely on the stability of the feature descriptors and the accuracy of their transformation model estimation to achieve successful registration. Often, vascular bifurcations and crossing points are extracted as features [Zana and Klein, 1999] [Martinez-Perez et al., 1999] [Aguilar et al., 2009]. An improved method [Yang et al., 2007] extracts both “corner” points and “face” points to improve the point-feature distribution. Other algorithms such as the “Scale Invariant Feature Transform” (SIFT) [Lowe, 2004] and “Speeded Up Robust Features” (SURF) [Cattin et al., 2006] have also been suggested. For transformation model estimation, schemes such as “Iterative Closest Point” (ICP) [Can et al., 2002] [Stewart et al., 2003] [Yang et al., 2007] [Almhdie et al., 2007] are often compared with “RANdom SAmple Consensus” (RANSAC) [Fischler and Bolles, 1981] [Cao et al., 2011] [Aguilar et al., 2009]. Accurate initialization of matching point correspondence is essential for ICP, as it has a narrow domain of convergence. Therefore, [Yang et al., 2007] used SIFT descriptors to guarantee at least one “correct” key point match.

Geometric distortion, radiometric degradation, and additive noise corruption contribute to the difficulty in registration. Problems specific to clinical retinal angiography are: the photographer's bias in image capture, the patient's involuntary movement, and the 3D-to-2D warping between the retina and the camera. Existing techniques, for example, approximate the retinal surface by a sphere [Can et al., 2002], assume a perspective distortion [Lee et al., 2010], or model the intensity variation to address uneven illumination [Nunes et al., 2004].

2.3 Objective Performance Validation

Registration error can be determined by the extent of misalignment between the registered image and the reference image. Conventional ground truth is obtained from manual registration [Zana and Klein, 1999]. This form of reference standard is subject to inter- and intra-observer variability. Therefore, [Stewart et al., 2003] linked lines between feature locations on the original images as a relatively “unbiased” ground truth. However, both the linking algorithm that connects vessel fragments to subpixel precision and the similarity measure that optimally matches to subpixel accuracy between different frames vary from case to case in establishing error metrics. Therefore, [Lee et al., 2010] cast an evenly spaced virtual grid that intersects with vessels to obtain ground-truth pixels. This, however, does not sufficiently take account of the location-dependent nature of pixel information. Pixels located near the capillary-rich macula region generate significantly more clinical interest than those near the “field-stop”. Furthermore, their proposed “error tracing” route theoretically favours strategies that pair both the forward and the inverse registration functions to optimize the “net offset” rather than providing a true assessment of the registration algorithm on its own. Additional processing is also required to address the uneven global illumination.

2.4 Vessel Segmentation

Extensive research in this area has been published. Methods include: Laplacian-of-Gaussian filtering with binary image morphology [Zana and Klein, 1999]; two-dimensional Gaussian models of vessel intensity space [Chaudhuri et al., 1989]; exploratory vessel tracing [Can et al., 2002]; region-growing using Hessian matrix maxima in scale-space [Martinez-Perez et al., 1999]; and maximum likelihood estimates from multi-scale filter outputs in scale-space [Ng et al., 2010]. Yet, most algorithms are developed on either red-free fundus images ([Chaudhuri et al., 1989], [Niemeijer et al., 2004] and [Ng et al., 2010]) or fluorescein angiograms ([Zana and Klein, 1999], [Can et al., 2002] and [Martinez-Perez et al., 1999]). Thus, there is a need for techniques that take into account the scale difference between the two image modalities yet employ parameters that are unique and distinctive enough to classify both the entire vessel bed (red-free) and the micro-vessel (fluorescein) centrelines.

Notably, Niemeijer [Niemeijer et al., 2004] proposed a supervised pixel classification method where a 31-dimensional feature vector is constructed for each pixel in the image, and a kNN classifier is trained with these feature vectors. Their study also showed that the kNN classifier performed better than a linear or a quadratic classifier. This architecture first appeared as convolutional neural nets [Lecun et al., 1998] used for recognizing handwritten characters.

2.5 Centreline Extraction

Methods for line detection, tracing and extraction include active contours [Kass et al., 1988], the Hough transform [Duda and Hart, 1972] and scale-space filters (Principal Component Analysis (PCA) of the Hessian matrix) combined with region growing [Martinez-Perez et al., 1999]. To achieve subpixel accuracy in centreline positions, [Steger, 1998] used Gaussian partial derivative kernels to characterize the first and second directional derivatives of curvilinear lines. To detect regions of high directional curvature, the scale of these popular filter kernels is vital for accuracy. If the standard deviation of the Gaussian kernel is too small, the vessel centrelines cannot be detected.

3 Image Modalities

3.1 DRIVE database

The fundus images (see Figure 2 (left)) for the DRIVE (Digital Retinal Images for Vessel Extraction) database [Staal et al., 2004] [Niemeijer et al., 2004] were obtained from a diabetic retinopathy screening program in The Netherlands. The screening population consisted of 453 subjects between 31 and 86 years of age. Each image was JPEG encoded, the common practice in screening programs. Of the total 40 images in the database, 7 contain pathology, such as exudates, hemorrhages and pigment epithelium changes. The images were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45 degree field of view (FOV). Each image was captured using 8 bits per color plane at 768 × 584 pixels. The FOV of each image was circular with a diameter of approximately 540 pixels. The images were cropped around the FOV, and a mask image was provided that delineates the FOV.

The set of 40 images was divided into a test and a training set, both containing 20 images. Three observers, instructed and trained by an experienced ophthalmologist, manually segmented all the pixels for which they were at least 70% certain that the pixels belonged to the retinal vasculature. For the training set, a single manual segmentation was carried out, shared between two observers, with one segmenting 14 images and the other segmenting the remaining 6 images. For the test set, two manual segmentations were carried out. The first segmentation was shared between the prior two observers and was used as the gold standard. They marked 577,649 pixels as vessel and 3,960,494 as background (12.7% vessel). The second segmentation was completed by a third observer alone, and was used to compare computer-generated segmentations with those of an independent manual segmentation. This segmentation resulted in 556,532 pixels marked as vessel and 3,981,611 as background (12.3% vessel).

Figure 2: Left: A fundus image from the DRIVE database test image set; Right: A single fluorescein angiogram from a pre-operative intervention sequence in our clinical data set.

3.2 Sequential Fluorescein Angiograms

The retinal angiogram (see Figure 2 (right)) sequences we employed in our analysis were collected in a standardized clinical protocol (both anaesthetic and surgical) before and after coronary artery bypass graft surgery. A modification of the procedure allowed a larger number of image frames to be obtained within a shorter time frame, thereby minimizing the dye usage and the time taken to scan the patient.

The right pupil of the patient was first dilated with 1% tropicamide eye drops, and then exposed by an eyelid retractor. The cornea was kept moist with balanced saline solution applied every 10 seconds. A modified and vertically mounted Zeiss (Oberkochen) 30 degree fundus camera was moved to the head of the table. When the camera was positioned and focused on the macula, 5 ml of 20% sodium fluorescein dye was injected as a rapid bolus into an intravenous cannula sited prior to the surgical procedure. As the dye took 10-15 seconds to circulate to allow the optimum images to be captured, the sequential exposures were first made sparsely over the initial period of 15 seconds post dye injection, and then with increasing yet uneven intervals (maximum rate 0.7 seconds per frame) over a period of approximately 50 seconds. The eyelid retractor was then removed and the camera moved away from the operating table. The whole procedure took 1-2 minutes.

Procedures were carried out under the ethics guidelines of Hammersmith Hospital. Patient recruitment followed certain exclusion criteria for control purposes: the presence of diabetes mellitus, neurological, psychiatric, ocular, or carotid disease, or cardiac failure. Furthermore, pre- and post-operative ophthalmic studies (8 days before operation and 8 weeks following surgery) were identical and consisted of visual acuity (Snellen chart), visual fields (Friedmann analyser) and bilateral colour fundus photography.

The pre- and post-operative angiograms correspond to an area of a 30 degree field of the retina (31 mm²) centred on the fovea. Each individual frame was scanned by a high-resolution JVC TV camera and digitized to a 4288-by-2848 array of 8-bit pixels using the GOP-302 image analyzer (Teragon-Context AB). We used in total 12 retinal angiogram sequences from 6 patients (Figure 3), with approximately 32 frames per angiogram sequence, for image analysis, i.e. registration and centreline segmentation.

4 Co-registration

4.1 Pairwise Registration

At the pairwise image level, we combine projective RANSAC with a quadratic homography transformation. This is described in the 6 steps below:

1. Apply a circular Hough Transform to extract the “field-stop” mask.
2. Pre-process the image by contrast enhancement (histogram equalization) within the field-stop.
3. Extract point features (vessel bifurcations and crossings) using the Harris corner measure [Harris and Stephens, 1988] with a self-threshold.
4. Putatively match pairs of feature points using normalized cross-correlation.
5. Optimize the transformation mapping using iterative RANSAC [Fischler and Bolles, 1981].
6. Select a projective model by assessing registration parameters.

The first step, which extracts the binary “field-stop” mask, is essential for the second step of pre-processing. It is commonly observed that the low dye concentration in the retinal blood vessels both at the beginning and the end of the sequence requires temporary adjustment of the dynamic range of pixel intensities. This boosts local contrast and improves the corner strength output of the Harris corner detector in step 3. We modified the corner threshold to represent the spatial average of the Harris corner measure within the field-stop. This improves the algorithm's adaptability and automation. We decided on the Harris corner measure for feature detection based on the conclusions of a comparative study [Schmid et al., 2000] of 6 point detectors: Harris-based detectors demonstrated the best performance under image rotation, illumination variation and viewpoint change, against intensity- and contour-based detectors. Our features correspond to vessel bifurcations and crossings distributed across the angiogram.

At step 4, these feature points are then putatively matched by cross-correlating the neighbourhood (defined by the size and the shape of the window) pixel intensities and orientations at each feature point [Zitová and Flusser, 2003] [Hartley and Zisserman, 2004]. We used a 51-by-51 window to allow for larger spatial translations between frames. This step removes non-matching feature pairs and reduces the computational load at the subsequent steps. We then apply RANSAC, at step 5, to iteratively estimate the “best-fit” projective model applicable to most putative matches (error < 10^-2, maximum number of trials = 1000). The symmetric transfer error of the homography applied to the matched “feature” points is used as the cost measure for optimization. At step 6, we define the inliers-to-outliers ratio as the comparison between the number of pairs that can be described by the “best-fit” model within an offset threshold and those that cannot. This ratio helps to select the appropriate transformation model in a hierarchical order, from affine [Hartley and Zisserman, 2004] to quadratic [Can et al., 2002].
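To make the pipeline concrete, the following sketch (Python with OpenCV and NumPy, not the authors' implementation) outlines steps 3–6, assuming 8-bit grayscale frames ref and flt that have already been contrast-enhanced within the field-stop mask (steps 1–2); the helper match_by_ncc is a hypothetical stand-in for the 51-by-51 normalized cross-correlation matching of step 4.

import cv2
import numpy as np

def harris_points(img, mask, max_pts=500):
    # Harris corner strength; self-threshold = spatial mean of the measure inside the field-stop.
    h = cv2.cornerHarris(np.float32(img), blockSize=5, ksize=3, k=0.04)
    h[mask == 0] = 0
    ys, xs = np.nonzero(h > h[mask > 0].mean())
    order = np.argsort(h[ys, xs])[::-1][:max_pts]
    return np.stack([xs[order], ys[order]], axis=1).astype(np.float32)

def register_pair(ref, flt, mask):
    pts_ref = harris_points(ref, mask)
    pts_flt = harris_points(flt, mask)
    # Step 4 (hypothetical helper): putative correspondences from 51-by-51 NCC windows.
    src, dst = match_by_ncc(flt, ref, pts_flt, pts_ref, win=51)
    # Step 5: RANSAC-fitted projective homography; threshold is the reprojection error in pixels.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0, maxIters=1000)
    # Step 6: the inlier-to-outlier ratio guides the affine-versus-quadratic model choice.
    ratio = float(inliers.sum()) / max(1, len(inliers) - int(inliers.sum()))
    return H, ratio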

Notably, the quadratic surface model with 12 degrees of freedom, defined in Equation 1, incorporates a geometric estimate for the near-spherical shape of the retinal surface. However, the mapping with a higher-order model (quadratic) is sensible only if the lower-order (affine) registration has been successful.

(x, y) and (x', y') denote the feature-point coordinates from two different viewpoints, the reference frame and the floating frame respectively. The matrix entries θ_ij represent functions of the camera, the retinal surface and rigid transformation parameters:

\[
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix}
\theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} & \theta_{15} & \theta_{16} \\
\theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} & \theta_{25} & \theta_{26}
\end{pmatrix}
\begin{pmatrix} x^2 & xy & y^2 & x & y & 1 \end{pmatrix}^{T}
\tag{1}
\]
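As an illustration of Equation 1, a quadratic mapping of this form can be fitted by linear least squares once at least six correspondences survive RANSAC; the sketch below (NumPy, with the stated assumptions) shows the fit and the forward warp of floating-frame coordinates.

import numpy as np

def quadratic_basis(x, y):
    # One row per point: (x^2, xy, y^2, x, y, 1), matching Equation 1.
    return np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)

def fit_quadratic(src, dst):
    # src, dst: (N, 2) matched (x, y) and (x', y') coordinates, N >= 6.
    A = quadratic_basis(src[:, 0], src[:, 1])            # (N, 6) design matrix
    theta, *_ = np.linalg.lstsq(A, dst, rcond=None)      # (6, 2): one column per output coordinate
    return theta.T                                       # (2, 6) matrix of theta_ij

def apply_quadratic(theta, pts):
    # Warp (N, 2) floating-frame points into the reference frame.
    return quadratic_basis(pts[:, 0], pts[:, 1]) @ theta.T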

To generalize this approach, we suggest that the ROC (sensitivity versus 1−specificity) characteristic be used to decide what performance trade-offs are acceptable between True Positive (TP) and False Positive (FP) rates.

4.2 Joint Registration

To analyse temporal information both within a single retinal angiogram sequence (intra-sequence) and across two different sequences taken before and after the operation (inter-sequence), we need a step-by-step process that first aligns each pixel intra-sequentially and then cross-aligns the same pixel inter-sequentially. Temporal registration requires maximizing the point correspondence between similar structural features, while still allowing us to differentiate, detect or even monitor pathological changes. For each patient, the last post-op frame was acquired several hours after the first pre-op frame. This increases the chance that the vasculature alters either in width or in curvature, in addition to the natural variability of blood vessels.

For the n-th frame in a sequence S of length N, let (x_n, y_n) denote the location of each pixel in the frame coordinates at time t_n. The image is denoted by the function:

\[
f_n := f_n(x_n, y_n; t_n), \quad n \in \{1, 2, \ldots, N\}
\tag{2}
\]

Consider two unregistered angiogram sequences S^(A) and S^(B), acquired with frame-specific spatial coordinates relative to the camera lens, at unknown points in time relative to the cardiac cycle, and with non-uniform frame-to-frame intervals (from less than a second to tens of seconds) along each sequence:

\[
S^{(A)} = \left\{ f^{(A)}_n \right\}_{n=1,2,\ldots,N_A} \quad \text{and} \quad S^{(B)} = \left\{ f^{(B)}_n \right\}_{n=1,2,\ldots,N_B}
\tag{3}
\]


We first spatially register each individual image f^(A)_n in S^(A) to a local reference spatial coordinate system defined by frame f^(A)_lr. This individual-to-local reference registration is also applied to sequence S^(B) with frame f^(B)_lr as its local reference. We then register the two local references f^(A)_lr and f^(B)_lr separately to one global reference f_gr. This local-to-global reference transformation is further combined with the prior individual-to-local reference transformations to give the individual-to-global reference transformation that allows S^(A) and S^(B) to be co-registered to one global reference f_gr.

We adopt the clinical practice of selecting the darkest image as the local reference. Fluorescein angiograms are negative-contrasted: the higher the dye concentration, the “darker” the frame appears. Thus, the darkest frame is essentially the frame with the highest vessel contrast and visibility. Our algorithm computes the sum of the pixel intensities within the field-stop and selects the frame at the peak of the dye-time course (when the image is darkest) in each sequence:

\[
f_{lr} = \{ f_{n^*} \}, \quad n^* = \arg\min_{n \in \{1,2,\ldots,N\}} \langle f_n, M_n \rangle
\tag{4}
\]

with ⟨·, ·⟩ denoting a spatial inner product and M_n a binary spatial weighting function that is unity for points (x_n, y_n) within the field-stop region and 0 outside.
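A minimal sketch of Equation 4 in Python, assuming the frames of one sequence and their binary field-stop masks are available as arrays:

import numpy as np

def select_local_reference(frames, masks):
    # frames: list of 2D arrays f_n; masks: matching binary field-stop masks M_n.
    sums = [float((f * (m > 0)).sum()) for f, m in zip(frames, masks)]
    n_star = int(np.argmin(sums))            # darkest frame = peak dye concentration
    return n_star, frames[n_star]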

The individual-to-local reference registration is given as:

\[
f'_n := f'_n\left( x'_{(lr)}, y'_{(lr)}; t_n \right) = R_{n(lr)}(f_n), \quad n \in \{1, 2, \ldots, N\}
\tag{5}
\]

where individual frame f_n is mapped to the spatial coordinate system of local reference f_lr by function R_n(lr).

The local-to-global reference transformation is given as:

\[
f'_{lr} := f'_{lr}\left( x'_{(gr)}, y'_{(gr)}; t_{lr} \right) = R_{lr(gr)}(f_{lr})
\tag{6}
\]

where the local reference f_lr is mapped to the spatial coordinate system of global reference f_gr by function R_lr(gr).

Lastly, the individual-to-global reference registration can be combined as:

\[
f''_n := f''_n\left( x''_{(gr)}, y''_{(gr)}; t_n \right) = R_{lr(gr)}\left( f'_n \right), \quad n \in \{1, 2, \ldots, N\}
\tag{7}
\]

where individual frame f_n is mapped to the spatial coordinate system of the global reference f_gr.
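Under the simplifying assumption that both R_n(lr) and R_lr(gr) are represented as 3-by-3 homographies (the quadratic case would instead require explicit coordinate remapping, e.g. with cv2.remap), the individual-to-global warp of Equation 7 is a matrix composition followed by resampling; a sketch:

import cv2
import numpy as np

def to_global(frame, H_n_to_lr, H_lr_to_gr, out_shape):
    # Compose R_lr(gr) with R_n(lr), then resample the frame onto the global grid.
    H_n_to_gr = H_lr_to_gr @ H_n_to_lr
    return cv2.warpPerspective(frame, H_n_to_gr, (out_shape[1], out_shape[0]))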

4.3 Objective Validation of Registration Performance

We extend the concept of linking lines between features, but instead consider the microvascular centreline as a more structurally stable, patient-specific and objective ground truth during fluorescein angiography. However, the estimate of the centreline location of the vessels requires subpixel resolution and high accuracy. In practice, fine vessel structures may not be captured in all frames within a sequence. It is commonly observed that capillaries may “disappear” from the previous frame and then “re-emerge” in the following one. Furthermore, due to the time delay in the passage of dye, angiogram frames at the beginning and the end of a sequence reveal significantly less information on the detailed microvasculature.


Therefore, we choose the temporal average of the registered sequence(s) as a more perceptually accurate representation of the retinal microvasculature, as it takes into account previous and subsequent frames in one or more sequences (Figure 3). For globally registered sequence(s) {f''_n}, n ∈ {1, 2, ..., N}, with N frames, the temporal average across the sequence(s) is defined as:

\[
f_{\text{average}} = \frac{1}{N} \sum_{n=1}^{N} f''_n
\tag{8}
\]

Figure 3: Temporal average f_average of registered sequences (pre- and post-coronary artery bypass graft surgery) for each patient (a total of 6 patients).

To reduce statistical bias in our ground-truth estimate, we employ the leave-one-out cross-validation (LOOCV) strategy [Kohavi, 1995]. To evaluate the registration accuracy for frame f''_n, we extract its ground-truth microvasculature on a modified leave-one-out average frame g_n:

\[
g_n = \frac{N}{N-1} f_{\text{average}} - \frac{1}{N-1} f''_n
\tag{9}
\]
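Equations 8 and 9 translate directly into array operations; a sketch, assuming the registered sequence is stacked as an (N, H, W) array:

import numpy as np

def temporal_average(frames):
    # frames: (N, H, W) stack of globally registered frames f''_n (Equation 8).
    return frames.mean(axis=0)

def leave_one_out(frames, n):
    # Leave-one-out average g_n used as the ground-truth frame for frame n (Equation 9).
    N = frames.shape[0]
    return (N / (N - 1)) * temporal_average(frames) - frames[n] / (N - 1)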

For our proposed validation method, we first perform centreline extraction of the retinal vessel bed of all our co-registered pre- and post-op sequences, and of each corresponding leave-one-out average {g_n}, n ∈ {1, 2, ..., N}. We then select a patch (ROI) near the macula region with a high density of microvasculature. For each frame f_n, we establish its ground-truth vasculature as the centreline locations v_n(true)(x, y) within the ROI on g_n. Lastly, we establish correspondence by cross-correlation between the intensity fields of the averaged, co-registered centrelines and the per-frame putative centreline estimates. This allows us to compare, frame by frame, the vessel centreline (sub)pixels v_n(x, y) from the same ROI with their matching ground truth. For a given ROI area with L (sub)pixels on the centreline, we define the centreline error (CE) as:

\[
\mathrm{CE} = \frac{1}{L} \sum_{n=0}^{L-1} \alpha_n \left\| v_n(x, y) - v_{n(\text{true})}(x, y) \right\|
\tag{10}
\]

where ‖·‖ is the Euclidean distance and α_n is a positive weighting factor that can be adjusted to characterize the neighbourhood around the (sub)pixel. Here, we use α_n = 1, assuming an even contribution from all centreline (sub)pixels. This allows us to compare against a standard affine transformation model. However, in other cases, for instance patients with proliferative retinopathy, α_n can reflect the local vessel tortuosity. This is a fairer assessment of registration quality, as the clinically interesting fine-scale microvasculature contributes more strongly to the error metric than a spatially averaged global measure.
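The centreline error of Equation 10 reduces to a weighted mean of pointwise Euclidean distances once correspondence between the extracted and ground-truth centreline (sub)pixels has been established; a sketch:

import numpy as np

def centreline_error(v, v_true, alpha=None):
    # v, v_true: (L, 2) arrays of corresponding (x, y) centreline positions within the ROI.
    d = np.linalg.norm(v - v_true, axis=1)
    alpha = np.ones(len(d)) if alpha is None else np.asarray(alpha)
    return float((alpha * d).mean())          # alpha = 1 gives the unweighted CE used here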

5 Segmentation

5.1 Phase-invariant Orientation Field

The feature that we use is defined in prior work [Bharath, 1998], [Bharath and Ng, 2005] and [Ng and Bharath, 2004]. Because red-free and fluorescein images have opposite contrast behaviour, this approach compactly encodes the neighbourhood phase-invariant structural orientation and a spatial symmetry estimate. Moreover, it shares some benefits of the scale-space approach, as the filter scale selection is automated to be “scalable” to the local vessel diameter. This tuning takes the form of a weighting applied to each scale of the filter output, based on the likelihood of that filter scale being well matched to the local vessel size. In-depth discussion and application of “steering in scale” and “detecting intrinsic scale” in a more generalized setting can be found in [Ng and Bharath, 2004].

The phase-invariant orientation field is a representation of local image structure that is relatively discriminating yet stable in the presence of illumination change. It may be thought of as a quasi-illumination-invariant estimate of local structure orientation, independent of local spatial symmetry about a selected dominant axis (see Figure 4).

Figure 4: Estimated phase-invariant orientation field (in red) of a 128 × 128 patch from a retinal angiogram, with 3D view (left) and 2D view (right, detail).

It is constructed as:

\[
O^{(l)}_n(x_n, y_n) = \frac{\sum_{k=0}^{K/2-1} \left| g^{(l)}_k(x_n, y_n) \right| \vec{u}_k}{p + \left( \sum_{k=0}^{K/2-1} \left| g^{(l)}_k(x_n, y_n) \right|^2 \right)^{\frac{1}{2}}}
\tag{11}
\]

where g^(l)_k(x, y), k = 0, 1, 2, ..., K/2 − 1, denotes the output of the k-th oriented bandpass complex analysis filter from image f_n at level l. For K = 8, we use the directional vectors u_k = [1, 0], [0, 1], [−1, 0], [0, −1]. p is a conditioning constant set at 1.25% of the maximum value of the image [Bharath and Ng, 2005]. As the magnitude of the phase-invariant orientation field ranges from 0 to 1, it can be used as an indication of anisotropy: strongly isotropic neighbourhoods produce values near 0 and strongly anisotropic neighbourhoods produce values near 1.
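In array form, Equation 11 can be evaluated per level as below, assuming g holds the K/2 oriented complex bandpass filter outputs with shape (K/2, H, W) and u the directional vectors u_k with shape (K/2, 2); this is a sketch, not the authors' filter implementation.

import numpy as np

def orientation_field(g, u, p):
    mag = np.abs(g)                                 # |g_k(x, y)| for each orientation k
    num = np.tensordot(mag, u, axes=([0], [0]))     # (H, W, 2): magnitude-weighted sum of u_k
    den = p + np.sqrt((mag ** 2).sum(axis=0))       # (H, W): conditioning constant plus local energy
    return num / den[..., None]                     # orientation vectors with magnitude in [0, 1)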


Our set of four 15 × 15 complex “Steerable Wavelet Filter” (SWF) arrays can be visualized in Figure 5.

Figure 5: Real and imaginary parts of the oriented complex bandpass filter kernels used to produce the filter outputs in the spatial domain.

5.2 Vessel Segmentation

A standard artificial neural network [Rumelhart et al., 1986] is then trained over a small sample of hand-labelled pixels to effect the segmentation. We use a multi-layer feed-forward network with backpropagation and the Levenberg-Marquardt training algorithm [Hagan and Menhaj, 1994]. The network uses a mean squared error performance function, hyperbolic tangent sigmoid transfer functions for both hidden and output layers, and scaled conjugate gradient descent for iteration. Our 27-dimensional input feature vector is defined as:

\[
\vec{FV}_n = \left[ I_{3 \times 3}, \; \vec{O}_{(3 \times 3)} \right]
\tag{12}
\]

Our unique descriptors for segmentation are assembled as follows. For each pixel, we first assemble its 3 × 3 neighbourhood intensities. We then separate the horizontal and vertical components of the phase-invariant orientation field and assemble the 2 × 3 × 3 neighbourhood orientation field. The horizontal and vertical components of the orientation vector are normalized against the vector magnitudes to conserve the angular representation. For training, we use randomly selected patches (similar to a discretized Wiener process) on each training image and validate the accuracy of the classification output against the manually segmented “ground truth” from the DRIVE database.
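A sketch of the per-pixel assembly of the 27-dimensional feature vector of Equation 12, assuming intensity is the image and orient the phase-invariant orientation field with two components per pixel:

import numpy as np

def feature_vector(intensity, orient, x, y, eps=1e-8):
    patch_i = intensity[y - 1:y + 2, x - 1:x + 2]                 # 3x3 neighbourhood intensities (9 values)
    patch_o = orient[y - 1:y + 2, x - 1:x + 2]                    # 3x3x2 neighbourhood orientation field
    mag = np.linalg.norm(patch_o, axis=-1, keepdims=True)
    patch_o = patch_o / (mag + eps)                               # keep only the angular part
    return np.concatenate([patch_i.ravel(), patch_o.ravel()])     # 9 + 18 = 27 features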

5.3 Centreline Extraction

We propose a “link while extract” piece-wise segmentation approach that combines the search algorithm with our phase-invariant orientation field, which improves on Steger's method [Steger, 1998] that only links “selected” vessel centreline pixels after extraction. In a neighbourhood Θ(p, b, c) from image f_n, the pixel p = (p_x, p_y) is located at the centre of the b × c patch. If, for instance, the local vascular (neighbourhood) orientation O^(l)_Θ(x ∈ Θ, y ∈ Θ) at scale l is in the interval [−π/8, π/8], then only the points (p_x + 1, p_y + 1), (p_x + 1, p_y), (p_x + 1, p_y − 1) are considered as “locally aligned” pixels.


Meanwhile, we use an additional measure that estimates the local phase at each pixel. This estimate, Ψ^(l)_n, is obtained from the complex wavelet filter outputs steered by the polynomial functions s_p(φ, k) and s_q(φ, k) on f_n [Bharath and Ng, 2005]:

\[
\Psi^{(l)}_n = \angle \left( \sum_{k=0}^{K/2-1} s_p(\phi, k) \, g^{(l)}_k(x_n, y_n) + \sum_{k=0}^{K/2-1} s_q(\phi, k) \left( g^{(l)}_k(x_n, y_n) \right)^{*} \right)
\tag{13}
\]

p is considered as a “probable” vessel centreline pixel if its local phase estimate Ψ^(l)_n(p_x, p_y) matches the phase component of its neighbourhood orientation O^(l)_Θ. The angle difference between p and its “locally aligned” pixels determines whether these neighbouring pixels are also likely to be located along the vessel centreline. The combination of both allows a soft classification of the centreline (sub)pixel locations. The Euclidean distance between two (sub)pixel centreline neighbours is also calculated as an indicator of the continuity of the vessel.
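The neighbour-selection rule can be written compactly by quantizing the neighbourhood orientation into one of eight directions and stepping only to the three “locally aligned” 8-neighbours; the sketch below is illustrative, and the phase test against Ψ^(l)_n is left as a separate predicate.

import numpy as np

def aligned_neighbours(p, theta):
    # p = (p_x, p_y); theta = local vascular orientation in radians.
    px, py = p
    dirs = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    k = int(np.round(theta / (np.pi / 4))) % 8          # nearest of the eight directions
    # The step direction and its two adjacent 8-neighbours along the vessel axis;
    # e.g. theta in [-pi/8, pi/8] gives (px+1, py-1), (px+1, py), (px+1, py+1).
    return [(px + dirs[j % 8][0], py + dirs[j % 8][1]) for j in (k - 1, k, k + 1)]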

6 Experiments and Results

Vessel Segmentation on the DRIVE database

To compare our automatic vessel segmentation technique with eight previously benchmarked methods [Staal et al., 2004] [Niemeijer et al., 2004] [Zana and Klein, 1999] [Jiang and Mojon, 2003] [Martinez-Perez et al., 1999] [Chaudhuri et al., 1989] [Al-Diri et al., 2009], we tested our algorithms on the DRIVE database. See Figure 6 for the segmentation results of the global vessel bed of two test images, and Figure 7 for the segmentation results of the entire test image set.

Figure 6: The segmentation results (posterior probability map, or “soft decision”) of our SWF-NN algorithm on Test Image 1 (left) and Test Image 20 (right).


Figure 7: Top: the 20 test images from the DRIVE database; Bottom: the segmentation results (posterior probability map, or “soft decision”) of our SWF-NN algorithm on all the test images.


For training, we generated 5 different batches of random patches (similar to a discretized Wiener process) on each training image: 4000 pixels (0.89% of total image pixels), 8000 pixels (1.78%), 12000 pixels (2.68%), 16000 pixels (3.57%) and 24000 pixels (5.35%). Each patch contains half vessel pixels and half background pixels. We compared network models constructed from training based on a total of 8 × 10⁴ pixels, 1.6 × 10⁵ pixels, 2.4 × 10⁵ pixels, 3.2 × 10⁵ pixels and 4.8 × 10⁵ pixels.

Figure 8 (right) presents a classical ROC (sensitivity versus 1−specificity) curve across the entire DRIVE database using our algorithm, whereas Figure 8 (left) shows the performance of eight different benchmark methods. Notably, single points on the figure represent binary segmentation methods. Table 1 compares the area under the ROC curve (Az) for the non-binary segmentation methods. Our “Steerable Wavelet Filter-Neural Network” (SWF-NN) algorithm achieved an Az of 0.9779.
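For reference, the ROC curve and Az can be computed from the soft-decision output with standard tooling; a sketch using scikit-learn, where prob is the posterior probability map, gt the manual segmentation and fov the FOV mask (all assumed names, not part of the original evaluation code):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_within_fov(prob, gt, fov):
    scores = prob[fov > 0].ravel()
    labels = (gt[fov > 0] > 0).astype(int).ravel()
    fpr, tpr, _ = roc_curve(labels, scores)            # 1 - specificity vs. sensitivity
    return fpr, tpr, roc_auc_score(labels, scores)     # Az = area under the ROC curve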

Figure 8: Left: The ROC curves of 8 methods tested on the DRIVE database, adapted from [Staal et al., 2004]; Right: The ROC curves of our SWF-NN algorithm evaluated on the DRIVE database with different training patch sizes, incrementing from less than 1% to around 5% of the total image pixels.

Method   SWF-NN   Staal   Niemeijer   Jiang   Zana   Chaudhuri
Az       0.98     0.95    0.93        0.91    0.90   0.79

Table 1: Az comparison for 6 non-binary segmentation methods.

To assess the variability of the high-dimensional classification using a complex decision surface, we applied the same neural network model to each of the 20 test images in the DRIVE database. The ROC curves for the entire test data, superimposed on the ROC lower and upper bounds, are depicted in Figure 9. The algorithm performance on individual images is an indicator of the technical usability, especially in a clinical setting.


Figure 9: Left: The ROC curves for the 20 test images using one neural network model; Right: The ROC curves for the lower and upper bounds, and for the entire test data.

Centreline Extraction and Registration Evaluation on Fluorescein Angiograms

For image registration and centreline extraction, we compared our algorithms with an affine model [Zitová and Flusser, 2003] that accounts for pairwise rotation, scale, translation and shearing. This model is a “shape-preserving mapping” in the sense that it preserves collinearity and parallelism. We optimized our “control” affine model by minimising the first-order approximation of the geometric error (Sampson distance) of the fit of the transformation matrix [Hartley and Zisserman, 2004].

Our segmentation evaluation used ROIs of size 101 × 101 pixels near the macula, containing complex image structures and clearly displayed capillaries ranging from 5-10 pixels in width. The validation and segmentation procedures were controlled so that the same ROIs were used for both models. Both algorithms ran on the same machine, and both used the same local and global references, to give a fair comparison.

Method                 Mean    SD      SEM     Range    95% Confidence Interval
Joint Global-RANSAC    0.113   0.057   0.003   0.286    [0.108, 0.119]
Global Affine          3.205   2.771   0.143   12.980   [2.923, 3.487]

Table 2: Statistical comparison of centreline error (CE, in pixels) between our joint Global-RANSAC registration scheme and a controlled global affine transformation model, on a total of 384 angiograms.

Table 2 presents the comparison of centreline error (CE) between the outputs of the two algorithms across all 384 retinal angiograms. For fine structures, the affine model has on average a 3.2 pixel misalignment and a much larger range of nearly 13 pixels. This is realistic, as we retain the original image resolution of 4288 × 2848 for the statistical comparison of the microvasculature misalignment between the two methods. Our model, on the other hand, has a much more stable performance, with a mean (distance) misalignment of 0.11 pixels and a much tighter range of 0.29 pixels across all registered frames.

7 Conclusion

In this paper, we first suggest a registration procedure based on joint spatio-temporal registration from automatically determined image feature points. We then consider techniques to reliably segment the retinal vasculature. We find that the use of a relatively simple machine learning algorithm, combining phase-invariant orientation fields and image patches, yields promising results. We then present a novel pixel-wise approach for centreline segmentation relying on phase information extracted after steering complex wavelets according to the phase-invariant orientation field; this is then used for evaluation of registration accuracy. The method is promising, as it relies on few thresholds and does not require region growing. The centreline extraction, coupled to accurate registration, allows comparison and non-invasive monitoring of fine-resolution microvasculature from an existing, well-established imaging technique. It provides the potential for detecting temporal changes in the circulation (possibly caused by microembolism) in real time. This allows early preventative measures to be taken to reduce aggravated blood clotting, thus improving the post-operative recovery of patients. Future development will include more extensive validation of the system on larger amounts of data, real-time performance evaluation and, in time, incorporation into a stand-alone system for clinical trials.

References

W. Aguilar, Y. Frauel, F. Escolano, M. E. Martinez-Perez, A. Espinosa-Romero, and M. A. Lozano. A robust graph transformation matching for non-rigid registration. Image and Vision Computing, 27:897–910, 2009.

B. Al-Diri, A. Hunter, and D. Steel. An active contour model for segmenting and measuring retinal vessels. IEEE Trans. Medical Imaging, 28(9):1488–1497, 2009.

A. Almhdie, C. Léger, M. Deriche, and R. Lédée. 3D registration using a new implementation of the ICP algorithm based on a comprehensive lookup matrix: Application to medical imaging. Pattern Recognition Letters, 28(12):1523–1533, 2007.

O. Arend, S. Wolf, F. Jung, B. Bertram, H. Pöstgens, H. Toonen, and M. Reim. Retinal microcirculation in patients with diabetes mellitus: dynamic and morphological analysis of perifoveal capillary network. British Journal of Ophthalmology, 75:514–518, 1991.

J. F. Arevalo, editor. Retinal Angiography and Optical Coherence Tomography. Springer, 2009.

A. A. Bharath. Steerable filters from Erlang functions. In Proc. British Machine Vision Conference (BMVC) 1998, pages 144–153, 1998.

A. A. Bharath and J. Ng. A steerable complex wavelet construction and its application to image denoising. IEEE Trans. Image Processing, 14(7):948–959, 2005.

C. Blauth, J. Arnold, E. M. Kohner, and K. M. Taylor. Retinal microembolism during cardiopulmonary bypass demonstrated by fluorescein angiography. Lancet, 328(8511):837–839, 1986.

A. Can, C. V. Stewart, B. Roysam, and H. L. Tanenbaum. A feature based, robust, hierarchical algorithm for registering pairs of images of the curved human retina. IEEE Trans. Pattern Analysis and Machine Intelligence, 24(3):347–364, 2002.

S. Cao, A. A. Bharath, K. Parker, J. Ng, J. Arnold, A. McGregor, and A. Hill. Spatio-temporal registration and microvasculature segmentation of retinal angiogram sequences. In Proc. Medical Image Understanding and Analysis (MIUA) 2011, pages 293–297, 2011.

P. C. Cattin, H. Bay, L. V. Gool, and G. Székely. Retina mosaicing using local features. In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2006, volume 4191, pages 185–192, 2006.

S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Medical Imaging, 8(3):263–269, 1989.

W. R. Crum, T. Hartkens, and D. L. G. Hill. Non-rigid image registration: theory and practice. British Journal of Radiology, 77:S140–S153, 2004.

R. O. Duda and P. E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15:11–15, 1972.

M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.

W. T. Freeman and E. H. Adelson. The design and use of steerable filters. IEEE Trans. Pattern Analysis and Machine Intelligence, 13:891–906, 1991.

J. E. Grunwald, C. E. Riva, S. H. Sinclair, A. J. Brucker, and B. L. Petrig. Laser Doppler velocimetry study of retinal circulation in diabetes mellitus. Archives of Ophthalmology, 104:991–996, 1986.

M. T. Hagan and M. B. Menhaj. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Networks, 5(6), 1994.

A. Harris, L. Kagemann, and G. A. Cioffi. Assessment of human ocular hemodynamics. Survey of Ophthalmology, 42(6):509–533, 1998.

C. Harris and M. Stephens. A combined corner and edge detector. In Proc. 4th Alvey Vision Conference, pages 147–151, 1988.

R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition, 2004.

J. H. Hipwell, A. Manivannan, P. Vieira, P. F. Sharp, and J. V. Forrester. Quantifying changes in retinal circulation: the generation of parametric images from fluorescein angiograms. Physiological Measurement, 19(2):165–180, 1998.

A. D. Hughes. The clinical assessment of retinal microvascular structure and therapeutic implications. Current Treatment Options in Cardiovascular Medicine, 9:236–241, 2007.

X. Jiang and D. Mojon. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Analysis and Machine Intelligence, 25(1):131–137, 2003.

M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. Int. Journal of Computer Vision, pages 321–331, 1988.

L. A. King, A. V. Stanton, P. S. Sever, S. Thom, and A. D. Hughes. Arteriolar length/diameter (L:D) ratio: A geometric parameter of the retinal vasculature diagnostic of hypertension. Journal of Human Hypertension, 10(6):417–418, 1996.

R. Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proc. 14th Int. Joint Conference on Artificial Intelligence (IJCAI), 2:1137–1143, 1995.

M. E. Langham, R. Farrell, T. Krakau, and D. Silver. Ocular pulsatile blood flow, hypotensive drugs, and differential light sensitivity in glaucoma. In G. K. Krieglstein, editor, Glaucoma Update IV, pages 162–172. Springer-Verlag, Berlin, Germany, 1991.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, 1998.

S. Lee, J. M. Reinhardt, P. C. Cattin, and M. D. Abràmoff. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model. Medical Image Analysis, 14(4):539–549, 2010.

H. Lester and S. R. Arridge. A survey of hierarchical non-linear medical image registration. Pattern Recognition, 32(1):129–149, 1999.

D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. Journal of Computer Vision, 60(2):91–110, 2004.

M. E. Martinez-Perez, A. D. Hughes, A. V. Stanton, S. A. Thom, A. A. Bharath, and K. H. Parker. Retinal blood vessel segmentation by means of scale-space analysis and region growing. In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 1999, volume 1679 of Lecture Notes in Computer Science, pages 90–97, 1999.

J. Ng and A. A. Bharath. Steering in scale space to optimally detect image structures. In Proc. 8th European Conference on Computer Vision (ECCV) 2004, volume 3021 of LNCS, pages 482–494, 2004.

J. Ng, S. T. Clay, S. A. Barman, A. R. Fielder, M. J. Moseley, K. H. Parker, and C. Paterson. Maximum likelihood estimation of vessel parameters from scale space analysis. Image and Vision Computing, 28(1):55–63, 2010.

M. Niemeijer, J. J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff. Comparative study of retinal vessel segmentation methods on a new publicly available database. In SPIE Medical Imaging, volume 5370, pages 648–656, 2004.

H. R. Novotny and D. L. Alvis. A method of photographing fluorescence in circulating blood in the human retina. Circulation, 28:82–86, 1961.

J. C. Nunes, Y. Bouaoune, E. Delechelle, and Ph. Bunel. A multiscale elastic registration scheme for retinal angiograms. Computer Vision and Image Understanding, 95(2):129–149, 2004.

E. Rechtman, A. Harris, R. Kumar, L. B. Cantor, S. Ventrapragada, M. Desai, S. Friedman, L. Kagemann, and H. J. Garzozi. An update on retinal circulation assessment technologies. Current Eye Research, 27(6):329–343, 2003.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, chapter 8. MIT Press, Cambridge, MA and London, England, 1986.

C. Schmid, R. Mohr, and C. Bauckhage. Evaluation of interest point detectors. Int. Journal of Computer Vision, 37(2):151–172, 2000.

S. H. Sinclair. Macular retinal capillary hemodynamics in diabetic patients. Ophthalmology, 98:1580–1586, 1991.

J. J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken. Ridge based vessel segmentation in color images of the retina. IEEE Trans. Medical Imaging, 23:501–509, 2004.

A. V. Stanton, B. Wasan, A. Cerutti, S. Ford, R. Marsh, and P. S. Sever. Vascular network changes in the retina with age and hypertension. Journal of Hypertension, 13(12):1724–1728, 1995.

C. Steger. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Analysis and Machine Intelligence, 20:113–125, 1998.

C. V. Stewart, C. L. Tsai, and B. Roysam. The dual-bootstrap iterative closest point algorithm with application to retinal image registration. IEEE Trans. Medical Imaging, 22(11):1379–1394, 2003.

S. T. Venkataraman, J. G. Flanagan, and C. Hudson. Vascular reactivity of optic nerve head and retinal blood vessels in glaucoma - a review. Microcirculation, 17(7):568–581, 2010.

R. N. Weinreb and A. Harris, editors. Ocular Blood Flow in Glaucoma, chapter Clinical Measurement of Ocular Blood Flow, pages 19–58. Kugler Publications, Amsterdam, The Netherlands, 2009.

G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai. Registration of challenging image pairs: Initialization, estimation, and decision. IEEE Trans. Pattern Analysis and Machine Intelligence, 29(11):1973–1989, 2007.

F. Zana and J. C. Klein. A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform. IEEE Trans. Medical Imaging, 18(5):419–428, 1999.

B. Zitová and J. Flusser. Image registration methods: a survey. Image and Vision Computing, 21(11):977–1000, 2003.