
Segmentation of Blood Vessels and Optic Disc in Retinal Images

    Ana Salazar-Gonzalez, Djibril Kaba, Yongmin Li and Xiaohui Liu

Abstract— Retinal image analysis is increasingly prominent as a non-intrusive diagnosis method in modern ophthalmology. In this paper, we present a novel method to segment blood vessels and optic disc in fundus retinal images. The method could be used to support non-intrusive diagnosis in modern ophthalmology since the morphology of the blood vessels and the optic disc is an important indicator for diseases like diabetic retinopathy, glaucoma and hypertension. Our method takes as its first step the extraction of the retinal vascular tree using the graph cut technique. The blood vessel information is then used to estimate the location of the optic disc. The optic disc segmentation is performed using two alternative methods. The Markov Random Field (MRF) image reconstruction method segments the optic disc by removing vessels from the optic disc region, and the Compensation Factor method segments the optic disc using prior local intensity knowledge of the vessels. The proposed method is tested on three public data sets, DIARETDB1, DRIVE and STARE. The results and comparison with alternative methods show that our method achieves exceptional performance in segmenting the blood vessels and optic disc.

Index Terms— Retinal images, vessel segmentation, optic disc segmentation, graph cut segmentation.

    I. INTRODUCTION

The segmentation of retinal image structures has been of great interest because it could be used as a non-intrusive diagnosis in modern ophthalmology. The morphology of the retinal blood vessels and the optic disc is an important structural indicator for assessing the presence and severity of retinal diseases such as diabetic retinopathy, hypertension, glaucoma, haemorrhages, vein occlusion and neo-vascularisation. However, to assess the diameter and tortuosity of retinal blood vessels or the shape of the optic disc, manual planimetry has commonly been used by ophthalmologists, which is generally time consuming and prone to human error, especially when the vessel structures are complicated or a large number of images need to be labelled by hand. Therefore, a reliable automated method for retinal blood vessel and optic disc segmentation, which preserves various vessel and optic disc characteristics, is attractive in computer-aided diagnosis.

An automated segmentation and inspection of retinal blood vessel features such as diameter, colour and tortuosity, as well as the optic disc morphology, allows ophthalmologists and eye care specialists to perform mass vision screening exams for early detection of retinal diseases and treatment evaluation. This could prevent and reduce vision impairments, age related diseases and many cardiovascular diseases, as well as reducing the cost of the screening.

Manuscript received August 21, 2013; revised November 28, 2013 and January 17, 2014; accepted January 17, 2014.

Djibril Kaba, Yongmin Li and Xiaohui Liu are with Brunel University, Department of Computer Science, London, UK (e-mail: see http://www.brunel.ac.uk/siscm).

Ana Salazar-Gonzalez was with Brunel University, Department of Computer Science. She is now working as an engineer at Access IS in Reading, UK.

Over the past few years, several segmentation techniques have been employed for the segmentation of retinal structures, such as blood vessels and optic disc, and of diseases, like lesions, in fundus retinal images. However, the acquisition of fundus retinal images under different conditions of illumination, resolution and field of view (FOV), and the overlapping tissue in the retina, cause a significant degradation to the performance of automated blood vessel and optic disc segmentation. Thus, there is a need for a reliable technique for retinal vascular tree extraction and optic disc detection which preserves various vessel and optic disc shapes. In the following, we briefly review the previous studies on blood vessel segmentation and optic disc segmentation separately.

    II. RELATED WORKS

Two different approaches have been deployed to segment the vessels of the retina: the pixel processing based methods and the tracking based methods [1].

The pixel processing based approach performs the vessel segmentation in a two-pass operation. First, the appearance of the vessels is enhanced using a detection process such as morphological pre-processing techniques and adaptive filtering. The second operation is the recognition of the vessel structure, using thinning or branch point operations to classify a pixel as vessel or background. These approaches process every pixel in the image and apply multiple operations on each pixel. Some pixel processing methods use neural networks and frequency analysis to define pixels in the image as vessel pixels and background pixels. Typical pixel processing operations are shown in Hoover et al. [2], Mendonca et al. [3], Soares et al. [4], Staal et al. [5], Chaudhuri et al. [6] and Zana et al. [7].

The second set of approaches to vessel segmentation are referred to as vessel tracking, vectorial tracking or tracing [1]. In contrast to the pixel processing based approaches, the tracking methods first detect initial vessel seed points, and then track the rest of the vessel pixels through the image by measuring the continuity properties of the blood vessels. This technique is used as a single pass operation, where the detection of the vessel structures and the recognition of the structures are performed simultaneously.

The tracking based approaches include semi-automated tracing and automated tracing. In the semi-automated tracing methods, the user manually selects the initial vessel seed point. These methods are generally used in quantitative coronary angiography analysis and they generally provide accurate segmentation of the vessels. In fully automated tracing, the algorithms automatically select the initial vessel points, and most methods use Gaussian functions to characterise a vessel profile model, which locates vessel points for the vessel tracing. They are computationally efficient and more suitable for retinal image processing. Examples of the tracking based approaches are presented in Xu et al. [8], Martinez-Perez et al. [9], Staal et al. [5] and Zhou et al. [10].

Both pixel processing and tracking approaches have their own advantages and limitations. The pixel processing approaches can provide a complete extraction of the vascular tree in the retinal image since they search all the possible vessel pixels across the whole image. However, these techniques are computationally expensive and require special hardware to be suitable for large image datasets. The presence of noise and lesions in some retinal images causes a significant degradation in the performance of the pixel processing approaches, as the enhancement operation may pick up some noise and lesions as vessel pixels. This could lead to false vessel detection in the recognition operation. On the other hand, the tracking approaches are computationally efficient and much faster than the pixel processing methods because they perform the vessel segmentation using only the pixels in the neighbourhood of the vessel structure and avoid processing every pixel in the image. Nevertheless, these methods may fail to extract a complete vascular tree where there are discontinuities in the vessel branches. Furthermore, the semi-automated tracking segmentation methods need manual input, which requires time.

The optic nerve head is described as the brightest round area in the retina where the blood vessels converge, with a shape that is approximately elliptical, a width of 1.8 ± 0.2 mm and a height of 1.9 ± 0.2 mm [11]. The convergence feature of blood vessels into the optic disc region is generally used to estimate the location of the optic disc and segment it from the retinal image. But the intrusion of vessels in the optic disc region adds computational complexity to the optic disc segmentation, as it breaks the continuity of its boundary. To address this problem, several methods have been employed, such as Chrastek et al. [12], Lowell et al. [13], Welfer et al. [14] and Aquino et al. [15].

Chrastek et al. [12] presented an automated segmentation of the optic nerve head for the diagnosis of glaucoma. The method removes the blood vessels by using a distance map algorithm, then the optic disc is segmented by combining a morphological operation, the Hough transform and an anchored active contour model. Lowell et al. [13] proposed a deformable contour model to segment the optic nerve head boundary in low resolution retinal images. The approach localises the optic disc using a specialised template matching and a directionally-sensitive gradient to eliminate the obstruction of the vessels in the optic disc region before performing the segmentation. Welfer et al. [14] proposed an automated segmentation of the optic disc in colour eye fundus images using an adaptive morphological operation. The method uses watershed transform markers to define the optic disc boundary, and the vessel obstruction is minimised by morphological erosion.

These techniques are performed using morphological operations to eliminate the blood vessels from the retinal image. However, the application of morphological operations can modify the image by corrupting some useful information.

In our optic disc segmentation process, the convergence feature of vessels into the optic disc region is used to estimate its location. We then use two automated methods (Markov Random Field image reconstruction and Compensation Factor) to segment the optic disc.

The rest of the paper is organised as follows. The blood vessel segmentation is discussed in Section III. Section IV provides a detailed description of the optic disc segmentation. Section V presents the experimental results of our method with comparisons to other methods. Conclusions are drawn in Section VI. The preliminary results of the three components of the approach, namely the blood vessel segmentation and the optic disc segmentation using the Graph Cut and Markov Random Field respectively, were presented separately in [16], [17], [18]. More details of the approach can be found in the PhD thesis [19].

    III. BLOOD VESSELS SEGMENTATION

Blood vessels can be seen as thin elongated structures in the retina, with variation in width and length. In order to segment the blood vessels from the fundus retinal image, we have implemented a pre-processing technique, which consists of an effective adaptive histogram equalisation (AHE) and a robust distance transform. This operation improves the robustness and the accuracy of the graph cut algorithm. Fig. 1 shows an illustration of the vessel segmentation algorithm.

    A. Pre-processing

We apply a contrast enhancement process to the green channel image, similar to the work presented in [20]. The intensity of the image is inverted, and the illumination is equalised. The resulting image is enhanced using an adaptive histogram equaliser, given by:


    Fig. 1. Vessel segmentation algorithm

I_{Enhanced}(p) = \left( \sum_{p' \in R(p)} \frac{s(I(p) - I(p'))}{h^2} \right)^{r} \cdot M \qquad (1)

where I is the green channel of the fundus retinal colour image, p denotes a pixel and p' is a neighbourhood pixel around p. R(p) is the square window neighbourhood of side length h centred at p, and s(d) = 1 if d > 0 and s(d) = 0 otherwise, with d = I(p) - I(p'). M = 255 is the maximum intensity value in the image. r is a parameter to control the level of enhancement: increasing the value of r increases the contrast between vessel pixels and the background (see Fig. 2). The experimental values were set to h = 81 and r = 6.
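For illustration, equation (1) can be implemented directly as below (a minimal, unoptimised sketch assuming NumPy; the function name and the brute-force looping are ours, not the authors' implementation — a practical version would use a sliding-window histogram for speed):

```python
import numpy as np

def adaptive_histogram_enhance(green, h=81, r=6, M=255.0):
    """Sketch of Eq. (1): for each pixel p, count the neighbours p' in the
    h x h window R(p) with I(p) - I(p') > 0 (the step function s),
    normalise by h^2, raise to the power r and scale by M."""
    pad = h // 2
    padded = np.pad(green.astype(float), pad, mode='reflect')
    rows, cols = green.shape
    enhanced = np.empty((rows, cols), dtype=float)
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + h, x:x + h]                  # R(p)
            s_sum = np.count_nonzero(green[y, x] - window > 0)
            enhanced[y, x] = (s_sum / float(h * h)) ** r * M
    return enhanced
```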

A binary morphological opening is applied to prune the enhanced image, which discards the misclassified pixels (see Fig. 2 (d)). This approach significantly reduces the false positives, since the enhanced image will be used to construct the graph for segmentation.

A distance map image is created using the distance transform algorithm. This is used to calculate the direction and magnitude of the vessel gradients. Fig. 2 (e) and (f) show the distance map of the whole image and a sample vessel with arrows indicating the direction of the gradients, respectively. From the sample vessel image, we can see the centre line with the brightest pixels, whose intensity is progressively reduced in the direction of the edges (image gradients). The arrows in Fig. 2 (f) are referred to as the vector field, which is used to construct the graph in the next sections.
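A possible realisation of this step, assuming SciPy's Euclidean distance transform and NumPy gradients; `rough_vessels` stands for the binary rough vessel mask obtained from the enhanced and pruned image (our naming):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def vessel_gradient_field(rough_vessels):
    """Distance map of the rough vessel mask and its gradient field.
    Centre-line pixels take the largest distance values, and the gradient
    vectors (vx, vy) point from the vessel edges towards the centre line,
    serving as the flux vector field v used to construct the graph."""
    dist = distance_transform_edt(rough_vessels.astype(bool))
    vy, vx = np.gradient(dist)   # per-pixel gradient components (rows, cols)
    return dist, vx, vy
```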

    B. Graph construction for vessel segmentation

The graph cut is an energy based object segmentation approach. The technique is characterised by an optimisation operation designed to minimise the energy generated from given image data. This energy defines the relationship between neighbouring pixel elements in an image.

Fig. 2. Pre-processing. (a) h = 45, r = 3, (b) h = 45, r = 6, (c) h = 81, r = 3, (d) h = 81, r = 6, (e) distance map, (f) sample of a vessel with arrows indicating the vessel gradients

A graph G(ν, ε) is defined by a set of nodes ν (pixels) and a set of undirected edges ε that connect these neighbouring nodes. The graph includes two special nodes, a foreground terminal (source S) and a background terminal (sink T). ε includes two types of undirected edges: neighbourhood links (n-links) and terminal links (t-links). Each pixel p ∈ P (the set of pixels) in the graph has two t-links {p, S} and {p, T} connecting it to each terminal, while a pair of neighbouring pixels {p, q} ∈ N (the neighbourhood system) is connected by an n-link [21]. Thus:

\varepsilon = N \cup \bigcup_{p \in P} \{\{p, S\}, \{p, T\}\}, \qquad \nu = P \cup \{S, T\} \qquad (2)

An edge e ∈ ε is assigned a weight (cost) W_e > 0. A cut is defined by a subset of edges C ⊂ ε such that G(C) = ⟨ν, ε \ C⟩ separates the graph into foreground and background, with the cost of the cut defined as |C| = Σ_{e∈C} W_e. The Max-Flow algorithm is used to cut the graph and find the optimal segmentation. Table I shows the weights assigned to the edges in the graph [21].

TABLE I
WEIGHT ASSIGNMENT OF THE EDGES IN THE GRAPH.

Edge                      Weight          For
{p, q}                    B_{p,q}         {p, q} ∈ N
{p, S} (Foreground)       λ R_p(Fg)       p ∈ P, p ∉ F ∪ B
                          K               p ∈ F
                          0               p ∈ B
{p, T} (Background)       λ R_p(Bg)       p ∈ P, p ∉ F ∪ B
                          0               p ∈ F
                          K               p ∈ B

where

K = 1 + \max_{p \in P} \sum_{q : \{p,q\} \in N} B_{p,q} \qquad (3)
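To make the weight assignment concrete, the sketch below builds the grid graph of Table I and solves it with max-flow, assuming the PyMaxflow package; `Rp_fg`, `Rp_bg` and `Bpq` are per-pixel regional penalties and boundary weights, and `fg_seeds` / `bg_seeds` are boolean seed masks supplied by the caller (all names are ours, and this is an illustrative simplification rather than the paper's exact construction):

```python
import numpy as np
import maxflow  # PyMaxflow

def graph_cut_segment(Rp_fg, Rp_bg, Bpq, fg_seeds, bg_seeds, lam=50.0):
    """Min-cut segmentation in the spirit of Table I: n-links carry the
    boundary weights, t-links carry lambda-weighted regional penalties,
    and seed pixels are hard-constrained with the large constant K."""
    K = 1.0 + 4.0 * float(Bpq.max())          # upper bound in the spirit of Eq. (3)
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(Rp_fg.shape)
    g.add_grid_edges(nodes, weights=Bpq, symmetric=True)   # n-links, 4-neighbourhood
    # t-links: free pixels get lambda*Rp, seeds get K on one terminal and 0 on the other
    src = np.where(fg_seeds, K, np.where(bg_seeds, 0.0, lam * Rp_fg))
    snk = np.where(bg_seeds, K, np.where(fg_seeds, 0.0, lam * Rp_bg))
    g.add_grid_tedges(nodes, src, snk)
    g.maxflow()
    return ~g.get_grid_segments(nodes)        # True for source-side (foreground) pixels
```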


F and B represent the subsets of pixels selected as foreground and background seeds respectively, with F ⊂ P and B ⊂ P such that F ∩ B = ∅. B_{p,q} defines the discontinuity between neighbouring pixels, and its value is large when the pixel intensities are similar. λ > 0 is a constant coefficient, which we will define in the energy formulation of the graph.

The graph cut technique is used in our segmentation because it allows the incorporation of prior knowledge into the graph formulation in order to guide the model and find the optimal segmentation. Let us assume A = (A_1, ..., A_p, ..., A_|P|) is a binary vector of labels assigned to each pixel p in the image, where A_p indicates the assignment of pixel p in P. Each assignment A_p is either foreground (Fg) or background (Bg). Thus the segmentation is obtained by the binary vector A, and the constraints imposed on the regional and boundary properties of vector A are derived from the energy formulation of the graph, defined as

E(A) = \lambda R(A) + B(A) \qquad (4)

where the positive coefficient λ indicates the relative importance of the regional term (likelihoods of foreground and background) R(A) against the boundary term (relationship between neighbouring pixels) B(A). The regional term, or the likelihood of the foreground and background, is given by

R(A) = \sum_{p \in P} R_p(A_p) \qquad (5)

and the boundary constraint is defined as

B(A) = \sum_{\{p,q\} \in N} B_{p,q} \, \delta(A_p, A_q) \qquad (6)

where δ(A_p, A_q) = 1 for A_p ≠ A_q and 0 otherwise, and

B_{p,q} = \exp\left( -\frac{(I_p - I_q)^2}{2\sigma^2} \right) \cdot \frac{1}{dist(p, q)} \qquad (7)

R_p(A_p) specifies the assignment of pixel p to either the foreground (Fg) or the background (Bg). B_{p,q} defines the discontinuity between neighbouring pixels: its value is large when the pixel intensities I_p and I_q are similar and close to zero when they are different. The value of B_{p,q} is also affected by the Euclidean distance dist(p, q) between pixels p and q.
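As a small worked example of (7), the boundary weights for 4-connected neighbour pairs (unit pixel distance) can be computed as follows, assuming NumPy (an illustrative helper; names are ours):

```python
import numpy as np

def boundary_weights(I, sigma=10.0):
    """Eq. (7) for horizontal and vertical neighbour pairs: the weight is
    close to 1 for similar intensities and decays as exp(-dI^2 / 2*sigma^2);
    dist(p, q) = 1 for 4-connected neighbours on the pixel grid."""
    I = I.astype(float)
    B_right = np.exp(-(I[:, :-1] - I[:, 1:]) ** 2 / (2.0 * sigma ** 2))
    B_down = np.exp(-(I[:-1, :] - I[1:, :]) ** 2 / (2.0 * sigma ** 2))
    return B_right, B_down
```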

During the minimisation of the graph energy formulation in (4) to segment thin objects like blood vessels, the second (boundary) term in (4) has a tendency to follow short edges, known as the shrinking bias [22]. This problem causes a significant degradation in the performance of the graph cut algorithm on thin elongated structures like the blood vessels. Fig. 3 shows an example of blood vessel segmentation using the traditional graph formulation [23]. From Fig. 3, it can be seen that the blood vessel segmentation follows short edges, and tends to shrink in the search for the cheapest cut. It can also be noticed that λ in (4) controls the relation between the boundary and regional terms. Increasing the value of λ, the regional term, i.e. the likelihood of the pixels belonging to foreground and background (t-links), gains strength over the boundary term (n-links), which slightly improves the segmentation result (see Fig. 3 (d)).

Fig. 3. Retinal blood vessel segmentation using the traditional graph. (a) seed initialisation of the input image, (b) λ = 20, (c) λ = 50, (d) λ = 100

To address the above problem, the segmentation of blood vessels using the graph cut requires a special graph formulation. One method used to address the shrinking bias problem is to impose an additional connectivity prior, where the user marks the connectivity constraint [22]. In order to achieve fully automated segmentation, we used the method presented in [23], which overcomes the shrinking bias by adding the mechanism of vector flux into the construction of the graph. The incorporation of vector flux can improve edge alignment and allows the segmentation of thin objects like blood vessels by keeping a balance between shrinking (length) and stretching (vector flux) along the boundary. Fig. 4 shows the flux of vectors v passing through a given surface S. Our method takes the image gradients of the rough blood vessels from the pre-processing step as vectors v (see Fig. 2 (f)), and the flux (magnitude and direction) of these vectors is incorporated into the graph construction and optimised. Thus the shrinking effect of the energy minimisation on the boundary term is equilibrated by the spreading effect of the flux of vectors v.

    Fig. 4. The flux of vectors v passing through a given surface S.

It has been shown in [23] that the class of Finsler metrics can describe geometric properties of the discrete cut metric on regular grids, and that the Finsler length can be represented by the sum of two terms. These terms represent the symmetric and anti-symmetric parts of the cut metric. The symmetric part of the cut defines the standard geometric length of the contour and is independent of its orientation. The anti-symmetric part of the cut metric represents the flux of a given vector field through the contour [23].

To address the shrinking bias problem seen in Fig. 3, we have constructed a graph consisting of a symmetric part g+ (shrinking) and an anti-symmetric part g− (stretching) by incorporating the flux of vector v into the graph construction. The symmetric part g+ of the graph corresponds to the geometric length of the cut and is related directly to the n-link connections, while the anti-symmetric part g− is equal to the flux of the vector field v over the cut and is used to derive the t-links. Thus the blood vessels can be segmented by keeping a good balance between shrinking and stretching (flux) throughout the image boundary.

1) The symmetric part of the graph: is used to assign weights to the n-link connections (edges between neighbouring pixels). Consider a neighbourhood system of a graph described by a set of edges e_k, where 1 ≤ k ≤ N, with N the number of neighbours. Let us define e_k as the shortest vector connecting two pixels in direction k, w_k^+(p) the weight of edge e_k at pixel p, and W^+(p) the set of edge weights at pixel p for all directions. The corresponding edge weights are defined by

w^+ = \frac{1}{2} D \, g^+ \qquad (8)

where D is an N × N matrix with entries defined as

D_{ii} = \frac{\sin(\alpha_{i+1} - \alpha_{i-1})}{\sin(\alpha_{i+1} - \alpha_i)\,\sin(\alpha_i - \alpha_{i-1})} \qquad (9)

D_{ij} = \begin{cases} -\dfrac{1}{\sin(\alpha_j - \alpha_i)} & \text{if } j = i \pm 1 \ (\mathrm{mod}\ N) \\ 0 & \text{for other entries} \end{cases}

where α_k is the angle of the edge e_k with respect to the positive X axis (see Fig. 5).

In our implementation, we consider a grid map of 16 neighbours with edges e_k, k = 1, 2, ..., 16 (see Fig. 5). For each pixel p in the green channel image, the edge weight w_k^+(p) is computed according to (8). g^+ is calculated using the pixel intensity difference between two given nodes by:

g^+ = K \exp\left( -\frac{(I_p - I_q)^2}{\sigma^2} \right) \qquad (10)

g^+ has a high value for pixels of similar intensities, when |I_p − I_q| < σ. However, if the pixels are very different, |I_p − I_q| > σ, the value of g^+ is small, which represents a poor relation between the pixels, hence they belong to different terminals [24].

    Fig. 5. Neighborhood system for a grid in the graph.

2) The anti-symmetric part of the graph: We use the term anti-symmetric because the flux (stretching) of the vector field v over the cut balances the shrinking of blood vessels during the segmentation. This anti-symmetric part of the graph is defined by the flux of the vector field v over the cut. It is used to assign weights to the t-links (edges between a given pixel and the terminals) to balance the shrinking effect seen in Fig. 3. Specific weights for t-links are obtained based on the decomposition of vector v. Different decompositions of vector v may result in different t-links whose weights can be interpreted as an estimation of divergence. In our implementation, we decompose the vector v along grid edges, with the n-links oriented along the main axes, i.e. the X and Y directions. Thus vector v can be decomposed as v = v_x u_x + v_y u_y, where u_x and u_y are unit vectors in the X and Y directions respectively. This decomposition leads to the t-link weights defined as

t_p = \frac{\delta^2}{2}\left[ \left( v_x^{right} - v_x^{left} \right) + \left( v_y^{up} - v_y^{down} \right) \right] \qquad (11)

where v_x^{right} and v_x^{left} are the components of vector v in the X direction taken at the right and left neighbours of pixel p respectively, and v_y^{up} and v_y^{down} are the Y components of vector v taken at the top and bottom neighbours of pixel p. δ is the size of the cell in the grid map (see Fig. 5). We add an edge (s → p) with weight C·(−t_p) if t_p < 0, or an edge (p → t) with weight C·t_p otherwise. The parameter C is related to the magnitude of the vector v; thus pixels in the centre of a blood vessel have a stronger connection to the source (foreground) than pixels at the edge of the blood vessel. Because the distance map is calculated on the pruned image, vector v is only defined for the pixels detected as blood vessels in the rough segmentation. For the rest of the pixels in the image, the t-link weights are initialised as (p → s) with weight t = 0 and (p → t) with weight t = K, where K is the maximum weight sum for a pixel in the symmetric construction. Fig. 6 shows the segmentation results of the blood vessels using different decompositions of the vector v, generating different t-link weights.


Fig. 6. Vessel segmentation using the decomposition of vector v: (a) input retinal image, (b) blood vessel segmentation using the horizontal (X axis) decomposition of vector v, (c) blood vessel segmentation using the vertical (Y axis) decomposition of vector v, (d) blood vessel segmentation result using the decomposition of vector v along both the X and Y axes.

    IV. OPTIC DISC SEGMENTATION

The optic disc segmentation starts by estimating the location of the optic disc. This process uses the convergence feature of the vessels into the optic disc to estimate its location. The disc area is then segmented using two different automated methods (Markov Random Field image reconstruction and Compensation Factor). Both methods use the convergence feature of the vessels to identify the position of the disc. The Markov Random Field (MRF) method is applied to eliminate the vessels from the optic disc region. This process is known as image reconstruction, and it is performed only on the vessel pixels to avoid the modification of other structures of the image. The reconstructed image is free of vessels and is used to segment the optic disc via graph cut. In contrast to the MRF method, the Compensation Factor approach segments the optic disc using prior local intensity knowledge of the vessels. Fig. 7 shows an overview of both the MRF and the Compensation Factor method.

Fig. 7. (a) Markov Random Field image reconstruction method diagram, (b) Compensation Factor method diagram

    A. Optic Disc Location

Inspired by the method proposed in [14], which effectively locates the optic disc using the vessels, we use the binary image of the vessels segmented in Section III to find the location of the optic disc. The process iteratively traces towards the centroid of the optic disc. The vessel image is pruned using a morphological open process to eliminate thin vessels and keep the main arcade. The centroid of the arcade is calculated using the following formulation:

C_x = \frac{1}{K}\sum_{i=1}^{K} x_i, \qquad C_y = \frac{1}{K}\sum_{i=1}^{K} y_i \qquad (12)

where x_i and y_i are the coordinates of the pixels in the binary image and K is the number of pixels set to 1 (pixels marked as blood vessels) in the binary image.

Given the gray scale intensity of a retinal image, we select the 1% brightest region. The algorithm detects the brightest region with the largest number of pixels to determine the location of the optic disc with respect to the centroid point (right, left, up or down). The algorithm adjusts the centroid point iteratively until it reaches the vessel convergence point, or centre of the main arcade (centre of the optic disc), by reducing the distance from one centroid point to the next in the direction of the brightest region, and correcting the central position inside the arcade accordingly. Fig. 8 shows the process of estimating the location of the optic disc in a retinal image. It is important to note that the vessel convergence point must be detected accurately, since this point is used to automatically mark foreground seeds; a point on the border of the optic disc may result in some false foreground seeds. After the detection of the vessel convergence point, the image is constrained to a region of interest (ROI) including the whole area of the optic disc to minimise the processing time. This ROI is set to a square of 200 by 200 pixels concentric with the detected optic disc centre. Then an automatic initialisation of seeds (foreground and background) for the graph is performed. A neighbourhood of 20 pixels of radius around the centre of the optic disc area is marked as foreground pixels, and a band of pixels around the perimeter of the image is selected as background seeds (see Fig. 9).
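A compact sketch of the localisation step, assuming NumPy; `vessel_mask` is the pruned binary vessel image, and the iterative refinement towards the convergence point is omitted for brevity (names are ours):

```python
import numpy as np

def arcade_centroid(vessel_mask):
    """Eq. (12): centroid (Cx, Cy) of the pixels set to 1 in the pruned
    binary vessel image (the main arcade)."""
    ys, xs = np.nonzero(vessel_mask)
    return xs.mean(), ys.mean()

def crop_roi(image, cx, cy, size=200):
    """Square ROI of size x size pixels centred on the detected optic disc
    centre, clipped to the image borders."""
    half = size // 2
    y0, y1 = max(0, int(cy) - half), min(image.shape[0], int(cy) + half)
    x0, x1 = max(0, int(cx) - half), min(image.shape[1], int(cx) + half)
    return image[y0:y1, x0:x1]
```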

B. Optic Disc Segmentation with Markov Random Field Image Reconstruction

The high contrast of blood vessels inside the optic disc presents the main difficulty for its segmentation, as it misguides the segmentation through a short path, breaking the continuity of the optic disc boundary. To address this problem, the MRF based reconstruction method presented in [25] is adapted in our work. We have selected this approach because of its robustness. The objective of our algorithm is to find the best match for some missing pixels in the image; however, one of the weaknesses of MRF based reconstruction is the requirement of intensive computation. To overcome this problem, we have limited the reconstruction to the region of interest (ROI), and, using the previously segmented retinal vascular tree, the reconstruction is performed only within the ROI. An overview diagram of the optic disc segmentation with Markov Random Field image reconstruction is shown in Fig. 7.


Fig. 8. Optic disc detection. (a) retinal image green channel with the 1% brightest region selected in green colour, (b) binary segmented blood vessels, (c) binary segmented blood vessels after pruning, and (d) sequence of points from the centroid to the vessel convergence point (optic disc location).

Fig. 9. Optic disc detection. (a) ROI image, (b) initialisation of the foreground F and the background B of the ROI image.

Let us consider a pixel neighbourhood w(p), defined as a square window of size W, where pixel p is the centre of the neighbourhood. I is the image to be reconstructed and some of the pixels in I are missing. Our objective is to find the best approximate values for the missing pixels in I. Let d(w1, w2) represent a perceptual distance between two patches that defines their similarity; an exact matching patch corresponds to d(w, w(p)) = 0. If we define the set of these patches as Ω(p) = {ω ⊂ I : d(ω, w(p)) = 0}, the probability density function of p can be estimated with a histogram of all centre pixel values in Ω(p). However, since we are considering a finite neighbourhood for p and the search is limited to the image area, there might not be any exact matches for a patch. For this reason, we find a collection of patches whose match falls between the best match and a threshold. The closest match is calculated as ω_best = argmin_ω d(w(p), ω), ω ⊂ I. All the patches ω with d(w(p), ω) < (1 + ε) d(w(p), ω_best) are included in the collection Ω'. d(w, w(p)) is defined as the sum of the absolute differences of the intensities between patches, so identical patches will result in d(w, w(p)) = 0. Using the collection of patches, we create a histogram and select the value with the highest mode. Fig. 10 shows sample results of the reconstruction. The foreground Fg_s and background Bg_s seeds are initialised in the reconstructed image, and are then used in the graph cut formulation to segment the optic disc. Similar to Fig. 9, the initialisation of the foreground Fg_s and background Bg_s seeds is performed using the reconstructed image.
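The patch search can be sketched as follows, assuming NumPy: candidate patches are compared by the sum of absolute differences over the known (non-vessel) pixels, those within (1 + ε) of the best match form the collection Ω', and the missing centre pixel takes the most frequent candidate value. This is our simplified, unoptimised illustration of the procedure, not the authors' implementation.

```python
import numpy as np

def fill_missing_pixel(roi, known_mask, y, x, W=11, eps=0.1):
    """Fill the missing pixel (y, x) using W x W patches; known_mask is
    True for valid (non-vessel) pixels of the ROI."""
    roi = roi.astype(float)
    half = W // 2
    target = roi[y - half:y + half + 1, x - half:x + half + 1]
    valid = known_mask[y - half:y + half + 1, x - half:x + half + 1]

    dists, centres = [], []
    H, Wd = roi.shape
    for cy in range(half, H - half):
        for cx in range(half, Wd - half):
            if not known_mask[cy, cx]:
                continue                                      # candidate centre must be known
            cand = roi[cy - half:cy + half + 1, cx - half:cx + half + 1]
            dists.append(np.abs(cand - target)[valid].sum())  # SAD over known pixels only
            centres.append(roi[cy, cx])

    dists, centres = np.asarray(dists), np.asarray(centres)
    collection = centres[dists <= (1.0 + eps) * dists.min()]  # the collection Omega'
    values, counts = np.unique(collection, return_counts=True)
    return values[np.argmax(counts)]                          # highest mode
```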

    Fig. 10. MRF reconstruction applied to retinal images. (Top) original grayscale images. (Bottom) reconstructed images using the MRF based method.

The graph cut algorithm described in Section III-B is used to separate the foreground and the background by minimising the energy function over the graph, producing the optimal segmentation of the optic disc in the image. The energy function of the graph in (4) consists of regional and boundary terms. The regional term (likelihoods of foreground and background) is calculated using (5), while the boundary term (relationship between neighbouring pixels) is derived using (6). A grid of 16 neighbours N is selected to create links between pixels in the image. The Max-Flow algorithm is used to cut the graph and find the optimal segmentation.

    C. Optic Disc Segmentation With Compensation Factor

In contrast to the MRF image reconstruction, here we incorporate the blood vessels into the graph cut formulation by introducing a compensation factor V_ad. This factor is derived using prior information of the blood vessels.

The energy function of the graph cut algorithm generally comprises a boundary and a regional term. The boundary term defined in (6) is used to assign weights on the edges (n-links) to measure the similarity between neighbouring pixels with respect to the pixel properties (intensity, texture, colour); therefore pixels with similar intensities have a strong connection. The regional term in (5) is derived to define the likelihood of a pixel belonging to the background or the foreground by assigning weights on the edges (t-links) between image pixels and the two terminals, the background and foreground seeds. In order to incorporate the blood vessels into the graph cut formulation, we derive the t-links as follows:

S_{link} = \begin{cases} -\ln \Pr(I_p \mid Fg_{seeds}) & \text{if } p \neq vessel \\ -\ln \Pr(I_p \mid Fg_{seeds}) + V_{ad} & \text{if } p = vessel \end{cases} \qquad (13)


T_{link} = \begin{cases} -\ln \Pr(I_p \mid Bg_{seeds}) & \text{if } p \neq vessel \\ -\ln \Pr(I_p \mid Bg_{seeds}) & \text{if } p = vessel \end{cases} \qquad (14)

where p is a pixel in the image, Fg_seeds is the intensity distribution of the foreground seeds, Bg_seeds represents the intensity distribution of the background seeds, and V_ad is the compensation factor given as:

V_{ad} = \max_{p \in vessel} \{ -\ln \Pr(I_p \mid Bg_{seeds}) \} \qquad (15)

The intensity distribution of the blood vessel pixels in the region around the optic disc makes them more likely to belong to the background than to the foreground (the optic disc pixels). Therefore the vessels inside the disc have weak connections with neighbouring pixels, making them likely to be segmented by the graph cut as background. To address this behaviour, we introduce in (13) a compensation factor added to the foreground t-links of all pixels belonging to the vascular tree. Consequently, vessels inside the optic disc are classified with respect to their neighbourhood connections instead of their likelihood with respect to the terminal foreground and background seeds. Fig. 11 shows samples of images segmented with the Compensation Factor. The segmentation of the disc is affected by the value of V_ad: the method achieves poor segmentation results for low values of V_ad, and the performance improves as V_ad increases, until V_ad is high enough to segment the rest of the vessels as foreground.
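A sketch of (13)–(15), assuming NumPy; `p_fg` and `p_bg` are 256-bin normalised intensity histograms of the foreground and background seeds (so indexing them with an 8-bit image gives per-pixel likelihoods), and all names are ours:

```python
import numpy as np

def compensation_tlinks(I, vessel_mask, p_fg, p_bg, eps=1e-6):
    """Eqs. (13)-(15): negative log-likelihood t-link weights, with the
    compensation factor V_ad added to the foreground (source) link of
    every pixel that belongs to the segmented vascular tree."""
    nll_fg = -np.log(p_fg[I] + eps)                          # -ln Pr(I_p | Fg_seeds)
    nll_bg = -np.log(p_bg[I] + eps)                          # -ln Pr(I_p | Bg_seeds)
    V_ad = nll_bg[vessel_mask].max()                         # Eq. (15)
    S_link = np.where(vessel_mask, nll_fg + V_ad, nll_fg)    # Eq. (13)
    T_link = nll_bg                                          # Eq. (14), unchanged for vessel pixels
    return S_link, T_link, V_ad
```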

Fig. 11. Optic disc segmentation with the Compensation Factor V_ad method: (a) V_ad = 20, (b) V_ad = 100, (c) V_ad = 150, (d) V_ad = 250

    V. RESULTS

For the vessel segmentation method, we tested our algorithm on two public datasets, DRIVE [5] and STARE [2], with a total of 60 images. The optic disc segmentation algorithm was tested on DRIVE [5] and DIARETDB1 [26], consisting of 129 images in total. The performance of both methods is tested against a number of alternative methods.

The DRIVE dataset consists of 40 digital images which were captured with a Canon CR5 non-mydriatic 3CCD camera at a 45° field of view (FOV). The images have a size of 768 × 584 pixels. The dataset includes masks to separate the FOV from the rest of the image. It also includes two sets of hand labelled images (set A and set B) for the blood vessels. Set A offers manually labelled images for all the images in the dataset, whereas set B provides manually labelled images for half of the dataset. To test our method we adopt the set A hand labelling as the benchmark. We manually delimited the optic disc to test the performance of the optic disc segmentation algorithm.

The STARE dataset consists of 20 images captured by a TopCon TRV-50 fundus camera at a 35° FOV. The size of the images is 700 × 605 pixels. We calculated the mask image for this dataset using a simple threshold technique for each colour channel. The STARE dataset includes images with retinal diseases selected by Hoover et al. [2]. It also provides two sets of hand labelled images produced by two human experts; the first expert labelled fewer vessel pixels than the second one. To test our method we adopt the first expert hand labelling as the ground truth.

The DIARETDB1 dataset consists of 89 colour images, 84 of which contain at least one indication of a lesion. The images were captured with a digital fundus camera at a 50° field of view and have a size of 1500 × 1152 pixels. Hand labelled lesion regions are provided in this dataset by four human experts. However, the DIARETDB1 dataset only includes the hand labelled ground truth of lesions, not of the blood vessels and the optic disc. For this reason, we were unable to compare the performance of the blood vessel segmentation on the DIARETDB1 dataset. Nevertheless, we were able to create the hand labelled ground truth of the optic disc to test the performance of the optic disc segmentation.

To facilitate the performance comparison between our method and alternative retinal blood vessel segmentation approaches, measures such as the true positive rate (TPR), the false positive rate (FPR) and the accuracy rate (ACC) are derived to assess the segmentation [5]. The accuracy rate is defined as the sum of the true positives (pixels correctly classified as vessel points) and the true negatives (non-vessel pixels correctly identified as non-vessel points), divided by the total number of pixels in the image. The true positive rate (TPR) is defined as the total number of true positives divided by the number of blood vessel pixels marked in the ground truth image. The false positive rate (FPR) is calculated as the total number of false positives divided by the number of pixels marked as non-vessel in the ground truth image. It is worth mentioning that a perfect segmentation would have an FPR of 0 and a TPR of 1. Our method and all the alternative methods use the first expert hand labelled images as the performance reference.
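These measures follow directly from the confusion counts of the binary masks, as in this sketch (NumPy assumed, boolean masks; `fov` optionally restricts the evaluation to the field of view):

```python
import numpy as np

def vessel_metrics(seg, gt, fov=None):
    """True positive rate, false positive rate and accuracy of a binary
    vessel segmentation `seg` against the ground truth `gt`."""
    if fov is not None:
        seg, gt = seg[fov], gt[fov]
    tp = np.logical_and(seg, gt).sum()
    tn = np.logical_and(~seg, ~gt).sum()
    fp = np.logical_and(seg, ~gt).sum()
    fn = np.logical_and(~seg, gt).sum()
    tpr = tp / float(tp + fn)                    # vessel pixels correctly found
    fpr = fp / float(fp + tn)                    # non-vessel pixels wrongly marked
    acc = (tp + tn) / float(tp + tn + fp + fn)
    return tpr, fpr, acc
```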

Most of the alternative methods use the whole image to measure the performance. In [5] all the experiments are done on the FOV, without considering the performance in the dark area outside the FOV. The method in [3] measures the performance on both the whole image and the FOV. The dark background outside the FOV in the retinal image is easy to segment, which is an advantage in measuring the true negative pixels when the whole image is considered. We have calculated the percentage of pixels outside the FOV in the images of the two datasets, which represents approximately 25% of the pixels in the whole image. However, this does not affect the measurement metrics except when the true negative value is involved (e.g. the accuracy rate). On the other hand, most of the methods use the whole image to measure their performance, making the comparison fair.

A. Results of Blood Vessel Segmentation Algorithm on STARE dataset

Tables II and III show the performance comparison of our approach with recent alternative methods in terms of TPR, FPR and ACC on the STARE dataset. The performance results of the second expert hand labelling and of the methods of Martinez-Perez et al. [9] and Staal et al. [5] are taken from [9]. The results of the methods proposed by Mendonca et al. [3] and Hoover et al. [2] are taken from [3], and the figures for the approaches of Chaudhuri et al. [6], Kaba et al. [27], Marin et al. [28] and Zhang et al. [29] were taken from their original manuscripts. The segmentation results of Zhang et al. [29], Chaudhuri et al. [6] and Soares et al. [30] on both healthy and unhealthy images were taken from [29]. The testing includes all 20 fundus images, except for the method proposed by Staal et al. [5], which used 19 out of the 20 images (10 healthy and 9 unhealthy).

In Table II the second human expert hand labelling is considered as the target performance level (average TPR = 0.8951), given the first human expert hand labelling as the benchmark. Thus our method (TPR = 0.7887) needs an improvement of 10.64% in average true positive rate, whereas Mendonca et al., Staal et al., Chaudhuri et al., Hoover et al., Kaba et al., Martinez-Perez et al. and Zhang et al. have room for improvement of 19.55%, 19.81%, 28.17%, 22.00%, 23.06%, 14.45% and 17.74% respectively.

Considering the value of the average TPR as the performance measure, our proposed approach reaches better performance than all the other methods. With the average accuracy rate, our method is only marginally inferior to the methods presented by Staal et al. [5], Kaba et al. [27], Marin et al. [28] and Zhang et al. [29], but as mentioned above, Staal et al. [5] used 19 of the 20 images. Compared to the methods proposed by Hoover et al. [2], Martinez-Perez et al. [9] and Chaudhuri et al. [6], our approach outperforms the accuracy rate of these techniques, and it has approximately the same accuracy as Mendonca et al. [3].

Table III compares the performance on the healthy subject images against the unhealthy subject images of the STARE dataset. The results of the experiments show that the unhealthy ocular images cause a significant degradation in the performance of automated blood vessel segmentation techniques. An overview of the results shows that, on both healthy and unhealthy images, our proposed method achieves better overall average TPR performance than all the other methods, while the average ACC value is comparable to the performance of Soares et al. [30] and Zhang et al. [29]. It outperforms the ACC of Mendonca et al. [3], Hoover et al. [2] and Chaudhuri et al. [6] on both healthy and unhealthy images.

Figs. 12 and 13 show the segmented images and the manually labelled images for the DRIVE and the STARE datasets respectively.

Fig. 12. The DRIVE dataset: a) and d) retinal images, b) and e) our segmentation results, and c) and f) manually labelled results.

Fig. 13. The STARE dataset: a) and d) retinal images, b) and e) our segmentation results, and c) and f) manually labelled results.

B. Results of Blood Vessel Segmentation Algorithm on DRIVE dataset

The performance of our segmentation method on the DRIVE dataset is compared with alternative methods: the results of Zhang et al. [29], Soares et al. [30], Zana et al. [7], Garg et al. [31], Perfetti et al. [32] and Al-Rawi et al. [33] are taken from [29]; the results of the second human expert B and of the methods proposed by Niemeijer et al. [34], Mendonca et al. [3] and Staal et al. [5] were acquired from [3]; Cinsdikici et al. [35] and Jiang et al. [36] were taken from Marin et al. [28]; and finally Ricci et al. [37], Soares et al. [30] and Martinez-Perez et al. [9] were taken from their original manuscripts.

The second human expert B hand labelling [3] is considered as the target performance level (TPR = 0.7761 and ACC = 0.9473), given the first human expert A hand labelling as the reference (benchmark). Table IV shows the performance of our method against the above methods on the DRIVE dataset. Our method needs an overall improvement of 2.49% and 0.61% in average true positive rate and average accuracy rate respectively.

With an average TPR of 0.7512, our method achieves better performance than all the other methods with respect to the average TPR value. The average accuracy achieved with our approach on DRIVE outperforms Jiang et al. [36], Cinsdikici et al. [35], Zana et al. [7], Garg et al. [31], Zhang et al. [29] and Martinez-Perez et al. [9]. It is marginally inferior to the methods proposed by Al-Rawi et al. [33], Ricci et al. [37] and Mendonca et al. [3], and it is comparable to Soares et al. [30], Marin et al. [28], Niemeijer et al. [34] and Staal et al. [5].

It is important to note that the methods presented by Ricci et al. [37], Soares et al. [30], Marin et al. [28], Niemeijer et al. [34] and Staal et al. [5] use supervised techniques that generally depend on the training datasets; thus, to achieve good results, classifier re-training is required before performing any experimentation on new datasets.

An overview of the testing results on DRIVE shows that our method offers a reliable and robust segmentation solution for blood vessels. It is clearly observed that our approach reaches better performance in terms of average true positive rate.

TABLE II
PERFORMANCE COMPARISON ON THE STARE DATASET.

    Method TPR FPR Accuracy

    2nd human expert [9] 0.8951 0.0438 0.9522

    Hoover[2] 0.6751 0.0433 0.9267

    Staal[5] 0.6970 0.0190 0.9541

    Mendonca[3] 0.6996 0.0270 0.9440

    Martinez[9] 0.7506 0.0431 0.9410

    Chaudhuri[6] 0.6134 0.0245 0.9384

    Kaba [27] 0.6645 0.0216 0.9450

    Zhang [29] 0.7177 0.0247 0.9484

    Marin [28] - - 0.9526

    Our method 0.7887 0.0367 0.9441

C. Results of Optic Disc Segmentation on DIARETDB1 and DRIVE datasets

The performance results of our approach are compared to the following alternative methods: the adaptive morphological approach by Welfer et al. [14], the traditional graph cut technique by Boykov et al. [24] and the topology cut technique proposed by Zeng et al. [38].

TABLE III
PERFORMANCE COMPARISON OF HEALTHY VERSUS DISEASED IMAGES IN THE STARE DATASET.

Healthy images

    Method TPR FPR Accuracy

    Mendonca[3] 0.7258 0.0209 0.9492

    Hoover[2] 0.6766 0.0338 0.9324

    Chaudhuri[6] 0.7335 0.0218 0.9486

    Zhang [29] 0.7526 0.0221 0.9510

    Soares [30] 0.7554 0.0188 0.9542

Our method 0.8717 0.0364 0.9513

Unhealthy images

    Method TPR FPR Accuracy

    Mendonca[3] 0.6733 0.0331 0.9388

    Hoover[2] 0.6736 0.0528 0.9211

    Chaudhuri[6] 0.5881 0.0384 0.9276

    Zhang [29] 0.7166 0.0327 0.9439

    Soares [30] 0.6869 0.0318 0.9416

    Our method 0.7057 0.0371 0.9369

TABLE IV
PERFORMANCE COMPARISON ON THE DRIVE DATASET.

    Method TPR FPR Accuracy

    Human expert B[3] 0.7761 0.0275 0.9473

    Staal[5] 0.7194 0.0227 0.9442

    Mendonca[3] 0.7344 0.0236 0.9452

    Niemeijer[34] 0.6898 0.0304 0.9417

    Jiang[36] - - 0.8911

    Cinsdikici [35] - - 0.9293

    Marin [28] - - 0.9452

    Ricci[37] - - 0.9633

    Zana[7] - - 0.9377

    Garg[31] - - 0.9361

    Perfetti[32] - - 0.9261

    Al-Rawi[33] - - 0.9510

    Soares[30] - - 0.9466

    Zhang[29] 0.7120 0.0276 0.9382

    Martinez[9] 0.7246 0.0345 0.9344

    Our method 0.7512 0.0316 0.9412

Unfortunately it was not possible to test our method against a larger number of alternative methods, since most methods each use a different benchmark to measure the results of the optic disc segmentation, which makes comparison of the results difficult. Further comparison is made between our two optic disc segmentation methods (the Compensation Factor and the Markov Random Field image reconstruction). All the methods are tested on the same datasets (DIARETDB1 and DRIVE) of 109 fundus retinal images in total, including those with a discernible optic disc.


The optic disc segmentation performance is evaluated by the overlapping ratio O_ratio and the mean absolute distance MAD. The overlapping ratio measures the common area between the optic disc region in the ground truth and the optic disc region segmented by our method. It is defined by the following formulation:

    Oratio =GS

    GS

    (16)

    where G represents the true optic disc boundary (manuallylabelled region) and S is the optic disc boundary segmentedby our method. MAD is defined as:

    MAD is defined as:

    MAD(G_c, S_c) = \frac{1}{2} \left\{ \frac{1}{n} \sum_{i=1}^{n} d(g_{c_i}, S) + \frac{1}{m} \sum_{i=1}^{m} d(s_{c_i}, G) \right\}    (17)

    where G_c and S_c are the contours of the segmented regions of the ground truth and our algorithm respectively, and d(a_i, B) is the minimum distance from the position of the pixel a_i on the contour A to the contour B. A good segmentation implies a high overlapping ratio and a low MAD value.

    The sensitivity of our method on DIARETDB1 and DRIVE is defined as:

    Sensitivity = \frac{T_p}{T_p + F_n}    (18)

    where T_p and F_n are the numbers of true positives and false negatives respectively. The sensitivity indicates how well the segmentation method detects the foreground pixels.
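    To make these three measures concrete, the following is a minimal sketch of how they can be computed from binary segmentation masks using NumPy and SciPy; the function names and library choices are illustrative assumptions, not the implementation used in this paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def overlap_ratio(gt, seg):
    """Eq. (16): area of intersection over area of union of two binary masks."""
    inter = np.logical_and(gt, seg).sum()
    union = np.logical_or(gt, seg).sum()
    return inter / union if union > 0 else 0.0

def contour(mask):
    """Boundary pixels of a binary mask (the mask minus its erosion)."""
    mask = mask.astype(bool)
    return np.logical_and(mask, np.logical_not(binary_erosion(mask)))

def mean_absolute_distance(gt, seg):
    """Eq. (17): symmetric mean of minimum contour-to-contour distances, in pixels.
    Assumes both masks contain a non-empty region."""
    # Distance of every pixel to the nearest contour pixel of each mask.
    d_to_seg = distance_transform_edt(np.logical_not(contour(seg)))
    d_to_gt = distance_transform_edt(np.logical_not(contour(gt)))
    return 0.5 * (d_to_seg[contour(gt)].mean() + d_to_gt[contour(seg)].mean())

def sensitivity(gt, seg):
    """Eq. (18): true positives over true positives plus false negatives."""
    tp = np.logical_and(gt, seg).sum()
    fn = np.logical_and(gt, np.logical_not(seg)).sum()
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0
```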

    Fig. 14 (a) and (b) show the optic disc segmentation results of the topology cut technique [38], the traditional graph cut technique [24] and both our methods (optic disc segmentation with the compensation factor and with Markov Random Field image reconstruction) on DIARETDB1 and DRIVE respectively. Judged against the ground truth images, it is clear that both our methods perform better than the alternative topology cut [38] and traditional graph cut [24] techniques. The topology cut technique achieved acceptable results on the brighter images, characterised by vessels that are more likely to belong to the foreground (similar intensity to the optic disc). The traditional graph cut technique, however, tends to segment only the brightest region of the disc; this is due to the intrusion of the blood vessels in the optic disc region, which misguides the segmentation algorithm into following a short path.

    Table V compares the performance of our proposed methods with the alternative methods on DIARETDB1 images. The compensation factor Vad and the MRF image reconstruction segmentation algorithms achieve overlapping ratios of 0.7594 and 0.7850 respectively and outperform the approaches in [14], [38] and [24]. Considering the performance in terms of mean absolute distance, our methods reach the lowest values, 6.18 for the compensation factor and 6.55 for the MRF image reconstruction, and perform better than all the other methods. Both our methods also achieve the highest average sensitivity, with 87.50% for the MRF image reconstruction and 86.75% for the compensation factor Vad on the DIARETDB1 images.

    Fig. 14. (a) Optic disc segmentation results on DIARETDB1 images: first row topology cut, second row graph cut, third row compensation factor algorithm, fourth row Markov Random Field image reconstruction algorithm, fifth row hand labelled. (b) Optic disc segmentation results on DRIVE images: first row topology cut, second row graph cut, third row compensation factor algorithm, fourth row Markov Random Field image reconstruction algorithm, fifth row hand labelled.

    Table VI shows the performance results of our methods and the alternative methods in terms of Oratio, MAD and sensitivity on DRIVE images. An overview of the segmentation results shows that our proposed methods achieve the highest overlapping ratio with the minimum MAD value compared to the traditional graph cut method [24] and the topology cut method [38]; the only exception is the adaptive morphological approach [14], which marginally outperforms the compensation factor algorithm in terms of MAD. However, an increase in the overlapping ratio does not necessarily mean a decrease in the MAD value. Thus the MAD value alone is not enough to measure the performance of the segmentation results, but it provides a good reference of how well the contour matches the ground truth contour.

    TABLE V
    PERFORMANCE COMPARISON ON THE DIARETDB1 DATASET.

    Method                       Average ORatio   Average MAD   Average Sensitivity
    Topology cut [38]            0.3843           17.49         0.5530
    Adaptive morphologic [14]    0.4365           8.31          -
    Graph cut [24]               0.5403           10.74         0.7635
    Compensation Factor          0.7594           6.18          0.8675
    MRF Image Reconstruction     0.7850           6.55          0.8750


    TABLE VI
    PERFORMANCE COMPARISON ON THE DRIVE DATASET.

    Method                       Average ORatio   Average MAD   Average Sensitivity
    Topology Cut [38]            0.5591           10.24         0.6512
    Adaptive morphologic [14]    0.4147           5.74          -
    Graph cut [24]               0.5532           9.97          0.7398
    Compensation Factor          0.709            6.48          0.8464
    MRF Image Reconstruction     0.8240           3.39          0.9819

    For further performance comparison, we used the cumulative histogram to compare the overlapping ratio of our proposed methods against topology cut [38] and graph cut [24]. Each segmentation method is evaluated against the human expert hand labelling, and the cumulative histogram represents the frequency of the Oratio values. A perfect segmentation is achieved when Oratio = 1 and the area under the curve is equal to zero. Fig. 15 and 16 show the cumulative histograms of the overlapping ratio for topology cut [38], graph cut [24], the compensation factor and the MRF image reconstruction on the DIARETDB1 and DRIVE datasets respectively. The overview of the graphs shows that the compensation factor and MRF image reconstruction methods achieve the minimum area under the graph, hence our methods outperform the other methods. In general the MRF image reconstruction method reaches better results on DRIVE images, while the compensation factor method produces better segmentation results on the DIARETDB1 dataset.
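    As an illustration of this comparison, the sketch below builds such cumulative histograms of per-image Oratio values with NumPy and matplotlib; the dictionary of per-method ratios is a hypothetical input, not data taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_cumulative_histogram(oratio_by_method, n_bins=20):
    """For each method, plot the fraction of images whose Oratio is at or below
    each overlap-ratio bin edge; a curve hugging zero until Oratio = 1
    (area under the curve close to zero) indicates near-perfect segmentation."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for name, values in oratio_by_method.items():
        hist, _ = np.histogram(values, bins=bins)
        cumulative = np.cumsum(hist) / len(values)
        plt.step(bins[1:], cumulative, where="post", label=name)
    plt.xlabel("Overlapping ratio")
    plt.ylabel("Cumulative frequency")
    plt.legend()
    plt.show()

# Hypothetical usage with per-image overlap ratios for two methods:
# plot_cumulative_histogram({"Graph cut [24]": gc_ratios, "MRF reconstruction": mrf_ratios})
```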

    Fig. 15. Cumulative histogram for overlapping ratio of DIARETDB1 images

    Following the criterion of Niemeijer et al. [39], which considers a minimum overlapping ratio Oratio > 50% as a successful segmentation, the compensation factor algorithm, with 86.52% success, performs better on DRIVE than on DIARETDB1, and the MRF image reconstruction segmentation, with 90.00%, achieves better results than the compensation factor algorithm on DRIVE.
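    A success rate under this criterion follows directly from the per-image overlap ratios; the small helper below is an illustrative sketch, with the threshold parameter simply encoding the 50% criterion of [39].

```python
def success_rate(oratios, threshold=0.5):
    """Fraction of images whose overlapping ratio exceeds the threshold,
    following the Oratio > 50% success criterion of Niemeijer et al. [39]."""
    return sum(1 for o in oratios if o > threshold) / len(oratios)
```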

    Fig. 16. Cumulative histogram for overlapping ratio of DRIVE images

    VI. DISCUSSIONS AND CONCLUSIONS

    We have presented a novel approach for blood vessel and optic disc segmentation in retinal images by integrating the mechanism of flux, MRF image reconstruction and a compensation factor into the graph cut method. The process also involves contrast enhancement, adaptive histogram equalisation, binary opening and distance transform for pre-processing.
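    As a rough illustration of such a pre-processing chain, the sketch below applies contrast-limited adaptive histogram equalisation, a binary opening and a distance transform to the green channel of a fundus image using OpenCV; the Otsu thresholding step and all parameter values are assumptions made for this example and are not taken from the paper.

```python
import cv2

def preprocess_green_channel(fundus_bgr):
    """Illustrative pre-processing chain: CLAHE contrast enhancement on the green
    channel, a rough binary mask cleaned with a binary opening, and a distance
    transform of that mask. Parameters are illustrative only."""
    green = fundus_bgr[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # Rough foreground mask by Otsu thresholding, cleaned with a binary opening.
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Distance of each foreground pixel to the nearest background pixel.
    dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
    return enhanced, opened, dist
```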

    We have evaluated the performance of vessel segmentation against 10 other methods including human manual labelling on the STARE dataset, and 15 other methods including human manual labelling on the DRIVE dataset. For the optic disc segmentation, we have evaluated the performance of our method against three other methods on the DRIVE and DIARETDB1 datasets.

    Tables II, III and IV show the performance comparison in terms of average true positive rate, false positive rate and accuracy. According to these results, our vessel segmentation algorithm achieves acceptable results and outperforms all other methods in terms of average true positive rate on both STARE and DRIVE images. In terms of average accuracy, our method outperforms Hoover et al. [2], Martinez-Perez et al. [9] and Chaudhuri et al. [6] on STARE images. On DRIVE it performs better than Jiang et al. [36], Cinsdikici et al. [35], Zana et al. [7], Garg et al. [31], Zhang et al. [29] and Martinez et al. [9]. Nevertheless our method is marginally inferior to the methods presented by Staal et al. [5], Kaba et al. [27], Marin et al. [28] and Zhang et al. [29] on STARE, and to Al-Rawi et al. [33], Ricci et al. [37], Mendonca et al. [3], Soares et al. [30], Marin et al. [28] and Staal et al. [5] on DRIVE. Although Soares et al. [30], Marin et al. [28], Staal et al. [5] and Ricci et al. [37] seem to achieve higher accuracy, as supervised techniques they generally depend on the training datasets; to achieve excellent results, classifier re-training is required before any experimentation on new datasets. Further studies in [28] showed that these methods perform well when both training and testing are applied on the same dataset, but the performance deteriorates when the method is trained and tested on different datasets. Since these methods are sensitive to the training datasets, deploying them for practical use in retinal blood vessel segmentation would need further improvements, as segmentation algorithms must work on retinal images taken under different conditions to be effective.


    Our proposed method incorporates prior knowledge of the blood vessels to perform the segmentation, and it can be applied to retinal images from multiple sources and under different conditions without any need for training. This can be seen in the results achieved by this method on both the STARE and DRIVE datasets.

    For the optic disc segmentation, Tables V and VI present the performance of our method on DIARETDB1 and DRIVE images. The results show that our two methods (the compensation factor and the MRF image reconstruction) achieved the best overall performance. The results also show that the MRF image reconstruction algorithm outperforms the compensation factor algorithm by 2.56% and 11.5% on DIARETDB1 and DRIVE images respectively. However, it is important to note that the MRF image reconstruction algorithm depends on the vessel segmentation algorithm; for example, if the vessel segmentation achieves a low performance on a severely damaged retinal image, the reconstruction will not define a meaningful optic disc region and the segmentation will fail.

    Furthermore, the proposed method addresses one of the main issues in medical image analysis: the segmentation of overlapping tissue. The blood vessels converge into the optic disc area and misguide the graph cut algorithm towards a short path, breaking the optic disc boundary. To achieve good segmentation results, the MRF image reconstruction algorithm eliminates vessels in the optic disc area, without modifying the other image structures, before segmenting the optic disc. The compensation factor, on the other hand, incorporates the vessels using their local intensity characteristics to perform the optic disc segmentation. Thus our method could be applied in other medical image analysis applications to overcome the overlapping tissue segmentation problem.

    Our future research will focus on the segmentation of retinal lesions, known as exudates, using the segmented structures of the retina (blood vessels and optic disc). A background template can be created from these structures, and this template can then be used to detect suspicious areas (lesions) in retinal images.

    ACKNOWLEDGMENT

    The authors would like to thank V. Kolmogorov for providing the software MaxFlow-v3.01 used to compute the graph cut.

    REFERENCES

    [1] K. Fritzsche, A. Can, H. Shen, C. Tsai, J. Turner, H. Tanenbuam, C. Stewart, B. Roysam, J. Suri, and S. Laxminarayan, "Automated model based segmentation, tracing and analysis of retinal vasculature from digital fundus images," State-of-The-Art Angiography, Applications and Plaque Imaging Using MR, CT, Ultrasound and X-rays, pp. 225-298, 2003.

    [2] A. Hoover, V. Kouznetsova, and M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203-210, 2000.

    [3] A. M. Mendonca and A. Campilho, "Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction," IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1200-1213, 2006.

    [4] J. Soares, J. Leandro, R. Cesar, H. Jelinek, and M. Cree, "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214-1222, 2006.

    [5] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501-509, 2004.

    [6] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263-269, 1989.

    [7] F. Zana and J.-C. Klein, "Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation," IEEE Transactions on Image Processing, vol. 10, no. 7, pp. 1010-1019, 2001.

    [8] L. Xu and S. Luo, "A novel method for blood vessel detection from retinal images," BioMedical Engineering OnLine, vol. 9, no. 1, p. 14, 2010.

    [9] M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, "Segmentation of blood vessels from red-free and fluorescein retinal images," Medical Image Analysis, vol. 11, no. 1, pp. 47-61, 2007.

    [10] L. Zhou, M. S. Rzeszotarski, L. J. Singerman, and J. M. Chokreff, "The detection and quantification of retinopathy using digital angiograms," IEEE Transactions on Medical Imaging, vol. 13, no. 4, pp. 619-626, 1994.

    [11] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images," British Journal of Ophthalmology, vol. 83, no. 8, pp. 902-910, 1999.

    [12] R. Chrastek, M. Wolf, K. Donath, H. Niemann, D. Paulus, T. Hothorn, B. Lausen, R. Lammer, C. Y. Mardin, and G. Michelson, "Automated segmentation of the optic nerve head for diagnosis of glaucoma," Medical Image Analysis, vol. 9, no. 1, pp. 297-314, 2005.

    [13] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Transactions on Medical Imaging, vol. 23, no. 2, pp. 256-264, 2004.

    [14] D. Welfer, J. Scharcanski, C. Kitamura, M. D. Pizzol, L. Ludwig, and D. Marinho, "Segmentation of the optic disc in color eye fundus images using an adaptive morphological approach," Computers in Biology and Medicine, vol. 40, no. 1, pp. 124-137, 2010.

    [15] A. Aquino, M. E. Gegundez-Arias, and D. Marin, "Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques," IEEE Transactions on Medical Imaging, vol. 29, no. 11, pp. 1860-1869, 2010.

    [16] A. Salazar-Gonzalez, Y. Li, and X. Liu, "Optic disc segmentation by incorporating blood vessel compensation," in Proceedings of IEEE SSCI, International Workshop on Computational Intelligence In Medical Imaging, pp. 1-8, 2011.

    [17] A. G. Salazar-Gonzalez, Y. Li, and X. Liu, "Retinal blood vessel segmentation via graph cut," in Control Automation Robotics & Vision (ICARCV), 2010 11th International Conference on. IEEE, 2010, pp. 225-230.

    [18] A. Salazar-Gonzalez, Y. Li, and D. Kaba, "MRF reconstruction of retinal images for the optic disc segmentation," in Health Information Science. Springer, 2012, pp. 88-99.

    [19] A. G. S. Gonzalez, "Structure analysis and lesion detection from retinal fundus images," Ph.D. dissertation, Brunel University, 2011.

    [20] D. Wu, M. Zhang, and J. Liu, "On the adaptive detection of blood vessels in retinal images," IEEE Transactions on Biomedical Engineering, vol. 53, no. 2, pp. 341-343, 2006.

    [21] Y. Y. Boykov and M.-P. Jolly, "Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images," in Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, vol. 1. IEEE, 2001, pp. 105-112.

    [22] S. Vicente, V. Kolmogorov, and C. Rother, "Graph cut based image segmentation with connectivity priors," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, CVPR, vol. 1, pp. 1-8, 2008.

    [23] V. Kolmogorov and Y. Boykov, "What metrics can be approximated by geo-cuts, or global optimization of length/area and flux," in Proceedings of Tenth IEEE International Conference on Computer Vision, ICCV, vol. 1, pp. 564-571, 2005.

    [24] Y. Boykov and G. Funka-Lea, "Graph cuts and efficient N-D image segmentation," International Journal of Computer Vision, vol. 70, no. 2, pp. 109-131, 2006.

    [25] A. Efros and T. Leung, "Texture synthesis by non-parametric sampling," in Proceedings of the ICCV, pp. 1033-1038, 1999.

    [26] T. Kauppi, V. Kalesnykiene, J. Kamarainen, L. Lensu, I. Sorri, A. Raninen, R. Voitilainen, H. Uusitalo, H. Kalviainen, and J. Pietila, "DIARETDB1 diabetic retinopathy database and evaluation protocol," in Proceedings of British Machine Vision Conference, 2007.

    [27] D. Kaba, A. G. Salazar-Gonzalez, Y. Li, X. Liu, and A. Serag, "Segmentation of retinal blood vessels using Gaussian mixture models and expectation maximisation," in Health Information Science. Springer, 2013, pp. 105-112.

    [28] D. Marin, A. Aquino, M. E. Gegundez-Arias, and J. M. Bravo, "A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features," IEEE Transactions on Medical Imaging, vol. 30, no. 1, pp. 146-158, 2011.

    [29] B. Zhang, L. Zhang, L. Zhang, and F. Karray, "Retinal vessel extraction by matched filter with first-order derivative of Gaussian," Computers in Biology and Medicine, vol. 40, no. 4, pp. 438-445, 2010.

    [30] J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214-1222, 2006.

    [31] S. Garg, J. Sivaswamy, and S. Chandra, "Unsupervised curvature-based retinal vessel segmentation," in Biomedical Imaging: From Nano to Macro, 2007. ISBI 2007. 4th IEEE International Symposium on. IEEE, 2007, pp. 344-347.

    [32] R. Perfetti, E. Ricci, D. Casali, and G. Costantini, "Cellular neural networks with virtual template expansion for retinal vessel segmentation," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 54, no. 2, pp. 141-145, 2007.

    [33] M. Al-Rawi, M. Qutaishat, and M. Arrar, "An improved matched filter for blood vessel detection of digital retinal images," Computers in Biology and Medicine, vol. 37, no. 2, pp. 262-267, 2007.

    [34] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, "Comparative study of retinal vessel segmentation methods on a new publicly available database," in Medical Imaging 2004. International Society for Optics and Photonics, 2004, pp. 648-656.

    [35] M. G. Cinsdikici and D. Aydin, "Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm," Computer Methods and Programs in Biomedicine, vol. 96, no. 2, pp. 85-95, 2009.

    [36] X. Jiang and D. Mojon, "Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 131-137, 2003.

    [37] E. Ricci and R. Perfetti, "Retinal blood vessel segmentation using line operators and support vector classification," IEEE Transactions on Medical Imaging, vol. 26, no. 10, pp. 1357-1365, 2007.

    [38] Y. Zeng, D. Samaras, W. Chen, and Q. Peng, "Topology cuts: a novel min-cut/max-flow algorithm for topology preserving segmentation in N-D images," Computer Vision and Image Understanding, vol. 112, no. 1, pp. 81-90, 2008.

    [39] M. Niemeijer, M. D. Abramoff, and B. van Ginneken, "Segmentation of the optic disc, macula and vascular arch in fundus photographs," IEEE Transactions on Medical Imaging, vol. 26, no. 1, pp. 116-127, 2007.

    Ana Salazar-Gonzalez was born in Irapuato, Guanajuato, Mexico in 1979. She received the B.S. degree in Electronics Engineering in 2003 and the M.S. degree in Signal Processing in 2006 from the Universidad de Guanajuato, Mexico. In 2012 she was awarded the Ph.D. degree in Computer Science by Brunel University, London, UK.

    Since 2003 she has been dedicated to the design and development of image processing systems. Her research interests include medical image analysis, OCR systems and image processing for security documents under different wavelength lights. Currently she is working as an engineer at Access IS, designing and developing OCR systems and protocols of validation for security documents (passports, visas, ID cards, driving licences, etc.).

    Djibril Kaba received his M.Eng. and B.Eng. in electronics engineering from King's College London, UK in 2010. He is currently pursuing the Ph.D. degree in the Department of Computer Science at Brunel University, West London, UK.

    From 2010 to 2011, he worked as a Business Analyst for ITSeven in London, UK. Since 2011, he has been a Teaching Assistant in the Department of Computer Science at Brunel University, London, UK. His research interests include computer vision, image processing, pattern recognition, medical image analysis and machine learning.

    Yongmin Li is a senior lecturer in the Department of Computer Science at Brunel University, West London, UK. He received his M.Eng. and B.Eng. in control engineering from Tsinghua University, China, in 1990 and 1992 respectively, and his Ph.D. in computer vision from Queen Mary, University of London in 2001.

    Between 2001 and 2003 he worked as a research scientist at the British Telecom Laboratories. His current research interests cover the areas of automatic control, non-linear filtering, computer vision, image processing, video analysis, medical imaging, machine learning, and pattern recognition.

    Xiaohui Liu is Professor of Computing at Brunel University in the UK, where he directs the Centre for Intelligent Data Analysis, conducting interdisciplinary research concerned with the effective analysis of data.

    He is a Chartered Engineer, Life Member of the Association for the Advancement of Artificial Intelligence, Fellow of the Royal Statistical Society, and Fellow of the British Computer Society. Professor Liu has over 100 high quality journal publications in biomedical informatics, complex systems, computational intelligence and data mining. His H-index is over 40.