
Mathew et al. BMC Bioinformatics (2015) 16:187 DOI 10.1186/s12859-015-0617-x

METHODOLOGY ARTICLE Open Access

Robust and automated three-dimensional segmentation of densely packed cell nuclei in different biological specimens with Lines-of-Sight decomposition

B. Mathew1*, A. Schmitz1, S. Muñoz-Descalzo2, N. Ansari1, F. Pampaloni1, E.H.K. Stelzer1 and S.C. Fischer1

Abstract

Background: Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset.

Results: We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91 %) whereas the other methods each failed for at least one dataset (F-measure ≤ 69 %). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values.

Conclusion: We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.

Keywords: Image segmentation, image processing, algorithm, approximately convex decomposition, clustering, three-dimensional microscopy, spheroid, mouse embryo

* Correspondence: [email protected]
1Buchmann Institute for Molecular Life Sciences (BMLS), Fachbereich Biowissenschaften (FB15, IZN), Goethe Universität Frankfurt am Main, Max-von-Laue-Straße 15, 60438 Frankfurt am Main, Germany
Full list of author information is available at the end of the article

© 2015 Mathew et al. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Background
Biological processes rely on spatial cell-cell and cell-matrix interactions and are highly influenced by the microenvironment of the cells [1–3]. A large number of biomarkers and dyes are available to label distinct cellular structures. Well-defined protocols enable the observation and quantification of dynamic processes of cells within intrinsically three-dimensional structures. Several techniques have been explored to track cells in vivo or in vitro. The classical method is fluorescent labelling of cell nuclei [4, 5] and visualization by advanced three-dimensional fluorescence microscopy. This method enables the localization of cell nuclei and is also utilized to assess cell viability.

Optical imaging has experienced significant progress during the past 20 years [6]. The most widely applied advanced three-dimensional fluorescence imaging technique is confocal microscopy [7, 8]. Light sheet-based fluorescence microscopy (LSFM) such as single plane illumination microscopy (SPIM) [9, 10] and digital scanned laser light sheet-based fluorescence microscopy (DSLM) [11] are becoming increasingly popular. They have demonstrated their applicability for imaging biological specimens ranging from a single cell to entire animals [12, 13]. However, fluorescence images produced by all microscopes are affected by their optical properties. The brightness as well as the signal-to-noise ratio in images of large, thick and light-scattering multicellular specimens decreases with the penetration depth [14]. Furthermore, the axial resolution is at least three times worse than the lateral resolution, which often causes objects in the image to apparently touch one another.

The availability of three-dimensional microscopes for three-dimensional biological specimens has pushed the development of three-dimensional image segmentation methods. Three-dimensional cell nuclei segmentation methods commonly rely on pre-processing steps (e.g. filtering, initial thresholding or seed detection) combined with watershed [15–17], graph cut [18, 19], machine learning [20, 21], gradient flow tracking [22], active surface models [23], level set [24], or concavity-based segmentation [25, 26]. But despite the continual progress in three-dimensional cell nuclei segmentation, there is still a need to improve accuracy, level of automation and adaptability.

A major challenge for segmentation is the separation of densely packed or apparently touching cell nuclei. Another difficulty arises from the diversity in terms of the imaged biological specimens or the imaging techniques used for acquisition. In general, segmentation methods are adapted to either a specific specimen (e.g. mouse embryo [8], C. elegans [27] or zebrafish [28]) or a specific imaging technique (e.g. DSLM or confocal microscope [29–31]); or their accuracy has only been tested for one particular application [32, 33]. To achieve a satisfactory performance of such a method for images that contain another specimen or were obtained by a different imaging technique, sets of parameter values need to be optimized.

To tackle these problems and establish a method that does not rely on signal intensity, heavy parameterization or effective initialization of an algorithm, we chose approximate convex decomposition as the basis of our cell nuclei segmentation method. In the literature, several approaches exist for separating objects into approximately convex parts. However, none of them have been applied to biological images. In Lien's and Amato's approach [34], first the most concave regions are identified, and then they are partitioned such that the concavity is reduced below a specified threshold, resulting in approximately convex parts. Another approach is based on greedy region growing of a surface part until the distance to its convex hull decreases below a given threshold [27]. This again yields approximately convex parts. In Attene et al. [35], a shape is represented by a tetrahedral mesh. Decomposition is achieved by calculating a hierarchy of convex polyhedra that tightly enclose the shape. In a bottom-up manner, a single polyhedron is clustered into approximately convex parts.

In the work of Asafi et al. [36], approximate convex decomposition is achieved by analyzing pairs of surface points of a shape that are visible to each other. Lines between such mutually visible pairs of points are called "Lines-of-Sight" [36]. Neither a tetrahedralization nor the convex hull have to be calculated for an object. Therefore, Lines-of-Sight are easily accessible features for which neither shape and size information nor intensity distributions have to be considered. We apply this segmentation method to biological images for the very first time.

We combine the Lines-of-Sight (LoS) concept with a local adaptive pre-processing to separate apparently touching cell nuclei into approximately convex parts representing single cell nuclei. We show that our LoS implementation accurately segments three-dimensional cell nuclei due to effective separation of apparently touching cell nuclei. The accuracy of our proposed method exceeded 86 %. Furthermore, volumes of split cell nuclei were well represented. In a direct comparison with methods that are based on a graph cut algorithm (FARSight), seeded watershed transformation (ImageJ 3D Watershed) or machine learning (ilastik) – commonly used techniques for three-dimensional cell nuclei segmentation – we demonstrate overall superior segmentation performance and volume measurements. Thereby, our automated LoS approach is applied with the same set of parameter values for real test images that differ in terms of biological specimen, size, imaging technique, and quality. Together, this demonstrates the high accuracy, adaptability and applicability of the proposed method.

Fig. 2 Concept of Lines-of-Sight (LoS). Lines-of-Sight are lines connecting two points on the surface of an object that do not leave the inner volume of the shape (orange lines). Pairs of surface points that are in line-of-sight are mutually visible. Consequently, the gray surface point is in line-of-sight with the orange marked regions

Method

Approximately convex decomposition with Lines-of-Sight
In many biological specimens cell nuclei are naturally densely packed. In three-dimensional fluorescence microscopy images, they even seem to touch each other, e.g. due to a lower axial than lateral resolution of the microscope. The differences in intensity are often not sufficient to distinguish single cell nuclei within a clump. Therefore, basic segmentation by intensity thresholding fails to identify individual nuclei (Fig. 1).

We developed an automated and robust three-dimensional cell nuclei segmentation method. The idea is that a single nucleus is expected to be convex, whereas clumps of apparently touching cell nuclei are in general not convex. However, object decomposition into exact convex parts can be costly and too strict for segmentation due to the generation of an uncontrollable number of components [34]. Hence, the concept of approximate convex decomposition was implemented to decompose apparently touching cell nuclei. The definition of approximately convex and the concept of Lines-of-Sight (LoS) were taken from the work of Asafi et al. [36] and adapted to cell nuclei. Two surface points of a shape are said to be in line-of-sight if the connecting line between those two points does not leave the inner volume of the shape. Such mutually visible points are in a convex position. Hence, an object is convex if all surface points are in a convex position. If only a few surface points are not in a convex position, an object is approximately convex. Fig. 2 illustrates Lines-of-Sight for a given surface point. As evident, not all pairs of points that are in line-of-sight belong to a connected part of the shape surface. Thus, this object is not convex.
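To make the visibility criterion concrete, the following minimal sketch (written in Python with NumPy rather than the Mathematica of the actual implementation) tests whether two surface points of a binary mask are in line-of-sight by sampling points along the connecting line and requiring all of them to lie on the foreground; the helper name in_line_of_sight, the toy two-disc mask and the fixed number of samples are illustrative assumptions.

```python
import numpy as np

def in_line_of_sight(p, q, mask, n_samples=200):
    """Return True if every sampled point on the segment p-q lies on the foreground.

    p, q      : index tuples of two surface points (2D here, works unchanged in 3D)
    mask      : boolean array, True for object pixels/voxels
    n_samples : number of equally spaced sample points along the line (an assumption)
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    for t in np.linspace(0.0, 1.0, n_samples):
        idx = tuple(np.round(p + t * (q - p)).astype(int))
        if not mask[idx]:
            return False   # the line leaves the shape, so it is not a Line-of-Sight
    return True

# Toy example: two overlapping discs forming a non-convex "clump".
yy, xx = np.mgrid[0:60, 0:100]
mask = ((yy - 30) ** 2 + (xx - 32) ** 2 < 400) | ((yy - 30) ** 2 + (xx - 68) ** 2 < 400)

print(in_line_of_sight((30, 14), (30, 48), mask))   # True: both points lie on the left disc
print(in_line_of_sight((12, 32), (12, 68), mask))   # False: the line crosses the background
```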

Using this definition, we cluster surface points that are in line-of-sight. To ensure robust clustering we determine an initial guess of the number of clusters. For a number of k clusters, each cluster represents a first approximation of an individual cell nucleus. Subsequently, this rough estimation is optimized such that we obtain clearly separated cell nuclei.

Fig. 1 Densely packed and apparently touching cell nuclei challenge segmentation procedures. Illustration of segmentation challenges for a three-dimensional section of densely packed and apparently touching cell nuclei. (a) Section of a raw gray-scale image taken from a CD1 mouse embryo labelled with DAPI. (b) Cell nuclei after intensity thresholding. (c) Touching sets of nuclei are assigned identical colors, indicating that they are wrongly detected as one object

The method was implemented as a processing pipeline for three-dimensional gray level images. The raw images pass through four main stages: (1) local adaptive binarization, (2) collecting connected components, (3) determining the number of divisible parts and (4) decomposition of components into approximately convex parts with LoS.

The details of the algorithm are described in the next section and illustrated in Fig. 3. Thereby, all parameter values described in the following section were determined by a parameter scan. We chose one common parameter set that showed equally good performance of our algorithm for all test datasets.

Algorithm

Pre-processing

(1) Local adaptive binarization. Foreground and background are separated by local intensity-based thresholding performed per slice. At each pixel, Otsu's clustering method is applied to compute an intensity threshold within a defined neighborhood range of 18 pixels.



Fig. 3 Steps of LoS decomposition algorithm for three-dimensional cell nuclei segmentation. Flow chart of the complete LoS decomposition pipeline. Pre-processing is depicted in the left column. The actual LoS decomposition steps (middle column) are visualized based on a clump of three apparently touching cell nuclei (right column). Colors indicate associated components in the example images. Touching cell nuclei appear in transparent gray


Within this neighborhood, each pixel value above the local threshold is considered as foreground. This procedure returns an initial binary image, which might contain artefacts outside the region of interest (ROI). Therefore, a binary mask of the ROI is constructed by using a maximum filter with a large radius on the rough binarized raw image. Consequently, the ROI voxels are assigned a one and voxels outside the ROI a zero. Multiplication of the initial binary image with the ROI mask eliminates voxels outside the ROI. This yields an improved binary image. In a second step, holes within foreground objects are filled. Finally, a morphological opening removes objects that are smaller than a spherical structuring element of radius five.

The resulting binary image provides the basis both for determining the number of divisible parts and for the decomposition of apparently touching nuclei (a code sketch of steps (1)–(3) follows after step (3)).

(2) Collect connected components. Connected foreground components of the given binary image are extracted and stored together with their bounding box specifying their exact location in the image. Each extracted foreground component corresponds to a possible clump of apparently touching cell nuclei and is individually processed by the LoS algorithm.

(3) Determine the number of divisible parts. To determine the number of divisible parts for each connected component, we implemented an automatic detection method based on the Euclidean distance transformation. Thereby, the value of each voxel is replaced by its Euclidean distance to the closest background voxel. Consequently, the minimum distance from each voxel to the surface is returned.


We determine the number of divisible parts by detecting local maxima in the transformed image that exceed the values of adjacent voxels by at least 0.1. For a convex object the local maximum is equivalent to the centroid. Objects that touch each other exhibit dents at split sites. Therefore, for a cell nuclei clump we detect one local maximum for each approximately convex part. Hence, the number of local maxima gives an approximation of the number of nuclei in a clump. An object with exactly one maximum represents a single cell nucleus.
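The three pre-processing steps can be sketched in Python with NumPy, SciPy and scikit-image; the concrete calls (rank.otsu for the slice-wise local Otsu threshold, a maximum filter for the ROI mask, h_maxima for maxima that exceed their surroundings by at least 0.1) and the filter size chosen for the ROI mask are assumptions that mirror the description above, not the published Mathematica/ImageJ code.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import rank
from skimage.morphology import disk, ball, h_maxima
from skimage.util import img_as_ubyte


def local_adaptive_binarization(volume, radius=18, roi_filter_size=51, opening_radius=5):
    """Step (1): slice-wise local Otsu thresholding, ROI masking, hole filling, opening."""
    volume = img_as_ubyte(volume)                 # rank filters expect integer images
    binary = np.zeros(volume.shape, dtype=bool)
    for z in range(volume.shape[0]):              # local Otsu threshold per slice
        binary[z] = volume[z] > rank.otsu(volume[z], disk(radius))
    roi = ndi.maximum_filter(binary.astype(np.uint8), size=roi_filter_size) > 0
    binary &= roi                                 # discard artefacts outside the ROI
    binary = ndi.binary_fill_holes(binary)        # fill holes within foreground objects
    return ndi.binary_opening(binary, structure=ball(opening_radius))  # remove small debris


def collect_connected_components(binary):
    """Step (2): yield each connected foreground component with its bounding box."""
    labelled, n = ndi.label(binary)
    for lab, bbox in zip(range(1, n + 1), ndi.find_objects(labelled)):
        yield labelled[bbox] == lab, bbox


def number_of_divisible_parts(component, h=0.1):
    """Step (3): count maxima of the Euclidean distance transform, one per convex part."""
    padded = np.pad(component, 1)                 # ensure background surrounds the object
    dist = ndi.distance_transform_edt(padded)
    _, n_parts = ndi.label(h_maxima(dist, h))     # maxima exceeding their surroundings by >= h
    return n_parts
```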

LoS decomposition
The LoS algorithm is applied to each object that contains more than one local maximum and consists of less than 1000 voxels since both criteria give strong evidence of apparently touching cell nuclei. Locations of maxima are irrelevant; only their number is considered for LoS decomposition. Each candidate object passes through seven steps for decomposition:

(i) Define surface points. All foreground voxels that are directly adjacent to a background voxel are set to be the surface points of an object.

(ii) Define lines between pairs of surface points. Pairs are formed from the determined surface points. To ensure good accuracy as well as fast processing, we perform a sampling. Thereby, the sample size of pairs depends on the number of surface points that are collected. We define a lower bound of 1000 surface points and an upper bound of 100000 surface points. If the number of surface points is below 1000, 10 % of the surface points are sampled. Exceeding 100000 surface points decreases the sample size to 2 %. If the number of surface points is in between the lower and upper bound, 3 % of the surface points are considered. Lines between pairs of surface points are generated.

(iii) Select LoS. To determine whether a line between two surface points defines a Line-of-Sight, we check whether the line leaves the object. Points are sampled along each line at an interval of 0.1 % of its length and for each sample point we check whether this point is part of the foreground. If this holds for every sample point, the line is considered to not leave the inner volume of the object and thus defines a Line-of-Sight. All other lines are not LoS and are discarded (steps (i)–(iv) are sketched in code after step (vii)).

(iv) Cluster LoS. We perform a bottom-up hierarchical agglomerative clustering (HAC) of the LoS to subdivide them into k clusters, where k is given by the number of divisible parts determined in (3). The HAC algorithm starts with each Line-of-Sight in its own cluster. In each step, the similarity between two lines is computed with the chessboard distance and the nearest clusters are fused using Ward's method until k clusters are formed [37].

(v) Label surface points. Assuming the number of detected divisible parts for a nuclei clump is k, HAC outputs k clusters. LoS in the same cluster are assigned a common cluster label. LoS with different labels can originate from the same surface point. Therefore, the label of the surface point is set to the most common cluster label of the lines originating from that point. In case of an equal distribution of cluster labels, the cluster label is chosen randomly from the respective labels.

(vi) Optimize surface points labelling. To achieve a labelling of the surface points such that the points in the individual parts are geometrically connected, i.e. form a coherently labelled surface and therefore an individual object, an optimization of the previous surface points labelling is performed. Therefore, each surface point is assigned the most common surface point label of the eighteen nearest surface points.

(vii) Label voxels. Surface points labelling subdivides the surface of an object. To separate the whole object, we label the inner voxels according to the labelled surface points. For each foreground voxel the eighteen nearest surface points are determined. The voxel label is chosen as the most common label of these surface points. In case of an equal distribution of surface point labels, the label is determined randomly from the surface point labels found. The labelled voxels are then used to create a component matrix where each individual object obtains a unique identifier (a code sketch of the labelling steps (v)–(vii) follows after the next paragraph).
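Steps (i)–(iv) can be condensed into the following Python sketch for a single clump; the adaptive sampling rates follow step (ii), the 0.1 % sampling interval follows step (iii), and representing each LoS by its two endpoints before clustering is an assumption. Note that Ward linkage is formally defined for Euclidean distances, so combining it with the chessboard distance, as described in step (iv), is used here heuristically.

```python
import numpy as np
from scipy import ndimage as ndi
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist


def surface_points(component):
    """Step (i): foreground voxels adjacent to at least one background voxel."""
    return np.argwhere(component & ~ndi.binary_erosion(component))


def sample_pairs(points, rng):
    """Step (ii): sample point pairs at a rate depending on the number of surface points."""
    n = len(points)
    rate = 0.10 if n < 1000 else (0.02 if n > 100000 else 0.03)
    idx = rng.integers(0, n, size=(max(2, int(rate * n)), 2))
    idx = idx[idx[:, 0] != idx[:, 1]]                         # drop degenerate pairs
    return points[idx[:, 0]], points[idx[:, 1]]


def is_line_of_sight(p, q, component, step=0.001):
    """Step (iii): keep a line only if all points sampled along it stay in the foreground."""
    ts = np.arange(0.0, 1.0 + step, step)                     # samples every 0.1 % of the length
    pts = np.round(p + ts[:, None] * (q - p)).astype(int)
    return bool(component[tuple(pts.T)].all())


def cluster_los(component, k, seed=0):
    """Step (iv): hierarchical agglomerative clustering of the Lines-of-Sight into k groups."""
    p, q = sample_pairs(surface_points(component), np.random.default_rng(seed))
    keep = np.array([is_line_of_sight(a, b, component) for a, b in zip(p, q)])
    p, q = p[keep], q[keep]
    lines = np.hstack([p, q]).astype(float)                   # one LoS = its two endpoints (assumption)
    labels = fcluster(linkage(pdist(lines, metric="chebyshev"), method="ward"),
                      t=k, criterion="maxclust")
    return p, q, labels
```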

All previously described steps are performed for each nuclei clump. Subsequently, the already individual nuclei and the decomposed cell nuclei are reassembled and combined into a component matrix. Feature measurements such as the centroids and volumes of the objects are computed for each nucleus and stored in an Excel sheet.
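The labelling steps (v)–(vii) propagate the cluster labels from the lines to the surface points and then to all voxels by majority voting over the eighteen nearest surface points. The sketch below consumes the output of the previous sketch and uses SciPy's cKDTree for the nearest-neighbour queries; it resolves ties deterministically instead of randomly and assumes each clump has well over eighteen surface points.

```python
import numpy as np
from scipy.spatial import cKDTree


def label_voxels(component, p, q, line_labels, k_neighbours=18):
    """Steps (v)-(vii): LoS labels -> surface point labels -> labels for every voxel."""
    points = np.unique(np.vstack([p, q]), axis=0)
    tree = cKDTree(points)

    # (v) each surface point receives the most common label of the lines touching it
    votes = [[] for _ in range(len(points))]
    for a, b, lab in zip(p, q, line_labels):
        votes[tree.query(a)[1]].append(lab)
        votes[tree.query(b)[1]].append(lab)
    point_labels = np.array([np.bincount(v).argmax() if v else 0 for v in votes])

    # (vi) smooth the labelling: majority vote over the nearest surface points
    _, nn = tree.query(points, k=k_neighbours)
    point_labels = np.array([np.bincount(point_labels[row]).argmax() for row in nn])

    # (vii) every foreground voxel inherits the majority label of its nearest surface points
    voxels = np.argwhere(component)
    _, nn = tree.query(voxels, k=k_neighbours)
    out = np.zeros(component.shape, dtype=int)
    out[tuple(voxels.T)] = [np.bincount(point_labels[row]).argmax() for row in nn]
    return out
```

Chained with the previous sketch, label_voxels(component, *cluster_los(component, k)) would return a labelled sub-volume for one clump; reassembling all clumps then yields the component matrix described above.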

Implementation
The method is implemented in the Mathematica language (version 9) and uses the ImageJ plugin for automatic local thresholding [38]. The source code is freely downloadable from [39]. Data processing using the complete pipeline with the test datasets took 8916 s for the mouse embryo, 3408 s for the breast cancer spheroid and 26487 s for the pancreatic cancer spheroid on a 2.67 GHz 12-core Intel Xeon E5650 with 96 GB of RAM. We expect to continue improving the performance of the LoS pipeline.

Fig. 4 Cell nuclei in different biological specimens acquired with different imaging techniques. (a) Raw data showing single slices of three-dimensional stacks of the three test datasets. From top to bottom: CD1 mouse embryo labelled with DAPI (scale bar 10 μm). In total 235 two-dimensional slices. Recorded with a Zeiss confocal microscope (LSM 780, AxioObserver), objective lens EC Plan-Neofluar 40x/1.30 Oil Ph3. Cellular spheroid of human breast cancer cells (T47D) labelled with H2B:GFP (scale bar 10 μm). In total 296 two-dimensional slices. Recorded with a digital scanned laser light sheet-based fluorescence microscope (DSLM), objective lens Epiplan-Neofluar 10x/0.3 NA. Large cellular spheroid of BxPC3 human pancreatic cancer cells labelled with DRAQ5 (scale bar 20 μm). In total 465 two-dimensional slices. Recorded with a selective plane illumination microscope (SPIM), objective lens CZ 40x/0.8 NA water dipping. (b) Three-dimensional image reconstruction of the respective stacks of raw images

Results

Ethics statement
All mouse work was approved by the University of Bath Animal Welfare and Ethical Review Body (AWERB) and undertaken under UK Home Office license PPL 30/3219 in accordance with the Animals (Scientific Procedures) Act incorporating EU Directive 2010/63/EU. The human cell lines used in this study are listed in the American Type Culture Collection (ATCC) and therefore raise no ethical concerns (ATCC number for T47D: HTB-133 and for BxPC3: CRL-1687).

Real test image data and ground truth data
To validate our segmentation method, we quantified its performance with three-dimensional images of a mouse embryo recorded with a confocal microscope, a cellular spheroid of T47D human breast cancer cells recorded with a DSLM, and a large cellular spheroid of BxPC3 human pancreatic cancer cells recorded with a SPIM (Fig. 4). The mouse embryo is an example of a developmental system, whereas cellular spheroids are commonly used as tumour models. The images demonstrate various common challenges of cell nuclei segmentation: (1) varying intensities, (2) different types of cells of different sizes, (3) heterogeneous cell density and (4) an unpredictable degree of overlap. In addition, all images originate from different microscopes, which adds a level of technical variation. The file sizes of the three-dimensional stacks of images differ between 18.5 MB (512×512 pixels), 11.4 MB (285×284 pixels), and 203 MB (672×512 pixels) for the mouse embryo, the breast cancer spheroid, and the pancreatic spheroid, respectively. For each image, a ground truth (GT) dataset was generated by manual detection of the nuclei centroids.

Comparison with state-of-the-art methods
The segmentation accuracy was not only compared with ground truth data, but also with results from the state-of-the-art segmentation methods FARSight [40, 18], 3D Watershed in ImageJ [41, 42] and ilastik [43, 20]. These are commonly used, open-source segmentation tools [6]. Additionally, we tested several other methods described in the literature, e.g. the method presented in [22], a three-dimensional cell nuclei segmentation named "CellSegmentation3D" based on gradient flow tracking, and the software MINS [8]. All methods were supplied with the same raw images. A major drawback of many segmentation methods is their incapability to process huge datasets. MINS segmentation failed for the pancreatic spheroid, and "CellSegmentation3D" as well as the command-line tool for FARSight failed for all datasets, since they crashed during processing.

Due to this issue with FARSight's command-line tool, we had to revert to the corresponding graphical user interface "NucleusEditor". Furthermore, the segmentation tools do not support all image types. "NucleusEditor", e.g., did not operate with 16-bit images. Since the default parameter values of FARSight resulted in unsatisfactory segmentation results, we screened multiple sets of parameter values and chose the most reasonable one for each test image.


3D Watershed is a plugin for a seeded watershed within the 3D ImageJ suite. We used the automated seed detection implemented in this plugin. Further details are given in Additional file 1. Similar to FARSight, the parameter values had to be adjusted because the default values resulted in unsatisfactory segmentation results. For the machine learning-based segmentation tool ilastik, a training dataset had to be created for each test image. In contrast, the proposed LoS algorithm does not require a training dataset and uses the same parameter values for all test images.

Fig. 5 Comparison of LoS decomposition of the three test datasets with FARSight, 3D Watershed and ilastik. Results for (a) the mouse embryo dataset, (b) the breast cancer spheroid dataset, and (c) the pancreatic cancer spheroid dataset. In each case, the top row shows the maximum projection of the raw three-dimensional data and the segmentation output generated by LoS, FARSight, 3D Watershed and ilastik. Each segmented object is assigned a different color. The bottom row shows a two-dimensional section of the dataset (slice number for dataset (a): 123, (b): 146, and (c): 292) and the corresponding sections of the segmentations. Scale bars: (a) 10 μm, (b) 20 μm, (c) 20 μm

Accurate segmentation performance with LoS decomposition
Fig. 5 shows the segmentation results achieved by our LoS implementation, FARSight, 3D Watershed and ilastik in three dimensions and in a single two-dimensional slice. Additional file 2 (Figure S2) shows further details of segmentation results along XZ and YZ. Ilastik failed for all datasets. 3D Watershed and FARSight suffered from over-segmentation (Fig. 5a and c and Additional file 2: Figure S2C). Furthermore, 3D Watershed segmentation resulted in nearly straight borders between cell nuclei

Fig. 6 Results of LoS decomposition along XY and XZ. Visualization of one two-dimensional slice along XY through the three-dimensional stack of the pancreatic cancer spheroid after LoS decomposition. The smaller inset shows the separated cell nuclei in XY and the bigger inset shows the magnification of the result in XZ

(Fig. 5b and Additional file 2: Figure S2B). For all other methods the shapes of the splitting sites seem more natural. Note that cell nuclei that appear split incorrectly in the depicted two-dimensional slices are found to be correctly split if one considers the XZ direction (Fig. 6). Fig. 7 shows three-dimensional renderings of single clumps of apparently touching cell nuclei after decomposition with LoS. Clumps with nuclei of different sizes or shapes are similarly well split as clumps of homogeneously sized or shaped nuclei. Furthermore, small and large clumps are split equally well.

For a quantitative analysis of the segmentation results of LoS and the other methods, four well-established metrics were used [32, 44]:

Fig. 7 Examples of separated clumps of apparently touching cell nuclei. Each part within a cell nuclei clump is assigned a different color

Recall = TP / (TP + FN)

Precision = TP / (TP + FP)

F-measure = 2 · Precision · Recall / (Precision + Recall)

Accuracy = TP / (TP + FN + FP)

For each segmentation method, true positive (TP), false positive (FP) and false negative (FN) represent the number of correctly detected, falsely detected, and undetected nuclei relative to the GT [32].

To calculate these four metrics, we matched the centroids determined by the segmentation to centroids found in the GT. To do this, a three-dimensional spherical neighborhood of a range that corresponds to the average diameter of the respective cell nuclei was centered at each centroid position in the GT. If this local neighborhood included exactly one centroid in the segmentation, we counted it as a correct detection, thus a true positive (TP). If the neighborhood included more than one centroid, the closest one was considered as TP. These calculations resulted in a TP score for each three-dimensional test image. Based on the number of TP, scores for false positive (FP) and false negative (FN) were obtained using FP = nseg - TP and FN = ngt - TP, where nseg and ngt are the numbers of centroids determined by the segmentation and centroids in the GT for each dataset [32].
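The matching and the four metrics can be sketched as follows, assuming the centroids are given as N x 3 coordinate arrays and using a KD-tree for the spherical neighbourhood query; for brevity the sketch does not enforce a strict one-to-one assignment of matched centroids, which the described procedure implies.

```python
import numpy as np
from scipy.spatial import cKDTree


def match_and_score(gt_centroids, seg_centroids, radius):
    """Match segmentation centroids to ground-truth centroids within a spherical
    neighbourhood (radius ~ average nucleus diameter) and compute the four metrics."""
    tree = cKDTree(seg_centroids)
    tp = sum(1 for c in gt_centroids if tree.query_ball_point(c, r=radius))
    fp = len(seg_centroids) - tp                  # FP = n_seg - TP
    fn = len(gt_centroids) - tp                   # FN = n_gt - TP
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return {
        "TP": tp, "FP": fp, "FN": fn,
        "recall": recall,
        "precision": precision,
        "F-measure": 2 * precision * recall / (precision + recall),
        "accuracy": tp / (tp + fn + fp),
    }
```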

Based on these classifications, we measured precision, recall, accuracy, and F-measure of our LoS method and the state-of-the-art methods. Table 1 shows the performance of LoS, FARSight, 3D Watershed, and ilastik for all three test images. We observed that for recall all methods achieved comparable results. Hence, the number of detected nuclei for each method was similar to the number of cell nuclei in the GT. Thus, there was not much under-segmentation. However, compared to the

Table 1 Algorithmic performance of LoS, FARSight, 3D Watershed and ilastik

Dataset                     # cells GT   Algorithm      # cells Seg   Match   Recall   Precision   Accuracy   F-measure
Mouse embryo                61           LoS            59            58      0.95     0.98        0.94       0.97
Mouse embryo                61           FARSight       62            60      0.98     0.97        0.95       0.98
Mouse embryo                61           3D Watershed   299           61      1.00     0.20        0.20       0.34
Mouse embryo                61           ilastik        274           61      1.00     0.22        0.22       0.36
Breast cancer spheroid      240          LoS            247           216     0.90     0.87        0.80       0.89
Breast cancer spheroid      240          FARSight       338           236     0.98     0.70        0.69       0.82
Breast cancer spheroid      240          3D Watershed   288           220     0.92     0.76        0.71       0.83
Breast cancer spheroid      240          ilastik        112           74      0.31     0.66        0.27       0.42
Pancreatic cancer spheroid  531          LoS            690           523     0.98     0.76        0.75       0.86
Pancreatic cancer spheroid  531          FARSight       997           524     0.99     0.53        0.52       0.69
Pancreatic cancer spheroid  531          3D Watershed   734           518     0.98     0.70        0.69       0.81
Pancreatic cancer spheroid  531          ilastik        19744         531     1.00     0.03        0.03       0.05

Performance was measured against manually segmented ground truth for the three different test datasets. "# cells GT", "# cells Seg" and "Match" list the number of cells that were determined manually in the ground truth, segmented by the different algorithms, and matched, respectively. The segmentation performance is given in terms of the metrics "Recall", "Precision", "Accuracy" and "F-measure". Thereby, values range from 0 (worst performance) to 1 (best performance). For the LoS algorithm the same set of parameter values was used for all test images. For 3D Watershed and FARSight different parameter sets had to be used. These were determined by parameter scanning


LoS approach, all other methods exhibit lower precision. Specifically, we observed that single cell nuclei are mistakenly split into multiple parts, increasing the over-segmentation rate. The performance of the LoS algorithm was consistently accurate for the different datasets. In the worst case, we obtained an F-measure of 86 %, whereas the lowest F-measure for the other methods was 69 %, 34 %, and 5 % for FARSight, 3D Watershed, and ilastik, respectively (see Table 1).

These results clearly demonstrate that our proposed segmentation method is capable of achieving a consistent segmentation performance for diverse datasets compared to an ideal ground truth. Additionally, no adjustments of parameter values are required, thus making the method robust for these datasets and straightforward to use.

To investigate the contribution of our pre-processing steps and the LoS decomposition to the performance of the whole LoS pipeline, we performed a more detailed comparison with 3D Watershed. We considered two additional cases for 3D Watershed: (1) we used the binary image generated by the LoS pipeline and applied the distance transformation implemented in ImageJ and (2) we supplied the binary image and the seeds both generated by the LoS pipeline. The comparison showed that the nuclear decomposition capabilities are similar (see Additional file 1: Table S1). Using either the binary image generated by the LoS pipeline as input or the local maxima from our detection of divisible parts as seeds led to an improvement of the performance of 3D Watershed. Although MINS crashed for the pancreatic spheroid, we analyzed its performance for the two other datasets. The results are summarized in Additional file 3: Table S2, which shows similar performance for LoS and MINS.

Reliable nuclear volume representation with LoS decomposition
The volume of an individual nucleus is an important feature for the quantitative analysis of various processes. For example, in drug screening assays the efficacy of a drug is expected to correspond to a decrease in nuclear volume. Hence, a reliable representation of nuclear volumes after segmentation is essential. Therefore, we measured the nuclear volumes obtained by each segmentation method and compared them with a ground truth (GT) volume. The GT volume was manually determined based on the mean volume of a representative 10 % of the cell nuclei from different regions of the respective test datasets (mouse embryo: 391 ± 94 μm3 (n = 6), breast cancer spheroid: 1260 ± 426 μm3 (n = 20), pancreatic cancer spheroid: 271 ± 102 μm3 (n = 50)). We found that for the mouse embryo, the mean nuclear volume achieved by FARSight segmentation was closer to the mean GT volume than the mean nuclear volume after LoS decomposition (Fig. 8a). However, after the LoS approach the nuclear volumes were more consistent with the ground truth than the result from FARSight, as indicated by a smaller standard deviation (Fig. 9a; LoS: 257 ± 86 μm3 (n = 59), FARSight: 439 ± 180 μm3 (n = 62)). We observed that, compared to the standard deviation of the GT, the standard deviation of LoS was increased by 20 %. The other methods showed an increase in the standard deviation by 54 % for FARSight, 189 % for 3D Watershed, and 8118 % for ilastik.

For the spheroids, the nuclear volumes from our LoS algorithm reproduced the true nuclear volumes better than the other three methods (Figs. 8b and c, 9b and c). FARSight and 3D Watershed produced enlarged nuclei compared to the GT.


Fig. 8 Visual comparison of nuclei volumes measured in segmentation results of LoS, FARSight, 3D Watershed and ilastik. (a) Mouse embryo, (b) breast cancer spheroid and (c) pancreatic cancer spheroid. Spheres were plotted around each nucleus' centroid position detected in the respective ground truth or by the respective segmentation method. The radius of each sphere in the ground truth represents the manually determined mean nuclei volume for a representative subset of nuclei. The spheres for the segmentation outputs represent the automatically determined volume for each detected nucleus. Coloring scheme: for each dataset, a minimum (purple) and an approximate maximum volume (red) across all results for LoS, FARSight, 3D Watershed and ilastik was determined (minimum volume for coloring: 1 μm3 for (a), (b) and (c); maximum volume for coloring: 1600 μm3 for (a), 4200 μm3 for (b) and 1000 μm3 for (c), respectively). Note: for better visualization, extreme outliers from the ilastik segmentation results were removed for determining the maximum volume (one for (a), one for (b) and three for (c)). Subsequently, each sphere was assigned a color depending on its volume

Fig. 9 Quantitative comparison of nuclei volumes measured in segmentation results of LoS, FARSight, 3D Watershed and ilastik. (a) Mouse embryo, (b) breast cancer spheroid and (c) pancreatic cancer spheroid. Data is visualized as box-and-whisker plots: the white lines within the boxes describe the respective mean, the error bars show variability outside the upper and lower quartiles and the dots represent outliers. The gray vertical line describes the mean of the respective ground truth nuclear volume that was manually determined for a subset of nuclei in all test datasets. Note: for better visualization, extreme outliers from the segmentation results of ilastik were removed (one for (a), one for (b) and three for (c))

Ilastik failed to reproduce the volumes for all datasets due to its lacking capability of separating apparently touching cell nuclei. To show that our LoS algorithm yields a significant improvement over the other methods, a Wilcoxon rank sum test with Holm's correction was performed. Thereby, the nuclear volumes achieved by LoS were always significantly different from those determined by all other methods (p-value < 0.01).
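A sketch of this statistical comparison, assuming per-nucleus volume arrays for each method, could use SciPy's Wilcoxon rank-sum test and apply Holm's step-down correction to the resulting p-values; the function and variable names below are illustrative.

```python
import numpy as np
from scipy.stats import ranksums


def compare_volume_distributions(los_volumes, other_methods):
    """Wilcoxon rank-sum test of LoS nuclear volumes against each other method,
    with Holm's correction applied to the family of p-values."""
    names = list(other_methods)
    pvals = np.array([ranksums(los_volumes, other_methods[name]).pvalue for name in names])
    order = np.argsort(pvals)                    # Holm: sort ascending, step-down adjustment
    adjusted, running_max = np.empty_like(pvals), 0.0
    for position, i in enumerate(order):
        running_max = max(running_max, (len(pvals) - position) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return dict(zip(names, adjusted))

# Hypothetical usage with per-nucleus volume arrays for each segmentation result:
# compare_volume_distributions(los_vols, {"FARSight": fs_vols, "3D Watershed": ws_vols, "ilastik": il_vols})
```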

In summary, our results show that the nuclear volumes obtained by the LoS method are more consistent with the ground truths than those of all other tested methods. Additionally, fewer outliers indicate the robustness of the method. These results demonstrate that LoS decomposition yields nuclear volumes that resemble the actual volumes very closely.

Discussion
We developed a segmentation pipeline for cell nuclei in three-dimensional fluorescence images. The algorithm consists of local adaptive pre-processing followed by decomposition into approximately convex objects. Our detailed analysis revealed the contributions of the two parts to the robustness of the algorithm with respect to differences in image properties like the biological specimen, imaging technique, signal quality, intensity distribution and size of the cell nuclei. These major parts are discussed in the following sections.

Pre-processing
The key steps in our pre-processing are a local adaptive binarization and the determination of the number of divisible parts.

During binarization, many regions are difficult to classify as foreground or background in images that exhibit background noise or differences in contrast and intensity. For such cases, local thresholding is more appropriate and robust than global thresholding. Applying Otsu's clustering method locally yielded fewer clumped nuclei than global thresholding. Additionally, for the chosen binarization method only a single parameter value had to be adjusted: the radius that defines the neighborhood during local Otsu thresholding. A value for the radius that is too high can increase the number of apparently touching nuclei and decrease the total number of extracted cell nuclei. We found that a value that corresponds to the approximate radius of a cell nucleus yielded the best results. It is an intuitive feature of the biological specimen. In contrast, pre-processing implemented in e.g. 3D Watershed involves non-intuitive adjustments of parameter values like an intensity threshold for the image (see Additional file 1: Figure S1). Defining intensity thresholds is typically image dependent and therefore sensitive to the signal-to-noise ratio of the image [22].

Such parameters need to be carefully set to avoid poor segmentation results [45]. Besides the easy and intuitive parameterization for this step, our approach is insensitive to non-uniform intensity distributions as well as to the decreasing signal quality with increasing penetration depth into the specimen (Fig. 4). Differences in terms of image origin and image quality are thereby compensated. Summarizing the above, this part of the pre-processing is robust if cell nuclei sizes do not differ excessively. Thus, we obtained good results for all our test datasets. Further improvements are possible. Changing the parameter value for the local neighborhood radius, e.g. to 30 voxels for the pancreatic cancer spheroid, yields a segmentation improvement of about 7 % (F-measure 93 %) compared to the result shown in Table 1.

The determination of the number of divisible parts of an object can be responsible for over- and under-segmentation. A small value for the local maxima detection in the distance transformed image increases the number of local maxima and hence the number of potential cell nuclei. We found that our value gives an accurate and robust initial guess of the number of divisible parts. In our algorithm, the number of divisible parts determines the number of LoS clusters. However, it is only an upper threshold for the number of segmented cell nuclei due to the steps that follow the LoS clustering. These ensure an optimized number of cell nuclei in the final output. If the cell nuclei in the biological specimen exhibit a homogeneous size distribution, an improvement of the detection of divisible parts may be possible by adding an optimization that considers reasonable size descriptors. In this case, huge cell nuclei clumps for which not enough divisible parts are detected would be split into parts of an appropriate size.

Consequently, our implementation of the intuitively parameterizable binarization and the calculation of the number of divisible parts, with the potential of a subsequent optimization, contribute to the robustness of our approach.

LoS decomposition
The fundamental difference between our decomposition method and existing methods is the application of lines that define approximately convex parts. The robustness is achieved because LoS are easily accessible features of cell nuclei clumps that do not require previous knowledge about object shape, size or intensity. Hence, a consistently good performance of the decomposition of cell nuclei for all datasets was obtained with the same set of parameter values.

Clumps containing nuclei of different sizes or shapes are equally well split as clumps consisting of equally sized or shaped parts (Fig. 7). This is especially advantageous in the case of homeostatic tissues where the stem cell nuclei are in general smaller than the differentiated cell nuclei.


Furthermore, for separating complex clumps of many nuclei our LoS algorithm exhibits a better performance than e.g. 3D Watershed, ilastik or the methods used in [46] or [25] (Fig. 7, rightmost).

One reason for the robustness is that the decomposition is performed directly on the three-dimensional objects rather than applying it to each two-dimensional slice independently as in [47] or [19]. Incorporating the three-dimensional information provides a more accurate decomposition of apparently touching cell nuclei. Had we considered the slice in Fig. 6 independently, the object that is depicted in the smaller inset would not have been separated since it is already approximately convex. However, the view along the XZ direction in the larger inset clearly shows that this object had to be split. Although we focus here on three spatial dimensions, the method works well for two-dimensional images and has the potential of being extended to three-dimensional time-lapse images.

If the split site of an object is pronounced, a reliable LoS clustering is guaranteed. Clumps with small concavities (i.e. wide split sites) are challenging. In this case, more LoS can pass through the split site and cause mis-clustering of the lines. To overcome this effect, we have introduced a parameter for the sampling size of LoS and the steps "label surface points" and "optimize surface points labelling" that follow the LoS clustering. The number of sampled LoS determines the spacing between the lines within an object. Lines with a small distance between them are clustered together with a higher probability. The more lines are sampled, the more pass through the wide split site and thus are clustered together. Consequently, surface points of one part of the object are hit more often by LoS with different labels. This can lead to a wrongly labelled surface point. If this happens for too many surface points, it is hard to eliminate them even with the steps following the clustering. However, too few lines would not cover the object and therefore would not be sufficient for an accurate clustering. Therefore, we introduced a LoS sample size that changes if the number of surface points drops below a lower threshold or exceeds an upper threshold. Nonetheless, the algorithm is less sensitive to the values for the lower and upper threshold than to the LoS sample size. For all three parameters, we chose values that were equally applicable to the various test images.

In general, the surface points labelling works well since a surface point is hit more often by LoS with the correct label. Optimization of the surface points labelling eliminates the few remaining mis-labelled surface points that are surrounded by correctly labelled surface points.

To sum up, the surface points labelling and the optimization following the LoS clustering improve the decomposition performance.

The sample size of the lines is one key parameter for the LoS decomposition which can affect the segmentation output. This value can be adapted to further improve the results. In general, we found that our value enables a robust decomposition for all our test images.

Complete LoS pipeline
Applying the entire pipeline as depicted in Fig. 3 to the different test datasets revealed a consistently robust performance with the same set of parameter values. In contrast, the other methods were supplied with adjusted parameter values for each image. Although each applied method failed for at least one dataset, FARSight and LoS achieved similar results for the mouse embryo (see Table 1). Nevertheless, the cell nuclei volume measurements of our LoS algorithm are more consistent with the manually obtained volumes than those determined by FARSight. This shows that the segmentation with FARSight worked accurately for the mouse embryo but produced a cell nuclei volume distribution that did not reflect the ground truth volume.

For 3D Watershed, pre-processing can be decoupled from the cell nuclei decomposition step. The improved results of watershed segmentation with parts of our pre-processing reveal that our pre-processing steps are more robust for our test datasets than those implemented in the 3D Watershed plugin (see Table S1 in Additional file 1). Moreover, for 3D Watershed the number of seeds and the number of cell nuclei in the output are exactly the same. This requires a more accurate seed detection than is needed for our LoS approach. Our end result does not only rely on the initial number of divisible parts. The additional optimization steps after LoS clustering decouple the final result from the initial calculation, and thereby contribute to the robustness and accuracy of our algorithm.

In our pipeline, we discarded segmented cell nuclei below a defined volume threshold, a post-processing step that is applicable to a wide range of biological specimens. However, our segmentation method extracts further properties such as the centroids or the fluorescence intensity of the cell nuclei. If only a specific biological specimen is considered, this information can be used for more sophisticated post-processing steps.
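As a simple illustration of this post-processing, the following sketch removes labelled objects whose voxel count corresponds to less than a given volume; the threshold and the voxel volume are placeholders to be set per dataset.

```python
import numpy as np


def discard_small_nuclei(labels, min_volume_um3, voxel_volume_um3):
    """Set segmented objects smaller than min_volume_um3 to background (label 0)."""
    min_voxels = int(round(min_volume_um3 / voxel_volume_um3))
    counts = np.bincount(labels.ravel())
    too_small = np.flatnonzero(counts < min_voxels)
    too_small = too_small[too_small != 0]          # never discard the background label
    cleaned = labels.copy()
    cleaned[np.isin(cleaned, too_small)] = 0
    return cleaned
```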

Conclusion
The automated, robust and accurate detection of a single cell nucleus is a crucial and challenging step for a precise quantification in many biological applications. Up to now, several three-dimensional cell nuclei segmentation methods exist, but in most cases they need further adjustments to the considered image data and biological specimen.


We introduce a three-dimensional segmentation method that incorporates separating apparently touching cell nuclei. It combines local adaptive pre-processing with a Lines-of-Sight approach for approximately convex decomposition. Our method is readily applicable to three-dimensional microscopy images of different biological specimens acquired by diverse imaging techniques.

The segmentation quality is largely independent of cell type, size, intensity, cell density, and image quality. A comparison with state-of-the-art algorithms revealed an overall better performance of our method in terms of identification of individual cell nuclei and extraction of relevant features such as the nuclear volume.

Three-dimensional cell nuclei segmentation with our LoS method provides the crucial starting point for many applications. Cellular spheroids, for example, serve as in vitro tissue models, especially in tumour biology. They show an improved resemblance of the spatial arrangements compared to two-dimensional cell cultures [48]. Therefore, they have become increasingly important for drug testing assays. Volume changes of cell nuclei within such spheroids upon drug treatment might provide insights into drug efficacy. Developmental systems such as mouse embryos serve as a good model for exploring the coordination of cell lineage specification and morphogenesis [49]. Thereby, a reliable identification of cell nuclei positions facilitates accurate tracking and lineaging of cell nuclei during different processes and hence opens up a variety of possibilities for investigating cellular dynamics. In general, the wide applicability of our proposed segmentation method enables a reliable and robust quantification of cell nuclei in fluorescence images that will contribute to our knowledge in many biological applications.

Additional files

Additional file 1: 3D Watershed with different pre-processing steps.Comparison of the performance of our LoS algorithm with 3D Watershedcombined with different pre-processing steps.

Additional file 2: Figure S2. Visualization of different segmentationresults along the XZ- and YZ-direction. Results for (A) the mouse embryodataset, (B) the breast cancer spheroid dataset, and (C) the pancreaticcancer spheroid dataset. In each case, the top row shows two-dimensionalsections of the raw image and the segmentation results along XZ direction.The bottom row shows two-dimensional sections of the raw image and thesegmentation results along YZ direction. For each dataset, the methods withthe best and second best F-measure are compared. Scale bars: (A) 10 μm, (B)20 μm, (C) 20 μm.

Additional file 3: Comparison of LoS with MINS. Segmentationperformance of LoS and MINS.

Competing interestsThe authors declared that they have no competing interests.

Authors’ contributionsBM, SCF and EHKS designed and developed the algorithm. BM implementedthe algorithm and analyzed the data. SMD, NA and FP performed theimaging experiments. AS implemented the tool for generating the groundtruth data. BM drafted the manuscript and SCF, EHKS, AS, SMD and NAcritically revised the paper. All authors read and approved the finalmanuscript.

Acknowledgements
Research in the Stelzer lab is supported by the Deutsche Forschungsgemeinschaft (DFG, EXC-115). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author details
1Buchmann Institute for Molecular Life Sciences (BMLS), Fachbereich Biowissenschaften (FB15, IZN), Goethe Universität Frankfurt am Main, Max-von-Laue-Straße 15, 60438 Frankfurt am Main, Germany. 2Department of Biology and Biochemistry, University of Bath, Bath BA2 7AY, UK.

Received: 17 December 2014 Accepted: 18 May 2015

References
1. Bissell MJ, Hines WC. Why don’t we get more cancer? A proposed role of the microenvironment in restraining cancer progression. Nat Med. 2011;17:320–9.

2. Harjanto D, Zaman MH. Matrix mechanics and receptor-ligand interactions in cell adhesion. Org Biomol Chem. 2010;8:299–304.

3. Sutherland RM. Cell and environment interactions in tumor microregions: the multicell spheroid model. Science. 1988;240:177–84.

4. Day RN, Schaufele F. Imaging molecular interactions in living cells. Mol Endocrinol. 2005;19:1675–86.

5. Ponomarev V. Nuclear imaging of cancer cell therapies. J Nucl Med. 2009;50:1013–6.

6. Eliceiri KW, Berthold MR, Goldberg IG, Ibáñez L, Manjunath BS, Martone ME,et al. Biological imaging software tools. Nat Methods. 2012;9:697–710.

7. Pawley J. Handbook of Biological Confocal Microscopy. 3rd ed. New York: Springer Science+Business Media, LLC; 2006. p. 988.

8. Lou X, Kang M, Xenopoulos P, Muñoz-Descalzo S, Hadjantonakis A-K. A rapid and efficient 2D/3D nuclear segmentation method for analysis of early mouse embryo and stem cell image data. Stem Cell Reports. 2014;2:382–97.

9. Huisken J, Swoger J, Del Bene F, Wittbrodt J, Stelzer EHK. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science. 2004;305:1007–9.

10. Greger K, Swoger J, Stelzer EHK. Basic building units and properties of a fluorescence single plane illumination microscope. Rev Sci Instrum. 2007;78:23705.

11. Stelzer EHK. Light-sheet fluorescence microscopy for quantitative biology. Nat Methods. 2014;12:23–6.

12. Pampaloni F, Richa R, Ansari N, Stelzer EHK. Advanced Fluorescence Microscopy, Methods and Protocols. New York: Springer Science+Business Media, LLC; 2015. p. 43–57.

13. Strobl F, Stelzer EHK. Non-invasive long-term fluorescence live imaging of Tribolium castaneum embryos. Development. 2014;141:2331–8.

14. Keller PJ, Stelzer EHK. Quantitative in vivo imaging of entire embryos with digital scanned laser light sheet fluorescence microscopy. Curr Opin Neurobiol. 2008;18:624–32.

15. Malpica N, de Solórzano CO, Vaquero JJ, Santos A, Vallcorba I, García-Sagredo JM, et al. Applying watershed algorithms to the segmentation of clustered nuclei. Cytometry. 1997;28:289–97.

16. Gniadek TJ, Warren G. WatershedCounting3D: a new method for segmenting and counting punctate structures from confocal image data. Traffic. 2007;8:339–46.

17. Wählby C, Bengtsson E. Segmentation of cell nuclei in tissue by combining seeded watersheds with gradient information. Berlin Heidelberg: Springer; 2003. p. 408–14.

18. Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans Biomed Eng. 2010;57:841–52.

19. Qi J. Dense nuclei segmentation based on graph cut and convexity-concavity analysis. J Microsc. 2014;253:42–53.

20. Sommer C, Straehle C, Kothe U, Hamprecht FA. Ilastik: Interactive learning and segmentation toolkit. IEEE Int Symp Biomed Imaging From Nano to Macro. 2011;2011:230–3.

21. Chen C, Wang W, Ozolek JA, Lages N, Altschuler SJ, Wu LF, Rohde GK. A template matching approach for segmenting microscopy images. In International Symposium on Biomedical Imaging; 2012:768–71.

22. Li G, Liu T, Tarokh A, Nie J, Guo L, Mara A, et al. 3D cell nuclei segmentation based on gradient flow tracking. BMC Cell Biol. 2007;8:40.


23. Delgado-Gonzalo R, Chenouard N, Unser M. Spline-based deforming ellipsoids for interactive 3D bioimage segmentation. IEEE Trans Image Process. 2013;22:3926–40.

24. Harder N, Bodnar M, Eils R, Spector DL, Rohr K. 3D segmentation and quantification of mouse embryonic stem cells in fluorescence microscopy images. In: Bildverarbeitung für die Medizin. Berlin Heidelberg: Springer; 2011. p. 34–8. Informatik Aktuell.

25. Yu D, Pham TD, Zhou X. Analysis and recognition of touching cell images based on morphological structures. Comput Biol Med. 2009;39:27–39.

26. Zhong Q, Zhou P, Yao Q, Mao K. A novel segmentation algorithm for clustered slender-particles. Comput Electron Agric. 2009;69:118–27.

27. Chen L, Chan LLH, Zhao Z, Yan H. A novel cell nuclei segmentation method for 3D C. elegans embryonic time-lapse images. BMC Bioinformatics. 2013;14:328.

28. Mikula K, Remesiková M, Stasová O, Peyriéras N. Segmentation and analysis of 3D zebrafish cell image data. In International Congress on Image and Signal Processing; 2010:1444–48.

29. Keller PJ, Schmidt AD, Wittbrodt J, Stelzer EHK. Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science. 2008;322:1065–9.

30. Indhumathi C, Cai YY, Guan YQ, Opas M. An automatic segmentation algorithm for 3D cell cluster splitting using volumetric confocal images. J Microsc. 2011;243:60–76.

31. Zanella C, Campana M, Rizzi B, Melani C, Sanguinetti G, Bourgine P, et al. Cells segmentation from 3-D confocal images of early zebrafish embryogenesis. IEEE Trans Image Process. 2009;19:770–81.

32. Bashar MK, Yamagata K, Kobayashi TJ. Improved and robust detection of cell nuclei from four dimensional fluorescence images. PLoS One. 2014;9:e101891.

33. Bilgin CC, Kim S, Leung E, Chang H, Parvin B. Integrated profiling of three dimensional cell culture models and 3D microscopy. Bioinformatics. 2013;29:3087–93.

34. Lien J, Amato NM. Approximate convex decomposition of polyhedra. In ACM Symposium on Solid and Physical Modeling; 2006:121–31.

35. Attene M, Mortara M, Spagnuolo M, Falcidieno B. Hierarchical convex approximation of 3D shapes for fast region selection. Comput Graph Forum. 2008;27:1323–32.

36. Asafi S, Goren A, Cohen-Or D. Weak Convex Decomposition by Lines-of-Sight. Comput Graph Forum. 2013;32:23–31.

37. Ward JJH. Hierarchical Grouping to Optimize an Objective Function. J Am Stat Assoc. 1963;58:236–44.

38. Plugin for auto local threshold (ImageJ). [http://fiji.sc/wiki/index.php/Auto_Local_Threshold]

39. Physikalische Biologie. [https://www.physikalischebiologie.de/downloads]

40. FARSight. [http://www.farsight-toolkit.org]

41. 3D ImageJ Suite. [http://imagejdocu.tudor.lu/doku.php?id=plugin:stacks:3d_ij_suite:start]

42. Ollion J, Cochennec J, Loll F, Escudé C, Boudier T. TANGO: A generic tool for high-throughput 3D image analysis for studying nuclear organization. Bioinformatics. 2013;29:1840–1.

43. ilastik. [http://ilastik.org/]

44. Song Y, Cai W, Huang H, Wang Y, Feng DD, Chen M. Region-based progressive localization of cell nuclei in microscopic images with data adaptive modeling. BMC Bioinformatics. 2013;14:173.

45. Meijering E, Dzyubachyk O, Smal I. Methods for cell and particle tracking. Methods Enzymol. 2012;504:183–200.

46. Bai X, Sun C, Zhou F. Splitting touching cells based on concave points and ellipse fitting. Pattern Recognit. 2009;42:2434–46.

47. Latorre A, Alonso-Nanclares L, Muelas S, Peña J-M, Defelipe J. 3D segmentations of neuronal nuclei from confocal microscope image stacks. Front Neuroanat. 2013;7:49.

48. Pampaloni F, Reynaud EG, Stelzer EHK. The third dimension bridges the gap between cell culture and live tissue. Nat Rev Mol Cell Biol. 2007;8:839–45.

49. Schrode N, Xenopoulos P, Piliszek A, Frankenberg S, Plusa B, Hadjantonakis A-K. Anatomy of a blastocyst: cell behaviors driving cell fate choice and morphogenesis in the early mouse embryo. Genesis. 2013;51:219–33.
