

CONTEXT DRIVEN LABEL FUSION FOR SEGMENTATION OF SUBCUTANEOUS AND VISCERAL FAT IN CT VOLUMES

Sarfaraz Hussein¹, Aileen Green², Arjun Watane¹, Georgios Papadakis³, Medhat Osman², Ulas Bagci¹

¹Center for Research in Computer Vision (CRCV), University of Central Florida (UCF)
²Department of Radiology, St. Louis University, St. Louis, MO

³Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD

ABSTRACT

Quantification of adipose tissue (fat) from computed tomography (CT) scans is conducted mostly through manual or semi-automated image segmentation algorithms with limited efficacy. In this work, we propose a completely unsupervised and automatic method to identify adipose tissue, and then separate Subcutaneous Adipose Tissue (SAT) from Visceral Adipose Tissue (VAT) in the abdominal region. We offer a three-phase pipeline consisting of (1) initial boundary estimation using gradient points, (2) boundary refinement using Geometric Median Absolute Deviation and Appearance based Local Outlier Scores, and (3) context driven label fusion using Conditional Random Fields (CRF) to obtain the final boundary between SAT and VAT. We evaluate the proposed method on 151 abdominal CT scans and obtain state-of-the-art Dice similarity scores of 94% and 91% for SAT and VAT segmentation, as well as a significant reduction in the fat quantification error measure.

Index Terms— Fat Segmentation, CT, Conditional Random Fields, Subcutaneous fat, Visceral fat

1. INTRODUCTION

Quantification of adipose tissue (i.e., fat) and its subtypes is an important task in many clinical applications such as obesity, cardiac, and diabetes research. Among them, obesity is one of the most prevalent health conditions in recent years [1]. In the United States, about one-third of the adult population is obese [2], causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Traditionally, Body Mass Index (BMI) has been used as a measure of obesity; however, it remains inconsistent across subjects, especially for underweight and obese individuals. Instead, volumetry of visceral adipose tissue from CT volumes is considered a reliable, accurate, and consistent measure of body fat distribution and its extent. However, current radiological quantification methods are insufficient, often requiring manual interaction at several anatomical locations, leading to inefficient and inaccurate quantification.

Fig. 1: Dotted red lines show the abdominal region boundaries on the left. A thin discontinuous muscle wall (dotted blue curve) separating VAT and SAT is shown on the right.

Compared to SAT, the body composition phenotype due to VAT is associated with medical disorders such as coronary heart disease and several malignancies, including prostate, breast, and colorectal cancers. Hence, quantification of visceral/abdominal obesity is vital for precise diagnosis and timely treatment. However, separation of VAT from SAT is not trivial because both SAT and VAT regions share similar intensities in CT and are vastly connected (see Figure 1). Radiologists often rely on different morphological and filtering operators to segregate these two fat types in routine evaluations, but the size of the structuring element and the neighborhood area for filtering are subjective and require excessive manual tuning.

In this work, we propose a novel method to identify, segment, and quantify SAT and VAT automatically. Our contributions in this paper are the following. (1) Our proposed automated method is completely unsupervised, and we estimate the SAT-VAT separation boundary using both appearance and geometric cues. (2) The contextual information captured by our method handles inter-subject variations well compared to other methods. To the best of our knowledge, this is the largest VAT and SAT segmentation study to date (over 150 CT scans) validating an automated fat segmentation and quantification method.


Fig. 2: Workflow of the proposed method. Upper: pre-processing step. Bottom: two-stage outlier removal using Geometric MAD and Appearance LoOS, followed by context driven label fusion via CRF.

Related Work: Body fat quantification has long been an active area of research for medical imaging scientists. For abdominal fat quantification, Zhao et al. [3] used intensity profiles along the radii connecting sparse points on the outer wall (skin boundary), starting from the abdominal body center. The boundary contour is then refined by a smoothness constraint to separate VAT from SAT. This method, however, does not adapt easily to obese patients, where the neighboring subcutaneous and/or visceral fat cavities lead to leakage in segmentation. In another study, Romero et al. [4] developed a semi-supervised method generating skin-boundary, abdominal (visceral) wall, and subcutaneous fat masks. In a similar fashion, the method in [5] is based on a hierarchical fuzzy affinity function for semi-supervised segmentation; its success was limited when patient-specific quantification was considered. In the more advanced method of [6], SAT, VAT, and muscle are separated using a joint shape and appearance model. Mensink et al. [7] proposed a series of morphological operations, but fine tuning of the algorithm was a necessary step for patient-specific quantification, and this step had to be repeated for almost every patient when the abdominal wall was too thin. More recently, Kim et al. [8] generated a subcutaneous fat mask using a modified "AND" operation on four differently directed masks, with some success shown. However, logical and morphological operations make the whole quantification system vulnerable to inefficiencies.

In comparison to all these methods, our proposed framework is robust to patient-specific variations since we use both appearance and geometric information to generate the muscle boundary. Moreover, our proposed label fusion method using conditional random fields (CRF) helps capture the contextual and pairwise similarity between image attributes, leading to significantly better segmentation of SAT and VAT.

2. METHODS

The motivation for the proposed approach is based on the observation that in CT volumes, SAT and VAT are separated by a thin layer of non-fat tissue (Figure 1). Our proposed algorithm includes a pre-processing task and a three-step segmentation framework, as illustrated in Figure 2. Since the Hounsfield Unit (HU) interval for certain tissues in a CT image is fixed, it is straightforward to identify whole-body fat regions from CT scans using a thresholding interval on the HU space. In the pre-processing step, we apply an identical set of morphological operations to remove noise and normalize the imaging data. Afterwards, in the first step of the core segmentation method, we identify the initial boundaries for the VAT and SAT regions using a sparse search conducted over a line oriented towards the center of the abdominal cavity, starting from the skin boundary. In the second step, we propose to use the median absolute deviation (MAD) and local outlier scores (LoOS) in order to remove false positive boundary locations and thus refine the VAT and SAT separation. In the final step, we utilize a label fusion framework using a CRF energy minimization technique combining shape, anatomy, and appearance information.

2.1. Pre-processing of CT Images

The input to our pipeline is an abdominal CT volume. We begin by thresholding the CT volumes between -190 and -30 HU, corresponding to fat tissue [9]. To simplify the thresholded CT volume and standardize the imaging data with denoising, we apply a series of morphological operations and median filtering. Specifically, we perform morphological closing on the input image using a disk with a fixed radius of 10. The resulting image is then filtered using median filtering in a 3x3 neighborhood. As the CT volume is affected by noise in the form of holes and intensity peaks, the pre-processing pipeline in our framework is meant to make the volume smooth for the next phase.
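For concreteness, this pre-processing step can be sketched as follows with scikit-image and SciPy. The function name and array conventions are ours, and the sketch only illustrates the thresholding, closing, and median-filtering operations described above (the original implementation is in MATLAB).

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.morphology import binary_closing, disk

def preprocess_slice(ct_slice_hu):
    """Sketch of the pre-processing step: fat thresholding plus denoising."""
    # Fat tissue lies roughly within [-190, -30] HU [9].
    fat_mask = (ct_slice_hu >= -190) & (ct_slice_hu <= -30)
    # Morphological closing with a disk of radius 10 fills small holes.
    closed = binary_closing(fat_mask, disk(10))
    # 3x3 median filtering suppresses isolated noisy pixels.
    return median_filter(closed.astype(np.uint8), size=3).astype(bool)
```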

2.2. Initial Boundary Estimation

We identify the skin boundary of the subject by filling image regions and extracting the skin-boundary contour from the filled image. The largest set of contour points on the filled image corresponds to the skin boundary. We then generate a set of boundary points between the boundary contour and the centroid of the skin boundary, serving as possible hypotheses for SAT-VAT boundary locations. Each hypothesis (candidate boundary location) is next verified for the possibility of being a boundary location by assessing gradient information. Let H_C = {h_1, h_2, ..., h_n} be the set of hypothesis points between the initial boundary contour C and the center of the abdomen region m. Then, the SAT-VAT boundary locations B = {b_1, b_2, ..., b_n} satisfy the following condition:

\[
h_j \neq h_{j-1} \;\; \text{for } h_j \in B, \text{ and } b_i \in H_C, \;\forall i. \qquad (1)
\]

For each boundary location in C, we obtain the SAT-VAT separating boundary location b_i from the transition of pixel intensities from 1 to 0 on the thresholded image, where the labels come from the initial segmentation of the CT volume. These boundary points can be noisy and often get stuck inside the small cavities of the subcutaneous fat. Hence, we propose a two-stage approach to smooth these noisy measurements.
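A minimal sketch of this ray-based search is given below. It assumes a binary fat mask and a skin contour given as (row, column) coordinates, and simply records the first 1-to-0 transition along each line toward the contour centroid; the helper name and sampling density are our own.

```python
import numpy as np

def initial_boundary(fat_mask, skin_contour, n_samples=200):
    """Candidate SAT-VAT boundary points: first 1 -> 0 transition of the fat
    mask along the line from each skin point toward the contour centroid."""
    centroid = skin_contour.mean(axis=0)
    candidates = []
    for p in skin_contour:
        # Sample hypothesis points on the segment from the skin point to the centroid.
        ts = np.linspace(0.0, 1.0, n_samples)[:, None]
        line = p[None, :] + ts * (centroid[None, :] - p[None, :])
        rows = np.clip(np.round(line[:, 0]).astype(int), 0, fat_mask.shape[0] - 1)
        cols = np.clip(np.round(line[:, 1]).astype(int), 0, fat_mask.shape[1] - 1)
        vals = fat_mask[rows, cols].astype(int)
        trans = np.where((vals[:-1] == 1) & (vals[1:] == 0))[0]
        if trans.size:
            candidates.append(line[trans[0] + 1])
    return np.asarray(candidates)
```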

2.3. Outlier Removal Using Geometric MAD

In the first stage of outlier removal and smoothing, we apply the median absolute deviation (MAD) to the distances between the abdominal boundary (B) and the skin boundary (P). The intuition behind this idea is that the abdominal boundary contour should maintain a smoothly varying distance from the skin boundary. The outliers in subcutaneous and visceral cavities usually have abruptly changing distances from the skin boundary; hence, we compute the MAD [10] for all boundary points to obtain a score for each point in the boundary being an outlier. The MAD coefficient Φ is given by:

\[
\Phi = \frac{\bigl|\,\|P - B\|_2 - \mathrm{med}(\|P - B\|_2)\,\bigr|}{\mathrm{med}\bigl\{\bigl|\,\|P - B\|_2 - \mathrm{med}(\|P - B\|_2)\,\bigr|\bigr\}}, \qquad (2)
\]

where med is the median and the denominator of Eq. 2 is the median absolute deviation. Boundary locations having Φ greater than the empirically selected threshold of 2.5 are labeled as outliers and are removed from the boundary B. The MAD was found to be quite effective against noisy boundary measurements in our experiments. However, there may still be some boundary locations that could potentially lead to drifting of the SAT-VAT separation, particularly in small cavities. To mitigate the influence of those boundary points on the final boundary B, we apply appearance constraints, explained in the next subsection.
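The MAD-based filtering of Eq. (2) reduces to a few lines; the sketch below assumes the skin points and candidate boundary points are paired per ray (the function name is ours) and uses the paper's empirical threshold of 2.5.

```python
import numpy as np

def mad_filter(skin_points, boundary_points, thresh=2.5):
    """Drop boundary candidates whose skin-to-boundary distance is a MAD outlier (Eq. 2)."""
    d = np.linalg.norm(skin_points - boundary_points, axis=1)
    dev = np.abs(d - np.median(d))
    mad = np.median(dev)
    phi = dev / (mad + 1e-12)   # MAD coefficient Phi; epsilon guards against division by zero
    return boundary_points[phi <= thresh]
```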

2.4. Outlier Removal Using Appearance Attributes

For the second stage of the core methodology, we compute Histogram of Oriented Gradients (HOG) features as appearance attributes in a 14x14 cell with an overlapping window of 5 pixels (resulting in a 279-dimensional feature vector). The goal is to further identify the candidate boundary points between SAT and VAT where shape/geometry based attributes have limitations. In practice, as these boundary points lie on a high dimensional manifold, we use the normalized correlation distance instead of the Euclidean distance, as also justified by computing the proximity Q_ij between embedded boundary points q_i and q_j using t-distributed stochastic neighborhood embedding (t-SNE) [11]:

\[
Q_{ij} = \frac{\bigl(1 + \|q_i - q_j\|^2\bigr)^{-1}}{\sum_{a \neq b} \bigl(1 + \|q_a - q_b\|^2\bigr)^{-1}}. \qquad (3)
\]

Fig. 3: t-SNE visualizations using (a) Euclidean and (b) normalized correlation distance. Better separation between classes can be clearly seen in (b), where blue and red are two separate classes.
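The appearance attributes and the embedding in Fig. 3 can be approximated with scikit-image and scikit-learn as sketched below. The patch size and HOG cell configuration here are illustrative and will not reproduce the exact 279-dimensional descriptor used in the paper, and TSNE's "correlation" metric stands in for the normalized correlation distance.

```python
import numpy as np
from skimage.feature import hog
from sklearn.manifold import TSNE

def hog_descriptors(image, points, patch=28):
    """HOG descriptor from a patch centred on each candidate boundary point
    (illustrative parameters; the paper uses 14x14 cells with 5-pixel overlap)."""
    half = patch // 2
    feats = []
    for r, c in np.round(points).astype(int):
        window = image[r - half:r + half, c - half:c + half]
        feats.append(hog(window, orientations=9, pixels_per_cell=(14, 14),
                         cells_per_block=(1, 1)))
    return np.asarray(feats)

# 2-D embedding with a correlation-based metric, as visualized in Fig. 3(b):
# embedding = TSNE(n_components=2, metric="correlation").fit_transform(hog_descriptors(img, pts))
```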

Briefly, we use t-SNE to project the extracted HOG features onto a 2-dimensional space. Figure 3 depicts the feature embedding visualization using t-SNE, where the better separation of features with the normalized correlation distance can be readily observed. We then compute the local outlier scores Π to obtain the confidence of each point being an outlier [12]. The intuition is to cluster together points that are mapped to denser regions in the high dimensional feature space and to alienate the outliers which do not actually constitute the muscle boundary between SAT and VAT. The higher the Π, the higher the confidence of that boundary point being an outlier:

\[
\Pi(x) = \mathrm{erf}\!\left(\frac{PLOF(x)}{\sqrt{2}\cdot nPLOF}\right), \qquad (4)
\]

where erf is the Gaussian error function and PLOF is the probabilistic local outlier factor, based on the ratio of the density around point x and the mean value of the estimated densities around all remaining points. nPLOF is λ standard deviations of the PLOF, where λ = 3 in our experiments.
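Eq. (4) follows the LoOP formulation of [12]; a compact, simplified re-implementation using scikit-learn's nearest-neighbor search is sketched below (this is not the reference implementation), with λ = 3 as in the paper.

```python
import numpy as np
from scipy.special import erf
from sklearn.neighbors import NearestNeighbors

def local_outlier_scores(feats, k=20, lam=3.0):
    """Simplified LoOP-style outlier probabilities Pi(x) for each feature row (Eq. 4)."""
    # Correlation distance is used to mirror the normalized correlation choice above.
    nn = NearestNeighbors(n_neighbors=k + 1, metric="correlation").fit(feats)
    dist, idx = nn.kneighbors(feats)
    dist, idx = dist[:, 1:], idx[:, 1:]                      # drop each point itself
    pdist = lam * np.sqrt(np.mean(dist ** 2, axis=1))        # probabilistic set distance
    plof = pdist / np.mean(pdist[idx], axis=1) - 1.0         # PLOF: density ratio - 1
    nplof = lam * np.sqrt(np.mean(plof ** 2))                # lambda std. dev. of PLOF
    return np.maximum(0.0, erf(plof / (nplof * np.sqrt(2.0))))
```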

2.5. Context Driven Label Fusion

We employ Conditional Random Fields (CRF) [13] to fuse the labels of the boundary candidates after the second and third stages, i.e., to remove any inconsistencies in the geometric and appearance labels using context information. In the CRF formulation, the image is considered as a graph G = (V, E), where the nodes (V) consist of only the hypothesis boundary points and the edges (E) connect neighboring points in the high dimensional feature space. We seek to minimize the negative log of Pr(k|G; w) with labels k and weight w as:

\[
-\log\bigl(\Pr(k \mid G; w)\bigr) = \sum_{v_i \in V} \Theta(k_i \mid v_i) + w \sum_{v_i, v_j \in E} \Psi(k_i, k_j \mid v_i, v_j). \qquad (5)
\]

Unary Potentials: The unary potentials are defined to be the probability outputs obtained after applying k-means clustering on the normalized scores of the first and second stages:


Table 1: Segmentation results for SAT and VAT evaluated by Dice Similarity Coefficient (DSC).

Methods                   Subcutaneous DSC    Visceral DSC
Zhao et al. [3]           89.51%              84.09%
RANSAC                    91.14%              85.90%
Geometric MAD             89.61%              87.58%
Appearance LoOS           92.45%              88.48%
Context Driven Fusion     94.04%              91.57%

\[
\Theta(k_i \mid v_i) = -\log\bigl(\Pr(k_i \mid v_i)\bigr). \qquad (6)
\]

Pairwise Potentials: The pairwise potentials between the set of neighboring points v_i and v_j are defined as:

\[
\Psi(k_i, k_j \mid v_i, v_j) = \left(\frac{1}{1 + |\phi_i - \phi_j|}\right)\bigl[k_i \neq k_j\bigr], \qquad (7)
\]

where |·| is the Manhattan (L1) distance and φ is the concatenation of appearance and geometric features.
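To make the fusion step concrete, the sketch below evaluates the energy of Eqs. (5)-(7) and minimizes it with a simple ICM-style sweep. This greedy stand-in is our own simplification for illustration; the paper performs inference with the CRF of [13].

```python
import numpy as np

def fuse_labels(unary, phi, neighbors, w=1.0, n_iter=10):
    """Greedy (ICM-style) minimization of the energy in Eq. (5).
    unary:     (n, 2) negative log-probabilities Theta(k_i | v_i) from Eq. (6)
    phi:       (n, d) concatenated geometric + appearance features
    neighbors: list of index arrays, the feature-space neighbors of each node."""
    labels = unary.argmin(axis=1)
    for _ in range(n_iter):
        for i in range(len(labels)):
            costs = unary[i].copy()
            for j in neighbors[i]:
                pair = 1.0 / (1.0 + np.abs(phi[i] - phi[j]).sum())  # Eq. (7), L1 distance
                costs += w * pair * (np.arange(2) != labels[j])      # penalty if labels differ
            labels[i] = costs.argmin()
    return labels
```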

Finally, we fit a convex hull around the visceral boundaries obtained after the label fusion stage. The segment inside the convex hull is masked as VAT and that outside of it is labeled as SAT.

3. RESULTS

Data: With IRB approval, we retrospectively collected imaging data from 151 oncology patients who underwent PET/CT scanning at our institute from 2011 to 2013 (67 men, 84 women, mean age: 57.4). Since the CT images come from the PET/CT hybrid counterpart, they are of low resolution, and no contrast agent was used. The in-plane resolution (xy-plane) of the CT images was 1.17 mm by 1.17 mm, and the slice thickness was 5 mm. CT images were reconstructed at the abdominal level before applying the proposed method. Patients were selected to have a roughly equal distribution across BMI categories (obese, overweight, normal, and underweight). Two expert interpreters manually labeled the whole data set to serve as ground truth. Above 99% agreement was found, with no statistical difference between the observers' evaluations (p > 0.5).

Table 2: Mean absolute error (MAE) for SAT and VAT in ml.

Method                    MAE for SAT (ml)    MAE for VAT (ml)
RANSAC                    12.4995             12.5009
Zhao et al. [3]           11.1617             11.1632
Geometric MAD             13.6875             13.6890
Appearance LoOS           11.8463             11.8473
Context Driven Fusion     7.1258              7.1281

Evaluations: For segmentation evaluation, we used the widely accepted Dice similarity coefficient (DSC), DSC = 2|I_G ∩ I_S| / (|I_G| + |I_S|), where I_G and I_S are the ground truth and automatically segmented fat regions, respectively. For quantification evaluation, we used the Mean Absolute Error (MAE) in milliliters (ml) of adipose tissue.
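Both evaluation measures reduce to a few lines. The sketch below assumes binary masks and a per-voxel volume derived from the stated resolution (1.17 x 1.17 x 5 mm³); the per-scan absolute volume error would then be averaged over subjects to obtain the reported MAE.

```python
import numpy as np

def dice(gt_mask, seg_mask):
    """Dice similarity coefficient between binary ground-truth and segmentation masks."""
    inter = np.logical_and(gt_mask, seg_mask).sum()
    return 2.0 * inter / (gt_mask.sum() + seg_mask.sum())

def volume_error_ml(gt_mask, seg_mask, voxel_ml=1.17 * 1.17 * 5.0 / 1000.0):
    """Absolute fat-volume error in milliliters for one scan (averaged over scans -> MAE)."""
    return abs(int(gt_mask.sum()) - int(seg_mask.sum())) * voxel_ml
```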

Fig. 4: Volume rendering along with labeled VAT and total fat in the abdominal region. (a) Subject with BMI < 25, (b) subject with BMI > 30. VAT (red) is shown separated from total fat (green). Five slices at the umbilical level of the abdomen are shown for illustration.

Comparisons: We compare our proposed method with Zhao et al. [3] and RANSAC. We also progressively show the results of the steps of our proposed framework: Geometric MAD, Appearance LoOS, and the final context driven fusion. DSC results for SAT and VAT using these five methods are shown in Table 1, where a significant improvement can be seen with our proposed method. Moreover, Table 2 shows a comparison using the mean absolute error (MAE) in ml, where the proposed method records over 36% lower MAE compared to [3] and the other baselines. Figure 4 shows volume renderings of subjects with BMIs in the normal and obese ranges along with the visceral and total fat segmentations.

Computation Time: The unoptimized MATLAB implementation of the proposed method takes around 2.59 seconds per slice on an Intel Xeon Quad Core CPU @ 2.80 GHz with 24.0 GB RAM. For faster performance, the algorithm can be run in parallel over multiple slices, as there are no dependencies across slices.

4. CONCLUSION

In this work, we present a novel approach for the automatic and unsupervised segmentation of SAT and VAT using global and local geometric and appearance attributes, followed by context driven label fusion. Evaluations were performed on low resolution CT volumes from 151 subjects with varying BMI conditions to avoid bias in our analysis. The proposed method has shown superior performance compared to other methods. As an extension of this study, we are currently developing an automated abdominal region detection algorithm to extract the abdominal region from whole body CT scans and combine it with the proposed framework for fat quantification to be used in routine clinics.

Our approach is designed to be model free; training or patient-specific parameter tuning is not necessary. Instead, image-driven features are adapted accordingly due to the robustness of our proposed method. This study is an important first step in generalizing fat quantification using low resolution CT images, which will become more common in the near future due to an exponential increase in the quantity of publicly available CT data.

5. REFERENCES

[1] Marie Ng, Tom Fleming, Margaret Robinson, Blake Thomson, Nicholas Graetz, Christopher Margono, Erin C Mullany, Stan Biryukov, Cristiana Abbafati, Semaw Ferede Abera, et al., "Global, regional, and national prevalence of overweight and obesity in children and adults during 1980–2013: a systematic analysis for the Global Burden of Disease Study 2013," The Lancet, vol. 384, no. 9945, pp. 766–781, 2014.

[2] Cynthia L Ogden, Margaret D Carroll, Brian K Kit, and Katherine M Flegal, "Prevalence of childhood and adult obesity in the United States, 2011–2012," JAMA, vol. 311, no. 8, pp. 806–814, 2014.

[3] Binsheng Zhao, Jane Colville, John Kalaigian, Sean Curran, Li Jiang, Peter Kijewski, and Lawrence H Schwartz, "Automated quantification of body fat distribution on volumetric computed tomography," Journal of Computer Assisted Tomography, vol. 30, no. 5, pp. 777–783, 2006.

[4] Daniel Romero, Juan Carlos Ramirez, and Alejandro Mármol, "Quantification of subcutaneous and visceral adipose tissue using CT," in Medical Measurement and Applications (MeMea 2006), IEEE International Workshop on. IEEE, 2006, pp. 128–133.

[5] Amol Pednekar, Alok N Bandekar, Ioannis Kakadiaris, Morteza Naghavi, et al., "Automatic segmentation of abdominal fat from CT data," in Application of Computer Vision, 2005. WACV/MOTIONS'05, Seventh IEEE Workshops on. IEEE, 2005, vol. 1, pp. 308–315.

[6] Howard Chung, Dana Cobzas, Laura Birdsell, Jessica Lieffers, and Vickie Baracos, "Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis," in SPIE Medical Imaging. International Society for Optics and Photonics, 2009, pp. 72610K–72610K.

[7] Sanne D Mensink, Jarich W Spliethoff, Ruben Belder, Joost M Klaase, Roland Bezooijen, and Cornelis H Slump, "Development of automated quantification of visceral and subcutaneous adipose tissue volumes from abdominal CT scans," in SPIE Medical Imaging. International Society for Optics and Photonics, 2011, pp. 79632Q–79632Q.

[8] Young Jae Kim, Seung Hyun Lee, Tae Yun Kim, Jeong Yun Park, Seung Hong Choi, and Kwang Gi Kim, "Body fat assessment method using CT images with separation mask algorithm," Journal of Digital Imaging, vol. 26, no. 2, pp. 155–162, 2013.

[9] Tohru Yoshizumi, Tadashi Nakamura, Mitsukazu Yamane, Abdul Hasan M Waliul Islam, Masakazu Menju, Kouichi Yamasaki, Takeshi Arai, Kazuaki Kotani, Tohru Funahashi, Shizuya Yamashita, et al., "Abdominal fat: standardized technique for measurement at CT," Radiology, vol. 211, no. 1, pp. 283–286, 1999.

[10] Christophe Leys, Christophe Ley, Olivier Klein, Philippe Bernard, and Laurent Licata, "Detecting outliers: do not use standard deviation around the mean, use absolute deviation around the median," Journal of Experimental Social Psychology, vol. 49, no. 4, pp. 764–766, 2013.

[11] Laurens van der Maaten and Geoffrey Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.

[12] Hans-Peter Kriegel, Peer Kröger, Erich Schubert, and Arthur Zimek, "LoOP: local outlier probabilities," in Proceedings of the 18th ACM Conference on Information and Knowledge Management. ACM, 2009, pp. 1649–1652.

[13] B. Fulkerson, A. Vedaldi, and S. Soatto, "Class segmentation and object localization with superpixel neighborhoods," in Proceedings of the International Conference on Computer Vision, October 2009.