International Journal of Computer Information Systems and Industrial Management Applications.
ISSN 2150-7988 Volume 11 (2019) pp. 142-150
© MIR Labs, www.mirlabs.net/ijcisim/index.html
MIR Labs, USA
Received: 17 Feb, 2019; Accepted: 15 May, 2019; Published: 13 June, 2019
Comparative Evaluation of Interactive Segmentation
Algorithms Using One Unified User Interactive Type
Soo See Chai1, Kok Luong Goh2, Hui Hui Wang3 and Yin Chai Wang4
1,3,4 Faculty of Computer Science & Information Technology,
University of Malaysia Sarawak (UNIMAS),
Kota Samarahan, 94300, Malaysia [email protected], [email protected], [email protected]
2 School of Innovative Technology,
International College of Advanced Technology Sarawak (i-CATS),
Malaysia
Abstract: In interactive segmentation, user inputs are required
to provide cues for the algorithms to extract the object of interest.
Different input types have been recommended by researchers for their
algorithms; the most common input types are points, strokes and
bounding boxes. Different evaluation parameters have also been used
for comparison in this field. Our previous work shows that, for
non-complex images, the segmentation result is not affected by the
user input type used. Complex images are defined as images in which
the colors of the object of interest and the background are similar;
in some of the complex images, parts of the color of the object of
interest are also present in the background. This paper extends our
previous work by using the proposed unified input type, which consists
of a bounding box to locate the range of the object of interest and a
stroke for the foreground, on three interactive segmentation
algorithms for non-complex and complex images. Three different
evaluation measures are computed to compare the segmentation quality:
Variation of Information (VI), Global Consistency Error (GCE) and
Jaccard index (JI). From the experimental results, it is noticed that
all three algorithms perform well for non-complex images but could not
perform as well for complex images.
Keywords: interactive segmentation, complex, non-complex, user
input, bounding box, stroke.
I. Introduction
Image segmentation algorithms help humans extract objects of interest
from images for further processing. Generally, image segmentation
partitions an image into a number of regions that share coherent
features such as texture and color; these coherent features are
grouped into meaningful pieces for better perception [1]. From a
technical perspective, image segmentation can be divided into
fully automated, semi-automated or interactive, and manual
segmentation [2]. Fully automated segmentation, as the name implies,
does not require user intervention. Semi-automated or interactive
segmentation, with a medium level of automation, requires the user to
initialize the algorithm by appropriately marking the object of
interest; some of these algorithms also require the user to provide
feedback in order to improve the segmentation results. In manual
segmentation, the object of interest is delineated by hand. In most
practical applications of image segmentation, a large number of images
need to be handled, so human intervention in the segmentation process
should be as minimal as possible. This makes automated image
segmentation more appealing [3]. However, automated segmentation still
exhibits certain constraints and cannot produce satisfactory results
due to the complexity of the images, especially natural images [4-7].
To overcome this, semi-automated or interactive segmentation uses a
human operator to provide cues to the segmentation algorithms.
This paper is organized as follows: Section II presents a brief
introduction to general interactive segmentation and the three
different algorithms used; the purpose of this paper is also explained
in this section. Section III presents the experiment settings and the
results obtained. The conclusion is presented in Section IV.
II. Interactive Segmentation
In interactive segmentation, the user intention is incorporated
in the user interaction [8]. The user will provide intuitive
interaction which serves as high level or prior information to
the interactive segmentation algorithms to extract the object of
interest [9-14]. A general functional view of an interactive
segmentation system is depicted in Figure 1. The system
consists of User Input Module (Step 1), Computation Module
(Step 2) and Output Display Module (Step 3) [15]. In Step 1,
the module will receive user input and the intention of the user
will be recognized at this stage. The main part of the whole
system is the Computation Module, i.e. Step 2, whereby the
segmentation algorithms run according to the user input to generate
the segmentation results. The segmentation results are displayed in
Step 3. These steps are iterative: the user can provide additional
input after Step 3 and the system returns to Step 1 until the user is
satisfied with the output produced. Thus, this is a human-machine
collaborative method whereby the machine needs to understand the
user's intention based on the human interaction, and the human's input
affects the machine's behavior in solving the problem [16].

Figure 1. General functional view of an interactive segmentation
system, whereby a user can interact with the system until a
satisfactory result is obtained [15].

Figure 2. Input types used in interactive segmentation: a: bounding
box. b: bounding boxes for the foreground object. c: seed points for
the background and foreground of the image. d: a skeleton placed on
the object of interest. e: background and foreground strokes on the
image. f: a stroke on the contour of the object. g: a bounding box
with strokes on the object and background, and h: a seed point on the
contour of the object.
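For concreteness, the three-step loop of Figure 1 can be summarised in
the following minimal sketch; get_user_input, run_segmentation and
display_result are hypothetical placeholders for the three modules,
not functions of any of the cited systems.

def interactive_segmentation_session(image, get_user_input, run_segmentation,
                                     display_result):
    """Iterate Steps 1-3 until the user accepts the displayed result."""
    cues = []
    while True:
        cues.extend(get_user_input(image))      # Step 1: User Input Module
        mask = run_segmentation(image, cues)    # Step 2: Computation Module
        if display_result(image, mask):         # Step 3: Output Display Module
            return mask                         # the user accepted the result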
There are several user interactive or input types used in current
research to provide information about the background and foreground
regions. The most commonly used input type is placing strokes on the
foreground and background of the image [4, 6, 7, 9, 13, 17-23].
Besides strokes, a rectangle or bounding box is also used to locate
the range of the target object [11, 24-28]. In addition, seed points
are placed on the foreground and background of the image [29-31] or on
the contour of the object of interest [32, 33]. Mixed input types have
also been applied in some of the research; for example, [34] combined
seed points with strokes on the background and foreground of the image
to extract the object of interest. Figure 2 shows the different input
types used in interactive segmentation.
Figure 3. The overall framework of [38].
In our previous work [35], we compared four commonly used user input
types, i.e. seed points, foreground and background strokes, a
foreground stroke with a background bounding box, and bounding boxes
as foreground and background, using the nonparametric higher-order
learning algorithm of [36] on simple (non-complex) and complex images
from the Berkeley image database [37]. Our previous findings show that
the use of a bounding box to locate the range of the object of
interest can improve the segmentation results for complex images. In
addition, we also found that the location and length of the strokes or
seed points used have an impact on the segmentation accuracy. In this
work, we extend the previous work by comparing and evaluating the use
of the bounding box as an input type for three different interactive
segmentation algorithms: robust interactive image
segmentation via graph-based manifold ranking [38],
nonparametric higher-order learning for interactive segmentation [36]
and interactive image segmentation by maximal similarity based region
merging [39]. A quantitative comparison of the segmentation results is
done with three evaluation parameters: Variation of Information (VI),
Global Consistency Error (GCE) and Jaccard index (JI) [40]. Between
two segmentations, GCE measures the error, JI measures the similarity
and VI measures the distance.
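Since the three original works report different measures, the three
measures used here can be computed directly from the output and
ground-truth label maps. The following is a minimal sketch using the
standard definitions of JI, GCE [37] and VI; the exact normalisation
used in [40] may differ, and the function names are illustrative.

import numpy as np

def jaccard_index(seg, gt):
    """JI = |A intersection B| / |A union B| for boolean foreground masks."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    union = np.logical_or(seg, gt).sum()
    return np.logical_and(seg, gt).sum() / union if union else 1.0

def global_consistency_error(s1, s2):
    """GCE of Martin et al. [37] between two integer label maps."""
    s1, s2 = np.asarray(s1, int).ravel(), np.asarray(s2, int).ravel()
    n = s1.size
    e12 = e21 = 0.0
    for l1 in np.unique(s1):
        r1 = s1 == l1
        for l2 in np.unique(s2):
            r2 = s2 == l2
            inter = np.logical_and(r1, r2).sum()
            if inter:
                # local refinement error summed over the pixels in r1 ∩ r2
                e12 += inter * (r1.sum() - inter) / r1.sum()
                e21 += inter * (r2.sum() - inter) / r2.sum()
    return min(e12, e21) / n

def variation_of_information(s1, s2):
    """VI = H(S1) + H(S2) - 2 I(S1; S2), from the joint label histogram."""
    s1, s2 = np.asarray(s1, int).ravel(), np.asarray(s2, int).ravel()
    joint = np.zeros((s1.max() + 1, s2.max() + 1))
    np.add.at(joint, (s1, s2), 1)
    p = joint / joint.sum()
    p1, p2 = p.sum(axis=1), p.sum(axis=0)
    h1 = -np.sum(p1[p1 > 0] * np.log2(p1[p1 > 0]))
    h2 = -np.sum(p2[p2 > 0] * np.log2(p2[p2 > 0]))
    nz = p > 0
    mi = np.sum(p[nz] * np.log2(p[nz] / np.outer(p1, p2)[nz]))
    return h1 + h2 - 2.0 * mi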
Figure 4. Segmentation results obtained using [38]: a. to e. multiple
user scribbles for foreground and background labels, and f. to j. the
corresponding results obtained, with β0 = 0.9665, 0.9644, 0.9491,
0.9682 and 0.9627, respectively.
Figure 5. Segmentation results obtained using algorithm [36]: a. to e.
the user scribbles used, and f. to j. the corresponding results.
A. Robust Interactive Segmentation via Graph-based
Manifold Ranking [38]
In this work, an interactive segmentation system was developed using
graph-based semi-supervised learning theory and superpixels. At the
beginning of the process, the image is over-segmented into small
homogeneous regions using superpixels. To capture the user's
intention, scribbles are entered as foreground and background labels;
there is no limit on the number of foreground and background scribbles
that may be input by the user. The foreground and background label
information is then integrated into the superpixels. Next, using the
proposed label-driven and locally adaptive kernel parameter, an
approximate k-regular sparse graph is constructed to form the affinity
graph matrix. Foreground and background indicator vectors are formed
from the user scribbles, and the final segmentation result is
generated by calculating and integrating the ranking scores. The
overall framework is shown in Figure 3. Figure 4 shows some of the
segmentation results the authors obtained. In their work, the
normalized overlap, β0, is used to measure the similarity between the
segmentation results and the preset ground truth; however, the average
β0 over all the tested images was not reported. In addition, the
authors reported that the proposed algorithm managed to segment the
input images in less than 2 seconds.
The key contributions of this work are:
• A novel framework that combines graph-based semi-supervised learning
theory with region-based models to efficiently segment out the desired
object,
• A novel graph construction strategy that models the graph as an
approximate k-regular sparse graph integrating spatial relationships
and user-provided scribbles, and
• A new graph edge-weight computation strategy that forms the weights
using a locally adaptive kernel width parameter.
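To make the ranking step concrete, a minimal sketch of graph-based
manifold ranking on a superpixel affinity matrix is given below, using
the standard closed-form solution f* = (I - alpha*S)^(-1) y with S the
symmetrically normalised affinity matrix. The k-regular sparse graph
construction and locally adaptive kernel of [38] are not reproduced,
and the way the two ranking vectors are integrated here is only one
simple choice.

import numpy as np

def manifold_rank(W, seed_indicator, alpha=0.99):
    """Closed-form ranking scores f* = (I - alpha*S)^(-1) y over the graph.

    W              : (n, n) symmetric affinity matrix between superpixels.
    seed_indicator : (n,) vector with 1 for labelled superpixels, else 0.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = d_inv_sqrt @ W @ d_inv_sqrt              # symmetrically normalised affinity
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, seed_indicator)

def rank_based_labels(W, fg_seeds, bg_seeds):
    """Label a superpixel as foreground when its foreground ranking score
    exceeds its background ranking score (one simple way of integrating
    the two indicator vectors)."""
    return manifold_rank(W, fg_seeds) > manifold_rank(W, bg_seeds)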
B. Nonparametric higher-order learning for interactive
segmentation [36]
This generative interactive segmentation model is based on multiple
labels. An algorithm was proposed to estimate the likelihood of each
pixel for each label. Using the regions generated by the mean shift
unsupervised learning algorithm [41], a new higher-order cost function
of pixel likelihoods was designed to partly enforce label consistency
inside these regions. The algorithm considers the relationships
between the pixels and the corresponding regions in a multi-layer
graph. To estimate the pixel likelihoods, the higher-order cues of the
representative region likelihoods are used. A non-parametric learning
technique is introduced to recursively estimate the higher-order cues
from the resulting likelihoods of the pixels included in each region.
This takes into account the connections between the regions, which aid
in the propagation of local grouping cues across image areas. Figure 5
shows the scribbles used and the corresponding segmentation results
for some of the images tested by the authors. The authors reported
that the algorithm achieved an error rate of 4.34%.
The key contributions of this algorithm are:
• The pixel likelihood for each label is estimated,
• The representative region likelihoods are defined as higher-order
cues to estimate the pixel likelihoods, and
• A nonparametric learning technique is introduced to recursively
estimate the higher-order cues from the resulting likelihoods of the
pixels included in each region.
For the pixel and region properties, only color values were used.
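As an illustration of the recursive estimation described above, the
following minimal sketch alternates between estimating a region-level
(higher-order) cue from the current pixel likelihoods and propagating
it back to the pixels of the region. The colour likelihood model and
the multi-layer graph optimisation of [36] are not reproduced, and the
mixing weight is an assumption.

import numpy as np

def refine_likelihoods(pixel_likelihoods, region_ids, n_iters=5, mix=0.5):
    """pixel_likelihoods : (n_pixels, n_labels) initial per-pixel likelihoods.
    region_ids        : (n_pixels,) index of the mean-shift region of each pixel.
    Returns the refined per-pixel likelihoods."""
    p = np.asarray(pixel_likelihoods, float).copy()
    region_ids = np.asarray(region_ids).ravel()
    for _ in range(n_iters):
        for r in np.unique(region_ids):
            members = region_ids == r
            # higher-order cue: representative likelihood of the region,
            # estimated from the current likelihoods of its pixels
            region_cue = p[members].mean(axis=0)
            # propagate the region cue back to the pixels of the region
            p[members] = (1.0 - mix) * p[members] + mix * region_cue
        p /= p.sum(axis=1, keepdims=True)        # renormalise per pixel
    return p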
C. Interactive image segmentation by maximal
similarity-based region merging [39]
In this method, strokes are used not only as markers to indicate the
location, but also to label the object and the background. The
similarity between the different regions of the image is calculated
and, with the assistance of the markers input by the user, similar
regions are merged based on the proposed maximal similarity rule. The
proposed maximal similarity-based region merging does not require a
preset threshold value and is adaptive to the image content. The
method is able to merge and label the non-marker background regions
automatically.
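The maximal similarity rule can be illustrated with a small sketch: a
region is merged into a neighbour only if that neighbour is its most
similar adjacent region, with the Bhattacharyya coefficient between
normalised colour histograms used as the region similarity (the
measure adopted in [39]). The staged merging procedure driven by the
marker regions is not shown, and the data structures are illustrative.

import numpy as np

def bhattacharyya(hist_a, hist_b):
    """Similarity between two normalised colour histograms."""
    return float(np.sum(np.sqrt(hist_a * hist_b)))

def is_maximal_similarity_merge(a, b, histograms, adjacency):
    """Merge region b into region a only if, among all regions adjacent
    to b, region a has the maximal similarity to b."""
    sim_ab = bhattacharyya(histograms[a], histograms[b])
    return all(sim_ab >= bhattacharyya(histograms[q], histograms[b])
               for q in adjacency[b])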
Figure 6. Segmentation results obtained using algorithm [39]: a. to e.
show the user input, f. to j. show the segmentation outputs obtained.
In addition, the method also successfully identifies the non-marker
object regions and prevents them from being merged with the
background. The object contours are then extracted from the background
after all the non-marker regions are labeled. The authors claimed that
this method is the first to use the markers entered by the user to
guide region merging for object contour extraction. Some of the
results obtained using this method are shown in Figure 6.
The authors manually labelled the desired object in the
images and used this as the ground truth. The evaluation
parameters used are: true positive rate and false positive rate.
The true positive rate is defined as the number of correctly
classified object pixels over the total number of object pixels in the
ground truth, while the false positive rate is the ratio of the number
of background pixels wrongly classified as object pixels to the total
number of background pixels in the ground truth.
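These two rates can be computed directly from boolean masks of the
segmentation output and the manually labelled ground truth, as in the
following minimal sketch (function name illustrative).

import numpy as np

def true_false_positive_rates(segmentation, ground_truth):
    """Rates as defined above, for boolean object masks."""
    seg = np.asarray(segmentation, bool)
    gt = np.asarray(ground_truth, bool)
    tpr = np.logical_and(seg, gt).sum() / gt.sum()      # object pixels found
    fpr = np.logical_and(seg, ~gt).sum() / (~gt).sum()  # background taken as object
    return tpr, fpr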
Figure 7. Failure example of algorithm [39]: a. part of the object and
the background have very similar features and many scribbles were
used, b. the segmentation result obtained.
The ultimate aim is to achieve a lower false positive rate and, at the
same time, a higher true positive rate. The true positive rate and
false positive rate obtained for the images in Figure 6 are: bird
(94.64% and 0.29%), starfish (90.25% and 0.26%), flower (97.59% and
1.08%), Mona Lisa (98.85% and 0.71%) and tiger (91.70% and 0.75%).
The authors noted that, since the proposed algorithm relies on an
initial segmentation, such as mean shift or superpixels, the method
will fail if the initial segmentation does not provide a good basis
for region merging.
In terms of user input, the experiments conducted by the authors
showed that the user input has to cover the main feature regions in
order for the object of interest to be correctly extracted. In
addition, the authors also conducted experiments with different
numbers of user inputs and concluded that images with more user inputs
produce better segmentation results. The algorithm was observed to
produce less encouraging results when shadows, low-contrast edges and
ambiguous areas occur. In an experiment using an image where parts of
the object regions are very similar to the background, as shown in
Figure 7, the segmentation result obtained was poor although many
markers were used.
Figure 8. The bounding box and foreground stroke used as input for the
three different interactive segmentation algorithms. Images a, b and c
are non-complex images, while images d, e and f are complex images.
III. Experiment Settings and Results
A. Observations
From the previous section, it can be seen that the three algorithms
use strokes to indicate the foreground and background in the image,
and multiple inputs are required from the user to accurately provide
the cues for segmentation. In addition, no common evaluation measure
is used across the three works. Therefore, in this study, the
experiment settings: 1) compare the three algorithms using the unified
input of a single bounding box and one foreground stroke on a set of
images divided into complex and non-complex categories, and 2) use the
same evaluation parameters to compare and evaluate the segmentation
results.
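One simple way to feed this unified input to stroke-based algorithms
is to encode it as a per-pixel seed map, with pixels outside the
bounding box treated as background seeds and the pixels under the
stroke as foreground seeds. The sketch below is an illustrative
encoding and not the exact interface of the three algorithms.

import numpy as np

BACKGROUND, FOREGROUND, UNKNOWN = 0, 1, -1

def unified_input_to_seeds(image_shape, bbox, stroke_pixels):
    """image_shape   : (height, width) of the image.
    bbox          : (top, left, bottom, right) of the bounding box.
    stroke_pixels : iterable of (row, col) pixels covered by the foreground stroke.
    Returns a per-pixel seed map."""
    seeds = np.full(image_shape, BACKGROUND, dtype=int)  # outside the box: background seeds
    top, left, bottom, right = bbox
    seeds[top:bottom, left:right] = UNKNOWN              # inside the box: to be decided
    for r, c in stroke_pixels:
        seeds[r, c] = FOREGROUND                         # the stroke marks the object
    return seeds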
Figure 9. Qualitative and quantitative segmentation results obtained
using the non-complex images for: a. Robust Interactive Segmentation
via Graph-based Manifold Ranking [38], b. Nonparametric higher-order
learning for interactive segmentation [36], and c. Interactive image
segmentation by maximal similarity-based region merging [39]. The
quantitative values shown in the figure are:

Image 1: a. GCE=0.02, VI=0.98, JI=0.95; b. GCE=0.01, VI=0.99, JI=0.98; c. GCE=0.01, VI=0.99, JI=0.98
Image 2: a. GCE=0.12, VI=0.87, JI=0.83; b. GCE=0.02, VI=0.98, JI=0.97; c. GCE=0.04, VI=0.96, JI=0.95
Image 3: a. GCE=0.11, VI=0.88, JI=0.86; b. GCE=0.02, VI=0.97, JI=0.97; c. GCE=0.04, VI=0.96, JI=0.95
B. Dataset
The Berkeley image database [37] consists of around 12,000
hand-labeled segmentations of 1,000 Corel dataset from 30
human subjects. This database was available publicly for
research on image segmentation and boundary. Using the
images from the Berkeley image database [37], the images are
divided into non-complex and complex images. Non-complex
images are images whereby the color of the objects of interest
and background are not similar while for complex images, the
objects of interest and the background are having akin color.
In some of the complex images, parts of the color of the object
of interest are present in the background.
In the first stage, the user input, which consists of a bounding box
to locate the range of the object of interest and a stroke for the
foreground, is drawn on the selected images. Next, these user-scribbled
images are input into the three different interactive segmentation
algorithms. The three evaluation parameters, Variation of Information
(VI), Global Consistency Error (GCE) and Jaccard index (JI), are then
calculated for the segmentation results.
Figure 10. Qualitative and quantitative segmentation results obtained
using the complex images for: a. Robust Interactive Segmentation via
Graph-based Manifold Ranking [38], b. Nonparametric higher-order
learning for interactive segmentation [36], and c. Interactive image
segmentation by maximal similarity based region merging [39]. The
quantitative values shown in the figure are:

Image 1: a. GCE=0.07, VI=0.88, JI=0.50; b. GCE=0.04, VI=0.91, JI=0.37; c. GCE=0.05, VI=0.94, JI=0.62
Image 2: a. GCE=0.13, VI=0.82, JI=0.62; b. GCE=0.08, VI=0.81, JI=0.36; c. GCE=0.07, VI=0.92, JI=0.77
Image 3: a. GCE=0.10, VI=0.77, JI=0.34; b. GCE=0.04, VI=0.93, JI=0.55; c. GCE=0.03, VI=0.95, JI=0.70
C. Results and Discussions
For comparison purposes, three non-complex images and three complex
images were used. Figure 8 shows the images in these two categories.
In this figure, images a. to c. are non-complex images in which the
background and the object of interest can be differentiated easily.
Images d. to f. are complex images; in these images, parts of the
objects and the background have very similar color features. The
unified bounding box and foreground stroke from the user input are
also shown in this figure. A fair comparison is ensured in the
experiments, as the length and location of the stroke as well as the
size of the bounding box used for the three algorithms are the same.
It is worth mentioning that the over-segmentation technique used in
each algorithm remains unchanged, i.e. no modification was made to the
original algorithms in terms of the over-segmentation technique used.
1) Non-complex Images
For the non-complex images, the segmentation results obtained using
the three interactive segmentation algorithms are shown in Figure 9.
Using the three evaluation parameters, it can be concluded that all
three interactive segmentation algorithms can extract the object of
interest with satisfactory results, with the worst result being
JI=0.83 and GCE=0.12 and the best being JI=0.98 and GCE=0.01.
Nonparametric higher-order learning for interactive segmentation
(NHL) [36] and Interactive image segmentation by maximal similarity
based region merging (MSRM) [39] perform better than Robust
Interactive Segmentation via Graph-based Manifold Ranking (RGMR) [38],
with NHL and MSRM achieving JI values of at least 0.95 and GCE values
below 0.05. RGMR, on the other hand, only achieved its best result on
the bush image, with JI=0.95 and GCE=0.02; for the other two
non-complex images, RGMR only manages an average JI of around 0.85 and
an average GCE of around 0.12. Visual inspection shows that the
objects extracted by the three algorithms mostly do not include the
background of the image. However, details of the object, for example
in the image of the vase, are partly missed.
2) Complex Images
For the complex images, as shown in Figure 10, all three algorithms
could not perform as well as on the non-complex images. The best
interactive segmentation algorithm among the three is MSRM, which
could still produce segmentation results with JI above 0.60 and GCE
below 0.10 for all three complex images used. RGMR produces the
highest GCE of the three algorithms for all three complex images. In
terms of GCE, the ranking of the algorithms from low to high values
is: MSRM, NHL and RGMR; in terms of JI, however, the ranking from low
to high is: NHL, RGMR and MSRM. Visual inspection shows that the
segmentations include parts of the background, which means that the
three algorithms fail to differentiate between the object of interest
and the background for complex images.
IV. Conclusion
This paper uses a unified input, consisting of a bounding box to
locate the range of the object of interest and a stroke in the
foreground, as the user input for three interactive segmentation
algorithms: RGMR, NHL and MSRM. The images used are divided into
non-complex and complex images. For each test image, the unified input
type proposed in this paper, i.e. a bounding box to locate the object
of interest and a stroke placed on the foreground, was drawn. In other
words, the location and size/length of the bounding box and stroke are
the same across the three algorithms for each test image. In addition,
to better compare and evaluate the results, three evaluation
parameters are used for all three interactive algorithms, i.e. GCE, VI
and JI. It was found that all three interactive segmentation
algorithms perform well for non-complex images, with JI values of more
than 0.80; MSRM and NHL outperform RGMR for all three non-complex
images. However, for complex images, only MSRM can achieve JI values
of more than 0.60 and GCE values of less than 0.10. RGMR performs the
worst among the three algorithms for complex images, with the highest
GCE value of 0.13 and the lowest JI value of 0.34. It can be concluded
that the MSRM algorithm produces more consistent segmentation results
than the NHL and RGMR algorithms, although for complex images the
results obtained using MSRM still deteriorate compared to extracting
objects of interest in non-complex images. In addition, for MSRM, it
can also be generalized that the influence of the user input type on
the segmentation results for complex images is not as large as for the
NHL and RGMR algorithms. Visual inspection of the segmentation results
reveals that, for complex images, the three interactive segmentation
algorithms fail to differentiate the object of interest from the
background, and this results in background being included in the final
segmentation results.
The proposed unified bounding box and single stroke used in this
research suggest a way to extract objects of interest with less
required input. However, from the experiments in this study, it can be
concluded that all three interactive segmentation algorithms could not
perform well in extracting the object of interest from complex images,
whereas the extraction of the object of interest is good for
non-complex images. In the future, we plan to experiment with more
complex images and more interactive segmentation algorithms to verify
the results obtained.
Acknowledgment
The authors would like to thank the Faculty of Computer
Science and Information Technology (FCSIT), University
Malaysia Sarawak (UNIMAS) for providing the facilities
needed.
References
[1] Forsyth, D.A. and J. Ponce, Computer vision: a modern
approach. 2002: Prentice Hall Professional Technical Reference.
[2] Heckel, F., et al., On the evaluation of segmentation
editing tools. Journal of Medical Imaging, 2014. 1(3): p.
034005.
[3] Peng, B., L. Zhang, and D. Zhang, Automatic image
segmentation by dynamic region merging. IEEE
Transactions on image processing, 2011. 20(12): p.
3592-3605.
[4] Wang, F., X. Wang, and T. Li. Efficient label propagation
for interactive image segmentation. in Machine Learning
and Applications, 2007. ICMLA 2007. Sixth International
Conference on. 2007. IEEE.
[5] Jia, Y. and C. Zhang. Learning distance metric for
semi-supervised image segmentation. in Image
Processing, 2008. ICIP 2008. 15th IEEE International
Conference on. 2008. IEEE.
[6] Jiang, X., A. Große, and K. Rothaus, Interactive
segmentation of non-star-shaped contours by dynamic
programming. Pattern Recognition, 2011. 44(9): p.
2008-2016.
[7] Nguyen, T.-V., et al. Efficient image segmentation
incorporating photometric and geometric information. in
Proceedings of the International MultiConference of
Engineers and Computer Scientists. 2011.
[8] Yang, W., et al., User-friendly interactive image
segmentation through unified combinatorial user inputs.
IEEE Transactions on image processing, 2010. 19(9): p.
2470-2479.
[9] Xu, J., X. Chen, and X. Huang. Interactive image
segmentation by semi-supervised learning ensemble. in
Knowledge Acquisition and Modeling, 2008. KAM'08.
International Symposium on. 2008. IEEE.
[10] McGuinness, K. and N.E. O’connor, A comparative
evaluation of interactive segmentation algorithms.
Pattern Recognition, 2010. 43(2): p. 434-444.
[11] Vicente, S., V. Kolmogorov, and C. Rother. Joint
optimization of segmentation and appearance models. in
Computer Vision, 2009 IEEE 12th International
Conference on. 2009. IEEE.
[12] Santner, J., T. Pock, and H. Bischof. Interactive
multi-label segmentation. in Asian Conference on
Computer Vision. 2010. Springer.
[13] Gulshan, V., et al. Geodesic star convexity for interactive
image segmentation. in Computer Vision and Pattern
Recognition (CVPR), 2010 IEEE Conference on. 2010.
IEEE.
[14] Nickisch, H., et al. Learning an interactive segmentation
system. in Proceedings of the Seventh Indian Conference
on Computer Vision, Graphics and Image Processing.
2010. ACM.
[15] Malmberg, F., Graph-based methods for interactive
image segmentation, 2011, Acta Universitatis
Upsaliensis.
[16] He, J., C.-S. Kim, and C.-C.J. Kuo, Interactive
Segmentation Techniques: Algorithms and Performance
Evaluation. 2013: Springer Science & Business Media.
[17] Vezhnevets, V. and V. Konouchine. GrowCut:
Interactive multi-label ND image segmentation by
cellular automata. in proc. of Graphicon. 2005. Citeseer.
[18] Mu, Y., B. Zhou, and S. Yan, Information-theoretic
analysis of input strokes in visual object cutout. IEEE
Transactions on Multimedia, 2010. 12(8): p. 843-852.
[19] Wu, X. and Y. Wang. Interactive foreground/background
segmentation based on graph cut. in Image and Signal
Processing, 2008. CISP'08. Congress on. 2008. IEEE.
[20] Liu, M., S. Chen, and J. Liu. Precise object cutout from
images. in Proceedings of the 16th ACM international
conference on Multimedia. 2008. ACM.
[21] Kim, T.H., K.M. Lee, and S.U. Lee. Generative image
segmentation using random walks with restart. in
European conference on computer vision. 2008.
Springer.
[22] Liu, L., et al., A variational model and graph cuts
optimization for interactive foreground extraction. Signal
Processing, 2011. 91(5): p. 1210-1215.
[23] Xiang, S., et al., Interactive image segmentation with
multiple linear reconstructions in windows. IEEE
Transactions on Multimedia, 2011. 13(2): p. 342-352.
[24] Lempitsky, V., et al. Image segmentation with a bounding
box prior. in Computer Vision, 2009 IEEE 12th
International Conference on. 2009. IEEE.
[25] Rother, C., V. Kolmogorov, and A. Blake. Grabcut:
Interactive foreground extraction using iterated graph
cuts. in ACM transactions on graphics (TOG). 2004.
ACM.
[26] Liu, D., et al. Fast interactive image segmentation by
discriminative clustering. in Proceedings of the 2010
ACM multimedia workshop on Mobile cloud media
computing. 2010. ACM.
[27] Lee, S.H., H.I. Koo, and N.I. Cho, Image segmentation
algorithms based on the machine learning of features.
Pattern Recognition Letters, 2010. 31(14): p. 2325-2336.
[28] Friedland, G., K. Jantz, and R. Rojas. Siox: Simple
interactive object extraction in still images. in
Multimedia, Seventh IEEE International Symposium on.
2005. IEEE.
[29] Mičušík, B. and A. Hanbury. Steerable semi-automatic
segmentation of textured images. in Scandinavian
Conference on Image Analysis. 2005. Springer.
[30] Arbeláez, P. and L. Cohen. Constrained image
segmentation from hierarchical boundaries. in Computer
Vision and Pattern Recognition, 2008. CVPR 2008. IEEE
Conference on. 2008. IEEE.
[31] Demjen, E. and J. Marek. Interactive algorithm for
accurate segmentation of fiber-like objects. in Image and
Signal Processing and Analysis, 2009. ISPA 2009.
Proceedings of 6th International Symposium on. 2009.
IEEE.
[32] Kang, H.W., G-wire: A livewire segmentation algorithm
based on a generalized graph formulation. Pattern
Recognition Letters, 2005. 26(13): p. 2042-2051.
[33] Zadicario, E., et al. Boundary snapping for robust image
cutouts. in Computer Vision and Pattern Recognition,
2008. CVPR 2008. IEEE Conference on. 2008. IEEE.
[34] de Miranda, P.A., A.X. Falcão, and J.K. Udupa,
Synergistic arc-weight estimation for interactive image
segmentation using graphs. Computer Vision and Image
Understanding, 2010. 114(1): p. 85-99.
[35] Chai, S.S. and K.L. Goh, Evaluation of Different User
Input Types in Interactive Segmentation. Journal of
Telecommunication, Electronic and Computer
Engineering (JTEC), 2017. 9(2-10): p. 91-99.
[36] Kim, T.H., K.M. Lee, and S.U. Lee. Nonparametric
higher-order learning for interactive segmentation. in
Computer Vision and Pattern Recognition (CVPR), 2010
IEEE Conference on. 2010. IEEE.
[37] Martin, D., et al. A database of human segmented natural
images and its application to evaluating segmentation
algorithms and measuring ecological statistics. in
Computer Vision, 2001. ICCV 2001. Proceedings. Eighth
IEEE International Conference on. 2001. IEEE.
[38] Li, H., W. Wu, and E. Wu, Robust interactive image
segmentation via graph-based manifold ranking.
Computational Visual Media, 2015. 1(3): p. 183-195.
[39] Ning, J., et al., Interactive image segmentation by
maximal similarity based region merging. Pattern
Recognition, 2010. 43(2): p. 445-456.
[40] Sharma, M. and V. Chouhan, Objective evaluation
parameters of image segmentation algorithms.
International Journal of Engineering and Advanced
Technology (IJEAT), 2012. 2(2): p. 2249-8958.
[41] Comaniciu, D. and P. Meer, Mean shift: A robust
approach toward feature space analysis. IEEE
Transactions on pattern analysis and machine intelligence,
2002. 24(5): p. 603-619.
Author Biographies
Soo See Chai is presently working as a senior
lecturer at Department of Computing and
Software Engineering at Faculty of Computer
Science and Information Technology (FCSIT)
in University Malaysia Sarawak (UNIMAS).
Her research areas are GIS, Remote Sensing,
AI and Image Processing.
Kok Luong Goh obtained his Master of
Science in Information Technology from the
FCSIT, UNIMAS. He is currently working
as a lecturer at the International College of
Advanced Technology Sarawak (i-CATS),
Kuching.
Wang Hui Hui is currently a Senior Lecturer
at the Faculty of Computer Science and
Information Technology, University
Malaysia Sarawak (UNIMAS). She received
her Ph.D in Computer Science from the
Universiti Teknologi Malaysia. Her main
research interest is in image processing.
Wang Yin Chai has been working as a
Professor at the Faculty of Computer Science
and Information Technology, UNIMAS, since
2007. He has more than 24 years of
experience in research and development. His
areas of interest include AI, image
processing and GIS analysis. He has
published more than 160 publications and
led consultation projects amounting to more
than 20 million.