
Page 1:

Critical Issues with Respect to Clustering

Lecture Notes for Chapter 8

Introduction to Data Mining by

Tan, Steinbach, Kumar

© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004

All covered in Part 1 of Clustering in 2013!

Significantly modified and extended by Ch. Eick

Page 2:

What is Cluster Analysis?

Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

Intra-cluster distances are minimized; inter-cluster distances are maximized.
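As a quick, hypothetical illustration (not from the slides), both quantities can be computed in base R for toy data with two obvious groups; all names below are made up for the example:

# toy data: two well-separated groups of 10 points each
set.seed(1)
g1 <- matrix(rnorm(20, mean = 0), ncol = 2)
g2 <- matrix(rnorm(20, mean = 5), ncol = 2)
intra <- mean(dist(g1))                                      # average intra-cluster distance within group 1
inter <- mean(as.matrix(dist(rbind(g1, g2)))[1:10, 11:20])   # average inter-cluster distance
c(intra = intra, inter = inter)                              # intra comes out much smaller than inter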

Page 3:

Notion of a Cluster can be Ambiguous

How many clusters?

[Figure: the same set of points interpreted as two, four, or six clusters]

Page 4:

Types of Clusterings

A clustering is a set of clusters

Important distinction between hierarchical and partitional sets of clusters

Partitional Clustering – A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

Hierarchical clustering – A set of nested clusters organized as a hierarchical tree
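A minimal base-R sketch of the distinction, on made-up toy data: kmeans() produces a partitional clustering (one flat label per object), while hclust() produces a hierarchy that must be cut to obtain a partition.

# toy data: two groups
set.seed(2)
pts <- rbind(matrix(rnorm(20, 0), ncol = 2), matrix(rnorm(20, 5), ncol = 2))
km <- kmeans(pts, centers = 2)   # partitional: each object in exactly one cluster
km$cluster                       # flat cluster labels
hc <- hclust(dist(pts))          # hierarchical: nested clusters as a tree
plot(hc)                         # the dendrogram
cutree(hc, k = 2)                # cutting the tree yields a partition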

Page 5:

Partitional Clustering

[Figure: the Original Points, and A Partitional Clustering of them]

Page 6:

Hierarchical Clustering

[Figure: a traditional hierarchical clustering of points p1–p4 with its traditional dendrogram, and a non-traditional hierarchical clustering of the same points with its non-traditional dendrogram]

Page 7:

Other Distinctions Between Sets of Clusters

Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters
– Can represent multiple classes or ‘border’ points

Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– The weights for each point must sum to 1 (see the sketch after this list)
– Probabilistic clustering has similar characteristics

Partial versus complete
– In some cases, we only want to cluster some of the data

Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
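The fuzzy case can be made concrete with fuzzy c-means; a sketch assuming the e1071 package is installed (its cmeans() function implements fuzzy c-means), on made-up toy data:

library(e1071)
set.seed(3)
pts <- rbind(matrix(rnorm(20, 0), ncol = 2), matrix(rnorm(20, 5), ncol = 2))
fcm <- cmeans(pts, centers = 2)   # fuzzy c-means
round(fcm$membership, 2)          # one weight per (point, cluster) pair, each in [0,1]
rowSums(fcm$membership)           # the weights of every point sum to 1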

Page 8:

Types of Clusters

Well-separated clusters

Center-based clusters

Contiguous clusters

Density-based clusters

Property or Conceptual

Described by an Objective Function

Skip/Part2

Page 9:

Types of Clusters: Well-Separated

Well-Separated Clusters: – A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster.

3 well-separated clusters

Page 10:

Types of Clusters: Center-Based

Center-based – A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of its cluster than to the center of any other cluster

– The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster

4 center-based clusters

Page 11:

Types of Clusters: Contiguity-Based

Contiguous Cluster (Nearest neighbor or Transitive) – A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

8 contiguous clusters

Page 12:

Types of Clusters: Density-Based

Density-based – A cluster is a dense region of points, separated from other regions of high density by regions of low density

– Used when the clusters are irregular or intertwined, and when noise and outliers are present

6 density-based clusters

Page 13:

Types of Clusters: Conceptual Clusters

Shared Property or Conceptual Clusters – Finds clusters that share some common property or represent a particular concept.

2 Overlapping Circles

Page 14:

Types of Clusters: Objective Function

Clusters Defined by an Objective Function – Finds clusters that minimize or maximize an objective function

– Enumerating all possible ways of dividing the points into clusters and evaluating the ‘goodness’ of each potential set of clusters with the given objective function is NP-hard

– Can have global or local objectives: hierarchical clustering algorithms typically have local objectives; partitional algorithms typically have global objectives

– A variation of the global objective function approach is to fit the data to a parameterized model, whose parameters are determined from the data. Mixture models, for example, assume that the data is a ‘mixture’ of a number of statistical distributions.
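For K-means the objective is global: the total within-cluster sum of squares. A small base-R sketch (made-up toy data) showing that single random initializations of this objective land in different local optima:

set.seed(4)
pts <- rbind(matrix(rnorm(20, 0), ncol = 2), matrix(rnorm(20, 5), ncol = 2))
sse <- replicate(10, kmeans(pts, centers = 3, nstart = 1)$tot.withinss)
sse        # several distinct values, i.e. several local optima
min(sse)   # the best solution among the ten runs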

Page 15:

Types of Clusters: Objective Function …

Map the clustering problem to a different domain and solve a related problem in that domain

– The proximity matrix defines a weighted graph, where the nodes are the points being clustered and the weighted edges represent the proximities between points

– Clustering is then equivalent to breaking the graph into connected components, one for each cluster

– We want to minimize the edge weight between clusters and maximize the edge weight within clusters
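A hedged sketch of this graph view, assuming the igraph package is installed: threshold the proximity (here: distance) matrix into edges and read the clusters off the connected components. The threshold value is ad hoc for the toy data.

library(igraph)
set.seed(5)
pts <- rbind(matrix(rnorm(20, 0), ncol = 2), matrix(rnorm(20, 5), ncol = 2))
dm <- as.matrix(dist(pts))
adj <- (dm < 2.5) * 1                             # keep only edges between nearby points
diag(adj) <- 0
g <- graph_from_adjacency_matrix(adj, mode = "undirected")
components(g)$membership                          # one connected component per cluster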

Page 16:

Two different K-means Clusterings

[Figure: three scatter plots of the same data: the Original Points, an Optimal Clustering, and a Sub-optimal Clustering]

Resume!

Page 17:

Importance of Choosing Initial Centroids

[Figure: Iterations 1–6 of K-means starting from a good choice of initial centroids]

Page 18:

Importance of Choosing Initial Centroids

[Figure: Iterations 1–6 of the K-means run, shown as a grid of scatter plots]

Page 19:

Evaluating K-means Clusters

Most common measure is the Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest cluster; to get the SSE, we square these errors and sum them:

SSE = \sum_{i=1}^{K} \sum_{x \in C_i} dist^2(m_i, x)

– x is a data point in cluster C_i and m_i is the representative point for cluster C_i; one can show that m_i corresponds to the center (mean) of the cluster

– Given two clusterings, we can choose the one with the smaller error

– One easy way to reduce the SSE is to increase K, the number of clusters; even so, a good clustering with a smaller K can have a lower SSE than a poor clustering with a higher K

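A small base-R check of this formula on made-up toy data: the hand-computed SSE agrees with the tot.withinss value that kmeans() reports.

set.seed(6)
pts <- rbind(matrix(rnorm(20, 0), ncol = 2), matrix(rnorm(20, 5), ncol = 2))
km <- kmeans(pts, centers = 2)
sse <- sum((pts - km$centers[km$cluster, ])^2)   # sum over clusters of squared distances to m_i
c(by_hand = sse, reported = km$tot.withinss)     # identical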

Page 20:

Importance of Choosing Initial Centroids …

[Figure: Iterations 1–5 of K-means starting from a poor choice of initial centroids]

Page 21:

Importance of Choosing Initial Centroids …

[Figure: Iterations 1–5 of the poorly initialized K-means run, shown as a grid of scatter plots]

Page 22:

Problems with Selecting Initial Points

If there are K ‘real’ clusters, then the chance of selecting one centroid from each cluster is small.
– Chance is relatively small when K is large
– If clusters are the same size, n, then

P = \frac{\text{number of ways to select one centroid from each cluster}}{\text{number of ways to select } K \text{ centroids}} = \frac{K! \, n^K}{(Kn)^K} = \frac{K!}{K^K}

– For example, if K = 10, then probability = 10!/10^10 ≈ 0.00036

– Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t

– Consider an example of five pairs of clusters
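The arithmetic is easy to verify in R:

K <- 10
factorial(K) / K^K   # 0.00036288, i.e. the 0.00036 quoted above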

skip

Page 23:

Solutions to Initial Centroids Problem

Multiple runs – Helps, but probability is not on your side

Sample and use hierarchical clustering to determine initial centroids

Select more than k initial centroids and then select among these initial centroids – Select the most widely separated ones

Postprocessing

Bisecting K-means – Not as susceptible to initialization issues
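A base-R sketch of the first two remedies (toy data; the sample size and k are arbitrary choices for the example): nstart re-runs K-means from many random initializations and keeps the best result, and a hierarchical clustering of a sample can supply the initial centroids.

set.seed(7)
pts <- rbind(matrix(rnorm(20, 0), ncol = 2), matrix(rnorm(20, 5), ncol = 2), matrix(rnorm(20, 10), ncol = 2))
# multiple runs: keep the best of 25 random initializations
km1 <- kmeans(pts, centers = 3, nstart = 25)
# hierarchical initialization: cluster a sample, use its cluster means as seeds
idx <- sample(nrow(pts), 20)
labs <- cutree(hclust(dist(pts[idx, ])), k = 3)
seeds <- apply(pts[idx, ], 2, function(col) tapply(col, labs, mean))   # 3 x 2 matrix of cluster means
km2 <- kmeans(pts, centers = seeds)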

Page 24:

Handling Empty Clusters

Basic K-means algorithm can yield empty clusters

Several strategies for choosing a replacement centroid:
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be repeated several times
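A hedged base-R sketch of the first strategy (the helper name and interface are made up): after an assignment step, any empty cluster is re-seeded with the point that currently contributes most to the SSE.

fix_empty <- function(pts, centers, assign) {
  for (k in seq_len(nrow(centers))) {
    if (!any(assign == k)) {                        # cluster k came back empty
      err <- rowSums((pts - centers[assign, ])^2)   # each point's squared error
      far <- which.max(err)                         # the largest SSE contributor
      centers[k, ] <- pts[far, ]                    # becomes the new centroid
      assign[far] <- k
    }
  }
  list(centers = centers, assign = assign)
}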

Page 25:

Limitations of K-means

K-means has problems when clusters are of differing – Sizes

– Densities

– Non-globular shapes

K-means has problems when the data contains outliers.

Page 26:

Limitations of K-means: Differing Sizes

[Figure: the Original Points, and the K-means result (3 Clusters)]

Page 27:

Limitations of K-means: Differing Density

[Figure: the Original Points, and the K-means result (3 Clusters)]

Page 28:

Limitations of K-means: Non-globular Shapes

[Figure: the Original Points, and the K-means result (2 Clusters)]

Page 29:

Overcoming K-means Limitations

[Figure: the Original Points, and the K-means Clusters]

One solution is to use many clusters: this finds parts of clusters, which then need to be put together.

Page 30:

Overcoming K-means Limitations

[Figure: the Original Points, and the K-means Clusters]

Example: Achieved 97% purity for the Iris Dataset by running K-means with k=11!
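A sketch of how such a purity number can be reproduced (the exact value depends on the seed; the 97% is the instructor's reported result, not guaranteed by this code):

set.seed(8)
km <- kmeans(iris[, 1:4], centers = 11, nstart = 25)
tab <- table(km$cluster, iris$Species)   # clusters x classes
sum(apply(tab, 1, max)) / nrow(iris)     # purity: majority-class share per cluster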

Page 31:

Overcoming K-means Limitations

[Figure: the Original Points, and the K-means Clusters]

Page 32:

Cluster Validity

For supervised classification we have a variety of measures to evaluate how good our model is

– Accuracy, precision, recall

For cluster analysis, the analogous question is: how do we evaluate the “goodness” of the resulting clusters?

But “clusters are in the eye of the beholder”!

Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters

Part 1 Clustering 2013

Page 33:

Clusters found in Random Data

[Figure: a set of Random Points, and the clusters “found” in it by K-means, DBSCAN, and Complete Link]

Page 34:

Different Aspects of Cluster Validation

1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.

2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.

3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.

- Use only the data

4. Comparing the results of two different sets of cluster analyses to determine which is better.

5. Determining the ‘correct’ number of clusters.

For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.


Page 35:

Measures of Cluster Validity

Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types:
– External Index: Used to measure the extent to which cluster labels match externally supplied class labels. Example: entropy
– Internal Index: Used to measure the goodness of a clustering structure without respect to external information. Example: Sum of Squared Error (SSE)
– Relative Index: Used to compare two different clusterings or clusters. Often an external or internal index is used for this function, e.g., SSE or entropy

Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the numerical measure that implements the criterion


Page 36:

Framework for Cluster Validity

Need a framework to interpret any measure
– For example, if our measure of evaluation has the value 10, is that good, fair, or poor?

Statistics provide a framework for cluster validity
– The more “atypical” a clustering result is, the more likely it represents valid structure in the data
– Can compare the values of an index that result from random data or random clusterings to those of an actual clustering result: if the value of the index is unlikely under randomness, then the cluster results are valid
– These approaches are more complicated and harder to understand

For comparing the results of two different sets of cluster analyses, a framework is less necessary
– However, there is still the question of whether the difference between two index values is significant


Page 37:

Internal Measures: Cohesion and Separation

Cluster Cohesion: Measures how closely related the objects in a cluster are
– Example: SSE

Cluster Separation: Measures how distinct or well-separated a cluster is from other clusters


Page 38:

Internal Measures: Cohesion and Separation

A proximity-graph-based approach can also be used for cohesion and separation
– Cluster cohesion is the sum of the weights of all links within a cluster
– Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster


[Figure: a proximity graph; links within a cluster illustrate cohesion, links between clusters illustrate separation]

Page 39:

Example Cluster Evaluation Measures

Separation(X) = \frac{\sum_{C_i, C_j \in X,\ i \neq j} d'(C_i, C_j)}{(k^2 - k)/2}

Cohesion(X) = \frac{\sum_{C \in X} COMP(C) \cdot |C|}{|O|}

d_{avg}(C_i, C_j) = \frac{\sum_{x \in C_i} \sum_{y \in C_j} d(x, y)}{|C_i| \cdot |C_j|}

d_{centroid}(C_i, C_j) = d(c_i, c_j)

d'(C_i, C_j) = d_{avg}(C_i, C_j) \text{ or } d_{centroid}(C_i, C_j)

COMP(C) = \frac{\sum_{o, o' \in C} d(o, o')}{|C| \cdot |C|}

Remark: k is the number of clusters and X = {C_1, …, C_k} are the clusters found by the clustering algorithm; c_i refers to the centroid of the i-th cluster, and O denotes the set of all objects.

skip

Page 40:

Internal Measures: Silhouette Coefficient

Silhouette Coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings

For an individual point i:
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for the point is then given by: s = (b-a)/max(a,b)
– Typically between 0 and 1; the closer to 1 the better; negative values are pathological

Can calculate the average silhouette width for a cluster or for the whole clustering




Page 42:

Silhouette in R

require("cluster")

setwd("C:\\Users\\C. Eick\\Desktop")

a<-read.csv("c8.csv")

d<-data.frame(a=a[,1],b=a[,2],c=a[,3])

d<-data.frame(a=d$a,b=d$b,c=d$c+1)

plot(d$a,d$b,col=c('grey','blue','green','brown','black','orange','red','purple')[d$c])

set.seed(2)

y<-pam(d[1:2],10)

plot(d$a,d$b,col=c('gray','blue','green','brown','black','orange','red','purple',

'aquamarine','coral','slateblue')[y$cluster])

s<-silhouette(y)

ss<-summary(s)

ss$si.summary

ss

s[1,]

s[,3]



Page 43:

Silhouette in R

#Computing the silhouette for DBSCAN and other clusterings is more complicated:
#you need to create a distance matrix for the dataset and pass it to silhouette().
#In general, the noise "cluster" 0 needs to be removed before computing the
#silhouette, which is not done in the code below!
require('ggplot2')                      # loaded in the original session; only fpc is needed here
require('lattice')
require('fpc')                          # provides dbscan()
y<-dbscan(d[1:2], 22, 20, showplot=1)   # eps=22, MinPts=20
y                                       # cluster sizes; cluster 0 is the noise "cluster"
dm<-dist(d[1:2])                        # distance matrix for the whole dataset
s<-silhouette(y$cluster,dist=dm)
ss<-summary(s)
ss                                      # too many clusters have a negative silhouette; there should be only one; something looks wrong!
#Let us take a look at cluster 4, which has a negative silhouette
cl<-data.frame(a=d$a,b=d$b,cluster=y$cluster)
dd<-subset(cl, cluster==4)              # the original used a frame named c and "c==4", which shadows base::c and compares the whole frame
plot(dd$a,dd$b)
#If you take a look you understand why the silhouette of cluster 4 is negative, although the cluster is "really good"!



Summary: The silhouette is a reasonable evaluation measure for clustering algorithms that find globular shapes, such as K-means and PAM, but it is of limited use for DBSCAN, which finds clusters of non-globular shape.

Page 44:

External Measures of Cluster Validity: Entropy and Purity
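The slide's worked table did not survive the transcript. As an illustrative sketch (not the slide's own code, and the helper name is made up), both measures can be computed from a clusters-by-classes contingency table:

purity_entropy <- function(cluster, class) {
  tab <- table(cluster, class)                       # counts n_ij
  p <- tab / rowSums(tab)                            # class distribution within each cluster
  ent <- -rowSums(ifelse(p > 0, p * log2(p), 0))     # entropy of each cluster
  w <- rowSums(tab) / sum(tab)                       # clusters weighted by size
  list(purity = sum(apply(tab, 1, max)) / sum(tab),  # purity: majority-class share
       entropy = sum(w * ent))                       # weighted average entropy
}
# e.g.: purity_entropy(kmeans(iris[, 1:4], 3)$cluster, iris$Species)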

Page 45:

Final Comment on Cluster Validity

“The validation of clustering structures is the most difficult and frustrating part of cluster analysis.

Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage.”

Algorithms for Clustering Data, Jain and Dubes


Page 46:

Summary Cluster Evaluation

Clustering Tendency: Compare the dataset's distribution with a random distribution (→ textbook)

Comparing clustering results:
– Use internal measures, e.g. the silhouette
– Use external measures, e.g. purity, a domain expert, or pre-classified examples (the last approach is called semi-supervised clustering: an expert assigns link/do-not-link constraints to some examples, which are used to evaluate clusterings and/or to guide the search for good clusters)

How many clusters? (→ textbook)