
Assessment

The schedule graph may be of help for selecting the best solution. The best solution corresponds to a plateau before a high jump.

Solutions with very small or even singleton clusters are rather suspicious.


• Standardization
• k-means
• Initial centroids
• Validation example
• Cluster merit index
• Cluster validation, three approaches
• Relative criteria
• Validity index
• Dunn index
• Davies-Bouldin (DB) index
• Combination of different distances/diameter methods

Standardization Issue

The need for any standardization must be questioned. If the interesting clusters are based on the original features, then any standardization method may distort those clusters.

Only when there are grounds to search for clusters in a transformed space should some standardization rule be used.

There is no methodological way to choose one except by “trial and error”.

$$y_i = \frac{x_i - \mu}{\sigma}$$

$$y_i = \frac{x_i}{\max(x_i) - \min(x_i)}$$

$$y_i = \frac{x_i - \min(x_i)}{\max(x_i) - \min(x_i)}$$

An easy standardization method that often achieves good results is simple division (or multiplication) by a scale factor a:

$$y_i = \frac{x_i}{a}$$

The factor a should be properly chosen so that all feature values occupy a suitable interval.
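To make these rules concrete, here is a minimal sketch in Python/NumPy; the function name and method labels are illustrative, not from the slides:

```python
import numpy as np

def standardize(x, method="zscore", a=1.0):
    # Standardize a 1-D feature vector x using one of the rules above.
    x = np.asarray(x, dtype=float)
    if method == "zscore":    # y_i = (x_i - mu) / sigma
        return (x - x.mean()) / x.std()
    if method == "range":     # y_i = x_i / (max - min)
        return x / (x.max() - x.min())
    if method == "minmax":    # y_i = (x_i - min) / (max - min)
        return (x - x.min()) / (x.max() - x.min())
    if method == "scale":     # y_i = x_i / a, with a chosen by the user
        return x / a
    raise ValueError(f"unknown method: {method}")
```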

k-means Clustering

Cluster centers $c_1, c_2, \ldots, c_k$ with clusters $C_1, C_2, \ldots, C_k$; distances are measured with the Euclidean metric:

$$d_2(\vec{x}, \vec{z}) = \left( \sum_{i=1}^{d} (x_i - z_i)^2 \right)^{1/2}$$

Initial centroids

Specify which patterns are used as initial centroids (a sketch of some of these strategies follows below):

• Random initialization
• Tree clustering on a reduced number of patterns may be performed for this purpose
• Choose the first k patterns as initial centroids
• Sort the distances between all patterns and choose patterns at constant intervals of these distances as initial centroids
• Adaptive initialization (according to a chosen radius)
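A sketch of some of these strategies, assuming the patterns are the rows of a NumPy array X; the `spread` branch is one possible reading of the constant-intervals rule, here spacing patterns by their distance to the overall mean:

```python
import numpy as np

def init_centroids(X, k, method="random", seed=None):
    # Pick k initial centroids from the n-by-d pattern matrix X.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    if method == "random":   # k patterns drawn at random
        return X[rng.choice(n, size=k, replace=False)]
    if method == "first":    # the first k patterns
        return X[:k].copy()
    if method == "spread":   # patterns at constant intervals of sorted distances
        d = np.linalg.norm(X - X.mean(axis=0), axis=1)
        order = np.argsort(d)
        return X[order[np.linspace(0, n - 1, k).astype(int)]]
    raise ValueError(f"unknown method: {method}")
```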

k-means example (Sá 2001)

The overall within-cluster distance:

$$E = \sum_{j=1}^{k} \sum_{\vec{x} \in C_j} d^2(\vec{x}, c_j)$$
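A minimal k-means sketch that computes this criterion E; it assumes Euclidean d2 and that no cluster becomes empty during the iterations:

```python
import numpy as np

def kmeans(X, c, n_iter=100):
    # Plain k-means: alternate assignment and centroid update,
    # then report the overall within-cluster distance E.
    k = len(c)
    for _ in range(n_iter):
        # assign each pattern to its nearest centroid
        dist = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each centroid to the mean of its cluster
        # (assumes no cluster becomes empty)
        c = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    E = sum(((X[labels == j] - c[j]) ** 2).sum() for j in range(k))
    return c, labels, E
```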

Cluster merit index R (n patterns in k clusters):

$$E_k = \sum_{j=1}^{k} \sum_{\vec{x} \in C_j} d^2(\vec{x}, c_j)$$

$$R(k+1) = \left( \frac{E_k}{E_{k+1}} - 1 \right) (n - k - 1)$$

The cluster merit index measures the decrease in overall within-cluster distance when passing from a solution with k clusters to one with k+1 clusters.

A high value of the merit index indicates a substantial decrease in overall within-cluster distance.
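Given the E values for successive solutions, R(k+1) is a one-liner; `E` here is an assumed dict mapping the number of clusters to E_k, e.g. collected from runs of the k-means sketch above:

```python
def merit_index(E, k, n):
    # R(k+1) = (E_k / E_{k+1} - 1) * (n - k - 1); a large value
    # indicates a substantial drop in within-cluster distance.
    return (E[k] / E[k + 1] - 1.0) * (n - k - 1)
```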


Cluster merit index

Factor 1 has the most important contribution. The values k = 3, 5, 8 are sensible choices; k = 3 is attractive.

[Plot: merit index R(k+1) versus k+1, for k+1 = 1 to 7; values range from about −500 to 2500]

Cluster validation

The procedure of evaluating the results of a clustering algorithm is known under the term cluster validity

In general terms, there are three approaches to investigate cluster validity

The first is based on external criteria. This implies that we evaluate the results of a clustering algorithm based on a pre-specified structure, which is imposed on the data set and reflects our intuition about the clustering structure of the data set.

Error Classification Rate (ECR): a smaller value indicates a better representation.

Data partition according to known classes $L_i$:

$$L = \{ L_1, L_2, \ldots, L_G \}$$

$$\varphi_L(C_j) := \max_{i=1 \ldots G} \left( |C_j \cap L_i| \right)$$

$$ECR := \frac{1}{k} \sum_{j=1}^{k} \left( |C_j| - \varphi_L(C_j) \right)$$
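A direct translation of the formula, assuming `labels` holds each pattern's cluster index and `classes` its known class, both as non-negative integer NumPy arrays:

```python
import numpy as np

def ecr(labels, classes, k):
    # For each cluster, count the members outside the majority class,
    # then average over the k clusters.
    total = 0
    for j in range(k):
        members = classes[labels == j]   # true classes inside C_j
        if members.size:
            total += members.size - np.bincount(members).max()
    return total / k
```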

The second approach is based on internal criteria

We may evaluate the results of a clustering algorithm in terms of quantities that involve the vectors of the data set themselves (e.g. proximity matrix)

Proximity matrix

Dissimilarity matrix, with zeros on the diagonal:

$$\begin{bmatrix}
0 & & & & \\
d(2,1) & 0 & & & \\
d(3,1) & d(3,2) & 0 & & \\
\vdots & \vdots & & \ddots & \\
d(n,1) & d(n,2) & \cdots & & 0
\end{bmatrix}$$
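The full symmetric matrix of pairwise Euclidean distances (whose lower triangle is shown above) can be built in a few lines:

```python
import numpy as np

def dissimilarity_matrix(X):
    # n-by-n matrix with d(i, j) in each cell and zeros on the diagonal
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))
```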

The basis of the validation methods described above is often statistical testing.

A major drawback of techniques based on internal or external criteria and statistical testing is their high computational demand.

The third approach of clustering validity is based on relative criteria

Here the basic idea is the evaluation of a clustering structure by comparing it to other clustering schemes produced by the same algorithm, but with different parameter values.

There are two criteria proposed for clustering evaluation and selection of an optimal clustering scheme (Berry and Linoff, 1996):

• Compactness: the members of each cluster should be as close to each other as possible. A common measure of compactness is the variance, which should be minimized.
• Separation: the clusters themselves should be widely spaced.

Distance between two clusters

There are three common approaches to measuring the distance between two different clusters:

• Single linkage: measures the distance between the closest members of the clusters
• Complete linkage: measures the distance between the most distant members
• Comparison of centroids: measures the distance between the centers of the clusters

Relative criteria

This approach is based on relative criteria and does not involve statistical tests.

The fundamental idea is to choose the best clustering scheme from a set of defined schemes according to a pre-specified criterion.

Among the clustering schemes Ci, i = 1, ..., k, defined by a specific algorithm for different values of its parameters, choose the one that best fits the data set.

The procedure of identifying the best clustering scheme is based on a validity index q

Having selected a suitable performance index q, we proceed with the following steps (a sketch of this loop follows below):

• Run the clustering algorithm for all values of k between a minimum kmin and a maximum kmax, defined a priori by the user
• For each value of k, run the algorithm r times, using a different set of values for the other parameters of the algorithm (e.g. different initial conditions)
• Plot the best value of the index q obtained for each k as a function of k
• Based on this plot, identify the best clustering scheme
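The loop might look like the following sketch, where `cluster` and `validity_index` stand for any algorithm/index pair with the assumed signatures shown:

```python
def best_index_per_k(X, cluster, validity_index, kmin=2, kmax=10, r=10):
    # For each k, run the algorithm r times with different initial
    # conditions and keep the best value of the index q.
    best = {}
    for k in range(kmin, kmax + 1):
        runs = [validity_index(X, cluster(X, k, seed=s)) for s in range(r)]
        best[k] = max(runs)  # use min(runs) for indices to be minimized
    return best              # plot best[k] versus k and look for a knee
```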

There are two approaches for defining the best clustering depending on the behavior of q with respect to k

If the validity index does not exhibit an increasing or decreasing trend as k increases, we seek the maximum (minimum) of the plot.

For indices that increase (or decrease) as the number of clusters increases, we search for the values of k at which a significant local change in the value of the index occurs.

This change appears as a “knee” in the plot and is an indication of the number of clusters underlying the data set.

The absence of a knee may be an indication that the data set possesses no clustering structure.

Validity index

Dunn index, a cluster validity index for k-means clustering proposed in Dunn (1974)

Attempts to identify “compact and well separated clusters”

Dunn index

$$d(C_i, C_j) = \min_{\vec{x} \in C_i,\ \vec{y} \in C_j} d(\vec{x}, \vec{y})$$

$$\mathrm{diam}(C_i) = \max_{\vec{x}, \vec{y} \in C_i} d(\vec{x}, \vec{y})$$

$$D_k = \min_{1 \le i \le k} \left\{ \min_{\substack{1 \le j \le k \\ j \ne i}} \left\{ \frac{d(C_i, C_j)}{\max_{1 \le l \le k} \mathrm{diam}(C_l)} \right\} \right\}$$

If the dataset contains compact and well-separated clusters, the distance between the clusters is expected to be large and the diameter of the clusters is expected to be small

Large values of the index indicate the presence of compact and well-separated clusters

The index Dk does not exhibit any trend with respect to the number of clusters

Thus, the maximum in the plot of Dk versus the number of clusters k can be an indication of the number of clusters that fits the data

The drawbacks of the Dunn index are:

• The considerable amount of time required for its computation
• Sensitivity to the presence of noise in the data set, since noise is likely to increase the values of diam(C)
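A straightforward (and, as just noted, expensive) implementation of D_k, using the single-linkage distance and max diameter from the definitions above and assuming no empty clusters:

```python
import numpy as np

def dunn_index(X, labels, k):
    clusters = [X[labels == j] for j in range(k)]
    # largest cluster diameter (max pairwise distance within a cluster)
    diam = max(np.linalg.norm(C[:, None] - C[None, :], axis=2).max()
               for C in clusters)
    # smallest between-cluster distance (closest pair across clusters)
    d_min = min(np.linalg.norm(A[:, None] - B[None, :], axis=2).min()
                for i, A in enumerate(clusters)
                for j, B in enumerate(clusters) if i != j)
    return d_min / diam
```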

The Davies-Bouldin (DB) index (1979)

With $d(C_i, C_j)$ and $\mathrm{diam}(C_i)$ defined as for the Dunn index:

$$DB_k = \frac{1}{k} \sum_{i=1}^{k} \max_{j \ne i} \left\{ \frac{\mathrm{diam}(C_i) + \mathrm{diam}(C_j)}{d(C_i, C_j)} \right\}$$

Small index values correspond to good clusterings: the clusters are compact and their centers are far apart.

The DBk index exhibits no trend with respect to the number of clusters, and thus we seek the minimum value of DBk in its plot versus the number of clusters.
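A sketch of DBk using the centroid distance and max diameter; as noted next, other distance/diameter methods can be substituted:

```python
import numpy as np

def db_index(X, labels, k):
    clusters = [X[labels == j] for j in range(k)]
    c = np.array([C.mean(axis=0) for C in clusters])        # centroids
    diam = [np.linalg.norm(C[:, None] - C[None, :], axis=2).max()
            for C in clusters]                              # max diameters
    total = 0.0
    for i in range(k):
        total += max((diam[i] + diam[j]) / np.linalg.norm(c[i] - c[j])
                     for j in range(k) if j != i)
    return total / k
```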

Different methods may be used to calculate the distance between clusters:

• Single linkage

• Complete linkage

• Comparison of centroids

• Average linkage

$$d_1(C_i, C_j) = \max_{\vec{x} \in C_i,\ \vec{y} \in C_j} d(\vec{x}, \vec{y})$$

$$d_2(C_i, C_j) = \min_{\vec{x} \in C_i,\ \vec{y} \in C_j} d(\vec{x}, \vec{y})$$

$$d_3(C_i, C_j) = d(c_i, c_j)$$

$$d_4(C_i, C_j) = \frac{1}{|C_i|\,|C_j|} \sum_{\vec{x} \in C_i} \sum_{\vec{y} \in C_j} d(\vec{x}, \vec{y})$$

Different methods may be used to calculate the diameter of a cluster:

• Max
• Radius
• Average distance

$$\mathrm{diam}_1(C_i) = \max_{\vec{x}, \vec{y} \in C_i} d(\vec{x}, \vec{y})$$

$$\mathrm{diam}_2(C_i) = \max_{\vec{x} \in C_i} d(\vec{x}, c_i)$$

$$\mathrm{diam}_3(C_i) = \frac{\sum_{l < m} d(\vec{x}_l, \vec{x}_m)}{|C_i|\,(|C_i| - 1)/2}, \quad \vec{x}_l, \vec{x}_m \in C_i$$

(A complete graph with s nodes has $s(s-1)/2$ edges, which is the number of pairwise distances averaged in diam3.)
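The four distances and three diameters translate directly into code; a compact sketch, assuming each cluster is a NumPy array with at least two rows where the formula requires it:

```python
import numpy as np

def pair_d(A, B):
    # all pairwise Euclidean distances between the rows of A and B
    return np.linalg.norm(A[:, None] - B[None, :], axis=2)

d1 = lambda A, B: pair_d(A, B).max()                     # complete linkage
d2 = lambda A, B: pair_d(A, B).min()                     # single linkage
d3 = lambda A, B: np.linalg.norm(A.mean(0) - B.mean(0))  # centroid distance
d4 = lambda A, B: pair_d(A, B).mean()                    # average linkage

diam1 = lambda A: pair_d(A, A).max()                     # max
diam2 = lambda A: pair_d(A, A.mean(0)[None]).max()       # radius
diam3 = lambda A: pair_d(A, A).sum() / (len(A) * (len(A) - 1))  # average
```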

Combination of different distances/diameter methods

It has been shown that using different distance/diameter methods may produce indices of different scale ranges (Azuaje and Bolshakova, 2002).

Normalization:

• i selects the distance method, i ∈ {1, 2, 3, 4}
• j selects the diameter method, j ∈ {1, 2, 3}
• σ(D^ij) (or σ(DB^ij)) is the standard deviation of Dk^ij (or DBk^ij) across different values of k

Normalized indices:

$$\hat{D}_k^{ij} = \frac{D_k^{ij} - \bar{D}^{ij}}{\sigma(D^{ij})}, \qquad \bar{D}^{ij} = \frac{1}{k} \sum_{l=1}^{k} D_l^{ij}$$
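The normalization is a plain z-score over the index values collected for the tested range of k:

```python
import numpy as np

def normalize_index(D):
    # D holds the values of D_k^ij (or DB_k^ij) over the tested range of k
    D = np.asarray(D, dtype=float)
    return (D - D.mean()) / D.std()
```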

Literature


See also

• J.P. Marques de Sá, Pattern Recognition, Springer, 2001
• https://www.cs.tcd.ie/publications/tech-reports/ (TCD-CS-2002-34.pdf and TCD-CS-2005-25.pdf)

• Standardization
• k-means
• Initial centroids
• Validation example
• Cluster merit index
• Cluster validation, three approaches
• Relative criteria
• Validity index
• Dunn index
• Davies-Bouldin (DB) index
• Combination of different distances/diameter methods

Next lecture

k-NN, LVQ, SOM
