MSCIT-MSCBD 5210/5002 : Knowledge Discovery and Data
Mining
Acknowledgement: Slides modified by Dr. Lei Chen based on the slides provided by Jiawei Han, Micheline Kamber, and Jian Pei, and slides provided by Raymond Wong and Tan, Steinbach, Kumar
Association Rule Mining
n Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction
Market-Basket transactions
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
Example of Association Rules
{Diaper} → {Beer}, {Milk, Bread} → {Eggs,Coke}, {Beer, Bread} → {Milk},
Implication means co-occurrence, not causality!
Definition: Frequent Itemset
n Itemset
  n A collection of one or more items
  n Example: {Milk, Bread, Diaper}
n k-itemset
  n An itemset that contains k items
n Support count (σ)
  n Frequency of occurrence of an itemset
  n E.g. σ({Milk, Bread, Diaper}) = 2
n Support (s)
  n Fraction of transactions that contain an itemset
  n E.g. s({Milk, Bread, Diaper}) = 2/5
n Frequent Itemset
  n An itemset whose support is greater than or equal to a minsup threshold
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
Definition: Association Rule
Example: {Milk, Diaper} ⇒ {Beer}

s = σ({Milk, Diaper, Beer}) / |T| = 2/5 = 0.4
c = σ({Milk, Diaper, Beer}) / σ({Milk, Diaper}) = 2/3 = 0.67
● Association Rule
  – An implication expression of the form X → Y, where X and Y are itemsets
  – Example: {Milk, Diaper} → {Beer}
● Rule Evaluation Metrics
  – Support (s)
    u Fraction of transactions that contain both X and Y
  – Confidence (c)
    u Measures how often items in Y appear in transactions that contain X
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
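To make the two metrics concrete, here is a small Python sketch (not part of the original slides; all names are illustrative) that computes the support and confidence of a rule over the market-basket example above:

# A minimal sketch: support and confidence of a rule X -> Y
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(itemset): number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(X, Y, transactions):
    """Return (support, confidence) of the rule X -> Y."""
    s = support_count(X | Y, transactions) / len(transactions)
    c = support_count(X | Y, transactions) / support_count(X, transactions)
    return s, c

print(rule_metrics({"Milk", "Diaper"}, {"Beer"}, transactions))  # (0.4, 0.666...)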
Association Rule Mining Task
n Given a set of transactions T, the goal of association rule mining is to find all rules having
  n support ≥ minsup threshold
  n confidence ≥ minconf threshold
n Brute-force approach:
  n List all possible association rules
  n Compute the support and confidence for each rule
  n Prune rules that fail the minsup and minconf thresholds
  ⇒ Computationally prohibitive!
Mining Association Rules

Example of Rules:
{Milk, Diaper} → {Beer} (s=0.4, c=0.67)
{Milk, Beer} → {Diaper} (s=0.4, c=1.0)
{Diaper, Beer} → {Milk} (s=0.4, c=0.67)
{Beer} → {Milk, Diaper} (s=0.4, c=0.67)
{Diaper} → {Milk, Beer} (s=0.4, c=0.5)
{Milk} → {Diaper, Beer} (s=0.4, c=0.5)
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
Observations: • All the above rules are binary partitions of the same itemset:
{Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support but can have different confidence
• Thus, we may decouple the support and confidence requirements
Mining Association Rules
n Two-step approach:
  1. Frequent Itemset Generation
     – Generate all itemsets whose support ≥ minsup
  2. Rule Generation
     – Generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset
n Frequent itemset generation is still computationally expensive
Frequent Itemset Generation
[Figure: the itemset lattice for d = 5 items A–E, from the null itemset at the top down to ABCDE at the bottom.]
Given d items, there are 2^d possible candidate itemsets
Frequent Itemset Generation
n Brute-force approach:
  n Each itemset in the lattice is a candidate frequent itemset
  n Count the support of each candidate by scanning the database
  n Match each transaction against every candidate
  n Complexity ~ O(NMw) ⇒ expensive since M = 2^d !!!
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

[Figure: the N transactions (each of width w) are matched against the list of M candidates.]
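As a rough illustration of the O(NMw) cost, the brute-force approach can be sketched as follows (illustrative Python, not from the slides; the function and variable names are made up):

# Brute force: enumerate every candidate itemset (2^d of them) and match
# each transaction against each candidate.
from itertools import combinations

def brute_force_frequent_itemsets(transactions, minsup_count):
    items = sorted(set().union(*transactions))            # d distinct items
    frequent = {}
    for k in range(1, len(items) + 1):                    # all candidate sizes
        for candidate in combinations(items, k):          # M = 2^d candidates overall
            cset = set(candidate)
            count = sum(1 for t in transactions if cset <= t)   # N x w work per candidate
            if count >= minsup_count:
                frequent[candidate] = count
    return frequent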
Computational Complexity
n Given d unique items:
  n Total number of itemsets = 2^d
  n Total number of possible association rules:

    R = \sum_{k=1}^{d-1} \left[ \binom{d}{k} \times \sum_{j=1}^{d-k} \binom{d-k}{j} \right] = 3^d - 2^{d+1} + 1
If d=6, R = 602 rules
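A quick way to convince yourself of the closed form is to evaluate the double sum directly; the following sketch (my own check, not part of the slides) compares it with 3^d − 2^(d+1) + 1:

from math import comb

def count_rules(d):
    # sum over antecedent sizes k and consequent sizes j, both non-empty
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

d = 6
print(count_rules(d), 3**d - 2**(d + 1) + 1)   # both print 602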
Frequent Itemset Generation Strategies
n Reduce the number of candidates (M)
  n Complete search: M = 2^d
  n Use pruning techniques to reduce M
n Reduce the number of transactions (N)
  n Reduce the size of N as the size of the itemset increases
  n Used by DHP and vertical-based mining algorithms
n Reduce the number of comparisons (NM)
  n Use efficient data structures to store the candidates or transactions
  n No need to match every candidate against every transaction
Reducing Number of Candidates
n Apriori principle:
n If an itemset is frequent, then all of its subsets must also be frequent
n Apriori principle holds due to the following property of the support measure:
n Support of an itemset never exceeds the support of its subsets
n This is known as the anti-monotone property of support
∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
Illustrating Apriori Principle
[Figure: the itemset lattice for items A–E. If AB is found to be infrequent, then all of its supersets (ABC, ABD, ABE, ABCD, ABCE, ABDE, ABCDE) are pruned.]
Illustrating Apriori Principle (Minimum Support = 3)

Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Pairs (2-itemsets) (no need to generate candidates involving Coke or Eggs):
Itemset           Count
{Bread, Milk}     3
{Bread, Beer}     2
{Bread, Diaper}   3
{Milk, Beer}      2
{Milk, Diaper}    3
{Beer, Diaper}    3

Triplets (3-itemsets):
Itemset                 Count
{Bread, Milk, Diaper}   3
If every subset is considered, 6C1 + 6C2 + 6C3 = 41
With support-based pruning, 6 + 6 + 1 = 13
Apriori Algorithm
n Method:
  n Let k = 1
  n Generate frequent itemsets of length 1
  n Repeat until no new frequent itemsets are identified
    n Generate length (k+1) candidate itemsets from length-k frequent itemsets
    n Prune candidate itemsets containing subsets of length k that are infrequent
    n Count the support of each candidate by scanning the DB
    n Eliminate candidates that are infrequent, leaving only those that are frequent
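A compact Python sketch of this loop is shown below. It is an illustration only: it assumes candidates are generated by joining pairs of frequent (k−1)-itemsets and then pruned with the Apriori principle, and the function name is made up.

from itertools import combinations

def apriori(transactions, minsup_count):
    transactions = [frozenset(t) for t in transactions]
    # L1: frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    Lk = {i for i, c in counts.items() if c >= minsup_count}
    all_frequent = set(Lk)
    k = 1
    while Lk:
        k += 1
        # join step: build candidate k-itemsets from frequent (k-1)-itemsets
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k - 1))}
        # counting step: one scan of the DB per level
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        Lk = {c for c, n in counts.items() if n >= minsup_count}
        all_frequent |= Lk
    return all_frequent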
Apriori: A Candidate Generation & Test Approach
n Apriori pruning principle: If there is any itemset which is infrequent, its superset should not be generated/tested! (Agrawal & Srikant @VLDB’94, Mannila, et al. @ KDD’ 94)
n Method:
n Initially, scan DB once to get frequent 1-itemset
n Generate length (k+1) candidate itemsets from length k frequent itemsets
n Test the candidates against DB
n Terminate when no frequent or candidate set can be generated
The Apriori Algorithm—An Example
Database TDB (Supmin = 2)

Tid  Items
10   A, C, D
20   B, C, E
30   A, B, C, E
40   B, E

1st scan → C1:
Itemset  sup
{A}      2
{B}      3
{C}      3
{D}      1
{E}      3

L1:
Itemset  sup
{A}      2
{B}      3
{C}      3
{E}      3

C2 (generated from L1):
{A, B}, {A, C}, {A, E}, {B, C}, {B, E}, {C, E}

2nd scan → C2 with counts:
Itemset  sup
{A, B}   1
{A, C}   2
{A, E}   1
{B, C}   2
{B, E}   3
{C, E}   2

L2:
Itemset  sup
{A, C}   2
{B, C}   2
{B, E}   3
{C, E}   2

C3: {B, C, E}

3rd scan → L3:
Itemset    sup
{B, C, E}  2
Reducing Number of Transactions
n We introduce Dynamic Hashing and Pruning (DHP).
  n In the k-th iteration, hash all "appearing" (k+1)-itemsets into a hash table, and count all occurrences of an itemset in the corresponding bucket to find the candidate itemsets.
  n In the (k+1)-th iteration, examine each candidate itemset to see whether its corresponding bucket count is above the support threshold (a necessary condition).
DHP
n Consider a set of items numbered i1, i2, …, in. Any pair (x, y) is hashed according to:
  n Hash function: bucket # = h({x, y}) = ((order of x)*10 + (order of y)) % 7
n Example:
  n Items = A, B, C, D, E
  n Order = 1, 2, 3, 4, 5
  n h({C, E}) = (3*10 + 5) % 7 = 0
  n Thus, {C, E} belongs to bucket 0.
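The bucket computation can be written directly; the following sketch (illustrative, with the item order A..E → 1..5 taken from the slide) reproduces h({C, E}) = 0:

order = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

def bucket(x, y):
    """h({x, y}) = ((order of x)*10 + (order of y)) % 7, with x ordered before y."""
    x, y = sorted((x, y), key=order.get)
    return (order[x] * 10 + order[y]) % 7

print(bucket("C", "E"))   # 0, so {C, E} falls into bucket 0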
DHP Example
n A set of transactions
DHP Example
n The minimum support is 2
C1:
Itemset  support
{A}      2
{B}      3
{C}      3
{D}      1
{E}      3

L1 (support ≥ 2):
Itemset  support
{A}      2
{B}      3
{C}      3
{E}      3
DHP Example
n Find all 2-itemsets of each transaction
DHP Example
n Hash function
  n h({x, y}) = ((order of x)*10 + (order of y)) % 7
n Hash table
  n We map the 2-itemsets of the transactions into the hash table
DHP Example
n We keep only the itemsets whose bucket count is at least the minimum support, which reduces the number of infrequent candidate itemsets.
DHP : Reducing the transactions
n If an item occurs in a frequent (k+1)-itemset, it must occur in at least k candidate k-itemsets (necessary but not sufficient)
n Discard an item if it does not occur in at least k candidate k-itemsets during support counting
Reducing Number of Transactions
n We introduce another technique to reduce the number of transactions: mining frequent itemsets using the vertical data format.
  n The methods introduced so far mine frequent patterns from a set of transactions in TID-itemset format, i.e., {TID : itemset}. This is known as the horizontal data format.
  n Alternatively, data can be presented in item-TID-set format (i.e., {item : TID set}). This is known as the vertical data format.
Vertical Data
n Vertical data format of the transaction database
  n Reduce the size of transactions
  n Easy to count the support
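A minimal sketch of the conversion and of support counting by TID-set intersection is shown below (illustrative Python with toy data; the transaction contents are my own, not the figure from the slide):

from collections import defaultdict

# horizontal format: {TID: itemset}
horizontal = {
    "T100": {"I1", "I2", "I5"},
    "T200": {"I2", "I4"},
    "T300": {"I2", "I3"},
}

# convert to vertical format: {item: TID set}
vertical = defaultdict(set)
for tid, items in horizontal.items():
    for item in items:
        vertical[item].add(tid)

# support count of {I1, I2} = |TIDs(I1) ∩ TIDs(I2)|
support_I1_I2 = len(vertical["I1"] & vertical["I2"])
print(dict(vertical), support_I1_I2)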
Vertical Data
n 2-Itemsets in Vertical Data Format
  n Itemsets {I1, I4} and {I3, I5} each appear in only one transaction, so they do not belong to the set of frequent 2-itemsets (min_support = 2).
Vertical Data
n 2-Itemsets in Vertical Data Format n Based on the Apriori property, a given 3-itemset is a
candidate 3-itemset only if every one of its 2-itemset subsets is frequent. The candidate generation process here will generate only two 3-itemsets: {I1, I2, I3} and {I1, I2, I5}.
Reducing Number of Comparisons
n Candidate counting:
  n Scan the database of transactions to determine the support of each candidate itemset
  n To reduce the number of comparisons, store the candidates in a hash structure
  n Instead of matching each transaction against every candidate, match it against candidates contained in the hashed buckets
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

[Figure: the N transactions are matched against a hash structure with k buckets.]
Generate Hash Table
n To avoid comparing every transaction with each k-candidate itemset, we use a hash table to store the candidate itemsets and scan the transactions only once.
  n Store the k-candidate itemsets in the hash table and initialize the count of each candidate itemset to zero.
  n Obtain the k-itemsets of each transaction; if a k-itemset exists in the hash table, increment its count by 1.
Algorithm
n Lk = {}
n For each i in the k-candidate itemsets Ck:
  n Hash_table(H(i)) = 0   // initialize the hash table
n For each k-itemset j in the transactions:
  n If j is in Hash_table:
    n Hash_table(H(j)) += 1
n For each i in the k-candidate itemsets Ck:
  n If Hash_table(H(i)) >= min_sup:
    n Lk = Lk ∪ {i}
n Return Lk
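The algorithm above maps directly onto a Python dictionary used as the hash table; the sketch below (illustrative names, using the example on the following slides) returns L2 = {{1,2}, {1,3}} for min_sup = 2:

from itertools import combinations

def count_with_hash_table(transactions, Ck, k, min_sup):
    table = {c: 0 for c in Ck}                  # initialize the hash table
    for t in transactions:                      # one scan of the transactions
        for j in combinations(sorted(t), k):    # each k-itemset of the transaction
            j = frozenset(j)
            if j in table:                      # only candidate itemsets are counted
                table[j] += 1
    return {c for c, n in table.items() if n >= min_sup}   # Lk

transactions = [{1, 2, 3}, {1, 2}, {3, 4, 5}, {1, 3}]
C2 = [frozenset(p) for p in ({1, 2}, {1, 3}, {2, 3})]
print(count_with_hash_table(transactions, C2, 2, 2))   # {frozenset({1, 2}), frozenset({1, 3})}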
Example
n Given a set of transactions and the 2-candidate itemsets, we show how to scan the transactions only once and avoid the pairwise comparisons (minimum support = 2).
Transactions:
T    items
t1   {1,2,3}
t2   {1,2}
t3   {3,4,5}
t4   {1,3}

2-candidate itemsets C2:
{1,2}
{1,3}
{2,3}
Initialize Hash Table
n We map each 2-candidate itemset to the hash table and initialize its count as 0.
Hash table (C2 mapped in, counts initialized to 0):
{1,2}: 0    {1,3}: 0    {2,3}: 0
Scan Transaction Once
n Scan the 2-itemsets of each transaction once, and then choose as L2 the candidate itemsets whose support count is not less than 2.
T    2-itemsets
t1   {1,2}, {1,3}, {2,3}
t2   {1,2}
t3   {3,4}, {3,5}, {4,5}
t4   {1,3}

Hash table after the scan:
{1,2}: 2    {1,3}: 2    {2,3}: 1

L2:
{1,2}
{1,3}
Generate Hash Tree
[Figure: a candidate hash tree storing the 15 candidate 3-itemsets listed below; at each level the hash function sends items 1, 4, 7 to the left branch, items 2, 5, 8 to the middle branch, and items 3, 6, 9 to the right branch.]
Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}
You need:
• Hash function
• Max leaf size: max number of itemsets stored in a leaf node (if number of candidate itemsets exceeds max leaf size, split the node)
Association Rule Discovery: Hash Tree

[Figure (shown three times, once per branch): the candidate hash tree for the 15 candidate 3-itemsets. At the root, the first item of a candidate is hashed: items 1, 4 or 7 go to the left subtree, items 2, 5 or 8 to the middle subtree, and items 3, 6 or 9 to the right subtree; the same hash function is applied to the next item at each deeper level until a leaf is reached.]
Subset Operation
Transaction t = {1, 2, 3, 5, 6}

[Figure: systematic enumeration of the 3-item subsets of t. Level 1 fixes the first item (1, 2 or 3), Level 2 the second, and Level 3 the third, giving {1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6}, {1 5 6}, {2 3 5}, {2 3 6}, {2 5 6}, {3 5 6}.]
Given a transaction t, what are the possible subsets of size 3?
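For reference, the same enumeration can be obtained with itertools (a plain illustration, independent of the hash-tree traversal shown on the following slides):

from itertools import combinations

t = [1, 2, 3, 5, 6]
print(list(combinations(t, 3)))
# [(1, 2, 3), (1, 2, 5), (1, 2, 6), (1, 3, 5), (1, 3, 6), (1, 5, 6),
#  (2, 3, 5), (2, 3, 6), (2, 5, 6), (3, 5, 6)]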
Subset Operation Using Hash Tree

[Figure (shown over three slides): matching transaction {1 2 3 5 6} against the candidate hash tree. The transaction is split into prefixes (1 + {2 3 5 6}, 2 + {3 5 6}, 3 + {5 6}, then 1 2 + {3 5 6}, 1 3 + {5 6}, 1 5 + {6}, and so on), and each prefix is hashed down the tree so that only the candidates stored in the leaves actually reached need to be compared with the transaction.]
Match transaction against 9 out of 15 candidates
Factors Affecting Complexity
n Choice of minimum support threshold
  n lowering the support threshold results in more frequent itemsets
  n this may increase the number of candidates and the max length of frequent itemsets
n Dimensionality (number of items) of the data set
  n more space is needed to store the support count of each item
  n if the number of frequent items also increases, both computation and I/O costs may increase
n Size of database
  n since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions
n Average transaction width
  n transaction width increases with denser data sets
  n this may increase the max length of frequent itemsets and traversals of the hash tree (the number of subsets in a transaction increases with its width)
Compact Representation of Frequent Itemsets
n Some itemsets are redundant because they have the same support as their supersets
n Number of frequent itemsets
n Need a compact representation
TID 1–5:    contain items A1, A2, …, A10 (and none of the B or C items)
TID 6–10:   contain items B1, B2, …, B10
TID 11–15:  contain items C1, C2, …, C10
Number of frequent itemsets = 3 × \sum_{k=1}^{10} \binom{10}{k} (= 3 × (2^{10} − 1) = 3069)
Maximal Frequent Itemset
[Figure: the itemset lattice with a border separating the frequent itemsets (above) from the infrequent itemsets (below); the maximal frequent itemsets are the frequent itemsets lying directly on the border.]
An itemset is maximal frequent if none of its immediate supersets is frequent
Closed Itemset
n An itemset is closed if none of its immediate supersets has the same support as the itemset
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

Itemset  Support
{A}      4
{B}      5
{C}      3
{D}      4
{A,B}    4
{A,C}    2
{A,D}    3
{B,C}    3
{B,D}    4
{C,D}    3

Itemset    Support
{A,B,C}    2
{A,B,D}    3
{A,C,D}    2
{B,C,D}    3
{A,B,C,D}  2
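The two definitions can be checked mechanically once the support of every frequent itemset is known; the sketch below (my own, with illustrative names) flags each frequent itemset as closed and/or maximal by looking only at its immediate supersets:

def closed_and_maximal(supports, minsup):
    """supports: dict mapping frozenset itemsets to support counts (all frequent itemsets included)."""
    frequent = {i: s for i, s in supports.items() if s >= minsup}
    result = {}
    for itemset, s in frequent.items():
        # frequent immediate supersets (one extra item)
        supersets = [j for j in frequent if itemset < j and len(j) == len(itemset) + 1]
        is_closed = all(frequent[j] < s for j in supersets)   # no immediate superset with the same support
        is_maximal = not supersets                            # no frequent immediate superset at all
        result[itemset] = (is_closed, is_maximal)
    return result

# With the table above and minsup = 2, {A,B} (support 4) comes out closed but not
# maximal, while {A,B,C,D} (support 2) is both closed and maximal.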
Maximal vs Closed Itemsets
TID  Items
1    ABC
2    ABCD
3    BCE
4    ACDE
5    DE

[Figure: the itemset lattice annotated with the transaction IDs supporting each itemset; for example, A is supported by transactions 1, 2, 4 and AB by transactions 1, 2. Itemsets not supported by any transaction are marked.]
Maximal vs Closed Frequent Itemsets
Minimum support = 2

[Figure: the same annotated lattice with minimum support = 2; the legend distinguishes itemsets that are "closed and maximal" from those that are "closed but not maximal".]

# Closed = 9
# Maximal = 4
Maximal vs Closed Itemsets
[Figure: Venn diagram: Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets]
Large Itemset Mining
n Frequent Itemset Mining Problem: to find all "large" (or frequent) itemsets with support at least a threshold (e.g., itemsets with support >= 3 in the example below)
TID Items Bought
100 a, b, c, d, e, f, g, h
200 a, f, g
300 b, d, e, f, j
400 a, b, d, i, k
500 a, b, e, g
Apriori
[Figure: the Apriori iteration. From L1, candidate generation (1. join step, 2. prune step) produces C2; "large" itemset generation (counting step) produces L2; from L2, candidate generation produces C3 and counting produces L3; and so on.]

Disadvantage 1: It is costly to handle a large number of candidate sets.

Disadvantage 2: It is tedious to repeatedly scan the database and check the candidate patterns.
FP-tree
n Scan the database once to store all essential information in a data structure called FP-tree (Frequent Pattern Tree)
n The FP-tree is concise and is used in directly generating large itemsets
FP-tree
Step 1: Deduce the ordered frequent items. For items with the same frequency, the order is given by the alphabetical order.
Step 2: Construct the FP-tree from the above data.
Step 3: From the FP-tree above, construct the FP-conditional tree for each item (or itemset).
Step 4: Determine the frequent patterns.
FP-tree Example (Threshold = 3)

TID  Items Bought             (Ordered) Frequent Items
100  a, b, c, d, e, f, g, h   a, b, d, e, f, g
200  a, f, g                  a, f, g
300  b, d, e, f, j            b, d, e, f
400  a, b, d, i, k            a, b, d
500  a, b, e, g               a, b, e, g

Step 1: Count the frequency of every item:
a:4, b:4, c:1, d:3, e:3, f:3, g:3, h:1, i:1, j:1, k:1

Keep only the frequent items (frequency ≥ 3) and order them by descending frequency, breaking ties alphabetically:

Item  Frequency
a     4
b     4
d     3
e     3
f     3
g     3

Rewriting each transaction with its ordered frequent items gives the right-hand column of the table above.
Step 2: Construct the FP-tree by inserting the ordered frequent items of each transaction, sharing common prefixes and incrementing the counts along the path. A header table (Item → Head of node-link) links all nodes of each frequent item a, b, d, e, f, g across the tree.

Insert 100 (a, b, d, e, f, g): root - a:1 - b:1 - d:1 - e:1 - f:1 - g:1
Insert 200 (a, f, g):          shares the prefix a (now a:2), then a new branch f:1 - g:1
Insert 300 (b, d, e, f):       a new branch from the root: b:1 - d:1 - e:1 - f:1
Insert 400 (a, b, d):          shares the prefix a, b, d (now a:3, b:2, d:2)
Insert 500 (a, b, e, g):       shares the prefix a, b (now a:4, b:3), then a new branch e:1 - g:1

Resulting FP-tree:

root
├─ a:4
│  ├─ b:3
│  │  ├─ d:2 ─ e:1 ─ f:1 ─ g:1
│  │  └─ e:1 ─ g:1
│  └─ f:1 ─ g:1
└─ b:1 ─ d:1 ─ e:1 ─ f:1
Step 3: For each item, collect its conditional pattern base (the prefix paths leading to that item, weighted by that item's node counts) and build its conditional FP-tree from the items that remain frequent (threshold = 3):

Cond. FP-tree on "g":
  Conditional pattern base: (a:1, b:1, d:1, e:1, f:1, g:1), (a:1, b:1, e:1, g:1), (a:1, f:1, g:1)
  Item frequencies in the base: a:3, b:2, d:1, e:2, f:2, g:3 → only a remains frequent
  Conditional FP-tree: root with a single child a:3

Cond. FP-tree on "f":
  Conditional pattern base: (a:1, b:1, d:1, e:1, f:1), (a:1, f:1), (b:1, d:1, e:1, f:1)
  Item frequencies: a:2, b:2, d:2, e:2, f:3 → no other item remains frequent
  Conditional FP-tree: root only

Cond. FP-tree on "e":
  Conditional pattern base: (a:1, b:1, d:1, e:1), (a:1, b:1, e:1), (b:1, d:1, e:1)
  Item frequencies: a:2, b:3, d:2, e:3 → only b remains frequent
  Conditional FP-tree: root with a single child b:3

Cond. FP-tree on "d":
  Conditional pattern base: (a:2, b:2, d:2), (b:1, d:1)
  Item frequencies: a:2, b:3, d:3 → only b remains frequent
  Conditional FP-tree: root with a single child b:3

Cond. FP-tree on "b":
  Conditional pattern base: (a:3, b:3), (b:1)
  Item frequencies: a:3, b:4 → a remains frequent
  Conditional FP-tree: root with a single child a:3

Cond. FP-tree on "a":
  Conditional pattern base: (a:4); no prefix items remain
  Conditional FP-tree: root only
Step 4: Determine the frequent patterns from the conditional FP-trees:

Cond. FP-tree on "g": before generating this cond. tree, we generate {g} (support = 3); after generating it, we generate {a, g} (support = 3).
Cond. FP-tree on "f": we generate {f} (support = 3); after generating this cond. tree, we do not generate any further itemset.
Cond. FP-tree on "e": we generate {e} (support = 3), then {b, e} (support = 3).
Cond. FP-tree on "d": we generate {d} (support = 3), then {b, d} (support = 3).
Cond. FP-tree on "b": we generate {b} (support = 4), then {a, b} (support = 3).
Cond. FP-tree on "a": we generate {a} (support = 4); no further itemset is generated.
Complexity
n Complexity of building the FP-tree
  n Two scans of the transaction DB
    n Collect the frequent items
    n Construct the FP-tree
  n Cost to insert one transaction
    n Number of frequent items in this transaction
Size of the FP-tree
n The size of the FP-tree is bounded by the overall occurrences of the frequent items in the database
Height of the Tree
n The height of the tree is bounded by the maximum number of frequent items in any transaction in the database
Compression
n With respect to the total number of items stored, is the FP-tree more compressed than the original database?
Details of the Algorithm
n Procedure FP-growth(Tree, α)
  n if Tree contains a single path P
    n for each combination (denoted by β) of the nodes in the path P do
      n generate pattern β ∪ α with support = minimum support of nodes in β
  n else
    n for each ai in the header table of Tree do
      n generate pattern β = ai ∪ α with support = ai.support
      n construct β's conditional pattern base and then β's conditional FP-tree Treeβ
      n if Treeβ ≠ ∅
        n call FP-growth(Treeβ, β)
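The following is a condensed, self-contained Python sketch in the spirit of the procedure above. It is my own simplification, not the slides' code: the header table is replaced by a per-item node list, and the single-path shortcut is omitted. On the 5-transaction example used earlier with threshold 3 it produces exactly the patterns listed in Step 4.

from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent, self.count, self.children = item, parent, 0, {}

def build_tree(patterns, minsup):
    """patterns: dict {tuple_of_items: count}. Returns (root, header, item counts)."""
    counts = defaultdict(int)
    for path, c in patterns.items():
        for item in path:
            counts[item] += c
    freq = {i for i, c in counts.items() if c >= minsup}
    root, header = Node(None, None), defaultdict(list)
    for path, c in patterns.items():
        # keep frequent items only, ordered by descending count (ties: alphabetical)
        path = sorted((i for i in path if i in freq), key=lambda i: (-counts[i], i))
        node = root
        for item in path:
            if item not in node.children:
                node.children[item] = Node(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += c
    return root, header, counts

def fp_growth(patterns, minsup, suffix=()):
    root, header, counts = build_tree(patterns, minsup)
    results = {}
    for item in sorted(header, key=lambda i: counts[i]):    # least frequent first
        support = sum(n.count for n in header[item])
        new_suffix = (item,) + suffix
        results[new_suffix] = support
        # conditional pattern base: prefix path of every node of `item`
        cond = defaultdict(int)
        for node in header[item]:
            path, p = [], node.parent
            while p is not None and p.item is not None:
                path.append(p.item)
                p = p.parent
            if path:
                cond[tuple(reversed(path))] += node.count
        results.update(fp_growth(cond, minsup, new_suffix))
    return results

transactions = ["abcdefgh", "afg", "bdefj", "abdik", "abeg"]
patterns = {tuple(t): 1 for t in transactions}
print(fp_growth(patterns, 3))   # {g}, {a,g}, {f}, {e}, {b,e}, {d}, {b,d}, {b}, {a,b}, {a}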
Rule Generation
● Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L – f satisfies the minimum confidence requirement
  – If {A,B,C,D} is a frequent itemset, candidate rules:
    ABC → D, ABD → C, ACD → B, BCD → A, A → BCD, B → ACD, C → ABD, D → ABC,
    AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB
● If |L| = k, then there are 2^k – 2 candidate association rules (ignoring L → ∅ and ∅ → L)
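A direct sketch of this enumeration (illustrative Python; the function and argument names are my own, and support counts are assumed to be available from the frequent-itemset mining phase):

from itertools import combinations

def rules_from_itemset(L, support, minconf):
    """L: frozenset; support: dict mapping frozensets to support counts."""
    rules = []
    for r in range(1, len(L)):                      # sizes of the antecedent f
        for antecedent in combinations(L, r):
            antecedent = frozenset(antecedent)
            conf = support[L] / support[antecedent]  # c(f -> L - f)
            if conf >= minconf:
                rules.append((antecedent, L - antecedent, conf))
    return rules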
Rule Generation
● How to efficiently generate rules from frequent itemsets?
  – In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D)
  – But the confidence of rules generated from the same itemset does have an anti-monotone property
  – e.g., L = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
    u Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule
Rule Generation for Apriori Algorithm
Lattice of rules:

[Figure: the lattice of rules derived from the frequent itemset ABCD, from ABCD ⇒ { } at the top down to A ⇒ BCD, B ⇒ ACD, C ⇒ ABD, D ⇒ ABC at the bottom. One rule is marked as a low-confidence rule, and all rules below it in the lattice (those obtained by moving further items from its antecedent to its consequent) are marked as pruned.]
Rule Generation for Apriori Algorithm
● Candidate rule is generated by merging two rules that share the same prefix in the rule consequent
● join(CD=>AB,BD=>AC) would produce the candidate rule D => ABC
● Prune rule D=>ABC if its subset AD=>BC does not have high confidence
[Figure: CD ⇒ AB and BD ⇒ AC are merged to produce the candidate rule D ⇒ ABC]
Pattern Evaluation
● Association rule algorithms tend to produce too many rules
  – many of them are uninteresting or redundant
  – Redundant if {A,B,C} → {D} and {A,B} → {D} have the same support & confidence
● Interestingness measures can be used to prune/rank the derived patterns
● In the original formulation of association rules, support & confidence are the only measures used
Application of Interestingness Measure
Interestingness Measures
Computing Interestingness Measure
● Given a rule X → Y, information needed to compute rule interestingness can be obtained from a contingency table
          Y      ¬Y
  X       f11    f10    f1+
  ¬X      f01    f00    f0+
          f+1    f+0    |T|

Contingency table for X → Y:
f11: support of X and Y
f10: support of X and ¬Y
f01: support of ¬X and Y
f00: support of ¬X and ¬Y
Used to define various measures
◆ support, confidence, lift, Gini, J-measure, etc.
Drawback of Confidence
          Coffee   ¬Coffee
  Tea     15       5         20
  ¬Tea    75       5         80
          90       10        100
Association Rule: Tea → Coffee Confidence= P(Coffee|Tea) = 0.75
but P(Coffee) = 0.9
⇒ Although confidence is high, the rule is misleading
⇒ because P(Coffee | ¬Tea) = 0.9375
Statistical Independence
● Population of 1000 students
  – 600 students know how to swim (S)
  – 700 students know how to bike (B)
  – 420 students know how to swim and bike (S,B)
  – P(S∧B) = 420/1000 = 0.42
  – P(S) × P(B) = 0.6 × 0.7 = 0.42
  – P(S∧B) = P(S) × P(B) => Statistical independence
  – P(S∧B) > P(S) × P(B) => Positively correlated
  – P(S∧B) < P(S) × P(B) => Negatively correlated
Statistical-based Measures
● Measures that take into account statistical dependence
Lift = P(Y|X) / P(Y)

Interest = P(X,Y) / ( P(X) P(Y) )

PS = P(X,Y) − P(X) P(Y)

φ-coefficient = ( P(X,Y) − P(X) P(Y) ) / sqrt( P(X)[1 − P(X)] P(Y)[1 − P(Y)] )
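These measures are easy to compute from the 2x2 contingency table; the sketch below (illustrative, not from the slides) reproduces the Tea → Coffee lift of about 0.83 from the example that follows:

from math import sqrt

def measures(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    pxy, px, py = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    lift = (pxy / px) / py                       # = P(Y|X) / P(Y)
    interest = pxy / (px * py)
    ps = pxy - px * py                           # Piatetsky-Shapiro
    phi = (pxy - px * py) / sqrt(px * (1 - px) * py * (1 - py))
    return lift, interest, ps, phi

print(measures(15, 5, 75, 5))   # lift = interest ≈ 0.833 for Tea -> Coffee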
Example: Lift/Interest
          Coffee   ¬Coffee
  Tea     15       5         20
  ¬Tea    75       5         80
          90       10        100
Association Rule: Tea → Coffee Confidence= P(Coffee|Tea) = 0.75
but P(Coffee) = 0.9
⇒ Lift = 0.75/0.9= 0.8333 (< 1, therefore is negatively associated)
Drawback of Lift & Interest
        Y    ¬Y                        Y    ¬Y
  X     10    0     10          X     90    0     90
  ¬X     0   90     90          ¬X     0   10     10
        10   90    100                90   10    100

  Lift = 0.1 / (0.1 × 0.1) = 10        Lift = 0.9 / (0.9 × 0.9) = 1.11
Statistical independence:
If P(X,Y)=P(X)P(Y) => Lift = 1
There are lots of measures proposed in the literature
Some measures are good for certain applications, but not for others
What criteria should we use to determine whether a measure is good or bad?
What about Apriori-style support based pruning? How does it affect these measures?
Properties of A Good Measure
● Piatetsky-Shapiro: 3 properties a good measure M must satisfy:
  – M(A,B) = 0 if A and B are statistically independent
  – M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged
  – M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged
Comparing Different Measures
10 examples of contingency tables:

Example   f11    f10    f01    f00
E1        8123   83     424    1370
E2        8330   2      622    1046
E3        9481   94     127    298
E4        3954   3080   5      2961
E5        2886   1363   1320   4431
E6        1500   2000   500    6000
E7        4000   2000   1000   3000
E8        4000   2000   2000   2000
E9        1720   7121   5      1154
E10       61     2483   4      7452

Rankings of contingency tables using various measures: [ranking table not reproduced]
Property under Variable Permutation
        B    ¬B                  A    ¬A
  A     p    q              B    p    r
  ¬A    r    s              ¬B   q    s
Does M(A,B) = M(B,A)?
Symmetric measures:
◆ support, lift, collective strength, cosine, Jaccard, etc
Asymmetric measures:
◆ confidence, conviction, Laplace, J-measure, etc
Property under Row/Column Scaling
Grade-Gender Example (Mosteller, 1968):

          Male   Female                     Male   Female
  High     2       3        5       High     4      30       34
  Low      1       4        5       Low      2      40       42
           3       7       10                6      70       76

(the right table is the left table with the Male column scaled by 2× and the Female column by 10×)

Mosteller: Underlying association should be independent of the relative number of male and female students in the samples
Property under Inversion Operation
[Figure: effect of the inversion operation. Item vectors A–F are shown as 0/1 columns over transactions 1 … N, where some vectors are bit-wise inversions of others (every 0 flipped to 1 and vice versa), illustrating that some measures change under inversion while others do not.]
Example: φ-Coefficient
● φ-coefficient is analogous to correlation coefficient for continuous variables
        Y    ¬Y                        Y    ¬Y
  X     60   10     70          X     20   10     30
  ¬X    10   20     30          ¬X    10   60     70
        70   30    100                30   70    100

φ = (0.6 − 0.7×0.7) / sqrt(0.7×0.3×0.7×0.3) = 0.5238

φ = (0.2 − 0.3×0.3) / sqrt(0.3×0.7×0.3×0.7) = 0.5238

The φ coefficient is the same for both tables
Property under Null Addition
        B    ¬B                  B    ¬B
  A     p    q              A    p    q
  ¬A    r    s              ¬A   r    s + k
Invariant measures:
◆ support, cosine, Jaccard, etc
Non-invariant measures:
◆ correlation, Gini, mutual information, odds ratio, etc
Different Measures have Different Properties

Symbol  Measure               Range               P1    P2   P3   O1     O2   O3    O3'  O4
Φ       Correlation           -1 … 0 … 1          Yes   Yes  Yes  Yes    No   Yes   Yes  No
λ       Lambda                0 … 1               Yes   No   No   Yes    No   No*   Yes  No
α       Odds ratio            0 … 1 … ∞           Yes*  Yes  Yes  Yes    Yes  Yes*  Yes  No
Q       Yule's Q              -1 … 0 … 1          Yes   Yes  Yes  Yes    Yes  Yes   Yes  No
Y       Yule's Y              -1 … 0 … 1          Yes   Yes  Yes  Yes    Yes  Yes   Yes  No
κ       Cohen's               -1 … 0 … 1          Yes   Yes  Yes  Yes    No   No    Yes  No
M       Mutual Information    0 … 1               Yes   Yes  Yes  Yes    No   No*   Yes  No
J       J-Measure             0 … 1               Yes   No   No   No     No   No    No   No
G       Gini Index            0 … 1               Yes   No   No   No     No   No*   Yes  No
s       Support               0 … 1               No    Yes  No   Yes    No   No    No   No
c       Confidence            0 … 1               No    Yes  No   Yes    No   No    No   Yes
L       Laplace               0 … 1               No    Yes  No   Yes    No   No    No   No
V       Conviction            0.5 … 1 … ∞         No    Yes  No   Yes**  No   No    Yes  No
I       Interest              0 … 1 … ∞           Yes*  Yes  Yes  Yes    No   No    No   No
IS      IS (cosine)           0 .. 1              No    Yes  Yes  Yes    No   No    No   Yes
PS      Piatetsky-Shapiro's   -0.25 … 0 … 0.25    Yes   Yes  Yes  Yes    No   Yes   Yes  No
F       Certainty factor      -1 … 0 … 1          Yes   Yes  Yes  No     No   No    Yes  No
AV      Added value           0.5 … 1 … 1         Yes   Yes  Yes  No     No   No    No   No
S       Collective strength   0 … 1 … ∞           No    Yes  Yes  Yes    No   Yes*  Yes  No
ζ       Jaccard               0 .. 1              No    Yes  Yes  Yes    No   No    No   Yes
K       Klosgen's             …                   Yes   Yes  Yes  No     No   No    No   No

(Klosgen's range formula is garbled in the source and is omitted here.)