Data Mining: Introductory and Advanced Topics, Part I



Page 1: DATA MINING: Introductory and Advanced Topics, Part I

Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University

Companion slides for the text by Dr. M. H. Dunham, Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.

Page 2: Data Mining Outline

PART I
– Introduction
– Related Concepts
– Data Mining Techniques

PART II
– Classification
– Clustering
– Association Rules

PART III
– Web Mining
– Spatial Mining
– Temporal Mining

Page 3: Introduction Outline

– Define data mining
– Data mining vs. databases
– Basic data mining tasks
– Data mining development
– Data mining issues

Goal: Provide an overview of data mining.

Page 4: Introduction

– Data is growing at a phenomenal rate.
– Users expect more sophisticated information.
– How? UNCOVER HIDDEN INFORMATION: DATA MINING

Page 5: Data Mining Definition

– Finding hidden information in a database
– Fit data to a model
– Similar terms: exploratory data analysis, data-driven discovery, deductive learning

Page 6: Data Mining Algorithm

– Objective: fit data to a model (descriptive or predictive)
– Preference: a technique to choose the best model
– Search: a technique to search the data ("query")

Page 7: Database Processing vs. Data Mining Processing

Database processing:
– Query: well defined; SQL
– Data: operational data
– Output: precise; a subset of the database

Data mining processing:
– Query: poorly defined; no precise query language
– Data: not operational data
– Output: fuzzy; not a subset of the database

Page 8: Query Examples

Database queries:
– Find all customers who have purchased milk.
– Find all credit applicants with a last name of Smith.
– Identify customers who have purchased more than $10,000 in the last month.

Data mining queries:
– Find all items which are frequently purchased with milk. (association rules)
– Find all credit applicants who are poor credit risks. (classification)
– Identify customers with similar buying habits. (clustering)

Page 9: Data Mining Models and Tasks

Page 10: Basic Data Mining Tasks

– Classification maps data into predefined groups or classes. (supervised learning, pattern recognition, prediction)
– Regression is used to map a data item to a real-valued prediction variable.
– Clustering groups similar data together into clusters. (unsupervised learning, segmentation, partitioning)

Page 11: Basic Data Mining Tasks (cont'd)

– Summarization maps data into subsets with associated simple descriptions. (characterization, generalization)
– Link analysis uncovers relationships among data. (affinity analysis, association rules, and sequential analysis, which determines sequential patterns)

Page 12: Ex: Time Series Analysis

– Example: the stock market
– Predict future values
– Determine similar patterns over time
– Classify behavior

Page 13: Data Mining vs. KDD

– Knowledge Discovery in Databases (KDD): the process of finding useful information and patterns in data.
– Data mining: the use of algorithms to extract the information and patterns derived by the KDD process.

Page 14: KDD Process

– Selection: obtain data from various sources.
– Preprocessing: cleanse the data.
– Transformation: convert to a common format; transform to a new format.
– Data mining: obtain the desired results.
– Interpretation/evaluation: present results to the user in a meaningful manner.

Modified from [FPSS96C]

Page 15: KDD Process Ex: Web Log

– Selection: select the log data (dates and locations) to use.
– Preprocessing: remove identifying URLs; remove error logs.
– Transformation: sessionize (sort and group).
– Data mining: identify and count patterns; construct a data structure.
– Interpretation/evaluation: identify and display frequently accessed sequences.
– Potential user applications: cache prediction; personalization.

Page 16: Data Mining Development

Techniques data mining draws on, grouped by contributing area:
– Information retrieval: similarity measures, hierarchical clustering, IR systems, imprecise queries, textual data, Web search engines
– Statistics: Bayes theorem, regression analysis, EM algorithm, K-means clustering, time series analysis
– Machine learning: neural networks, decision tree algorithms
– Algorithms: algorithm design techniques, algorithm analysis, data structures
– Databases: relational data model, SQL, association rule algorithms, data warehousing, scalability techniques

Page 17: KDD Issues

– Human interaction
– Overfitting
– Outliers
– Interpretation
– Visualization
– Large datasets
– High dimensionality

Page 18: KDD Issues (cont'd)

– Multimedia data
– Missing data
– Irrelevant data
– Noisy data
– Changing data
– Integration
– Application

Page 19: Social Implications of DM

– Privacy
– Profiling
– Unauthorized use

Page 20: Data Mining Metrics

– Usefulness
– Return on investment (ROI)
– Accuracy
– Space/time

Page 21: Database Perspective on Data Mining

– Scalability
– Real-world data
– Updates
– Ease of use

Page 22: Visualization Techniques

– Graphical
– Geometric
– Icon-based
– Pixel-based
– Hierarchical
– Hybrid

Page 23: Related Concepts Outline

– Database/OLTP systems
– Fuzzy sets and logic
– Information retrieval (Web search engines)
– Dimensional modeling
– Data warehousing
– OLAP/DSS
– Statistics
– Machine learning
– Pattern matching

Goal: Examine some areas which are related to data mining.

Page 24: DB & OLTP Systems

– Schema: (ID, Name, Address, Salary, JobNo)
– Data model: ER; relational
– Transaction
– Query:

SELECT Name
FROM T
WHERE Salary > 100000

DM: Only imprecise queries.

Page 25: Fuzzy Sets and Logic

– Fuzzy set: the set membership function is a real-valued function with output in the range [0,1].
– f(x): the probability that x is in F.
– 1 − f(x): the probability that x is not in F.
– Ex:
  – T = {x | x is a person and x is tall}
  – Let f(x) be the probability that x is tall.
  – Here f is the membership function.

DM: Prediction and classification are fuzzy.
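
The "tall" example translates directly into code. This is a minimal sketch of a membership function; the 1.5 m and 2.0 m cutoffs and the linear ramp are illustrative assumptions, not values from the text:

def tall_membership(height_m):
    # Membership in the fuzzy set 'tall': 0 below 1.5 m, 1 above 2.0 m,
    # and a linear ramp in between (assumed cutoffs, for illustration only).
    if height_m <= 1.5:
        return 0.0
    if height_m >= 2.0:
        return 1.0
    return (height_m - 1.5) / 0.5

print(tall_membership(1.75))  # 0.5: partially a member of 'tall'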

Page 26: Fuzzy Sets

Page 27: Classification/Prediction is Fuzzy

(Figure: accept/reject decisions as a function of loan amount, contrasting a simple step cutoff with a fuzzy membership curve.)

Page 28: Information Retrieval

– Information retrieval (IR): retrieving desired information from textual data.
– Library science, digital libraries, Web search engines
– Traditionally keyword based
– Sample query: Find all documents about "data mining".

DM: Similarity measures; mine text/Web data.

Page 29: Information Retrieval (cont'd)

– Similarity: a measure of how close a query is to a document.
– Documents which are "close enough" are retrieved.
– Metrics:
  – Precision = |Relevant and Retrieved| / |Retrieved|
  – Recall = |Relevant and Retrieved| / |Relevant|
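
Both metrics are easy to compute over sets of document IDs; a small sketch (the example sets are made up):

def precision_recall(relevant, retrieved):
    # Precision and recall exactly as defined above.
    hits = len(relevant & retrieved)          # |Relevant and Retrieved|
    return hits / len(retrieved), hits / len(relevant)

# 3 of the 4 retrieved documents are relevant; 3 of 6 relevant ones were found.
print(precision_recall({1, 2, 3, 4, 5, 6}, {2, 3, 4, 9}))  # (0.75, 0.5)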

Page 30: IR Query Result Measures and Classification

(Figure: side-by-side relevant/retrieved views for IR result measures and for classification.)

Page 31: Dimensional Modeling

– View data in a hierarchical manner, more as business executives might.
– Useful in decision support systems and mining.
– Dimension: a collection of logically related attributes; an axis for modeling data.
– Facts: the data stored.
– Ex: dimensions – products, locations, date; facts – quantity, unit price.

DM: May view data as dimensional.

Page 32: Relational View of Data

ProdID  LocID       Date    Quantity  UnitPrice
123     Dallas      022900  5         25
123     Houston     020100  10        20
150     Dallas      031500  1         100
150     Dallas      031500  5         95
150     Fort Worth  021000  5         80
150     Chicago     012000  20        75
200     Seattle     030100  5         50
300     Rochester   021500  200       5
500     Bradenton   022000  15        20
500     Chicago     012000  10        25

Page 33: Dimensional Modeling Queries

– Roll up: move to a more general dimension level.
– Drill down: move to a more specific dimension level.
– Dimension (aggregation) hierarchy
– SQL uses aggregation.
– Decision Support Systems (DSS): computer systems and tools to assist managers in making decisions and solving problems.

Page 34: Cube View of Data

Page 35: Aggregation Hierarchies

Page 36: Star Schema

Page 37: Data Warehousing

– "Subject-oriented, integrated, time-variant, nonvolatile" (William Inmon)
– Operational data: data used in the day-to-day needs of the company.
– Informational data: supports other functions such as planning and forecasting.
– Data mining tools often access data warehouses rather than operational data.

DM: May access data in a warehouse.

Page 38: Operational vs. Informational

Aspect        Operational Data    Data Warehouse
Application   OLTP                OLAP
Use           Precise queries     Ad hoc
Temporal      Snapshot            Historical
Modification  Dynamic             Static
Orientation   Application         Business
Data          Operational values  Integrated
Size          Gigabytes           Terabytes
Level         Detailed            Summarized
Access        Often               Less often
Response      Few seconds         Minutes
Data schema   Relational          Star/snowflake

Page 39: OLAP

– Online Analytic Processing (OLAP): provides more complex queries than OLTP.
– Online Transaction Processing (OLTP): traditional database/transaction processing.
– Dimensional data; cube view.
– Visualization of operations:
  – Slice: examine a sub-cube.
  – Dice: rotate the cube to look at another dimension.
  – Roll up / drill down

DM: May use OLAP queries.

Page 40: OLAP Operations

(Figure: OLAP operations on a cube – single cell, multiple cells, slice, dice, roll up, drill down.)

Page 41: Statistics

– Simple descriptive models
– Statistical inference: generalizing a model created from a sample of the data to the entire dataset.
– Exploratory data analysis: the data can actually drive the creation of the model, the opposite of the traditional statistical view.
– Data mining is targeted to the business user.

DM: Many data mining methods come from statistical techniques.

Page 42: Machine Learning

– Machine learning: the area of AI that examines how to write programs that can learn.
– Often used in classification and prediction.
– Supervised learning: learns by example.
– Unsupervised learning: learns without knowledge of correct answers.
– Machine learning often deals with small static datasets.

DM: Uses many machine learning techniques.

Page 43: Pattern Matching (Recognition)

– Pattern matching: finds occurrences of a predefined pattern in the data.
– Applications include speech recognition, information retrieval, and time series analysis.

DM: A type of classification.

Page 44: DM vs. Related Topics

Area     Query     Data              Results  Output
DB/OLTP  Precise   Database          Precise  DB objects or aggregation
IR       Precise   Documents         Vague    Documents
OLAP     Analysis  Multidimensional  Precise  DB objects or aggregation
DM       Vague     Preprocessed      Vague    KDD objects

Page 45: Data Mining Techniques Outline

– Statistical
  – Point estimation
  – Models based on summarization
  – Bayes theorem
  – Hypothesis testing
  – Regression and correlation
– Similarity measures
– Decision trees
– Neural networks
  – Activation functions
– Genetic algorithms

Goal: Provide an overview of basic data mining techniques.

Page 46: Point Estimation

– Point estimate: estimate a population parameter.
– May be made by calculating the parameter for a sample.
– May be used to predict a value for missing data.
– Ex:
  – R contains 100 employees.
  – 99 have salary information.
  – The mean salary of these is $50,000.
  – Use $50,000 as the value of the remaining employee's salary. Is this a good idea?
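
As a quick sketch of the example (with made-up salary values whose mean is $50,000), mean imputation looks like this:

# 99 known salaries with mean $50,000; impute the missing one with the mean.
salaries = [48_000, 52_000, 50_000] * 33
estimate = sum(salaries) / len(salaries)      # point estimate of the mean
print(f"Impute the missing salary as ${estimate:,.0f}")
# Whether this is a good idea depends on the distribution: one executive
# salary (an outlier) could make the mean a poor estimate for anyone.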

Page 47: Estimation Error

– Bias: the difference between the expected value and the actual value:
  Bias = E[estimate] − actual value
– Mean Squared Error (MSE): the expected value of the squared difference between the estimate and the actual value:
  MSE = E[(estimate − actual value)²]
– Why square? So that positive and negative errors do not cancel out.
– Root Mean Square Error (RMSE): the square root of the MSE.

Page 48: Jackknife Estimate

– Jackknife estimate: an estimate of a parameter obtained by omitting one value from the set of observed values.
– Ex: estimate of the mean for X = {x1, ..., xn} with xi omitted: the sum of the remaining values divided by n − 1.
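
A minimal sketch of the leave-one-out estimates of the mean (the data values are made up):

def jackknife_means(xs):
    # The i-th jackknife estimate is the mean of all values except xs[i].
    total, n = sum(xs), len(xs)
    return [(total - x) / (n - 1) for x in xs]

print(jackknife_means([1.0, 2.0, 3.0, 4.0, 10.0]))
# [4.75, 4.5, 4.25, 4.0, 2.5]; the estimate omitting the outlier 10.0 is 2.5.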

Page 49: Maximum Likelihood Estimate (MLE)

– Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model.
– The joint probability of observing the sample data is found by multiplying the individual probabilities. Likelihood function:
  L(Θ | x1, ..., xn) = Π f(xi | Θ)
– Maximize L.

Page 50: MLE Example

– Coin toss five times: {H, H, H, H, T}
– Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is (1/2)^5 = 0.03125.
– However, if the probability of an H is 0.8, then the likelihood is (0.8)^4 (0.2) = 0.08192.

Page 51: MLE Example (cont'd)

– General likelihood formula: L(p) = p^4 (1 − p), the probability of four heads and one tail when P(H) = p.
– The estimate for p that maximizes L is then 4/5 = 0.8.
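
The maximization can be checked numerically; this sketch evaluates L(p) = p^4 (1 − p) over a grid and picks the maximizer:

likelihood = lambda p: p**4 * (1 - p)          # L(p) for {H,H,H,H,T}
grid = [i / 1000 for i in range(1001)]
p_hat = max(grid, key=likelihood)
print(p_hat, round(likelihood(p_hat), 5))      # 0.8 0.08192
print(round(likelihood(0.5), 5))               # 0.03125, the fair-coin value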

Page 52: Expectation-Maximization (EM)

– Solves estimation with incomplete data.
– Obtain initial estimates for the parameters.
– Iteratively use the estimates for the missing data and continue until convergence.
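
For the simplest case, estimating a mean when some observations are unobserved, EM reduces to repeatedly imputing the missing values with the current mean. A minimal sketch with made-up data:

observed = [1.0, 5.0, 10.0, 4.0]    # known values
n_missing = 2                       # two values are unobserved
mean = 0.0                          # deliberately poor initial estimate

for _ in range(100):
    # E-step: fill each missing value with the current mean estimate.
    # M-step: re-estimate the mean over observed + imputed values.
    new_mean = (sum(observed) + n_missing * mean) / (len(observed) + n_missing)
    if abs(new_mean - mean) < 1e-9:  # converged
        break
    mean = new_mean

print(round(mean, 4))  # converges to 5.0, the mean of the observed values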

Page 53: EM Example

Page 54: EM Algorithm

Page 55: Models Based on Summarization

– Visualization: frequency distribution, mean, variance, median, mode, etc.
– Box plot

Page 56: Scatter Diagram

Page 57: Bayes Theorem

– Posterior probability: P(h1 | xi)
– Prior probability: P(h1)
– Bayes theorem:
  P(h1 | xi) = P(xi | h1) P(h1) / P(xi)
– Assign probabilities of hypotheses given a data value.

Page 58: Bayes Theorem Example

– Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police.
– Assign twelve data values for all combinations of credit and income (income levels 1–4):

Credit     Income 1  Income 2  Income 3  Income 4
Excellent  x1        x2        x3        x4
Good       x5        x6        x7        x8
Bad        x9        x10       x11       x12

– From training data: P(h1) = 60%; P(h2) = 20%; P(h3) = 10%; P(h4) = 10%.

Page 59: Bayes Example (cont'd)

Training data:

ID  Income  Credit     Class  xi
1   4       Excellent  h1     x4
2   3       Good       h1     x7
3   2       Excellent  h1     x2
4   3       Good       h1     x7
5   4       Good       h1     x8
6   2       Excellent  h1     x2
7   3       Bad        h2     x11
8   2       Bad        h2     x10
9   3       Bad        h3     x11
10  1       Bad        h4     x9

Page 60: Bayes Example (cont'd)

– Calculate P(xi | hj) and P(xi).
– Ex: P(x7 | h1) = 2/6; P(x4 | h1) = 1/6; P(x2 | h1) = 2/6; P(x8 | h1) = 1/6; P(xi | h1) = 0 for all other xi.
– Predict the class for x4:
  – Calculate P(hj | x4) for all hj.
  – Place x4 in the class with the largest value.
  – Ex: P(h1 | x4) = (P(x4 | h1) P(h1)) / P(x4) = (1/6)(0.6)/0.1 = 1, so x4 is in class h1.
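
The whole computation fits in a few lines; this sketch recomputes the posterior for x4 from the page 59 training table:

from collections import Counter

# (data value, class) pairs from the training data on page 59.
train = [("x4","h1"), ("x7","h1"), ("x2","h1"), ("x7","h1"), ("x8","h1"),
         ("x2","h1"), ("x11","h2"), ("x10","h2"), ("x11","h3"), ("x9","h4")]

prior = {"h1": 0.6, "h2": 0.2, "h3": 0.1, "h4": 0.1}   # P(hj) from the slide
per_class = Counter(h for _, h in train)               # tuples per class
pairs = Counter(train)                                 # (xi, hj) counts
p_x4 = sum(x == "x4" for x, _ in train) / len(train)   # P(x4) = 1/10

for h in prior:                                        # P(hj | x4) for each hj
    likelihood = pairs[("x4", h)] / per_class[h]       # P(x4 | hj)
    print(h, round(likelihood * prior[h] / p_x4, 3))
# h1 scores (1/6)(0.6)/0.1 = 1.0; every other class scores 0, so x4 -> h1.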

Page 61: Hypothesis Testing

– Find a model to explain behavior by creating and then testing a hypothesis about the data.
– The exact opposite of the usual DM approach.
– H0 – the null hypothesis; the hypothesis to be tested.
– H1 – the alternative hypothesis.

Page 62: Chi Squared Statistic

χ² = Σ (O − E)² / E

– O – observed value
– E – expected value based on the hypothesis.
– Ex:
  – O = {50, 93, 67, 78, 87}
  – E = 75
  – χ² = 15.55 and therefore significant
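
A two-line check of the slide's arithmetic:

# Chi-squared: sum of (O - E)^2 / E over the observed values.
observed = [50, 93, 67, 78, 87]
chi2 = sum((o - 75) ** 2 / 75 for o in observed)   # E = 75 for every cell
print(round(chi2, 2))                              # 15.55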

Page 63: Regression

– Predict future values based on past values.
– Linear regression assumes a linear relationship exists:
  y = c0 + c1 x1 + ... + cn xn
– Find the coefficient values that best fit the data.
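
For a single predictor the best-fit coefficients have a closed form; a small sketch with made-up points that lie near y = 2x:

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Least-squares slope and intercept for y = c0 + c1*x.
c1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c0 = my - c1 * mx
print(f"y = {c0:.2f} + {c1:.2f} x")   # roughly y = 0.05 + 1.99 x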

Page 64: Linear Regression

Page 65: Correlation

– Examine the degree to which the values for two variables behave similarly.
– Correlation coefficient r:
  – 1 = perfect correlation
  – −1 = perfect but opposite correlation
  – 0 = no correlation
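
The coefficient is computed from the same centered sums used for regression; a short sketch of the standard Pearson r:

from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))    #  1.0: perfect correlation
print(pearson_r([1, 2, 3], [6, 4, 2]))    # -1.0: perfect opposite correlation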

Page 66: Similarity Measures

– Determine the similarity between two objects.
– Similarity characteristics
– Alternatively, a distance measure measures how unlike or dissimilar objects are.

Page 67: Similarity Measures

Page 68: Distance Measures

– Measure the dissimilarity between objects.
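
The slide's own formulas are not preserved in the transcript, but two commonly used measures of this kind are Euclidean distance (a dissimilarity) and cosine similarity; a small sketch:

from math import sqrt

def euclidean(a, b):
    # Straight-line distance: larger means more dissimilar.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # Angle-based similarity in [-1, 1]: 1 means the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

print(euclidean([0, 0], [3, 4]))          # 5.0
print(round(cosine([1, 0], [1, 1]), 3))   # 0.707: vectors 45 degrees apart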

Page 69: Twenty Questions Game

Page 70: Decision Trees

– Decision tree (DT): a tree where
  – the root and each internal node is labeled with a question,
  – the arcs represent each possible answer to the associated question, and
  – each leaf node represents a prediction of a solution to the problem.
– A popular technique for classification; the leaf node indicates the class to which the corresponding tuple belongs.

Page 71: Decision Tree Example

Page 72: Decision Trees

– A decision tree model is a computational model consisting of three parts:
  – a decision tree,
  – an algorithm to create the tree, and
  – an algorithm that applies the tree to data.
– Creation of the tree is the most difficult part.
– Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).

Page 73: Decision Tree Algorithm

Page 74: DT Advantages/Disadvantages

– Advantages:
  – Easy to understand.
  – Easy to generate rules.
– Disadvantages:
  – May suffer from overfitting.
  – Classifies by rectangular partitioning.
  – Does not easily handle nonnumeric data.
  – Can be quite large; pruning is necessary.

Page 75: Neural Networks

– Based on the observed functioning of the human brain (Artificial Neural Networks, ANN).
– Our view of neural networks is very simplistic: we view a neural network (NN) from a graphical viewpoint.
– Alternatively, an NN may be viewed from the perspective of matrices.
– Used in pattern recognition, speech recognition, computer vision, and classification.

Page 76: Neural Networks

– A neural network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, ..., n} and arcs A = {<i,j> | 1 <= i,j <= n}, with the following restrictions:
  – V is partitioned into a set of input nodes VI, hidden nodes VH, and output nodes VO.
  – The vertices are also partitioned into layers.
  – Any arc <i,j> must have node i in layer h−1 and node j in layer h.
  – Arc <i,j> is labeled with a numeric value wij.
  – Node i is labeled with a function fi.

Page 77: Neural Network Example

Page 78: NN Node

Page 79: NN Activation Functions

– Functions associated with nodes in the graph.
– Output may be in the range [−1,1] or [0,1].
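
The slide's figure is not in the transcript, but typical activation functions with these output ranges are the sigmoid (in (0,1)), tanh (in (−1,1)), and a simple threshold; a sketch:

from math import exp, tanh

def sigmoid(s):
    return 1.0 / (1.0 + exp(-s))     # output in (0, 1)

def threshold(s):
    return 1 if s > 0 else 0         # step output in {0, 1}

print(sigmoid(0.0), tanh(0.0), threshold(0.5))   # 0.5 0.0 1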

Page 80: NN Activation Functions

Page 81: NN Learning

– Propagate input values through the graph.
– Compare the output to the desired output.
– Adjust the weights in the graph accordingly.

Page 82: Neural Networks

– A neural network model is a computational model consisting of three parts:
  – the neural network graph,
  – a learning algorithm that indicates how learning takes place, and
  – recall techniques that determine how information is obtained from the network.
– We will look at propagation as the recall technique.

Page 83: NN Advantages

– Learning
– Can continue learning even after the training set has been applied.
– Easy parallelization
– Solves many problems

Page 84: NN Disadvantages

– Difficult to understand
– May suffer from overfitting
– The structure of the graph must be determined a priori.
– Input values must be numeric.
– Verification is difficult.

Page 85: Genetic Algorithms

– Optimization-search-type algorithms.
– Create an initial feasible solution and iteratively create new "better" solutions.
– Based on human evolution and survival of the fittest.
– Must represent a solution as an individual.
– Individual: a string I = I1, I2, ..., In where Ij is in a given alphabet A.
– Each character Ij is called a gene.
– Population: a set of individuals.

Page 86: Genetic Algorithms

– A genetic algorithm (GA) is a computational model consisting of five parts:
  – a starting set of individuals, P;
  – crossover: a technique to combine two parents to create offspring;
  – mutation: randomly change an individual;
  – fitness: determine the best individuals;
  – an algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.
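
A minimal GA over bit strings, with all specifics (population size, rates, and the count-the-1s fitness function) chosen only for illustration:

import random

def crossover(p1, p2):
    k = random.randrange(1, len(p1))        # single crossover point
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

def mutate(ind, rate=0.1):
    # Flip each gene independently with the given probability.
    return "".join("10"[int(b)] if random.random() < rate else b for b in ind)

fitness = lambda ind: ind.count("1")        # toy fitness: number of 1 genes

population = ["".join(random.choice("01") for _ in range(8)) for _ in range(6)]
for _ in range(30):
    best = sorted(population, key=fitness, reverse=True)[:2]   # keep the fittest
    children = [mutate(c) for c in crossover(*best)]
    population = best + children
print(max(population, key=fitness))         # tends toward "11111111"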

Page 87: Crossover Examples

(Figure: crossover on the bit-string parents 111111 and 000000.
a) Single crossover: exchanging the tails after one crossover point yields the children 111000 and 000111.
b) Multiple crossover: exchanging the segment between two crossover points yields children such as 110011 and 001100.)

Page 88: Genetic Algorithm

Page 89: GA Advantages/Disadvantages

– Advantages:
  – Easily parallelized.
– Disadvantages:
  – Difficult to understand and explain to end users.
  – Abstraction of the problem and the method to represent individuals is quite difficult.
  – Determining the fitness function is difficult.
  – Determining how to perform crossover and mutation is difficult.

Page 90: DATA MINING: Introductory and Advanced Topics, Part II

Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University

Companion slides for the text by Dr. M. H. Dunham, Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.

Page 91: Data Mining Outline

PART I
– Introduction
– Related Concepts
– Data Mining Techniques

PART II
– Classification
– Clustering
– Association Rules

PART III
– Web Mining
– Spatial Mining
– Temporal Mining

Page 92: Classification Outline

– Classification problem overview
– Classification techniques:
  – Regression
  – Distance
  – Decision trees
  – Rules
  – Neural networks

Goal: Provide an overview of the classification problem and introduce some of the basic algorithms.

Page 93: Classification Problem

– Given a database D = {t1, t2, ..., tn} and a set of classes C = {C1, ..., Cm}, the classification problem is to define a mapping f: D → C where each ti is assigned to one class.
– It actually divides D into equivalence classes.
– Prediction is similar, but may be viewed as having an infinite number of classes.

Page 94: Classification Examples

– Teachers classify students' grades as A, B, C, D, or F.
– Identify mushrooms as poisonous or edible.
– Predict when a river will flood.
– Identify individuals with credit risks.
– Speech recognition
– Pattern recognition

Page 95: Classification Ex: Grading

– If x >= 90 then grade = A.
– If 80 <= x < 90 then grade = B.
– If 70 <= x < 80 then grade = C.
– If 60 <= x < 70 then grade = D.
– If x < 60 then grade = F.

(Figure: the same rules drawn as a decision tree that splits on x at 90, 80, 70, and 60.)
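
The rules translate directly into code; a chain of comparisons plays the role of the tree's internal nodes:

def grade(x):
    # Each test mirrors one internal node of the decision tree.
    if x >= 90: return "A"
    if x >= 80: return "B"
    if x >= 70: return "C"
    if x >= 60: return "D"
    return "F"

print([grade(s) for s in (95, 85, 75, 65, 55)])   # ['A', 'B', 'C', 'D', 'F']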

Page 96: Classification Ex: Letter Recognition

– View letters as constructed from 5 components.

(Figure: the letters A through F, each built from the five stroke components.)

Page 97: Classification Techniques

– Approach:
  1. Create a specific model by evaluating training data (or using domain experts' knowledge).
  2. Apply the model developed to new data.
– Classes must be predefined.
– The most common techniques use DTs or NNs, or are based on distances or statistical methods.

Page 98: Defining Classes

(Figure: classes defined by partitioning the attribute space vs. by distance to class representatives.)

Page 99: Issues in Classification

– Missing data:
  – Ignore it
  – Replace it with an assumed value
– Measuring performance:
  – Classification accuracy on test data
  – Confusion matrix
  – OC curve

Page 100: Height Example Data

Name       Gender  Height  Output1  Output2
Kristina   F       1.6 m   Short    Medium
Jim        M       2 m     Tall     Medium
Maggie     F       1.9 m   Medium   Tall
Martha     F       1.88 m  Medium   Tall
Stephanie  F       1.7 m   Short    Medium
Bob        M       1.85 m  Medium   Medium
Kathy      F       1.6 m   Short    Medium
Dave       M       1.7 m   Short    Medium
Worth      M       2.2 m   Tall     Tall
Steven     M       2.1 m   Tall     Tall
Debbie     F       1.8 m   Medium   Medium
Todd       M       1.95 m  Medium   Medium
Kim        F       1.9 m   Medium   Tall
Amy        F       1.8 m   Medium   Medium
Wynette    F       1.75 m  Medium   Medium

Page 101: Classification Performance

– True positive
– False positive
– True negative
– False negative

Page 102: Confusion Matrix Example

Using the height data example, with Output1 correct and Output2 the actual assignment:

Membership  Assigned Short  Assigned Medium  Assigned Tall
Short       0               4                0
Medium      0               5                3
Tall        0               1                2

Page 103: Operating Characteristic Curve

Page 104: Regression

– Assume the data fits a predefined function.
– Determine the best values for the regression coefficients c0, c1, ..., cn.
– Assume an error term: y = c0 + c1 x1 + ... + cn xn + ε
– Estimate the error using the mean squared error for the training set.

Page 105: Linear Regression Poor Fit

Page 106: Classification Using Regression

– Division: use the regression function to divide the area into regions.
– Prediction: use the regression function to predict a class membership function; the input includes the desired class.

Page 107: Division

Page 108: Prediction

Page 109: Classification Using Distance

– Place items in the class to which they are "closest".
– Must determine the distance between an item and a class.
– Classes represented by:
  – Centroid: central value.
  – Medoid: representative point.
  – Individual points
– Algorithm: KNN

Page 110: K Nearest Neighbor (KNN)

– The training set includes classes.
– Examine the K items nearest to the item being classified.
– The new item is placed in the class with the largest number of these close items.
– O(q) for each tuple to be classified, where q is the size of the training set.
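
A compact sketch of KNN on the height data from page 100 (heights in meters, Output1 as the class labels):

from collections import Counter

train = [(1.6, "Short"), (2.0, "Tall"), (1.9, "Medium"), (1.88, "Medium"),
         (1.7, "Short"), (1.85, "Medium"), (1.6, "Short"), (1.7, "Short"),
         (2.2, "Tall"), (2.1, "Tall"), (1.8, "Medium"), (1.95, "Medium"),
         (1.9, "Medium"), (1.8, "Medium"), (1.75, "Medium")]

def knn(height, k=5):
    # Scan the whole training set (the O(q) step), keep the k nearest,
    # and take a majority vote among their classes.
    nearest = sorted(train, key=lambda t: abs(t[0] - height))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn(1.94))   # 'Medium' on this data with k = 5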

Page 111: KNN

Page 112: KNN Algorithm

Page 113: Classification Using Decision Trees

– Partitioning based: divide the search space into rectangular regions.
– A tuple is placed into a class based on the region within which it falls.
– DT approaches differ in how the tree is built: DT induction.
– Internal nodes are associated with attributes and arcs with the values of those attributes.
– Algorithms: ID3, C4.5, CART

Page 114: Decision Tree

Given:
– D = {t1, ..., tn} where ti = <ti1, ..., tih>
– The database schema contains {A1, A2, ..., Ah}.
– Classes C = {C1, ..., Cm}

A decision or classification tree is a tree associated with D such that:
– each internal node is labeled with an attribute Ai,
– each arc is labeled with a predicate which can be applied to the attribute at its parent, and
– each leaf node is labeled with a class Cj.

Page 115: DT Induction

Page 116: DT Splits Area

(Figure: splitting first on Gender (M/F) and then on Height divides the search space into rectangular regions.)

Page 117: Comparing DTs

(Figure: a balanced tree vs. a deep tree for the same data.)

Page 118: DT Issues

– Choosing splitting attributes
– Ordering of splitting attributes
– Splits
– Tree structure
– Stopping criteria
– Training data
– Pruning

Page 119: Information Theory and DT Induction

Decision tree induction is often based on information theory, so the next slides review information and entropy.

Page 120: Information

Page 121: DT Induction

– When all the marbles in the bowl are mixed up, little information is given.
– When the marbles in the bowl are all from one class, and those in the other two classes are on either side, more information is given.
– Use this approach with DT induction!

Page 122: Information/Entropy

– Given probabilities p1, p2, ..., ps whose sum is 1, entropy is defined as:
  H(p1, ..., ps) = Σ pi log(1/pi)
– Entropy measures the amount of randomness or surprise or uncertainty.
– Goal in classification: no surprise, i.e., entropy = 0.

Page 123: Entropy

(Figure: log(1/p) and the two-class entropy H(p, 1−p) as functions of p; H(p, 1−p) is 0 at p = 0 or 1 and maximal at p = 0.5.)

Page 124: ID3

– Creates a tree using information theory concepts and tries to reduce the expected number of comparisons.
– ID3 chooses the split attribute with the highest information gain:
  Gain(D, S) = H(D) − Σ P(Di) H(Di)

Page 125: ID3 Example (Output1)

– Starting state entropy:
  4/15 log(15/4) + 8/15 log(15/8) + 3/15 log(15/3) = 0.4384
– Gain using gender:
  – Female: 3/9 log(9/3) + 6/9 log(9/6) = 0.2764
  – Male: 1/6 log(6/1) + 2/6 log(6/2) + 3/6 log(6/3) = 0.4392
  – Weighted sum: (9/15)(0.2764) + (6/15)(0.4392) = 0.34152
  – Gain: 0.4384 − 0.34152 = 0.09688
– Gain using height: 0.4384 − (2/15)(0.301) = 0.3983
– Choose height as the first splitting attribute.
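
These figures can be reproduced in a few lines (the slide's logarithms are base 10), up to small rounding differences:

from math import log10
from collections import Counter

def entropy(labels):
    # H = sum of p * log10(1/p) over the class proportions.
    n = len(labels)
    return sum((c / n) * log10(n / c) for c in Counter(labels).values())

output1 = ["Short"] * 4 + ["Medium"] * 8 + ["Tall"] * 3
start = entropy(output1)
print(round(start, 4))                    # 0.4385 (the slide shows 0.4384)

female = ["Short"] * 3 + ["Medium"] * 6               # 9 females
male = ["Short"] + ["Medium"] * 2 + ["Tall"] * 3      # 6 males
weighted = (9 / 15) * entropy(female) + (6 / 15) * entropy(male)
print(round(start - weighted, 4))         # 0.0969: the gain for gender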

Page 126: C4.5

– ID3 favors attributes with a large number of divisions.
– C4.5 is an improved version of ID3 that adds handling for:
  – missing data
  – continuous data
  – pruning
  – rules
  – GainRatio, which normalizes the gain by the entropy of the split sizes themselves:
    GainRatio(D, S) = Gain(D, S) / H(|D1|/|D|, ..., |Ds|/|D|)

Page 127: CART

– Creates a binary tree.
– Uses entropy.
– Formula to choose the split point s for node t:
  Φ(s/t) = 2 PL PR Σj |P(Cj | tL) − P(Cj | tR)|
– PL and PR are the probabilities that a tuple in the training set will be on the left or right side of the tree.

Page 128: CART Example

– At the start, there are six choices for the split point (right branch on equality):
  – P(Gender) = 2(6/15)(9/15)(2/15 + 4/15 + 3/15) = 0.224
  – P(1.6) = 0
  – P(1.7) = 2(2/15)(13/15)(0 + 8/15 + 3/15) = 0.169
  – P(1.8) = 2(5/15)(10/15)(4/15 + 6/15 + 3/15) = 0.385
  – P(1.9) = 2(9/15)(6/15)(4/15 + 2/15 + 3/15) = 0.256
  – P(2.0) = 2(12/15)(3/15)(4/15 + 8/15 + 3/15) = 0.32
– Split at 1.8.

Page 129: Classification Using Neural Networks

– Typical NN structure for classification:
  – one output node per class;
  – the output value is the class membership function value.
– Supervised learning.
– For each tuple in the training set, propagate it through the NN and adjust the weights on the edges to improve future classification.
– Algorithms: propagation, backpropagation, gradient descent

Page 130: NN Issues

– Number of source nodes
– Number of hidden layers
– Training data
– Number of sinks
– Interconnections
– Weights
– Activation functions
– Learning technique
– When to stop learning

Decision Tree vs. Neural Network

Propagation
[Figure: a tuple's input values flow through the network to produce an output.]

NN Propagation Algorithm
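The propagation algorithm appears as an image in the original deck; the following is a hedged feedforward sketch of what it does (sigmoid activation and the layer representation are my assumptions):

    import math

    def sigmoid(s):
        return 1.0 / (1.0 + math.exp(-s))

    def propagate(x, layers):
        # layers: list of weight matrices; layers[k][j] = input weights of node j.
        activations = list(x)
        for weights in layers:
            activations = [sigmoid(sum(w * a for w, a in zip(node_w, activations)))
                           for node_w in weights]
        return activations   # one value per output node (class membership values)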

Example Propagation

NN Learning
– Adjust weights to perform better with the associated test data.
– Supervised: use feedback from knowledge of correct classification.
– Unsupervised: no knowledge of correct classification needed.

NN Supervised Learning

Supervised Learning
– Possible error values assuming the output from node i is yi but should be di: e.g. |di – yi| or the squared error (di – yi)^2.
– Change weights on arcs based on the estimated error.

NN Backpropagation
– Propagate changes to weights backward from the output layer to the input layer.
– Delta Rule: Δwij = c xij (dj – yj)
– Gradient Descent: technique to modify the weights in the graph.

Backpropagation
[Figure: the error is propagated backward through the layers.]

Backpropagation Algorithm

Gradient Descent

Gradient Descent Algorithm
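The backpropagation and gradient descent algorithms are images in the deck; below is a hedged single-hidden-layer sketch of one training step (sigmoid units and the learning rate c are my assumptions), reusing the sigmoid helper from the propagation sketch:

    def backprop_step(x, d, W_hidden, W_output, c=0.1):
        # Forward pass.
        h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in W_hidden]
        y = [sigmoid(sum(w * hi for w, hi in zip(ws, h))) for ws in W_output]
        # Output layer learning: delta_j = (d_j - y_j) y_j (1 - y_j).
        out_d = [(dj - yj) * yj * (1 - yj) for dj, yj in zip(d, y)]
        # Hidden layer learning: push deltas backward through the output weights.
        hid_d = [hi * (1 - hi) * sum(out_d[j] * W_output[j][i]
                                     for j in range(len(W_output)))
                 for i, hi in enumerate(h)]
        # Gradient descent: move each weight against the error gradient.
        for j, ws in enumerate(W_output):
            for i in range(len(ws)):
                ws[i] += c * out_d[j] * h[i]
        for i, ws in enumerate(W_hidden):
            for k in range(len(ws)):
                ws[k] += c * hid_d[i] * x[k]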

Output Layer Learning

Hidden Layer Learning

Types of NNs
Different NN structures are used for different problems:
– Perceptron
– Self Organizing Feature Map
– Radial Basis Function Network

Perceptron
– The perceptron is one of the simplest NNs.
– No hidden layers.

Perceptron Example
Suppose:
– Summation: S = 3x1 + 2x2 – 6
– Activation: if S > 0 then 1 else 0
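In code, this perceptron is one line of arithmetic plus the threshold (the sample inputs below are made up):

    def perceptron(x1, x2):
        s = 3 * x1 + 2 * x2 - 6        # summation from the slide
        return 1 if s > 0 else 0       # step activation

    print(perceptron(1, 1))   # S = -1 -> 0
    print(perceptron(2, 1))   # S =  2 -> 1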

Self Organizing Feature Map (SOFM)
– Competitive unsupervised learning
– Observe how neurons work in the brain:
  – Firing impacts firing of those near
  – Neurons far apart inhibit each other
  – Neurons have specific nonoverlapping tasks
– Ex: Kohonen Network

Kohonen Network

Kohonen Network
– Competitive layer: viewed as a 2D grid
– Similarity between competitive nodes and input nodes:
  – Input: X = <x1, …, xh>
  – Weights: <w1i, …, whi>
  – Similarity defined based on dot product
– The competitive node most similar to the input "wins"
– Winning node weights (as well as surrounding node weights) are increased.
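A hedged sketch of one competitive step (a flat list of nodes instead of a 2D grid, and a simple learning rate; a full SOFM would also update the winner's grid neighbors):

    def kohonen_step(x, weights, rate=0.5):
        dot = lambda u, v: sum(a * b for a, b in zip(u, v))
        # The competitive node most similar to the input (dot product) wins.
        winner = max(range(len(weights)), key=lambda i: dot(weights[i], x))
        # Increase the winner's weights toward the input.
        weights[winner] = [w + rate * (xi - w) for w, xi in zip(weights[winner], x)]
        return winner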

Radial Basis Function Network
– The RBF function has a Gaussian shape
– RBF Networks:
  – Three layers
  – Hidden layer: Gaussian activation function
  – Output layer: linear activation function

Radial Basis Function Network

Classification Using Rules
– Perform classification using If-Then rules
– Classification Rule: r = <a, c>, with antecedent a and consequent c
– Rules may be generated from other techniques (DT, NN) or generated directly.
– Algorithms: Gen, RX, 1R, PRISM

Generating Rules from DTs
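A hedged sketch of the conversion: each root-to-leaf path of the tree becomes one If-Then rule (the nested-tuple tree encoding is my assumption):

    def tree_to_rules(node, path=()):
        # node is ('leaf', class) or (attribute, {value: subtree}).
        if node[0] == 'leaf':
            return [(path, node[1])]      # (antecedent conjuncts, consequent)
        attribute, branches = node
        rules = []
        for value, child in branches.items():
            rules += tree_to_rules(child, path + ((attribute, value),))
        return rules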

Generating Rules Example

Generating Rules from NNs

1R Algorithm
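The 1R algorithm itself is an image in the deck; this sketch follows the usual 1R idea (per attribute, predict the majority class for each value and keep the attribute with the fewest errors). The data layout is my assumption:

    from collections import Counter

    def one_r(rows, attributes, class_idx):
        # rows: list of tuples; attributes: {name: column index}.
        best = None
        for name, col in attributes.items():
            rules, errors = {}, 0
            for value in {row[col] for row in rows}:
                classes = Counter(r[class_idx] for r in rows if r[col] == value)
                majority, count = classes.most_common(1)[0]
                rules[value] = majority
                errors += sum(classes.values()) - count
            if best is None or errors < best[2]:
                best = (name, rules, errors)
        return best   # (attribute, {value: class}, error count)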

1R Example

PRISM Algorithm

PRISM Example

Decision Tree vs. Rules
– A tree has an implied order in which splitting is performed; rules have no ordering of predicates.
– A tree is created by looking at all classes; rules only need to look at one class to generate its rules.

Clustering Outline
– Clustering Problem Overview
– Clustering Techniques
  – Hierarchical Algorithms
  – Partitional Algorithms
  – Genetic Algorithm
  – Clustering Large Databases
Goal: Provide an overview of the clustering problem and introduce some of the basic algorithms.

Clustering Examples
– Segment a customer database based on similar buying patterns.
– Group houses in a town into neighborhoods based on similar features.
– Identify new plant species.
– Identify similar Web usage patterns.

Clustering Example

Clustering Houses
[Figure: the same houses clustered two ways — size based vs. geographic distance based.]

Clustering vs. Classification
– No prior knowledge:
  – Number of clusters
  – Meaning of clusters
– Unsupervised learning

Clustering Issues
– Outlier handling
– Dynamic data
– Interpreting results
– Evaluating results
– Number of clusters
– Data to be used
– Scalability

Impact of Outliers on Clustering

Clustering Problem
– Given a database D = {t1, t2, …, tn} of tuples and an integer value k, the Clustering Problem is to define a mapping f: D → {1, …, k} where each ti is assigned to one cluster Kj, 1 ≤ j ≤ k.
– A Cluster, Kj, contains precisely those tuples mapped to it.
– Unlike the classification problem, clusters are not known a priori.

Types of Clustering
– Hierarchical: a nested set of clusters is created.
– Partitional: one set of clusters is created.
– Incremental: each element is handled one at a time.
– Simultaneous: all elements are handled together.
– Overlapping/Non-overlapping

Clustering Approaches
[Taxonomy figure:]
– Hierarchical: Agglomerative, Divisive
– Partitional
– Categorical
– Large DB: Sampling, Compression

Cluster Parameters
[Formulas shown as an image; the parameters are the cluster centroid, radius, and diameter.]

Distance Between Clusters
– Single Link: smallest distance between points
– Complete Link: largest distance between points
– Average Link: average distance between points
– Centroid: distance between centroids

Hierarchical Clustering
– Clusters are created in levels, actually creating sets of clusters at each level.
– Agglomerative:
  – Initially each item is in its own cluster
  – Iteratively clusters are merged together
  – Bottom up
– Divisive:
  – Initially all items are in one cluster
  – Large clusters are successively divided
  – Top down

Hierarchical Algorithms
– Single Link
– MST Single Link
– Complete Link
– Average Link

Dendrogram
– Dendrogram: a tree data structure which illustrates hierarchical clustering techniques.
– Each level shows clusters for that level:
  – Leaf: individual clusters
  – Root: one cluster
– A cluster at level i is the union of its children clusters at level i+1.

Levels of Clustering

Agglomerative Example

        A  B  C  D  E
    A   0  1  2  2  3
    B   1  0  2  4  3
    C   2  2  0  1  5
    D   2  4  1  0  3
    E   3  3  5  3  0

[Figure: the graph of A–E and the dendrogram produced at thresholds 1 through 5: A,B and C,D merge at threshold 1, the two pairs merge at 2, and E joins at 3.]
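A sketch that replays this example: single-link agglomerative clustering merges any two clusters joined by an edge at or below the current threshold:

    dist = {('A','B'): 1, ('A','C'): 2, ('A','D'): 2, ('A','E'): 3,
            ('B','C'): 2, ('B','D'): 4, ('B','E'): 3,
            ('C','D'): 1, ('C','E'): 5, ('D','E'): 3}

    def single_link(items, threshold):
        clusters = [{i} for i in items]
        d = lambda c1, c2: min(dist[min(a, b), max(a, b)] for a in c1 for b in c2)
        merged = True
        while merged:
            merged = False
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    if d(clusters[i], clusters[j]) <= threshold:
                        clusters[i] |= clusters.pop(j)
                        merged = True
                        break
                if merged:
                    break
        return clusters

    for t in (1, 2, 3):
        print(t, single_link('ABCDE', t))
    # 1: {A,B} {C,D} {E};  2: {A,B,C,D} {E};  3: one cluster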

MST Example

        A  B  C  D  E
    A   0  1  2  2  3
    B   1  0  2  4  3
    C   2  2  0  1  5
    D   2  4  1  0  3
    E   3  3  5  3  0

[Figure: the graph of A–E with a minimum spanning tree, e.g. edges A–B and C–D (distance 1), A–C (2), and A–E (3).]

Agglomerative Algorithm

Single Link
– View all items with links (distances) between them.
– Finds maximal connected components in this graph.
– Two clusters are merged if there is at least one edge which connects them.
– Uses threshold distances at each level.
– Could be agglomerative or divisive.

MST Single Link Algorithm

Single Link Clustering

Partitional Clustering
– Nonhierarchical
– Creates clusters in one step as opposed to several steps.
– Since only one set of clusters is output, the user normally has to input the desired number of clusters, k.
– Usually deals with static sets.

Partitional Algorithms
– MST
– Squared Error
– K-Means
– Nearest Neighbor
– PAM
– BEA
– GA

MST Algorithm

Squared Error
Minimize the squared error: the sum, over every cluster Kj and every tuple ti in Kj, of the squared distance from ti to the center of Kj.

Squared Error Algorithm

K-Means
– Initial set of clusters randomly chosen.
– Iteratively, items are moved among sets of clusters until the desired set is reached.
– A high degree of similarity among elements in a cluster is obtained.
– Given a cluster Ki = {ti1, ti2, …, tim}, the cluster mean is mi = (1/m)(ti1 + … + tim)

K-Means Example
– Given: {2,4,10,12,3,20,30,11,25}, k=2
– Randomly assign means: m1=3, m2=4
– K1={2,3}, K2={4,10,12,20,30,11,25}, m1=2.5, m2=16
– K1={2,3,4}, K2={10,12,20,30,11,25}, m1=3, m2=18
– K1={2,3,4,10}, K2={12,20,30,11,25}, m1=4.75, m2=19.6
– K1={2,3,4,10,11,12}, K2={20,30,25}, m1=7, m2=25
– Stop, as the clusters with these means are the same.
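A sketch that reproduces this trace (assign each point to the nearer mean, recompute means, stop when the means no longer change):

    def kmeans_1d(points, means):
        while True:
            clusters = [[] for _ in means]
            for p in points:
                nearest = min(range(len(means)), key=lambda i: abs(p - means[i]))
                clusters[nearest].append(p)
            new_means = [sum(c) / len(c) for c in clusters]
            if new_means == means:
                return clusters, means
            means = new_means

    print(kmeans_1d([2, 4, 10, 12, 3, 20, 30, 11, 25], [3, 4]))
    # K1 = {2,3,4,10,11,12}, K2 = {20,30,25}, means 7 and 25 as on the slide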

K-Means Algorithm

Nearest Neighbor
– Items are iteratively merged into the existing clusters that are closest.
– Incremental
– A threshold, t, is used to determine if items are added to existing clusters or a new cluster is created.

Nearest Neighbor Algorithm

PAM
– Partitioning Around Medoids (PAM) (K-Medoids)
– Handles outliers well.
– Ordering of input does not impact results.
– Does not scale well.
– Each cluster is represented by one item, called the medoid.
– Initial set of k medoids randomly chosen.

PAM

PAM Cost Calculation
– At each step in the algorithm, medoids are changed if the overall cost is improved.
– Cjih: cost change for an item tj associated with swapping medoid ti with non-medoid th.
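A hedged sketch of the swap test (overall cost = each item's distance to its nearest medoid; the names and the distance argument are mine):

    def total_cost(items, medoids, dist):
        return sum(min(dist(t, m) for m in medoids) for t in items)

    def try_swap(items, medoids, t_i, t_h, dist):
        # Swap medoid t_i for non-medoid t_h only if the overall cost improves.
        candidate = [t_h if m == t_i else m for m in medoids]
        if total_cost(items, candidate, dist) < total_cost(items, medoids, dist):
            return candidate
        return medoids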

PAM Algorithm

BEA
– Bond Energy Algorithm
– Database design (physical and logical)
– Vertical fragmentation
– Determine affinity (bond) between attributes based on common usage.
– Algorithm outline:
  1. Create affinity matrix
  2. Convert to BOND matrix
  3. Create regions of close bonding

BEA
[Figure modified from [OV99]]

Genetic Algorithm Example
– {A,B,C,D,E,F,G,H}
– Randomly choose an initial solution: {A,C,E} {B,F} {D,G,H}, or 10101000, 01000100, 00010011
– Suppose crossover at point four and choose the 1st and 3rd individuals: 10100011, 01000100, 00011000
– What should the termination criteria be?
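A sketch reproducing the crossover step (individuals as bit strings; one-point crossover swaps the tails after the chosen point):

    def crossover(parent1, parent2, point):
        return (parent1[:point] + parent2[point:],
                parent2[:point] + parent1[point:])

    print(crossover('10101000', '00010011', 4))
    # -> ('10100011', '00011000'), the new individuals shown above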

GA Algorithm

Clustering Large Databases
– Most clustering algorithms assume a large data structure which is memory resident.
– Clustering may be performed first on a sample of the database, then applied to the entire database.
– Algorithms:
  – BIRCH
  – DBSCAN
  – CURE

Desired Features for Large Databases
– One scan (or less) of the DB
– Online
– Suspendable, stoppable, resumable
– Incremental
– Work with limited main memory
– Different techniques to scan (e.g. sampling)
– Process each tuple once

BIRCH
– Balanced Iterative Reducing and Clustering using Hierarchies
– Incremental, hierarchical, one scan
– Saves clustering information in a tree
– Each entry in the tree contains information about one cluster
– New nodes inserted in the closest entry in the tree

Clustering Feature
– CF Triple: (N, LS, SS)
  – N: number of points in the cluster
  – LS: sum of the points in the cluster
  – SS: sum of the squares of the points in the cluster
– CF Tree:
  – Balanced search tree
  – A node has a CF triple for each child
  – A leaf node represents a cluster and has a CF value for each subcluster in it
  – A subcluster has a maximum diameter
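A sketch of why CF triples suffice (one-dimensional points assumed for brevity): they add component-wise when subclusters merge, and summary statistics such as the centroid fall out of (N, LS, SS):

    def cf(points):
        return (len(points), sum(points), sum(p * p for p in points))

    def merge(cf1, cf2):
        # CF additivity: merging two subclusters just adds their triples.
        return tuple(a + b for a, b in zip(cf1, cf2))

    def centroid(triple):
        n, ls, _ss = triple
        return ls / n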

BIRCH Algorithm

Improve Clusters

DBSCAN
– Density Based Spatial Clustering of Applications with Noise
– Outliers will not affect the creation of clusters.
– Input:
  – MinPts: minimum number of points in a cluster
  – Eps: for each point in a cluster there must be another point in it less than this distance away.

DBSCAN Density Concepts
– Eps-neighborhood: points within Eps distance of a point.
– Core point: a point whose Eps-neighborhood is dense enough (MinPts).
– Directly density-reachable: a point p is directly density-reachable from a point q if the distance is small (Eps) and q is a core point.
– Density-reachable: a point is density-reachable from another point if there is a path from one to the other consisting of only core points.

Density Concepts

DBSCAN Algorithm
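The algorithm itself is an image in the deck; below is a compact hedged sketch of the density idea (grow a cluster from each unvisited core point; points in no cluster are noise). Seed handling is simplified:

    def dbscan(points, eps, min_pts, dist):
        labels, cluster_id = {}, 0        # index -> cluster id (None = noise)
        neighbors = lambda i: [j for j in range(len(points))
                               if dist(points[i], points[j]) <= eps]
        for i in range(len(points)):
            if i in labels:
                continue
            seeds = neighbors(i)
            if len(seeds) < min_pts:
                labels[i] = None          # not a core point: mark as noise
                continue
            cluster_id += 1
            labels[i] = cluster_id
            while seeds:
                j = seeds.pop()
                if labels.get(j) is None:             # unvisited or noise
                    labels[j] = cluster_id            # density-reachable from i
                    more = neighbors(j)
                    if len(more) >= min_pts:          # j is also a core point
                        seeds.extend(m for m in more if labels.get(m) is None)
        return labels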

CURE
– Clustering Using Representatives
– Uses many points to represent a cluster instead of only one
– Points will be well scattered

CURE Approach

CURE Algorithm

CURE for Large Databases

Comparison of Clustering Techniques

Association Rules Outline
Goal: Provide an overview of basic Association Rule mining techniques
– Association Rules Problem Overview
  – Large itemsets
– Association Rules Algorithms
  – Apriori
  – Sampling
  – Partitioning
  – Parallel Algorithms
– Comparing Techniques
– Incremental Algorithms
– Advanced AR Techniques

Example: Market Basket Data
– Items frequently purchased together: Bread → PeanutButter
– Uses:
  – Placement
  – Advertising
  – Sales
  – Coupons
– Objective: increase sales and reduce costs

Association Rule Definitions
– Set of items: I = {I1, I2, …, Im}
– Transactions: D = {t1, t2, …, tn}, tj ⊆ I
– Itemset: {Ii1, Ii2, …, Iik} ⊆ I
– Support of an itemset: percentage of transactions which contain that itemset.
– Large (frequent) itemset: itemset whose number of occurrences is above a threshold.

Association Rules Example
I = {Beer, Bread, Jelly, Milk, PeanutButter}
Support of {Bread, PeanutButter} is 60%

Association Rule Definitions
– Association Rule (AR): implication X → Y where X, Y ⊆ I and X ∩ Y = ∅
– Support of AR (s) X → Y: percentage of transactions that contain X ∪ Y
– Confidence of AR (α) X → Y: ratio of the number of transactions that contain X ∪ Y to the number that contain X

Association Rules Ex (cont’d)

Association Rule Problem
– Given a set of items I = {I1, I2, …, Im} and a database of transactions D = {t1, t2, …, tn} where ti = {Ii1, Ii2, …, Iik} and Iij ∈ I, the Association Rule Problem is to identify all association rules X → Y with a minimum support and confidence.
– Link Analysis
– NOTE: the support of X → Y is the same as the support of X ∪ Y.

Association Rule Techniques
1. Find large itemsets.
2. Generate rules from frequent itemsets.

Algorithm to Generate ARs

Apriori
– Large Itemset Property: any subset of a large itemset is large.
– Contrapositive: if an itemset is not large, none of its supersets are large.

Large Itemset Property

Apriori Ex (cont’d)
s = 30%, α = 50%

Apriori Algorithm
1. C1 = itemsets of size one in I;
2. Determine all large itemsets of size 1, L1;
3. i = 1;
4. Repeat
5.   i = i + 1;
6.   Ci = Apriori-Gen(Li-1);
7.   Count Ci to determine Li;
8. until no more large itemsets found;

Apriori-Gen
– Generate candidates of size i+1 from large itemsets of size i.
– Approach used: join large itemsets of size i if they agree on i-1 items.
– May also prune candidates that have subsets that are not large.

Apriori-Gen Example

Apriori-Gen Example (cont’d)

Apriori Adv/Disadv
– Advantages:
  – Uses the large itemset property.
  – Easily parallelized.
  – Easy to implement.
– Disadvantages:
  – Assumes the transaction database is memory resident.
  – Requires up to m database scans.

Sampling
– Large databases
– Sample the database and apply Apriori to the sample.
– Potentially Large Itemsets (PL): large itemsets from the sample
– Negative Border (BD⁻):
  – Generalization of Apriori-Gen applied to itemsets of varying sizes.
  – Minimal set of itemsets which are not in PL, but whose subsets are all in PL.

Negative Border Example
[Figure: itemset lattice showing PL and BD⁻(PL).]

Sampling Algorithm
1. Ds = sample of database D;
2. PL = large itemsets in Ds using smalls;
3. C = PL ∪ BD⁻(PL);
4. Count C in database using s;
5. ML = large itemsets in BD⁻(PL);
6. If ML = ∅ then done
7. else C = repeated application of BD⁻;
8. Count C in database;

Sampling Example
– Find ARs assuming s = 20%
– Ds = {t1, t2}
– Smalls = 10%
– PL = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}}
– BD⁻(PL) = {{Beer}, {Milk}}
– ML = {{Beer}, {Milk}}
– Repeated application of BD⁻ generates all remaining itemsets

Sampling Adv/Disadv
– Advantages:
  – Reduces the number of database scans to one in the best case and two in the worst.
  – Scales better.
– Disadvantages:
  – Potentially large number of candidates in the second pass.

Partitioning
– Divide the database into partitions D1, D2, …, Dp
– Apply Apriori to each partition
– Any large itemset must be large in at least one partition.

Partitioning Algorithm
1. Divide D into partitions D1, D2, …, Dp;
2. For i = 1 to p do
3.   Li = Apriori(Di);
4. C = L1 ∪ … ∪ Lp;
5. Count C on D to generate L;
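A sketch of this loop, reusing the apriori sketch above (the second scan recounts the union of per-partition large itemsets on the whole database):

    def partition_apriori(partitions, min_sup):
        candidates = set()
        for d_i in partitions:                   # one Apriori pass per partition
            candidates |= set(apriori(d_i, min_sup))
        database = [t for d_i in partitions for t in d_i]
        return [c for c in candidates            # global recount (second scan)
                if sum(c <= t for t in database) / len(database) >= min_sup]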

Partitioning Example
D1, D2, s = 10%
L1 = {{Bread}, {Jelly}, {PeanutButter}, {Bread,Jelly}, {Bread,PeanutButter}, {Jelly,PeanutButter}, {Bread,Jelly,PeanutButter}}
L2 = {{Bread}, {Milk}, {PeanutButter}, {Bread,Milk}, {Bread,PeanutButter}, {Milk,PeanutButter}, {Bread,Milk,PeanutButter}, {Beer}, {Beer,Bread}, {Beer,Milk}}

Partitioning Adv/Disadv
– Advantages:
  – Adapts to available main memory
  – Easily parallelized
  – Maximum number of database scans is two.
– Disadvantages:
  – May have many candidates during the second scan.

Parallelizing AR Algorithms
– Based on Apriori
– Techniques differ:
  – What is counted at each site
  – How data (transactions) are distributed
– Data Parallelism:
  – Data partitioned
  – Count Distribution Algorithm
– Task Parallelism:
  – Data and candidates partitioned
  – Data Distribution Algorithm

Count Distribution Algorithm (CDA)
1. Place a data partition at each site.
2. In parallel at each site do
3.   C1 = itemsets of size one in I;
4.   Count C1;
5.   Broadcast counts to all sites;
6.   Determine global large itemsets of size 1, L1;
7.   i = 1;
8.   Repeat
9.     i = i + 1;
10.    Ci = Apriori-Gen(Li-1);
11.    Count Ci;
12.    Broadcast counts to all sites;
13.    Determine global large itemsets of size i, Li;
14.  until no more large itemsets found;

CDA Example

Data Distribution Algorithm (DDA)
1. Place a data partition at each site.
2. In parallel at each site do
3.   Determine local candidates of size 1 to count;
4.   Broadcast local transactions to other sites;
5.   Count local candidates of size 1 on all data;
6.   Determine large itemsets of size 1 for local candidates;
7.   Broadcast large itemsets to all sites;
8.   Determine L1;
9.   i = 1;
10.  Repeat
11.    i = i + 1;
12.    Ci = Apriori-Gen(Li-1);
13.    Determine local candidates of size i to count;
14.    Count, broadcast, and find Li;
15.  until no more large itemsets found;

DDA Example

Comparing AR Techniques
– Target
– Type
– Data Type
– Data Source
– Technique
– Itemset Strategy and Data Structure
– Transaction Strategy and Data Structure
– Optimization
– Architecture
– Parallelism Strategy

Comparison of AR Techniques

Hash Tree

Incremental Association Rules
– Generate ARs in a dynamic database.
– Problem: the algorithms assume a static database.
– Objective:
  – Know the large itemsets for D
  – Find the large itemsets for D ∪ ΔD
– An itemset must be large in either D or ΔD
– Save Li and counts

Note on ARs
– Many applications outside market basket data analysis:
  – Prediction (telecom switch failure)
  – Web usage mining
– Many different types of association rules:
  – Temporal
  – Spatial
  – Causal

Advanced AR Techniques
– Generalized Association Rules
– Multiple-Level Association Rules
– Quantitative Association Rules
– Using multiple minimum supports
– Correlation Rules

Measuring Quality of Rules
– Support
– Confidence
– Interest
– Conviction
– Chi Squared Test

DATA MINING
Introductory and Advanced Topics
Part III

Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University

Companion slides for the text by Dr. M.H. Dunham, Data Mining, Introductory and Advanced Topics, Prentice Hall, 2002.

Data Mining Outline
– PART I
  – Introduction
  – Related Concepts
  – Data Mining Techniques
– PART II
  – Classification
  – Clustering
  – Association Rules
– PART III
  – Web Mining
  – Spatial Mining
  – Temporal Mining

Web Mining Outline
Goal: Examine the use of data mining on the World Wide Web
– Introduction
– Web Content Mining
– Web Structure Mining
– Web Usage Mining

Web Mining Issues
– Size
  – >350 million pages (1999)
  – Grows at about 1 million pages a day
  – Google indexes 3 billion documents
– Diverse types of data

Web Data
– Web pages
– Intra-page structures
– Inter-page structures
– Usage data
– Supplemental data:
  – Profiles
  – Registration information
  – Cookies

Web Mining Taxonomy
[Figure modified from [zai01]]

Web Content Mining
– Extends the work of basic search engines
– Search engines:
  – IR application
  – Keyword based
  – Similarity between query and document
  – Crawlers
  – Indexing
  – Profiles
  – Link analysis

Crawlers
– A robot (spider) traverses the hypertext structure in the Web.
– Collects information from visited pages.
– Used to construct indexes for search engines.
– Traditional crawler: visits the entire Web (?) and replaces the index.
– Periodic crawler: visits portions of the Web and updates a subset of the index.
– Incremental crawler: selectively searches the Web and incrementally modifies the index.
– Focused crawler: visits pages related to a particular subject.
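A sketch of the basic crawl loop (breadth-first; fetch_links is a placeholder you would supply for page retrieval and link extraction, not a library call):

    from collections import deque

    def crawl(seeds, fetch_links, max_pages=1000):
        seen, queue, index = set(seeds), deque(seeds), {}
        while queue and len(index) < max_pages:
            url = queue.popleft()
            links = fetch_links(url)     # download the page, return outgoing URLs
            index[url] = links           # stand-in for real index construction
            for link in links:
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
        return index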

Focused Crawler
– Only visit links from a page if that page is determined to be relevant.
– The classifier is static after the learning phase.
– Components:
  – Classifier, which assigns a relevance score to each page based on the crawl topic.
  – Distiller, to identify hub pages.
  – Crawler, which visits pages based on crawler and distiller scores.

Focused Crawler
– The classifier relates documents to topics.
– The classifier also determines how useful outgoing links are.
– Hub pages contain links to many relevant pages. They must be visited even if they do not have a high relevance score.

Focused Crawler

Context Focused Crawler
– Context graph:
  – A context graph is created for each seed document.
  – The root is the seed document.
  – Nodes at each level show documents with links to documents at the next higher level.
  – Updated during the crawl itself.
– Approach:
  1. Construct context graph and classifiers using seed documents as training data.
  2. Perform crawling using the classifiers and context graph created.

Context Graph

Virtual Web View
– A Multiple Layered DataBase (MLDB) built on top of the Web.
– Each layer of the database is more generalized (and smaller) and centralized than the one beneath it.
– Upper layers of the MLDB are structured and can be accessed with SQL-type queries.
– Translation tools convert Web documents to XML.
– Extraction tools extract desired information to place in the first layer of the MLDB.
– Higher levels contain more summarized data obtained through generalizations of the lower levels.

Personalization
– Web access or contents tuned to better fit the desires of each user.
– Manual techniques identify a user’s preferences based on profiles or demographics.
– Collaborative filtering identifies preferences based on ratings from similar users.
– Content based filtering retrieves pages based on similarity between pages and user profiles.

Web Structure Mining
– Mine the structure (links, graph) of the Web
– Techniques:
  – PageRank
  – CLEVER
– Create a model of the Web organization.
– May be combined with content mining to more effectively retrieve important pages.

PageRank
– Used by Google
– Prioritizes pages returned from a search by looking at Web structure.
– The importance of a page is calculated based on the number of pages which point to it: backlinks.
– Weighting is used to give more importance to backlinks coming from important pages.

PageRank (cont’d)
PR(p) = c (PR(1)/N1 + … + PR(n)/Nn)
– PR(i): PageRank for a page i which points to target page p.
– Ni: number of links coming out of page i
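An iterative sketch of this formula (the graph maps each page to the pages it links to; the (1 − c)/n teleport term is the usual damping addition, which the slide's formula leaves implicit):

    def pagerank(graph, c=0.85, iterations=50):
        n = len(graph)
        pr = {p: 1.0 / n for p in graph}
        for _ in range(iterations):
            incoming = {p: 0.0 for p in graph}
            for p, out_links in graph.items():
                for q in out_links:
                    if q in incoming:
                        incoming[q] += pr[p] / len(out_links)   # PR(p)/N_p
            pr = {p: (1 - c) / n + c * incoming[p] for p in graph}
        return pr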

CLEVER
– Identifies authoritative and hub pages.
– Authoritative pages:
  – Highly important pages.
  – Best source for the requested information.
– Hub pages:
  – Contain links to highly important pages.

HITS
– Hyperlink-Induced Topic Search
– Based on a set of keywords, find a set of relevant pages, R.
– Identify hub and authority pages for these:
  – Expand R to a base set, B, of pages linked to or from R.
  – Calculate weights for authorities and hubs.
– Pages with the highest ranks in R are returned.

HITS Algorithm
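The algorithm is an image in the deck; the standard HITS iteration it refers to looks like this (authority score = sum of the hub scores of pages linking in; hub score = sum of the authority scores linked to; normalized each round):

    def hits(pages, links, iterations=20):
        # links: set of (p, q) pairs meaning page p links to page q.
        auth = {p: 1.0 for p in pages}
        hub = {p: 1.0 for p in pages}
        for _ in range(iterations):
            auth = {q: sum(hub[p] for p, q2 in links if q2 == q) for q in pages}
            hub = {p: sum(auth[q] for p2, q in links if p2 == p) for p in pages}
            a, h = sum(auth.values()) or 1.0, sum(hub.values()) or 1.0
            auth = {p: v / a for p, v in auth.items()}
            hub = {p: v / h for p, v in hub.items()}
        return auth, hub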

Web Usage Mining

Web Usage Mining Applications
– Personalization
– Improve the structure of a site’s Web pages
– Aid in caching and prediction of future page references
– Improve the design of individual pages
– Improve the effectiveness of e-commerce (sales and advertising)

Web Usage Mining Activities
– Preprocessing the Web log
  – Cleanse: remove extraneous information
  – Sessionize (Session: sequence of pages referenced by one user at a sitting)
– Pattern Discovery
  – Count patterns that occur in sessions
  – A pattern is a sequence of page references in a session
  – Similar to association rules (transaction: session; itemset: pattern or subset; order is important)
– Pattern Analysis

ARs in Web Mining
– Web mining: content, structure, usage
– Frequent patterns of sequential page references in Web searching.
– Uses:
  – Caching
  – Clustering users
  – Developing user profiles
  – Identifying important pages

Web Usage Mining Issues
– Identification of the exact user is not possible.
– The exact sequence of pages referenced by a user is not recoverable due to caching.
– Sessions are not well defined.
– Security, privacy, and legal issues.

Web Log Cleansing
– Replace the source IP address with a unique but non-identifying ID.
– Replace the exact URL of pages referenced with a unique but non-identifying ID.
– Delete error records and records containing non-page data (such as figures and code).

Sessionizing
– Divide the Web log into sessions.
– Two common techniques:
  – Number of consecutive page references from a source IP address occurring within a predefined time interval (e.g. 25 minutes).
  – All consecutive page references from a source IP address where the interclick time is less than a predefined threshold.

Data Structures
– Keep track of patterns identified during the Web usage mining process.
– Common techniques:
  – Trie
  – Suffix Tree
  – Generalized Suffix Tree
  – WAP Tree

Trie vs. Suffix Tree

Trie:
– Rooted tree
– Edges labeled with a character (page) from the pattern
– Path from root to leaf represents a pattern.
Suffix Tree:
– Single child collapsed with parent. Edge contains labels of both prior edges.
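A minimal sketch of a trie over page-reference sessions, with a count per node so prefix frequencies can be read off; the class layout and names are illustrative, not from the text.

```python
# Pages are assumed to be single identifiers (e.g., strings).
class TrieNode:
    def __init__(self):
        self.children = {}   # page -> TrieNode
        self.count = 0       # how many sessions pass through this node

def insert(root, session):
    """Add one session (sequence of pages) to the trie."""
    node = root
    for page in session:
        node = node.children.setdefault(page, TrieNode())
        node.count += 1      # frequency of the prefix ending here

root = TrieNode()
for s in [["A", "B", "C"], ["A", "B", "D"], ["A", "C"]]:
    insert(root, s)
print(root.children["A"].count)                 # 3 sessions start with A
print(root.children["A"].children["B"].count)   # 2 share the prefix A,B
```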

Trie and Suffix Tree [figure]

Generalized Suffix Tree

Suffix tree for multiple sessions.
Contains patterns from all sessions.
Maintains a count of the frequency of occurrence of a pattern in the node.
WAP Tree: compressed version of the generalized suffix tree.

Types of Patterns

Algorithms have been developed to discover different types of patterns.
Properties:
– Ordered – characters (pages) must occur in the exact order of the original session.
– Duplicates – duplicate characters are allowed in the pattern.
– Consecutive – all characters in the pattern must occur consecutively in the given session.
– Maximal – the pattern is not a subsequence of another pattern.

Pattern Types

Association Rules – none of the properties hold.
Episodes – only ordering holds.
Sequential Patterns – ordered and maximal.
Forward Sequences – ordered, consecutive, and maximal.
Maximal Frequent Sequences – all properties hold.

Episodes

Partially ordered set of pages.
Serial episode – totally ordered with time constraint.
Parallel episode – partially ordered with time constraint.
General episode – partially ordered with no time constraint.

DAG for Episode [figure]

Spatial Mining Outline

Goal: Provide an introduction to some spatial mining techniques.
Introduction
Spatial Data Overview
Spatial Data Mining Primitives
Generalization/Specialization
Spatial Rules
Spatial Classification
Spatial Clustering

Spatial Object

Contains both spatial and nonspatial attributes.
Must have a location-type attribute:
– Latitude/longitude
– Zip code
– Street address
May retrieve an object using either (or both) spatial or nonspatial attributes.

Spatial Data Mining Applications

Geology
GIS Systems
Environmental Science
Agriculture
Medicine
Robotics
May involve both spatial and temporal aspects.

Spatial Queries

Spatial selection may involve specialized selection comparison operations:
– Near
– North, South, East, West
– Contained in
– Overlap/intersect
Region (Range) Query – find objects that intersect a given region.
Nearest Neighbor Query – find the object closest to an identified object.
Distance Scan – find objects within a certain distance of an identified object, where the distance is made increasingly larger.

Spatial Data Structures

Data structures designed specifically to store or index spatial data.
Often based on B-trees or binary search trees.
Cluster data on disk based on geographic location.
May represent a complex spatial structure by placing the spatial object in a containing structure of a specific geographic shape.
Techniques:
– Quad Tree
– R-Tree
– k-D Tree

MBR

Minimum Bounding Rectangle
Smallest rectangle that completely contains the object.
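A minimal sketch, assuming an object is given as a set of 2-D points: the MBR is just the componentwise minimum and maximum, returned here as two corner tuples (an illustrative representation).

```python
def mbr(points):
    """Smallest axis-aligned rectangle containing all points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    # Lower-left and upper-right corners of the bounding rectangle.
    return (min(xs), min(ys)), (max(xs), max(ys))

polygon = [(2, 3), (5, 1), (7, 4), (4, 6)]
print(mbr(polygon))   # ((2, 1), (7, 6))
```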

MBR Examples [figure]

Quad Tree

Hierarchical decomposition of the space into quadrants (MBRs).
Each level in the tree represents the object as the set of quadrants that contain any portion of the object.
Each level is a more exact representation of the object.
The number of levels is determined by the degree of accuracy desired.

Quad Tree Example [figure]

R-Tree

As with the Quad Tree, the region is divided into successively smaller rectangles (MBRs).
Rectangles need not be of the same size or number at each level.
Rectangles may actually overlap.
Lowest-level cell has only one object.
Tree maintenance algorithms are similar to those for B-trees.

R-Tree Example [figure]

K-D Tree

Designed for multi-attribute data, not necessarily spatial.
Variation of a binary search tree.
Each level is used to index one of the dimensions of the spatial object.
Lowest-level cell has only one object.
Divisions are not based on MBRs but on successive divisions of the dimension range.
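A minimal sketch of k-d tree construction under the usual convention of cycling through the dimensions by level, splitting at the median until each leaf holds one object; the nested-tuple node layout is an illustrative choice, not from the text.

```python
def build_kdtree(points, depth=0):
    """Recursively split points on one dimension per level."""
    if len(points) <= 1:
        return points            # leaf: at most one object
    d = depth % len(points[0])   # dimension indexed at this level
    points = sorted(points, key=lambda p: p[d])
    mid = len(points) // 2
    # Node: (splitting dimension, splitting value, left subtree, right subtree)
    return (d, points[mid][d],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid:], depth + 1))

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)   # splits on x at the root, y one level down, ...
```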

k-D Tree Example [figure]

Topological Relationships

Disjoint
Overlaps or intersects
Equals
Covered by, inside, or contained in
Covers or contains

Distance Between Objects

Euclidean
Manhattan
Extensions: [formulas not shown]
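For two n-dimensional points these are the standard formulas: Euclidean distance is the square root of the sum of squared coordinate differences, and Manhattan distance is the sum of absolute coordinate differences. A small sketch (function names are illustrative):

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def manhattan(x, y):
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

print(euclidean((0, 0), (3, 4)))   # 5.0
print(manhattan((0, 0), (3, 4)))   # 7
```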

Progressive Refinement

Make approximate answers prior to more accurate ones.
Filter out data not part of the answer.
Hierarchical view of data based on spatial relationships.
Coarse predicate recursively refined.

Progressive Refinement [figure]

Spatial Data Dominant Algorithm [figure]

STING

STatistical Information Grid-based.
Hierarchical technique to divide the area into rectangular cells.
Grid data structure contains summary information about each cell.
Hierarchical clustering.
Similar to a quad tree.

STING [figure]

STING Build Algorithm [figure]

STING Algorithm [figure]

Spatial Rules

Characteristic Rule: The average family income in Dallas is $50,000.
Discriminant Rule: The average family income in Dallas is $50,000, while in Plano the average income is $75,000.
Association Rule: The average family income in Dallas for families living near White Rock Lake is $100,000.

Spatial Association Rules

Either antecedent or consequent must contain spatial predicates.
View the underlying database as a set of spatial objects.
May be created using a type of progressive refinement.

Spatial Association Rule Algorithm [figure]

Spatial Classification

Partition spatial objects.
May use nonspatial attributes and/or spatial attributes.
Generalization and progressive refinement may be used.

ID3 Extension

Neighborhood Graph
– Nodes – objects
– Edges – connect neighbors
Definition of neighborhood varies.
The ID3 extension considers nonspatial attributes of all objects in a neighborhood (not just one) for classification.

Spatial Decision Tree

Approach similar to that used for spatial association rules.
Spatial objects can be described based on objects close to them – Buffer.
Description of a class based on aggregation of nearby objects.

Spatial Decision Tree Algorithm [figure]

Spatial Clustering

Detect clusters of irregular shapes.
Use of centroids and simple distance approaches may not work well.
Clusters should be independent of the order of input.

Spatial Clustering [figure]

CLARANS Extensions

Remove the main-memory assumption of CLARANS.
Use spatial index techniques.
Use sampling and R*-tree to identify central objects.
Change cost calculations by reducing the number of objects examined.
Voronoi Diagram

Voronoi [figure]

SD(CLARANS)

Spatial Dominant.
First clusters spatial components using CLARANS.
Then iteratively replaces medoids, but limits the number of pairs to be searched.
Uses generalization.
Uses a learning algorithm to derive a description of each cluster.

SD(CLARANS) Algorithm [figure]

DBCLASD

Distribution Based Clustering of LArge Spatial Databases.
Extension of DBSCAN.
Assumes items in a cluster are uniformly distributed.
Identifies the distribution satisfied by the distances between nearest neighbors.
Objects are added as long as the distance distribution remains uniform.

DBCLASD Algorithm [figure]

Aggregate Proximity

Aggregate proximity – measure of how close a cluster is to a feature.
An aggregate proximity relationship finds the k closest features to a cluster.
CRH Algorithm – uses different shapes:
– Encompassing Circle
– Isothetic Rectangle
– Convex Hull

CRH [figure]

Temporal Mining Outline

Goal: Examine some temporal data mining issues and approaches.
Introduction
Modeling Temporal Events
Time Series
Pattern Detection
Sequences
Temporal Association Rules

Temporal Database

Snapshot – traditional database.
Temporal – multiple time points.
Ex: [table not shown]

Temporal Queries

Let [tsq, teq] be the time range of the query and [tsd, ted] the time range of a tuple in the database. [Timeline diagrams not shown.]
Intersection Query – the query range and the tuple range overlap.
Inclusion Query – the tuple range lies entirely within the query range (tsq ≤ tsd and ted ≤ teq).
Containment Query – the tuple range contains the query range (tsd ≤ tsq and teq ≤ ted).
Point Query – tuple retrieved is valid at a particular point in time.

Types of Databases

Snapshot – no temporal support.
Transaction Time – supports the time when the transaction inserted the data.
– Timestamp
– Range
Valid Time – supports the time range when data values are valid.
Bitemporal – supports both transaction and valid time.

Modeling Temporal Events

Techniques to model temporal events.
Often based on earlier approaches.
Finite State Recognizer (Machine) (FSR):
– Each event recognizes one character.
– Temporal ordering indicated by arcs.
– May recognize a sequence.
– Requires precisely defined transitions between states.
Approaches:
– Markov Model
– Hidden Markov Model
– Recurrent Neural Network

FSR [figure]

Markov Model (MM)

Directed graph:
– Vertices represent states.
– Arcs show transitions between states.
– Each arc has a probability of transition.
– At any time one state is designated as the current state.
Markov Property – given the current state, the transition probability is independent of any previous states.
Applications: speech recognition, natural language processing.
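A minimal sketch of an MM as states plus a transition-probability table; the weather states and probabilities are invented for illustration. Note that each next state is drawn from the current state alone, which is exactly the Markov property.

```python
import random

# States and transition probabilities are illustrative, not from the text.
states = ["sunny", "rainy"]
P = {"sunny": {"sunny": 0.8, "rainy": 0.2},
     "rainy": {"sunny": 0.4, "rainy": 0.6}}

def walk(start, steps, rng=random.Random(0)):
    """Generate a state sequence; the next state depends only on the
    current one (Markov property)."""
    seq, state = [start], start
    for _ in range(steps):
        state = rng.choices(states, weights=[P[state][s] for s in states])[0]
        seq.append(state)
    return seq

print(walk("sunny", 5))
```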

Markov Model [figure]

Hidden Markov Model (HMM)

Like an MM, but states need not correspond to observable states.
An HMM models a process that produces as output a sequence of observable symbols.
The HMM will actually output these symbols.
Associated with each node is the probability of the observation of an event.
Train the HMM to recognize a sequence.
Transition and observation probabilities are learned from a training set.

Hidden Markov Model [figure, modified from [RJ86]]

HMM Algorithm [figure]

HMM Applications

Given a sequence of events and an HMM, what is the probability that the HMM produced the sequence?
Given a sequence and an HMM, what is the most likely state sequence that produced this sequence?

Recurrent Neural Network (RNN)

Extension of the basic NN.
A neuron can obtain input from any other neuron (including the output layer).
Can be used for both recognition and prediction applications.
Time to produce output is unknown.
Temporal aspect added by backlinks.

RNN [figure]

Time Series

Set of attribute values over time.
Time Series Analysis – finding patterns in the values:
– Trends
– Cycles
– Seasonal patterns
– Outliers

Analysis Techniques

Smoothing – moving average of attribute values.
Autocorrelation – relationships between different subseries:
– Yearly, seasonal
– Lag – time difference between related items.
– Correlation coefficient r
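A minimal sketch of both ideas: a moving-average smoother and the correlation coefficient r between a series and a lagged copy of itself. Window size, lag, and data are illustrative choices.

```python
import statistics

def moving_average(series, window=3):
    """Smooth a series by averaging each window of consecutive values."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def lag_correlation(series, lag):
    """Pearson r between the series and itself shifted by `lag` steps."""
    x, y = series[:-lag], series[lag:]
    return statistics.correlation(x, y)   # requires Python 3.10+

ts = [3, 5, 4, 6, 5, 7, 6, 8]
print(moving_average(ts))       # [4.0, 5.0, 5.0, 6.0, 6.0, 7.0]
print(lag_correlation(ts, 2))   # 1.0: every value repeats, one higher, 2 steps later
```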

Smoothing [figure]

Correlation with Lag of 3 [figure]

Similarity

Determine similarity between a target pattern, X, and a sequence, Y: sim(X,Y).
Similar to Web usage mining.
Similar to earlier word processing and spelling-corrector applications.
Issues:
– Length
– Scale
– Gaps
– Outliers
– Baseline

Longest Common Subseries

Find the longest subseries the two have in common.
Ex:
– X = <10,5,6,9,22,15,4,2>
– Y = <6,9,10,5,6,22,15,4,2>
– Output: <22,15,4,2>
– Sim(X,Y) = l/n = 4/9
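A minimal dynamic-programming sketch that reproduces the example above; following the slide, similarity is the common length l over n, taken here as the length of the longer sequence.

```python
def longest_common_subseries(x, y):
    """Longest run of values appearing contiguously in both series."""
    best, best_end = 0, 0
    # L[i][j]: length of the common run ending at x[i-1] and y[j-1]
    L = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
                if L[i][j] > best:
                    best, best_end = L[i][j], i
    return x[best_end - best:best_end]

X = [10, 5, 6, 9, 22, 15, 4, 2]
Y = [6, 9, 10, 5, 6, 22, 15, 4, 2]
common = longest_common_subseries(X, Y)
print(common, len(common) / len(Y))   # [22, 15, 4, 2] 0.444...
```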

Similarity Based on Linear Transformation

Linear transformation function f:
– Converts a value from one series to a value in the second.
– εf – tolerated difference in results.
– δ – allowed difference in time values.

Prediction

Predict future values of a time series.
Regression may not be sufficient.
Statistical techniques:
– ARMA
– ARIMA
NN

Pattern Detection

Identify patterns of behavior in a time series.
Speech recognition, signal processing.
FSR, MM, HMM

String Matching

Find a given pattern in a sequence.
Knuth-Morris-Pratt: construct an FSM.
Boyer-Moore: construct an FSM.

Distance between Strings

Cost to convert one string to the other.
Transformations:
– Match: current characters in both strings are the same.
– Delete: delete the current character in the input string.
– Insert: insert the current character in the target string into the input string.

Distance between Strings [figure]

Frequent Sequence [figure]

Frequent Sequence Example

Purchases made by customers: [table not shown]
s(<{A},{C}>) = 1/3
s(<{A},{D}>) = 2/3
s(<{B,C},{D}>) = 2/3
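Support here is the fraction of customers whose transaction sequence contains the pattern as an ordered subsequence of itemsets, each itemset contained in some transaction. A minimal sketch; since the original table is not shown, the three customer sequences below are invented so that the supports above come out as stated.

```python
# Each customer: a list of transactions (itemsets) in time order.
# Invented data chosen to reproduce the supports on this slide.
customers = [
    [{"A"}, {"C"}, {"D"}],
    [{"A", "B", "C"}, {"D"}],
    [{"B", "C"}, {"D"}],
]

def contains(sequence, pattern):
    """True if pattern's itemsets occur, in order, within the sequence."""
    i = 0
    for transaction in sequence:
        if i < len(pattern) and pattern[i] <= transaction:  # subset test
            i += 1
    return i == len(pattern)

def support(pattern):
    return sum(contains(c, pattern) for c in customers) / len(customers)

print(support([{"A"}, {"C"}]))        # 0.333...
print(support([{"A"}, {"D"}]))        # 0.666...
print(support([{"B", "C"}, {"D"}]))   # 0.666...
```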

Frequent Sequence Lattice [figure]

SPADE

Sequential Pattern Discovery using Equivalence classes.
Identifies patterns by traversing the lattice in a top-down manner.
Divides the lattice into equivalence classes and searches each separately.
ID-List: associates customers and transactions with each item.

SPADE Example

ID-List for sequences of length 1: [table not shown]
Count for <{A}> is 3.
Count for <{A},{D}> is 2.

Equivalence Classes [figure]

SPADE Algorithm [figure]

Temporal Association Rules

Each transaction has a time: <TID, CID, I1, I2, …, Im, ts, te>
[ts, te] is the range of time during which the transaction is active.
Types:
– Inter-transaction rules
– Episode rules
– Trend dependencies
– Sequence association rules
– Calendric association rules

Inter-transaction Rules

Intra-transaction association rules: traditional association rules.
Inter-transaction association rules:
– Rules across transactions.
– Sliding window – how far apart (in time or number of transactions) to look for related itemsets.

Episode Rules

Association rules applied to sequences of events.
Episode – a set of event predicates and a partial ordering on them.

Trend Dependencies

Association rules across two database states based on time.
Ex: (SSN, =) ⇒ (Salary, ≤)
Confidence = 4/5
Support = 4/36

Sequence Association Rules

Association rules involving sequences.
Ex: <{A},{C}> ⇒ <{A},{D}>
Support = 1/3
Confidence = 1

Calendric Association Rules

Each transaction has a unique timestamp.
Group transactions based on the time interval within which they occur.
Identify large itemsets by looking at transactions only in this predefined interval.