
A lady tasting tea

and other applications of Categorical Data Analysis


A lady tasting tea

Fisher (1935) wrote about this supposedly true experiment he designed to put a lady's claim to the test.
She said she could tell whether the milk or the tea had been put into the cup first.
Fisher told her she would get 8 cups: 4 with tea added first, 4 with milk added first.
The lady had to say which were which, and was not told of her successes along the way.


“My dear, of course I can tell the difference!”


Categorical Data Analysis

In General


Applied Stats Algorithm

[Flowchart: start from the scientific question, classify the study, then branch on the response variable (numerical or categorical; univariate or multi-variable; censored or complete) and on the predictor(s) (numerical, categorical, or both). The "Today" marker sits on the categorical-response branch.]


Requirements

Two categorical variables; we can later generalize this to 3+ groups.
There can be an obvious response variable, or not.
Independent observations; there are extensions for paired data.
A central research question is whether or not the variables are related.
The null hypothesis is that they are independent.


Aside

A man goes for a walk with his dog.
Q: On average, how many legs do they have? ("They" = the individuals, not the pair as a group.)


Doctors

Homer is sick, so Bart and Lisa are trying to decide which doctor to take him to for operation X.
Q: Which doctor should they go to?

                               Dr. Nick   Dr. Hibbert
Success rate for operation X      83%         46%


Additional Information

Success Rate   Dr. Nick   Dr. Hibbert
Males             90%        100%
Females            ?           ?


Additional Information

Now who would you visit? How is this possible?

Success Rate   Dr. Nick   Dr. Hibbert
Males             90%        100%
Females           20%         40%


Additional Information

This is an example of Simpson's Paradox.
It also explains how a team could score more goals in a best-of-seven series, but still lose the series.

Dr. Nick                      Dr. Hibbert
Count      M     F            Count      M     F
Success  810    20            Success  100   360
Failure   90    80            Failure    0   540
Total    900   100            Total    100   900


Simpson's Paradox

Averaging over an important grouping variable to get marginal effects can hide (and even reverse) what's really happening.
It's rare in practice, but not impossible.
It's a great way to inflate your stats.
Lesson: look at counts, not just percentages.
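A quick R sketch of the reversal, using the counts from the previous slide (the object names nick and hibbert are just illustrative):

# Counts from the two tables above (columns: Male, Female)
nick    <- matrix(c(810, 90, 20, 80), nrow = 2,
                  dimnames = list(c("Success", "Failure"), c("M", "F")))
hibbert <- matrix(c(100, 0, 360, 540), nrow = 2,
                  dimnames = list(c("Success", "Failure"), c("M", "F")))

# Sex-specific success rates: Nick 90% / 20%, Hibbert 100% / 40%
nick["Success", ] / colSums(nick)
hibbert["Success", ] / colSums(hibbert)

# Pooled (marginal) success rates: Nick 83%, Hibbert 46%
sum(nick["Success", ]) / sum(nick)
sum(hibbert["Success", ]) / sum(hibbert)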


Contingency Tables

Basic descriptive tool to summarize count data.
Required anyway to compute some test statistics, so there is no "wasted" time making them (like boxplots with ANOVA).
Can be I x J in general, but we'll start with 2 x 2.
For the lady-tasting-tea experiment, the two variables are Actual and Guess, and we'd like to know if they're related.


A lady tasting tea

This seems an ideal setup for a permutation test.
There are $\binom{8}{4} = 70$ possible results to this experiment.
One such result is as follows:

           Actual
Guessed    Milk  Tea
  Milk       4    0
  Tea        0    4

This result would support the lady's claim, but with what evidence?


A lady tasting tea

There are 70 possible results to this experiment.
The result we saw on the previous slide is 1 of the 70, but we should also consider the mirror-image result.
Why? Because if she got every single cup wrong, she could just reverse her rule and have a perfect classifier!
So her probability of getting this result, under $H_0$, is

$$\frac{2}{70} = 0.02857$$

This is the p-value of the experiment.
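A quick check of this arithmetic in base R (nothing beyond choose() and dhyper() is needed):

choose(8, 4)                    # 70 ways to pick which 4 cups get labelled "milk first"
dhyper(4, m = 4, n = 4, k = 4)  # P(all 4 correct | row/column totals fixed) = 1/70
2 / choose(8, 4)                # p-value counting the mirror-image result as well: 0.02857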


Fisher's Exact Test (R code)

> lady <- data.frame(actual = rep(c("Milk", "Tea"), each = 4),
                     guess  = rep(c("Milk", "Tea"), each = 4))
> tab <- with(lady, table(guess, actual)); tab
       actual
guess   Milk Tea
  Milk     4   0
  Tea      0   4
> fisher.test(tab, alt = "two.sided")

        Fisher's Exact Test for Count Data

data:  tab
p-value = 0.02857
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 1.339059      Inf
sample estimates:
odds ratio 
       Inf 


Fisher's Exact Test

Just a permutation test.
Based on the multinomial distribution of counts in a contingency table (conditioning on the observed margins gives the hypergeometric distribution used for the exact p-value).
Any parametric method we develop is just an approximation to this exact test.
When counts get really high, we might want to sample from the permutation distribution rather than enumerate it.
Suppose we were willing to give her the benefit of the doubt and do a one-sided test.


A lady tasting tea

Here is another possible result:

           Actual
Guessed    Milk  Tea
  Milk       3    1
  Tea        1    3

The lady got one cup wrong, which forced another to be wrong as well.
There are $\binom{4}{3}\binom{4}{1} = 16$ ways to get this result, and 1 way of getting a 'more extreme' result, for a one-sided p-value of

$$\frac{16 + 1}{70} = 0.2429$$


Fisher's Exact Test (R code)

> lady$guess[4] <- "Tea"
> lady$guess[5] <- "Milk"
> tab <- with(lady, table(guess, actual)); tab
       actual
guess   Milk Tea
  Milk     3   1
  Tea      1   3
> fisher.test(tab, alt = "greater")

        Fisher's Exact Test for Count Data

data:  tab
p-value = 0.2429
alternative hypothesis: true odds ratio is greater than 1
95 percent confidence interval:
 0.3135693       Inf
sample estimates:
odds ratio 
  6.408309 


A lady tasting tea

This means the only way Fisher would have believed the lady is if she got all of her guesses correct.
She did, allegedly.
What if he hadn't told her how many cups of each type there were?
Now there are more possible combinations: $2^8 = 256$, in fact.
There is still only one completely correct choice. Well, two.


A lady tasting tea (with a little less information)

There are 256 possible results.
Her probability of getting this result, under $H_0$, is

$$\frac{2}{256} = 0.0078125$$

More convincing evidence that she knows what's up with the milk.

           Actual
Guessed    Milk  Tea
  Milk       4    0
  Tea        0    4


Low Birth Weight data

A dataset with 189 observations of women (1986) in Massachusetts.
We will also look at these data in our study of logistic regression, but for now let's see if there's an association between a mother's smoking status and having a low birth weight baby.


LBW Dataset

Data description:
  low:   indicator of birth weight less than 2.5 kg
  age:   mother's age in years
  lwt:   mother's weight (lbs) at last menstrual period
  race:  mother's race ("white", "black", "other")
  smoke: smoking status during pregnancy
  ht:    history of hypertension
  ui:    presence of uterine irritability
  ftv:   number of physician visits during first trimester
  ptl:   number of previous premature labours
  bwt:   birth weight in grams


Contingency Table (LBW dataset)

lbw <- read.table("birthweight.data", head = T)
lbw <- within(lbw, {
  smoke <- factor(smoke, levels = c(0, 1), labels = c("No", "Yes"))
  low   <- factor(low,   levels = c(0, 1), labels = c("No", "Yes"))
})
with(lbw, table(smoke, low))
     low
smoke No Yes
  No  86  29
  Yes 44  30


Research Question

Is mother's smoking status associated with having a low birth weight baby?
Are smoke and low independent?

Five approaches we will study:
1. (Two-sample Proportion Test)
2. Fisher's Exact Test
3. Pearson's Chi-square Test
4. Likelihood Ratio Test
5. GLM


Analysis #1: Two-sample Proportion Test

Since we only have two levels of the predictor variable (smoke / not smoke), we simply have two proportions to compare:
the proportion of smoking moms with a LBW baby, and the proportion of non-smoking moms with a LBW baby.
We can actually do this by hand.


Analysis #1: Two-sample Proportion Test

$$z_{obs} = \frac{\hat{p}_2 - \hat{p}_1}{SE_{\hat{p}_2 - \hat{p}_1}}
          = \frac{\dfrac{30}{74} - \dfrac{29}{115}}
                 {\sqrt{\dfrac{59}{189} \cdot \dfrac{130}{189}\left(\dfrac{1}{74} + \dfrac{1}{115}\right)}}
          = \frac{0.1532315}{0.069056} = 2.2189$$

$$2\,\Phi(-2.2189) = \mathbf{0.0265}$$
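The same calculation in a few lines of R, a sketch mirroring the hand computation above (the counts are read off the smoke-by-low table):

x <- c(29, 30)    # LBW babies among non-smokers, smokers
n <- c(115, 74)   # group sizes
p.hat  <- x / n                          # 0.2522 and 0.4054
p.pool <- sum(x) / sum(n)                # pooled proportion 59/189
se     <- sqrt(p.pool * (1 - p.pool) * (1/n[1] + 1/n[2]))
z      <- (p.hat[2] - p.hat[1]) / se     # 2.2189
2 * pnorm(-abs(z))                       # two-sided p-value: 0.0265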


Analysis #1: Two-sample Proportion Test

# Analysis #1
> with(lbw, prop.test(table(smoke, low)))

        2-sample test for equality of proportions with continuity correction

data:  table(smoke, low)
X-squared = 4.2359, df = 1, p-value = 0.03958
alternative hypothesis: two.sided
95 percent confidence interval:
 0.004967192 0.301495793
sample estimates:
   prop 1    prop 2 
0.7478261 0.5945946 


Analysis #1: Two-sample Proportion Test

# Analysis #1
> with(lbw, prop.test(table(smoke, low), correct = F))

        2-sample test for equality of proportions without continuity correction

data:  table(smoke, low)
X-squared = 4.9237, df = 1, p-value = 0.02649
alternative hypothesis: two.sided
95 percent confidence interval:
 0.01607177 0.29039121
sample estimates:
   prop 1    prop 2 
0.7478261 0.5945946 


Analysis #2: Fisher's Exact Test

> # Analysis #2
> with(lbw, fisher.test(table(smoke, low)))

        Fisher's Exact Test for Count Data

data:  table(smoke, low)
p-value = 0.03618
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 1.028780 3.964904
sample estimates:
odds ratio 
  2.014137 


Analysis #3: Pearson's χ² test

> with(lbw, chisq.test(table(smoke, low), correct = F))

        Pearson's Chi-squared test

data:  table(smoke, low)
X-squared = 4.9237, df = 1, p-value = 0.02649

> with(lbw, chisq.test(table(smoke, low), correct = T))

        Pearson's Chi-squared test with Yates' continuity correction

data:  table(smoke, low)
X-squared = 4.2359, df = 1, p-value = 0.03958


Analysis #3: Pearson's χ² test

This test generalizes to I x J tables, so there is little reason to use the two-sample test anymore (unless you don't want pooled proportions).
What is the null hypothesis? H0: the row and column variables are independent.
Oldest use of the chi-square statistic, and the oldest test still in its original form.
Possible to do by hand as well; try it sometime! (A short sketch follows.)
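A by-hand sketch in R: the expected count in a cell is (row total x column total) / n, and the statistic sums the squared, scaled discrepancies (the object names O, E, X2 are just for the sketch):

O  <- with(lbw, table(smoke, low))
E  <- outer(rowSums(O), colSums(O)) / sum(O)    # expected counts under independence
X2 <- sum((O - E)^2 / E)                        # Pearson statistic: 4.9237
pchisq(X2, df = (nrow(O) - 1) * (ncol(O) - 1), lower.tail = FALSE)   # p = 0.02649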


Analysis #4: Likelihood Ratio χ² Test

$\pi_{ij}$ is the probability an observation falls into row i, column j.
$\pi_{i\cdot}$ is the probability an observation falls into row i.
$\pi_{\cdot j}$ is the probability an observation falls into column j.
If Smoke and LBW are independent, $\pi_{ij} = \pi_{i\cdot}\pi_{\cdot j}$. This is $H_0$.

            LBW
Smoker     No   Yes
  No       86    29
  Yes      44    30


Analysis #4: Likelihood Ratio χ² Test

The number of observations in a cell is a random variable $Y_{ij}$ with the multinomial distribution.
We observe $y_{ij}$ with $\sum_{ij} y_{ij} = n$:

$$P(Y = y) = \binom{n}{y_{11}\, y_{12}\, y_{21}\, y_{22}}\, \pi_{11}^{y_{11}} \pi_{12}^{y_{12}} \pi_{21}^{y_{21}} \pi_{22}^{y_{22}}, \qquad \sum_{ij} \pi_{ij} = 1$$

$$l(\pi_{ij}) = \sum_i \sum_j y_{ij} \log \pi_{ij} + f(y)$$

Can show that this leads to $\hat{\pi}_{ij} = \dfrac{y_{ij}}{n}$ (the unrestricted MLE).


Analysis #4: Likelihood Ratio χ² Test

Under the assumption of independence, $\pi_{ij} = \pi_{i\cdot}\pi_{\cdot j}$.
Can show that this leads to $\tilde{\pi}_{i\cdot} = \dfrac{y_{i\cdot}}{n}$ and $\tilde{\pi}_{\cdot j} = \dfrac{y_{\cdot j}}{n}$ (the restricted MLE).

The LRT statistic for independence is

$$G^2 = -2\log\frac{L_R}{L_F}
      = 2\, l(\hat{\pi}_{ij}) - 2\, l(\tilde{\pi}_{ij})
      = 2\sum_i \sum_j y_{ij}\log\frac{y_{ij}}{n} - 2\sum_i \sum_j y_{ij}\log\frac{y_{i\cdot}\, y_{\cdot j}}{n^2}$$

with df = (nrow − 1)(ncol − 1).


Analysis #4: Likelihood Ratio χ² Test

LR.test.bf <- function(dd) {
  n <- sum(dd)
  # outer product of the margins: y_i. * y_.j for every cell
  margSums <- as.matrix(rowSums(dd)) %*% as.vector(colSums(dd))
  G2 <- 2*sum(dd*log(dd/n)) - 2*sum(dd*log(margSums/n^2))
  G2pvalue <- 1 - pchisq(G2, df = (dim(dd)[1] - 1)*(dim(dd)[2] - 1))
  list(G2 = G2, p_value = G2pvalue)
}
with(lbw, LR.test.bf(table(smoke, low)))
$G2
[1] 4.867397

$p_value
[1] 0.02736876


Analysis #4: Likelihood Ratio χ² Test

The LRT statistic can also be written as

$$G^2 = 2\sum_i \sum_j y_{ij}\log\frac{y_{ij}}{\mu_{ij}}$$

with df = (nrow − 1)(ncol − 1), where the $\mu_{ij}$ are the expected cell counts under $H_0$.


Analysis #4: Likelihood Ratio χ² Test

LR.test <- function(dd) {
  # expected counts under independence, borrowed from chisq.test()
  G2 <- 2*sum(dd*log(dd/chisq.test(dd)$expected))
  G2pvalue <- 1 - pchisq(G2, df = (dim(dd)[1] - 1)*(dim(dd)[2] - 1))
  list(G2 = G2, p_value = G2pvalue)
}
with(lbw, LR.test(table(smoke, low)))
$G2
[1] 4.867397

$p_value
[1] 0.02736876


Analysis #4: LRT

More reliable and generally preferred over Pearson's test.
Shows up in GLMs too, so you may as well get used to it.
Lots of theory to show that it is (asymptotically) most powerful.
Still an asymptotic test, so large samples are required.


Linear Models

Can we turn the analysis of categorical data into some sort of linear model?
That would be great, because everything else we've learned so far fits into the linear model framework.
Central to all of those models is the idea that a change in X is accompanied by a linear change in Y, and that errors are additive.
What if a change in X produces an exponential change in Y?


Generalized Linear Model

$$g(\boldsymbol{\mu}) = \mathbf{X}\boldsymbol{\beta}$$

$g(\mu)$ is called a link function.
$g(\mu) = \mu$ is vanilla regression (the identity link).
$g(\mu) = \log(\mu)$ is Poisson regression (the log link).
$g(\mu) = \log\!\left(\dfrac{\mu}{1-\mu}\right)$ is logistic regression (the logit link).
There are others.
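In R the link is chosen through the family argument of glm(). A schematic sketch only; dat, y, and x are placeholders, not variables from the LBW data:

glm(y ~ x, family = gaussian(link = "identity"), data = dat)  # identity link: ordinary regression
glm(y ~ x, family = poisson(link = "log"),       data = dat)  # log link: Poisson regression
glm(y ~ x, family = binomial(link = "logit"),    data = dat)  # logit link: logistic regression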


Contingency Tables

The expected number of observations in a cell is $\mu_{ij} = n\,\pi_{ij}$.
If the row and column variables are independent, $\mu_{ij} = n\,\pi_{i\cdot}\,\pi_{\cdot j}$.
Now take logs to make it additive:

$$\log \mu_{ij} = \log n + \log \pi_{i\cdot} + \log \pi_{\cdot j}$$

If they are not independent, then you need an interaction term.
So, fit this model and test the interaction.


Analysis #5: GLM

# Need to format the data first (ddply() is from the plyr package)
> library(plyr)
> tab <- ddply(lbw, .(smoke, low), summarize, count = length(smoke)); tab
  smoke low count
1    No  No    86
2    No Yes    29
3   Yes  No    44
4   Yes Yes    30
> fit <- glm(count ~ smoke*low, family = poisson, data = tab)
> summary(fit)   # Analysis #5

Call:
glm(formula = count ~ smoke * low, family = poisson, data = tab)

Deviance Residuals: 
[1]  0  0  0  0


Analysis #5: GLM

> summary(fit)   # Analysis #5

Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)       4.4543     0.1078  41.308  < 2e-16
smokeYes         -0.6702     0.1854  -3.616   0.0003
lowYes           -1.0871     0.2147  -5.062 4.14e-07
smokeYes:lowYes   0.7041     0.3196   2.203   0.0276

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 4.117e+01  on 3  degrees of freedom
Residual deviance: 9.770e-15  on 0  degrees of freedom
AIC: 30.376

Number of Fisher Scoring iterations: 3
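The 0.0276 for the interaction is a Wald p-value. A likelihood-ratio version of the same test can be obtained by refitting without the interaction, which reproduces the G² from Analysis #4 (a sketch; fit is the saturated model above, fit0 is just an illustrative name):

fit0 <- update(fit, . ~ smoke + low)   # additive model = independence
anova(fit0, fit, test = "Chisq")       # change in deviance = 4.867, p = 0.0274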


Summary of analyses

Analysis   Test           p-value   CC p-value (CC = continuity correction)
#1         2-sample p      0.0265     0.0396
#2         Fisher Exact    0.0362
#3         Pearson χ²      0.0265     0.0396
#4         LR χ²           0.0274
#5         GLM             0.0276

Analysis   Test           Assumptions (in addition to SRS & independence)
#1         2-sample p     Independent samples from a much larger population;
                          at least 10 successes and 10 failures in each group
#2         Fisher Exact   Fixed marginal totals
#3         Pearson χ²     At least 5 expected counts in each cell; large sample
#4         LR χ²          Same as above
#5         GLM            Same as above


New Research Question

Is mother's race associated with having a low birth weight baby?
Are race and low independent? H0: yes.
Let's just look at the LRT this time.

> with(lbw, table(race, low))
       low
race    No Yes
  black 15  11
  other 42  25
  white 73  23
> with(lbw, LR.test(table(race, low)))
$G2
[1] 5.010366

$p_value
[1] 0.08166065


New Research Question

Is mother's race associated with smoking status?

> with(lbw, table(race, smoke))
       smoke
race    No Yes
  black 16  10
  other 55  12
  white 44  52
> with(lbw, LR.test(table(race, smoke)))
$G2
[1] 22.99665

$p_value
[1] 1.014708e-05

Yes, but what do you report?


Odds Ratio

Consider some data regarding attitudes towards pre-marital sex and level of education.
The chi-square statistic is 55.5, so these two variables are clearly not independent.
Can we say anything else about the relationship?

                         Attitude
Education          Disapprove   Approve   Total
HS or less                873      1190    2063
College or above          533      1208    1741
Total                    1406      2398    3804


Odds Ratio

Look at the conditional distributions:

$$P(\text{Approve} \mid \text{HS}) = \frac{1190}{2063} = 0.577$$

$$P(\text{Approve} \mid \text{Coll}) = \frac{1208}{1741} = 0.694$$

$$OR = \frac{\text{Odds}(\text{Approve} \mid \text{Coll})}{\text{Odds}(\text{Approve} \mid \text{HS})}
     = \frac{0.694 / 0.306}{0.577 / 0.423}
     = \frac{2.268}{1.364}
     = \mathbf{1.66}$$


Odds Ratio

The odds of approving of pre-marital sex are 1.66 times as high for College-educated students as for students with an education of HS or less.


Odds Ratio

Look at the other conditional distributions:

$$P(\text{Coll} \mid \text{Approve}) = \frac{1208}{2398} = 0.504$$

$$P(\text{Coll} \mid \text{Disapprove}) = \frac{533}{1406} = 0.379$$

$$OR = \frac{\text{Odds}(\text{Coll} \mid \text{Approve})}{\text{Odds}(\text{Coll} \mid \text{Disapprove})}
     = \frac{0.504 / 0.496}{0.379 / 0.621}
     = \frac{1.016}{0.610}
     = \mathbf{1.66}$$


Odds Ratio

The odds of going to College are 1.66 times as high for students who approve of pre-marital sex as for students who don't.


Shortcut

$$OR = \frac{n_{22}\, n_{11}}{n_{21}\, n_{12}} = \frac{1208 \cdot 873}{533 \cdot 1190} = \mathbf{1.66}$$

                         Attitude
Education          Disapprove   Approve   Total
HS or less                873      1190    2063
College or above          533      1208    1741
Total                    1406      2398    3804
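A sketch of the cross-product shortcut in R, together with the usual large-sample (Wald) interval built from the approximate normality of log(OR); the interval is an add-on here, not something computed on the slide, and the object name att is just illustrative:

att <- matrix(c(873, 533, 1190, 1208), nrow = 2,
              dimnames = list(Education = c("HS or less", "College or above"),
                              Attitude  = c("Disapprove", "Approve")))
OR <- (att[2, 2] * att[1, 1]) / (att[2, 1] * att[1, 2])   # 1.66

se.logOR <- sqrt(sum(1 / att))                 # SE of log(OR)
exp(log(OR) + c(-1, 1) * 1.96 * se.logOR)      # approximate 95% CI for the OR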


Relative Risk

Look at the conditional distributions:

$$P(\text{Approve} \mid \text{HS}) = \frac{1190}{2063} = 0.577$$

$$P(\text{Approve} \mid \text{Coll}) = \frac{1208}{1741} = 0.694$$

$$RR = \frac{P(\text{Approve} \mid \text{Coll})}{P(\text{Approve} \mid \text{HS})}
     = \frac{0.694}{0.577}
     = \mathbf{1.203}$$


Relative Risk

You are 1.203 times as likely to approve of pre-marital sex after going to college.
Equivalently, you are 20.3% more likely to approve of pre-marital sex after going to college.


Relative Risk

Look at the other conditional distributions:

$$P(\text{Disapprove} \mid \text{HS}) = \frac{873}{2063} = 0.423$$

$$P(\text{Disapprove} \mid \text{Coll}) = \frac{533}{1741} = 0.306$$

$$RR = \frac{P(\text{Disapprove} \mid \text{Coll})}{P(\text{Disapprove} \mid \text{HS})}
     = \frac{0.306}{0.423}
     = \mathbf{0.7234}$$


Odds Ratio (OR)

The OR is symmetric, or reversible: it doesn't treat the column variable any differently than the row variable.
It is a nice quantitative measure of the association between two categorical variables.
Not affected by changes to the sample size.
Useful for retrospective case-control studies.
Statisticians like it a lot, although log(OR) is preferred because it is approximately Normal.
We will encounter it again with logistic regression.


Relative Risk (RR)

Not symmetric, unlike the OR: it has a response variable.
Also not affected by changes to the sample size.
Useful in prospective cohort studies or cross-sectional studies; not possible to compute in case-control (CC) studies.
Physicians like it a lot, as it has a more natural interpretation.
Statisticians prefer log(RR), for the same reasons as log(OR).


R Code

OR from fisher.test(); RR from the epitools package:

> require(epitools)
> tab <- with(lbw, table(smoke, low)); addmargins(tab)
     low
smoke  No Yes Sum
  No   86  29 115
  Yes  44  30  74
  Sum 130  59 189
> riskratio(tab)$measure
      risk ratio with 95% C.I.
smoke  estimate    lower    upper
  No   1.000000       NA       NA
  Yes  1.607642 1.057812 2.443262
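For the OR itself, fisher.test() returns an estimate (a conditional MLE) and an exact CI, and epitools also provides an oddsratio() companion to riskratio(); a sketch of where to look (output omitted):

> fisher.test(tab)$estimate   # odds ratio estimate, about 2.01 for this table
> fisher.test(tab)$conf.int   # exact 95% confidence interval
> oddsratio(tab)$measure      # epitools' odds ratio table with its CI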


Comparison of OR and RR

There is only one OR for a 2x2 table, but two RRs.
(Well, there are two ORs, but one is the inverse of the other.)
The OR diverges from 1 faster than the RR.
When the event is rare ($p \approx 0$), $OR \approx RR$.
When the event is common, the two measures can diverge substantially.
The RR does not capture the absolute effect.


OR from RR

The OR is the ratio of the two column-wise relative risks:

$$OR = \frac{RR_{\text{Approve}}}{RR_{\text{Disapprove}}} = \frac{1.203}{0.7234} \approx 1.66$$

With I x J tables there are multiple ORs and even more RR statistics.
If the data are ordinal, compute them in order, i.e., find how much more likely you are to be in the next higher level of Y when you go to the next higher level of X.
If the data are nominal, you could compute all permutations, if that were interesting to you.


Absolute Risk Reduction (ARR)

Suppose a treatment shows a reduced risk of contracting a disease, compared to a control.
Experiment A: the Tx group has a 20% disease rate; the control group 40%.
Experiment B: the Tx group has a 1% disease rate; the control group 2%.
Both have an RR of 0.5, but vastly different effects on the population.
The ARR is the difference between the disease rates: ARR = 20% for A; ARR = 1% for B.


Number Needed to Treat (NNT)

The inverse of the ARR; lower is better.
Physicians like this a lot, as it's easy to understand and apply to their practice.
If the NNT is 20, you need to treat 20 patients with the intervention in order to save 1.
Insurance companies like to know this too.
Dependent on the time period: the ARR is calculated over a time period, so the NNT needs to be interpreted over the same period.
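Tying this back to the two experiments on the ARR slide: Experiment A has ARR = 0.40 − 0.20 = 0.20, so NNT = 1/0.20 = 5; Experiment B has ARR = 0.02 − 0.01 = 0.01, so NNT = 1/0.01 = 100. The same relative risk of 0.5 corresponds to very different amounts of work per patient helped.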


ARI and NNH

Of course, the intervention might actually increase the risk to the patient.
In this case, call it the Absolute Risk Increase (ARI).
The Number Needed to Harm (NNH) is then the inverse of the ARI.
You would use these measures in lieu of the ones on the previous slide if your intervention caused harm for some outcome.
NNH: how many patients do we need to give this drug to in order to harm one (high is good).