Hypothesis Tests II

Post on 03-Jan-2016


The normal distribution

Normally distributed data

Normally distributed means

First, let's consider a simpler problem…

We are testing the equality of the mean of a population (Y) to a particular value μ₀. Now, if H₀: μ = μ₀ is assumed, what do we know? We have some idea about the distribution of the sample mean Ȳ. We need a measuring device that is sensitive to the variations in Ȳ, or in other words to deviations from the statement μ = μ₀.
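As a sketch (none of the code here is from the slides), the usual measuring device when the population standard deviation σ is known is the standardized sample mean, z = (Ȳ − μ₀)/(σ/√n):

```python
import math

def z_statistic(sample, mu0, sigma):
    """Standardized distance of the sample mean from the hypothesized
    value mu0, assuming the population standard deviation sigma is known."""
    n = len(sample)
    ybar = sum(sample) / n
    return (ybar - mu0) / (sigma / math.sqrt(n))

# Illustrative data (invented, not from the slides): test H0: mu = 5.
z = z_statistic([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7, 5.0], 5.0, 0.25)
```

Large values of |z| signal deviations from the statement μ = μ₀.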

(z₁, z₂) has a bivariate normal distribution, with density

f(z₁, z₂) = 1 / (2π σ_{z₁} σ_{z₂} √(1 − ρ²)) · exp{ −[ (z₁ − μ₁)²/σ²_{z₁} − 2ρ(z₁ − μ₁)(z₂ − μ₂)/(σ_{z₁}σ_{z₂}) + (z₂ − μ₂)²/σ²_{z₂} ] / (2(1 − ρ²)) }
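The density can be transcribed directly; the function below is my own sketch (parameter names assumed). With ρ = 0 and unit variances it reduces to the product of two standard normal densities, so f(0, 0) = 1/(2π):

```python
import math

def bivariate_normal_pdf(z1, z2, mu1=0.0, mu2=0.0, s1=1.0, s2=1.0, rho=0.0):
    """Bivariate normal density with means mu1, mu2, standard deviations
    s1, s2, and correlation rho, transcribed from the formula above."""
    q = (((z1 - mu1) / s1) ** 2
         - 2 * rho * (z1 - mu1) * (z2 - mu2) / (s1 * s2)
         + ((z2 - mu2) / s2) ** 2)
    norm = 2 * math.pi * s1 * s2 * math.sqrt(1 - rho ** 2)
    return math.exp(-q / (2 * (1 - rho ** 2))) / norm
```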

[Figure: "Bivariate Normal Distribution" — a 3-D surface plot of the joint density of z₁ and z₂, each axis running from −10 to 10, density values up to about 0.015.]

[Figure: the univariate normal density dnorm(x) plotted for x from −4 to 4, with values from 0.0 to 0.4.]

If the pair of deviations (Y₁ − Ȳ, Y₂ − Ȳ) has a bivariate normal distribution, then the pdf of these points is a one-dimensional normal distribution. The distribution sits over the line z₁ + z₂ = 0, because all points of the form (Y₁ − Ȳ, Y₂ − Ȳ) lie on this line.

If the triple (Y₁ − Ȳ, Y₂ − Ȳ, Y₃ − Ȳ) has a multivariate normal distribution, then the pdf of these points is a two-dimensional normal distribution. The distribution sits over the plane z₁ + z₂ + z₃ = 0, because all points of this form lie on the plane. (Hard to draw.) This is how one dimension (one degree of freedom) is lost!

That means that even though the points lie in a two-dimensional space, the probability distribution function defined over them is essentially one-dimensional.

But,

The situation resembles the following: assume we have two normally distributed random variables, z₁ and z₂. Then the distribution of the sum of their squares, i.e., z₁² + z₂², does not necessarily have a chi-squared distribution with two degrees of freedom. Why?

Consider the case where z₁ = z₂. Then z₁² + z₂² = 2z₁², which (up to scale) has a chi-square distribution with one degree of freedom. Hence, unless z₁ and z₂ are independent, z₁² + z₂² need not have a chi-square distribution with two degrees of freedom.
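A small simulation (my own illustration, not from the slides) makes the point: with z₂ = z₁ the sum of squares still has mean 2, but its variance is 8 rather than the value 4 of a genuine chi-square with two degrees of freedom:

```python
import random

# When z2 = z1 (perfect dependence), z1^2 + z2^2 = 2*z1^2.
random.seed(0)
n = 200_000
sums = []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = z1                       # dependent, not independent
    sums.append(z1 ** 2 + z2 ** 2)

mean = sum(sums) / n
var = sum((s - mean) ** 2 for s in sums) / n
# mean is close to 2 (as for chi-square with 2 df),
# but var is close to 8, not the chi-square-2 variance of 4
```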

What is the distribution of (Ȳ − μ₀)/(s/√n)?

The t-distribution

That is why we divide by (n-1) in calculating sample s.d.

One sample t-test
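A minimal sketch of the one-sample t statistic (the data are invented for illustration); note the division by n − 1 in the sample variance, as discussed above:

```python
import math

def one_sample_t(sample, mu0):
    """t = (ybar - mu0) / (s / sqrt(n)), where s is computed with the
    n - 1 divisor (the division by n - 1 mentioned on the previous slide)."""
    n = len(sample)
    ybar = sum(sample) / n
    s2 = sum((y - ybar) ** 2 for y in sample) / (n - 1)
    return (ybar - mu0) / (math.sqrt(s2) / math.sqrt(n))

# Illustrative data: test H0: mu = 6.
t = one_sample_t([7, 9, 6, 8, 10], 6.0)
```

Under H₀ this statistic follows a t-distribution with n − 1 degrees of freedom.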

Two-sample tests

B and A are types of seeds.

Numerical Example (wheat again)

Summary: We have so far seen what a good test statistic (and its null distribution) looks like. The distribution that we selected is a textbook distribution. Could we pick others?

Choosing Test Statistic

The t statistic

The Kolmogorov-Smirnov statistic
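As a hedged sketch of the Kolmogorov-Smirnov statistic, D = sup over x of |Fₙ(x) − F₀(x)|, evaluated at the jump points of the empirical CDF (names and data here are mine, not from the slides):

```python
def ks_statistic(sample, cdf):
    """D = sup_x |F_n(x) - F0(x)|: the largest gap between the empirical
    CDF of the sample and the hypothesized CDF, checked at each jump."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f0 = cdf(x)
        # empirical CDF just after and just before the jump at x
        d = max(d, abs((i + 1) / n - f0), abs(i / n - f0))
    return d

# Example against a Uniform(0, 1) null hypothesis:
d = ks_statistic([0.1, 0.2, 0.5, 0.9], lambda x: x)
```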

Comparing the test statistics

Sensitivity to specific alternatives

Discussion

Or…

• We need to add additional assumptions, such as equality of the standard deviations of the samples.
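Under that added assumption, the two-sample t statistic pools the two sample variances; a minimal sketch (my own code, with illustrative data):

```python
import math

def pooled_two_sample_t(a, b):
    """Two-sample t statistic assuming equal standard deviations:
    the two sums of squares are pooled over na + nb - 2 degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)        # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se                     # df = na + nb - 2

t = pooled_two_sample_t([9, 7, 6, 7, 6], [3, 1, 3, 4, 4])
```

For two groups, one-way ANOVA and this pooled t-test agree: F = t².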

Two-sample tests

B and A are types of seeds.

Remembered?

Contingency Tables

(Cross-Tabs)

We use cross-tabulation when:

• We want to look at relationships among two or three variables.

• We want a descriptive statistical measure to tell us whether differences among groups are large enough to indicate some sort of relationship among variables.

Cross-tabs are not sufficient to:

• Tell us the strength or actual size of the relationships among two or three variables.

• Test a hypothesis about the relationship between two or three variables.

• Tell us the direction of the relationship among two or more variables.

• Look at relationships between one nominal or ordinal variable and one ratio or interval variable unless the range of possible values for the ratio or interval variable is small. What do you think a table with a large number of ratio values would look like?

Because we use tables in these ways, we can set up some decision rules about how to use them:

• Independent variables should be column variables.

• If you are not looking at independent and dependent variable relationships, use the variable that can logically be said to influence the other as your column variable.

• Using this rule, always calculate column percentages rather than row percentages.

• Use the column percentages to interpret your results.

For example:

• If we were looking at the relationship between gender and income, gender would be the column variable and income would be the row variable. Logically, gender can determine income; income does not determine your gender.

• If we were looking at the relationship between ethnicity and location of a person’s home, ethnicity would be the column variable.

• However, if we were looking at the relationship between gender and ethnicity, one does not influence the other. Either variable could be the column variable.
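The column-percentage rule can be sketched in code (my own illustration; the 2×2 counts are the ones from the contingency table that follows):

```python
def column_percentages(table):
    """Convert a table of counts (list of rows) to percentages of each
    column total, so results are read down the columns."""
    col_totals = [sum(col) for col in zip(*table)]
    return [[100 * cell / col_totals[j] for j, cell in enumerate(row)]
            for row in table]

# Rows: Male, Female; columns: Married, Single
pcts = column_percentages([[37, 41], [51, 32]])
```

Each column of percentages sums to 100, so groups defined by the column variable can be compared directly.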

Contingency Tables (Cross-Tabs)

               Marital Status
Gender       Married    Single
Male            37        41
Female          51        32

How do we measure the relationship?

What do we EXPECT if there is no relationship?

                Gender
Result      Female   Male   Total
Cured         37      41      78
Not           51      32      83
Total         88      73     161

Observed                 Expected
         F     M                  F      M
Cured   37    41         Cured  42.6   35.4
Not     51    32         Not    45.4   37.6

χ² = (37 − 42.6)²/42.6 + (41 − 35.4)²/35.4 + (51 − 45.4)²/45.4 + (32 − 37.6)²/37.6 = 3.18

RESULT

● This test statistic has a χ² distribution with (2 − 1)(2 − 1) = 1 degree of freedom.

● The critical value at α = .01 of the χ² distribution with 1 degree of freedom is 6.63.

● Thus we do not reject the null hypothesis that the two proportions are equal, i.e., that the drug is equally effective for female and male patients.
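The calculation above can be reproduced directly; a sketch (function name mine), where each expected count is row total × column total / grand total:

```python
def chi_square_2x2(table):
    """Pearson chi-square for a 2x2 table given as [[a, b], [c, d]].
    Expected count = row total * column total / grand total."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Observed counts from the slide: Cured (F 37, M 41), Not (F 51, M 32)
chi2 = chi_square_2x2([[37, 41], [51, 32]])
# chi2 ≈ 3.18 < 6.63, so H0 is not rejected at alpha = .01
```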

INTRODUCTION TO ANOVA

• The easiest way to understand ANOVA is to generate a tiny data set (using the GLM):

As a first step, set the mean μ to 5 for the data set with 10 cases. In the table below, all 10 cases have a score of 5 at this point.

CASE  SCORE     CASE  SCORE
 1      5        6      5
 2      5        7      5
 3      5        8      5
 4      5        9      5
 5      5       10      5

• The next step is to add the effects of the IV. Suppose that the effect of the treatment at level a₁ is to raise scores by 2 units and the effect of the treatment at level a₂ is to lower scores by 2 units.

CASE  SCORE      CASE  SCORE
 1    5+2=7       6    5-2=3
 2    5+2=7       7    5-2=3
 3    5+2=7       8    5-2=3
 4    5+2=7       9    5-2=3
 5    5+2=7      10    5-2=3

• The changes produced by treatment are the deviations of the scores from 5. Over all of these cases, the sum of the squared deviations is 10 × 2² = 40.

This is the sum of the (squared) effects of treatment if all cases are influenced identically by the various levels of A and there is no error.

• The third step is to complete the GLM with the addition of error.

CASE  SCORE        CASE  SCORE
 1    5+2+2=9       6    5-2+0=3
 2    5+2+0=7       7    5-2-2=1
 3    5+2-1=6       8    5-2+0=3
 4    5+2+0=7       9    5-2+1=4
 5    5+2-1=6      10    5-2+1=4

Then the variance for the a₁ group is (2² + 0² + 1² + 0² + 1²)/4 = 1.5,

and the variance for the a₂ group is (0² + 2² + 0² + 1² + 1²)/4 = 1.5.

The average of these variances is also 1.5. Check that these numbers represent error variance; that is, they represent random variability in scores within each group, where all cases are treated the same and are therefore uncontaminated by effects of the IV. The variance of this group of 10 numbers, ignoring group membership, is 52/9 ≈ 5.78.
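These variances can be checked directly; a small sketch (my own code) using the ten scores above:

```python
def variance(xs, ddof=1):
    """Sample variance with divisor n - ddof (ddof=1 gives the n - 1 rule)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - ddof)

a1 = [9, 7, 6, 7, 6]   # scores under treatment a1
a2 = [3, 1, 3, 4, 4]   # scores under treatment a2

v1 = variance(a1)                # 1.5
v2 = variance(a2)                # 1.5
v_total = variance(a1 + a2)      # 52/9 ≈ 5.78, ignoring group membership
```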

Standard Setup for ANOVA

        a₁    a₂
         9     3
         7     1
         6     3
         7     4
         6     4
Sum     35    15

The difference between each score and the grand mean is broken into two components:

1. the difference between the score and its own group mean;
2. the difference between that group mean and the grand mean.

Sum of squares for treatment: the effect of the IV!!!

Sum of squares for error

Each term is then squared and summed separately to produce the sum of squares for error and the sum of squares for treatment. The basic partition holds because the cross-product terms vanish.

∑ᵢ ∑ⱼ (Y_ij − GM)² = ∑ᵢ ∑ⱼ (Y_ij − Ȳ_j)² + n ∑ⱼ (Ȳ_j − GM)²
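The partition can be verified numerically on the tiny data set above (my own check, not from the slides):

```python
# Verify the sum-of-squares partition on the two treatment groups.
groups = [[9, 7, 6, 7, 6], [3, 1, 3, 4, 4]]
scores = [y for g in groups for y in g]
gm = sum(scores) / len(scores)                 # grand mean

ss_total = sum((y - gm) ** 2 for y in scores)
ss_within = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
ss_between = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
# ss_total = 52 = ss_within (12) + ss_between (40)
```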

This is the deviation form of basic ANOVA. Each of these terms is a sum of squares (SS). The average of the left-hand sum is the total variance in the set of scores, ignoring group membership. The first term on the right is called the sum of squares within groups; the second term is called the SS between groups. The partition is frequently symbolized as SS_total = SS_wg + SS_bg.

At this point it is important to realize that the total variance in the set of scores is partitioned into two sources. One is the effect of the IV and the other is all remaining effects (which we call error). Because the effects of the IV are assessed by changes in the central tendencies of the groups, the inferences that come from ANOVA are about differences in central tendency.

However, sums of squares are not yet variances. To become variances, they must be 'averaged'. The denominators for averaging SS must be degrees of freedom, so that the statistics will have a proper distribution (remember the previous slides).

So far we know that the degrees of freedom of SS_total must be N − 1.

Furthermore, the degrees of freedom of SS_wg is N − k, where k is the number of groups.

Also, the degrees of freedom of SS_bg is k − 1.

Thus we have (as expected) (N − k) + (k − 1) = N − 1.

Variance is an 'averaged' sum of squares (for empirical data, of course). Then, to obtain the mean sums of squares (MS), divide each SS by its degrees of freedom: MS_bg = SS_bg/(k − 1) and MS_wg = SS_wg/(N − k).

The F distribution is the sampling distribution of the ratio of two independent chi-square variables, each divided by its degrees of freedom.

The statistic F = MS_bg/MS_wg is used to test the null hypothesis that the group means are all equal.

Source table for basic ANOVA

Source     SS   df   MS     F
Between    40    1   40.0   26.67
Within     12    8    1.5
Total      52    9
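The source table can be reproduced from the ten scores; a sketch (function name mine):

```python
def one_way_anova(groups):
    """One-way ANOVA from a list of groups of scores.
    Returns (ss_between, df_between, ss_within, df_within, F)."""
    scores = [y for g in groups for y in g]
    n_total, k = len(scores), len(groups)
    gm = sum(scores) / n_total
    ss_b = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
    ss_w = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    df_b, df_w = k - 1, n_total - k
    f = (ss_b / df_b) / (ss_w / df_w)          # F = MS_bg / MS_wg
    return ss_b, df_b, ss_w, df_w, f

ss_b, df_b, ss_w, df_w, f = one_way_anova([[9, 7, 6, 7, 6], [3, 1, 3, 4, 4]])
# ss_b = 40, df_b = 1, ss_w = 12, df_w = 8, F = 40/1.5 ≈ 26.67
```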
