
Understanding Statistics in Research

Reminders

Final drafts of the APA Style Paper are due in lecture next Wednesday (Nov 19)

Statistics

Descriptive statistics: describe the data you collected from the sample.

Inferential statistics: make inferences about the population from the data collected from the sample, generalizing results from the study to the population.

Inferential Statistics

Example experiment:
Group A gets a treatment to improve memory; Group B gets no treatment (control).
After the treatment period, test both groups for memory.

Results:
Group A's average memory score is 80%; Group B's average memory score is 76%.

Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
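As a concrete (and entirely hypothetical) illustration of that question, the sketch below invents individual scores whose group means match the slide's 80% and 76%, then asks an independent-samples t-test (scipy.stats.ttest_ind) whether the gap is larger than sampling error alone would typically produce. The scores, group sizes, and alpha level are assumptions, not data from the slides.

```python
# A minimal sketch of the memory example, assuming invented scores whose
# means match the slide (Group A = 80%, Group B = 76%).
from scipy import stats

group_a = [82, 78, 85, 79, 76, 80, 81, 79]  # hypothetical treatment-group scores (mean 80)
group_b = [75, 77, 74, 78, 76, 77, 75, 76]  # hypothetical control-group scores (mean 76)

# Independent-samples t-test: is the 4-point gap bigger than what
# sampling error alone would typically produce?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# With alpha = .05, p < .05 would mean rejecting H0 ("no difference").
```

Whether p falls below .05 here depends entirely on the invented scores; the point is only the shape of the question being asked.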

Inferential Statistics

This type of statistics is based on making hypotheses and testing them

Five main steps to test hypotheses

Testing hypotheses

Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample
Step 4: Compute your test statistics
Step 5: Make a decision about your hypotheses: reject your null hypothesis, or fail to reject your null hypothesis

Testing hypotheses

Step 1: State your hypotheses

Null hypothesis (H0): "There are no differences between the groups." This is the hypothesis that you are testing!

Alternative hypothesis (Ha): "There are effects/differences between the groups." This is what you expect to find!

State your hypotheses

The null hypothesis is always the opposite of the alternative hypothesis.

Example (nondirectional):
Ha: The training group will be different from the control group.
H0: The training group will not be different from the control group.

Example (directional):
Ha: The training group will perform better than the control group.
H0: The training group will not perform better than the control group (it will perform equally well or worse).

State your hypotheses

You are not attempting to prove your alternative hypothesis.

You are testing the null hypothesis. If you reject the null hypothesis, then you are left with support for the alternative(s).

State your hypotheses

In the memory example experiment:

Alternative, Ha: mean of Group A ≠ mean of Group B (or, directionally: Group A > Group B)

Null, H0: mean of Group A = mean of Group B (or, directionally: Group A ≤ Group B)

Which hypothesis do we test?
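The two forms of the alternative map directly onto two-sided and one-sided tests. The sketch below, using the same invented scores as before and the `alternative` argument of scipy's `ttest_ind` (available in recent scipy versions), only shows where that choice enters the analysis; the data are hypothetical.

```python
# Sketch: nondirectional vs. directional alternatives for the memory example.
# Scores are the same invented data as before.
from scipy import stats

group_a = [82, 78, 85, 79, 76, 80, 81, 79]  # hypothetical treatment scores
group_b = [75, 77, 74, 78, 76, 77, 75, 76]  # hypothetical control scores

# Ha: mean(A) != mean(B)  ->  two-sided test (scipy's default)
two_sided = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# Ha: mean(A) > mean(B)   ->  one-sided test
one_sided = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-sided p = {two_sided.pvalue:.3f}")
print(f"one-sided p = {one_sided.pvalue:.3f}")
```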

Testing hypotheses

Step 1: State your hypotheses
Step 2: Set your decision criteria

Your alpha level will tell you what to decide: reject the null hypothesis, or fail to reject the null hypothesis.

Set your decision criteria

Because you set the criteria, it is possible that you could make the wrong decision.

Type I error: saying there is a difference when there really isn't one. You reject the null when you should have failed to reject it. The alpha level is the probability of making this type of error.

Type II error: saying there is not a difference when there really is one. You fail to reject the null when you should have rejected it.

Types of Errors

Experimenter's conclusion vs. the real world (truth):

Reject H0 (conclude there are differences) when H0 is correct (there really are no differences): Type I error (α)
Fail to reject H0 (conclude there are no differences) when H0 is wrong (there really are differences): Type II error (β)
Reject H0 when H0 is wrong, or fail to reject H0 when H0 is correct: correct decision

Types of Errors

Real-world example: the courtroom analogy.

Jury's decision vs. the real world (truth):

Found guilty when the defendant really is innocent: Type I error
Found not guilty when the defendant really is guilty: Type II error

Types of Errors

Type I error: concluding that there is an effect (a difference between groups) when there really isn't one. The alpha level (α), sometimes called the "significance level," is the probability of this error. Pick a low level of alpha; in psychology, 0.05 and 0.01 are most common.

Type II error: concluding that there isn't an effect when there really is one. Its probability is beta (β), which is related to the statistical power of a test (1 − β): how likely you are to detect a difference if it is there.
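A quick way to see what α, β, and power mean is to simulate many experiments. In the sketch below, the population means, standard deviation, sample size, and number of simulations are all assumptions chosen for illustration: when H0 is really true, the rejection rate should land near the chosen alpha (the Type I error rate); when a real difference exists, the rejection rate estimates power (1 − β).

```python
# Sketch: estimating the Type I error rate and power by simulating many
# experiments. All population parameters below are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_per_group, n_sims = 0.05, 20, 2000

def rejection_rate(mean_a, mean_b, sd=10.0):
    """Fraction of simulated experiments in which H0 is rejected."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(mean_a, sd, n_per_group)
        b = rng.normal(mean_b, sd, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

# H0 really true (no difference): rejection rate ~ alpha = Type I error rate
print("Estimated Type I error rate:", rejection_rate(80, 80))

# H0 really false (a 4-point difference): rejection rate = power = 1 - beta
print("Estimated power:", rejection_rate(80, 76))
```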

Testing hypotheses

Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistics: descriptive statistics (means, standard deviations, etc.) and inferential statistics (t-tests, ANOVA, etc.)

Testing hypotheses

Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistics
Step 5: Make a decision about your hypotheses: reject H0 (statistically significant differences) or fail to reject H0 (no statistically significant differences)

Decision about your hypotheses

What do statistically significant differences mean? You reject the null hypothesis and support the alternative hypothesis: the differences/effects you found are above what you'd expect by "chance."

Decision about your hypotheses

The level of chance is determined by how much sampling error there is: the more sampling error, the less likely you are to have a significant finding.

Sampling error is the difference between the sample and the population. It is influenced by sample size and by population variability.

Sampling Error

[Figure: population distribution with a single sampled observation (n = 1); sampling error = population mean − sample mean]

Sampling Error

[Figure: population distribution with a sample of n = 2; sampling error = population mean − sample mean]

Sampling Error

[Figure: population distribution with a sample of n = 10; sampling error = population mean − sample mean]

As the sample size increases, the sampling error decreases.

Sampling Error

When the population variability is smaller, there is a smaller range of possible sample means, so the sampling error decreases.

[Figure: sampling from a population with small variability vs. one with large variability]

Sampling Error

Sampling error is influenced by:

Sample size: as the sample size increases, the sampling error decreases.

Population variability: as the population variability decreases, the sampling error decreases.
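Both influences are easy to check by simulation. The population mean, the standard deviations, and the sample sizes in the sketch below are arbitrary assumptions; the pattern (average sampling error shrinking as n grows and as population variability shrinks) is the point.

```python
# Sketch: sampling error vs. sample size and population variability.
# The population mean and SDs are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
POP_MEAN = 100.0

def avg_sampling_error(n, pop_sd, n_samples=10_000):
    """Average |population mean - sample mean| over many samples of size n."""
    sample_means = rng.normal(POP_MEAN, pop_sd, size=(n_samples, n)).mean(axis=1)
    return np.abs(POP_MEAN - sample_means).mean()

for n in (1, 2, 10, 100):     # larger samples -> smaller sampling error
    print(f"n = {n:3d}, pop SD = 15: avg sampling error = {avg_sampling_error(n, 15):.2f}")

for pop_sd in (5, 15, 30):    # less population variability -> smaller sampling error
    print(f"n =  10, pop SD = {pop_sd:2d}: avg sampling error = {avg_sampling_error(10, pop_sd):.2f}")
```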

Distribution of sample means

These two factors (population variability and sample size) combine to impact the distribution of sample means: the distribution of all possible sample means of a particular sample size that can be drawn from the population.

[Figure: samples of size n drawn from the population, with sample means X̄A, X̄B, X̄C, X̄D forming the distribution of sample means; the average sampling error in this distribution is what we mean by "chance"]
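The spread of this distribution of sample means (the standard error) is σ/√n, which is why both factors matter. The sketch below checks that by simulation; the population mean, SD, and sample size are assumptions chosen for illustration.

```python
# Sketch: the distribution of sample means, built by simulation, has a
# standard deviation close to sigma / sqrt(n). Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
pop_mean, pop_sd, n = 100.0, 15.0, 25

# Draw many samples of size n and keep each sample's mean
sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)

print("SD of the distribution of sample means:", round(float(sample_means.std()), 2))
print("sigma / sqrt(n):                        ", round(pop_sd / np.sqrt(n), 2))
```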

What is Significance?

A statistically significant difference means rejecting the null hypothesis: the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05).

This is not the same thing as theoretical significance. It is only a statistical difference; it doesn't mean that it is an important difference.
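One way to see the "statistical vs. important" distinction: with a large enough sample, even a trivially small true difference comes out statistically significant. The sketch below fakes such a case; every number in it (means, SD, sample size) is an assumption for illustration only.

```python
# Sketch: with a very large sample, a tiny true difference (0.5 points,
# invented here) is still statistically significant. All numbers are
# assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100_000
group_a = rng.normal(80.5, 10, n)   # assumed true mean 80.5
group_b = rng.normal(80.0, 10, n)   # assumed true mean 80.0

result = stats.ttest_ind(group_a, group_b)
print(f"observed difference in means = {group_a.mean() - group_b.mean():.2f}")
print(f"p = {result.pvalue:.1e}  (significant, but hardly an important difference)")
```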

What is Non-significance?

Non-significance means failing to reject the null hypothesis. You can't "accept" the null, because you can't prove that there is no effect. This is usually not as interesting as rejecting the null.

Typically, check to see whether you made a Type II error (failed to detect a difference that is really there):

Check the statistical power (the probability of finding an effect if there is one) of your test. Power is low when the sample size is too small or when the effects you're looking for are really small.

Check your method; there may be too much variability.

Testing hypotheses: Next lecture

Tests that calculate significance:

Generic statistical test
t-test: 2 group means
Analysis of Variance (ANOVA): more than 2 group means
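As a preview, both tests are single calls in scipy. The sketch below runs them on invented scores (the same hypothetical memory data as earlier, plus a made-up third group), so the numbers mean nothing beyond illustrating the calls.

```python
# Sketch: the two tests named for next lecture, run on invented scores.
from scipy import stats

group_a = [82, 78, 85, 79, 76, 80, 81, 79]   # hypothetical
group_b = [75, 77, 74, 78, 76, 77, 75, 76]   # hypothetical
group_c = [70, 72, 69, 74, 71, 73, 70, 72]   # hypothetical third group

# t-test: compares 2 group means
print(stats.ttest_ind(group_a, group_b))

# One-way ANOVA: compares more than 2 group means
print(stats.f_oneway(group_a, group_b, group_c))
```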