
Smith/Davis (c) 2005 Prentice Hall

Chapter Thirteen

Inferential Tests of Significance II: Analyzing and Interpreting Experiments with More than Two Groups

PowerPoint Presentation created by Dr. Susan R. Burns, Morningside College


Before Your Statistical Analysis

Experimental design and statistical analysis are very closely linked. Thus, before you run your experiment and collect data, you must decide on your design to ensure you won't collect data that you cannot analyze.


Analyzing Multiple-Group Designs

This chapter specifically discusses experiments that have multiple groups, but only one independent variable (IV).

These designs are analyzed with one-way ANOVAs.

When participants are randomly assigned to those multiple groups, you are using a one-way ANOVA for independent groups.

If you are using natural sets, matched sets, or repeated measures, you are using a one-way ANOVA for correlated groups.


Planning Your Experiment

As you are planning your experiment, it is important to consider operational definitions of your variables.

The example given in your text is continued from chapter 11: Examining how long it takes sales clerks to wait on customers as influenced by style of dress (sloppy versus dressy).

The researchers add an additional category of clothing style (i.e., casual).

The researchers operationally defined casual clothing as slacks and shirts (e.g., khakis and polo shirts) for both male and female customers.


Rationale of ANOVA

Clerks were randomly assigned to the three groups (a requirement to create independent groups).

An observer accompanies the students to unobtrusively time how long each salesclerk takes to respond to each student (the DV).

You can see the clerks' responses in Table 13-1.


Rationale of ANOVA

Variability in your data can be divided into two sources:

– Between-groups variability – represents the variability caused by the independent variable.

– Error variability (a.k.a. within-groups variability) – represents variability due to factors such as individual differences, errors in measurement, and extraneous variation. That is, any variation not due to the IV.
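This partition can be sketched numerically. The following is a minimal Python sketch with hypothetical response times (not data from the text), showing that the total variability splits exactly into between-groups and within-groups pieces:

```python
# Hypothetical scores for three clothing-style groups (illustration only).
groups = {
    "sloppy": [10, 12, 14],
    "casual": [7, 9, 11],
    "dressy": [4, 5, 6],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Total variability: squared deviations of every score from the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# Between-groups variability: squared deviations of each group mean from the
# grand mean, weighted by group size (variability attributable to the IV).
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())

# Within-groups (error) variability: squared deviations of each score from
# its own group mean (individual differences, measurement error, etc.).
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

print(ss_total, ss_between, ss_within)  # total = between + within
```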


Rationale of ANOVA

In general terms, the statistic for ANOVA is a ratio of the variability between groups to the variability within groups.

The general formula used is:

F = between-groups variability / error variability


Rationale for ANOVA

If your IV has a strong treatment effect and creates much more variability than all the error variability, the numerator of this equation will be considerably larger than the denominator.

The result would be a large F ratio. See Figure A.


Rationale for ANOVA

The reverse is also true: if the IV has no effect, there would be no variability due to the IV, meaning it would contribute 0 to the numerator of the equation.

Thus, the F ratio would be close to one because the error variability between groups should approximately equal the error variability within the groups. See Figure B.

The F ratio is conceptualized (and computed) with the following formula:

F = (variability due to the IV + error variability) / error variability


One-Way ANOVA for Independent Groups

Calculating ANOVA by Hand – because ANOVA is complicated, there is not one simple formula for calculation.

Thus, you must calculate the variability in the DV scores that is attributable to different sources.

– Similar to the formula for the standard deviation, you will need to find the sum of squared deviation scores (the difference between each score and the overall mean, squared and summed). This quantity is known as the sum of squares.


One-Way ANOVA for Independent Groups

In the one-way ANOVA you will need to calculate the sum of squares for three sources: total, between groups (due to the IV), and error (all variability not due to the IV).

– Total sum of squares: SStot = ΣX² – (ΣX)²/N

– Sum of squares for the IV (summed across groups, with n scores per group): SSIV = Σ[(ΣX)²/n] – (ΣX)²/N

– Sum of squares for error (computed within each group): SSerror = Σ[ΣX² – (ΣX)²/n]
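These raw-score formulas can be checked with a short Python sketch using hypothetical scores; the three sources should add up (SStot = SSIV + SSerror):

```python
# Hypothetical scores for three groups (illustration only).
groups = [[10, 12, 14], [7, 9, 11], [4, 5, 6]]
all_x = [x for g in groups for x in g]
N = len(all_x)
correction = sum(all_x) ** 2 / N        # the (ΣX)²/N correction term

# SStot = ΣX² – (ΣX)²/N, over all scores.
ss_tot = sum(x * x for x in all_x) - correction
# SSIV: square each group's sum, divide by its n, add up, subtract correction.
ss_iv = sum(sum(g) ** 2 / len(g) for g in groups) - correction
# SSerror: within each group, ΣX² – (ΣX)²/n, then add across groups.
ss_error = sum(sum(x * x for x in g) - sum(g) ** 2 / len(g) for g in groups)

print(ss_tot, ss_iv, ss_error)  # total splits into IV + error
```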


One-Way ANOVA for Independent Groups

To find the mean squares, we divide the sums of squares for our IV and error by their respective df:

MSIV = SSIV/dfIV

MSerror = SSerror/dferror

The degrees of freedom for the treatment effect (IV) equal the number of groups minus one.

The degrees of freedom for the error term equal the number of participants minus the number of groups.

The total degrees of freedom equal the number of participants minus one.


One-Way ANOVA for Independent Groups

To find the F ratio, we divide the MS of the IV by the MS for the error term:

F = MSIV/MSerror

To determine the probability of your findings occurring by chance, you will need to use the two sets of degrees of freedom (between groups, and within groups or error) to look up the critical value in your table.

Part of this information should be familiar to you. That is, you are able to see whether your test statistic exceeds the critical value and, therefore, whether you have achieved significance.

As with the two-group design, when you are finished with your calculations, you can go on to interpret your statistics in words.
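The full hand calculation can be sketched end to end. This is a minimal Python example with hypothetical waiting times (not data from the text); the critical value 5.14 is the tabled F(2, 6) at α = .05:

```python
# Hand-computed one-way independent-groups ANOVA (hypothetical data).
groups = [[10, 12, 14], [7, 9, 11], [4, 5, 6]]

k = len(groups)                          # number of groups
N = sum(len(g) for g in groups)          # total number of participants
all_x = [x for g in groups for x in g]
correction = sum(all_x) ** 2 / N         # (ΣX)²/N

ss_iv = sum(sum(g) ** 2 / len(g) for g in groups) - correction
ss_error = sum(sum(x * x for x in g) - sum(g) ** 2 / len(g) for g in groups)

df_iv, df_error = k - 1, N - k           # groups - 1, participants - groups
ms_iv, ms_error = ss_iv / df_iv, ss_error / df_error
f_ratio = ms_iv / ms_error               # F = MSIV / MSerror

f_crit = 5.14        # tabled critical value of F(2, 6) at alpha = .05
significant = f_ratio > f_crit
print(f"F({df_iv}, {df_error}) = {f_ratio:.2f}, significant: {significant}")
```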


Post Hoc Comparison

When you achieve significance with an ANOVA, you know only that there is a significant difference somewhere among your means.

We cannot tell, however, which means differ from one another. Thus, you will need to use a post hoc comparison.

Post hoc comparisons, also known as follow-up tests, allow us to determine which groups differ significantly from each other once we have determined that there is overall significance (by finding a significant F ratio).

You need to remember that you will need to conduct post hoc tests only if you find overall significance in a one-way ANOVA.


Post Hoc Comparison

There are several different post hoc tests, and there is much debate over these tests.

One such post hoc test is the Tukey HSD (an abbreviation for honestly significant difference).

– The formula entails dividing the differences between pairs of means by the standard error and then examining a table to determine significance.

– This test allows you to make all pairwise comparisons.
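The Tukey procedure can be sketched in a few lines. This hypothetical example divides each pairwise mean difference by the standard error and compares the result to the tabled studentized-range value (q ≈ 4.34 for three groups and 6 error df at α = .05); the group labels and numbers are illustrations, not the text's data:

```python
from math import sqrt
from itertools import combinations

# Hypothetical group means, error mean square, and group size.
means = {"sloppy": 12.0, "casual": 9.0, "dressy": 5.0}
ms_error, n = 3.0, 3
q_crit = 4.34            # tabled q for 3 groups, df_error = 6, alpha = .05

se = sqrt(ms_error / n)  # standard error of a group mean
results = {}
for a, b in combinations(means, 2):
    # Difference between the pair of means divided by the standard error.
    q_obs = abs(means[a] - means[b]) / se
    results[(a, b)] = q_obs > q_crit

print(results)  # only pairs exceeding q_crit differ significantly
```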


Source Table

Source tables provide a standardized way of reporting the results.

Source tables get their name because they isolate and highlight the different sources of variation in the data, including the between-groups and within-groups sources.


Computer Analysis of ANOVA for Independent Groups

It is extremely common to use a computer program to analyze one-way ANOVAs.

As with the t test, you will likely see both descriptive and inferential statistical information.

You may also have information about the proportion of variance in the DV accounted for by the IV; one such measure is eta squared (η2).
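Eta squared is simply the IV's sum of squares divided by the total sum of squares. A small sketch with hypothetical scores:

```python
# Hypothetical scores for three groups (illustration only).
groups = [[10, 12, 14], [7, 9, 11], [4, 5, 6]]
all_x = [x for g in groups for x in g]
N = len(all_x)
correction = sum(all_x) ** 2 / N

ss_tot = sum(x * x for x in all_x) - correction
ss_iv = sum(sum(g) ** 2 / len(g) for g in groups) - correction

# Proportion of variance in the DV accounted for by the IV.
eta_squared = ss_iv / ss_tot
print(round(eta_squared, 3))
```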

Again, you will use APA format to convey your findings in both words and numbers to present your findings in a clear and concise manner.


One-Way ANOVA for Correlated Groups

One-Way ANOVA for Correlated Groups is appropriate for designs that use repeated measures, matched sets, or natural sets.

Again the correlated-groups ANOVA is complicated enough that there is not one simple formula for calculation because you must calculate the variability in the DV scores that is attributable to various sources (i.e., between and within-groups variability).

In the independent-samples ANOVA, you calculate the sum of squares for three sources: total, between groups (due to the IV), and within groups (due to error).

For correlated-samples designs, you must also calculate a sum of squares for participants. Because we expect our participants' scores to be correlated, we need to take that variability out of the error variability.


One-Way ANOVA for Correlated Groups

Sum of squares for the one-way ANOVA for correlated groups:

– Total sum of squares: SStot = ΣX² – (ΣX)²/N

– Sum of squares for the IV: SSIV = Σ[(ΣX)²/n] – (ΣX)²/N

– Sum of squares for participants (where P is a participant's total across conditions and n is the number of scores per participant): SSpart = Σ(P²)/n – (ΣX)²/N

– Sum of squares for error: SSerror = SStot – SSIV – SSpart
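The four-way partition can be verified with a short sketch. Here the rows are hypothetical participants measured under all three conditions (repeated measures), so each row total P feeds the participant sum of squares:

```python
# Rows are participants; columns are the three conditions (hypothetical data).
scores = [
    [10, 7, 4],
    [12, 9, 5],
    [14, 11, 6],
]

n_cond = len(scores[0])            # scores per participant (conditions)
n_part = len(scores)               # participants per condition
all_x = [x for row in scores for x in row]
N = len(all_x)
correction = sum(all_x) ** 2 / N   # (ΣX)²/N

ss_tot = sum(x * x for x in all_x) - correction
# IV: sum each condition (column), square, divide by scores per condition.
ss_iv = sum(sum(row[j] for row in scores) ** 2 / n_part
            for j in range(n_cond)) - correction
# Participants: sum each row (P), square, divide by scores per participant.
ss_part = sum(sum(row) ** 2 / n_cond for row in scores) - correction
# Error is whatever variability remains after removing the IV and participants.
ss_error = ss_tot - ss_iv - ss_part

print(ss_tot, ss_iv, ss_part, ss_error)
```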


One-Way ANOVA for Correlated Groups

To find the Mean Squares, we divide the sums of squares for our IV and error by their respective df:

MSIV = SSIV/dfIV

MSpart = SSpart/dfpart

MSerror = SSerror/dferror

The total degrees of freedom equals the total number of participants minus one.

The degrees of freedom for participants is equal to the number of participants in a group minus one.

The degrees of freedom for the treatment effect (IV) is equal to the number of groups minus one.

The degrees of freedom for the error term equals the df for the treatment multiplied by the df for participants.


One-Way ANOVA for Correlated Groups

To find the F ratio, we divide the MS of the IV by the MS for the error term:

F = MSIV/MSerror

To determine the probability of your findings occurring by chance, you will need to use the two sets of degrees of freedom (between groups, and within groups or error) to look up the critical value in your table.

Again, when you achieve significance with an ANOVA, you know only that there is a significant difference somewhere among your means.

We cannot tell, however, which means differ from one another. Thus, you will again need to use a post hoc comparison (e.g., Tukey HSD).


One-Way ANOVA for Correlated Groups

Also, you will put together a Source Table to show your statistical results.

The major difference for the correlated-samples ANOVA source table is that you have the additional source of participants accounting for variance.

Although you may find a significant participant effect (a.k.a. subject effect), it does not tell us anything very important.

However, it is important statistically because the correlated-samples ANOVA removes the participant variability from the error term, making the error term smaller than the within-groups (error) term in the independent-samples ANOVA.

Thus, doing a correlated-groups ANOVA gives us much more power in our analysis, as indicated by a larger F ratio.
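The power gain can be seen by analyzing the same hypothetical sums of squares both ways; removing participant variability shrinks the error term, so F grows even though the IV effect is unchanged:

```python
# Hypothetical SS values from a 3-condition, 3-participants-per-group example.
ss_iv, ss_within, ss_part = 74.0, 18.0, 50 / 3
k, n = 3, 3          # number of conditions/groups, participants per group

# Independent groups: all non-IV variability stays in the error term.
f_independent = (ss_iv / (k - 1)) / (ss_within / (k * n - k))

# Correlated groups: participant variability is pulled out of the error term,
# shrinking the denominator (its df are (k - 1)(n - 1)).
ss_error = ss_within - ss_part
f_correlated = (ss_iv / (k - 1)) / (ss_error / ((k - 1) * (n - 1)))

print(round(f_independent, 2), round(f_correlated, 2))
```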


Computer Analysis of ANOVA for Correlated Groups

Again, most researchers who use the ANOVA for correlated groups use a statistical program to analyze their data.


The Continuing Research Problem

Although you have examined a situation or research question in more detail using a multiple-groups design as compared to the two-group design, there are still questions that remain.

For example, having more than one independent variable would give you a better picture of the “real world.”

Thus, it is important to realize that moving to a more complex design may be required to answer your questions more completely.