Help Sheet for Reading SPSS Printouts
Heather Walen-Frederick, Ph.D.
This document will give you annotated SPSS output for the following statistics:
1. Correlation
2. Regression
3. Paired Samples t-test
4. Independent Samples t-test
5. ANOVA
6. Chi-Square
Note that the version of SPSS used for this handout was 13.0 (Basic). Therefore, if you have advanced add-ons, or a more recent version, there may be some slight differences, but the bulk should be the same. One possible difference would be for later versions or advanced packages to give the option of things like effect size, etc. In addition, the data used for these printouts were based on data available in the text: Statistics for the Behavioral Sciences, 4th Edition (Jaccard & Becker, 2002).
If you have trouble with data entry, or other items not addressed in this guide, please try using the SPSS help that comes with the program (when in SPSS, go under the Help tab and click on Topics; you may be surprised at how user friendly SPSS help really is). At the end of this document is a guide to assist you in picking the most appropriate statistical test for your data.
Note: No test should be conducted without FIRST doing exploratory data analysis and confirming that the statistical analysis would yield valid results. Please do thorough exploratory data analysis to check for outliers, missing data, coding errors, etc. Remember: Garbage in, garbage out!
A note about statistical significance (what it means/does not mean).
Most everyone appreciates a refresher on this topic.
Statistical Significance: An observed effect that is large enough that we do not think we got it by accident (that is, we do not think that the result we got was due to chance alone).
How do we decide if something is statistically significant? If H0 is true, the p-value (probability value) is the probability that the observed outcome (or a value more extreme than what we observe) would happen. The p-value is a value we obtain after calculating a test statistic. The smaller the p-value, the stronger the evidence against the H0. If we set alpha at .05, then the p-value must be smaller than .05 to be considered statistically significant; if we set alpha at .01, then it must be smaller than .01 to be considered statistically significant. Remember, the p-value tells us the probability we would expect our result (or one more extreme) GIVEN the null is true. If our p-value is less than alpha, we REJECT THE NULL and say there appears to be a difference between groups/a relationship between variables, etc.
Conventional alpha (α) levels: p < .05 and p < .01. What do these mean?
p < .05 = this result would happen no more than 5% of the time (so 1 time in 20 samples), if the null were true.
p < .01 = this result would happen no more than 1% of the time (so 1 time in 100 samples), if the null were true.
Because these are low probabilities (events not likely to happen if the null were true), we reject the null when our calculated p-value falls below these alpha levels.
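If it helps to see the decision rule written out, here is a minimal Python sketch (the p-values and the alpha shown are just placeholders for illustration, not values from any analysis in this handout):

    # Decision rule: reject the null only when the p-value falls below alpha.
    def decide(p_value, alpha=0.05):
        if p_value < alpha:
            return "reject the null (statistically significant)"
        return "fail to reject the null (not statistically significant)"

    print(decide(0.012))              # reject the null (statistically significant)
    print(decide(0.012, alpha=0.01))  # fail to reject the null (not statistically significant)

Note that moving alpha from .05 to .01 flips the decision for the same p-value of .012, which is exactly the situation that comes up in several of the examples below.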
What if your p-value is close to alpha, but slightly over it (like .056)? You cannot reject the null. However, you can state that it is marginally significant. You will want to look at your effect size to determine the strength of the relationship and also your sample size. Often, a moderate to large effect will not be statistically significant if the sample size is low (low power). In this case, it suggests further research with a larger sample.
Please remember that statistical significance does not equal importance. You will always want to calculate a measure of effect size to determine the strength of the relationship. Another thing to keep in mind is that the effect size, and how important it is, is somewhat subjective and can vary depending on the study at hand.
1. Correlation
A correlation tells you how and to what extent two variables are linearly related. Use this technique when you have two variables (measured on the same person) that are quantitative in nature and measured on a level that at least approximates interval characteristics. When conducting a correlation, be sure to look at your data, including scatterplots, to make sure assumptions are met (for correlation, outliers can be a concern).
This example is from Chapter 14 (problem #49). The question was whether there was a relationship between the amount of time it took to complete a test and the test score. Do people who take longer get better scores, maybe due to re-checking questions and taking their time (positive correlation), or do people who finish sooner do better, possibly because they are more prepared (negative correlation)?
This is the default output.
This box gives the results.
Correlations

                                score    time
score   Pearson Correlation     1        .083
        Sig. (2-tailed)                  .745
        N                       18       18
time    Pearson Correlation     .083     1
        Sig. (2-tailed)         .745
        N                       18       18
You see the correlation in the last column, first row: .083. Clearly, this is a small correlation (remember correlations range from -1 to +1, and this is almost 0). The p-value (in the row below this) is .745. This is consistent with the correlation. This is nowhere near either alpha (.05 or .01); in other words, because the p-value EXCEEDS alpha, it is not statistically significant. Thus we fail to reject the null, and conclude that the time someone takes to complete a test is not related to the score they will receive.
The write up would look like this: r(18) = .08, p > .05 (if you were using an alpha of .01, it would read: r(18) = .08, p > .01). Alternatively, you could write: r(18) = .08, p = .75 (the difference is that in the last example, you are reporting the actual p-value, rather than just stating that the p-value was greater than alpha).
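If you ever need to reproduce a Pearson correlation outside SPSS, here is a minimal Python sketch using scipy; the time and score lists below are made-up placeholder numbers, not the textbook data, so the output will not match the printout above:

    import scipy.stats as stats

    # Placeholder data: minutes taken on the test and the resulting score.
    time  = [32, 45, 51, 38, 60, 42, 55, 47]
    score = [78, 85, 72, 90, 81, 76, 88, 80]

    r, p = stats.pearsonr(time, score)   # Pearson r and its two-tailed p-value
    print(f"r({len(time)}) = {r:.2f}, p = {p:.3f}")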
2. Regression
A regression is typically used to predict a DV with an IV (or multiple IVs). It is a procedure that is closely related to correlation (as such, look at your data before performing a regression, paying particular attention to outliers).
This regression is based on problem #50, Chapter 14. The researchers were interested in whether a person's GRE score was related to GPA in graduate school (after the first two years). Whether or not there is a relationship could be answered by a correlation. However, the researchers would like to be able to predict GPA for incoming graduate students using their GRE scores. Also, remember that typically researchers perform multiple regressions. These are regressions with multiple predictor variables used to predict the DV; this is much more complex than a simple regression, and questions answered by such a technique could not be answered by a correlation.
The output below is the default for this analysis.
This box is telling you that GRE was entered to predict GPA (this box is meaningless for this example, but would be important if you were using multiple predictor (X) variables and using a method of entering these into the equation that was not the default).
Variables Entered/Removed(b)

Model   Variables Entered   Variables Removed   Method
1       GRE(a)              .                   Enter

a. All requested variables entered.
b. Dependent Variable: GPA
This box gives you summary statistics.
Model Summary

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .751(a)   .564       .509                .25070

a. Predictors: (Constant), GRE
Reading across, the second box tells you the correlation (.75, a strong, positive, linear relationship). The next box gives you a measure of effect, R²: 56% of the variance in GPA is accounted for by GRE scores (this is a strong effect!). Adjusted R² adjusts for the fact that you are using a sample to make inferences about a population; some people report R² and some report the adjusted R². You then see the standard error of the estimate (think of this as the standard deviation around the regression line).
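As a quick check on how adjusted R² relates to R², here is a small Python sketch using values that do appear in this output (R² = .564 from the Model Summary; n = 10 cases and one predictor, from the ANOVA box below); the adjustment formula is the standard one, not anything SPSS-specific:

    # Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1), where k = number of predictors.
    r_squared = 0.564   # from the Model Summary box
    n, k = 10, 1        # 10 cases (total df = 9) and one predictor (GRE)

    adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)
    print(f"{adj_r_squared:.3f}")   # about .51, in line with the .509 in the printout (small rounding differences)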
This box gives you the results of the regression. The F is significant at .05, but not .01.
ANOVA(b)

Model 1      Sum of Squares   df   Mean Square   F        Sig.
Regression   .650             1    .650          10.343   .012(a)
Residual     .503             8    .063
Total        1.153            9

a. Predictors: (Constant), GRE
b. Dependent Variable: GPA
A write up would look like: A regression analysis, predicting GPA scores from GRE scores, was statistically significant, F(1, 8) = 10.34, p < .05 (or: F(1, 8) = 10.34, p = .012). You would want to report your R² and explain it, and if the analysis was performed to be used for actual prediction, you would want to add the regression equation (see below) and something like: For every one unit increase in GRE score, there is a corresponding increase in GPA of .005.
This box gives you the coefficients related to a regression.
Coefficients(a)

                 Unstandardized Coefficients   Standardized Coefficients
Model 1          B       Std. Error            Beta                        t       Sig.
(Constant)       .411    .907                                              .453    .662
GRE              .005    .002                  .751                        3.216   .012

a. Dependent Variable: GPA
You can see that the t value associated with GRE is significant at the same level the F statistic was (this will happen if you have only 1 X variable). The regression equation is: Ŷ = .411 + .005X.
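To see the prediction equation in use, here is a tiny Python sketch that plugs a hypothetical GRE score into the coefficients from the Coefficients box (the GRE value of 600 is just an illustration, not a case from the data):

    # Predicted GPA = intercept + slope * GRE, using the coefficients from the output above.
    intercept, slope = 0.411, 0.005

    def predict_gpa(gre):
        return intercept + slope * gre

    print(predict_gpa(600))   # 0.411 + 0.005 * 600 = 3.411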
3. Paired Samples t-test
A paired-samples, or matched-pairs, t-test is appropriate when you have two means to compare. It is different from an independent samples t-test in that the scores are related in some way (like from the same person, a couple, a family, etc.). Do not forget to check that assumptions for a valid test are met before performing this analysis.
The output shown here is the default.
This example is based on an example from Chapter 11 (problem #8). In this study, 5 participants were exposed to both a noisy and a quiet condition (this is the IV) and given a learning test (this is the DV). The question is whether learning differs in the two conditions; is there a statistically significant difference in the means for the noisy and quiet conditions?
The first box gives you means, standard deviations, and the N.
Paired Samples Statistics

                 Mean      N   Std. Deviation   Std. Error Mean
Pair 1   quiet   12.6000   5   6.42651          2.87402
         noisy   8.4000    5   4.97996          2.22711
This next box gives you the correlation between quiet and noisy.
Paired Samples Correlations

                         N   Correlation   Sig.
Pair 1   quiet & noisy   5   .928          .023
Typically, if you are conducting a t-test, you will not report these results. Nevertheless, because they are here, I will interpret them for you. There is a significant correlation (relationship) between the quiet and noisy conditions (the null is rejected). That is, if you scored high on one condition, you scored high on the other (this makes sense because it is the same person in both conditions). It is statistically significant at an alpha of .05, because when you look at the last box labeled Sig. the number there is .023 (this is the p-value). This is below (less than) an alpha of .05 (if you happened to be using a more conservative alpha such as .01, this result would not be statistically significant and you would retain the null).
The write up would be: r(5) = .93, p < .05 (or: r(5) = .93, p = .023). Using an alpha of .05, you would conclude there is a relationship between the conditions, which we would expect because it is the same people in each condition (again, this is not typically reported, because the question we want the answer to, whether there are differences in the conditions, is not answered by this output).
This next box gives you the results of your t-test.
Paired Samples Test

Pair 1, quiet - noisy (Paired Differences):
  Mean = 4.20000
  Std. Deviation = 2.58844
  Std. Error Mean = 1.15758
  95% Confidence Interval of the Difference: Lower = .98603, Upper = 7.41397
  t = 3.628, df = 4, Sig. (2-tailed) = .022
It is statistically significant if you are using an alpha of .05, because the p-value in the last box is .022 (which is less than .05). Again, this is a great example, because if you were using an alpha of .01, it would NOT BE statistically significant (because .02 is greater than .01). Let us assume we are using an alpha of .05. We would declare the result statistically significant and reject the null.
A write up of this would look something like: t(4) = 3.63, p < .05. Again, you could write: t(4) = 3.63, p = .02. You would want to go on to report the means and standard deviations as well as a measure of effect (eta squared), which you would have to calculate by hand (NOTE: there is a way to get this in SPSS. You would need to run the test as a repeated measures ANOVA and click on effect size in the options box. Introducing this analysis is beyond the scope of this guide; however, feel free to use SPSS help and experiment with this for yourself). Eta² = t²/(t² + df) [for this example: 3.628²/(3.628² + 4) = .77]. This means 77% of the variability in the DV (test scores) is due to the IV (condition); this is a large effect! You could also report the 95% confidence interval (.99 to 7.41). This interval is giving you a range of test score differences; we know there is a difference in the test scores based on condition. How big is the difference between the mean scores? Our best guess is somewhere between about 1 point (.99) and 7 points (7.41).
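The eta-squared calculation above takes only a couple of lines; here is a Python sketch using the t and df from this printout:

    # Eta^2 = t^2 / (t^2 + df) for a paired-samples t-test.
    t, df = 3.628, 4

    eta_squared = t ** 2 / (t ** 2 + df)
    print(f"{eta_squared:.2f}")   # 0.77, i.e., about 77% of the variability in scores is tied to condition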
4. Independent Samples t-test
An independent samples t-test is like a paired samples t-test, in that there are two means to compare. It is different in that the means are not related (for example, means from a treatment and control group). Do not forget to check that assumptions for a valid test are met before performing this analysis.
This example is based on problem #57 in Chapter 10. Briefly, the study was interested in whether choice influenced creativity in a task. The researchers randomly assigned children (2 to 6 years of age) to one of two groups: choice or no choice. They then had the children make collages and had the collages rated for creativity. In the choice condition, children got to choose 5 boxes (out of 10) containing collage
materials. In the no choice condition, the experimenters chose the 5 boxes for the children. Creativity ratings (given by 8 trained artists) could range from 0 to 320.
The output below is the default output.
This first box shows you the N, the mean, standard deviation, and standard error of the mean for each condition.
Group Statistics

           condition   N    Mean       Std. Deviation   Std. Error Mean
creative   choice      14   188.7857   16.32449         4.36290
           no choice   16   142.0625   13.89709         3.47427
This next box gives you the results of your t-test. The first column that you come to is labeled Levene's Test for Equality of Variances. This tests the assumption that the variances for the two groups are equal. You want this to be non-significant (because you want there to be no difference in the variances between the groups; this is an assumption for an independent samples t-test). To determine if it is significant, you look in the column labeled Sig. In this example, the p-value is .165 (this is not less than either alpha, so it is non-significant). This means we can use the row of data labeled Equal variances assumed and we can ignore the second row (Equal variances not assumed). If Levene's test WAS significant, then we would use the second row (Equal variances not assumed), and ignore the first row. The p-value is in the box labeled Sig. (2-tailed) and is .000; thus we reject the null (this is less than either alpha).
The write up would be: t(28) = 8.47, p < .05 (or: t(28) = 8.47, p < .001; round the p-value, because you would never report p = .000). There is a difference in the creativity between the groups. You would want to report the means and standard deviations for both groups and a measure of effect size (calculate this like above in the paired-samples example). The rest of the information in the box gives you the mean difference (the groups differed on creativity scores by about 47 points), and also the 95% CI (which you may also want to report).
Independent Samples Test (creative)

Levene's Test for Equality of Variances:  F = 2.032, Sig. = .165

t-test for Equality of Means:
                                                                                           95% CI of the Difference
                              t       df       Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   Lower      Upper
Equal variances assumed       8.470   28       .000              46.72321     5.51608            35.42404   58.02239
Equal variances not assumed   8.377   25.743   .000              46.72321     5.57723            35.25349   58.19294
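If you want to mirror this Levene-then-t-test sequence outside SPSS, here is a Python sketch using scipy; the two creativity-score lists are made-up placeholders, not the textbook data, so the numbers will not match the printout:

    import scipy.stats as stats

    # Placeholder creativity ratings for two independent groups.
    choice    = [190, 175, 201, 188, 195, 182, 178, 199]
    no_choice = [150, 142, 138, 155, 147, 133, 160, 141]

    # Levene's test: a non-significant result supports the equal-variances assumption.
    lev_stat, lev_p = stats.levene(choice, no_choice)

    # Use the pooled-variance t-test when Levene is non-significant, Welch's otherwise.
    t, p = stats.ttest_ind(choice, no_choice, equal_var=(lev_p >= 0.05))
    print(f"Levene p = {lev_p:.3f}; t = {t:.2f}, p = {p:.4f}")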
5. One-way Analysis of Variance (ANOVA)
An ANOVA is the analysis to use when you have more than two means to compare (it is like the independent samples t-test, but when you have more than two groups). Do not forget to check that assumptions for a valid test are met before performing this analysis.
This analysis is based on the example in Chapter 12 (problem #53). Briefly, the study was designed to examine whether a participant would give more shocks to a more aggressive learner. There were four conditions (4 levels of the IV): non-insulting, mildly insulting, moderately insulting, and highly insulting. An ANOVA is appropriate because there are more than two means being compared (in this case, there are four). Each participant was in one condition only (thus, the design is between-subjects; had the same
person been in all conditions, you would have a within-subjects design and need to perform a repeated measures ANOVA; this is not covered in Stats I).
The output below is what you get when asking SPSS for a one-way ANOVA under the Compare Means heading. You will get something different if you do an ANOVA using the Univariate command under the heading General Linear Model. (NOTE: using the Univariate command will give you the option of selecting effect size; however, the printout is more complicated and beyond the scope of this guide. Again, feel free to experiment with this.)
The output below is not the default. In order to get this output, when you are setting up the ANOVA (under Compare Means, One-Way ANOVA) you need to click on Post Hoc and select S-N-K (you are free to use whatever post hoc test you would like), and you need to click on Options and select: Descriptives, Homogeneity of variance test, and Means plot.
This first box gives you descriptive information.
Descriptives (Shocks)

                                                           95% CI for Mean
             N    Mean      Std. Deviation   Std. Error    Lower Bound   Upper Bound   Minimum   Maximum
Non          10   9.3000    2.90784          .91954        7.2199        11.3801       6.00      16.00
Mildly       10   12.6000   3.43835          1.08730       10.1404       15.0596       8.00      17.00
Moderately   10   17.7000   2.90784          .91954        15.6199       19.7801       14.00     23.00
Highly       10   24.4000   5.05964          1.60000       20.7805       28.0195       16.00     32.00
Total        40   16.0000   6.77098          1.07059       13.8345       18.1655       6.00      32.00
It tells you there were 10 participants in each group (N). It gives you the means and standard deviations for each group (you can see the mean number of shocks increased with each condition, and the variability in the 4th condition was the greatest). It also lists the standard error, the 95% CI for the mean for each group, and the range of scores for each group.
This next box gives you the results of a homogeneity of variance test.
Test of Homogeneity of Variances (Shocks)

Levene Statistic   df1   df2   Sig.
1.827              3     36    .160
Remember, this is an assumption for a valid ANOVA (that the variances of each group/condition are the same). We want this test to not reach statistical significance. When it is not significant, that is saying the groups' variances are not significantly different from each other (this is an assumption for a valid ANOVA). In this case, our assumption is met (p = .160). The p-value exceeds alpha (either .05 or .01), so the test is not significant. This is what we want; it allows us to move on to the next box. If you did get a significant value here, you need to read up on the assumptions and see if you believe your test is robust against this violation.
This next box gives us our results.
ANOVA (Shocks)

                 Sum of Squares   df   Mean Square   F        Sig.
Between Groups   1299.000         3    433.000       31.877   .000
Within Groups    489.000          36   13.583
Total            1788.000         39
According to the last box, the p-value is < .001 (SPSS rounds, so when you see .000 in this box, always report it as p < .001, or as less than the alpha you are using). Thus, at either alpha (.05 or .01), we can reject the null; the groups are different.
The write-up would be: F(3, 36) = 31.88, p < .05 (or: F(3, 36) = 31.88, p < .001). Remember, with ANOVA, at this point that is all we can say. We need to do post hoc tests to see where the differences are. We will get to that box next. To get your measure of effect, calculate by hand (or use the Univariate method explained above). Eta² = SS Between / SS Total [for this example: 1299/1788 = .73; a large effect].
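The eta-squared for the ANOVA is just as easy to reproduce; here is a short Python sketch using the sums of squares from the ANOVA box:

    # Eta^2 = SS Between / SS Total, using the ANOVA table above.
    ss_between, ss_total = 1299.0, 1788.0

    eta_squared = ss_between / ss_total
    print(f"{eta_squared:.2f}")   # 0.73, a large effect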
This next box gives the results of the post hoc tests.
Shocks
Student-Newman-Keuls(a)

                       Subset for alpha = .05
Group        N     1         2         3
Non          10    9.3000
Mildly       10    12.6000
Moderately   10              17.7000
Highly       10                        24.4000
Sig.               .053      1.000     1.000

a. Means for groups in homogeneous subsets are displayed. Uses Harmonic Mean Sample Size = 10.000.
The means that differ from each other are in separate boxes (other post hoc tests show differences between groups in different ways, sometimes by an *). Thus, the first two groups do not differ from each other, but all the other means do differ from each other. That is, the non and mildly insulting groups had significantly lower means than both the moderately and highly insulting groups. Further, the moderately insulting group had a significantly lower mean than the highly insulting group. The means in the non and mildly insulting groups did not differ from each other.
This last portion of the output gives you a nice visual display of the group means. You can see the steep increase in shocks given between the non/mildly insulting groups and the moderately/highly insulting groups.
[Means plot: Mean of Shocks (roughly 10.00 to 25.00 on the y-axis) by Group (Non, Mildly, Moderately, Highly) on the x-axis.]
6. Chi-Square
A chi-square (pronounced ky square) test is used when you have variables that are categorical rather than continuous. Examples when this test would be useful include the following: Is there a relationship between gender and political affiliation? Is there a relationship between ethnic group and religious group? Is there a difference between private and public school in dropout rates? Do not forget to check that assumptions for a valid test are met before performing this analysis.
The example here is based on Chapter 15 (problem #46). The research examined TV and how it influences gender stereotyping. For this study, commercials were classified in terms of whether the main character was male or female and whether he or she was portrayed as an authority (an expert on the product), or simply a user of the product. Thus, the IV is gender (male/female) and the DV is role (authority/user). Notice that these are categorical, or discrete, variables.
The output below is not the default output. In order to get these tables you must do the following:
Analyze > Descriptive Statistics > Crosstabs. Put one variable in the row box and one in the column. For this example, I put gender in the row box, and role in the column box. Click on Statistics and choose Chi-square and Phi & Cramér's V. Next, click on Cells and choose Counts: Observed, and Percentages: Row, Column and Total.
This first box is a summary, detailing the total N and number and % of missing cases. You can see there are 146 participants, with no missing cases.
Case Processing Summary

                               Cases
                Valid             Missing           Total
                N     Percent     N     Percent     N     Percent
gender * role   146   100.0%      0     .0%         146   100.0%
This box gives you descriptive information that you will use if your test is significant (to explain the results).
gender * role Crosstabulation

                                          role
                                          authority   user     Total
gender   male     Count                   75          40       115
                  % within gender         65.2%       34.8%    100.0%
                  % within role           88.2%       65.6%    78.8%
                  % of Total              51.4%       27.4%    78.8%
         female   Count                   10          21       31
                  % within gender         32.3%       67.7%    100.0%
                  % within role           11.8%       34.4%    21.2%
                  % of Total              6.8%        14.4%    21.2%
Total             Count                   85          61       146
                  % within gender         58.2%       41.8%    100.0%
                  % within role           100.0%      100.0%   100.0%
                  % of Total              58.2%       41.8%    100.0%
This box gives the results of your χ² test.
Chi-Square Tests

                               Value       df   Asymp. Sig.   Exact Sig.   Exact Sig.
                                                (2-sided)     (2-sided)    (1-sided)
Pearson Chi-Square             10.905(b)   1    .001
Continuity Correction(a)       9.592       1    .002
Likelihood Ratio               10.849      1    .001
Fisher's Exact Test                                           .002         .001
Linear-by-Linear Association   10.830      1    .001
N of Valid Cases               146

a. Computed only for a 2x2 table
b. 0 cells (.0%) have expected count less than 5. The minimum expected count is 12.95.
You read the first row (first line). Your p-value appears in the fourth column (box titled Asymp. Sig. (2-sided)). For this example, the p-value is .001. At either alpha this is statistically significant, and we would reject the null and conclude there is a relationship between gender and role portrayal.
A write up may look like this: χ²(1, N = 146) = 10.91, p < .05 (or: χ²(1, N = 146) = 10.91, p = .001). Thus, there is a relationship between gender and role portrayal in commercials. When your result is significant, you will want to go on to report percents and effect size. For percents, go back to the previous box. In this case, you may say something like: When an authority role was portrayed, it was a male 88% of the time, while it was a female only 12% of the time.
This box gives your measure of effect.
Symmetric Measures

                                  Value   Approx. Sig.
Nominal by Nominal   Phi          .273    .001
                     Cramer's V   .273    .001
N of Valid Cases                  146

a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.
It is in the rows labeled Phi and Cramér's V (for a 2x2 table such as this one, Phi and Cramér's V are identical). In this case, the effect size is .273, a small to moderate effect.
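For reference, Cramér's V can be recovered directly from the chi-square statistic; here is a Python sketch using the values already shown in these printouts:

    import math

    # Cramer's V = sqrt(chi^2 / (N * (k - 1))), where k is the smaller of the number of rows and columns.
    chi_square, n = 10.905, 146
    k = 2   # a 2x2 table, so k - 1 = 1 and V is the same as Phi

    cramers_v = math.sqrt(chi_square / (n * (k - 1)))
    print(f"{cramers_v:.3f}")   # 0.273, matching the Symmetric Measures box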
A note about this symbol: χ². You get this, in Word, by choosing the Insert tab, and then Symbol. It is a Greek letter. You may have to change your font to get it to show up like it is supposed to (Times New Roman always works). Once you get the χ, you need to add the superscript 2.
Bonus: Books you may find helpful (in addition to Jaccard & Becker)
The Basic Practice of Statistics, 2nd Edition. By: David S. Moore / Published by: Freeman. ISBN: 0-7167-3627-6. [This is an undergraduate level textbook that would serve as a nice refresher for basic topics. It is easy to read and understand.]

Statistical Methods for Psychology, 5th Edition. By: David C. Howell / Published by: Duxbury. ISBN: 0-534-37770-X. [This is a graduate level textbook and is very detailed; however, the author took great pains to make it readable.]

Reading and Understanding Multivariate Statistics. Edited by: Laurence G. Grimm & Paul R. Yarnold / Published by: APA. ISBN: 1-55798-273-2. [This is an EXCELLENT book for multivariate statistics. For each analysis, an example is given, complete with how to read printouts and how to report APA style. I would go so far as to say this is a *must have* for a graduate student in the social sciences.]
Which test do I use?
Note: there are many other statistical tests available, but this outline should provide you, at the very least, with a starting point in helping you determine the type of test you will need to perform.
                                          Dependent or Response Variable
                                          Continuous or Quantitative        Discrete or Categorical
Independent or      Continuous or        Correlation/Regression            Logistic Regression
Explanatory         Quantitative                                           (not covered in Statistics 1)
Variable            Discrete or          t-test or ANOVA                   Chi-Square (χ²)
                    Categorical          (see below for more details)
To use this table, first determine your IV and whether it is measured on a continuous (e.g., age in years) or discrete scale (e.g., gender). This will determine which ROW you use. Next, determine if the DV is continuous (e.g., happiness on a 10-point scale) or discrete (e.g., political affiliation). This will determine the COLUMN you will use. With both pieces of information, you will end up in one box. If you land on the t-test or ANOVA box, see the illustration below (and the small sketch after it) to help you further narrow it down.
How many groups do I have?

  1   ->  One sample t-test
  2   ->  Matched Pairs OR Two sample t-test
  >2  ->  ANOVA
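For readers who like to see the decision spelled out, here is a rough Python sketch of the table and diagram above. It is only a starting point (it ignores assumptions, repeated-measures designs, and the many tests not covered here), and the function name and labels are my own, not part of the handout:

    # A rough translation of the decision table and the group-count diagram above.
    def suggest_test(iv_type, dv_type, n_groups=None):
        if dv_type == "continuous":
            if iv_type == "continuous":
                return "Correlation / Regression"
            if n_groups == 1:
                return "One sample t-test"
            if n_groups == 2:
                return "Matched pairs OR two sample t-test"
            return "ANOVA"
        # Categorical (discrete) DV
        if iv_type == "continuous":
            return "Logistic regression (not covered in Statistics 1)"
        return "Chi-square"

    print(suggest_test("categorical", "continuous", n_groups=4))   # ANOVA
    print(suggest_test("categorical", "categorical"))              # Chi-square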
Note: No test should be conducted without FIRST doing exploratory data analysis and confirming that the statistical analysis would yield valid results. That is, check that the assumptions are met, or that the test is robust against violation(s).