
Bayesian Evaluation of Informative Hypotheses in SEM using Mplus

Rens van de Schoot
a.g.j.vandeschoot@uu.nl
rensvandeschoot.wordpress.com

Informative hypotheses

Null hypothesis testing

Specific expectations are difficult to evaluate using classical null hypothesis testing:

– We are not always interested in the null hypothesis

– ‘Accepting’ the alternative hypothesis provides no answer

– No direct relation between the test and the expectation

– Conclusions often rely on visual inspection

– Contradictory results

Null hypothesis testing

Theory → Expectations → Testing:

– H0: nothing is going on

vs.

– H1: something is going on, but we do not know what…

= catch-all hypothesis

Evaluating Informative Hypotheses

Theory → Expectations → Evaluating informative hypotheses:

- Ha: theory/expectation 1

vs.

- Hb: theory/expectation 2

vs.

- Hc: theory/expectation 3

etc.


Informative Hypotheses

Hypothesized order constraints between

statistical parameters

Order constraints: <, >
Statistical parameters: means, regression coefficients, etc.

Why???

Direct support for your expectation

Gain in power

Van de Schoot & Strohmeier (2011). Testing informative hypotheses in SEM increases power. IJBD, 35(2), 180-190.

Default Bayes factors


Bayes factors for informative hypo’s

As was shown by Klugkist et al. (2005, Psychological Methods, 10, 477-493), the Bayes factor (BF) of Hi versus Hunc can be written as

BFi, unc = fi / ci

where fi can be interpreted as a measure of model fit and ci as a measure of model complexity of Hi.

Bayes factors for informative hypo’s

Model complexity, ci:

– Can be computed before observing any data

– Determined by the number of restrictions imposed on the parameters

– The more restrictions, the lower ci

Bayes factors for informative hypo’s

Model fit, fi:

– Computed after observing the data

– Quantifies the agreement of the sample estimates with the imposed restrictions
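In practice fi is estimated as the proportion of posterior (MCMC) draws of the parameters that satisfies the constraints; this is what the testBParamCompoundConstraint call later in these slides reports. A minimal Python sketch of that computation, with made-up draw values purely for illustration:

```python
# Estimate fi for Hi1: (b1 > b2) & (b3 > b4) as the share of posterior
# draws satisfying the order constraints. These draw values are invented
# for illustration; real draws come from the saved BPARAMETERS file.
draws = [
    (0.44, 0.30, 0.41, 0.25),
    (0.40, 0.32, 0.38, 0.28),
    (0.29, 0.35, 0.42, 0.26),  # violates b1 > b2
    (0.47, 0.28, 0.44, 0.31),
]

n_ok = sum(1 for b1, b2, b3, b4 in draws if b1 > b2 and b3 > b4)
f_i1 = n_ok / len(draws)
print(f_i1)  # 0.75 (3 of the 4 draws satisfy the constraints)
```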


Bayesian Evaluation of Informative Hypotheses in SEM using Mplus

– Van de Schoot, Hoijtink, Hallquist, & Boelen (in press). Bayesian evaluation of inequality-constrained hypotheses in SEM models using Mplus. Structural Equation Modeling.

– Van de Schoot, Verhoeven, & Hoijtink (under review). Bayesian evaluation of informative hypotheses in SEM using Mplus: A black bear story.

Example: Depression


Data

(1) females with a high score on negative coping strategies (n = 1429),

(2) females with a low score on negative coping strategies (n = 1532),

(3) males with a high score on negative coping strategies (n = 1545),

(4) males with a low score on negative coping strategies (n = 1072).


Model

[Path diagram: Experienced a negative life event and Depression Time 1 predicting Depression Time 2, with standardized estimates per group.]

Expectations

“We expected that life events at Time 1 are a stronger predictor of depression at Time 2 for girls with a negative coping strategy than for girls with a less negative coping strategy, and that the same holds for boys. Moreover, we expected that this relation is stronger for girls with a negative coping style compared to boys with a negative coping style, and that the same holds for girls with a less negative coping style compared to boys with a less negative coping style.”


Expectations

Hi1 : (β1 > β2) & (β3 > β4)

Hi2 : β1 > (β2, β3) > β4



Bayes Factor

BFi vs. unc = fi / ci

BFi2 vs. i1 = BFi2 vs. unc / BFi1 vs. unc

Step-by-step


We need to obtain estimates for fi and ci.

Step 1. The first step is to formulate an inequality-constrained hypothesis.

Step 2. The second step is to compute ci. For simple order-restricted hypotheses this can be done by hand:

Step-by-step


Count the number of parameters in the inequality-constrained hypothesis:
– in our example: 4 (β1, β2, β3, β4)

Order these parameters in all possible ways:
– in our example there are 4! = 4 × 3 × 2 × 1 = 24 different ways of ordering four parameters.

Step-by-step


Count the number of possible orderings that are in line with each of the informative hypotheses:

– For Hi1: (β1 > β2) & (β3 > β4), there are 6 possibilities;

– For Hi2: β1 > (β2, β3) > β4, there are 2 possibilities.

Step-by-step


Divide the value obtained in step 3 by the value obtained in step 2:

– ci1 = 6/24 = 0.25

– ci2 = 2/24 = 0.0833

Note that Hi2 is the most specific hypothesis and receives the smallest value for complexity.
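The hand calculation of ci can be checked by brute-force enumeration of all 24 orderings; a short illustrative Python sketch (the variable names are mine):

```python
from fractions import Fraction
from itertools import permutations

# All 4! = 24 orderings of four distinct values standing in for b1..b4.
orderings = list(permutations([1, 2, 3, 4]))

# Hi1: (b1 > b2) & (b3 > b4)
n_i1 = sum(1 for b1, b2, b3, b4 in orderings if b1 > b2 and b3 > b4)

# Hi2: b1 > (b2, b3) > b4, i.e. b1 is the largest and b4 the smallest.
n_i2 = sum(1 for b1, b2, b3, b4 in orderings
           if b1 > b2 and b1 > b3 and b2 > b4 and b3 > b4)

c_i1 = Fraction(n_i1, len(orderings))
c_i2 = Fraction(n_i2, len(orderings))
print(n_i1, n_i2, c_i1, c_i2)  # 6 2 1/4 1/12
```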

Step-by-step


Step 3. Run the model in Mplus:

Mplus syntax

DATA: FILE = data.dat;

VARIABLE:
NAMES ARE lif1 depr1 depr2 groups;
MISSING ARE ALL (-9999);
KNOWNCLASS is g(group = 1 group = 2 group = 3 group = 4);
CLASSES is g(4);

Mplus syntax

ANALYSIS:
TYPE is mixture;
ESTIMATOR = Bayes;
PROCESSOR = 32;


Mplus syntax

MODEL:
%overall%
depr2 on lif1;
depr2 on depr1;
lif1 with depr1;
[lif1 depr1 depr2];  ! means
lif1 depr1 depr2;    ! variances


Mplus syntax

!save the parameter estimates for each iteration:
SAVEDATA:
BPARAMETERS are c:/Bayesian_results.dat;


Using MplusAutomation

R syntax

To install MplusAutomation:

R: install.packages(c("MplusAutomation"))
R: library(MplusAutomation)

Specify directory:
R: setwd("c:/mplus_output")


R syntax

Locate output file of Mplus: R: btest <- getSavedata_Bparams("output.out")

Compute f1:

R: testBParamCompoundConstraint(btest, "(STDYX_.G.1...DEPR2.ON.LIF_1 > STDYX_.G.2...DEPR2.ON.LIF_1) & (STDYX_.G.3...DEPR2.ON.LIF_1 > STDYX_.G.4...DEPR2.ON.LIF_1)")


R syntax

Compute f2:

R: testBParamCompoundConstraint(btest, "(STDYX_.G.1...DEPR2.ON.LIF_1 > STDYX_.G.2...DEPR2.ON.LIF_1) & (STDYX_.G.3...DEPR2.ON.LIF_1 > STDYX_.G.4...DEPR2.ON.LIF_1) & (STDYX_.G.1...DEPR2.ON.LIF_1 > STDYX_.G.3...DEPR2.ON.LIF_1) & (STDYX_.G.2...DEPR2.ON.LIF_1 > STDYX_.G.4...DEPR2.ON.LIF_1)")


Results

fi1 = .7573
ci1 = 0.25

fi2 = .5146
ci2 = 0.0833


Results

BF1 vs. unc = .7573 / .25 = 3.03

BF2 vs. unc = .5146 / .0833 = 6.18

BF2 vs. 1 = 6.18 / 3.03 = 2.04
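The Bayes factor arithmetic can be reproduced in a few lines; the fi and ci values below are the ones reported on the slides:

```python
# Fit and complexity values from the Results slides.
f_i1, c_i1 = 0.7573, 0.25
f_i2, c_i2 = 0.5146, 0.0833

bf1_unc = f_i1 / c_i1          # BF of Hi1 vs. the unconstrained Hu
bf2_unc = f_i2 / c_i2          # BF of Hi2 vs. the unconstrained Hu
bf2_vs_1 = bf2_unc / bf1_unc   # BF of Hi2 vs. Hi1

print(round(bf1_unc, 2), round(bf2_unc, 2), round(bf2_vs_1, 2))
# 3.03 6.18 2.04
```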

Conclusions

Excellent tool to include prior knowledge if available

Direct support for your expectations!

Gain in power
