TRANSCRIPT
Path Analysis with Manifest Variables
Mysterious Endogeneity
Haiyan Wang
Zach Andersen
11/18/2014
Outline of the Presentation
• Introduction to path analysis
• Review of PLS
• Model justification
• Review of the lavaan package
• User example in matrix form
• Simulation
Problem: OLS Bias (Mysterious Endogeneity)
• Nonrecursive models (causal loops): OLS results are greatly biased
• Recursive models (one direction; recall PLS path models): OLS and path analysis both work fine
• We will focus on the nonrecursive case, since no one else has so far
Solutions to Nonrecursive OLS Bias
• Methods
  – Instrumental variables
  – Implied covariance matrix
• Both work equally well; we will cover the implied covariance matrix, since this is a multivariate course
Review of PLS Path Analysis
• PLS goals
  – "uncover a common structure among blocks of variables" [2]
  – No covariance structure: does not assume a ground truth (focuses on what the data tell you)
  – Does not seek causal relationships, only relationships
• What PLS does
  – "obtain score values of latent variables for prediction purposes" [2]
• [1] From Tim and Jennifer's slides
• [2] Gaston, "Partial Least Squares with R"
Review of SEM
• SEM goal
  – "Test and estimate the (causal) relationships among observable measures and non-observable theoretical (or latent) variables" [1]
• What SEM does
  – Seeks to approximate a ground truth by fitting a covariance model to the observed covariances
• [1] Jiyoon and Kiran's SEM presentation
• [2] Gaston, "Partial Least Squares with R"
Path Analysis (with Manifest Variables)
• Goal: "determines whether your theoretical model successfully accounts for the actual relationships in the sample data" [1]
  – Like SEM, unlike PLS path analysis
• What path analysis (manifest) does
  – Fits a covariance model: seeks an approximation of the ground truth [2]
    • Like SEM, unlike PLS path analysis
  – Uses manifest variables
    • Unlike either SEM or PLS path analysis
• [1] "A Step-by-Step Approach to Using SAS for Factor Analysis and Structural Equation Modeling" by Larry Hatcher
What Does Path Analysis (Manifest) Do?
• Uses the implied covariance matrix as the link between your data and your model [3]
  – The implied covariance matrix relates the model to your data's observed variances and covariances
• The estimated parameters are those that make the observed variances and covariances match those implied by the model as closely as possible
• [3] Source: http://www.sagepub.com/upm-data/39916_Chapter2.pdf
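The covariance-matching idea can be sketched on the simplest possible model, a single regression (this toy example is ours, not from the slides): the parameter estimates are the values that make the implied covariance matrix reproduce the observed one.

```python
import numpy as np

# Toy model (hypothetical, for illustration only):
#   y = b*x + e,   Var(x) = phi,  Var(e) = psi
# Implied covariance matrix of (y, x):
#   Sigma(b, phi, psi) = [[b^2*phi + psi, b*phi],
#                         [b*phi,         phi  ]]
# Matching it to the observed covariance matrix S gives closed-form estimates.

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 100_000)
e = rng.normal(0, 1, 100_000)
y = 0.7 * x + e                      # true b = 0.7

S = np.cov(np.vstack([y, x]))        # observed 2x2 covariance matrix
phi_hat = S[1, 1]                    # matched to Var(x)
b_hat = S[0, 1] / S[1, 1]            # matched to Cov(y, x) = b*phi
psi_hat = S[0, 0] - b_hat**2 * phi_hat

print(round(b_hat, 2))               # close to the true value 0.7
```

For this just-identified toy model the matched estimate coincides with the OLS slope; the interesting cases in the slides are the ones where the match cannot be exact.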
A Simple Path Analysis with Manifest Variables Example (Nonrecursive)
[Path diagram with manifest variables: Intelligence, Motivation, Supervisory Support, Work Place Norms, Work Performance]
From "A Step-by-Step Approach to Using SAS for Factor Analysis and Structural Equation Modeling" by Larry Hatcher, Figure 4.3
How Is Path Analysis Operationalized?
• Uses only manifest variables (no latent variables)
• Allows the user to specify the effects of exogenous variables on endogenous variables (single arrows)
• Allows the user to specify covariances between antecedent variables (double arrows)
• Allows recursive (one direction) and nonrecursive (more than one direction) models
• "A Step-by-Step Approach to Using SAS for Factor Analysis and Structural Equation Modeling" by Larry Hatcher
Requirements of Path Analysis
• The causal model must have enough equations to solve for the unknown parameters
  – Otherwise there are an infinite number of solutions
• Sufficient observations: an ugly rule of thumb is 5 observations for every parameter to be estimated
• "A Step-by-Step Approach to Using SAS for Factor Analysis and Structural Equation Modeling" by Larry Hatcher
Model Justification: Definitions
• The model is written as simultaneous equations from the path diagram
  – One equation per endogenous variable
• Over-parameterized model
  – # of parameters > # of equations; no unique solution
• Just-identified model
  – # of parameters = # of equations; a unique solution exists
• Under-parameterized model
  – # of parameters < # of equations; use the weighted least squares or ML method to find solutions that make the two sides of the equations close enough
http://www.sagepub.com/upm-data/39916_Chapter2.pdf
Simple Math Example
• Model 1
  – X + Y = 2 (2 parameters, 1 equation)
  – Over-parameterized model: infinite solutions
• Model 2
  – X + Y = 2; X − Y = 10 (2 parameters, 2 equations)
  – Just-identified model: one solution
• Model 3
  – X + Y = 2; X − Y = 10; 2X + Y = 5 (2 parameters, 3 equations)
  – Under-parameterized model: can only approximate
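Model 3 can be checked numerically. A minimal sketch in Python (the presentation itself uses R; NumPy's least-squares routine stands in for the weighted least squares method mentioned above):

```python
import numpy as np

# Model 3 in matrix form: A @ [x, y] = b has no exact solution
# (3 equations, 2 parameters), so we find the least-squares approximation.
A = np.array([[1.0,  1.0],    # x  + y = 2
              [1.0, -1.0],    # x  - y = 10
              [2.0,  1.0]])   # 2x + y = 5
b = np.array([2.0, 10.0, 5.0])

sol, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(sol)                    # approximately [5.143, -4.429]
```

No pair (x, y) satisfies all three equations, but the least-squares solution makes both sides of each equation as close as possible, which is exactly the situation in the under-parameterized case.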
Simple lavaan Tutorial
Jiyoon and Kiran's SEM presentation
Our Dataset
SAS/STAT 13.1 User's Guide: the CALIS procedure
Operational Model Using lavaan
• Our model specification (dataset variables: DispInc, FoodCons, FoodCostRatio, RatioPrecYear, Year)

library(lavaan)
Q = FoodCons
P = FoodCostRatio
D = DispInc
F = RatioPrecYear
Y = Year
data.k = data.frame(Q, P, D, F, Y)
econ.mod = 'Q ~ P + D
            P ~ Q + F + Y
            Q ~~ P'
fit <- sem(econ.mod, data = data.k)
R Code
• summary() output discussion:
  – Degrees of freedom
  – Regression estimates (direct and indirect effects)
  – Variances
  – R-squared
Outline
• Chi-square likelihood ratio test
• Supply-and-demand model example (SEM)
• Simulation
When a researcher has a model in mind, he always asks himself: Is my model good enough? How can I test whether my model is good or not?

As Dr. Westfall said in the ISQS 5347 class, a model produces data. A good model should produce data that are close to the real data. This implies that we can test the null hypothesis:

Σ = Σ(λ)

The chi-square likelihood ratio test is one of the methods we can use for this test. (Dr. Westfall, ISQS 6348 class on 10/14/2014)
Chi-square Likelihood Ratio Test
• Our goal is to test the null hypothesis Σ = Σ(λ), where Σ is the observed covariance matrix (unrestricted model), λ is the vector of parameters to be estimated, and Σ(λ) is the covariance matrix implied by our model (restricted model).
Chi-square Likelihood Ratio Test
The null distribution of the test statistic can be approximated by a chi-square distribution with (df1 − df2) degrees of freedom, where df1 and df2 are the degrees of freedom of the unrestricted model Σ and the restricted model Σ(λ), respectively.
Chi-square Likelihood Ratio Test
• In other words, the number of degree freedom of the unrestricted model is the number of equations we have.
• The number of degree freedom of the restricted model is the number of parameters in our model.
23
Example: Simultaneous Equations with Mean Structures and a Reciprocal Path

The supply-and-demand food example of Kmenta (1971, pp. 565, 582):

Qt(demand) = α1 + β1·Pt + γ1·Dt + E1                 (1)
Qt(supply) = α2 + β2·Pt + γ2·Ft + γ3·Yt + E2         (2)

for t = 1, ..., 20, with Qt(demand) = Qt(supply). The model is specified by two simultaneous equations containing two endogenous variables, Q and P, and three exogenous variables, D, F, and Y.

https://support.sas.com/documentation/onlinedoc/stat/131/calis.pdf

To estimate this model, each endogenous variable must appear on the left-hand side of exactly one equation. Solving the second equation for Pt gives:

Pt = −(α2/β2) + (1/β2)·Qt − (γ2/β2)·Ft − (γ3/β2)·Yt − (1/β2)·E2

or, equivalently, reparameterized as:

Pt = θ1 + θ2·Qt + θ3·Ft + θ4·Yt + E2                 (3)

(with the disturbance −(1/β2)·E2 relabeled as E2).
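A quick numeric sanity check of this reparameterization (the parameter values below are made up purely for illustration; note that solving equation (2) for Pt makes the disturbance −E2/β2):

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration.
alpha2, beta2, gamma2, gamma3 = 0.3, -0.8, 0.5, 0.2

# Reparameterization from the slide:
theta1 = -alpha2 / beta2
theta2 = 1.0 / beta2
theta3 = -gamma2 / beta2
theta4 = -gamma3 / beta2

rng = np.random.default_rng(1)
Q, F, Y, E2 = rng.normal(size=(4, 20))   # t = 1, ..., 20

# P from the rearranged supply equation (disturbance is -E2/beta2):
P = theta1 + theta2 * Q + theta3 * F + theta4 * Y - E2 / beta2

# Substituting back must reproduce the original supply equation (2):
Q_supply = alpha2 + beta2 * P + gamma2 * F + gamma3 * Y + E2
assert np.allclose(Q, Q_supply)
```

The assertion passes for any parameter values with β2 ≠ 0, confirming that equations (2) and (3) describe the same model.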
Path Diagram of the Supply-and-Demand Model

[Path diagram: the exogenous variables Dt, Ft, and Yt point to the endogenous variables Qt and Pt, which are connected by reciprocal paths; E1 and E2 are the error terms. Path coefficients: γ1 (Dt → Qt), β1 (Pt → Qt), θ2 (Qt → Pt), θ3 (Ft → Pt), θ4 (Yt → Pt). The exogenous variances and covariances, the two error variances, and the error covariance complete the diagram.]

E1 and E2 are the error terms. We will have 8 parameters to be estimated in our restricted model: the five path coefficients, the two error variances, and the error covariance.

How many equations will we have from the unrestricted model?
Number of equations = p(p + 1)/2

where p is the number of manifest variables. (From Dr. Westfall's notes)

• In our supply-and-demand example, we have 5 manifest variables (Q, P, D, F, Y), so the number of equations is 5·6/2 = 15.
• But 6 of these equations involve variances and covariances among the exogenous manifest variables, which are not explained by any model. So the total number of equations is 9.
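The counting above can be reproduced in a few lines (a sketch; the slides do the arithmetic by hand):

```python
# Counting the moment equations for the supply-and-demand example.
p = 5                           # manifest variables: Q, P, D, F, Y
q = 3                           # exogenous manifest variables: D, F, Y

total = p * (p + 1) // 2        # distinct entries of the 5x5 covariance matrix
unexplained = q * (q + 1) // 2  # variances/covariances among D, F, Y only
usable = total - unexplained

print(total, unexplained, usable)   # 15 6 9
```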
Observed Covariance Matrix (from the Unrestricted Model)

    [ Σ_YY | Σ_YX ]
Σ = [ -----+----- ]
    [ Σ_XY | Σ_XX ]

    [ σQQ σQP | σQD σQF σQY ]
    [ σPQ σPP | σPD σPF σPY ]
  = [ --------+------------ ]
    [ σDQ σDP | σDD σDF σDY ]
    [ σFQ σFP | σFD σFF σFY ]
    [ σYQ σYP | σYD σYF σYY ]

The 6 distinct entries of the lower-right 3×3 block Σ_XX (σDD, σDF, σDY, σFF, σFY, σYY) are the 6 equations not included when counting the total number of equations.
The Mystery Behind Σ = Σ(λ)

Now you know that we have 8 parameters and 9 equations, and you have probably already figured out that this example is the under-parameterized case.

But you may be curious about what those "equations" look like, and what the mystery behind Σ = Σ(λ) is.

The general matrix representation of simultaneous equation models is:

Y = ΒY + ΓX + E
http://www.sagepub.com/upm-data/39916_Chapter2.pdf
Table 1. Notation for Simultaneous Equation Models

Vector/Matrix   Definition                                        Dimensions
Variables
  Y             Endogenous variables                              p × 1
  X             Exogenous variables                               q × 1
  E             Disturbance (error) terms                         p × 1
Coefficients
  Γ             Coefficient matrix for exogenous variables;       p × q
                direct effects of X on Y
  Β             Coefficient matrix for endogenous variables;      p × p
                direct effects of Y on Y
Covariance matrices
  Φ             Covariance matrix of X                            q × q
  Ψ             Covariance matrix of E                            p × p
Rewrite equations (1) and (3) in matrix form:

[ Qt ]   [ 0   β1 ] [ Qt ]   [ γ1  0   0  ] [ Dt ]   [ E1 ]
[ Pt ] = [ θ2  0  ] [ Pt ] + [ 0   θ3  θ4 ] [ Ft ] + [ E2 ]
                                            [ Yt ]

   Y   =     Β        Y    +       Γ           X   +    E
Some Fun Math

Y = ΒY + ΓX + E

Step 1: (I − Β)Y = ΓX + E

Step 2: Let C = I − Β; then CY = ΓX + E

Step 3: C⁻¹CY = C⁻¹ΓX + C⁻¹E, so

        Y = C⁻¹ΓX + C⁻¹E   (reduced form)

Step 4: Stack Y and X:

        [ Y ]   [ C⁻¹Γ  C⁻¹ ] [ X ]
        [ X ] = [ I     0   ] [ E ]

Step 5: Writing M for the block matrix above,

        Σ(λ) = Cov([Y; X]) = M · Cov([X; E]) · M′

(Looks familiar?)
Cov([Y; X]) = [ C⁻¹Γ  C⁻¹ ]   [ Σxx  0   ]   [ C⁻¹Γ  C⁻¹ ]′
              [ I     0   ] · [ 0    ΣEE ] · [ I     0   ]

            = [ C⁻¹ΓΣxx  C⁻¹ΣEE ]   [ Γ′C⁻¹′  I ]
              [ Σxx      0      ] · [ C⁻¹′    0 ]

            = [ ① C⁻¹ΓΣxxΓ′C⁻¹′ + C⁻¹ΣEEC⁻¹′    ② C⁻¹ΓΣxx ]
              [ ③ ΣxxΓ′C⁻¹′                      ④ Σxx     ]
Structures Behind the Fitted Covariance Matrix

Cov([Y; X]) = [ ① C⁻¹ΓΣxxΓ′C⁻¹′ + C⁻¹ΣEEC⁻¹′ (2×2)    ② C⁻¹ΓΣxx (2×3) ]
              [ ③ ΣxxΓ′C⁻¹′ (3×2)                      ④ Σxx (3×3)     ]

            = [ σQQ σQP | σQD σQF σQY ]
              [ σPQ σPP | σPD σPF σPY ]
              [ --------+------------ ]
              [ σDQ σDP | σDD σDF σDY ]
              [ σFQ σFP | σFD σFF σFY ]
              [ σYQ σYP | σYD σYF σYY ]

There are three equations hidden in ① (a symmetric 2×2 block) and six equations hidden in ② (a 2×3 block).
For instance:

C⁻¹ΓΣxxΓ′C⁻¹′ + C⁻¹ΣEEC⁻¹′ (2×2) = [ B11  B12 ]
                                    [ B21  B22 ]

B11 = (γ1²σDD + γ1β1θ3σFD + γ1β1θ4σYD + γ1β1θ3σDF + β1²θ3²σFF + β1²θ3θ4σYF
      + γ1β1θ4σDY + β1²θ3θ4σFY + β1²θ4²σYY
      + σE1E1 + β1σE2E1 + β1σE1E2 + β1²σE2E2) / (1 − β1θ2)² = σQQ      (Equation 1)

Note: θ3 = −γ2/β2, θ4 = −γ3/β2, θ2 = 1/β2.

If you are interested in the other 8 equations, they are on my scratch paper; I would be happy to share them after class.
Degrees of freedom = (# of equations) − (# of parameters)

In the supply-and-demand example, the degrees of freedom are 9 − 8 = 1.

Let's use the R output to check whether we calculated the df correctly.
Results: R lavaan Package

Results: R lavaan Package (continued)

• Check Σ(λ) = Σ
• In this supply-and-demand model example, since we don't know the true model, it seems hard to say that our estimated model is the best model.
• What is the best way to check? Simulation!

Simulate a Simple Model to Show the Mystery Behind Σ = Σ(λ)
True model: y2 = 1.0∗y1 + e2
y1 = 0.5∗y2 + 1.0∗x+ e1
http://courses.ttu.edu/isqs6348-westfall/images/6348/simeqnbias.htm
Path Diagram of the True Model

[Path diagram: x → y1 with coefficient 1.0, y2 → y1 with coefficient 0.5, and y1 → y2 with coefficient 1.0; e1 and e2 are the error terms on y1 and y2. The three path coefficients and the two error variances are the free parameters.]

In this simulation example we have 5 parameters to be estimated.
Rewrite the simultaneous equations in matrix form:

Y = ΒY + ΓX + E

[ y1 ]   [ 0    0.5 ] [ y1 ]   [ 1 ]       [ e1 ]
[ y2 ] = [ 1.0  0   ] [ y2 ] + [ 0 ]·x  +  [ e2 ]
Y = C⁻¹ΓX + C⁻¹E   (reduced form)

[ y1 ]   [ 2 ]       [ 2  1 ] [ e1 ]
[ y2 ] = [ 2 ]·x  +  [ 2  2 ] [ e2 ]

y1 = 2.0·x + 2.0·e1 + e2
y2 = 2.0·x + 2.0·e1 + 2.0·e2

We will use the reduced form to simulate our data (see the R code).
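The reduced-form coefficients above can be verified numerically (a sketch in Python; the slides use R):

```python
import numpy as np

# True model:  y1 = 0.5*y2 + 1.0*x + e1,   y2 = 1.0*y1 + e2
B = np.array([[0.0, 0.5],
              [1.0, 0.0]])            # direct effects of Y on Y
Gamma = np.array([[1.0],
                  [0.0]])             # direct effect of x on Y

C_inv = np.linalg.inv(np.eye(2) - B)  # C = I - B

print(C_inv)           # [[2, 1], [2, 2]]
print(C_inv @ Gamma)   # [[2], [2]]
```

These are exactly the coefficient matrices of x and (e1, e2) in the reduced-form equations.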
R code for the simulation:

## Simulate x and the residuals
e1 = rnorm(10000, 0, 1)
e2 = rnorm(10000, 0, 1)
x  = rnorm(10000, 0, 1)

## Use the simulated x and residuals in the reduced-form model
## to generate y1 and y2
y1 = 2*x + 2*e1 + e2
y2 = 2*x + 2*e1 + 2*e2
Σ(λ) = Cov([Y; X]) = M · Cov([X; E]) · M′,  where M = [ C⁻¹Γ  C⁻¹ ]
                                                      [ I     0   ]

     = [ ① C⁻¹ΓΣxxΓ′C⁻¹′ + C⁻¹ΣEEC⁻¹′ (2×2)    ② C⁻¹ΓΣxx (2×1) ]
       [ ③ ΣxxΓ′C⁻¹′ (1×2)                      ④ Σxx (1×1)     ]

     = [ Σ_YY | Σ_YX ]
       [ Σ_XY | Σ_XX ]

     = [ σy1y1 σy1y2 | σy1x ]
       [ σy2y1 σy2y2 | σy2x ] = Σ
       [ σxy1  σxy2  | σxx  ]

(Notice that df = 5 − 5 = 0: the model is just-identified.)
With C⁻¹ = [ 2  1 ],  Γ = [ 1 ],  σxx = 1,  and ΣEE = [ σe1e1  σe1e2 ] = [ 1  0 ]
           [ 2  2 ]       [ 0 ]                       [ σe2e1  σe2e2 ]   [ 0  1 ]

(σxx = 1 and ΣEE = I follow from the simulation assumptions), we get:

C⁻¹ΓΣxxΓ′C⁻¹′ + C⁻¹ΣEEC⁻¹′ (2×2) = [ 4  4 ] + [ 5  6 ] = [ 9   10 ]
                                    [ 4  4 ]   [ 6  8 ]   [ 10  12 ]

C⁻¹ΓΣxx (2×1) = [ 2 ]
                [ 2 ]
Σ(λ) = Cov([Y; X]) = [ ① C⁻¹ΓΣxxΓ′C⁻¹′ + C⁻¹ΣEEC⁻¹′ (2×2)    ② C⁻¹ΓΣxx (2×1) ]
                     [ ③ ΣxxΓ′C⁻¹′ (1×2)                      ④ Σxx (1×1)     ]

     = [ 9   10 | 2 ]   [ σy1y1 σy1y2 | σy1x ]
       [ 10  12 | 2 ] = [ σy2y1 σy2y2 | σy2x ] = Σ   (Is this true?)
       [ 2   2  | 1 ]   [ σxy1  σxy2  | σxx  ]
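Whether the implied matrix really equals [[9, 10, 2], [10, 12, 2], [2, 2, 1]], and whether a large simulation reproduces it, can be checked directly (a Python sketch mirroring the R simulation; the sample size and seed are arbitrary):

```python
import numpy as np

# Build the implied covariance matrix of (y1, y2, x) block by block.
B = np.array([[0.0, 0.5], [1.0, 0.0]])
Gamma = np.array([[1.0], [0.0]])
C_inv = np.linalg.inv(np.eye(2) - B)          # [[2, 1], [2, 2]]

Sxx = np.array([[1.0]])                       # Var(x)
See = np.eye(2)                               # Cov of (e1, e2)

Syy = C_inv @ Gamma @ Sxx @ Gamma.T @ C_inv.T + C_inv @ See @ C_inv.T
Syx = C_inv @ Gamma @ Sxx
implied = np.block([[Syy, Syx], [Syx.T, Sxx]])
print(implied)        # [[9, 10, 2], [10, 12, 2], [2, 2, 1]]

# Simulation check via the reduced form:
rng = np.random.default_rng(2)
e1, e2, x = rng.normal(size=(3, 100_000))
y1 = 2*x + 2*e1 + e2
y2 = 2*x + 2*e1 + 2*e2
observed = np.cov(np.vstack([y1, y2, x]))
print(np.round(observed, 1))   # close to the implied matrix
```

As the number of simulated observations grows, the observed covariance matrix converges to the implied one, which is the Σ = Σ(λ) claim made concrete.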
R Output: Observed Covariance Matrix (Σ)

Fitted covariance matrix

Observed covariance matrix
Under the null hypothesis Σ(λ) = Σ, the difference can be explained by chance alone, and increasing the number of simulated observations will make the difference smaller and smaller (by the Law of Large Numbers).

However, one important point in this simulation example is that df = 0, which means we cannot use the chi-square test to test the model. When the model is just-identified, even if it is wrong, there is no way to test it.

(Dr. Westfall)
Comparing OLS vs. SEM

Table 2. Results comparison of OLS, SEM, and the true model (standard errors in parentheses)

Dependent variable   OLS                       SEM                       True
y2 =                 1.112·y1                  0.997·y1                  1.0·y1
                     (0.003)                   (0.005)
y1 =                 0.746·y2 + 0.512·x        0.507·y2 + 0.990·x        0.5·y2 + 1.0·x
                     (0.002)   (0.009)         (0.007)   (0.017)

OLS is widely regarded as giving biased estimates for simultaneous equations with an endogeneity problem. The comparison above clearly shows why the OLS estimates are biased and why SEM is the better model.
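The OLS bias in Table 2 can be reproduced without simulation noise by computing the population OLS coefficients directly from the implied covariance matrix (a sketch; this check is ours, not from the slides):

```python
import numpy as np

# Implied covariance matrix of (y1, y2, x) for the true model.
Sigma = np.array([[ 9.0, 10.0, 2.0],
                  [10.0, 12.0, 2.0],
                  [ 2.0,  2.0, 1.0]])

# Population OLS of y1 on (y2, x): solve Cov([y2, x]) @ beta = Cov([y2, x], y1)
beta = np.linalg.solve(Sigma[1:, 1:], Sigma[1:, 0])
print(beta)   # approximately [0.75, 0.5] -- biased (true values: 0.5 and 1.0)

# Population OLS of y2 on y1:
print(Sigma[0, 1] / Sigma[0, 0])   # 10/9, about 1.111 -- biased (true: 1.0)
```

These population values match the simulated OLS estimates in Table 2 (0.746, 0.512, and 1.112) up to sampling error, confirming that the bias comes from the endogeneity built into the model, not from the particular simulated sample.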
You may also be curious about what would happen if we had more parameters than equations.