Introduction to Intensive Longitudinal Methods

Larry R. Price, Ph.D.
Director, Interdisciplinary Initiative for Research Design & Analysis
Professor – Psychometrics & Statistics

Upload: trygg
Post on 23-Jan-2016

TRANSCRIPT

Page 1: Introduction to  Intensive Longitudinal Methods

Introduction to Intensive Longitudinal Methods

Larry R. Price, Ph.D.
Director, Interdisciplinary Initiative for Research Design & Analysis
Professor – Psychometrics & Statistics

Page 2: Introduction to  Intensive Longitudinal Methods

What are intensive longitudinal methods?

Methods used in natural settings with many repeated data captures (measurements) over time within a person.

For example: daily diaries, interaction records, ecological momentary assessments, and real-time data capture.

The term intensive longitudinal methods is an umbrella term that includes the above types of data structures.

Page 3: Introduction to  Intensive Longitudinal Methods

Areas of notable growth in the use of the methods

Interpersonal processes in dyads and families.

For example, an examination of the link between daily pleasant and unpleasant behaviors (over 14 days) and global ratings of marital satisfaction.

The study of dyads and family processes poses unique data-analytic challenges.

Page 4: Introduction to  Intensive Longitudinal Methods

Why use intensive longitudinal methods?

They can be used to quantitatively study thoughts, feelings, physiology, and behavior in their natural, spontaneous contexts or settings.

The resulting data show an unfolding temporal process (a) descriptively and (b) as a causal process.

For example, it is possible to show how an outcome Y changes over time and how this change is contingent on changes in a causal variable X.

Page 5: Introduction to  Intensive Longitudinal Methods

Differences from traditional repeated-measures analysis

Traditional designs are usually limited to a few repeated measurements taken over long time intervals.

We often use dynamic factor analysis models to study complex change over time in that scenario.

Intensive longitudinal methods offer a framework for analyzing intrapersonal change (i.e., within-person process) with extensive or rich outcome data.

Page 6: Introduction to  Intensive Longitudinal Methods

Advantages of use

Measurement may be continuous or discrete, based on a response to an experimental or observational condition captured over many time points in a short total duration.

Important for studying intraindividual and interindividual change within a unified model.

Page 7: Introduction to  Intensive Longitudinal Methods

Measuring a process over time

For example, Time 4 is regressed on Time 3, Time 3 on Time 2, and Time 2 on Time 1. This yields an autoregressive structure, or time series vector. In intensive longitudinal methods, these time series vectors are nested within individual persons, yielding a hierarchical structure – hence the need for a random-coefficients mixed-model approach (e.g., using SPSS MIXED, SAS PROC MIXED, or the HLM program).
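The nesting described above can be sketched with a small simulation (an illustrative Python example, not from the original slides; all names and values here are hypothetical): each person receives their own AR(1) coefficient, which is exactly the kind of person-level random coefficient a mixed model would estimate.

```python
import random

def simulate_panel(n_people=5, n_times=50, mean_phi=0.5, sd_phi=0.1, seed=1):
    """Simulate AR(1) time series nested within persons.

    Each person gets their own autoregressive coefficient phi_i
    (a "random coefficient"), so occasions are nested within persons,
    forming the hierarchical structure described on the slide.
    """
    rng = random.Random(seed)
    panel = {}
    for person in range(n_people):
        phi = rng.gauss(mean_phi, sd_phi)   # person-specific AR coefficient
        y = [rng.gauss(0, 1)]
        for t in range(1, n_times):
            # time t regressed on time t-1: the autoregressive structure
            y.append(phi * y[t - 1] + rng.gauss(0, 1))
        panel[person] = y
    return panel

panel = simulate_panel()
```

Fitting such data with a between-subjects analysis would ignore the within-person dependence that this simulation builds in.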

Page 8: Introduction to  Intensive Longitudinal Methods

Advantages of use

Because a variable X measured at one point in time may have its maximal causal effect on Y at some later time point, the precise temporal design of a longitudinal study can greatly influence the observed effects.

We want to avoid using a between-subjects analysis to analyze this type of data.

Page 9: Introduction to  Intensive Longitudinal Methods

Example – fMRI data structure

Small Sample Properties of Bayesian Multivariate Autoregressive

Time Series Models

(Structural Equation Modeling Journal, 2012)

Page 10: Introduction to  Intensive Longitudinal Methods

Relationship of measurement model to autoregressive model

Page 11: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive (MAR) time series model

A MAR model predicts the next value in a d-dimensional time series as a linear function of the p previous vector values of the time series.

The MAR model is based on the Wishart distribution, where V is a p × p symmetric, positive definite matrix of random variables.
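A minimal sketch of that definition in Python (the function name, the coefficient matrix A, and the numbers below are made up for illustration): the next d-dimensional value is a weighted sum of the p previous vectors.

```python
def mar_predict(history, A):
    """One-step MAR prediction: the next value of a d-dimensional
    series is a linear function of the p previous vector values.

    history : list of at least p previous d-vectors, most recent last
    A       : list of p coefficient matrices (each d x d); A[0] applies
              to lag 1 (the most recent vector), A[1] to lag 2, etc.
    """
    d = len(history[-1])
    pred = [0.0] * d
    for lag, mat in enumerate(A, start=1):
        x = history[-lag]                       # vector at lag `lag`
        for i in range(d):
            pred[i] += sum(mat[i][j] * x[j] for j in range(d))
    return pred

# d = 2, p = 1: autoregressions of .8 and .7 plus a .1 cross-lag
A = [[[0.8, 0.1],
      [0.1, 0.7]]]
next_value = mar_predict([[1.0, 2.0]], A)   # 0.8*1 + 0.1*2, 0.1*1 + 0.7*2
```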

Page 12: Introduction to  Intensive Longitudinal Methods

Multivariate Vector Autoregressive Model

Blue = contemporaneous

Brown = cross-covariances

Red = autoregression of time (t) on t-1

Page 13: Introduction to  Intensive Longitudinal Methods

Goals of the present study

Bayesian MAR model formulation capturing contemporaneous and temporal components.

Evaluation of the effect of variations of the autoregressive and cross-lagged components of a multivariate time series across sample size conditions where N is smaller than T (i.e., N ≤ 15 and T = 25–125).

Page 14: Introduction to  Intensive Longitudinal Methods

Goals of the present study

To examine the impact of sample size and time vector length within the sampling theory framework for statistical power and parameter estimation bias.

To illustrate an analytic framework that combines Bayesian statistical inference with sampling theory (frequentist) inference.

Page 15: Introduction to  Intensive Longitudinal Methods

Goals of the present study

Compare and relate Bayesian credible interval estimation based on the Highest Posterior Density (HPD) to frequentist power estimates and Type I error.

Page 16: Introduction to  Intensive Longitudinal Methods

Research Challenges

Sample size and vector length determination for statistically reliable and valid results – Bayesian and frequentist considerations.

Modeling the structure of the multivariate time series contemporaneously and temporally.

Page 17: Introduction to  Intensive Longitudinal Methods

Research Challenges

Examining the impact of autocorrelation, error variance, and cross-correlation/covariance on multivariate models in light of variations in sample size and time series length.

Page 18: Introduction to  Intensive Longitudinal Methods

Introduction to Bayesian probability & inference

The Bayesian approach prescribes how learning from data and decision making should be carried out; the classical school does not.

Prior information, via distributional assumptions about variables and their measurement, is considered through careful conceptualization of the research design, model, and analysis.

Page 19: Introduction to  Intensive Longitudinal Methods

Introduction to Bayesian probability & inference

Stages of model development:

1. Data Model: [data|process, parameters] – specifies the distribution of the data given the process.

2. Process Model: [process|parameters] – describes the process, conditional on other parameters.

3. Parameter Model: [parameters] – accounts for uncertainty in the parameters.

Page 20: Introduction to  Intensive Longitudinal Methods

Introduction to Bayesian probability & inference

Through the likelihood function, the actual observations modify prior probabilities for the parameters.

So, given the prior distribution of the parameters and the likelihood function of the parameters given the observations, the posterior distribution of the parameters given the observations is determined.

The Bayesian posterior distribution is the full inferential solution to the research problem.

Page 21: Introduction to  Intensive Longitudinal Methods

Introduction to Bayesian probability & inference

[Figure: prior density (solid curve), likelihood (dashed curve), and posterior density (dot-dashed curve) plotted against 𝜃 (x-axis 0–.7, y-axis 0–8).]

The dashed line represents the likelihood, with 𝜃 being at its maximum at approximately .22, given the observed frequency distribution of the data. Applying Bayes theorem involves multiplying the prior density (solid curve) times the likelihood (dashed curve).

If either of these two values is near zero, the resulting posterior density will also be negligible (i.e., near zero – for example, for 𝜃 < .2 or 𝜃 > .6). Finally, the posterior density (the dot-dashed line) is more informative than either the prior or the likelihood alone.
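The prior-times-likelihood mechanics can be reproduced with a simple grid approximation (an illustrative Python sketch; the Beta-shaped prior and the 22-successes-out-of-100 data are assumptions chosen to put the likelihood maximum near .22, not the actual numbers behind the slide's figure).

```python
# Grid approximation of Bayes' theorem for a proportion theta.
grid = [i / 100 for i in range(1, 100)]

def prior(theta):
    """Beta(2, 5)-shaped prior density (unnormalized)."""
    return theta * (1 - theta) ** 4

def likelihood(theta, successes=22, n=100):
    """Binomial likelihood kernel, maximized near theta = .22."""
    return theta ** successes * (1 - theta) ** (n - successes)

# Posterior is proportional to prior times likelihood, then normalized.
unnorm = [prior(t) * likelihood(t) for t in grid]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Where either factor is near zero, the posterior is negligible too.
mode = grid[max(range(len(grid)), key=lambda i: posterior[i])]
```

The posterior mode lands between the prior's peak and the likelihood's peak, and is more concentrated than either alone.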

Page 22: Introduction to  Intensive Longitudinal Methods

Bayesian vs. Frequentist probability

Frequentist (a.k.a. long-run frequency)

– Estimates the probability of the data (D) under a point null hypothesis (H0), p(D|H0), or

– A large-sample theory with adjustments for small and non-normal samples (e.g., t-distribution and χ2 tests/rank tests).

Bayesian (a.k.a. conditional or subjective)

– Evaluates the probability of the hypothesis (not always only the point) given the observed data, p(H|D), and the probability that a parameter falls within a specific credible interval.

– Used since the 1950s in computer science, artificial intelligence, economics, and medicine.

Page 23: Introduction to  Intensive Longitudinal Methods

Application of Bayesian modeling and inference

Data obtained from the real world may

– Be sparse
– Exhibit multidimensionality
– Include unobservable variables

Example application areas: bioinformatics, IRT & CAT.

Page 24: Introduction to  Intensive Longitudinal Methods

The 2-level hierarchical voxel-based fMRI model

1 voxel = 3 × 3 × 3 mm

Page 25: Introduction to  Intensive Longitudinal Methods

Hierarchical Bayesian SEM in functional neuroimaging (fMRI)

Page 26: Introduction to  Intensive Longitudinal Methods

Bayesian probability & inference

Advantages

– Data that are less precise (i.e., less reliable) will have less influence on the subsequent plausibility of a hypothesis.

– The impact of initial differences in the perceived plausibility of a hypothesis tends to become less important as results accumulate (e.g., refinement of posterior estimates via MCMC algorithms and Gibbs sampling).

Page 27: Introduction to  Intensive Longitudinal Methods

Bayesian probability & inference

Advantages

– Sample size is not an issue due to MCMC algorithms.

– Level of measurement is not problematic:

Interval estimates use the cumulative normal distribution function (CDF).

Nominal/dichotomous and ordinal measures use the probit, logistic, or log-log link function to map to a CDF.

Uses the prior probability of each category given little/no information about an item.

Categorization produces a posterior probability distribution over the possible categories given a description of an item.

Bayes' theorem plays a critical role in probabilistic learning and classification.

Page 28: Introduction to  Intensive Longitudinal Methods

Example: parameters & observations

Recall that the goal of parametric statistical inference is to make statements about unknown parameters that are not directly observable, from observable random variables whose behavior is influenced by these unknown parameters.

The model of the data generating process specifies the relationship between the parameters and the observations.

If x represents the vector of n observations, and 𝜣 represents a vector of k parameters 𝜃1, 𝜃2, 𝜃3, … 𝜃k on which the distribution of the observations depends…

Page 29: Introduction to  Intensive Longitudinal Methods

Example: parameters & observations

Then, inserting 𝜣 in place of y yields

p(𝜣|x) = p(x|𝜣) p(𝜣) / p(x)

So, we are interested in making statements about the parameters (𝜣) given a particular set of observations x.

p(x) serves as a constant that allows p(𝜣|x) to sum or integrate to 1.

Bayes' theorem is also given as

p(𝜣|x) ∝ p(x|𝜣) p(𝜣), where ∝ = “proportional to”.

Page 30: Introduction to  Intensive Longitudinal Methods

Example: parameters & observations

In the previous slide, p(𝜣) serves as “a distribution of belief” in Bayesian analysis and is the joint distribution of the parameters prior to the observations being applied.

p(x|𝜣) is the joint distribution of the observations conditional on values of the parameters.

p(𝜣|x) is the joint distribution of the parameters posterior to the observations becoming available.

So, once the data are obtained, p(x|𝜣) is viewed as a function of 𝜣 and yields the likelihood function for 𝜣 given x, i.e., L(𝜣|x).

Page 31: Introduction to  Intensive Longitudinal Methods

Example: parameters & observations

Finally, it is through the likelihood function that the actual observations modify prior probabilities for the parameters.

So, given the prior distribution of the parameters and the likelihood function of the parameters given the observations, the posterior distribution of the parameters given the observations is determined.

The posterior distribution is the full inferential solution to the research problem.

Page 32: Introduction to  Intensive Longitudinal Methods

Univariate unrestricted VAR time series model

A time series process for each variable contained in the vector y is

y(t) = A(L) y(t) + X(t)β + u(t);  E[u(t) u′(t)] = Σ,  t = 1, …, T.

Page 33: Introduction to  Intensive Longitudinal Methods

Univariate unrestricted VAR time series model

where,
y(t) = n × 1 stationary vector of variables observed at time t;
A(L) = n × n matrix of polynomials in the lag or backward shift operator L;
X(t) = n × nk block diagonal matrix of observations on k observed variables;
β = nk × 1 vector of coefficients on the observed variables;
u(t) = n × 1 vector of stochastic disturbances;
Σ = n × n contemporaneous covariance matrix.

Page 34: Introduction to  Intensive Longitudinal Methods

Univariate unrestricted VAR time series model

Also,

– The coefficient on L0 is zero for all elements of A(L) (i.e., only the lagged elements of y appear on the right side of the equation).

– The X(t) matrix is equal to y′(t) ⊗ I_n, where y(t) is the k × 1 vector of observations on the k variables related to each equation and ⊗ is the Kronecker product.

Page 35: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

A MAR model predicts the next value in a d-dimensional time series as a linear function of the p previous vector values of the time series.

The MAR model is based on the Wishart distribution, where V is a p × p symmetric, positive definite matrix of random variables.

Page 36: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

The MAR model divides each time series into two additive components: (a) the predictable portion of the time series and (b) the prediction error (i.e., a white noise error sequence).

The error sequence is Gaussian with zero mean and precision (inverse covariance) matrix Λ.

Page 37: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

The model with N variables can be expressed in matrix format, element-wise, as

y_j(n) = Σ_{i=1}^{M} [a′_j1(i) y_1(n−i) + … + a′_jN(i) y_N(n−i)] + e_j(n),  j = 1, …, N

or, compactly, as

y(n) = Σ_{i=1}^{M} a′(i) x(n−i) + e(n).

Page 38: Introduction to  Intensive Longitudinal Methods

Multivariate Vector Autoregressive Time Series Model

The multivariate prediction error filter is expressed as

e(n) = Σ_{i=0}^{M} â′(i) x(n−i)

where â(0) = I and â(i) = −a(i).

Page 39: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

The model can also be written as a standard multivariate linear regression model as

y_n = x_n W + e_n

where x_n = [y_{n−1}, y_{n−2}, …, y_{n−p}] are the p previous multivariate time series samples and W is a (p × d)-by-d matrix of MAR coefficients (weights).

There are therefore k = p × d × d MAR coefficients.

Page 40: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

If the nth rows of Y, X, and E are y_n, x_n, and e_n respectively, then for the n = 1, …, N samples the equation is

Y = XW + E

where Y is an N-by-d matrix, X is an N-by-(p × d) matrix, and E is an N-by-d matrix.
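Assembling X and Y from a raw series can be sketched as follows (an illustrative Python helper; the name build_design and the toy series are assumptions, not part of the slides): row n of X concatenates the p previous d-vectors, and row n of Y is the current d-vector.

```python
def build_design(series, p):
    """Stack a d-dimensional time series into regression form Y = XW + E.

    series : list of T d-vectors
    p      : number of lags
    Returns (X, Y): row n of X is [y_{n-1}, ..., y_{n-p}] concatenated
    into one 1-by-(p*d) row, and row n of Y is y_n itself.
    """
    X, Y = [], []
    for n in range(p, len(series)):
        row = []
        for lag in range(1, p + 1):
            row.extend(series[n - lag])     # most recent lag first
        X.append(row)
        Y.append(list(series[n]))
    return X, Y

# d = 2, T = 4, p = 2, so N = T - p = 2 usable rows
series = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
X, Y = build_design(series, p=2)
```

With this layout, X is N-by-(p × d) and Y is N-by-d, matching the dimensions stated above.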

Page 41: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

For the multivariate linear regression model and the data set D = {X, Y}, the likelihood of the data is given as

p(D|W, Λ) = (2π)^(−Nd/2) |Λ|^(N/2) exp(−½ Tr[Λ E_D(W)])

where |·| is the determinant, Tr(·) the trace, and

E_D(W) = (Y − XW)ᵀ(Y − XW)

is the error matrix.

Page 42: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

To facilitate matrix operations, the notation is given as

w = vec(W)

where vec denotes the columns being stacked on top of each other.

To recover the matrix W, columns are “unstacked” from the vector w.

This matrix transformation is a standard method for implicitly defining a probability density over a matrix.
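A quick sketch of the vec operation and its inverse (illustrative Python; column-major stacking as the slide describes, with hypothetical helper names):

```python
def vec(W):
    """Stack the columns of W on top of each other into one vector."""
    rows, cols = len(W), len(W[0])
    return [W[i][j] for j in range(cols) for i in range(rows)]

def unvec(w, rows):
    """Recover the matrix by "unstacking" columns from the vector w."""
    cols = len(w) // rows
    return [[w[j * rows + i] for j in range(cols)] for i in range(rows)]

W = [[1, 2],
     [3, 4]]
v = vec(W)             # columns stacked: first [1, 3], then [2, 4]
W2 = unvec(v, rows=2)  # round trip recovers W
```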

Page 43: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

The maximum likelihood (ML) solution for the MAR coefficients is

W_ML = (XᵀX)^(−1) XᵀY

and the ML error covariance, S_ML, is estimated as

S_ML = (1/(N − k)) (Y − XW_ML)ᵀ(Y − XW_ML) = (1/(N − k)) E_D(W_ML)

where k = p × d × d.
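The W_ML formula can be verified numerically in a tiny noise-free case (an illustrative Python sketch; the helper names and the toy X and W are assumptions, and inv2 only handles the 2 × 2 case needed here): with E = 0, the ML solution should recover W exactly.

```python
def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Inverse of a 2 x 2 matrix (enough for this toy p*d = 2 design)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mar_ml(X, Y):
    """W_ML = (X'X)^(-1) X'Y, the ML solution stated on the slide."""
    Xt = transpose(X)
    return matmul(matmul(inv2(matmul(Xt, X)), Xt), Y)

# Noise-free data: Y = XW exactly, so mar_ml should return W.
X = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
W = [[0.5, 0.0], [0.0, 0.25]]
Y = matmul(X, W)
W_ml = mar_ml(X, Y)
```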

Page 44: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

Again, to facilitate matrix operations, the notation is given as

w = vec(W)

where vec denotes the columns of W being concatenated.

This matrix transformation is a standard method for implicitly defining a probability density over a matrix.

Page 45: Introduction to  Intensive Longitudinal Methods

Multivariate vector autoregressive time series model

Using the vec notation, we define

w_ML = vec(W_ML)

Finally, the ML parameter covariance matrix for w_ML is given as

Σ(w_ML) = S_ML ⊗ (XᵀX)^(−1)

where ⊗ denotes the Kronecker product (Anderson, 2003; Box & Tiao, 1973).

Page 46: Introduction to  Intensive Longitudinal Methods

The Population Model: Bayesian SEM

Unknown quantities (e.g., parameters) are viewed as random.

These quantities are assigned a probability distribution (e.g., normal, Poisson, multinomial, beta, gamma, etc.) that details a generating process for a particular set of data.

In this study, our unknown population parameters were modeled as being random and then assigned a joint probability distribution.

Page 47: Introduction to  Intensive Longitudinal Methods

The Population Model: Bayesian SEM

The sampling-based approach to Bayesian estimation provides a solution for the random parameter vector by estimating the posterior density of a parameter.

This posterior distribution is defined as the product of the likelihood function and the prior density, updated via Gibbs sampling and evaluated by MCMC methods.
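As a concrete toy illustration of Gibbs sampling (a hypothetical two-parameter sketch, not the study's actual estimation code), one can alternate draws from each parameter's full conditional to approximate the joint posterior of a normal mean and precision:

```python
import random

def gibbs_normal(data, iters=2000, burn=500, seed=7):
    """Toy Gibbs sampler for the mean (mu) and precision (tau) of
    normal data under flat/Jeffreys-style priors: alternately draw
    each parameter from its full conditional, discard burn-in draws,
    and summarize the retained draws.
    """
    rng = random.Random(seed)
    n = len(data)
    mean_y = sum(data) / n
    mu, tau = mean_y, 1.0
    mu_draws = []
    for it in range(iters):
        # mu | tau, data ~ Normal(mean_y, 1 / (n * tau))
        mu = rng.gauss(mean_y, (1.0 / (n * tau)) ** 0.5)
        # tau | mu, data ~ Gamma(shape = n/2, scale = 2 / sum((y - mu)^2))
        rate = sum((y - mu) ** 2 for y in data) / 2.0
        tau = rng.gammavariate(n / 2.0, 1.0 / rate)
        if it >= burn:
            mu_draws.append(mu)
    return sum(mu_draws) / len(mu_draws)    # posterior mean of mu

data = [4.9, 5.1, 5.0, 4.8, 5.2, 5.1, 4.7, 5.3, 5.0, 4.9]
post_mu = gibbs_normal(data)
```

The retained draws form a Markov chain whose stationary distribution is the joint posterior; their average estimates the posterior mean.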

Page 48: Introduction to  Intensive Longitudinal Methods

The Population Model : The Population Model : Bayesian SEMBayesian SEM

The present study combines Bayesian and frequentist sampling theory: a "dual approach" (Mood & Graybill, 1963; Box & Tiao, 1973).

In the dual approach, Bayesian posterior information is used to suggest functions of the data for use as estimators.

Inferences are then made by considering the sampling properties of the Bayes estimators.

Page 49: Introduction to  Intensive Longitudinal Methods

The Population Model: Bayesian SEM

The goal is to examine the congruency between the inferences obtained from a Monte Carlo study based on the population Bayes estimators and the full Bayesian solution, relative to sample size and time vector length.

Page 50: Introduction to  Intensive Longitudinal Methods

Data generation & methods

Five multivariate autoregressive (AR1) series vectors of varying lengths (5 conditions) and sample sizes (5 conditions) were generated using SAS v.9.2.

Autoregressive coefficients of .80, .70, .65, .50, and .40.

Error variance components of .20, .20, .10, .15, and .15.

Cross-lagged coefficients of .10, .10, .15, .10, and .10.
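A lag-1 multivariate series with these ingredients can be sketched as follows. The autoregressive coefficients and error variances are the slide's values; the placement of the cross-lagged coefficients in the coefficient matrix is an assumption made for illustration, and the function name is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Diagonal: autoregressive coefficients for series A-E (from the slides).
ar = np.array([0.80, 0.70, 0.65, 0.50, 0.40])
# Error variances stated on the slide.
err_var = np.array([0.20, 0.20, 0.10, 0.15, 0.15])

# Illustrative lag-1 coefficient matrix: AR terms on the diagonal and one
# cross-lagged coefficient per series on the first sub-diagonal.  The
# slides give cross-lags of .10/.10/.15/.10/.10 but not their placement,
# so this wiring is an assumption.
Phi = np.diag(ar)
cross = [0.10, 0.10, 0.15, 0.10, 0.10]
for i in range(1, 5):
    Phi[i, i - 1] = cross[i]

def simulate_var1(Phi, err_var, T, rng):
    """Generate one multivariate lag-1 (VAR(1)) series of length T."""
    k = Phi.shape[0]
    y = np.zeros((T, k))
    for t in range(1, T):
        y[t] = Phi @ y[t - 1] + rng.normal(0.0, np.sqrt(err_var))
    return y

y = simulate_var1(Phi, err_var, T=100, rng=rng)
```

Because every autoregressive coefficient is below 1 and the cross-lags are small, the generated series are stationary.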

Page 51: Introduction to  Intensive Longitudinal Methods

Bayesian Vector Autoregressive Model

[Path diagram: blue = contemporaneous; brown = cross-covariances; red = autoregression of time (t) on t-1.]

Page 52: Introduction to  Intensive Longitudinal Methods

Design

Study design: completely crossed 5 (sample size) × 5 (time series vector length).

Sample size conditions: N = 1, 3, 5, 10, 15.

Time series conditions: T = 25, 50, 75, 100, 125.
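The crossed design enumerates every (N, T) pairing; a quick sketch makes the 25 cells explicit:

```python
from itertools import product

# The two crossed factors from the design slide.
sample_sizes = [1, 3, 5, 10, 15]          # N
series_lengths = [25, 50, 75, 100, 125]   # T

# Fully crossed: every (N, T) pair is one simulation cell.
conditions = list(product(sample_sizes, series_lengths))
print(len(conditions))  # prints 25
```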

Page 53: Introduction to  Intensive Longitudinal Methods

Bayesian model development

The structure of the autocorrelation, error, and cross-correlation processes was selected to examine a wide range of plausible scenarios in behavioral science.

The number and direction of paths were determined by the goals of the study and previous work using the Occam's Window model selection algorithm (Madigan & Raftery, 1994; Price, 2008).

Page 54: Introduction to  Intensive Longitudinal Methods

Bayesian model development

Optimal model selection was based on competing Bayes factors (i.e., p(data|M1)/p(data|M2)) and Deviance Information Criterion (DIC) indices.
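The Bayes factor ratio p(data|M1)/p(data|M2) can be illustrated with a toy comparison. Note the caveat in the comments: a real Bayes factor integrates the likelihood over each model's prior, whereas this sketch uses two point-hypothesis "models" (all values here are made up):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y = rng.normal(1.0, 1.0, size=50)  # synthetic data generated under "M1"

# A Bayes factor compares how well two models predict the observed data:
# BF12 = p(data | M1) / p(data | M2).  Real Bayes factors integrate the
# likelihood over each model's prior; for illustration we use two
# point-hypothesis models so the marginal likelihoods are closed form.
logL1 = norm.logpdf(y, loc=1.0, scale=1.0).sum()  # M1: mean = 1
logL2 = norm.logpdf(y, loc=0.0, scale=1.0).sum()  # M2: mean = 0
bayes_factor = np.exp(logL1 - logL2)
```

Since the data were generated under M1, the Bayes factor should strongly favor it (BF >> 1).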

Development of the multivariate BVAR-SEM proceeded by modeling the population parameters as unknown but following a random walk with drift.

Page 55: Introduction to  Intensive Longitudinal Methods

Bayesian Model Development

BVAR-SEM estimation of the population models used an approximately noninformative (weakly informative, diffuse) normal prior distribution for the structural regression weights and variance components.

This prevents the introduction of parameter estimation bias from poorly selected priors in situations where little prior knowledge is available (Lee, 2007, p. 281; Jackman, 2000).

Page 56: Introduction to  Intensive Longitudinal Methods

Bayesian model development: priors

Semi-conjugate prior for parameters: multivariate normal, N(0, 4).

Precision for the covariance matrix: inverse Wishart distribution (the multivariate generalization of the inverse chi-square distribution).

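Drawing from these two prior families can be sketched with SciPy. The N(0, 4) weight prior is the slide's; the inverse-Wishart degrees of freedom and scale matrix below are illustrative assumptions, not values stated in the deck:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
k = 5  # five series (A-E)

# Normal prior on each structural weight, N(0, 4): variance 4, i.e. sd 2.
weight_prior_draws = rng.normal(0.0, 2.0, size=10000)

# One inverse-Wishart prior draw for the k x k residual covariance matrix.
# The degrees of freedom (k + 2) and identity scale are illustrative
# assumptions, not values given on the slide.
Sigma_draw = invwishart.rvs(df=k + 2, scale=np.eye(k), random_state=1)
```

The inverse-Wishart guarantees each covariance draw is symmetric and positive definite, which is why it is the standard semi-conjugate choice here.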

Page 57: Introduction to  Intensive Longitudinal Methods

Priors & Programming

These priors were selected based on (a) the distributional properties of lagged vector autoregressive models in normal linear models following a random walk with drift, and (b) their suitability for complex models with small samples.

Bayesian estimation was conducted using Mplus, v. 6.2 (Muthén & Muthén, 2010).

Page 58: Introduction to  Intensive Longitudinal Methods

Model convergence

After the MCMC burn-in phase (N = 1,000), posterior distributions were evaluated using time series and autocorrelation plots to judge the behavior of MCMC convergence (N = 100,000 samples).

Posterior predictive p statistics ranged between .61 and .68 for all 25 analyses.
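One of the diagnostics mentioned above, the chain autocorrelation, is simple to compute directly. This sketch uses a stand-in chain of independent draws (not real sampler output) to show what a well-mixed chain looks like:

```python
import numpy as np

def chain_autocorr(chain, lag):
    """Sample autocorrelation of an MCMC chain at a given lag."""
    c = np.asarray(chain, dtype=float)
    c = c - c.mean()
    return float(np.dot(c[:-lag], c[lag:]) / np.dot(c, c))

rng = np.random.default_rng(7)
# Stand-in chain: independent draws mimic a well-mixed sampler, so the
# lag-1 autocorrelation after burn-in should be near zero.
chain = rng.normal(size=5000)
burned = chain[1000:]  # drop a 1,000-iteration burn-in, as in the study
acf1 = chain_autocorr(burned, 1)
```

A slowly mixing chain would instead show lag-1 autocorrelation close to 1, which is what the trace and autocorrelation plots are screening for.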

Page 59: Introduction to  Intensive Longitudinal Methods

Step 2: Monte Carlo study

Monte Carlo simulation provides an empirical way to observe the behavior of a given statistic across a particular number of random samples.

This part of the investigation focused on examining the impact of time series length and sample size on the accuracy (i.e., parameter estimation bias) of the model in recovering the Bayes population parameter estimates.

Page 60: Introduction to  Intensive Longitudinal Methods

Monte Carlo Study

The second phase generated data using parameter estimates derived from the Bayesian population model, incorporating a lag-1 multivariate normal distribution, for each of the following conditions: (a) T = 25, N = 1, 3, 5, 10, 15; (b) T = 50, N = 1, 3, 5, 10, 15; (c) T = 75, N = 1, 3, 5, 10, 15; (d) T = 100, N = 1, 3, 5, 10, 15; (e) T = 125, N = 1, 3, 5, 10, 15.

Mplus, v. 6.2 was used to conduct the N = 1,000-replication Monte Carlo study and derive power and bias estimates.
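The two quantities the Monte Carlo study reports, percent bias and empirical power, can be sketched from replication output. The path value, SE, and replication draws below are made up; Mplus computes these internally, so this is only the arithmetic:

```python
import numpy as np

def percent_bias(estimates, true_value):
    """Average estimate relative to the generating value, in percent."""
    return float(100.0 * (np.mean(estimates) - true_value) / true_value)

def empirical_power(estimates, ses, crit=1.96):
    """Proportion of replications where |estimate / SE| exceeds the critical value."""
    z = np.abs(np.asarray(estimates) / np.asarray(ses))
    return float(np.mean(z > crit))

rng = np.random.default_rng(3)
true_b, se = 0.50, 0.20                    # made-up path value and SE
reps = rng.normal(true_b, se, size=1000)   # stand-in replication estimates
ses = np.full(1000, se)

bias = percent_bias(reps, true_b)
power = empirical_power(reps, ses)
```

With a true effect of 2.5 SEs, power lands well below 1.0, mirroring the underpowered cells in the tables that follow.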

Page 61: Introduction to  Intensive Longitudinal Methods

Results: Over All Conditions

When power estimates for parameters were < .80, Bayesian credible intervals contained the null value of zero 78% of the time.

Parameter estimates displayed power < .80 in 51% of estimates at sample size N = 1.

A parameter value of zero was contained in the 95% Bayesian credible interval in 51% of the estimates at sample size N = 1.
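The "zero in the 95% credible interval" check used throughout the results can be sketched from posterior draws. The posterior below is a made-up stand-in for a weak path, chosen so the interval does include zero:

```python
import numpy as np

def credible_interval(draws, level=0.95):
    """Equal-tailed Bayesian credible interval from posterior draws."""
    alpha = (1.0 - level) / 2.0
    lo, hi = np.quantile(draws, [alpha, 1.0 - alpha])
    return float(lo), float(hi)

rng = np.random.default_rng(11)
# Illustrative posterior for a weak path: centered near a small effect,
# so the 95% interval can include zero (mirroring the low-power cells).
draws = rng.normal(0.05, 0.10, size=20000)
lo, hi = credible_interval(draws)
contains_zero = lo <= 0.0 <= hi
```

When the interval straddles zero, the null value remains credible, which is exactly the condition tallied in the percentages reported on these slides.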

Page 62: Introduction to  Intensive Longitudinal Methods

[Path diagram: blue = contemporaneous; brown = cross-covariances; red = autoregression of time (t) on t-1.]

Page 63: Introduction to  Intensive Longitudinal Methods

Results: Path E on A

When the regression path E on A exhibited power < .80, population values of zero were also observed within the Bayesian 95% credible interval in 72% of sample size conditions.

Overall, a parameter value of zero was contained in the 95% Bayesian credible interval in 84% of the conditions.

Page 64: Introduction to  Intensive Longitudinal Methods

Results: Path E on B

When the regression path E on B exhibited power < .80, Bayesian credible intervals contained the null value of zero in 56% of the conditions.

A parameter value of zero was contained in the 95% Bayesian credible interval in 88% of the conditions.

Page 65: Introduction to  Intensive Longitudinal Methods

[Path diagram: blue = contemporaneous; brown = cross-covariances; red = autoregression of time (t) on t-1.]

Page 66: Introduction to  Intensive Longitudinal Methods

Results: Path D on B

When the regression path D on B exhibited power < .80, Bayesian credible intervals contained the null value of zero in 56% of the conditions.

A parameter value of zero was contained in the 95% Bayesian credible interval in 88% of the conditions.

Page 67: Introduction to  Intensive Longitudinal Methods

[Path diagram: blue = contemporaneous; brown = cross-covariances; red = autoregression of time (t) on t-1.]

Page 68: Introduction to  Intensive Longitudinal Methods

Results: Path E on B

The regression path E on B exhibited power < .80 in 32% of conditions.

This occurred 24% of the time in the N = 1 and N = 3 sample size conditions for time series up to T = 75.

A parameter value of zero was contained in the 95% Bayesian credible interval in 24% of the conditions.

Page 69: Introduction to  Intensive Longitudinal Methods

T = 25 Condition (left block: N = 1; right block: N = 3)

Path | Pop | Mean | M.S.E. | % bias | power || Pop | Mean | M.S.E. | % bias | power
SERIESA ← SERIESB | -1.14 | -1.14 | 0.071 | -0.72 | 0.98 || -1.14 | -1.14 | 0.018 | -0.52 | 1.00
SERIESC ← SERIESB | 1.60 | 1.60 | 0.053 | 0.11 | 1.00 || 1.63 | 1.62 | 0.015 | -0.25 | 1.00
SERIESD ← SERIESB | 0.65 | 0.65 | 0.301 | -1.06 | 0.26* || 0.62 | 0.62 | 0.082 | 0.00 | 0.61*
SERIESE ← SERIESB | -0.16 | -0.15 | 0.077 | -4.46 | 0.09* || -0.18 | -0.17 | 0.023 | -7.10 | 0.20*
SERIESD ← SERIESC | 0.98 | 1.00 | 0.224 | 2.24 | 0.58* || 1.00 | 1.00 | 0.065 | 0.00 | 0.97
SERIESA ← SERIESC | 0.50 | 0.49 | 0.058 | -1.82 | 0.60* || 0.50 | 0.50 | 0.015 | 0.00 | 0.98
SERIESE ← SERIESC | 0.77 | 0.75 | 0.051 | -1.96 | 0.90 || 0.75 | 0.75 | 0.014 | 0.00 | 1.00
SERIESD ← SERIESE | -0.34 | -0.34 | 0.193 | -0.18 | 0.12* || -0.37 | -0.38 | 0.059 | 3.27 | 0.36*
SERIESA ← SERIESE | -0.08 | -0.08 | 0.045 | -7.41 | 0.06* || -0.07 | -0.07 | 0.014 | 0.00 | 0.08*
SERIESD ← SERIESA | 1.78 | 1.77 | 0.044 | -0.48 | 1.00 || 1.79 | 1.80 | 0.014 | 0.21 | 1.00

Red = the null value θ0 lies in the credible interval for parameter θ, so the null hypothesis cannot be rejected (i.e., the null hypothesis remains credible). Blue (marked * here) = power < .80.

Page 70: Introduction to  Intensive Longitudinal Methods

T = 25 Condition (left block: N = 5; right block: N = 10)

Path | Pop | Mean | M.S.E. | % bias | power || Pop | Mean | M.S.E. | % bias
SERIESA ← SERIESB | -1.14 | -1.13 | 0.010 | -0.35 | 1.00 || -1.14 | -1.14 | 0.005 | -0.26
SERIESC ← SERIESB | 1.61 | 1.61 | 0.009 | -0.25 | 1.00 || 1.61 | 1.61 | 0.004 | -0.25
SERIESD ← SERIESB | 0.64 | 0.64 | 0.048 | 0.63 | 0.84 || 0.64 | 0.63 | 0.020 | -0.94
SERIESE ← SERIESB | -0.15 | -0.14 | 0.014 | -2.91 | 0.26* || -0.17 | -0.16 | 0.007 | -6.95
SERIESD ← SERIESC | 1.00 | 1.00 | 0.036 | -0.01 | 0.99 || 1.00 | 1.00 | 0.017 | -0.01
SERIESA ← SERIESC | 0.50 | 0.50 | 0.008 | -0.36 | 1.00 || 0.49 | 0.49 | 0.004 | -0.61
SERIESE ← SERIESC | 0.75 | 0.75 | 0.008 | -0.27 | 1.00 || 0.76 | 0.76 | 0.017 | 0.40
SERIESD ← SERIESE | -0.34 | -0.35 | 0.034 | 4.13 | 0.46* || -0.36 | -0.36 | 0.004 | 0.84
SERIESA ← SERIESE | -0.08 | -0.07 | 0.008 | -4.00 | 0.12* || -0.08 | -0.07 | 0.004 | -4.00
SERIESD ← SERIESA | 1.80 | 1.80 | 0.008 | 0.00 | 1.00 || 1.80 | 1.80 | 0.004 | 0.00

Red = the null value θ0 lies in the credible interval for parameter θ, so the null hypothesis cannot be rejected (i.e., the null hypothesis remains credible). Blue (marked * here) = power < .80.

Page 71: Introduction to  Intensive Longitudinal Methods

T = 50 Condition (left block: N = 1; right block: N = 3)

Path | Pop | Mean | M.S.E. | % bias | power || Pop | Mean | M.S.E. | % bias | power
SERIESA ← SERIESB | 1.54 | 1.55 | 0.066 | 0.78 | 1.00 || 1.54 | 1.53 | 0.023 | -0.34 | 1.00
SERIESC ← SERIESB | 1.12 | 1.13 | 0.022 | 0.40 | 1.00 || 1.12 | 1.12 | 0.007 | -0.03 | 1.00
SERIESD ← SERIESB | 0.42 | 0.42 | 0.102 | 0.17 | 0.29* || 0.41 | 0.40 | 0.032 | -3.21 | 0.62*
SERIESE ← SERIESB | 1.11 | 1.10 | 0.031 | -0.79 | 1.00 || -0.11 | -0.10 | 0.010 | -3.89 | 0.20*
SERIESD ← SERIESC | -0.82 | -0.83 | 0.082 | 1.58 | 0.83 || -0.82 | -0.83 | 0.030 | 1.58 | 0.99
SERIESA ← SERIESC | -1.45 | -1.44 | 0.023 | -0.69 | 1.00 || -1.45 | -1.45 | 0.008 | -0.54 | 1.00
SERIESE ← SERIESC | 0.64 | 0.64 | 0.022 | -0.64 | 0.98 || 0.63 | 0.62 | 0.007 | -1.60 | 1.00
SERIESD ← SERIESE | 0.38 | 0.38 | 0.080 | 1.86 | 0.27* || 0.39 | 0.38 | 0.025 | -1.29 | 0.66*
SERIESA ← SERIESE | 0.17 | 0.18 | 0.022 | 2.69 | 0.23* || 0.17 | 0.18 | 0.006 | 2.69 | 0.57*
SERIESD ← SERIESA | 1.62 | 1.62 | 0.021 | -0.07 | 1.00 || 1.64 | 1.65 | 0.007 | 0.54 | 1.00

Page 72: Introduction to  Intensive Longitudinal Methods

T = 75 Condition (left block: N = 1; right block: N = 3)

Path | Pop | Mean | M.S.E. | % bias | power || Pop | Mean | M.S.E. | % bias | power
SERIESA ← SERIESB | 1.08 | 1.12 | 0.021 | 3.54 | 1.00 || 1.10 | 1.12 | 0.006 | 1.45 | 1.00
SERIESC ← SERIESB | 1.08 | 1.06 | 0.016 | -1.85 | 1.00 || 1.09 | 1.06 | 0.006 | -2.99 | 1.00
SERIESD ← SERIESB | 0.01 | 0.09 | 0.345 | 841.00 | 0.23* || 0.01 | 0.08 | 0.014 | 742.00 | 0.25*
SERIESE ← SERIESB | 0.35 | 0.33 | 0.046 | -3.48 | 0.40* || 0.36 | 0.33 | 0.016 | -6.93 | 0.83
SERIESD ← SERIESC | 0.31 | 0.22 | 0.038 | -27.66 | 0.39* || 0.30 | 0.23 | 0.015 | -23.33 | 0.75*
SERIESA ← SERIESC | -0.13 | -0.09 | 0.021 | -32.33 | 0.14* || -0.08 | -0.06 | 0.006 | -21.25 | 0.17*
SERIESE ← SERIESC | 0.88 | 0.96 | 0.053 | 9.32 | 0.99 || 0.93 | 0.96 | 0.015 | 3.83 | 1.00
SERIESD ← SERIESE | 0.01 | 0.21 | 0.078 | 2021.00 | 0.36* || 0.01 | 0.20 | 0.049 | 1894.00 | 0.65*
SERIESA ← SERIESE | 0.01 | -0.36 | 0.140 | -3746.00 | 0.94 || 0.01 | -0.39 | 0.154 | -4003.00 | 1.00
SERIESD ← SERIESA | 0.01 | -0.38 | 0.167 | -3900.00 | 0.88 || 0.01 | 0.08 | 0.170 | 695.00 | 1.00

Page 73: Introduction to  Intensive Longitudinal Methods

T = 125 Condition (left block: N = 1; right block: N = 3)

Path | Pop | Mean | M.S.E. | % bias | power || Pop | Mean | M.S.E. | % bias | power
SERIESA ← SERIESB | 1.02 | 1.02 | 0.03 | -0.01 | 1.00 || 1.00 | 0.99 | 0.008 | -1.30 | 1.00
SERIESC ← SERIESB | 1.33 | 1.32 | 0.01 | -0.08 | 1.00 || 1.33 | 1.32 | 0.003 | -0.75 | 1.00
SERIESD ← SERIESB | 0.73 | 0.73 | 0.04 | -0.55 | 0.96 || 0.74 | 0.73 | 0.012 | -1.35 | 1.00
SERIESE ← SERIESB | 0.05 | 0.05 | 0.01 | 0.67 | 0.08* || 0.05 | 0.05 | 0.003 | 0.00 | 0.18*
SERIESD ← SERIESC | -0.52 | -0.53 | 0.03 | 1.46 | 0.82 || -0.51 | -0.51 | 0.010 | 0.79 | 0.99
SERIESA ← SERIESC | -1.47 | -1.47 | 0.01 | -0.14 | 1.00 || -1.44 | -1.43 | 0.002 | -0.42 | 1.00
SERIESE ← SERIESC | 0.17 | 0.17 | 0.01 | 0.83 | 0.46* || 0.16 | 0.16 | 0.002 | 0.00 | 0.87
SERIESD ← SERIESE | 0.86 | 0.87 | 0.03 | 0.23 | 0.99 || 0.86 | 0.87 | 0.009 | 0.70 | 1.00
SERIESA ← SERIESE | 0.09 | 0.09 | 0.01 | 1.08 | 0.17* || 0.10 | 0.10 | 0.003 | -1.02 | 0.46*
SERIESD ← SERIESA | 1.54 | 1.54 | 0.01 | 0.26 | 1.00 || 1.52 | 1.53 | 0.003 | 0.66 | 1.00

Page 74: Introduction to  Intensive Longitudinal Methods

Conclusions

When the autoregressive value drops below .70 and the error variance below .20 (i.e., below the levels of vectors A and B), problems related to parameter bias and statistical power consistently occur, regardless of sample size and time vector length.

Percent bias in parameter estimates (> 5%) and/or power (< .80) was particularly problematic in paths involving the C, D, and E vectors across all sample size and time vector conditions.

Recall that the C, D, and E vectors included autoregressive coefficients of .65 (σ² = .10), .50 (σ² = .10), and .40 (σ² = .15).

Page 75: Introduction to  Intensive Longitudinal Methods

Conclusions

One-sided Bayesian hypothesis tests (i.e., H1: θ > 0), conducted using the posterior probability of the null hypothesis, concurred with frequentist sampling-theory power estimates.

However, when we move beyond hypothesis tests by modeling the entire distribution of a parameter, the Bayesian approach is more informative (important for applied research).

This settles the question of where we are in the parameter dimension given a particular probability value (e.g., we can make probabilistic statements with greater accuracy).

Page 76: Introduction to  Intensive Longitudinal Methods

Conclusions

The Bayesian MAR provides a fully Bayesian solution to multivariate time series models by modeling the entire distribution of parameters, not merely point estimates.

The Bayesian approach is more sensitive than the frequentist approach because it models the full distribution of the parameter dimension for time series whose autoregressive components are < .80.

Page 77: Introduction to  Intensive Longitudinal Methods

THANK YOU FOR YOUR ATTENTION!

Page 78: Introduction to  Intensive Longitudinal Methods

References

Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). New York, NY: Wiley.

Ansari, A., & Jedidi, K. (2000). Bayesian factor analysis for multilevel binary observations. Psychometrika, 64, 475–496.

Box, G. E. P., & Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Reading, MA: Addison-Wesley.

Chatfield, C. (2004). The Analysis of Time Series: An Introduction (6th ed.). New York: Chapman & Hall.

Page 79: Introduction to  Intensive Longitudinal Methods

References

Congdon, P. D. (2010). Applied Bayesian Hierarchical Methods. Boca Raton: Chapman & Hall/CRC Press.

Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian Data Analysis (2nd ed.). Boca Raton: Chapman & Hall/CRC.

Jackman, S. (2000). Estimation and inference via Bayesian simulation: An introduction to Markov chain Monte Carlo. American Journal of Political Science, 44(2), 375–404.

Lee, S. Y. (2007). Structural Equation Modeling: A Bayesian Approach. New York: Wiley.

Lee, S. Y., & Song, X. Y. (2004). Bayesian model comparison of nonlinear structural equation models with missing continuous and ordinal data. British Journal of Mathematical and Statistical Psychology, 57, 131–150.

Page 80: Introduction to  Intensive Longitudinal Methods

References

Lütkepohl, H. (2006). New Introduction to Multiple Time Series Analysis. Berlin: Springer-Verlag.

Price, L. R., Laird, A. R., & Fox, P. T. (2009). Neuroimaging network analysis using Bayesian model averaging. Presentation at the International Meeting of the Psychometric Society, Durham, New Hampshire.

Price, L. R., Laird, A. R., Fox, P. T., & Ingham, R. (2009). Modeling dynamic functional neuroimaging data using structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 16, 147–162.

Scheines, R., Hoijtink, H., & Boomsma, A. (1999). Bayesian estimation and testing of structural equation models. Psychometrika, 64, 37–52.

Muthén, L. K., & Muthén, B. O. (2010). Mplus Version 6 [Computer program]. Los Angeles, CA: Muthén & Muthén.