
MAXIMUM LIKELIHOOD METHODS FOR NONLINEAR REGRESSION MODELS WITH

COMPOUND-SYMMETRIC ERROR COVARIANCE

by

Carolin M. Malott

Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1873T

January 1990


MAXIMUM LIKELIHOOD METHODS FOR

NONLINEAR REGRESSION MODELS WITH

COMPOUND-SYMMETRIC ERROR COVARIANCE

by

Carolin M. Malott

A dissertation submitted to the faculty of the University of North Carolina in partial fulfillment of the requirements for the degree of

Doctor of Philosophy in the Department of Biostatistics

Chapel Hill

1989

Approved by:

Reader

Reader


ABSTRACT

CAROLIN M. MALOTT. Maximum Likelihood Methods for Nonlinear Regression

Models with Compound-Symmetric Error Covariance. (Under the direction of Keith E.

Muller).

Statistical methods are developed for fitting nonlinear functions to multivariate

data generated by response variates with compound-symmetric covariance. With

complete data, maximum likelihood estimation of the model and covariance parameters

is described, under an assumption of Gaussian errors. The estimation procedure

accommodates both within-unit and between-unit variability in fitting an expectation

function. Under regularity conditions, the estimation procedure yields asymptotically

normal, unbiased and consistent estimators. However, the focus of this research is on

small sample properties. Existing general methods for fitting nonlinear multivariate

regression functions produce standard errors for parameter estimates which are

extremely optimistic when small samples are used. By incorporating the compound-symmetric covariance structure into the model, substantial improvements in the

estimation of the covariance matrix for the parameter estimates are obtained. A two-stage approximate weighted least squares estimation method, analogous to that for

complete data, is developed for incomplete data.

F approximations to modified Wald and likelihood ratio statistics were derived

to address the anti-conservatism of many small sample inference procedures. The

modified statistics are constructed by omitting the covariance estimates in the

computation of the usual statistics and instead using repeated applications of Box's


[1954a] results for characterizing the distributions of approximate χ² random variates.

The performance of the complete- and incomplete-data estimation and inference

procedures was evaluated through simulation studies. Type I error rates for both the

usual and modified statistics were only mildly inflated with data that are complete or

up to 10% incomplete when the compound-symmetric covariance is modelled.

The estimation and inference procedures developed in this research are applied

to a real data example involving human thyroid stimulating hormone response to

injection with thyrotropin.


ACKNOWLEDGEMENTS

First and foremost, I would like to thank my father for his encouragement to

reach for the highest academic goals possible. His sincere appreciation for the rigors of

this pursuit sustained me during many of the darker moments of this endeavor.

Second, I must thank the many people who accepted me as an absentee friend and

family member. While many of those around me did not understand why or how a

dissertation is done, they generously supported my effort by giving me the vast amount

of time necessary to complete this research. In addition, I give special thanks to

Michael Frey who was in the unique position of thoroughly understanding the scope of

my work as well as the time and effort required. Our many discussions of my research

served to further my understanding of my topic as well as to put it into a broader

perspective. Finally, I must thank my advisor, Keith Muller, for introducing me to this

exciting area of statistics and my committee for guidance and encouragement

throughout my research.


TABLE OF CONTENTS

Page

LIST OF TABLES viii

LIST OF FIGURES xi

CHAPTER 1: INTRODUCTION AND LITERATURE REVIEW

1.1 Introduction 1

1.1.1 Statement of the Problem 1

1.1.2 An Example 4

1.2 Literature Review 6

1.2.1 The Linear Model 6

1.2.1.1 Model Formulations 7

1.2.1.2 Least Squares Procedures 9

1.2.1.3 The Compound Symmetry Assumption 13

1.2.1.4 Missing Data 15

1.2.2 The Nonlinear Model with Spherical Covariance Matrix 17

1.2.2.1 Least Squares Computational Methods 17

1.2.2.1.1 Estimation 20

1.2.2.1.2 Inference 22

1.2.3 The Nonlinear Model with Non-Spherical Covariance Matrix 26

1.2.3.1 Approximate Weighted Least Squares Computational Methods 27

1.2.3.1.1 Estimation 29

1.2.3.1.2 Inference 30

1.2.3.2 Survey of Alternate Methods 32


CHAPTER 2: ESTIMATION FOR COMPLETE DATA

2.1 Introduction 38

2.2 The Orthonormal Model Transformation 39

2.3 Regularity Conditions 42

2.4 Maximum Likelihood Estimation 47

2.5 Asymptotic Properties of the Parameter Estimates 54

2.6 Bias Approximations for the Parameter Estimates 59

CHAPTER 3: INFERENCE FOR COMPLETE DATA

3.1 Introduction 61

3.2 Error Variance Estimation 63

3.3 Development of Hypothesis Sums of Squares 74

3.4 Construction of Test Statistics 85

3.5 Comparison of Test Statistics 92

3.6 Confidence Interval and Confidence Region Estimation 94

CHAPTER 4: ESTIMATION AND INFERENCE FOR INCOMPLETE DATA

4.1 Introduction 97

4.2 The Orthonormal Model Transformation 99

4.3 Maximum Likelihood Estimation 103

4.4 Method of Moments Estimation of the Variance Components 106

4.5 Approximate Weighted Least Squares Estimation 110

4.6 Inference 110


CHAPTER 5: SIMULATION STUDIES

5.1 Introduction 112

5.2 Models Simulated 113

5.3 Simulation Design 117

5.4 Results of the Complete Data Study 122

5.4.1 Evaluation of the Estimation Methods 123

5.4.2 Evaluation of the Inference Methods 128

5.5 Results of the Incomplete Data Study 135

5.5.1 Evaluation of the Estimation Method 135

5.5.2 Evaluation of the Inference Methods 137

CHAPTER 6: AN EXAMPLE

6.1 Overview of the Study 139

6.2 Model Selection 140

6.3 Research Hypotheses 143

6.4 Results and Conclusions 144

CHAPTER 7: SUMMARY AND SUGGESTIONS FOR FURTHER RESEARCH

7.1 Summary 147

7.2 Suggestions for Further Research 149

BIBLIOGRAPHY 151

TABLES 161

FIGURES 199


LIST OF TABLES

Table 1: Least squares terminology 161

Table 2: Wald based statistics for testing H0 : a(β0) = 0 vs. Ha : a(β0) ≠ 0 162

Table 3: Likelihood ratio based statistics for testing H0 : a(β0) = 0 vs. Ha : a(β0) ≠ 0 163

Table 4: Approximate 95% confidence intervals for various observed Type I error rates and numbers of replications 164

Table 5a: Number of replications used for model 1 in the complete data study 165

Table 5b: Average number of iterations until convergence criteria were reached for estimation of the full model in the complete data study 166

Table 6a: Average of the parameter estimates for model 1 in the complete data study 167

Table 6b: Average of the parameter estimates for model 2 in the complete data study 168

Table 7a: Mean estimates of the asymptotic covariance matrix for β̂ in the reduced model 1 for the complete data study and corresponding percent of sample value achieved by the mean asymptotic covariance estimates 169

Table 7b: Mean estimates of the asymptotic covariance matrix for β̂ in the reduced model 2 for the complete data study and corresponding percent of sample value achieved by the mean asymptotic covariance estimates 170

Table 8a: Average of the estimated variance components and the percent of population value achieved for transformed model 1 in the complete data study 171

Table 8b: Average of the estimated variance components and the percent of population value achieved for transformed model 2 in the complete data study 172

Table 9a: Average of the estimated variance components and the percent of population value achieved for model 1 in the complete data study 173

Table 9b: Average of the estimated variance components and the percent of population value achieved for model 2 in the complete data study 174

Table 10a: Type I error rates for approximate F-tests of the joint hypothesis H0 : βg = βg′ at the .05 level of significance in model 1 of the complete data study 175


Table 10b: Type I error rates for approximate F-tests of the joint hypothesis H0 : βg = βg′ at the .05 level of significance in model 2 of the complete data study 176

Table 11a: Type I error rates for approximate χ²-tests of the joint hypothesis H0 : βg = βg′ at the .05 level of significance in model 1 of the complete data study 177

Table 11b: Type I error rates for approximate χ²-tests of the joint hypothesis H0 : βg = βg′ at the .05 level of significance in model 2 of the complete data study 178

Table 12: Percent coverage for approximate 95% (Wald-based) confidence intervals for reduced model parameter estimates in model 1 of the complete data study 179

Table 13: Observed F at which Dmax occurs and corresponding p-value for Dmax from Kolmogorov-Smirnov goodness-of-fit test for approximate F statistics used to test H0 : βg = βg′ with constrained covariance and AWLS estimation in the complete data study 180

Table 14a: Average correction factors for Wald and likelihood ratio based tests of H0 : βg = βg′ in model 1 using constrained covariance estimation and AWLS in the complete data study 181

Table 14b: Average correction factors for Wald and likelihood ratio based tests of H0 : βg = βg′ in model 1 using unconstrained covariance estimation and AWLS in the complete data study 182

Table 15a: Average degrees of freedom for Wald and likelihood ratio based tests of H0 : βg = βg′ in model 1 using constrained covariance estimation and AWLS in the complete data study 183

Table 15b: Average degrees of freedom for Wald and likelihood ratio based tests of H0 : βg = βg′ in model 1 using unconstrained covariance estimation and AWLS in the complete data study 184

Table 16: Number of replications used for model 1 in the incomplete data study 185

Table 17: Average number of iterations until convergence criteria were reached for estimation of the full model using AWLS and constrained covariance estimation in the incomplete data study 186

Table 18: Average of the parameter estimates for model 1 using AWLS and constrained covariance estimation in the incomplete data study 187

Table 19: Mean estimates of the asymptotic covariance matrix for β̂ in the reduced model 1 for the incomplete data study and corresponding percent of sample value achieved by the mean asymptotic covariance estimates 188


Table 20: Average of the estimated variance components and the percent of population value for model 1 using AWLS and constrained covariance estimation in the incomplete data study 189

Table 21: Type I error rates for approximate F-tests of the joint hypothesis H0 : βg = βg′ at the .05 level of significance in model 1 of the incomplete data study 190

Table 22: Percent coverage for approximate 95% (Wald-based) confidence intervals for reduced model parameter estimates in model 1 of the incomplete data study 191

Table 23: Observed F at which Dmax occurs and corresponding p-value for Dmax from Kolmogorov-Smirnov goodness-of-fit test for approximate F statistics used to test H0 : βg = βg′ with constrained covariance and AWLS estimation in the incomplete data study 192

Table 24: Average correction factors for Wald and likelihood ratio based tests of H0 : βg = βg′ in model 1 using constrained covariance estimation and AWLS in the incomplete data study 193

Table 25: Average degrees of freedom for Wald and likelihood ratio based tests of H0 : βg = βg′ in model 1 using constrained covariance estimation and AWLS in the incomplete data study 194

Table 26: Parameter summary for model (6.1) fitted to the TSH response data using OLS 195

Table 27: Parameter summary for the full model (6.1) under Ha1 fitted to the TSH response data using AWLS for incomplete data 195

Table 28: Test statistics, degrees of freedom and p-values for testing H1 196

Table 29: Parameter summary for the full model (6.2) under Ha2 fitted to the TSH response data using AWLS for incomplete data 196

Table 30: Test statistics, degrees of freedom and p-values for testing H2 197

Table 31: Asymptotic correlation among parameter estimates from the full model (6.2) under Ha2 using AWLS for incomplete data 198


LIST OF FIGURES

Figure 1a: Model 1 199

Figure 1b: Model 2 199

Figure 2: Complete data simulation study design 200

Figure 3: F-plot of the W1 statistic with n = 20, ρ = 0.6 and complete data using AWLS estimation in model 1 201

Figure 4a: F-plot analog of the W2 statistic with n = 20, ρ = 0.6 and complete data using AWLS estimation in model 1 202

Figure 4b: F-plot analog of the W3 statistic with n = 20, ρ = 0.6 and complete data using AWLS estimation in model 1 203

Figure 5: F-plot of the W1 statistic with n = 20, ρ = 0.6 and 5% missing data using AWLS estimation in model 1 204

Figure 6a: F-plot analog of the W2 statistic with n = 20, ρ = 0.6 and 5% missing data using AWLS estimation in model 1 205

Figure 6b: F-plot analog of the W3 statistic with n = 20, ρ = 0.6 and 5% missing data using AWLS estimation in model 1 206

Figure 7: TSH Response Curves 207


Chapter 1

INTRODUCTION AND LITERATURE REVIEW

1.1 Introduction

Nonlinear regression models are used to describe processes in many disciplines

such as the physical, chemical, biological and social sciences. Studies may be limited to

small samples when observational units are difficult or expensive to obtain. In such

cases it is common to take several measurements on each unit. In practice, it may not

be possible to obtain a complete set of measurements for each unit. This results in

incomplete or "missing" data.

The objective of this work is to develop maximum likelihood methods for

estimation and inference for a class of nonlinear repeated measurements models with

additive, normally distributed errors and compound symmetric error covariance

structure. In particular, these methods are to be used with small samples and complete

data. In a later section, similar methods are developed which are appropriate for

incomplete data. Important features of the class of models of interest and their designs

are more fully outlined in the following statement of the problem.

1.1.1 Statement of the Problem

In general, a nonlinear model is one which is nonlinear in its parameters. An

inherently nonlinear model is one which cannot be transformed into a linear model. A

nonlinear model for which such a transformation exists may be called transformably

linear [Bates and Watts, 1988]. Throughout this paper nonlinear will be taken to


mean inherently nonlinear since well-known linear model methods may be applied to

transformably linear models. To make this distinction clear, consider a fixed effect

linear model with response variable yi, i ∈ {1, 2, ..., n}, fixed known predictor xi, random error ei and fixed unknown parameters β0 and β1:

yi = β0 + β1 xi + ei .  (1.1)

Additionally, consider the following model which is not linear in its parameters but

which is transformably linear:

yi = β0 exp(β1 xi) ei .  (1.2)

Taking natural logarithms on both sides of equation (1.2) produces a model in the

linear form of (1.1) with yi* = ln(yi), β0* = ln(β0), ei* = ln(ei) and β1* = β1. Thus

linear model methods apply to this model once it has been transformed. The following

logistic model is an example of an inherently nonlinear model since it cannot be

expressed in linear form:

yi = β0 / [1 + β1 exp(β2 xi)] + ei .  (1.3)
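As an aside, the transformable linearity of model (1.2) is easy to verify numerically. The sketch below is not from the dissertation; the data and parameter values are invented. It simulates (1.2) with multiplicative log-normal error and recovers the parameters by ordinary least squares on the log scale:

```python
# Sketch (not from the dissertation): model (1.2), y_i = b0*exp(b1*x_i)*e_i,
# is transformably linear, so OLS on the log scale recovers its parameters.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 2.0, n)
b0_true, b1_true = 2.0, 0.8                      # hypothetical parameter values
e = np.exp(rng.normal(0.0, 0.1, size=n))         # multiplicative log-normal error
y = b0_true * np.exp(b1_true * x) * e

# ln(y_i) = ln(b0) + b1*x_i + ln(e_i): linear in ln(b0) and b1, as in (1.1)
slope, intercept = np.polyfit(x, np.log(y), 1)
b0_hat, b1_hat = np.exp(intercept), slope
print(b0_hat, b1_hat)                            # near 2.0 and 0.8
```

The multiplicative error is what makes the log transformation work; with an additive error, as in the logistic model (1.3), no linearizing transformation exists.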

The choice to use a nonlinear model may be motivated by the desire to correctly

specify the phenomenon being studied and/or the desire for parsimony. Any set of

data may be fitted with a polynomial model. However, a polynomial model is linear in

its parameters even though the "correct" model may be nonlinear [Gallant, 1979;

Sandland and McGilchrist, 1979]. Hence, polynomial parameters may not be useful for

describing the underlying phenomenon. The primary motivation for using a nonlinear

model typically arises from prior knowledge that a particular process exhibits a known

nonlinear form. In this case, the parameters may have interesting and relevant

interpretations. A secondary motivation for using a nonlinear model is the desire for

parsimony. A suitable nonlinear model may require far fewer parameters than a

polynomial model does for the same data.

The following general notation for a nonlinear model is defined below. In

general, the fixed effects model equation for the j-th response from the i-th


observational unit, i ∈ {1, 2, ..., n} and j ∈ {1, 2, ..., p}, may be written as

yij = f(xij; β) + eij ,  (1.4)

in which

yij is the j-th response from the i-th observational unit

f(·; ·) denotes the nonlinear response function

xij is the (r x 1) vector of fixed, known predictors for the j-th response from the i-th observational unit

β is the (q x 1) vector of fixed, unknown parameters for the response function

eij is the random error for the j-th response from the i-th observational unit

yi is the (p x 1) vector of responses for the i-th observational unit

and

Σ is a (p x p) positive definite symmetric covariance matrix.

Two important assumptions regarding this statement of the model should be

emphasized. First, there is independence between observational units and second,

responses within an observational unit are correlated. Additionally, it will be assumed

that the errors are normally distributed and additive.

An important restriction to the class of models considered here involves an

assumption that Σ be compound-symmetric. A compound-symmetric covariance matrix has the form

Σ = σ²[(1 − ρ)Ip + ρ11′] ,  (1.5)

in which 0 < σ² < ∞ and −1/(p−1) < ρ < 1 are unknown parameters, Ip is the (p x p) identity matrix and 1 is a (p x 1) vector of 1's. The symmetry is compound in that all variances are equal to σ² and all correlations are equal to ρ. The restrictions on the ranges of ρ


and σ² ensure Σ is positive definite. This assumption is reasonable given certain experimental designs.

The purpose of this thesis is to develop accurate small sample estimation and

inference procedures for the special class of models described above. Comprehensive

sources exist for estimation and inference in both linear multivariate models [see, for

example, Morrison, 1976, Searle, 1971 or Kshirsagar, 1971] and nonlinear multivariate

models [Gallant, 1987 or Seber and Wild, 1989]. Methods for the latter rely on

asymptotic results based on estimation of the covariance matrix among repeated

measurements; hence, the accuracy of estimation and inference procedures in the

general nonlinear multivariate model is highly dependent on sample size [Gallant,

1987]. This is true for linear model methods that rely on asymptotic results as well

[Freedman and Peters, 1984].

Incomplete data occur when it is not possible to obtain all p measurements on

each observational unit. Let pi denote the number of repeated measurements available

for the i-th observational unit. In order to accommodate incomplete data, model (1.4)

must be generalized to allow pi ≠ p for all i. Hence, methods similar to those for

complete data are developed to address incomplete data.
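For concreteness, the compound-symmetric covariance (1.5) and its positive definiteness conditions can be sketched numerically. This is an illustration only; the dimension and parameter values below are invented:

```python
# Sketch of the compound-symmetric covariance (1.5); the dimension and
# parameter values below are invented for illustration.
import numpy as np

def compound_symmetric(p, sigma2, rho):
    """Sigma = sigma^2 [ (1 - rho) I_p + rho 1 1' ], equation (1.5)."""
    one = np.ones((p, 1))
    return sigma2 * ((1.0 - rho) * np.eye(p) + rho * (one @ one.T))

p, sigma2, rho = 4, 2.0, 0.6                 # rho within (-1/(p-1), 1)
Sigma = compound_symmetric(p, sigma2, rho)

# All variances equal sigma^2 and all covariances equal sigma^2 * rho.
# Eigenvalues: sigma^2(1 - rho), multiplicity p-1, and sigma^2[1 + (p-1)rho].
eigvals = np.linalg.eigvalsh(Sigma)
print(eigvals.min() > 0)                     # True: Sigma is positive definite
```

The closed-form eigenvalues σ²(1 − ρ), with multiplicity p−1, and σ²[1 + (p−1)ρ] show directly why the stated ranges for ρ and σ² guarantee positive definiteness.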

1.1.2 An Example

An example of the problem described in §1.1 is based on a growth experiment

presented by Rawlings [1988]. Consider four treatments applied to the blue-green algae

Spirulina platensis which differed according to the amount of "aeration" of the

cultures:

1) no shaking and no CO2 aeration

2) CO2 bubbled through the culture

3) continuous shaking of the culture but no CO2; and

4) CO2 bubbled through the culture and continuous shaking of the culture.


Culture growth was assessed daily for 14 days, with each of 14 solutions prepared

independently. It will be assumed here that each solution was aliquotted into four

treatment groups, thus providing four repeated measurements, and that each of these

14 solutions was randomly assigned to one of the 14 times of growth measured in the

study. The dependent variable reported for each treatment is a log-scale measurement

of the increased absorbance of light by the solution. This is used as a measure of algae

density.

It was suggested that the following response function be used to model the

process:


yij = Σ_{j=1}^{4} { αj x1ij [ 1 − exp( −βj x2ij ) ] } + eij ,  (1.6)

in which i indexes time and j indexes treatment condition, so that yij is the absorbance of light at the i-th time for the j-th treatment, the x1ij are a set of indicators for the four treatments and x2ij is the time, in days. For this model, x2ij = x2ij′ for all j, j′ ∈ {1, 2, 3, 4}. The parameters for this model have interesting interpretations. The αj are maximum attainable light absorbances for the four treatments and the βj essentially describe the rate at which these maximum absorbances are achieved.

An overall hypothesis might concern whether or not algal density, as measured

by light absorbance, increases across time in the same fashion for all four treatments.

As a function of the parameters this leads to a test of coincidence, H0 : α1 = α2 = α3 = α4 and β1 = β2 = β3 = β4. Rejection of coincidence leads naturally to two component hypotheses. One is directed at the maximum algal density and the other at the rate of

growth across treatments. Note that the nature of the repeated measurements, arising

from four treatments applied to a single prepared solution, suggests that an assumption

of compound symmetry may be plausible (see §1.2.1.3).
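A sketch of the expectation function in (1.6) may help fix ideas. The parameter values below are hypothetical, chosen only to display the shape of the four treatment curves; they are not estimates from the text:

```python
# Sketch of the expectation function in (1.6) with hypothetical parameter
# values (alpha_j, beta_j are NOT estimates from the dissertation).
import numpy as np

alpha = np.array([1.0, 1.4, 1.2, 1.8])     # hypothetical maximum absorbances
beta = np.array([0.20, 0.30, 0.25, 0.40])  # hypothetical rate parameters

def expected_response(t, j):
    """Mean absorbance at time t (days) under treatment j (0-based index)."""
    return alpha[j] * (1.0 - np.exp(-beta[j] * t))

t = np.arange(1, 15)                        # the 14 days of growth
curves = np.array([expected_response(t, j) for j in range(4)])

# Each curve rises monotonically toward its own asymptote alpha_j; the
# coincidence hypothesis asks whether all four curves are identical.
print(curves.shape)                         # (4, 14)
```

Under coincidence, all four (αj, βj) pairs are equal and the curves collapse onto a single growth trajectory.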


1.2 Literature Review

This literature review has been divided into three sections. These concern the

issues surrounding the model formulation and maximum likelihood methods considered.

The first section introduces the notation of the linear model and describes several

important design schemes which are instructive in the understanding of the nonlinear

model. This section also includes a review and clarification of the notation and

terminology appearing in the literature which concerns least squares procedures. Least

squares procedures, under a normality assumption, are equivalent to maximum

likelihood. The compound symmetry assumption and missing data are briefly

reviewed. The second section provides notation for the nonlinear model with spherical

covariance matrix and least squares computational methods. The third section extends

the notation of the previous two sections to include nonlinear models with non-spherical covariance matrices and least squares computational methods. Finally, a

summary of alternate methods which are relevant to the nonlinear model with general

non-spherical covariance, as well as those methods specific to the nonlinear model with

compound symmetric covariance, is provided.

1.2.1 The Linear Model

In order to lay the foundation for the discussion of the nonlinear model which

follows, it is helpful to introduce the notation and terminology of the general linear

multivariate model with normality (GLMM). The GLMM may be written as

Y = XB + E ,  (1.7)

in which

Y is an (n x p) matrix of observed responses

X is an (n x q) matrix of fixed, known predictors


B is a (q x p) matrix of fixed, unknown parameters

E is an (n x p) matrix of random errors

and

Σ is a (p x p) positive definite symmetric covariance matrix.

Let i ∈ {1, 2, ..., n} denote n independent observational units and assume rk(X) = q, in which rk(·) denotes rank. The best linear unbiased estimator (BLUE) of B is

B̂ = (X′X)⁻¹X′Y .  (1.8)

Furthermore, since normality of the errors is assumed, B̂ is a maximum likelihood estimator (MLE). The MLE of Σ is

Σ̂ = (Y − XB̂)′(Y − XB̂) / n .  (1.9)

The MLE of Σ is biased since it does not account for estimation of the parameters in B. Thus, an unbiased estimator of Σ, [n/(n−q)]Σ̂, is often preferred in practice.
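The estimators in (1.8) and (1.9) can be sketched in a few lines of linear algebra. The data and dimensions below are simulated purely for illustration; the computation follows the closed forms just given:

```python
# Sketch of the complete-data GLMM estimators: B-hat from (1.8), the biased
# MLE of Sigma from (1.9), and the unbiased [n/(n-q)] rescaling.
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 30, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # (n x q) design
B = np.array([[1.0, 0.5, -0.2],
              [0.3, 0.3, 0.3]])                         # (q x p) parameters
E = rng.normal(scale=0.5, size=(n, p))                  # (n x p) errors
Y = X @ B + E

B_hat = np.linalg.solve(X.T @ X, X.T @ Y)               # (1.8)
R = Y - X @ B_hat                                       # residual matrix
Sigma_mle = (R.T @ R) / n                               # (1.9), biased
Sigma_unb = (n / (n - q)) * Sigma_mle                   # unbiased version
print(B_hat.shape, Sigma_mle.shape)                     # (2, 3) (3, 3)
```

Solving the normal equations with `np.linalg.solve` avoids forming (X′X)⁻¹ explicitly, which is the numerically preferable route.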

1.2.1.1 Model Formulations

It is convenient to recast a multivariate problem as a univariate one by writing

the data matrix in vector form. Let N = np so that N denotes the full set of

observations. Then n denotes the number of independent observational units which

contribute to N and p denotes the number of repeated measurements from each unit.

In general, for p > 1, this leads to a nondiagonal covariance matrix among the full set

of N observations. In any case, it emphasizes a model classification scheme based on

assumptions regarding the covariance and design matrix structures.

Without loss of generality for the analysis methods used, the GLMM may be

expressed in two different vector forms. Each of these results in error covariance and

design matrices which may be written as Kronecker product forms. Throughout this

paper, ® will be used to denote the Kronecker product such that for any two matrices

A = {aij} and B, A ⊗ B = {aij B}. Borrowing terminology from Gallant [1987], the


two vector forms are 1) grouped by subject (denoted by subscript "s") and

2) grouped by equation (denoted by subscript "e").

For the GLMM, the grouped by subject arrangement appears as:

ys = Vec(Y′) = (y1′, y2′, ..., yn′)′
es = Vec(E′)
Ωs = V[Vec(E′)] = In ⊗ Σ .  (1.10)

In the above, yi′ is a (1 x p) vector of measurements for the i-th unit, i ∈ {1, 2, ..., n}.

This data arrangement is used, for example, by Winer [1971] in describing the

univariate approach to repeated measures. Typically, the motivation for the grouped

by subject data arrangement is the convenience of writing the (N x N) covariance

matrix, Ωs, in block diagonal form.

For the GLMM, the grouped by equation arrangement appears as:

ye = Vec(Y) = (y1′, y2′, ..., yp′)′
ee = Vec(E)
Ωe = V[Vec(E)] = Σ ⊗ In .  (1.11)

Here, yj′ is a (1 x n) vector of observations for the j-th response variable, j ∈ {1, 2, ..., p}. The block diagonal design matrix, Xe, emphasizes the fact that a

multivariate model may be considered as a set of p equations.
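The two Kronecker-product covariance forms in (1.10) and (1.11) can be illustrated directly; the dimensions below are invented for display:

```python
# Sketch of the two error-covariance arrangements: grouping by subject gives
# I_n (x) Sigma as in (1.10), grouping by equation gives Sigma (x) I_n as in
# (1.11).
import numpy as np

n, p = 3, 2
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])              # (p x p) within-unit covariance

Omega_s = np.kron(np.eye(n), Sigma)         # (1.10): block diagonal, (np x np)
Omega_e = np.kron(Sigma, np.eye(n))         # (1.11)

# Same variances, different ordering: the i-th diagonal block of Omega_s is
# Sigma, and a permutation of rows and columns maps Omega_s to Omega_e.
print(np.allclose(Omega_s[:2, :2], Sigma))  # True
```

The block diagonal structure of Ωs is what makes the grouped-by-subject arrangement convenient for likelihood computations.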

These model formulations can be generalized to produce features shared by the

class of nonlinear models for which maximum likelihood (ML) methods are being

sought. In particular, the grouped by equation arrangement can be generalized to

include Xj ≠ X for all j. This is sometimes referred to as a multiple design matrix

(MDM) model [Srivastava, 1966]. It was this design feature that motivated the

development of seemingly unrelated regressions (SUR) by Zellner [1962, 1963]. In


SUR, it is assumed that the response function for each j-th variable is different.

Alternately, the Xj may differ when one or more covariates are measured separately

for each j-th response. When j indexes time, such covariates have been referred to as

time-varying. More generally, models possessing this feature, in which j indexes some

factor other than time, may be referred to as having a repeated covariates design. An

equivalent situation arises frequently for nonlinear models either from time-varying

covariates or SUR type designs. When Xj ≠ X, such as in these generalizations of the

GLMM, iterative methods are necessary to solve the likelihood equations to obtain the

MLE of B. For example, SUR and MDM models possess this feature and therefore

require iterative methods.

1.2.1.2 Least Squares Procedures

It will be useful to carefully distinguish among ordinary (OLS), exact weighted

(EWLS), approximate weighted (AWLS) and iterated approximate weighted least

squares (ITAWLS) since various terms are used for these in the literature. Consider a

linear model for some (N x 1) vector, y, of observed responses:

y = Xβ + e ,  (1.12)

in which

y is an (N x 1) vector of observations

X is an (N x q) matrix of fixed, known predictors

β is a (q x 1) vector of fixed, unknown parameters, and

e is an (N x 1) vector of random errors.

Assume only that E(e) = 0, and let Ω = V(e) = σ²V in which 0 < σ² < ∞ and V is an (N x N) positive definite symmetric matrix.

The least squares criterion may be applied to minimize the objective functions

appearing in Table 1. Each objective function provides distinct least squares


procedures, although in some special cases the estimators from different procedures will coincide. These procedures produce estimates of β which are optimal, in one or more senses, given certain assumptions about V. The fact that the OLS and EWLS estimates of β are best linear unbiased estimates (BLUE) follows directly from model (1.12) by the Gauss-Markov theorem, given that the corresponding assumption about V is met. The asymptotic properties of β̂ reported for each procedure require independence as well. As the last column in Table 1 indicates, for some procedures the

estimates produced will coincide with the maximum likelihood estimates under an

additional assumption of normally distributed errors.

Closed form solutions exist for OLS in linear models, making this procedure computationally simple. If weighted least squares are to be used and V is known, then EWLS also provides a closed form solution. Typically, when weighted least squares are to be used, V is not known, so that an approximate procedure must be employed. In general, minimization of the objective function of AWLS and ITAWLS involves a system of equations which are nonlinear in β and V, thereby requiring methods which iterate between estimation of V and β. The distinction between AWLS and ITAWLS is that the former procedure uses a single iteration while the latter involves iteration until convergence of the sequence of estimates.
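As a concrete illustration of the first two procedures, the following sketch (hypothetical code, not part of the original text; all function and variable names are illustrative) computes the closed-form OLS and EWLS estimators for a simple linear model y_i = β₀ + β₁x_i + e_i in which V is diagonal and treated as known.

```python
import random

def ols(x, y):
    # OLS: minimize (y - Xb)'(y - Xb); closed form for intercept and slope
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b1 * xbar, b1

def ewls(x, y, v):
    # EWLS: minimize (y - Xb)' V^{-1} (y - Xb) with V = diag(v) known,
    # i.e., weighted least squares with weights w_i = 1 / v_i
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return ybar - b1 * xbar, b1

# heteroscedastic data: error variance grows with x, and V is treated as known
random.seed(1)
x = [i / 10 for i in range(1, 101)]
v = [0.1 + 0.05 * xi ** 2 for xi in x]
y = [2.0 + 0.5 * xi + random.gauss(0.0, vi ** 0.5) for xi, vi in zip(x, v)]
print("OLS: ", ols(x, y))
print("EWLS:", ewls(x, y, v))
```

Both estimators are unbiased here; EWLS differs only in downweighting the high-variance observations.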

With reference to SUR, Zellner [1962] referred to AWLS estimators as two-stage Aitken estimators since two estimation stages are necessary. In the first stage, OLS is used to compute an estimate of β. The OLS residuals are then used to compute an estimate of Σ. In the second stage, the estimate of Σ produced from the OLS residuals is used in the WLS estimation of β; this is the AWLS estimate of β. The two estimation stages comprise the first iteration of ITAWLS.

ITAWLS begins with the residuals produced from the AWLS estimation of β. These residuals are used to recompute a new estimate of Σ which, in turn, can be used to obtain a new WLS estimate of β, and so forth. This iteration between estimation of Σ and estimation of β may be continued until some convergence criterion is reached. This produces the ITAWLS estimator of β at the last iteration.
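The AWLS/ITAWLS distinction can be sketched in a setting simpler than SUR: a hypothetical linear model whose errors fall into two groups with unknown variances (all names below are illustrative, not from the original). One pass through the loop is the two-stage (AWLS) estimator; iterating to convergence yields ITAWLS.

```python
import random

def wls(x, y, w):
    # weighted least squares for y = b0 + b1*x with weights w_i (= 1/variance_i)
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return ybar - b1 * xbar, b1

def itawls(x, y, group, max_iter=100, tol=1e-10):
    # Stage 1: OLS (unit weights).  Then iterate: residuals -> estimated
    # group variances -> new WLS fit.  Stopping after one pass gives the
    # AWLS (two-stage Aitken) estimator; iterating to convergence, ITAWLS.
    b = wls(x, y, [1.0] * len(x))
    for _ in range(max_iter):
        r = [yi - b[0] - b[1] * xi for xi, yi in zip(x, y)]
        var = {g: (sum(ri ** 2 for ri, gi in zip(r, group) if gi == g)
                   / group.count(g)) for g in set(group)}
        b_new = wls(x, y, [1.0 / var[gi] for gi in group])
        if max(abs(bn - bo) for bn, bo in zip(b_new, b)) < tol:
            break
        b = b_new
    return b_new

random.seed(2)
n = 200
group = [i % 2 for i in range(n)]
x = [i / 20 for i in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0.0, 1.0 if g == 0 else 5.0)
     for xi, g in zip(x, group)]
print(itawls(x, y, group))
```

In the SUR case the same loop applies with Σ estimated across equations and a GLS fit of the stacked system in place of the scalar weights.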

Iterative methods can be computationally intensive, due in part to the number

of covariance parameters to be estimated. An assumption that the error covariance

matrix has some structure may reduce the number of parameters to be estimated in

AWLS or ITAWLS. More important than a possible reduction in computational effort is the potential reduction in sampling variation in the estimation of constrained covariance parameters. In the vector version of the GLMM, assumptions about the structure of Σ may be made. For example, it may be assumed that Σ has an autoregressive moving average, autoregressive order one or linear covariance structure.

The compound symmetry structure, a special case of a linear covariance structure, will

be discussed in the next section.

Some linear model designs permit a reduction in the computation of iterated

estimates since they either 1) provide convergence in one iteration or 2) produce

iterated estimates which are identical to the OLS estimates. An example of the first

case occurs for some patterned covariance matrices in the GLMM in which explicit

solutions providing ML estimates of the elements of Σ may be obtained in one

iteration. The necessary and sufficient conditions for this to occur with complete data,

using a scoring algorithm, were presented in Szatrowski [1980]. Examples of the second

case are provided by Zellner [1962, 1963] for SUR. Zellner reported some situations for

which the OLS, AWLS and ITAWLS estimators are identical. Note that for the

GLMM the OLS, EWLS, AWLS and ITAWLS estimators coincide and with Gaussian

errors all but AWLS in turn coincide with the MLE.

The AWLS and ITAWLS estimators of β in the linear model have both been

demonstrated to be consistent, asymptotically unbiased, efficient and normally


distributed [Zellner, 1962; Kmenta and Gilbert, 1968]. Moreover, these authors noted

that with the additional assumption of normal errors, only the ITAWLS estimator is ML. A proof that ITAWLS produces ML estimates in linear models, under an

assumption of normality of the errors, may be found in Srivastava and Giles [1987].

Additionally, the latter authors reported the necessary and sufficient conditions for the

iterative procedure to converge yielding a solution to the system of equations. Note

that in general, additional regularity conditions must be imposed to ensure the

uniqueness of the solution.

An important shift in the focus of objectives occurs between the work

surrounding SUR, or MDM, models and that proposed here. The SUR literature is

dominated by the issue of efficiency in regression parameter estimation [Zellner, 1962

and 1963; Kmenta and Gilbert, 1968; Binkley, 1982 and 1988]. However, the standard

errors of the parameter estimates obtained from SUR or MDM models are typically too

small [Freedman and Peters, 1984]. It should be emphasized that with respect to

inference about the parameters, accurate estimation of the covariance matrix among

the parameter estimates is required in order to avoid an inflated Type I error rate of,

for example, the Wald based hypothesis tests reported by Lightner and O'Brien [1984].

Note that the estimated covariance matrix among the parameter estimates is a

function of the covariance among repeated measurements, Σ. Hence, accurate error

covariance estimation is sought here.

Two strategies for dealing with the problem of variance estimates which are too

small are 1) to improve the error covariance estimation procedure and apply the usual

inference procedure and 2) to seek improvements to the approximation to the

distribution of a scalar variance estimate used in the computation of test statistics.

The first strategy may be accomplished parametrically or nonparametrically.

Freedman and Peters [1984] provided a nonparametric approach. They suggested


bootstrapping the optimistic variance estimates. They noted, however, that while this

may improve the situation, the bootstrapped estimates may still remain significantly

underestimated. Alternately, for the special case of heteroscedastic linear models, for

which the variances are parametric functions of known regressors, the extent of

underestimation has been evaluated [Carroll and Ruppert, 1985]. Another parametric

approach will be taken here by incorporating the compound-symmetric covariance

structure in the likelihood function.

An example of the second strategy is provided by the work of Box [1954a and

b]. He provided some theorems on quadratic forms applied in the study of analysis of

variance problems. These led to degree of freedom corrections for the within-unit F-test statistic in the univariate approach to the repeated measures model [Geisser and

Greenhouse, 1958; Greenhouse and Geisser, 1959].

1.2.1.3 The Compound Symmetry Assumption

The compound symmetry assumption arises naturally in several data analysis

settings. In the sample survey literature, several authors have assumed compound

symmetry for data obtained through a two-stage sampling procedure [Scott and Holt,

1982; Christensen, 1984; and Wu, et al, 1988]. This is a reasonable assumption in two-stage sampling if elements chosen at the second stage, within clusters, are pairwise

equally correlated by ρ. This tends to occur, in expected value, with random sampling.

In the linear mixed effects model literature [see Hocking, 1985 or Winer, 1971],

compound-symmetric Σ may appropriately be assumed with, for example, a

randomized block design. Arnold [1981] described the relationship between the mixed

model with a randomized block design and the repeated measures model. He reported

that the only real difference is that ρ can be negative for the repeated measures model but it is constrained to be positive in the mixed model. In the mixed model, ρ is


constrained to be positive since by definition it is a ratio of two variances. Unfortunately, negative estimates of ρ from the variance components can occur [Hocking, 1985].

In certain repeated measurements designs the compound symmetry assumption

is plausible. For example, situations in which a response is measured upon each

presentation of several stimuli will usually lead to this covariance structure when the

stimulus presentation order is counterbalanced. This has also been referred to as the

common correlation model [Winer, 1971].

When the compound symmetry assumption is met, the univariate approach to

repeated measures provides a uniformly most powerful unbiased test of the repeated

measures factor [Morrison, 1976]. Huynh and Feldt [1970] reported that somewhat

more relaxed conditions than compound symmetry are sufficient to produce the same

results. The gain in power is largely attributable to the p repeated measurements.

Depending on the extent of their common correlation, and variance, these provide

additional information about the variability of the parameter estimates. This

illustrates a classic trade-off in statistics: a gain in power may be achieved at the cost

of a strong parametric assumption.

Estimation of compound-symmetric Σ reduces to estimation of σ² and ρ. The usual estimator of σ² is the mean squared error from the model. Furthermore, σ² is a scaling factor and, hence, removable in some sense. Thus estimation of the covariance parameters focuses on ρ.

Many biased estimators for the intraclass correlation can be found in the

literature. A maximum likelihood estimator for ρ from a repeated measures design is reported in Arnold [1981]. Fisher [1950] discussed several estimators and included bias corrections to improve them. Looney [1986] evaluated four estimators of ρ and found

them to be similar with respect to mean squared error and bias. He recommended use


of the average sample correlation, i.e., the average of the off-diagonal elements of the

sample correlation matrix, since it is easy to compute. Unbiased estimation of ρ is

possible, although it is computationally intensive except for some special cases [Olkin

and Pratt, 1958]. Furthermore, the unbiased estimator has the undesirable property of

sometimes producing estimates outside the range of possible values.
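The average-sample-correlation estimator that Looney recommends is simple enough to state in a few lines of hypothetical code (names illustrative, not from the original). Data are simulated under a one-way random effects model y_ij = μ + u_i + e_ij, which induces compound symmetry with ρ = σ_u²/(σ_u² + σ_e²).

```python
import random

def pearson(a, b):
    # sample Pearson correlation of two equal-length lists
    n = len(a)
    abar, bbar = sum(a) / n, sum(b) / n
    num = sum((ai - abar) * (bi - bbar) for ai, bi in zip(a, b))
    da = sum((ai - abar) ** 2 for ai in a) ** 0.5
    db = sum((bi - bbar) ** 2 for bi in b) ** 0.5
    return num / (da * db)

def avg_offdiag_correlation(data):
    # rho-hat = average of the p(p-1)/2 off-diagonal elements of the
    # sample correlation matrix of the p repeated measures
    p = len(data[0])
    cols = [[row[j] for row in data] for j in range(p)]
    rs = [pearson(cols[j], cols[k]) for j in range(p) for k in range(j + 1, p)]
    return sum(rs) / len(rs)

# compound symmetry via a unit-level random effect: true rho = 1/(1+1) = 0.5
random.seed(3)
data = []
for _ in range(500):
    u = random.gauss(0.0, 1.0)
    data.append([u + random.gauss(0.0, 1.0) for _ in range(4)])
print(avg_offdiag_correlation(data))
```

Like the estimators reviewed above, this one is biased in small samples, but its simplicity is the point of Looney's recommendation.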

1.2.1.4 Missing Data

Thus far, it has been assumed that for all n independent observational units

exactly p measurements are available. In practice, data may be lost for a variety of

reasons such as loss to followup, equipment failure, human error, or failure in subject

compliance. If the data are missing completely at random (MCAR), it is often possible

to ignore the process by which the missing data arose [Rubin, 1976]. The MCAR

assumption is reasonable when lost measurements are due to processes unrelated to the

experimental conditions and to the process under study. Several statistically valid

approaches to estimation and hypothesis testing are available for the MCAR case.

Little and Rubin [1987] provided a broad taxonomy of missing data methods

into four, not necessarily mutually exclusive, classes. The first class includes

procedures based on completely recorded units. For example, listwise deletion of

observational units possessing missing data produces a new set of data. Subsequently

complete data analysis methods may be applied, if sufficient data remain. However,

much information is lost in the deletion step resulting in a less powerful analysis [see

Barton, 1986]. A second class of methods involves imputation of the missing data

values. A third class of methods is particularly relevant to sample survey data. This

involves using design weights which are inversely proportional to the probability of

selection. These weights are modified to adjust for nonresponse. The fourth class of

missing data methods includes methods which define a model for the partially missing


data and construct a likelihood function under that model.

A model-based maximum likelihood method for the general incomplete data

situation was proposed by Orchard and Woodbury [1972]. The method of these

authors was based on their "Missing Information Principle." The principle states that

the missing values are random variables and that the likelihood for the full sample may

be constructed from the conditional distribution of the complete data given the

observed data. These maximum likelihood solutions are often more easily obtained

than ones based on only the observed data.

The EM algorithm described by Dempster, Laird and Rubin [1977] was a

generalization of the method of Orchard and Woodbury. When applied to missing

data, the E and M steps contained within each iteration of this algorithm are as

follows. In the estimation (E) step the sufficient statistics of the hypothetical complete

data are estimated conditional upon the observed data and the current estimates of the

parameter vector. The maximization (M) step produces maximum likelihood

estimators of the parameters based on the complete sufficient statistics. Working with

the sufficient statistics provides a reduction in computation over the method of

Orchard and Woodbury which requires estimation of the hypothetical complete data at

each iteration.
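As a minimal sketch of the E and M steps just described (hypothetical code, not an implementation from the original: a bivariate normal with some second components missing completely at random; all names are illustrative), the E-step fills in the expected complete-data sufficient statistics and the M-step re-estimates μ and Σ from them.

```python
import random

def em_bivariate(pairs, n_iter=200):
    # pairs: (y1, y2) with y2 possibly None (assumed MCAR).
    # Start from complete-case moments, then alternate E and M steps.
    n = len(pairs)
    cc = [p for p in pairs if p[1] is not None]
    mu = [sum(p[0] for p in pairs) / n, sum(p[1] for p in cc) / len(cc)]
    s11 = sum((p[0] - mu[0]) ** 2 for p in pairs) / n
    s22 = sum((p[1] - mu[1]) ** 2 for p in cc) / len(cc)
    s12 = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in cc) / len(cc)
    for _ in range(n_iter):
        beta = s12 / s11              # slope of the regression E[y2 | y1]
        cvar = s22 - s12 ** 2 / s11   # conditional variance of y2 given y1
        t1a = t1b = t11 = t22 = t12 = 0.0
        for y1, y2 in pairs:
            if y2 is None:            # E-step: expected sufficient statistics
                e2 = mu[1] + beta * (y1 - mu[0])
                e22 = e2 ** 2 + cvar
            else:
                e2, e22 = y2, y2 ** 2
            t1a += y1; t1b += e2
            t11 += y1 ** 2; t22 += e22; t12 += y1 * e2
        mu = [t1a / n, t1b / n]       # M-step: ML estimates from the statistics
        s11 = t11 / n - mu[0] ** 2
        s22 = t22 / n - mu[1] ** 2
        s12 = t12 / n - mu[0] * mu[1]
    return mu, (s11, s12, s22)

# simulate: Sigma = [[1, .8], [.8, 1]], then delete 40% of the y2 values
random.seed(4)
pairs = []
for _ in range(1000):
    y1 = random.gauss(0.0, 1.0)
    y2 = 0.8 * y1 + random.gauss(0.0, 0.36 ** 0.5)
    pairs.append((y1, y2 if random.random() > 0.4 else None))
print(em_bivariate(pairs))
```

Note that only the expected second moments, not imputed data values, are carried between iterations; this is the computational advantage over the Orchard and Woodbury formulation noted above.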

Two competitors of the EM algorithm for producing ML estimates of the

regression and covariance parameters of a linear model are the Newton-Raphson and

Fisher scoring algorithms. All three algorithms were described by Jennrich and Schluchter [1986] in reference to incomplete repeated measures linear models with

structured covariance matrices. The methods were compared with respect to

computational efficiency. Means for modifying the algorithms to guarantee

convergence and general recommendations concerning which algorithm to use for

certain applications were suggested. The authors noted that for complete data the EM


and scoring algorithms were equivalent.

1.2.2 The Nonlinear Model with Spherical Covariance Matrix

The nonlinear fixed effects model with spherical covariance structure may be

written as in (1.4) with p = 1. Suppressing the index j in (1.4) produces the following

model equation for the i-th unit, i ∈ {1, 2, …, n},

y_i = f(x_i, θ) + e_i , (1.13)

with obvious notation. The (n x 1) vector form of the model is

y = f(θ) + e , (1.14)

in which

y = ( y₁, y₂, …, y_n )'

f(θ) = ( f(x₁, θ), f(x₂, θ), …, f(x_n, θ) )' .

It will be assumed that

e ~ N_n( 0, σ²I_n ) ,

with 0 < σ² < ∞ and θ ∈ Θ, in which Θ is an open convex subset of q-dimensional Euclidean space, ℝ^q.

1.2.2.1 Least Squares Computational Methods

Least squares estimation of the parameters from a nonlinear model requires minimization, with respect to θ, of the following objective function:

Q = ( y − f(θ) )' ( y − f(θ) ) . (1.15)

An iterative procedure is necessary to produce a solution to (1.15).

Many algorithms exist to provide a numerical solution to (1.15). Two of the

most widely used include the modified Gauss-Newton method [Hartley, 1961; Hartley

and Booker, 1965] and the Marquardt method [Marquardt, 1963]. A survey of


algorithms may be found in Chapter 10 of Kennedy and Gentle [1980]. In general, for a solution to be found it is assumed that the first and second derivatives of f(x_i, θ) with respect to θ_r, r ∈ {1, 2, …, q}, are continuous functions of the elements of θ for all x_i. An algorithm is often chosen for a particular application based on its properties of

convergence. However, Kennedy and Gentle [1980] suggested that mathematical proof

of convergence of an algorithm is not a sufficient reason to choose it. These proofs

often assume conditions which are difficult to verify in practice and numerical

inaccuracies in a particular application may still result in non-convergence.

An overview of the modified Gauss-Newton method is presented here because it

is the chosen algorithm for the simulation studies conducted in this research. Jennrich

[1969] identified a set of sufficient conditions to prove that the Gauss-Newton

algorithm is asymptotically numerically stable. This means that the algorithm, given

sufficiently large sample size, will converge to the same fixed point whenever the initial

value is in a neighborhood of that point. Although this is an attractive property of the

Gauss-Newton algorithm, it is indicative of the reliance of this procedure on obtaining

good starting values. Starting values may be chosen from some combination of prior

knowledge of the situation and grid search. The numerous examples in Gallant [1987]

and Bates and Watts [1988] are instructive in this respect.

The Gauss-Newton method uses a first order Taylor series approximation to replace f(θ) in (1.15). Define

F(θ) = { ∂f(x_i, θ)/∂θ_r } = (∂/∂θ') f(θ) (1.16)

to be the (n x q) Jacobian of f(θ). The Taylor series expansion of f(θ) about some initial point θ⁽⁰⁾ may be written as

f(θ) = f(θ⁽⁰⁾) + F(θ⁽⁰⁾)(θ − θ⁽⁰⁾) + R(·) , (1.17)

in which F(θ⁽⁰⁾) is the Jacobian of f(θ) evaluated at θ = θ⁽⁰⁾ and R(·) is the remainder term which is composed of quadratic and higher order terms in the series.


Replacing f(θ) by its first order Taylor series approximation about θ⁽⁰⁾ in (1.15) produces the objective function, Q⁽¹⁾, at the first Gauss-Newton iteration,

Q⁽¹⁾ = Q(θ⁽¹⁾)
= [ y − f(θ⁽⁰⁾) − F(θ⁽⁰⁾)(θ⁽¹⁾ − θ⁽⁰⁾) ]' [ y − f(θ⁽⁰⁾) − F(θ⁽⁰⁾)(θ⁽¹⁾ − θ⁽⁰⁾) ] . (1.18)

Let θ⁽¹⁾ be the single-step least squares estimator which minimizes (1.18) for an initial value, θ⁽⁰⁾. Define

δ⁽⁰⁾ = [ F'(θ⁽⁰⁾)F(θ⁽⁰⁾) ]⁻¹ F'(θ⁽⁰⁾) [ y − f(θ⁽⁰⁾) ] (1.19)

to be the Gauss-Newton step away from θ⁽⁰⁾. Note that invertibility of [F'(θ⁽⁰⁾)F(θ⁽⁰⁾)] requires that F(θ⁽⁰⁾) be of full rank, just like the design matrix, X, in the GLMM. The θ⁽¹⁾ that minimizes (1.18) is

θ⁽¹⁾ = θ⁽⁰⁾ + δ⁽⁰⁾ . (1.20)

In order to minimize Q in (1.15) it is necessary to iterate the process defined in (1.18)-(1.20). More generally, let k index the steps of the iterative process and define θ⁽ᵏ⁾ to be the k-th step least squares estimator which minimizes the k-th step objective function Q⁽ᵏ⁾ = Q(θ⁽ᵏ⁾). Then, the Gauss-Newton step away from θ⁽ᵏ⁻¹⁾ takes the form

δ⁽ᵏ⁻¹⁾ = [ F'(θ⁽ᵏ⁻¹⁾)F(θ⁽ᵏ⁻¹⁾) ]⁻¹ F'(θ⁽ᵏ⁻¹⁾) [ y − f(θ⁽ᵏ⁻¹⁾) ] (1.21)

and,

θ⁽ᵏ⁾ = θ⁽ᵏ⁻¹⁾ + δ⁽ᵏ⁻¹⁾ . (1.22)

Since it is not guaranteed for the k-th iteration that Q(θ⁽ᵏ⁾) ≤ Q(θ⁽ᵏ⁻¹⁾), Hartley [1961] introduced a modification to the Gauss-Newton algorithm. He showed that there exists a step length 0 ≤ λ⁽ᵏ⁾ ≤ 1 for each k-th step such that:

θ⁽ᵏ⁾ = θ⁽ᵏ⁻¹⁾ + λ⁽ᵏ⁾δ⁽ᵏ⁻¹⁾ with Q(θ⁽ᵏ⁾) ≤ Q(θ⁽ᵏ⁻¹⁾) . (1.23)

Refer to Chapter 10 of Kennedy and Gentle [1980] for a review of some of the various

methods for obtaining an appropriate step length. Iterations are continued until

convergence of the algorithm has been achieved according to some stopping rule. See


Gill, Murray and Wright [1981] for a discussion of stopping criteria. For simplicity, let θ̂ denote the estimator obtained in the last iteration. More generally, let θ̂ denote the least squares estimator of θ from some, not necessarily Gauss-Newton, algorithm.
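The steps (1.16)-(1.23) can be collected into a short sketch (hypothetical code, not the implementation used in this research; all names are illustrative) for the two-parameter model f(x, θ) = θ₁(1 − exp(−θ₂x)); the inner while loop is the Hartley step-length modification.

```python
import math, random

def f(x, th):
    return th[0] * (1.0 - math.exp(-th[1] * x))

def jac_row(x, th):
    # analytic partial derivatives of f with respect to theta_1 and theta_2
    return [1.0 - math.exp(-th[1] * x), th[0] * x * math.exp(-th[1] * x)]

def sse(xs, ys, th):
    return sum((y - f(x, th)) ** 2 for x, y in zip(xs, ys))

def gauss_newton(xs, ys, th, n_iter=50):
    for _ in range(n_iter):
        r = [y - f(x, th) for x, y in zip(xs, ys)]
        F = [jac_row(x, th) for x in xs]
        # solve the 2x2 normal equations F'F delta = F'r (the step of eq. 1.21)
        a = sum(row[0] * row[0] for row in F)
        b = sum(row[0] * row[1] for row in F)
        d = sum(row[1] * row[1] for row in F)
        g0 = sum(row[0] * ri for row, ri in zip(F, r))
        g1 = sum(row[1] * ri for row, ri in zip(F, r))
        det = a * d - b * b
        delta = [(d * g0 - b * g1) / det, (a * g1 - b * g0) / det]
        # Hartley modification: halve the step length until SSE does not increase
        lam = 1.0
        while (sse(xs, ys, [th[0] + lam * delta[0], th[1] + lam * delta[1]])
               > sse(xs, ys, th) and lam > 1e-10):
            lam /= 2.0
        th = [th[0] + lam * delta[0], th[1] + lam * delta[1]]
    return th

random.seed(5)
xs = [0.5 * i for i in range(1, 21)]                  # x = 0.5, 1.0, ..., 10
ys = [f(x, [10.0, 0.5]) + random.gauss(0.0, 0.2) for x in xs]
print(gauss_newton(xs, ys, [5.0, 1.0]))               # start away from the truth
```

A grid search over a few starting values, as recommended above, would precede this loop in practice.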

1.2.2.1.1 Estimation

Comprehensive sources describing the statistical properties of the nonlinear least squares estimator of θ, with spherical covariance, include Ratkowsky [1980], Gallant [1987] and Bates and Watts [1988]. Jennrich [1969] was first to report the regularity conditions for consistency and asymptotic normality of the least squares estimator of θ. Malinvaud [1970] provided an alternate proof of the consistency of the

nonlinear least squares estimator. Gallant [1987, chapters 3 and 4] extended these

results to obtain the asymptotic behavior of estimation and inference procedures under

specification error. Gallant used specification error to refer to a situation in which an

analysis is based on a particular nonlinear model when, in fact, some other model had

generated the data. Furthermore, his asymptotic theory of nonlinear least squares

estimation is valid without assuming a particular parametric form for the distribution

function of the errors. As with linear models, and under the regularity conditions

reported in Gallant [1987], when the errors are normally distributed as in the model of

(1.13), the least squares estimator of θ is also the maximum likelihood estimator.

Further clarification of this point may be found in §2.2 and §2.3.

Throughout the remainder of this section and §2.3.1, the notation of Gallant

[1987] will be followed closely since it serves to illustrate the similarities between linear

and nonlinear least squares estimators. In particular, the nonlinear least squares

estimators can be characterized as linear and quadratic forms in e which are analogous

to those that appear in linear regression to within an error of approximation. To

clarify this analogy it is helpful to see that F(θ) in nonlinear regression plays a role


similar to that of X in the GLMM since these may be recognized as the Jacobians for their respective models. In what follows, F(θ) will be assumed to have full column rank.

Define

s² = SSE(θ̂)/(n − q) = [ y − f(θ̂) ]'[ y − f(θ̂) ] / (n − q) (1.24)

to be an estimator of σ² corresponding to the least squares estimator of θ. Additionally, let Z_n = o_p(a_n) denote a matrix valued random variable such that for each element z_n,kl of Z_n, with {a_n} denoting some sequence of real numbers, z_n,kl/a_n converges in probability to zero as n → ∞. Letting F = F(θ), Gallant [1987] reported that

θ̂ = θ + (F'F)⁻¹F'e + o_p(n^(−1/2)) . (1.25)

Furthermore,

s² = e'[ I_n − F(F'F)⁻¹F' ]e / (n − q) + o_p(1/n) (1.26)

and

(n − q)s²/σ² →ᴸ χ²(n − q) , (1.27)

in which →ᴸ denotes convergence in law, and finally that θ̂ is independent of s². In applications it is necessary to approximate σ² by s² and (F'F)⁻¹ by (F̂'F̂)⁻¹, in which F̂ = F(θ̂). Thus, an approximate estimate of the variance of θ̂ is provided by

V̂(θ̂) = s²(F̂'F̂)⁻¹ . (1.28)
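A small Monte Carlo sketch (hypothetical code, not from the original; names illustrative) can check these approximations numerically for the one-parameter model f(x, θ) = exp(θx), for which F is an (n x 1) column and the variance estimate of (1.28) reduces to the scalar s²/(F̂'F̂): its average over replicates should track the empirical variance of θ̂.

```python
import math, random

def fit_theta(xs, ys, lo=0.0, hi=1.0, n_iter=200):
    # one-parameter least squares for f(x, theta) = exp(theta * x),
    # minimized by ternary search on a bracketing interval
    def q(th):
        return sum((y - math.exp(th * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(n_iter):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if q(m1) < q(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

random.seed(6)
theta, sigma = 0.5, 0.1
xs = [2.0 * i / 30 for i in range(1, 31)]
est, vhat = [], []
for _ in range(300):
    ys = [math.exp(theta * x) + random.gauss(0.0, sigma) for x in xs]
    th = fit_theta(xs, ys)
    s2 = sum((y - math.exp(th * x)) ** 2 for x, y in zip(xs, ys)) / (len(xs) - 1)
    ftf = sum((x * math.exp(th * x)) ** 2 for x in xs)   # F'F, a scalar here
    est.append(th)
    vhat.append(s2 / ftf)                                # variance estimate

mean = sum(est) / len(est)
emp_var = sum((e - mean) ** 2 for e in est) / (len(est) - 1)
print(emp_var, sum(vhat) / len(vhat))
```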

In contrast to the GLMM, the regression parameter estimates from a nonlinear

model are typically biased in small samples. The concept of curvature is instructive in

understanding the dual origins of bias in nonlinear parameter estimation. The term

curvature is used to refer to a property of a given expected value function and data set

combination. Bates and Watts [1980] proposed two measures of curvature and related


them to the bias expressions produced by M.J. Box [1971]. Consider the plot of the

solution locus for a model/data set combination in n-dimensional sample space. The curvature of the solution locus in the neighborhood of θ̂ is a measure of the intrinsic

nonlinearity. The unequal spacing and lack of parallelism of parameter lines projected

onto the tangent plane to the solution locus is a measure of parameter effects

nonlinearity. The latter measure may change upon reparameterization of a model

while intrinsic nonlinearity will not. This has motivated some authors to strongly

advocate reparameterization of the model function in order to control the parameter

effects nonlinearity to some degree [Ratkowsky, 1983]. A survey of 24 published data

sets (and their chosen models) collected by Bates and Watts [1980] revealed that

intrinsic nonlinearity is, in general, a minor component of total nonlinearity while

parameter effects curvature is sometimes substantial. When intrinsic nonlinearity is

judged to be problematic for a particular application, Hamilton, Watts and Bates

[1982] suggested using a quadratic approximation to f(θ) instead of a linear

approximation. The curvature measures also hold implications for choosing an

inference procedure since these procedures are differentially sensitive to the two kinds

of nonlinearity.

1.2.2.1.2 Inference

Gallant [1987] provided a comprehensive presentation of a unified asymptotic

theory for the nonlinear regression model, including inference, which borrows from

classical maximum likelihood theory. In the nonlinear model, a least squares objective

function is treated as the analog of the log-likelihood in the classical theory. In this

way, analogs of Wald (W) and likelihood ratio (L) statistics are derived.

The following notation is necessary for definition of the test statistics. Consider a general statement of an hypothesis:

H₀: h(θ) = 0 vs. H₁: h(θ) ≠ 0 (1.29)

in which h(θ) is a possibly nonlinear (s x 1) vector valued differentiable function of θ.

Let H(θ̂) denote the (s x q) matrix of first order partial derivatives of h(θ), with respect to θ, evaluated at θ = θ̂. Let θ̃ denote the least squares estimator of θ subject to the constraints imposed by the null hypothesis. Also, let SSE(θ̂) and SSE(θ̃) denote the sums of squares of the residuals from the full and constrained models respectively. The test statistics are

W = { h'(θ̂) [ H(θ̂) [F'(θ̂)F(θ̂)]⁻¹ H'(θ̂) ]⁻¹ h(θ̂) / s } / { SSE(θ̂) / (n−q) } (1.30)

L = { [ SSE(θ̃) − SSE(θ̂) ] / s } / { SSE(θ̂) / (n−q) } . (1.31)

Asymptotically, the W and L statistics each have an F-distribution with s and (n−q) degrees of freedom, under H₀. Thus W and L reject H₀ when they exceed F(s, n−q; 1−α).
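For a scalar hypothesis the two statistics reduce to simple expressions, and their small-sample behavior can be probed by simulation. The hypothetical sketch below (names illustrative, not from the original) tests H₀: θ = θ₀ in the one-parameter model f(x, θ) = exp(θx), where s = 1, h(θ) = θ − θ₀, H(θ) = 1, and the constrained "fit" is simply θ₀; under H₀ both statistics should reject at roughly the nominal rate.

```python
import math, random

def fit_theta(xs, ys, lo=0.0, hi=1.0, n_iter=200):
    # one-parameter least squares for f(x, theta) = exp(theta * x)
    def q(th):
        return sum((y - math.exp(th * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(n_iter):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if q(m1) < q(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def sse(xs, ys, th):
    return sum((y - math.exp(th * x)) ** 2 for x, y in zip(xs, ys))

random.seed(7)
theta0, sigma, reps = 0.5, 0.1, 200
n_rej_w = n_rej_l = 0
xs = [2.0 * i / 30 for i in range(1, 31)]
crit = 4.18   # approximately F(1, 29; 0.95)
for _ in range(reps):
    ys = [math.exp(theta0 * x) + random.gauss(0.0, sigma) for x in xs]
    th = fit_theta(xs, ys)
    s2 = sse(xs, ys, th) / (len(xs) - 1)
    ftf = sum((x * math.exp(th * x)) ** 2 for x in xs)
    W = (th - theta0) ** 2 / (s2 / ftf)                 # Wald form, s = 1
    L = (sse(xs, ys, theta0) - sse(xs, ys, th)) / s2    # likelihood ratio form
    n_rej_w += W > crit
    n_rej_l += L > crit
print(n_rej_w / reps, n_rej_l / reps)
```

For a model this mildly nonlinear the two empirical Type I error rates are close; the divergence Gallant reported appears as curvature grows.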

The approximate nature of these statistics means that their small sample

performance varies with respect to one another as well as with respect to different

applications. The Wald statistic performed poorly in large simulations conducted by

Gallant [1975d, 1987]. For a sample size of 12, Gallant [1987, p. 84] used two

nonlinear models with different amounts of nonlinearity. The Wald statistic produced

a Type I error rate of 0.0525 for the nearly linear model and 0.1345 for the highly

nonlinear model with target α = 0.05. The unreliable performance of this statistic is

attributed to its sensitivity to parameter effects curvature [see Donaldson and

Schnabel, 1987 or Ratkowsky, 1983]. However, a counterexample of this phenomenon

also exists [Cook and Witmer, 1985]. In contrast, the L statistic achieved target α for

both models in this same simulation study.


It should be noted that the L statistic is invariant to reparameterizations of the

model. This demonstrates that this statistic is not sensitive to parameter effects

data from which it is computed, which is often of minor importance. Donaldson and

Schnabel [1987] used a model/data set combination reported by Cook, Tsai and Wei

[1986] to provide an example with large intrinsic nonlinearity since none of the 20 data

sets in their study were observed to have appreciable intrinsic nonlinearity.

Furthermore, only two of 24 data sets examined by Bates and Watts [1980] possessed

significant intrinsic nonlinearity.

Inversion of either of the test statistics in (1.30)-(1.31) may be used to

construct confidence intervals and confidence regions for the model parameters.

Donaldson and Schnabel [1987] provided a comprehensive Monte Carlo study

comparing confidence procedures based on Wald and likelihood ratio test statistics.

Their results are consistent with the findings of Gallant [1975d, 1987] which concern

the test statistics. Specifically, Wald based confidence intervals and regions were too

small while likelihood based confidence procedures provided more accurate coverage

across a range of parameter effects and intrinsic curvatures. Furthermore, the authors

found the diagnostic measures of Bates and Watts [1980] to be successful at predicting

when the Wald type confidence regions will be poor.

Description of the confidence procedures used by Donaldson and Schnabel requires some additional notation. Let V̂ denote an estimate of the variance of θ̂. Let V̂ = s²[F̂'F̂]⁻¹ as in (1.28), although there exist alternate choices based on the estimated Hessian matrix of SSE(θ), i.e., the matrix of second partial derivatives with respect to θ evaluated at θ = θ̂. Let θ = {θ_r}, r ∈ {1, 2, …, q}, so that V̂_rr denotes

the rr-th element of V̂. The (1−α) Wald based confidence region for θ includes all values of θ* such that

( θ* − θ̂ )' V̂⁻¹ ( θ* − θ̂ ) ≤ q F(q, n−q; 1−α) (1.32)

and the (1−α) Wald based confidence interval includes all values of θ*_r such that

| θ*_r − θ̂_r | ≤ V̂_rr^(1/2) t(n−q; 1−α/2) . (1.33)

The (1−α) likelihood ratio based confidence region for θ includes all values of θ* such that

[ SSE(θ*) − SSE(θ̂) ] / (s²q) ≤ F(q, n−q; 1−α) (1.34)

and the corresponding confidence interval is bounded by the points that maximize (θ*_r − θ̂_r)² subject to

[ SSE(θ*) − SSE(θ̂) ] / s² ≤ F(1, n−q; 1−α) . (1.35)
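The two interval constructions can be compared directly in the scalar case (a hypothetical sketch, not from the original; names illustrative). For the one-parameter model f(x, θ) = exp(θx), the Wald interval is θ̂ ± t·V̂^(1/2) and the likelihood interval collects all θ* whose SSE excess over SSE(θ̂) stays below s² times an F critical value; the endpoints are located by scanning outward from θ̂.

```python
import math, random

def sse(xs, ys, th):
    return sum((y - math.exp(th * x)) ** 2 for x, y in zip(xs, ys))

def fit_theta(xs, ys, lo=0.0, hi=1.0, n_iter=200):
    # one-parameter least squares by ternary search on a bracketing interval
    for _ in range(n_iter):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if sse(xs, ys, m1) < sse(xs, ys, m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

random.seed(8)
xs = [2.0 * i / 30 for i in range(1, 31)]
ys = [math.exp(0.5 * x) + random.gauss(0.0, 0.1) for x in xs]
th = fit_theta(xs, ys)
n = len(xs)
s2 = sse(xs, ys, th) / (n - 1)
vhat = s2 / sum((x * math.exp(th * x)) ** 2 for x in xs)  # scalar s^2 (F'F)^{-1}
tcrit, fcrit = 2.045, 4.18         # approximately t(29; .975) and F(1, 29; .95)
wald = (th - tcrit * vhat ** 0.5, th + tcrit * vhat ** 0.5)

def lik_bound(direction, step=1e-5):
    # scan outward until SSE(theta*) - SSE(theta-hat) exceeds s2 * F
    t = th
    while sse(xs, ys, t + direction * step) - sse(xs, ys, th) <= s2 * fcrit:
        t += direction * step
    return t

lik = (lik_bound(-1.0), lik_bound(+1.0))
print("Wald:", wald, "likelihood:", lik)
```

For a near-linear model the two intervals nearly coincide; the gap between them widens, in the direction of too-short Wald intervals, as parameter effects curvature grows.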

Selection of a procedure for estimation of a confidence region may be based on

three properties of these methods. First, consideration is often given to the amount of

computation required. Second, consideration should be given to the structural

characteristics of the estimated regions. Third, and perhaps most importantly, the

accuracy of the method should be taken into account upon choosing a method. The

Wald based regions are guaranteed to be ellipsoidal in two dimensional cross-section

and they are very simple to compute. However, these regions are often too small. In

contrast, likelihood based confidence regions are more computationally intensive and

potentially have undesirable structural characteristics such as being unbounded or

disjoint. The structural properties occur because likelihood based regions are

guaranteed to contain every minimum, maximum and/or saddle point of the likelihood

surface [Gallant, 1987]. However, Gallant reported that in practice, likelihood based

methods only infrequently produce structurally undesirable regions. Most importantly,

simulation results indicate more accurate coverage than Wald based regions [Gallant,

1987; Donaldson and Schnabel, 1987]. Bates and Watts [1988] provided a thorough

treatment of the computation and presentation of confidence intervals and regions for the nonlinear model with numerous examples. At the present time, the Wald method remains popular because of its ease of use and desirable structural characteristics, despite producing confidence intervals and regions that are too small.

A potential remedy for the dilemma posed by Wald based methods, at least for confidence intervals, may be found by bootstrapping the bounds of an interval. Unfortunately, Freedman and Peters [1984] reported that bootstrapping provided only a partial solution to the general problem of variance estimates that are too small.

1.2.3 The Nonlinear Model with Non-Spherical Covariance Matrix

This section serves two purposes. First, an extension of nonlinear least squares

theory from spherical to non-spherical covariance structures is provided. Much of the

development of this theory can be credited to Gallant [1975d, 1987] although one

should also recognize the work of Box and Tiao [1973] and Allen [1967]. Gallant

generalized the work of Zellner [1962, 1963] on SUR to nonlinear models. Nonlinear

least squares estimation in the non-spherical covariance model closely parallels that for the spherical covariance model with the added complication imposed by the need to

estimate the covariance matrix, just as in SUR. Emphasis is given to this method,

which will be called seemingly unrelated nonlinear regression (SUNR), since it forms

the basis for the proposed research methods. A second objective of this section is to

provide a brief survey of other methods relevant to the nonlinear model with general

covariance structure as well as those methods relevant to special covariance structures,

in particular those which are compound symmetric.

Define the following vectors in relation to the nonlinear fixed effects model y = f(θ) + e, with non-spherical covariance structure:

y = {y_j} = (y_1', y_2', ..., y_p')' ,  (1.36)

e = {e_j} = (e_1', e_2', ..., e_p')' ,

with j ∈ {1, 2, ..., p} and i ∈ {1, 2, ..., n},

y_j = (y_1j, y_2j, ..., y_nj)'

and

e_j = (e_1j, e_2j, ..., e_nj)' .

It will be assumed that e ~ N_np(0, Σ ⊗ I_n), with Σ a positive definite symmetric covariance matrix and θ ∈ Θ, in which Θ is an open convex subset of q-dimensional Euclidean space, ℝ^q.

1.2.3.1 Approximate Weighted Least Squares Computational Methods

The nonlinear least squares method for a model with non-spherical covariance

structure closely parallels that for a model with spherical covariance structure.

Additionally, however, it is desirable to take into account the information contained in

the covariance structure. Alternately, one could view this estimation problem as being

parallel to that in SUR with the added complication that the response function is

nonlinear.

Either AWLS or ITAWLS procedures may be used for the general method of SUNR. The AWLS estimate of θ is that θ which minimizes the following objective function:

SSE(θ, Σ̂^(0)) = [y − f(θ)]' ((Σ̂^(0))⁻¹ ⊗ I_n) [y − f(θ)] .  (1.37)

Clearly, an estimate of Σ, Σ̂^(0), is necessary to solve (1.37). In SUNR it is customary to compute Σ̂^(0) from the OLS residuals obtained in an initial step. This initial step as well as the remaining steps for computing AWLS and ITAWLS estimates are outlined as follows.

First, OLS nonlinear methods such as those described in §2.2.1 are used to


estimate the elements of θ from each of the p response equations. The residuals from these model equations, ê_j^(0), may be used to compute an initial estimate of Σ,

Σ̂^(0) = { ê_j^(0)' ê_l^(0) / n } .  (1.38)

Second, Σ̂^(0) is substituted into (1.37) and an initial AWLS estimate of θ is obtained, θ̂^(1). Note that the non-iterated, one-step estimator, θ̂^(1), is the estimator reported by Gallant [1975d] in his paper which introduced SUNR. However, the estimation procedure may be iterated.

When the procedure is to be iterated, θ̂^(1) may be used to compute a new set of residuals and hence a new estimate of Σ, Σ̂^(1). More generally, define the k-th step ITAWLS estimator to be the θ̂^(k) which minimizes:

SSE(θ, Σ̂^(k−1)) = [y − f(θ)]' ((Σ̂^(k−1))⁻¹ ⊗ I_n) [y − f(θ)] .  (1.39)

Similarly, define

Σ̂^(k) = { ê_j^(k)' ê_l^(k) / n } ,  (1.40)

in which the ê_j^(k) are the residuals obtained from the j-th response function at the k-th iteration.

The process is initiated with a vector of starting values, θ̂^(0). Iteration of (1.39)-(1.40) generates a sequence of estimates

θ̂^(0) → Σ̂^(0) → θ̂^(1) → Σ̂^(1) → θ̂^(2) → ... → Σ̂^(k−1) → θ̂^(k) → ...

in which θ̂^(1) is the AWLS estimate and θ̂^(k) is a finite step estimate of θ. Thus, θ̂^(1) is just one of many possible finite step estimates. One may iterate until the sequence of estimates converges. Then the ITAWLS estimates are defined to be

θ̂^(∞) = lim_{k→∞} θ̂^(k) .  (1.41)

These will correspond to a local maximum of a likelihood surface under the regularity conditions described by Barnett [1976], Charnes, et al. [1976], Jorgensen [1983] and Gallant [1987]. Furthermore, each of these authors proved the equivalence of the


ITAWLS and ML estimators for this nonlinear model given that the errors are normally distributed. Discussion of this equivalence may also be found in Box and Tiao [1973] and Bates and Watts [1988].
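The alternating scheme of (1.39)-(1.40) can be sketched numerically. Everything below is hypothetical and illustrative: the two response functions, the starting values, the error covariance, and the use of a plain Gauss-Newton routine for each weighted fit (not any particular published algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.2, 2.0, 10)                    # shared predictor, n = 10 units
true = np.array([0.5, 1.2])                      # hypothetical true parameters
e = rng.multivariate_normal([0.0, 0.0], [[0.04, 0.02], [0.02, 0.09]], size=x.size)
y1 = np.exp(true[0] * x) + e[:, 0]               # hypothetical response equation 1
y2 = true[0] + true[1] * x + e[:, 1]             # hypothetical response equation 2

def resid(theta):
    """Stacked residuals y - f(theta) for the p = 2 response equations."""
    t1, t2 = theta
    return np.concatenate([y1 - np.exp(t1 * x), y2 - (t1 + t2 * x)])

def jac(theta):
    """Jacobian of f(theta), stacked to match resid."""
    t1, _ = theta
    d1 = np.column_stack([x * np.exp(t1 * x), np.zeros_like(x)])
    d2 = np.column_stack([np.ones_like(x), x])
    return np.vstack([d1, d2])

def wls_fit(theta, sigma_inv, iters=25):
    """Minimize the objective (1.37) by Gauss-Newton, weights Sigma^{-1} (x) I_n."""
    W = np.kron(sigma_inv, np.eye(x.size))
    for _ in range(iters):
        r, J = resid(theta), jac(theta)
        theta = theta + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return theta

theta = wls_fit(np.array([0.3, 1.0]), np.eye(2))  # initial OLS step
for k in range(5):                                # ITAWLS iterations (1.39)-(1.40)
    E = resid(theta).reshape(2, -1)               # residuals by response equation
    sigma_hat = E @ E.T / x.size                  # Sigma_hat^(k), as in (1.38)/(1.40)
    theta = wls_fit(theta, np.linalg.inv(sigma_hat))
```

Stopping after a fixed number of sweeps yields a finite step estimator; iterating until the θ̂ sequence stabilizes corresponds to the ITAWLS limit of (1.41).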

Gallant [1987] suggested that some algorithm other than ITAWLS may provide a more computationally tractable means of computing the ML estimates of θ and Σ. In particular, he suggested that the natural log of the determinant of the

covariance matrix be used as the objective function. In fact, this was one approach

taken by Allen [1967] in methods proposed for nonlinear growth curve analysis.

1.2.3.1.1 Estimation

The finite step estimates Σ̂^(k) and θ̂^(k) are consistent estimates of Σ and θ respectively [Barnett, 1976; Gallant, 1987]. Specifically,

Σ̂^(k) = Σ + o_p(1)  (1.42)

and

θ̂^(k) = θ + o_p(1) ,  (1.43)

under weak regularity conditions.

In addition, Barnett [1976] proved the following results for the maximum likelihood estimators. Let t = p(p+1)/2 denote the number of unique elements in Σ. Then, define γ to be a [(q+t) × 1] vector containing the elements of θ followed by the t unique elements of Σ. Similarly, define γ̂^(∞) to be the corresponding vector containing the ML estimates of the elements of θ and Σ. Then γ̂^(∞) is consistent for γ,

γ̂^(∞) = γ + o_p(1) .  (1.44)

Additionally, define L_n(y|γ) to be the likelihood function. A further assumption is that L_n(y|γ) is three times differentiable in γ. Define the information matrix to be

I_n(γ) = [ G_n(γ)  0 ; 0  H_n(γ) ] ,  (1.45)

in which G_n(γ) is a (q × q) matrix containing the part of the information matrix pertaining to the elements in θ and H_n(γ) is a (t × t) matrix pertaining to the unique elements in Σ. Finally, also define the Hessian matrix for γ to be

B_n(γ) = −( ∂² ln L_n(y|γ) / ∂γ ∂γ' ) ,  (1.46)

which is assumed to be nonsingular for all γ with probability one as n → ∞. From Theorem one of Barnett [1976], given γ̂^(∞) is consistent by (1.44), then

√n (γ̂^(∞) − γ) →_L N_{q+t}( 0, I_n(γ)⁻¹ ) .  (1.47)

An ML estimate of γ allows a consistent estimate of the information matrix to be found. In Theorem two of Barnett [1976], for γ̂^(∞) an ML estimate of γ,

I_n(γ̂^(∞)) = I_n(γ) + o_p(1)  (1.48)

as n → ∞. In a corollary it was proven that

√n (θ̂^(∞) − θ) →_L N_q( 0, G_n(γ)⁻¹ )  (1.49)

as n → ∞. The asymptotic theory for the ML estimates may also be found in Gallant [1987], with additional generality, and hence loss of clarity, gained by allowing model misspecification.

1.2.3.1.2 Inference

Again consider hypotheses as in (1.29). It will be assumed that Σ̂ and θ̂ are any consistent estimators of Σ and θ. Thus, finite step estimators of Σ and θ, such as the AWLS estimates, are sufficient to allow development of the following theory, found in Gallant [1987]. As for univariate models, Gallant again reported analogs of Wald and likelihood ratio test statistics. In general define

SSE(θ̂, Σ̂) = [ y − f(θ̂) ]' ( Σ̂⁻¹ ⊗ I_n ) [ y − f(θ̂) ]  (1.50)

to be the error sum of squares from the unrestricted model. In contrast, define SSE(θ̃, Σ̂) to be the error sum of squares from the restricted model, i.e. under H₀ as in


(1.29). Note that the restricted error sum of squares is computed using the estimate of Σ obtained from the unrestricted model. Let

V̂(θ̂) = [ F̂(θ̂)' ( Σ̂⁻¹ ⊗ I_n ) F̂(θ̂) ]⁻¹ ,  (1.51)

in which F̂(θ̂) is the Jacobian of f(θ) evaluated at θ̂. The following Wald and likelihood ratio type statistics may be defined:

W = [ h(θ̂)' { Ĥ(θ̂) V̂(θ̂) Ĥ(θ̂)' }⁻¹ h(θ̂) / s ] / [ SSE(θ̂, Σ̂) / (np − q) ]  (1.52)

and

L = [ { SSE(θ̃, Σ̂) − SSE(θ̂, Σ̂) } / s ] / [ SSE(θ̂, Σ̂) / (np − q) ] .  (1.53)

The W and L statistics reject H₀ when they exceed F_α = F⁻¹[1 − α; s, np − q].
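Given the two error sums of squares, the L statistic of (1.53) reduces to a few lines of arithmetic; the numeric values below are hypothetical, chosen only to exercise the formula.

```python
def lr_type_statistic(sse_restricted, sse_unrestricted, s, n, p, q):
    """L of (1.53): excess restricted error per restriction, scaled by the
    unrestricted error mean square on np - q degrees of freedom."""
    return ((sse_restricted - sse_unrestricted) / s) / (sse_unrestricted / (n * p - q))

# hypothetical inputs: s = 2 restrictions, n = 5 units, p = 2 responses, q = 3 parameters
L = lr_type_statistic(sse_restricted=12.0, sse_unrestricted=10.0, s=2, n=5, p=2, q=3)
```

The statistic grows as the restricted fit deteriorates relative to the unrestricted fit, and is referred to F[s, np − q] for the test.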

Simulation studies reported in Gallant [1987, p. 350] allow comparison of the performance of these statistics, as functions of the one-step estimates of θ and Σ, with respect to Type I error rate. Using a bivariate function and n = 46, the Type I error rates for the Wald and likelihood ratio statistics were 0.084 and 0.067 respectively for α = 0.05. Freedman and Peters [1984] pointed out that variability in the estimation of Σ leads to negative bias in the standard errors for θ̂. Subsequently, Gallant attributed Type I error rate inflation beyond target α in the nonlinear non-spherical covariance model to estimation of Σ, just as in SUR [Gallant, 1975d, 1987].

Barnett [1976] derived the classical likelihood ratio test statistic based on the ML estimates as

λ_n = L_n(y|γ̃^(∞)) / L_n(y|γ̂^(∞)) ,  (1.54)

the ratio of the restricted to the unrestricted maximized likelihoods, and proved that −2 ln λ_n →_L χ²[s] as n → ∞. Gallant [1987] discussed this statistic as well but he recommended use of (1.53) since the F approximation to the classical test provided more accurate testing than did (1.54) in his simulations (see also Milliken and DeBruin, 1978). Gallant's motivation for the F approximation to the classical


likelihood ratio statistic was the desire to compensate for the sampling variation introduced by estimating Σ.

Confidence regions and intervals may be computed in a manner completely parallel to that in §2.2.1.2 so no further discussion will be added here.

1.2.3.2 Survey of Alternate Methods

A variety of analysis strategies, other than SUNR, have evolved for nonlinear

multivariate models. These are typically motivated by a particular application, such as

growth curve analysis. Therefore, methods are not based on common principles nor do

they attempt to provide a unified approach to this problem. Frequently, these

methods do not adequately evaluate inference procedures or, in some cases, even

address them. The survey of methods to follow is meant to provide a brief review of

other nonlinear multivariate methods, their associated assumptions and a comparison

of these methods with respect to small sample estimation and inference. These model-based methods may be divided into three broad classes which will be addressed in the

following order: 1) general nonlinear multivariate model methods, 2) random effects

based model methods and 3) methods based on models assuming structured

covariance.

Allen [1967] produced a general approach for the analysis of nonlinear growth

curves. His methods are directly applicable to the model described in §1.1, the

statement of the problem. However, these methods do not make use of the compound

symmetry assumption. Allen suggested two estimation methods. One method

minimizes the determinant of the residual covariance matrix while the other minimizes

its trace. The first method provides maximum likelihood estimates, under a normality

assumption, and produces a likelihood ratio test statistic. Gallant [1987; Ch. 5]

suggested that this means of producing ML estimates is more computationally efficient


than SUNR. The second method may be considered analogous to a modified minimum

X2 method. Both methods produce consistent, asymptotically efficient and

asymptotically normal estimators. A small simulation for sample sizes of 12, 24 and 36

indicated that the likelihood ratio test is reasonably approximated by a scaled χ²

distribution. However, Allen did not prove this analytically.

A commonly used method of growth curve analysis involves a two-stage

procedure described in Bock, et al. [1973]. For the purpose of classifying this method, it

may be loosely called a random effects based method since individual growth curve

parameters are implicitly assumed to vary within some population. In the first stage,

least squares are used to separately fit a nonlinear model to each unit's data. In the

second stage, a measure of precision is computed for the parameter estimates from the first stage. This measure of precision is based on the average of the

estimated covariance matrices from each observational unit. Furthermore, the

population parameter vector is estimated as the mean of the individual parameter

vectors. As Berkey and Laird [1986] indicated, the real danger of this kind of method

is that the mean parameter curve may not be equivalent to the population mean

growth curve. This type of analysis, though somewhat relevant to the goals of the

research as stated in §1, handles the properties of the covariance among repeated

measurements very differently. Furthermore, inference for this type of growth curve

analysis is aimed at statements about individual differences rather than statements

about populations. Applications of interest for this research involve the latter.

Berkey and Laird [1986] provided a method for the analysis of nonlinear growth

curves which permits estimation of population parameters. Their method is based on a

two-stage random effects linear model described by Laird and Ware [1982]. Berkey and

Laird generalized the two-stage random effects linear model to a nonlinear growth

curve model and included means for handling time-constant covariates. In the first


stage each individual's data is fit separately to some nonlinear model. The parameters

of the individuals are assumed to vary in the population, with the population mean of each parameter dependent upon covariates. Thus, in the second stage the individual

parameter estimates are modelled, using linear model methods, according to the

assumed distribution for the population, and the covariates are included in the

estimation procedure. Berkey and Laird applied their method to just a single data set

and results from the estimation procedure were reported. Inference procedures were

not addressed.

Nonlinear random effects models motivated the analysis methods of Scheiner

and Beal [1980]. These authors proposed a method they call NONMEM, for "nonlinear

mixed effects model." The method involves a first-derivative linear approximation to

the model function and then uses a maximum likelihood procedure to iteratively

estimate the expected value parameters and covariance matrix among repeated

measurements. These authors model the between unit parameter variation and

incorporate this directly into the WLS procedure in contrast to the two-step methods

of Berkey and Laird or Bock. Scheiner and Beal's simulation was based on only ten

replications and highly restrictive model assumptions. Furthermore, the methods of

NONMEM are extremely computationally intensive due to the use of a nested iterative

procedure.

Another nonlinear random effects model method, based on a Bayesian

approach, provided accurate estimation in several small simulations [Racine-Poon,

1985]. However, hypothesis testing was not evaluated.

A recent paper by Lindstrom and Bates [1988] proposed a general nonlinear

mixed effects model for repeated measures data. The estimators they define are a

combination of least squares estimators for nonlinear fixed effects models and

maximum likelihood estimators for linear mixed effects models. No discussion of


inference is included although applications of their estimation method for two examples

with small sample sizes are provided.

Gallant and Goebel [1976] considered a nonlinear model which was assumed to

have a first order autoregressive covariance structure. The method of Gallant and

Goebel may be considered as a special case of SUNR in which an assumption about the

covariance structure is made. This method is essentially a time series application with

n = 1. The estimation method first involves computation of the nonlinear OLS estimate of θ. Then, the residuals from this model are used to estimate the autoregression parameters. These essentially specify an approximating covariance matrix, Σ̂. This, in turn, is used to conduct a final WLS estimation of θ. As for

SUNR, estimates are asymptotically normal and efficient. However, simulation results

indicate that confidence intervals based on a t-approximation are too narrow [Gallant

and Goebel, 1976].

Glasbey [1979] suggested a maximum likelihood method for obtaining logistic

model parameter estimates for each observational unit under the assumption that each

unit's errors have a first-order autoregressive covariance structure. In general, the

methods of Glasbey cannot be used to make between-unit comparisons for data in

which the number of observations on each unit differs. Glasbey applied his estimation

method to a single data set with just eight animals and no discussion of inference

procedures was made.

Work by Muller and Helms [1984], Williams [1984] and Hafner [1988] focussed

on nonlinear models in which an assumption of compound symmetry can be made for

the covariance matrix among repeated measurements. The method proposed by Muller

and Helms [1984] is an EWLS procedure based on the separability of the nonlinear

model into "average" and "trends" components. Williams [1984] and Hafner [1988]

reported accurate estimation and excellent Type I error control in their simulations


which evaluated this method. However, the necessary and sufficient conditions on the

expected value function needed to insure applicability of the method are quite

restrictive [Hafner, 1988].

Malott [1985], Muller and Malott [1988] and Gennings, et al. [1989] proposed an

AWLS method for nonlinear models with a compound symmetric covariance matrix

among repeated measurements. This approach does not rely on separability of the

model and is, therefore, applicable to a wider range of model functions than is the

approach of Hafner. Similar to SUNR, an estimate of Σ is necessary. Additionally, the method requires estimation of ρ as an intermediate step in the estimation of Σ. In a simulation study by Malott [1985], parameter estimates were obtained in which the bias never exceeded 2% of the true value. Furthermore, these same simulation results, with target α = 0.05, using the approximate F statistic of (1.53), included reasonable Type I error control (.056) for a sample size of 24. A somewhat higher Type I error rate (.072) was obtained for a sample size of 12.

Schaff, et al. [1988] described a method for analyzing nonlinear models when the

data are from a split-plot design. Their method involved fitting the nonlinear model to

partitions of the data corresponding to the various treatment and plot combinations in

their design. Subsequently, residual sums of squares were computed for the partitions

and pooled to provide sums of squares for the factors to be evaluated. In this way

their method entails construction of an ANOVA table in parallel to that for the classic

split-plot design for linear models. A notable difference is that the usual split-plot

degrees of freedom are multiplied by the number of parameters in the nonlinear model.

Tests are provided by F ratios as in the usual split-plot analysis. This method was

applied to a single data set without either simulation studies or analytic justification

for its use. Hence it is not known how well these methods perform in general.

Furthermore, an important limitation is that no means for testing hypotheses involving


the nonlinear parameters is suggested.

In summary, many nonlinear repeated measurements methods have been

developed which provide reliable estimation. However, hypothesis testing, in small

samples, remains a problem due to inflation of the Type I error rate resulting from

overly optimistic standard errors. It appears that the most general of these methods, SUNR using an AWLS estimate of the parameter vector, is subject to unacceptable Type I error rates in some cases due to the sampling variation involved in the estimation of Σ. Specifically, inference procedures derived from SUNR rely heavily on

asymptotic properties making these procedures inappropriate for many small sample

situations. Other methods which are more limited in scope, by virtue of making strong

parameter assumptions, include for example the methods of Gallant and Goebel [1976],

Malott [1985] and Muller and Malott [1988]. These appear to perform somewhat

better, with respect to inference, than do the more general methods such as those

presented by Allen [1967] and Gallant [1987]. It is believed that the improvement in

performance is due to the reduction in the number of elements of Σ that must be estimated, and subsequent reduction in sampling variability of Σ̂, in these SUNR based

methods.

The goal, here, is to propose maximum likelihood methods for the nonlinear

model with repeated measurements which provide both reliable estimation and accurate

hypothesis testing in small samples. This new method will be a special case of SUNR

requiring strong parametric assumptions of compound symmetry and normality. These

limit the applicability of the new method somewhat. However, compound symmetry

and normality are reasonable and common assumptions given certain classes of

frequently encountered experimental situations. A major advantage to the new method

is that it does not involve assumptions about the nature of the expected value function

like those inherent in the method of Muller and Helms [1984] and Hafner [1988].


Chapter 2

ESTIMATION FOR COMPLETE DATA

2.1 Introduction

Consider the following regression model with normally distributed errors, e_ij:

y_ij = f(x_ij, θ) + e_ij ,  (2.1)

in which i ∈ {1, 2, ..., n} indexes independent observational units and j ∈ {1, 2, ..., p} indexes repeated measures. Let f(·) denote some possibly nonlinear function in the x_ij and θ. The (q × 1) column vector θ ∈ Θ contains unknown parameters and the (r × 1) column vectors x_ij ∈ ℝʳ contain known constants associated with the i-th observational unit. Note that model (2.1) explicitly permits time-varying covariates by accommodating x_ij which may differ across the p repeated measures, within a single observational unit.

It is convenient to write (2.1) in vector form as y = f(θ) + e, in which the data may be partitioned as y = (y_1', ..., y_n')' with y_i = (y_i1, ..., y_ip)', and f(θ) and e are partitioned similarly. Throughout, it will be assumed that for all i, y_i ~ N_p(f(x_i, θ), Σ), in which x_i = (x_i1', x_i2', ..., x_ip')', with Σ a full rank positive definite real matrix. Although the full rank assumption is not strictly necessary, tolerating less than full rank requires additional complexity in notation which only serves to obscure the major results presented here. Furthermore, the assumption of full rank Σ is automatically ensured under compound symmetry. Finally, letting

p(y_i) = |2πΣ|^{−1/2} exp{ −½ [y_i − f(x_i, θ)]' Σ⁻¹ [y_i − f(x_i, θ)] }

denote the normal probability density function for any (p × 1) vector y_i, the log


likelihood function for a set of n realizations of y_i may be written as

l_n(y|θ, Σ) = ln ∏_{i=1}^n p(y_i)

= −(n/2) ln|2πΣ| − ½ ∑_{i=1}^n [y_i − f(x_i, θ)]' Σ⁻¹ [y_i − f(x_i, θ)]

= −(n/2) ln|2πΣ| − ½ ∑_{i=1}^n e_i' Σ⁻¹ e_i .  (2.2)

2.2 The Orthonormal Model Transformation

Several properties of orthonormal transformations make them useful tools in

the distribution theory associated with the normal distribution and hence a broad class

of regression models. The following theorem states some interesting properties of

orthonormal model transformations. Two corollaries are obtained by imposing further

conditions which are relevant to the model of interest here. Various forms of these

results may be found in the literature. For related discussion see Kshirsagar [1983; Ch.

10]. These results are provided here in full detail so that they may be referred to by

proofs appearing in later chapters.

Theorem 1: Define V (p × p) such that VV' = V'V = I_p and T = [I_n ⊗ V']. Consider transforming model (2.1):

Ty = Tf(θ) + Te ,

or, with obvious notational shift,

y* = f*(θ) + e* .  (2.3)

Assume e ~ N_np(0, Ω), with Ω = [I_n ⊗ Σ] and rk(Σ) = p. Then,

(i) e* ~ N_np(0, Ω*) with Ω* = [I_n ⊗ Σ*], and Σ* = V'ΣV,

(ii) ∑_{i=1}^n e_i*' e_i* = ∑_{i=1}^n e_i' VV' e_i = ∑_{i=1}^n e_i' e_i , in which e_i* = V' e_i, and

(iii) l_n(y*|θ, Σ*) = l_n(y|θ, Σ).


Proof: (i) The distribution of e* follows directly from the fact that T defines a linear transformation of e, a multivariate normal vector. (ii) This part is proven above. (iii) Substituting Σ* = V'ΣV and e_i* = V'e_i, ∑_{i=1}^n e_i*' Σ*⁻¹ e_i* = ∑_{i=1}^n e_i' Σ⁻¹ e_i and |2πΣ*| = |V'| |2πΣ| |V| = |2πΣ|; then,

−2 l_n(y*|θ, Σ*) = n ln|2πΣ*| + ∑_{i=1}^n e_i*' Σ*⁻¹ e_i*

= n ln|2πΣ| + ∑_{i=1}^n e_i' Σ⁻¹ e_i

= −2 l_n(y|θ, Σ) .

Dividing both sides by −2 gives (iii). □
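Part (iii) can be checked numerically: the sketch below evaluates the log likelihood (2.2) before and after an arbitrary orthonormal transformation. The data, mean, covariance, and choice of V are all hypothetical.

```python
import numpy as np

def loglik(Y, mu, Sigma):
    """Log likelihood (2.2): n independent N_p(mu_i, Sigma) rows of Y."""
    n, _ = Y.shape
    Sinv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(2 * np.pi * Sigma)
    E = Y - mu
    # sum over units of e_i' Sigma^{-1} e_i
    return -0.5 * n * logdet - 0.5 * np.einsum('ij,jk,ik->', E, Sinv, E)

rng = np.random.default_rng(1)
n, p = 8, 3
Y = rng.normal(size=(n, p))                       # hypothetical data
mu = np.zeros((n, p))
Sigma = 0.6 * np.eye(p) + 0.4 * np.ones((p, p))   # hypothetical covariance

V, _ = np.linalg.qr(rng.normal(size=(p, p)))      # any orthonormal V will do
ll = loglik(Y, mu, Sigma)                         # untransformed model
ll_star = loglik(Y @ V, mu @ V, V.T @ Sigma @ V)  # transformed as in Theorem 1
```

The two values agree to floating point precision, reflecting that the transformation changes only the covariance structure, not the likelihood itself.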

It is important to note that the expected value parameter vector θ remains

unchanged for this transformation. Thus, an orthonormal transformation affects only

the covariance structure of the model. The following corollary identifies a sufficient

condition for an orthonormal transformation to lead to zero covariance terms. Given

an assumption of normality of the underlying errors this is equivalent to independence

of the transformed errors.

Corolla", Ll: Consider the spectral decomposition ~ = YDg(~)Y' in which

the columns of yare the eigenvectors associated with ~' = Pl' ... , Ap )', the

eigenvalues of ~, and y'y = I. Assume rk(~) = p. Consider transforming model

(2.1) as in Theorem 1 by the eigenvector matrix, y. The errors from the model

transformed in this fashion are independent.

£I!l.gf. From Theorem 1 ~e have ~. - Nnp(Q, 0.) with O. =[In ~ ~.] in

which ~. =y'~Y = Dg(~). Clearly O. = [In ~ Dg(~)] is a diagonal matrix. Given

the multivariate normality of ~., the zero covariance terms of O. indicate that the

elements of ~., the transformed errors, are mutually independent. o

An application of Corollary 1.1 provides the rationale for the traditional

approach to the singular multivariate normal problem in which rk(Σ) = p* ≤ p. In this

case, a subset of eigenvectors associated with the nonzero eigenvalues of ~ are used to

project ~ into a full rank subspace [see, for example, Kshirsagar 1983]. As stated

earlier, non-singular Σ will be assumed here since treatment of the singular case

unnecessarily complicates the development of the model of interest and does not

provide further insight into the proposed methods.

Corollary 1.2: Assume Σ is compound symmetric, so that Σ = σ²[(1−ρ)I_p + ρ 1_p 1_p'] with −1/(p−1) < ρ < 1, and consider the spectral decomposition of Σ. It is easy to show [Morrison, 1976] that the eigenvalues of Σ are λ_1 = σ²[1 + (p−1)ρ] and λ_2 = ... = λ_p = σ²(1−ρ). Furthermore, the normalized eigenvector associated with λ_1 is 1_p/√p. The remaining (p−1) eigenvectors may be any set of vectors which are mutually orthonormal and orthogonal to a vector of ones. For convenience, choose them to be the normalized orthogonal polynomial coefficients generated by the vector (1, 2, ..., p). Consider a transformation of model (2.1) using this set of eigenvectors. The transformed errors will then be independent; n transformed errors will have variance λ_1 and n(p−1) transformed errors will have variance λ_2.

Proof: From Corollary 1.1 we have e* ~ N_np(0, Ω*) with Ω* = [I_n ⊗ Dg(λ)] and the elements of e* are mutually independent. Given Σ compound-symmetric, λ contains just two unique elements, λ_1 with multiplicity one and λ_2 with multiplicity (p−1). Thus, Ω* will contain n and n(p−1) copies of λ_1 and λ_2, respectively. □
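The eigenstructure in Corollary 1.2 is easy to verify numerically; p, σ², and ρ below are arbitrary illustrative values, and the last check confirms the weighted-sum relation σ² = (1/p)λ_1 + [(p−1)/p]λ_2 used later in this chapter.

```python
import numpy as np

p, sigma2, rho = 4, 2.0, 0.3                      # hypothetical dimension and values
# compound-symmetric covariance: sigma^2 [(1 - rho) I_p + rho 1_p 1_p']
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

lam1 = sigma2 * (1 + (p - 1) * rho)               # eigenvalue of 1_p / sqrt(p)
lam2 = sigma2 * (1 - rho)                         # eigenvalue with multiplicity p - 1

evals = np.sort(np.linalg.eigvalsh(Sigma))[::-1]  # descending order (rho > 0 here)
ones = np.ones(p) / np.sqrt(p)                    # normalized eigenvector for lam1
```

Because both distinct eigenvalues and their eigenvectors are known in closed form, no numeric eigendecomposition is needed in practice; the computation above serves only as a check.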

In summary, for general Σ, Corollary 1.1 states that an exact orthonormal transformation exists which, under normality, transforms a set of dependent errors into a set of independent, heteroscedastic errors. However, Corollary 1.1 merely demonstrates existence and says nothing about how to obtain the eigenvectors of Σ. Thus, for any given set of data, under a general Σ assumption, one must estimate V


and can, at best, produce an approximate transformation to independence. By Corollary 1.2, the eigenvectors of a compound-symmetric Σ may be specified exactly, providing an exact transformation to independence.

Given compound-symmetric Σ, it is convenient to write the log likelihood function in simplified form using the transformed model,

l_n(y*|θ, λ) = C − (n/2) ln λ1 − [n(p−1)/2] ln λ2 − SSB/(2λ1) − SSW/(2λ2) ,   (2.4)

in which C is a constant not involving θ or λ,

SSB = Σ_{i=1}^{n} [y*_{i1} − f*_{i1}(x_i, θ)]² ,   (2.5)

and

SSW = Σ_{i=1}^{n} Σ_{h=2}^{p} [y*_{ih} − f*_{ih}(x_i, θ)]² .   (2.6)

Also note that λ and γ = (σ², ρ)′ are equivalent, as λ is a 1:1 function of γ, so that, knowing one, it is possible to solve for the other. Thus one may choose to work with either λ or γ for convenience. As eigenvalues, the elements of λ are the variances of the average and trend transformed responses, respectively. By writing σ² = (1/p)·λ1 + [(p−1)/p]·λ2, σ² may be recognized as a weighted sum of the average and trend variances. Thus the transformation based on compound symmetry provides both notational convenience and insight into the univariate formulation of the model for data which meet this assumption.
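The 1:1 map between λ and γ noted above may be checked directly; the following sketch uses illustrative values only, and the inversion formula for ρ is implied by, rather than quoted from, the text:

```python
# Forward map: lambda_1 = sigma^2 [1 + (p-1) rho], lambda_2 = sigma^2 (1 - rho);
# sigma^2 = (1/p) lambda_1 + [(p-1)/p] lambda_2 as stated above, and inverting
# gives rho = (lambda_1 - lambda_2) / [lambda_1 + (p-1) lambda_2].
p, sigma2, rho = 5, 3.0, 0.4
lam1 = sigma2 * (1 + (p - 1) * rho)
lam2 = sigma2 * (1 - rho)

# Forward: sigma^2 is the stated weighted sum of the eigenvalues.
assert abs(lam1 / p + (p - 1) * lam2 / p - sigma2) < 1e-12

# Backward: rho is recovered exactly from the eigenvalues.
rho_back = (lam1 - lam2) / (lam1 + (p - 1) * lam2)
assert abs(rho_back - rho) < 1e-12
```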

2.3 Regularity Conditions

The set of regularity conditions put forth by Gallant [1987] is formulated to cover very general nonlinear models. In particular, his assumptions permit consideration of random effects models, i.e., he allows for random predictor variables, x, defined on a probability space (𝒳, ℬ(𝒳), μ), in which ℬ(𝒳) is used to denote the σ-field generated by the Borel subsets of 𝒳. In the fixed effects model, the random vector x is


degenerate; equivalently, the probability measure is atomic, with unit mass tied to the value set for x. Thus, for the fixed effects models considered here, Gallant's regularity conditions may be simplified.

In addition, Gallant's regularity conditions are appropriate for a variety of estimation procedures defined by various objective functions to be minimized. His assumptions will be restated below for the special case of maximum likelihood estimation, in which the objective function is proportional to the log likelihood function (2.2). The regularity conditions will be stated for general, full-rank Σ, noting that the compound symmetry assumption does not alter these conditions in any essential way.

Adoption of Gallant's regularity conditions validates application of his many theorems to the methods developed here. The notation used in this section, to be simplified elsewhere, is chosen to follow that of Gallant closely, so that readers may easily refer to the original source. For the remainder of this section, a parameter vector written with superscripts will be used to denote a set of fixed but perhaps unknown values, while one written without superscripts will be treated as a variable with respect to some operation. In addition, the use of "^" will indicate an estimate of the associated parameter. Define vech(·) to be an operator which forms a vector from the unique elements of a symmetric matrix [Searle, 1982; p. 332]. Then τ = vech(Σ), in which τ ∈ ℝ^{p(p+1)/2}. Conversely, let Σ(τ) denote the mapping of τ into the elements of Σ, and set Σ(τ_n⁰) = Σ_n⁰ for all n and Σ(τ*) = Σ*, in which τ_n⁰ and τ* will be defined in Assumption 4. Write the sample objective function as

s_n(θ) = (1/2) ln|Σ| + (1/2n) Σ_{i=1}^{n} [y_i − f(x_i, θ)]′ Σ⁻¹ [y_i − f(x_i, θ)] .   (2.7)

Define the following analogous functions:

s_n⁰(θ) = (1/2) ln|Σ_n⁰| + p/2 + (1/2n) Σ_{i=1}^{n} [f(x_i, θ_n⁰) − f(x_i, θ)]′ (Σ_n⁰)⁻¹ [f(x_i, θ_n⁰) − f(x_i, θ)]   (2.8)


and

s*(θ) = (1/2) ln|Σ*| + p/2 + ∫_𝒳 (1/2) [f(x, θ*) − f(x, θ)]′ (Σ*)⁻¹ [f(x, θ*) − f(x, θ)] dμ(x)
      = (1/2) ln|Σ*| + p/2 + Σ_{i=1}^{n} (1/2) [f(x_i, θ*) − f(x_i, θ)]′ (Σ*)⁻¹ [f(x_i, θ*) − f(x_i, θ)] w_i ,   (2.9)

in which w_i = μ(x_i). The simplification in (2.9) results from the fixed effects assumption. Then, θ_n⁰ and θ* are defined to be the vectors which minimize (2.8) and (2.9), respectively. In addition, θ̂_n and Σ̂_n = Σ(τ̂_n) are defined as the estimates of θ and Σ which jointly maximize (2.2).

Assumption 1: The errors, ε_i, from each observational unit are independently and identically distributed with common normal distribution, denoted P(ε).

Assumption 2: f(x, θ) is continuous on 𝒳 × Θ*, Θ* ⊂ ℝ^q is compact, and 𝒳 ⊂ ℝ^r.

In order to develop a uniform strong law of large numbers, Gallant assumed that the sequence {x_i} upon which the results are conditioned is a Cesàro sum generator, as is almost every joint realization {(ε_i, x_i)}. He stated that independent variables generated from an experimental design or by random sampling satisfy the definition of a Cesàro sum generator. A formal definition and statement of this assumption follow.

Definition: (Gallant and Holly, 1980). A sequence {v_i} of points from a probability space (𝒱, ℬ(𝒱), ν) is said to be a Cesàro sum generator with respect to an integrable dominating function b(v) if

lim_{n→∞} (1/n) Σ_{i=1}^{n} f(v_i) = ∫ f(v) dν(v)


for every real-valued, continuous function f with |f(v)| ≤ b(v).

Assumption 3: (Gallant and Holly, 1980). Almost every realization of {v_i}, with v_i = (ε_i, x_i), is a Cesàro sum generator with respect to the product measure

ν(A) = ∫_𝒳 ∫_ℰ I_A(ε, x) dP(ε) dμ(x)

and dominating function b(ε, x). The sequence {x_i} is a Cesàro sum generator with respect to μ and b(x) = ∫_ℰ b(ε, x) dP(ε). For each x ∈ 𝒳 there is a neighborhood N_x such that ∫_ℰ sup_{N_x} b(ε, x) dP(ε) < ∞. Here, I_A(ε, x) = 1 if (ε, x) ∈ A, and 0 otherwise.

The next assumption may be considered an identification condition, in the sense that it is intended to ensure that a unique minimum of the objective function exists.

Assumption 4: The parameter θ_n⁰ is indexed by n, and the sequence {θ_n⁰} converges to θ*. Furthermore, for all n, τ_n⁰ = τ*, √n(τ̂_n − τ*) is bounded in probability, and τ̂_n converges almost surely to τ*. Also, the function s*(θ) has a unique minimum over the parameter space Θ* at θ*.

Gallant [1987; Chapter 5, Section 6] proved that Σ⁻¹(τ) is continuous and differentiable over T ⊂ ℝ^{p(p+1)/2}. Define B = sup{σ^{ij′}(τ): τ ∈ T; i, j′ ∈ {1, 2, ..., p}}, in which σ^{ij′} denotes the (i, j′)-th element of Σ⁻¹(τ). Then B < ∞, because Σ⁻¹(τ) is continuous over the compact set T.

Assumption 5: The parameter space Θ* is compact; {τ̂_n} is contained in T, which is a closed ball centered at τ* with finite, nonzero radius. The log likelihood function, l(y|θ, τ), is continuous and |l(y|θ, τ)| ≤ b(y, x) on 𝒴 × 𝒳 × T × Θ*, 𝒴 ⊂ ℝ^p.


The next assumption is, perhaps, the major regularity condition cited in the development of the theory of maximum likelihood estimation of nonlinear parameters. It is a technical restriction sufficient for proving asymptotic normality of the scores, √n(∂/∂θ)s_n(θ)|_{θ=θ_n⁰}, and subsequently the asymptotic normality of θ̂_n. In particular, it "validates the application of maximum likelihood theory to a subset of the parameters when the remainder are treated as if known in the derivations but are subsequently estimated" [Gallant, 1987; p. 185]. When a particular application fails to satisfy this condition, joint estimation of the parameter sets, rather than iterative estimation, may provide an appropriate solution.

Assumption 6: The parameter space Θ* contains a closed ball Θ centered at θ* with finite, nonzero radius such that the elements of

(∂/∂θ)[−l(y|θ, τ)] ,

(∂²/∂θ∂θ′)[−l(y|θ, τ)] ,

(∂²/∂τ∂θ′)[−l(y|θ, τ)] ,

and

{(∂/∂θ)[−l(y|θ, τ)]}{(∂/∂θ)[−l(y|θ, τ)]}′

are continuous and dominated by b(y, x) on 𝒴 × 𝒳 × T × Θ. Moreover, the limiting information matrix, in which w_i = μ(x_i), is nonsingular.

Readers desiring a more thorough treatment of these regularity conditions are directed to Chapter 3 in Gallant [1987] or Gallant and Holly [1980]. Evaluation of many of these assumptions with respect to any given data analytic setting is tangential to the present research.


2.4 Maximum Likelihood Estimation

Gallant [1987; p. 190] stated that "the usual consequence of a correctly specified model and a sensible estimation procedure is that [θ_n⁰ = θ*] for all n." Thus, for simplicity, in the remainder of this work this distinction will be dropped, since it will be assumed that the model is "correctly specified." Here, correct specification of a model is taken to mean the correct identification of the functional form which underlies the data-generating mechanism. Model identification must be distinguished from consideration of alternate models constructed by specifying constraints on the parameters for a given functional form. Therefore, the theory of constrained estimation is not related to the notion of model identification used in this way. Notation regarding parameter vectors is clarified by using a subscript "0" to indicate the true parameter for which an estimate is sought.

Maximum likelihood estimation for the parameters of model (2.1) with compound-symmetric Σ₀, whose eigenvalue matrix is Dg(λ₀), differs substantially from that for general Σ₀. Only the compound-symmetric case will be presented here, because the case of general Σ₀ may be found in Gallant [1987] and Barnett [1976], among others. Let γ₀ = (θ₀′, λ₀′)′ denote the full (q+2)-parameter vector under compound symmetry.

As is often true in normal theory maximum likelihood estimation, it will be convenient to maximize the log likelihood function (2.2), noting that this is equivalent to maximization of the likelihood function itself. Thus, the first partial derivatives of the log likelihood function will be referred to as the likelihood equations. Closed-form expressions do not exist for the full solution vector to the set of (q+2) likelihood equations consisting of (∂/∂θ′) l(y*|θ, λ) = 0 and (∂/∂λ′) l(y*|θ, λ) = 0. However, separate solution of each set, conditional upon the other set, is quite feasible.

ML estimation of each parameter set, conditional upon an estimate of the other set, was termed pseudo-maximum likelihood (PML) estimation by Gong and


Samaniego [1981]. Under the regularity conditions stated above, iteration between estimation of the two parameter sets until convergence may be used to compute the MLE's. This approach will be taken here, with explicit descriptions of the two PML estimation steps for this estimation problem, and their iteration to produce ML estimates, provided by Theorems 2-4.

For the sake of clarity, it is necessary to distinguish between two types of iteration encountered in the ML estimation of the full parameter vector γ₀. The term γ-iteration will refer to the process of cycling between PML estimation of θ₀ and λ₀. In contrast, θ-iteration will refer to the embedded iterative procedure necessary to estimate θ₀ for each cycle of a γ-iteration. Under compound symmetry, no iteration is necessary for PML estimation of the covariance parameters in λ₀ because, as will be shown, closed-form expressions exist for these estimates. Let g index the steps of γ-iteration.

Theorem 2: (θ-iteration). The PML estimator of θ₀ which maximizes l_n(y*|θ, λ̂^(g−1)) is the θ̂^(g) ∈ Θ that solves

F′(θ) [Δ̂^(g−1)]⁻¹ ε̂* = 0 ,   (2.10)

in which λ̂^(g−1) is the PML estimate of λ₀ from the (g−1)-th γ-iteration, Δ̂^(g−1) = [I_n ⊗ Dg(λ̂^(g−1))], and F(θ) = (∂/∂θ′) f*(θ).

Proof: The proof consists of two parts: 1) a statement of the likelihood equations and 2) demonstration that, with a nonlinear model, a solution to the likelihood equations identifies an MLE. With respect to this second part, achieving an ML solution is dependent on the numerical algorithm used. The likelihood equations for θ, conditional on λ̂^(g−1), a PML estimate of λ₀, are

(∂/∂θ) l_n(y*|θ, λ̂^(g−1)) = { (∂/∂θ_r) l_n(y*|θ, λ̂^(g−1)) } ,

in which the r-th equation, r ∈ {1, 2, ..., q}, takes the form

(∂/∂θ_r) l_n(y*|θ, λ̂^(g−1)) = Σ_{i=1}^{n} Σ_{h=1}^{p} [λ̂_h^(g−1)]⁻¹ [y*_{ih} − f*_{ih}(x_i, θ)] (∂/∂θ_r) f*_{ih}(x_i, θ) ,   (2.11)

in which λ̂_h^(g−1) denotes λ̂1^(g−1) for h = 1 and λ̂2^(g−1) for h > 1.


This notation may be simplified by introducing the following matrices. Let F(θ) be the (np × q) matrix of partial derivatives of f*(θ) with respect to θ, whose (i, h)-th row is F_{ih} = (∂/∂θ′) f*_{ih}(x_i, θ). Also observe that [Δ̂^(g−1)]⁻¹ = [I_n ⊗ Dg(λ̂^(g−1))⁻¹] and ε̂* = y* − f*(θ). Setting (2.11) equal to zero and writing these equations compactly, we have (2.10). The θ̂^(g) which solves F′(θ)[Δ̂^(g−1)]⁻¹ε̂* = 0 maximizes l_n(y*|θ, λ̂^(g−1)) provided that the pseudo-likelihood function has a unique maximum and the estimation procedure converges to that point. For convergence of the estimation procedure see, for example, the regularity conditions necessary to ensure convergence of the modified Gauss-Newton algorithm [Hartley, 1961; Gallant, 1987, p. 44] or of gradient methods [Bard, 1974]. □
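A θ-iteration step of the kind described in the proof may be sketched as one modified Gauss-Newton update for the weighted least squares problem implied by (2.10). The mean function, data, and weights below are hypothetical illustrations, not the estimation code used in this research:

```python
import numpy as np

# One Gauss-Newton step for F'(theta) Delta^{-1} [y* - f*(theta)] = 0, with
# the diagonal of Delta holding lambda_1 or lambda_2 for each transformed row.
def gauss_newton_step(theta, y_star, f_star, jac, lam_diag):
    resid = y_star - f_star(theta)
    w = 1.0 / lam_diag                      # diagonal of Delta^{-1}
    J = jac(theta)
    # Solve the weighted normal equations (J' Delta^-1 J) d = J' Delta^-1 e.
    step = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * resid))
    return theta + step

# Toy mean function f(x, theta) = exp(theta x), noiseless data, unit weights:
# the iteration drives theta toward the generating value 0.7.
x = np.linspace(0.1, 1.0, 8)
y = np.exp(0.7 * x)
theta = np.array([0.2])
for _ in range(25):
    theta = gauss_newton_step(
        theta, y,
        lambda t: np.exp(t[0] * x),
        lambda t: (x * np.exp(t[0] * x))[:, None],
        np.ones_like(x))
assert abs(theta[0] - 0.7) < 1e-8
```

In the full procedure this step would be repeated to convergence within each γ-iteration, with lam_diag refreshed from the current λ̂^(g−1).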

Theorem 3: The PML estimators of λ₀1 and λ₀2 which locally maximize l_n(y*|θ̂^(g), λ) are

λ̂1^(g) = SSB^(g)/n   and   λ̂2^(g) = SSW^(g)/[n(p−1)] ,   (2.12)

with SSB^(g) = SSB(θ)|_{θ=θ̂^(g)} and SSW^(g) = SSW(θ)|_{θ=θ̂^(g)}, in which θ̂^(g) is the PML estimate of θ₀ at the g-th γ-iteration.

Proof: The first partial derivatives of l_n(y*|θ̂^(g), λ) with respect to λ1 and λ2 are

(∂/∂λ1) l_n(y*|θ̂^(g), λ) = −n/(2λ1) + SSB^(g)/(2λ1²)   (2.13)

and

(∂/∂λ2) l_n(y*|θ̂^(g), λ) = −n(p−1)/(2λ2) + SSW^(g)/(2λ2²) .   (2.14)

Setting these equal to zero and solving, one obtains (2.12). That (2.12) is a local


maximum is verified by showing that the second derivative matrix is negative definite. The second partial derivatives of the log likelihood with respect to λ are

∂² l_n(y*|θ̂^(g), λ)/∂λ1² = n/(2λ1²) − SSB^(g)/λ1³ ,   (2.15)

∂² l_n(y*|θ̂^(g), λ)/∂λ2² = n(p−1)/(2λ2²) − SSW^(g)/λ2³ ,   (2.16)

and

∂² l_n(y*|θ̂^(g), λ)/∂λ1∂λ2 = 0 .   (2.17)

Evaluating (2.15) and (2.16) at λ1 = λ̂1^(g) and λ2 = λ̂2^(g), respectively, one obtains −n/(2[λ̂1^(g)]²) and −[n(p−1)]/(2[λ̂2^(g)]²). Note that λ̂1^(g) > 0 and λ̂2^(g) > 0 with probability one (but may equal 0 in finite-precision arithmetic). Thus the (2 × 2) second derivative matrix is diagonal with negative diagonal terms, and hence negative definite, demonstrating that (2.12) identifies a local maximum of the log likelihood. □
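The closed-form estimators in (2.12) are simple to compute once the transformed residuals are in hand. The following simulation sketch, with assumed values of n, p, λ1, and λ2, illustrates them:

```python
import numpy as np

# Each unit contributes its first transformed residual to SSB and its
# remaining p - 1 transformed residuals to SSW; (2.12) then divides by the
# corresponding degrees of freedom.
rng = np.random.default_rng(0)
n, p, lam1, lam2 = 4000, 4, 5.0, 1.0

e1 = rng.normal(0.0, np.sqrt(lam1), (n, 1))       # "average" residuals
e2 = rng.normal(0.0, np.sqrt(lam2), (n, p - 1))   # "trend" residuals
ssb = np.sum(e1 ** 2)
ssw = np.sum(e2 ** 2)

lam1_hat = ssb / n                # first estimator in (2.12)
lam2_hat = ssw / (n * (p - 1))    # second estimator in (2.12)
assert abs(lam1_hat - lam1) < 0.5
assert abs(lam2_hat - lam2) < 0.1
```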

Theorem 4: (γ-iteration). If one iterates until the sequence of PML estimates {(θ̂^(g), λ̂^(g))}, g ∈ {1, 2, ...}, converges, then the converged estimates, θ̂^(∞) and λ̂^(∞), will correspond to a local maximum of the likelihood surface.

Proof: Proof of the above assertion must address the necessary conditions for convergence of the sequence of estimates. Under the regularity conditions stated in §2.3, Gallant [1987; Chapter 5, Section 5] provided a brief argument to support this claim for general Σ₀. A formal statement and proof may be found in Barnett [1976], for general Σ₀. The assumption of compound symmetry does not alter Barnett's proof in any essential way. □

In what follows, the "∞" will be dropped and, unless otherwise noted, θ̂ and λ̂ will be used to denote the converged products of γ-iteration. Under the regularity conditions stated above, as well as regularity conditions ensuring convergence of the estimation procedure, these estimates will be considered the MLE's for model (2.1)


with compound-symmetric Σ₀.

Theorem 5: The maximum likelihood estimators of σ₀² and ρ₀ are

σ̂² = (SSB + SSW)/(np)   and   ρ̂ = (λ̂1 − λ̂2)/[λ̂1 + (p−1)λ̂2] ,   (2.18)

in which SSB and SSW are the sums of squares defined in (2.5) and (2.6), evaluated at the MLE for θ₀.

Proof: Given that λ̂ = (λ̂1, λ̂2)′ is the MLE of λ₀, and that the elements of γ = (σ², ρ)′ are one-to-one functions of the elements of λ (see Corollary 1.2), one may solve for the elements of γ in terms of λ and evaluate them at λ = λ̂ to give (2.18). □
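The back-transformation in (2.18) may be sketched as follows; the helper function and the check values are hypothetical:

```python
# From the eigenvalue MLEs lambda_1 = SSB/n and lambda_2 = SSW/[n(p-1)],
# recover the compound-symmetry parameters (sigma^2, rho) as in (2.18).
def gamma_from_sums_of_squares(ssb, ssw, n, p):
    lam1 = ssb / n
    lam2 = ssw / (n * (p - 1))
    sigma2 = (ssb + ssw) / (n * p)              # = lam1/p + (p-1)*lam2/p
    rho = (lam1 - lam2) / (lam1 + (p - 1) * lam2)
    return sigma2, rho

# Consistency check: when SSB and SSW equal their expectations n*lam1 and
# n*(p-1)*lam2, the generating (sigma^2, rho) are returned exactly.
n, p, sigma2, rho = 10, 4, 2.0, 0.3
lam1 = sigma2 * (1 + (p - 1) * rho)
lam2 = sigma2 * (1 - rho)
s2, r = gamma_from_sums_of_squares(n * lam1, n * (p - 1) * lam2, n, p)
assert abs(s2 - sigma2) < 1e-12 and abs(r - rho) < 1e-12
```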

Theorem 6: The Fisher information matrix is

J_γ = [ J_θ   0  ]
      [ 0    J_λ ] ,   (2.19)

in which

J_θ = F′(θ₀) Δ₀⁻¹ F(θ₀)   (2.20)

and

J_λ = [ n/(2λ1²)      0              ]
      [ 0             n(p−1)/(2λ2²)  ] .   (2.21)

Proof: The general form of the Fisher information matrix is

J_γ = { −E[ ∂²l / ∂γ_t ∂γ_{t′} ] } ,   (2.22)

in which t, t′ ∈ {1, 2, ..., (q+2)}. The second partial derivatives of the log likelihood function, l = l_n(y*|θ, λ), are


∂²l/∂θ∂θ′ = −F′(θ) Δ⁻¹ F(θ) + Σ_{i=1}^{n} Σ_{h=1}^{p} λ_h⁻¹ [y*_{ih} − f*_{ih}(x_i, θ)] (∂²/∂θ∂θ′) f*_{ih}(x_i, θ) ,   (2.23)

∂²l/∂λ1² = n/(2λ1²) − SSB/λ1³ ,   (2.24)

and

∂²l/∂λ2² = n(p−1)/(2λ2²) − SSW/λ2³ ,   (2.25)

and finally

∂²l/∂λ1∂λ2 = 0 .   (2.26)

The mixed partial derivatives of the log likelihood are

∂²l/∂θ_r∂λ1 = −(1/λ1²) Σ_{i=1}^{n} [y*_{i1} − f*_{i1}(x_i, θ)] (∂/∂θ_r) f*_{i1}(x_i, θ)   (2.27)

and

∂²l/∂θ_r∂λ2 = −(1/λ2²) Σ_{i=1}^{n} Σ_{h=2}^{p} [y*_{ih} − f*_{ih}(x_i, θ)] (∂/∂θ_r) f*_{ih}(x_i, θ) .   (2.28)

Taking the negative of the expectation of the second partial derivatives gives

−E[∂²l/∂θ∂θ′] = F′(θ₀) Δ₀⁻¹ F(θ₀) ,   (2.29)

−E[∂²l/∂λ1²] = n/(2λ1²) ,   (2.30)

−E[∂²l/∂λ2²] = n(p−1)/(2λ2²) ,   (2.31)

and

−E[∂²l/∂λ1∂λ2] = 0 .   (2.32)

Taking the negative expectation of the mixed partial derivatives, which have mean zero because the transformed errors do, gives

−E[∂²l/∂θ_r∂λ1] = −E[∂²l/∂θ_r∂λ2] = 0 .   (2.33)

Concatenating these matrices gives (2.19). □

Given that θ̂ and λ̂ are MLE's, one may appeal to the well known properties of MLE's [see, for example, Bickel and Doksum, 1977, Section 4.4.C] and note that θ̂ and λ̂ are best asymptotically normal estimates. Hence, these estimates are consistent, with asymptotic covariance equal to the inverse Fisher information matrix:

J_γ⁻¹ = [ [F′(θ₀)Δ₀⁻¹F(θ₀)]⁻¹   0          0                ]
        [ 0                     2λ1²/n     0                ]
        [ 0                     0          2λ2²/[n(p−1)]    ] .   (2.34)

In practice, evaluation of (2.34) at γ₀ = γ̂ provides an estimate of the asymptotic covariance of the parameter estimates.


2.5 Asymptotic Properties of the Parameter Estimates

In what follows, it will be assumed that N = np → ∞ in such a way that p remains fixed. This corresponds to the number of sampling units, n, going to infinity. The decision for p to remain fixed follows from consideration of the experimental design. Given certain experimental designs that may be employed to ensure a valid compound symmetry assumption, such as counterbalanced stimulus presentation, the notion of p → ∞ often will not make sense.

The asymptotic equivalence of AWLS (or PML) and ML estimates in nonlinear models, under the regularity conditions stated above, is well established [Gallant, 1987; Ch. 5, Section 5; see also Charnes et al., 1976]. In addition, for this equivalence to hold, it is necessary that the variance not depend on the mean of the regression function [Davidian and Carroll, 1987]. For our model this is taken to mean that Σ₀ is not a function of θ₀ or the x_i. Given the asymptotic equivalence of estimators obtained from AWLS or ML procedures, it will be convenient to characterize the MLEs using the basic principles from Gallant's unified asymptotic theory for nonlinear parameter estimation, which is motivated by least squares (rather than maximum likelihood) theory. To this end, note that θ̂, the MLE of θ₀, minimizes the approximate weighted least squares objective function

s(θ) = (1/n) [y − f(θ)]′ Ω̂⁻¹ [y − f(θ)] ,

in which Ω̂⁻¹ = [I_n ⊗ Σ̂⁻¹] and Σ̂ is the MLE of Σ₀.

As Gallant does not provide an explicit characterization of θ̂ for a multivariate model with general Σ₀, it will be provided here for completeness. The special case of compound-symmetric Σ₀ involves a simple substitution for the covariance structure in the results that follow. Some preliminary results, largely attributable to Gallant [1987], must be established.


The consistency of maximum likelihood estimates is a well known property. For the situation considered here, a somewhat stronger statement may be made. In particular, it can be shown that Σ̂ = Σ₀ + o_p(1/√n) [Gallant, 1987, p. 356]. Then θ̂ may be characterized to the same order. Define the following functions:

F(θ) = (∂/∂θ′) f(θ) ,

(∂/∂θ)s(θ) = −(2/n) F′(θ) Ω̂⁻¹ [y − f(θ)] ,

𝒥(θ) = (∂²/∂θ∂θ′) s(θ) = (2/n) F′(θ) Ω̂⁻¹ F(θ) ,

and real-valued matrices:

F₀ = F(θ)|_{θ=θ₀} ,
(∂/∂θ)s(θ₀) = (∂/∂θ)s(θ)|_{θ=θ₀} ,
(∂/∂θ)s(θ̂) = (∂/∂θ)s(θ)|_{θ=θ̂} ,
𝒥₀ = 𝒥(θ)|_{θ=θ₀} ,
𝒥̂ = 𝒥(θ)|_{θ=θ̂} .

One important result used in the characterization that follows is:

(∂/∂θ)s(θ₀) = −(2/n) F₀′ Ω̂⁻¹ ε
            = −(2/n) F₀′ [Ω₀⁻¹ + o_p(1/√n)] ε
            = −(2/n) F₀′ Ω₀⁻¹ ε + o_p(1/√n) .   (2.35)

The last line follows from an application of Slutsky's theorem [Serfling, 1980; p. 19] using the facts that 1) (1/√n) F₀′ Ω₀⁻¹ ε →d N_q(0, F₀′Ω₀⁻¹F₀), which implies (1/√n) F₀′ ε is bounded in probability, and 2) 2/√n = o(1).

A second important result used in the characterization that follows is:

𝒥₀ = (2/n) F₀′ Ω̂⁻¹ F₀
   = (2/n) F₀′ [Ω₀⁻¹ + o_p(1/√n)] F₀
   = (2/n) F₀′ Ω₀⁻¹ F₀ + (2/n) F₀′ F₀ · o_p(1/√n)


   = (2/n) F₀′ Ω₀⁻¹ F₀ + o_p(1/√n) .   (2.36)

The last line follows from an application of Slutsky's theorem using the fact that (2/n) F₀′ F₀ converges uniformly to a fixed real-valued matrix [Gallant, 1987; Theorem 5, p. 189].

Theorem 7: Under the regularity conditions stated in §2.3 above, consider model (2.1) with general Σ₀. Then

θ̂ = θ₀ + [F₀′ Ω₀⁻¹ F₀]⁻¹ F₀′ Ω₀⁻¹ ε + o_p(1/√n) .   (2.37)

Proof: (By extension of Gallant [1987; Chapter 4, §3].) Assume that θ̂ and θ₀ are contained in Θ. By Taylor's theorem [in Gallant, 1987; p. 13],

√n (∂/∂θ)s(θ̂) = √n (∂/∂θ)s(θ₀) + √n (∂²/∂θ∂θ′)s(θ̄) (θ̂ − θ₀) ,

in which θ̄ = αθ̂ + (1−α)θ₀ for 0 ≤ α ≤ 1 and (∂²/∂θ∂θ′)s(θ̄) = (∂²/∂θ∂θ′)s(θ)|_{θ=θ̄}. Then θ₀ = θ̄ + o_s(1) and θ̂ = θ̄ + o_s(1), by Theorem 5 in Gallant [1987; p. 189]. Here o_s(·) is used to indicate almost sure convergence. Then it is also true that

𝒥(θ̄) = (∂²/∂θ∂θ′)s(θ̄) = 𝒥₀ + o_s(1) .

By Lemma 2 of Chapter 3 in Gallant [1987] we may assume without loss of generality that √n (∂/∂θ)s(θ̂) = o_p(1). Substituting these equalities into the above and completing the square with 𝒥₀, we obtain

o_p(1) = √n (∂/∂θ)s(θ₀) + [𝒥₀ + o_s(1)] √n (θ̂ − θ₀)
       = √n (∂/∂θ)s(θ₀) + [𝒥(θ̄) − 𝒥₀ + o_s(1)] √n (θ̂ − θ₀) + 𝒥₀ √n (θ̂ − θ₀) .

Let O_p(·) denote bounded in probability. Then [𝒥(θ̄) − 𝒥₀ + o_s(1)] = o_s(1), and the convergence in distribution of √n(θ̂ − θ₀) to a normal random vector with mean zero implies that √n(θ̂ − θ₀) = O_p(1). Hence [𝒥(θ̄) − 𝒥₀ + o_s(1)] √n(θ̂ − θ₀) = o_p(1). Simplification of the above expression gives

𝒥₀ √n (θ̂ − θ₀) = −√n (∂/∂θ)s(θ₀) + o_p(1) .

There is an n′ such that for n > n′ the inverse of 𝒥₀ exists, giving

√n (θ̂ − θ₀) = −√n 𝒥₀⁻¹ (∂/∂θ)s(θ₀) + o_p(1) .


Substituting for 𝒥₀ and (∂/∂θ)s(θ₀) from (2.36) and (2.35) and applying Slutsky's theorem, we obtain

√n (θ̂ − θ₀) = √n [(2/n) F₀′ Ω₀⁻¹ F₀ + o_p(1/√n)]⁻¹ [(2/n) F₀′ Ω₀⁻¹ ε + o_p(1/√n)] + o_p(1)
            = √n [(F₀′ Ω₀⁻¹ F₀)⁻¹ F₀′ Ω₀⁻¹ ε + o_p(1/√n)] + o_p(1) .

Finally, rearranging terms gives (2.37). □

Corollary 7.1: Under the regularity conditions stated in §2.3 above, consider model (2.1) in which Σ₀ is compound-symmetric. Then

θ̂ = θ₀ + (F₀′ Δ₀⁻¹ F₀)⁻¹ F₀′ Δ₀⁻¹ ε* + o_p(1/√n) ,   (2.38)

in which Δ₀ = [I_n ⊗ Dg(λ₀)] and λ₀ contains the eigenvalues of Σ₀.

Proof: Recognize that θ₀ may be estimated equivalently from either the transformed or the untransformed model. For convenience, consider the transformed model, with covariance structure Δ₀. Substituting Δ₀ = [I_n ⊗ Dg(λ₀)] for Ω₀ and ε* for ε in (2.37) completes the proof. □

Two lemmas follow from the nature of the asymptotic covariance matrix. Although their proofs are trivial, these lemmas are essential because they provide a sufficient condition to ensure the asymptotic independence of certain estimators. Functions of these estimators are used in Chapter 3 to construct test statistics. The characterization of the resulting test statistics relies on the asymptotic independence of the component parts.

Lemma 1: Under the regularity conditions stated in §2.3, consider model (2.1) with Σ₀ compound-symmetric. Applying the transformation from Corollary 1.2, the maximum likelihood estimates θ̂ and λ̂, of θ₀ and λ₀, are asymptotically independent.

Proof: Since θ̂ and λ̂ are MLEs, they are asymptotically normal with asymptotic covariance as in (2.34). Applying the definition of asymptotic independence used by Kendall and Stuart [1970; V.2, p. 56], i.e., asymptotic normality and zero covariance, independence follows. □


Lemma 2: Under the regularity conditions stated in §2.3, consider model (2.1) with Σ₀ compound-symmetric. Applying the transformation from Corollary 1.2, the maximum likelihood estimates λ̂1 and λ̂2, of λ₀1 and λ₀2, are asymptotically independent.

Proof: See the proof of Lemma 1 above. □

The large sample normality of λ̂1 and λ̂2 is most likely misleading for smaller samples. It is more typical to approximate variances, or more accurately, scaled sums of squares, with χ² distributions. Recall that a χ² variate with degrees of freedom equal to ν converges to a normal as ν goes to infinity. In this way, the χ² limit distribution may be thought of as a "smaller sample" result. The following two theorems provide the rationale for the χ² limit distributions for λ̂1 and λ̂2. For convenience, partition ε̂* = (ε̂*₁′, ε̂*₂′)′ such that ε̂*₁ contains the n residuals corresponding to the average transformed responses and ε̂*₂ contains the n(p−1) residuals corresponding to the trend transformed responses. Similarly, consider such partitions for the transformed responses, model equations, and errors.

Theorem 8: Under the regularity conditions stated in §2.3 above, consider model (2.1) with compound-symmetric Σ₀, and transform the model as in Corollary 1.2. Then

nλ̂1 = ε*₁′ ε*₁ + o_p(1)   (2.39)

and

n(p−1) λ̂2 = ε*₂′ ε*₂ + o_p(1) .   (2.40)

Proof: Recall that ε̂*₁ = y*₁ − f*₁(θ̂) is a function of θ̂, and θ̂ is consistent for θ₀. Hence, by Serfling [1980; p. 24], nλ̂1 = ε̂*₁′ ε̂*₁ = ε*₁′ ε*₁ + o_p(1). A similar argument may be constructed for (2.40). □


Theorem 9: Under the conditions of Theorem 8,

nλ̂1 ∼ λ1 χ²(n) , asymptotically,   (2.41)

and

n(p−1) λ̂2 ∼ λ2 χ²[n(p−1)] , asymptotically.   (2.42)

Proof: Write nλ̂1/λ1 = ε*₁′ A ε*₁ + o_p(1), in which A = (1/λ1) I_n. The proof is completed by recalling that ε*₁ ∼ N_n(0, λ1 I_n) and that λ1 A = I_n is idempotent, so that Theorem 2 from Searle [1971] applies, giving (2.41). A similar argument may be constructed for (2.42). □
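A small Monte Carlo sketch, with illustrative settings only, shows the χ² behavior claimed in (2.41) in the simplest case where θ is known, so that the residuals equal the errors:

```python
import numpy as np

# With theta known, n * lambda_1_hat / lambda_1 = e1'e1 / lambda_1 is an
# exact chi-square(n) variate, so its simulated mean and variance should be
# near n and 2n respectively.
rng = np.random.default_rng(1)
n, lam1, reps = 25, 5.0, 20000

e1 = rng.normal(0.0, np.sqrt(lam1), (reps, n))   # epsilon*_1 ~ N_n(0, lam1 I_n)
q = (e1 ** 2).sum(axis=1) / lam1                 # n * lambda_1_hat / lambda_1

assert abs(q.mean() - n) < 0.2       # chi-square(n) mean is n
assert abs(q.var() - 2 * n) < 2.0    # chi-square(n) variance is 2n
```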

2.6 Bias Approximations for the Parameter Estimates

A bias approximation for θ̂ was developed by M. J. Box [1971]. His derivations are quite general and include both spherical and non-spherical nonlinear models. In particular, the results for a heteroscedastic model are directly applicable to the transformed compound-symmetric model considered here. The derivations are based on a second order Taylor series approximation to the nonlinear expected value function. This type of bias approximation is intuitively appealing, since it is well known that measures of nonlinearity (in particular the parameter-effects nonlinearity) are closely related to the second order term of a Taylor series approximation. The bias in θ̂ may be defined as b = E(θ̂ − θ₀). In addition, let F = F(θ)|_{θ=θ̂} and let U_i be the (q × q) matrix of second partial derivatives of f(x_i, θ) with respect to θ, evaluated at θ = θ̂. Here tr(·) is used to denote the trace of a matrix. Then define m = {m_i}, i ∈ {1, 2, ..., np}, to be an (np × 1) vector with m_i = tr{[F′Δ̂⁻¹F]⁻¹ U_i}. An approximation for the bias in θ̂, to second order in (θ̂ − θ₀), was found by M. J. Box as

b ≈ −(1/2) [F′Δ̂⁻¹F]⁻¹ F′Δ̂⁻¹ m .   (2.43)

An application of (2.43) that is relevant here involves estimation of λ₀. It is possible that (2.43) may be used to improve the accuracy of estimation of λ₀. Two


means to this end include 1) bias correction of the predicted value vector f(θ̂), by using a bias-corrected version of θ̂, and hence of the residuals upon which computation of λ̂ is based, and 2) direct application of (2.43) to a Taylor series expansion of λ̂, as a function of θ̂, about θ₀. Since the bias in θ̂ is zero asymptotically, bias-corrected estimates of λ₀ retain the same asymptotic properties as the uncorrected estimators. Bias corrections will not be pursued here, although they offer a possible means to improvement in variance estimation as well as obvious potential for improvement in estimation of θ₀.
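The bias approximation may be sketched as follows; the weighted form of Box's expression and the exponential mean function are assumptions for illustration, not this work's computations:

```python
import numpy as np

# Sketch of b ~= -(1/2)(F'WF)^{-1} F'W m with m_i = tr{(F'WF)^{-1} U_i},
# where W is the inverse error covariance (diagonal here), F the (N x q)
# Jacobian, and U_i the (q x q) Hessian of the i-th mean function.
# Illustrated for the hypothetical mean function f(x, theta) = exp(theta x).
def box_bias(x, theta, w):
    F = (x * np.exp(theta * x))[:, None]              # Jacobian, q = 1
    c = np.linalg.inv(F.T @ (w[:, None] * F))[0, 0]   # (F'WF)^{-1}, scalar
    U = x ** 2 * np.exp(theta * x)                    # 1x1 Hessians of f_i
    m = c * U                                         # m_i = tr{(F'WF)^{-1} U_i}
    return -0.5 * c * float(F[:, 0] @ (w * m))

x = np.linspace(0.1, 2.0, 10)
bias = box_bias(x, 0.5, np.ones_like(x))   # a small O(1/n) quantity
assert np.isfinite(bias)
```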


Chapter 3

INFERENCE FOR COMPLETE DATA

3.1 Introduction

Current methods for hypothesis testing in multivariate nonlinear models are

inadequate in two ways. First, many methods are limited to particular types of

hypotheses or models. Second, the more general methods often do not work well in

small samples. Methods are proposed here which are expected to provide more accurate small-sample hypothesis testing than existing methods. The new methods are

essentially a very general extension of the univariate approach to repeated measures

from linear models to nonlinear models. In simulation studies presented in a later

chapter, the new methods will be compared to existing general methods [Gallant,

1987]. A gain in accuracy is expected to be achieved by correctly modelling the

covariance structure, namely one of compound symmetry. A brief review of the

situations for which the new methods will be appropriate is helpful.

Model-based inference procedures may be classified according to whether they

are applicable to linear and/or nonlinear hypotheses for linear and/or nonlinear models.

Hence, the term "nonlinear" may refer to either 1) a type of hypothesis or 2) a type of

model. In this context, the "usual" multivariate statistics, i.e. for the general linear

hypothesis for the GLMM as in (1.7) (see, for example, Chapter 8 in Hocking, 1985),

involve a very restricted set of circumstances: both linear hypotheses and linear

models. The methods evaluated herein are appropriate for possibly nonlinear models

with possibly nonlinear hypotheses.


The new test statistics proposed below, as well as those attributable to Gallant

[1987], are based on F approximations to the classic Wald and likelihood ratio

statistics. The basic strategy here will be to construct the test statistics of interest as

ratios of asymptotically independent quadratic forms which correspond to hypothesis

and error sums of squares, denoted SSH and SSE respectively. This eases comparison of the new statistics to those of Gallant.

The new statistics may be viewed as modifications of Gallant's; however, they are motivated very differently. The difference in motivation revolves around error

variance estimation. Briefly, the distinction between the two approaches derives from

Gallant's transformation of the multivariate nonlinear model to one with approximately N(0,1) errors. The transformation is defined by an estimated weight matrix, so that the transformation is only approximate. Hence distributional properties

of the transformed errors, particularly their independence, are only approximate.

Furthermore, the original scale of the data is lost. Thus the concept of "error

variance" (a scale parameter) is somewhat unnatural in the multivariate methods

proposed by Gallant. In contrast, when $\Sigma_0$ is compound-symmetric, the methods proposed here involve an exact transformation to independence and use an error variance estimate for $\sigma^2$, the true variance of the data. When $\Sigma_0$ is not compound-symmetric, the new methods are still applicable in the spirit of a generalized univariate approach to repeated measures for nonlinear models. A more detailed comparison of the motivations for the new approach and that of Gallant is contained in §3.2.

Derivation of the sum of squares estimators of Gallant [1987] and of those being proposed here is provided in §3.2 and §3.3. The proposed estimators are characterized for general $\Sigma_0$, with $\Omega_0 = I_n \otimes \Sigma_0$. The results are easily specialized to the case of compound-symmetric $\Sigma_0 = \Sigma_{cs}$ by simply replacing $\Sigma_0$ with $\Sigma_{cs}$ throughout.

Moreover, when the exact transformation (and subsequent estimation) method of Chapter 2 for compound-symmetric covariance is used, the transformed error covariance may be written $\Sigma_* = \mathrm{Dg}(\Sigma_0)$, with $\Omega_* = I_n \otimes \mathrm{Dg}(\Sigma_0)$. Thus, these substitutions may be made for $\Sigma_0$ and $\Omega_0$, respectively, to incorporate the estimation method for compound symmetry, described in Chapter 2, into the inference procedures of the present chapter.
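As a numerical illustration of the diagonal structure these substitutions exploit, the sketch below (hypothetical values, with numpy assumed available) builds a compound-symmetric matrix of the standard form $\sigma^2[(1-\rho)I_p + \rho J_p]$ and checks that an orthonormal matrix whose first column is the normalized unit vector reduces it to its diagonal form with the two familiar eigenvalues.

```python
import numpy as np

def compound_symmetric(p, sigma2, rho):
    """Standard compound-symmetric covariance (illustrative assumption)."""
    return sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

p, sigma2, rho = 4, 2.0, 0.3
Sigma = compound_symmetric(p, sigma2, rho)

# Any orthonormal matrix whose first column is 1/sqrt(p) times the unit
# vector diagonalizes Sigma; build one via QR with that first column.
M = np.column_stack([np.ones(p) / np.sqrt(p), np.eye(p)[:, :p - 1]])
Q, _ = np.linalg.qr(M)

Dg = Q.T @ Sigma @ Q          # transformed covariance is diagonal

lam1 = sigma2 * (1 + (p - 1) * rho)   # eigenvalue for the average component
lam2 = sigma2 * (1 - rho)             # eigenvalue (multiplicity p-1) for trends

assert np.allclose(Dg, np.diag([lam1] + [lam2] * (p - 1)))
```

The assertion confirms that the transformed covariance is $\mathrm{Dg}(\Sigma_0)$ with one "average" eigenvalue and $p-1$ equal "trend" eigenvalues, which is exactly the structure the Chapter 2 estimation method relies on.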

Some further notation and an additional assumption must be added. Consider hypotheses of the following form:

    $H_0$: $h(\theta_0) = 0$   vs.   $H_a$: $h(\theta_0) \neq 0$ ,     (3.1)

in which the function $h(\cdot)$ is a once continuously differentiable mapping $\mathbb{R}^q \to \mathbb{R}^s$, so that $r \equiv q - s$ is the dimension of the parameter space under the constraint imposed by $H_0$. Let $H(\theta) = (\partial/\partial\theta')h(\theta)$ denote the Jacobian of the function $h(\theta)$. An additional assumption will be added at this point which imposes full rank on $H$ and on the Fisher information matrix. This requirement is not strictly necessary, because the less than full rank case can be accommodated in much the same fashion as for linear models. However, tolerating less than full rank introduces substantial notational and proof complexity which detracts from the major results to be developed here.

Assumption 13: [in Gallant, 1987; p. 219]. The function $h(\theta)$ that defines the null hypothesis $H_0$: $h(\theta_0) = 0$ is a once continuously differentiable mapping of the estimation space into $\mathbb{R}^s$. Its Jacobian $H(\theta)$ has full rank at $\theta = \theta_0$. The Fisher information matrix has full rank. The statement "the null hypothesis is true" means that $h(\theta_0) = 0$ for all $n$.

3.2 Error Variance Estimation

With respect to Type I error rate, it is generally accepted that F approximations perform better than $\chi^2$ approximations for hypothesis testing in univariate nonlinear models [Gallant, 1987, Ch. 5; Milliken and DeBruin, 1978]. Typically F approximations are constructed by dividing the classic Wald and likelihood ratio $\chi^2$ statistics by a suitable independent $\chi^2$ denominator. Two approaches to choosing a denominator $\chi^2$ are considered here; both approaches may be viewed as choosing an error variance estimator. The first approach uses a divisor which is approximately equal to one, i.e., the estimated variance of a unit normal [Gallant, 1987]. The second approach, to be proposed here, seeks an estimate of a univariate measure of error variance, in the scale of the data. This estimator may be adjusted when independence and/or variance homogeneity are violated. Hence, this second approach is consistent with a univariate approach to repeated measures.

Gallant's improvement to the usual asymptotic $\chi^2$ statistics was motivated by the desire to compensate for sampling variation in having to estimate $\Sigma_0$. To this end, Gallant exploited a byproduct of the AWLS estimation process, namely standardized residuals. Standardized residuals are obtained from a model which has been transformed by a Cholesky factor of the sample covariance matrix. The mean square of the standardized residuals essentially estimates unity. In contrast, the improvement to the usual asymptotic $\chi^2$ statistics proposed here involves using the mean square of the unstandardized residuals. In both cases, as will be shown, these estimators are approximately distributed as $\chi^2$ and asymptotically independent of the Wald and likelihood ratio based $\chi^2$ numerators.

In order to clarify the two mean squared error estimators, define a Cholesky decomposition of $\hat\Sigma$ as $\hat\Sigma = \tilde U'\tilde U$, in which $\tilde U$ is upper triangular. The tilde is used to emphasize that this factor, and hence the following transformation, is approximate. In general, $\hat\Sigma$ may be any consistent estimate of $\Sigma_0$. Hence, in the theory that follows, it is sufficient to consider $\hat\Sigma$ computed from the OLS residuals of an untransformed model. Note that AWLS estimation is equivalent to OLS estimation of the model transformed by $I_n \otimes \tilde U^{-T}$. Specifically,

    $s(\theta) = (1/np)[y - f(\theta)]'[I_n \otimes \hat\Sigma^{-1}][y - f(\theta)]$
    $= (1/np)[y - f(\theta)]'[I_n \otimes \tilde U^{-T}]'[I_n \otimes \tilde U^{-T}][y - f(\theta)]$
    $= (1/np)[y_{**} - f_{**}(\theta)]'[y_{**} - f_{**}(\theta)]$ ,

in which the subscript "**" denotes transformation by the estimated (Cholesky) factor matrix, as distinct from the subscript "*" which will be used throughout to identify the exact (orthonormal) transformation described in Corollary 2.1. The following two sets of residuals are defined:

    1) standardized,    $\hat e_{**} = y_{**} - f_{**}(\hat\theta) = [I_n \otimes \tilde U^{-T}][y - f(\hat\theta)]$ ,     (3.2)

and

    2) unstandardized,  $\hat e = y - f(\hat\theta)$ .     (3.3)
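The contrast between (3.2) and (3.3) can be sketched in a small simulation. The mean model is omitted for brevity (so residuals coincide with errors), and $\hat\Sigma$ is computed directly from the simulated errors; these simplifications are illustrative assumptions, not the estimation procedure of Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
rho, sigma2 = 0.4, 2.0
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

# Simulate zero-mean errors with covariance Sigma; rows e_i ~ N(0, Sigma).
L = np.linalg.cholesky(Sigma)            # Sigma = L L' (L plays the role of U')
E = rng.standard_normal((n, p)) @ L.T

Sigma_hat = E.T @ E / n                  # consistent estimate of Sigma
Lhat = np.linalg.cholesky(Sigma_hat)

E_std = np.linalg.solve(Lhat, E.T).T     # standardized residuals, cf. (3.2)
E_unstd = E                              # unstandardized residuals, cf. (3.3)

# Mean square of standardized residuals estimates 1; mean square of the
# unstandardized residuals estimates the variance in the original metric.
ms_std = (E_std ** 2).mean()
ms_unstd = (E_unstd ** 2).mean()
assert abs(ms_std - 1.0) < 1e-8
```

Because $\hat\Sigma$ here is the raw second-moment matrix of the same residuals being standardized, the standardized mean square equals one exactly, illustrating why "error variance" loses its original scale under Gallant's transformation.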

Regardless of the nature of $\Sigma_0$, the mean squared error computed from (3.2) estimates a unit variance. However, the interpretation of the mean squared error computed from (3.3) depends on the nature of $\Sigma_0$. With compound-symmetric covariance and an estimation procedure which utilizes the orthonormal transformation of Corollary 2.1, the mean squared error computed from (3.3) estimates an unstandardized error variance, i.e., the variance of the model in the original metric. For general covariance, this proposed estimator has no simple interpretation.

The following three theorems provide asymptotic characterizations of 1) a standardized (unit) error variance estimator, 2) a generalized unstandardized error variance estimator for general $\Sigma_0$, and 3) an unstandardized error variance estimator when $\Sigma_0 = \Sigma_{cs}$.

Theorem 10: (Gallant's estimator of standardized (unit) error variance). Under the regularity conditions of §2.2, consider model (2.1) with general $\Sigma_0$. An estimator of standardized error variance is provided by

    $s_*^2 = s(\hat\theta)\,np/(np-q) = \hat e_{**}'\hat e_{**}/(np-q)$ ,     (3.4)

in which $\hat\theta$ and $\hat\Sigma$ are the ML estimates of $\theta_0$ and $\Sigma_0$. Then

    (i)  $(np-q)s_*^2 = Q_* + o_p(1)$ ,     (3.5)

and

    (ii) $Q_* \sim \chi^2[np-q]$ ,

in which

    $Q_* = e'A_1 e$     (3.6)

and

    $A_1 = \Omega_0^{-1} - \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}$ .

Proof of (i): (By extension of the univariate argument in Gallant [1987; Ch. 4]). By Taylor's theorem,

    $np\,s(\theta_0) = np\,s(\hat\theta) - np[(\partial/\partial\theta)s(\hat\theta)]'(\hat\theta - \theta_0) + (np/2)(\hat\theta - \theta_0)'[(\partial^2/\partial\theta\,\partial\theta')s(\bar\theta)](\hat\theta - \theta_0)$ ,     (3.7)

in which $\bar\theta = \alpha\hat\theta + (1-\alpha)\theta_0$ for some $0 \le \alpha \le 1$. Recall that $\sqrt{np}\,(\partial/\partial\theta)s(\hat\theta) = o_p(1)$ and $(\partial^2/\partial\theta\,\partial\theta')s(\bar\theta) = \mathcal{J}_0 + o_p(1)$, in which $\mathcal{J}_0 = (2/np)F_0'\Omega_0^{-1}F_0$. Making these substitutions in (3.7) and rewriting gives

    $np[s(\theta_0) - s(\hat\theta)] = -o_p(1)\,\sqrt{np}\,(\hat\theta - \theta_0) + (np/2)(\hat\theta - \theta_0)'[\mathcal{J}_0 + o_p(1)](\hat\theta - \theta_0)$ .

Noting that $\sqrt{np}\,(\hat\theta - \theta_0) = O_p(1)$, this reduces to

    $np[s(\theta_0) - s(\hat\theta)] = (np/2)(\hat\theta - \theta_0)'\mathcal{J}_0(\hat\theta - \theta_0) + o_p(1)$ .

Applying Theorem 7 to $\hat\theta - \theta_0$ and replacing $\mathcal{J}_0 = (2/np)F_0'\Omega_0^{-1}F_0$,

    $np[s(\theta_0) - s(\hat\theta)] = (np/2)\{(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})\}'$
                                      $\{(2/np)F_0'\Omega_0^{-1}F_0 + o_p(1/\sqrt{n})\}$
                                      $\{(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})\} + o_p(1)$
                                    $= e'\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1)$ .

Rearranging and substituting

    $np\,s(\theta_0) = e'\hat\Omega^{-1}e = e'[\Omega_0^{-1} + O_p(1/\sqrt{n})]e$ ,

then

    $s(\hat\theta) = (1/np)\{e'\Omega_0^{-1}e - e'\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e\} + o_p(1/n)$
                  $= (1/np)\,e'\{\Omega_0^{-1} - \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}\}e + o_p(1/n)$ .

Multiplying the right hand side by $np/(np-q) = 1 + o(1)$ we obtain (3.5).

Proof of (ii): Write $Q_* = e'A_1 e$, in which $A_1$ is symmetric and $A_1\Omega_0$ is idempotent. Then by Theorem 2 in Searle [1971; section 2.5], $e'A_1 e \sim \chi^2[\mathrm{rk}(A_1\Omega_0)]$. Finally, the proof is completed by observing that

    $\mathrm{rk}(A_1\Omega_0) = \mathrm{tr}(A_1\Omega_0)$
    $= \mathrm{tr}[I_{np} - \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0']$
    $= \mathrm{tr}(I_{np}) - \mathrm{tr}[\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0']$
    $= np - \mathrm{tr}(I_q)$
    $= np - q$ .   □
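The idempotency and trace arguments in the proof of (ii) are easy to verify numerically. The sketch below uses a hypothetical Jacobian $F_0$ and covariance $\Omega_0 = I_n \otimes \Sigma_0$ with arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 6, 3, 4
npq = n * p

# Hypothetical Jacobian F0 (np x q) and Omega0 = I_n (kron) Sigma0.
F0 = rng.standard_normal((npq, q))
Sigma0 = np.array([[2.0, 0.6, 0.6],
                   [0.6, 2.0, 0.6],
                   [0.6, 0.6, 2.0]])
Omega0 = np.kron(np.eye(n), Sigma0)
Oinv = np.linalg.inv(Omega0)

# A1 = Omega0^{-1} - Omega0^{-1} F0 (F0' Omega0^{-1} F0)^{-1} F0' Omega0^{-1}
middle = np.linalg.inv(F0.T @ Oinv @ F0)
A1 = Oinv - Oinv @ F0 @ middle @ F0.T @ Oinv

B = A1 @ Omega0
assert np.allclose(B @ B, B)             # A1 * Omega0 is idempotent
assert np.isclose(np.trace(B), npq - q)  # rank = trace = np - q
```

The check mirrors the final step of the proof: the trace of the idempotent matrix $A_1\Omega_0$ equals $np - q$, the degrees of freedom of $Q_*$.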

An alternate asymptotic distributional result is obtained by applying the following argument to (3.4). By using the fact that $\hat\theta$ and $\hat\Sigma$ are jointly consistent for $\theta_0$ and $\Sigma_0$ [Barnett, 1976], it can be shown that $(np-q)s_*^2 = e'\Omega_0^{-1}e + o_p(1)$, and hence $(np-q)s_*^2$ is asymptotically distributed as $\chi^2[np]$. This type of argument relies only on the consistency of $\hat\theta$ for $\theta_0$, rather than the $\sqrt{n}$ characterization provided in Theorem 7. Of course the two characterizations of $(np-q)s_*^2$ are asymptotically equivalent, because in large samples $np$ and $np-q$ are indistinguishable. In practice it is preferable to use the results based on $np-q$, because the correction in degrees of freedom for estimation of $\theta_0$ has been shown to provide a better small sample approximation to the error variance in a variety of situations [see, for example, Harville, 1977]. Of course, for linear models, $np-q$ provides exact results.

In general, any quadratic form $Q = z'Az$, with $z \sim N(\mu, V)$ and $A$ symmetric, is distributed exactly as a weighted sum of independent noncentral chi-squares [Johnson and Kotz, 1970; Ch. 29]. The weights are the eigenvalues of $AV$. Exact probabilities of any quadratic form in independent normal variables may be computed using one of several algorithms [Davies, 1980]. However, these algorithms are computer intensive and they have not been evaluated with respect to the use of estimated weights. For the applications considered here, the weights must be estimated. It will be shown that the approximate asymptotic distribution of an unstandardized error variance estimator may be obtained using a method of moments approach. A brief review of such an approach for approximating the distribution of a quadratic form in normal variables is helpful.

Consider $Q$ as above. The distribution of $Q$ may be approximated by a single scaled noncentral chi-square, so that $Q \mathrel{\dot\sim} c\,\chi^2[\nu, \omega]$, in which $c$, $\nu$ and $\omega$ are obtained by equating these to the moments of the quadratic form $Q$. An evaluation of the accuracy of method of moments approximations to quadratic forms related to analysis of variance applications is available in Box [1954a and b].

It is sufficient for the present applications to consider the case of a central quadratic form, i.e., $\mu = 0$. Applying the formulae for the first and second moments of a central chi-square variate with degrees of freedom $\nu$ (see, for example, Ch. 28, section 4 of Johnson and Kotz, 1970) we have

    $E(Q) \doteq c\nu$     (3.8)

and

    $V(Q) \doteq 2c^2\nu$ .     (3.9)

The first and second central moments of $Q$ may be found exactly as

    $E(Q) = \mathrm{tr}(AV)$     (3.10)

and

    $V(Q) = 2\,\mathrm{tr}[(AV)^2]$ .     (3.11)

Then (3.8) and (3.9) may be equated to (3.10) and (3.11), respectively, and solved simultaneously for $c$ and $\nu$ in terms of functions of the traces of $AV$ and $(AV)^2$. This yields

    $c = \mathrm{tr}[(AV)^2] / \mathrm{tr}(AV)$     (3.12)

and

    $\nu = [\mathrm{tr}(AV)]^2 / \mathrm{tr}[(AV)^2]$ .     (3.13)

Note that, using the fact that the trace of a matrix is equal to the sum of its eigenvalues, one may choose to consider $c$ and $\nu$ as functions of the eigenvalues of $AV$. This was the approach taken by Geisser and Greenhouse [1958] in extending Box's results to the use of the F statistic in multivariate analyses. In practice, $V$ is not known, so that a consistent estimate of $V$ replaces $V$ in (3.12) and (3.13).
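Equations (3.12) and (3.13) translate directly into a small helper; the function name satterthwaite is an illustrative choice, not notation from the text.

```python
import numpy as np

def satterthwaite(A, V):
    """Method-of-moments scale c and degrees of freedom nu for Q = z'Az,
    z ~ N(0, V), matching equations (3.12) and (3.13)."""
    AV = A @ V
    tr1 = np.trace(AV)
    tr2 = np.trace(AV @ AV)
    return tr2 / tr1, tr1 ** 2 / tr2   # c, nu

# Sanity check against a known case: A = V^{-1} gives Q ~ chi^2[dim] exactly,
# so the approximation should return c = 1 and nu = dim.
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])
c, nu = satterthwaite(np.linalg.inv(V), V)
assert np.isclose(c, 1.0) and np.isclose(nu, 2.0)
```

When $AV$ is idempotent the approximation is exact; otherwise $c$ absorbs the scale and $\nu$ shrinks below the nominal dimension, exactly the behavior exploited in the theorems that follow.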

Theorem 11: (An estimator of unstandardized error variance). Under the regularity conditions of §2.2, consider model (2.1) with general $\Sigma_0$. An estimator of unstandardized error variance is provided by

    $s_u^2 = \hat e'\hat e/(np-q)$ ,     (3.14)

in which $\hat e = y - f(\hat\theta)$ and $\hat\theta$ is the ML estimate of $\theta_0$. Then

    (i)  $(np-q)s_u^2 = Q_u + o_p(1)$ ,     (3.15)

in which

    $Q_u = e'A_2 e$ ,

with

    $A_2 = [I_{np} - P_{\Omega_0}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0}\Omega_0^{-1}]$   and   $P_{\Omega_0} = F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'$ ;

and

    (ii) $Q_u \mathrel{\dot\sim} c_u\,\chi^2[\nu_u]$ ,

in which

    $c_u = \mathrm{tr}[(A_2\Omega_0)^2] / \mathrm{tr}(A_2\Omega_0)$     (3.16)

and

    $\nu_u = [\mathrm{tr}(A_2\Omega_0)]^2 / \mathrm{tr}[(A_2\Omega_0)^2]$ .     (3.17)

Proof of (i): Three algebraic properties of $P_{\Omega_0}$ aid simplification in the steps that follow:

    (a) $P_{\Omega_0}\Omega_0^{-1}P_{\Omega_0} = P_{\Omega_0}$ ,
    (b) $P_{\Omega_0}\Omega_0^{-1}$ is idempotent, and
    (c) $P_{\Omega_0}$ is symmetric.

Replacing $f(\hat\theta)$ by its first order Taylor series about $\theta_0$ for every fixed, finite sample size $n$,

    $(np-q)s_u^2 = [y - f(\theta_0) - F(\bar\theta)(\hat\theta - \theta_0)]'[y - f(\theta_0) - F(\bar\theta)(\hat\theta - \theta_0)]$
                $= [e - F(\bar\theta)(\hat\theta - \theta_0)]'[e - F(\bar\theta)(\hat\theta - \theta_0)]$
                $= e'e - e'F(\bar\theta)(\hat\theta - \theta_0) - (\hat\theta - \theta_0)'F'(\bar\theta)e + (\hat\theta - \theta_0)'F'(\bar\theta)F(\bar\theta)(\hat\theta - \theta_0)$ ,

in which $\bar\theta = \alpha\hat\theta + (1-\alpha)\theta_0$ with $0 \le \alpha \le 1$. Note that $F(\bar\theta) = F(\theta_0) + o_p(1)$, by applying §1.7 of Serfling [1980] and using the fact that $\hat\theta$ is consistent for $\theta_0$. Let $F_0 = F(\theta_0)$. Making the appropriate substitutions and applying Theorem 7 we have

    $(np-q)s_u^2 = e'e - e'[F_0 + o_p(1)][(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})]$
                 $- [e'\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1} + o_p(1/\sqrt{n})][F_0 + o_p(1)]'e$
                 $+ [e'\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1} + o_p(1/\sqrt{n})][F_0 + o_p(1)]'[F_0 + o_p(1)][(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})]$
                $= e'e - e'P_{\Omega_0}\Omega_0^{-1}e - e'\Omega_0^{-1}P_{\Omega_0}e + e'\Omega_0^{-1}P_{\Omega_0}P_{\Omega_0}\Omega_0^{-1}e + o_p(1)$
                $= e'[I_{np} - P_{\Omega_0}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0}\Omega_0^{-1}]e + o_p(1)$ ,

which establishes (3.15).

Proof of (ii): $Q_u$ is a quadratic form in the normal vector $e \sim N(0, \Omega_0)$, with $A_2 = [I_{np} - P_{\Omega_0}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0}\Omega_0^{-1}]$ symmetric, although $Q_u$ does not exactly follow a chi-square distribution because $A_2\Omega_0$ is not idempotent. Hence a method of moments approach may be applied to give an approximate distributional result. Then

    $\mathrm{tr}(A_2\Omega_0) = \mathrm{tr}\{[I_{np} - P_{\Omega_0}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0}\Omega_0^{-1}]\Omega_0\}$
    $= \mathrm{tr}\{[I_{np} - P_{\Omega_0}\Omega_0^{-1}]'[\Omega_0 - P_{\Omega_0}]\}$
    $= \mathrm{tr}[\Omega_0 - 2P_{\Omega_0} + P_{\Omega_0}\Omega_0^{-1}P_{\Omega_0}]$ ,

and similarly, an expression for $\mathrm{tr}[(A_2\Omega_0)^2]$ can be obtained. Substituting these expressions for $\mathrm{tr}(AV)$ and $\mathrm{tr}[(AV)^2]$ in (3.12) and (3.13) completes the proof. □

When $\Sigma_0 = \sigma^2 I_p$, the asymptotic distribution of the unstandardized error variance estimator simplifies to the well known result for the univariate nonlinear model, as the following corollary shows.

Corollary 11.1: Under the conditions of Theorem 11, if $\Sigma_0 = \sigma^2 I_p$ then $Q_u/\sigma^2 \sim \chi^2[np-q]$.

Proof: When $\Sigma_0 = \sigma^2 I_p$, $\Omega_0 = \sigma^2 I_{np}$ and $A_2$ reduces to $I_{np} - F_0(F_0'F_0)^{-1}F_0'$. Applying Theorem 2 from Searle [1971] to $Q_u/\sigma^2 = e'[(1/\sigma^2)A_2]e$, the proof is completed by observing that $(1/\sigma^2)A_2\Omega_0 = I_{np} - F_0(F_0'F_0)^{-1}F_0'$ is idempotent with rank $np - q$. □

When the covariance structure is assumed to be compound-symmetric, a weaker approximate distributional result may be obtained for the estimator of the error variance, $\sigma^2$, based on the unstandardized residual sum of squares. This result is obtained by using the fact that $\hat\theta$ is consistent for $\theta_0$, rather than using the $\sqrt{n}$ characterization of Theorem 7. Partition the residuals from the orthonormally transformed model into $\hat e_* = (\hat e_{1*}', \hat e_{2*}')'$, such that the first $n$ elements are the residuals corresponding to average transformed responses and the remaining $n(p-1)$ are those residuals corresponding to trend transformed responses.

Theorem 12: (An estimator of unstandardized error variance with $\Sigma_0 = \Sigma_{cs}$). Under the regularity conditions of §2.2, consider model (2.1), with $\Sigma_0 = \Sigma_{cs}$, transformed orthonormally as in Corollary 2.1. An estimator of unstandardized error variance is

    $s_{cs}^2 = \hat e_*'\hat e_*/(np-q)$ .     (3.18)

Then

    (i)  $(np-q)s_{cs}^2 = Q_{cs} + o_p(1)$ ,

and

    (ii) $Q_{cs} \mathrel{\dot\sim} c_{cs}\,\chi^2[\nu_{cs}]$ ,

in which

    $Q_{cs} = e_*'e_*$ ,

with

    $c_{cs} = [\lambda_{01}^2 + (p-1)\lambda_{02}^2] / [\lambda_{01} + (p-1)\lambda_{02}]$     (3.20)

and

    $\nu_{cs} = n[\lambda_{01} + (p-1)\lambda_{02}]^2 / [\lambda_{01}^2 + (p-1)\lambda_{02}^2]$ .     (3.21)

Proof of (i): This follows immediately from Theorem 8 in Chapter 2. Partitioning the residuals from the transformed model as above, write

    $(np-q)s_{cs}^2 = \hat e_{1*}'\hat e_{1*} + \hat e_{2*}'\hat e_{2*} = e_{1*}'e_{1*} + e_{2*}'e_{2*} + o_p(1)$ .

Proof of (ii): Using the method of moments approach described above, observe that $Q_{cs}$ may be written as a weighted sum of asymptotically independent chi-squared variates in $e_{1*}$ and $e_{2*}$. Using the fact that $\hat\lambda_1$ and $\hat\lambda_2$ are MLE's, we have

    $e_{1*}'e_{1*} \sim_a \lambda_{01}\,\chi^2[n]$

and

    $e_{2*}'e_{2*} \sim_a \lambda_{02}\,\chi^2[n(p-1)]$

in large samples, by Theorem 6 in Chapter 2, where the subscript "a" denotes "asymptotically." Equating $E\{c_{cs}\chi^2[\nu_{cs}]\} = c_{cs}\nu_{cs}$ and $V\{c_{cs}\chi^2[\nu_{cs}]\} = 2c_{cs}^2\nu_{cs}$ to the corresponding moments of $Q_{cs}$ and solving for $c_{cs}$ and $\nu_{cs}$ completes the proof. □

Again, the case of $\Sigma_0 = \sigma^2 I_p$ poses a special case of interest. It demonstrates that while the method of moments approach provides the correct asymptotic result for spherical covariance, in small samples it leads to liberal degrees of freedom.

Corollary 12.1: Under the conditions of Theorem 12, with $\Sigma_0 = \sigma^2 I_p$, $c_{cs} = \sigma^2$ and $\nu_{cs} = np$.

Proof: When $\Sigma_0 = \sigma^2 I_p$, then $\lambda_{01} = \lambda_{02} = \sigma^2$. Substituting these equalities into (3.20) and (3.21) one obtains $c_{cs} = \sigma^2$ and $\nu_{cs} = np$. □

Corollary 12.1 suggests that $\nu_{cs}$ in (3.21) ignores estimation of the parameters in $\theta_0$. One may interpret $\nu_{cs}$ as a parameter which describes the effective sample size. In practice, one might choose to use $\nu_{cs}' = (\nu_{cs} - q)$ degrees of freedom.
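Under the moment-matching forms stated in Theorem 12, the computation behind Corollary 12.1 can be sketched as follows (the helper name cs_moments is a hypothetical label, not notation from the text).

```python
import numpy as np

def cs_moments(lam1, lam2, n, p):
    """Scale c_cs and df nu_cs for Q_cs = lam1*chi2[n] + lam2*chi2[n(p-1)],
    per the method-of-moments forms of Theorem 12 (eqs. 3.20-3.21)."""
    c = (lam1**2 + (p - 1) * lam2**2) / (lam1 + (p - 1) * lam2)
    nu = n * (lam1 + (p - 1) * lam2)**2 / (lam1**2 + (p - 1) * lam2**2)
    return c, nu

# Spherical case (lam1 = lam2 = sigma^2) recovers c = sigma^2, nu = n*p,
# as in Corollary 12.1 -- note that nu ignores the q estimated parameters,
# which motivates the adjusted nu' = nu - q suggested above.
c, nu = cs_moments(2.0, 2.0, n=15, p=4)
assert np.isclose(c, 2.0) and np.isclose(nu, 60.0)
```

With unequal eigenvalues (non-spherical compound symmetry), $\nu_{cs}$ falls below $np$, shrinking the effective sample size exactly as the Geisser-Greenhouse style correction would.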


3.3 Development of Hypothesis Sums of Squares

In this section two hypothesis sums of squares are considered. These form the

basis of the Wald and likelihood ratio test statistics. Although asymptotically equivalent, these two statistics perform very differently with respect to Type I error rate for nonlinear models with small samples. In particular, the Wald test tends to perform worse than the likelihood ratio test. Furthermore, the Wald test performs very poorly in models possessing large parameter-effects curvature. Theorems 13 and 14 extend univariate arguments in Gallant [1987] to the multivariate case of a Wald based hypothesis sum of squares. Theorem 15 provides an analogous unstandardized approximator to a Wald based SSH. Theorem 16 provides an asymptotic characterization of a likelihood ratio numerator using an application of Theorem 10. Theorem 17 provides an alternative formulation of a likelihood ratio numerator and $\chi^2$ test statistic based on the unstandardized error variance estimator.

Theorem 13: Under the regularity conditions of §2.2 and Assumption 13 above, consider model (2.1) with general $\Sigma_0$. Defining $\hat h = h(\theta)|_{\theta=\hat\theta}$, then

    $h(\hat\theta) = h(\theta_0) + H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})$ .     (3.22)

Proof: (adapted from Gallant, 1987, p. 260-261). By Taylor's theorem,

    $\sqrt{n}\,h(\hat\theta) = \sqrt{n}\,h(\theta_0) + \bar H\sqrt{n}\,(\hat\theta - \theta_0)$ ,

in which $\bar H$ has rows $(\partial/\partial\theta')h_l(\theta)|_{\theta = \bar\theta_l}$, with $\bar\theta_l = \alpha_l\hat\theta + (1-\alpha_l)\theta_0$ for $0 \le \alpha_l \le 1$ and $l \in \{1, 2, \ldots, s\}$. Then $\bar H = H_0 + o_p(1)$, and applying Theorem 7 from Chapter 2,

    $\sqrt{n}\,h(\hat\theta) = \sqrt{n}\,h(\theta_0) + H_0\sqrt{n}\,[(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})] + o_p(1)$
                             $= \sqrt{n}\,h(\theta_0) + H_0\sqrt{n}\,(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1)$ .

Dividing both sides of the above equation by $\sqrt{n}$ gives (3.22). □

Theorem 14: Under the conditions of Theorem 13, define a Wald based hypothesis sum of squares as

    $SSH_W = h(\hat\theta)'[\hat H(\hat F'\hat\Omega^{-1}\hat F)^{-1}\hat H']^{-1}h(\hat\theta)$ ,     (3.23)

in which $\hat H = H(\theta)|_{\theta=\hat\theta}$. Then

    (i)  $SSH_W = Q_W + o_p(1)$ ,     (3.24)

in which

    $Q_W = z'A_3 z$ ,

with

    $z = h(\theta_0) + H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e$

and

    $A_3 = [H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}$ ;

and

    (ii) $Q_W \sim \chi^2[s, \omega]$ ,

in which the noncentrality parameter $\omega$ is

    $\omega = h(\theta_0)'[H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}h(\theta_0)/2$ .     (3.25)

Proof of (i): (adapted from Gallant [1987; Ch. 4]). Substituting (3.22) into (3.23), using the consistency of $\hat\theta$ for $\theta_0$ and of $\hat\Sigma$ for $\Sigma_0$, and then applying Slutsky's Theorem, one obtains

    $SSH_W = [h(\theta_0) + H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})]'$
             $\{[H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1} + o_p(1)\}$
             $[h(\theta_0) + H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1/\sqrt{n})]$
           $= z'A_3 z + o_p(1)$ ,

with $z$ and $A_3$ defined as above.

Proof of (ii): It is easy to show that $z \sim N_s(h(\theta_0), A_3^{-1})$ with $A_3$ symmetric and $A_3 A_3^{-1} = I_s$, so that applying Theorem 2 of Searle [1971], $z'A_3 z \sim \chi^2[s, \omega]$, with $\omega$ as in (3.25). □

An alternate approach to defining a hypothesis sum of squares involves omitting the covariance matrix $\hat\Omega^{-1}$ in (3.23) and identifying an approximating $\chi^2$ distribution for it as follows. In order to be consistent with notation previously developed, the subscript "u" will be appended to indicate an "unstandardized" form.

Theorem 15: Under the conditions of Theorem 13, define an unstandardized Wald based hypothesis sum of squares as

    $SSH_{Wu} = h(\hat\theta)'[\hat H(\hat F'\hat F)^{-1}\hat H']^{-1}h(\hat\theta)$ ,     (3.26)

in which $\hat H = H(\theta)|_{\theta=\hat\theta}$. Then

    (i)  $SSH_{Wu} = Q_{Wu} + o_p(1)$ ,

in which

    $Q_{Wu} = e'A_{3u} e$ ,     (3.27)

with

    $A_{3u} = \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}$ ,

and, under $H_0$,

    (ii) $Q_{Wu} \mathrel{\dot\sim} c_{Wu}\,\chi^2[\nu_{Wu}]$ ,

in which

    $c_{Wu} = \mathrm{tr}(\Upsilon^2) / \mathrm{tr}(\Upsilon)$     (3.28)

and

    $\nu_{Wu} = [\mathrm{tr}(\Upsilon)]^2 / \mathrm{tr}(\Upsilon^2)$ ,     (3.29)

with

    $\Upsilon = [H_0(F_0'F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'$ .

Proof of (i): This follows directly from Theorem 14 part (i), noting that $A_{3u}$ replaces $A_3$.

Proof of (ii): A method of moments approach may be used to approximate the distribution of (3.27), which is not exactly distributed as $\chi^2$ because $A_{3u}\Omega_0$ is not idempotent. We have

    $\mathrm{tr}(A_{3u}\Omega_0) = \mathrm{tr}\{[H_0(F_0'F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'\} = \mathrm{tr}(\Upsilon)$ .

Similarly it can be shown that $\mathrm{tr}[(A_{3u}\Omega_0)^2] = \mathrm{tr}(\Upsilon^2)$. Then $\mathrm{tr}(\Upsilon)$ and $\mathrm{tr}(\Upsilon^2)$ may be substituted for $\mathrm{tr}(AV)$ and $\mathrm{tr}[(AV)^2]$ in (3.12) and (3.13) to obtain $c_{Wu}$ and $\nu_{Wu}$ as above. □

Corollary 15.1: Under the conditions of Theorem 15, if $\Sigma_0 = \sigma^2 I_p$ then $c_{Wu} = \sigma^2$ and $\nu_{Wu} = s$.

Proof: If $\Sigma_0 = \sigma^2 I_p$, $\Omega_0 = \sigma^2 I_{np}$. Substituting for $\Omega_0$ in (3.28) and (3.29), one obtains $c_{Wu} = \sigma^2$ and $\nu_{Wu} = s$. □

In contrast to a Wald-based hypothesis sum of squares, which relies on the normality of $h(\hat\theta)$, one may define a hypothesis sum of squares using the likelihood ratio. The likelihood ratio, under normality of errors, is defined by the difference between error sums of squares from full and reduced models, denoted $s(f)$ and $s(r)$ respectively. Here, a reduced model is taken to be one which is formulated under the constraints of the null hypothesis. The classic likelihood ratio test for a univariate model may be written as $LR = n\,\ln(L_0/L_a)$, in which $L_0$ and $L_a$ denote the likelihoods under the null and alternative hypotheses. For a model with $q$ parameters an F approximation is obtained using the approximation $\ln(1+x) \approx x$:

    $LR \approx (n-q)\,\ln[sse(r)/sse(f)]$
    $= (n-q)\,\ln\{1 + [sse(r) - sse(f)]/sse(f)\}$
    $\approx (n-q)\,[sse(r) - sse(f)]/sse(f)$
    $= s\,F_{LR}$ ,

in which $LR \mathrel{\dot\sim} \chi^2_s$, where $s$ is the number of constraints imposed by $H_0$ and the $sse(\cdot)$ are residual sums of squares from reduced $(r)$ and full $(f)$ models. Gallant [1987] provided support for using $F_{s,n-q}$ to approximate the null distribution of $F_{LR}$ for nonlinear models. Gallant also extended the use of the F approximation to the multivariate model with $n$ observations and $p$ repeated measurements. However, the residual sums of squares used by Gallant in the multivariate extension are computed from standardized residuals. It is proposed here that unstandardized residuals be used. In what follows, $SSH_{L*}$ and $SSH_{Lu}$ will be used to denote approximators of the difference in reduced and full model, standardized and unstandardized, error sums of squares, respectively.
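The $\ln(1+x) \approx x$ expansion above can be sketched as a small helper; the function name f_lr and the sums of squares used in the check are illustrative, not values from the text.

```python
import math

def f_lr(sse_reduced, sse_full, n, q, s):
    """Likelihood ratio statistic and its F approximation for a univariate
    nonlinear model, following the ln(1+x) ~ x expansion in the text."""
    lr = (n - q) * math.log(sse_reduced / sse_full)
    f_stat = (n - q) * (sse_reduced - sse_full) / (s * sse_full)
    return lr, f_stat

# Hypothetical sums of squares: when sse(r)/sse(f) is near one, the
# log-based LR and s times its F approximation nearly coincide.
lr, f_stat = f_lr(sse_reduced=130.0, sse_full=120.0, n=40, q=5, s=2)
assert abs(lr - 2 * f_stat) < 0.3
```

One would refer $F_{LR}$ to an $F_{s, n-q}$ reference distribution, per Gallant's result; the two statistics diverge as the reduced model fits markedly worse than the full model.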

It is helpful for what follows to briefly consider the theory of constrained

estimation. The null hypothesis as stated in (3.1) is in the form of a parametric

restriction. Alternately, it may be stated as a functional dependence,

    $H_0$: $\theta_0 = g(\rho_0)$ for some $\rho_0$   vs.   $H_a$: $\theta_0 \neq g(\rho)$ for any $\rho$ ,     (3.30)

in which $g: \mathbb{R}^r \to \mathbb{R}^q$, $r + s = q$, and $g(\rho)$ is twice continuously differentiable. Using the functional dependence one may write the restricted model as $y = f[g(\rho)] + e$. Applying the chain rule for differentiation, the Jacobian for the restricted model is

    $(\partial/\partial\rho')f[g(\rho)] = F(\theta)G(\rho) = FG$ ,

in which $F = F[g(\rho)] = F(\theta)$ appears as before and $G$ is essentially a Jacobian for the transformation from $\mathbb{R}^r$ to $\mathbb{R}^q$. This greatly simplifies the development of the error sum of squares for the restricted model. Define

    $P_{\Omega_0(r)} = F_0G_0(G_0'F_0'\Omega_0^{-1}F_0G_0)^{-1}G_0'F_0'$     (3.31)

in parallel to that of the full model, using $F(\theta)$ and $G(\rho)$ evaluated at $\theta_0$ and $\rho_0$ respectively. Henceforth use $P_{\Omega_0(f)}$, previously written simply as $P_{\Omega_0}$, to denote the matrix derived from the full model. Furthermore, note the following relations:

    a) $P_{\Omega_0(r)}\Omega_0^{-1}P_{\Omega_0(r)} = P_{\Omega_0(r)}$   and
    b) $P_{\Omega_0(r)}\Omega_0^{-1}P_{\Omega_0(f)} = P_{\Omega_0(f)}\Omega_0^{-1}P_{\Omega_0(r)} = P_{\Omega_0(r)}$ .

Theorem 16: (Gallant's likelihood ratio numerator). Under the regularity conditions of §2.2 and Assumption 13, consider model (2.1) with general $\Sigma_0$ and a hypothesis as in (3.1) or (3.30). Recall that $s_*^2(r)$ and $s_*^2(f)$ are the mean squared standardized residuals from the reduced and full models respectively. Define $sse_*(r) = [np-(q-s)]s_*^2(r)$ and $sse_*(f) = [np-q]s_*^2(f)$ to be the reduced and full model standardized residual sums of squares. An approximator of the hypothesis sum of squares is provided by

    $SSH_{L*} = sse_*(r) - sse_*(f)$ .     (3.32)

Under the null hypothesis,

    (i)  $SSH_{L*} = Q_{L*} + o_p(1)$ ,     (3.33)

in which

    $Q_{L*} = e'\Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})\Omega_0^{-1}e$ ,     (3.34)

and

    (ii) $Q_{L*} \sim \chi^2[s]$ .

Proof of (i): By repeated application of Theorem 10 to $sse_*(r)$ and $sse_*(f)$,

    $SSH_{L*} = sse_*(r) - sse_*(f)$
             $= e'\Omega_0^{-1}[(\Omega_0 - P_{\Omega_0(r)}) - (\Omega_0 - P_{\Omega_0(f)})]\Omega_0^{-1}e + o_p(1)$
             $= e'\Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})\Omega_0^{-1}e + o_p(1)$ .

Proof of (ii): Theorem 2 of Searle [1971; section 2.5] may be applied. Let $A_4 = \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})\Omega_0^{-1}$, noting that it is symmetric. Using (a) and (b) above, we have that $A_4\Omega_0 = \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})$ is idempotent. Specifically,

    $(A_4\Omega_0)^2 = \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})\Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})$
                    $= \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)} - P_{\Omega_0(r)} + P_{\Omega_0(r)})$
                    $= \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})$
                    $= A_4\Omega_0$ .

The proof is completed by observing that

    $\mathrm{rk}(A_4\Omega_0) = \mathrm{tr}(A_4\Omega_0)$
    $= \mathrm{tr}[\Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})]$
    $= \mathrm{tr}[\Omega_0^{-1}P_{\Omega_0(f)}] - \mathrm{tr}[\Omega_0^{-1}P_{\Omega_0(r)}]$
    $= \mathrm{tr}[\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'] - \mathrm{tr}[\Omega_0^{-1}F_0G_0(G_0'F_0'\Omega_0^{-1}F_0G_0)^{-1}G_0'F_0']$
    $= \mathrm{tr}(I_q) - \mathrm{tr}(I_r)$
    $= q - r$
    $= s$ .   □

A technical point regarding the computation of the constrained estimator of $\theta_0$, and hence of the residuals, must be made. In Theorem 16 it was implicitly assumed that $\tilde\theta$ was computed by minimizing the sample objective function as in (2.7) subject to the constraints imposed by $H_0$. This implies that the estimation procedure must cycle between estimation of $\theta_0$ and estimation of $\Sigma_0$. As a product of this procedure a "constrained" estimate of $\Sigma_0$, $\tilde\Sigma$, is obtained. Strictly speaking, $\tilde\Sigma$ is the constrained MLE of $\Sigma_0$. However, Gallant [1987; p. 366-367] indicated that an asymptotically equivalent procedure involves conditional constrained estimation of $\theta_0$ by minimizing the sample objective function using the previously obtained unconstrained estimate of $\Sigma_0$, $\hat\Sigma$. Clearly, this latter procedure is computationally much easier because it involves only additional $\theta$-iterations after having computed $\hat\Sigma$, rather than entire cycles of iterations (see Chapter 2). For convenience, both in practice and in the development of the results that follow, the conditional constrained MLE of $\theta_0$ will be used. Let $\tilde\theta$ denote this estimate. Note that using the unconstrained MLE of $\Sigma_0$ in place of the constrained MLE does not alter the conclusions of Theorem 16.

In parallel to the development of standardized and unstandardized error variance estimators, an alternate estimator of the hypothesis sum of squares is computed from the unstandardized error variance estimators from full and reduced models, as described in the following theorems.

Theorem 17: Under the regularity conditions of §2.2 and Assumption 13, consider model (2.1) with general covariance and a hypothesis as in (3.1) or (3.30).


Recall that $s_u^2(r)$ and $s_u^2(f)$ are the mean squared unstandardized residuals from the reduced and full models respectively. Define $sse_u(r) = [np-(q-s)]s_u^2(r)$ and $sse_u(f) = [np-q]s_u^2(f)$ to be the reduced and full model unstandardized residual sums of squares. An approximator of the hypothesis sum of squares is provided by

    $SSH_{Lu} = sse_u(r) - sse_u(f)$ .     (3.35)

Note that $s_u^2(r)$ is computed from $\tilde e = y - f(\tilde\theta)$, where $\tilde\theta$ is the estimator of $\theta_0$ subject to $h(\theta_0) = 0$ and conditional on $\Sigma_0 = \hat\Sigma$. Then, under the null hypothesis,

    (i)  $SSH_{Lu} = Q_{Lu} + o_p(1)$ ,

in which

    $Q_{Lu} = e'A_5 e$ ,     (3.36)

with

    $A_5 = \Omega_0^{-1}F_0[B - B\Psi B]F_0'F_0[B - B\Psi B]F_0'\Omega_0^{-1}$ ,
    $B = (F_0'\Omega_0^{-1}F_0)^{-1}$   and   $\Psi = H_0'[H_0 B H_0']^{-1}H_0$ ;

and

    (ii) $Q_{Lu} \mathrel{\dot\sim} c_{Lu}\,\chi^2[\nu_{Lu}]$ ,

in which

    $c_{Lu} = \mathrm{tr}[(A_5\Omega_0)^2] / \mathrm{tr}(A_5\Omega_0)$     (3.37)

and

    $\nu_{Lu} = [\mathrm{tr}(A_5\Omega_0)]^2 / \mathrm{tr}[(A_5\Omega_0)^2]$ .     (3.38)

Proof of (i): The strategy of this proof relies on characterization of $\tilde\theta$, the conditional constrained MLE of $\theta_0$. Hence, we seek to minimize the Lagrangean function

    $L(\theta, \delta) = [y - f(\theta)]'\hat\Omega^{-1}[y - f(\theta)] + 2\delta'h(\theta)$     (3.40)

with respect to $\theta$ and $\delta$, in which $\delta$ is a vector of unknown Lagrangean multipliers. The set of first derivatives of (3.40) are

    $\partial L(\theta,\delta)/\partial\theta = -2F'(\theta)\hat\Omega^{-1}[y - f(\theta)] + 2H'(\theta)\delta$     (3.41)

and

    $\partial L(\theta,\delta)/\partial\delta = 2h(\theta)$ .     (3.42)

Setting (3.41) equal to zero and substituting the first order Taylor series expansion of $f(\tilde\theta)$ about $\hat\theta$, we obtain

    $0 = -\tilde F'\hat\Omega^{-1}[y - f(\hat\theta) - F(\bar\theta)(\tilde\theta - \hat\theta)] + \tilde H'\delta$ ,

in which $\bar\theta = \alpha\tilde\theta + (1-\alpha)\hat\theta$ where $0 \le \alpha \le 1$, $\tilde F = F(\theta)|_{\theta=\tilde\theta}$ and $\tilde H = H(\theta)|_{\theta=\tilde\theta}$. Also, $F(\bar\theta) = \tilde F + o_p(1)$, and, when $H_0$ is true, both $\hat\theta$ and $\tilde\theta$ converge in probability to $\theta_0$, so that $\tilde F = F_0 + o_p(1)$ and $\tilde H = H_0 + o_p(1)$. Furthermore, recall that $\hat\Omega = \Omega_0 + o_p(1)$. Then the above simplifies to

    $0 = -F_0'\Omega_0^{-1}[y - f(\hat\theta) - F_0(\tilde\theta - \hat\theta) + o_p(1)] + H_0'\delta + o_p(1)$ .

Rearranging terms, we obtain

    $\tilde\theta - \hat\theta = (F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}[y - f(\hat\theta)] - (F_0'\Omega_0^{-1}F_0)^{-1}H_0'\delta + o_p(1)$
                               $= (F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e - (F_0'\Omega_0^{-1}F_0)^{-1}H_0'\delta + o_p(1)$ ,     (3.43)

in which the last line results from the facts that 1) $y - f(\hat\theta) = y - f(\theta_0) + o_p(1)$, since $\hat\theta$ is consistent for $\theta_0$, and 2) $e = y - f(\theta_0)$.

Setting (3.42) equal to zero and substituting the first order Taylor series expansion of $h(\tilde\theta)$ about $\hat\theta$, we obtain

    $0 = h(\tilde\theta)$
      $= h(\hat\theta) + H(\bar{\bar\theta})(\tilde\theta - \hat\theta)$
      $= h(\theta_0) + H_0(\tilde\theta - \hat\theta) + o_p(1)$ ,     (3.44)

by using arguments similar to those used to obtain (3.43). Substituting (3.43) into


(3.44) we obtain

$$0 = a(\theta_0) + H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e - H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'\tilde\lambda + o_p(1)\,.$$

Solving for $\tilde\lambda$ and noting that when $H_0$ is true, $a(\theta_0) = 0$, gives

$$\tilde\lambda = [H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}e + o_p(1)\,. \qquad (3.45)$$

Substituting (3.45) back into (3.43) and simplifying gives

$$\tilde\theta - \hat\theta = \big\{(F_0'\Omega_0^{-1}F_0)^{-1} - (F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}\big\}F_0'\Omega_0^{-1}e + o_p(1)\,. \qquad (3.46)$$

The unstandardized error sum of squares, computed from residuals which are functions of the conditional constrained estimator $\tilde\theta$, may be characterized as follows, by replacing $f(\tilde\theta)$ by its first order Taylor series about $\hat\theta$:

$$\begin{aligned}
SSE_u(r) &= [y - f(\tilde\theta)]'[y - f(\tilde\theta)]\\
&= [y - f(\hat\theta) - F(\theta^{**})(\tilde\theta - \hat\theta)]'[y - f(\hat\theta) - F(\theta^{**})(\tilde\theta - \hat\theta)]\\
&= [y - f(\hat\theta)]'[y - f(\hat\theta)] + (\tilde\theta - \hat\theta)'F_0'F_0(\tilde\theta - \hat\theta) + o_p(1)\\
&= SSE_u(f) + (\tilde\theta - \hat\theta)'F_0'F_0(\tilde\theta - \hat\theta) + o_p(1)\,.
\end{aligned} \qquad (3.47)$$

Thus substituting (3.46) for $(\tilde\theta - \hat\theta)$ in (3.47), and simplifying, the proof is complete. □

Proof of (ii): A method of moments approach may be used to approximate the distribution of (3.35). Its distribution may not be specified exactly as $\chi^2$ because $A_5\Omega_0$ is not idempotent. In order to approximate the scale parameter and degrees of freedom for (3.35) we have, writing for brevity $G = (F_0'\Omega_0^{-1}F_0)^{-1}$ and $B = H_0'(H_0GH_0')^{-1}H_0$,

$$\begin{aligned}
tr(A_5\Omega_0) &= tr\{\Omega_0^{-1}F_0[G - GBG]F_0'F_0[G - GBG]F_0'\}\\
&= tr\{G^{-1}[G - GBG]F_0'F_0[G - GBG]\}\\
&= tr\{[I_q - BG]F_0'F_0[G - GBG]\}\\
&= tr\{F_0'F_0[G - GBG][I_q - BG]\}\\
&= tr\{F_0'F_0[G - 2GBG + GBGBG]\}\\
&= tr\{F_0'F_0[G - GBG]\} \quad \text{since } GBGBG = GBG\\
&= tr\{F_0GF_0' - F_0GBGF_0'\}\\
&= tr\{F_0[G - GBG]F_0'\}\,.
\end{aligned}$$


Then, substituting these equalities for $tr(A_5\Omega_0)$ and $tr(A_5\Omega_0)^2$ in (3.12) and (3.13), one obtains the required scale and degrees of freedom parameters. □
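The two-moment matching invoked via (3.12) and (3.13) is the standard Satterthwaite-type device: choose $c$ and $\nu$ so that $c\,\chi^2[\nu]$ has the same mean and variance as the quadratic form $e'Ae$. The sketch below uses an arbitrary covariance and quadratic-form matrix, not the $A_5$ and $\Omega_0$ of the text, purely to show the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
M = rng.normal(size=(p, p))
Omega = M @ M.T + p * np.eye(p)      # an arbitrary positive definite covariance
A = np.eye(p) - np.ones((p, p)) / p  # a quadratic-form matrix; A @ Omega is not idempotent

AO = A @ Omega
t1 = np.trace(AO)                    # E[e'Ae]
t2 = np.trace(AO @ AO)               # Var[e'Ae] / 2

# Match the first two moments of e'Ae to those of c * chi^2[nu]
c, nu = t2 / t1, t1 ** 2 / t2
print(np.isclose(c * nu, t1), np.isclose(2 * c ** 2 * nu, 2 * t2))  # True True
```

By construction, $c\nu = tr(A\Omega)$ and $2c^2\nu = 2\,tr[(A\Omega)^2]$, matching mean and variance exactly.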

When $\Sigma_0 = \sigma^2 I_p$, the distribution of $Q_{Lu}$ may be shown to simplify to the usual univariate result.

Corollary 17.1: Under the conditions of Theorem 17, if $\Sigma_0 = \sigma^2 I_p$ then asymptotically $SSH_{Lu}/\sigma^2 \sim \chi^2[s]$.

Proof: If $\Sigma_0 = \sigma^2 I_p$, then $\Omega_0 = \sigma^2 I_{np}$. Substituting for $\Omega_0$ in (3.38) and (3.37), one obtains $c_{Lu} = \sigma^2$ and $\nu_{Lu} = s$. □

3.4 Construction of Test Statistics


The following set of theorems establish the distributional properties of the test

statistics formed by computing ratios of appropriately scaled hypothesis and error sums

of squares defined in §3.2 and §3.3. For Theorems 18-23 consider testing hypotheses as

in (3.1) or (3.30).

The quantities required for computation are obtained by replacing $\theta_0$ and $\Sigma_0$ by $\hat\theta$ and $\hat\Sigma$ in (3.16), (3.17), (3.20), (3.21), (3.28), (3.29), (3.38) and (3.39) respectively. In practice, negative estimates of $SSH_{Lu}$ and $SSH_{Wu}$ may occur. It is recommended that when these improper estimates arise, they be set to zero.

Theorem 18: (Gallant's Wald statistic, assuming general $\Sigma_0$). An asymptotically $\alpha$-level test of $H_0$, as in (3.1), is provided by

$$W_1 = \frac{SSH_W/s}{s^2}\,, \qquad (3.48)$$

in which

( i)

with

Then

$$F_{W1} = \frac{Q_W/s}{Q_e/(np-q)}\,.$$

( ii) $F_{W1} \sim F[s,\ np-q;\ \omega]\,.$

Proof of (i): Applying part (i) of Theorems 14 and 10 to the numerator and denominator of (3.48) respectively and applying Slutsky's Theorem gives (i) above.

Proof of (ii): Define

$$A_6 = \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}\,,$$

recognizing that $SSH_W = e'A_6e$ when $H_0$ is true. Then it is sufficient to demonstrate that $A_6\Omega_0A_1 = 0$ as follows:

$$\begin{aligned}
A_6\Omega_0A_1 &= \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}\Omega_0[\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0']^{-1}H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'[\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= 0
\end{aligned}$$

in order to prove the independence of $Q_W$ and $Q_e$ when $H_0$ is true. Furthermore, since $Q_W$ is just $e'A_6e$ plus a constant not involving $e$ when $H_a$ is true, independence holds under $H_a$ as well. Application of part (ii) of Theorems 14 and 10 to the numerator and denominator of $F_{W1}$ completes the proof of (ii) here. □
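Assembling $F_{W1}$ from its pieces is mechanical once the sums of squares are in hand; the numbers and dimensions below are purely hypothetical and serve only to show the arithmetic:

```python
# Hypothetical sums of squares and dimensions (illustrative only, not from the text)
ssh_w = 14.2          # standardized hypothesis SS, with s degrees of freedom
sse = 96.0            # standardized error SS
s, n, p, q = 2, 20, 3, 5

f_w1 = (ssh_w / s) / (sse / (n * p - q))
print(round(f_w1, 3))   # 4.068
```

The observed value would then be referred to an $F[s,\ np-q]$ critical point.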

Theorem 19: (new Wald statistic, assuming general $\Sigma_0$). An approximately $\alpha$-level test of $H_0$, as in (3.1), is provided by

(3.49)


in which

( i)

with

Then, under Ho

( ii)

Proof of (i): Applying part (i) of Theorems 15 and 11 to the numerator and

denominator of (3.49) respectively and applying Slutsky's Theorem gives (i) above.

Proof of (ii): Demonstrating that $A_{3u}\Omega_0A_2 = 0$ is sufficient to prove the independence of $Q_{Wu}$ and $Q_u$, specifically

$$\begin{aligned}
A_{3u}\Omega_0A_2 &= \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'F_0)^{-1}H_0']^{-1}\\
&\qquad\times H_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'\Omega_0^{-1}\Omega_0[I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'F_0)^{-1}H_0']^{-1}\\
&\qquad\times H_0(F_0'\Omega_0^{-1}F_0)^{-1}[F_0' - F_0'\Omega_0^{-1}P_{\Omega_0(f)}][I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= \Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}H_0'[H_0(F_0'F_0)^{-1}H_0']^{-1}\\
&\qquad\times H_0(F_0'\Omega_0^{-1}F_0)^{-1}[F_0' - F_0'][I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= 0\,.
\end{aligned}$$

Additionally, it is necessary to show that $\hat c_u$ and $\hat\nu_u$ converge in probability to $c_u$ and $\nu_u$ respectively. This follows from the fact that $\hat c_u$ and $\hat\nu_u$ are functions of $\hat\theta$ and $\hat\Sigma$, which are consistent estimates of $\theta_0$ and $\Sigma_0$. Hence, applying Section 1.7 of Serfling [1980], $\hat c_u$ and $\hat\nu_u$ are consistent for $c_u$ and $\nu_u$. A similar result may be obtained for $\hat c_{Wu}$ and $\hat\nu_{Wu}$. Application of part (ii) of Theorems 15 and 11 to the numerator and denominator of $F_{W2}$ and applying Slutsky's Theorem completes the proof of (ii) here. □


Theorem 20: (new Wald statistic, assuming $\Sigma_0 = \Sigma_{cs}$). An approximately $\alpha$-level test of $H_0$, as in (3.1), is provided by

(3.50)

in which

( i)

with

Then, under Ho

( ii)

Proof of (i): Note that $\hat e'\hat e = \hat e_*'\hat e_*$ by Theorem 1(ii). Applying part (i) of Theorems 15 and 12 to the numerator and denominator of (3.50) respectively, noting that in large samples $\nu_{cs} \doteq \nu_{cs} - q$, and applying Slutsky's Theorem gives (i) above.

Proof of (ii): When $\Sigma_0 = \Sigma_{cs}$, $s_u^2 \equiv s_{cs}^2$. Hence independence of the numerator and denominator of $F_{W3}$ follows directly as in the proof of Theorem 18(ii) above. It is easy to show that $\hat c_{cs}$ and $\hat\nu_{cs}$ are consistent for $c_{cs}$ and $\nu_{cs}$. A similar result may be obtained for $\hat c_{Wu}$ and $\hat\nu_{Wu}$. Finally, applying part (ii) of Theorems 15 and 12 to the numerator and denominator of $F_{W3}$, the proof is completed. □

Theorem 21: (Gallant's "Likelihood Ratio" statistic, assuming general $\Sigma_0$). An asymptotically $\alpha$-level test of $H_0$, as in (3.1), is provided by

(3.51)

in which

( i)

with

$$F_{L1} = \frac{Q_L/s}{Q_e/(np-q)}\,.$$

Then, under $H_0$,

( ii) $F_{L1} \sim F[s,\ np-q]\,.$

Proof of (i): Applying part (i) of Theorems 16 and 10 to the numerator and

denominator of (3.51) respectively and applying Slutsky's Theorem gives (i) above.

Proof of (ii): Demonstrating that $A_4\Omega_0A_1 = 0$ is sufficient to prove the independence of $Q_L$ and $Q_e$, specifically

$$\begin{aligned}
A_4\Omega_0A_1 &= \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})\Omega_0^{-1}\Omega_0(\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1})\\
&= \Omega_0^{-1}(P_{\Omega_0(f)} - P_{\Omega_0(r)})(\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1})\\
&= \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(r)}\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1}\\
&\qquad + \Omega_0^{-1}P_{\Omega_0(r)}\Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1}\\
&= \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(r)}\Omega_0^{-1} - \Omega_0^{-1}P_{\Omega_0(f)}\Omega_0^{-1} + \Omega_0^{-1}P_{\Omega_0(r)}\Omega_0^{-1}\\
&= 0\,.
\end{aligned}$$

Application of part (ii) of Theorems 16 and 10 to the numerator and denominator of $F_{L1}$ completes the proof of (ii) here. □

Theorem 22: (new "Likelihood Ratio" statistic, assuming general $\Sigma_0$). An approximately $\alpha$-level test of $H_0$, as in (3.1), is provided by

$$L_2 = \frac{SSH_{Lu}/(c_{Lu}\nu_{Lu})}{[(np-q)\,s_u^2]/(c_u\nu_u)}\,, \qquad (3.52)$$

in which

( i)

with

$$F_{L2} = \frac{Q_{Lu}/s}{Q_u/(c_u\nu_u)}\,.$$

Then, under Ho

( ii)


Proof of (i): Applying part (i) of Theorems 17 and 11 to the numerator and

denominator of (3.52) respectively and applying Slutsky's Theorem gives (i) above.

Proof of (ii): Define

$$A_7 = \Omega_0^{-1}F_0[G - GBG]F_0'F_0[G - GBG]F_0'\Omega_0^{-1}\,,$$

in which, for brevity, $G = (F_0'\Omega_0^{-1}F_0)^{-1}$ and $B = H_0'(H_0GH_0')^{-1}H_0$. Demonstrating that $A_7\Omega_0A_2 = 0$ is sufficient to prove the independence of $Q_{Lu}$ and $Q_u$, specifically

$$\begin{aligned}
A_7\Omega_0A_2 &= \Omega_0^{-1}F_0[G - GBG]F_0'F_0[G - GBG]F_0'\Omega_0^{-1}\Omega_0[I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= \Omega_0^{-1}F_0[G - GBG]F_0'F_0[G - GBG]F_0'[I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]'[I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= \Omega_0^{-1}F_0[G - GBG]F_0'F_0[G - GBG][F_0' - F_0'\Omega_0^{-1}F_0(F_0'\Omega_0^{-1}F_0)^{-1}F_0'][I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= \Omega_0^{-1}F_0[G - GBG]F_0'F_0[G - GBG][F_0' - F_0'][I_{np} - P_{\Omega_0(f)}\Omega_0^{-1}]\\
&= 0\,.
\end{aligned}$$

It can be shown that $\hat c_{Lu}$ and $\hat\nu_{Lu}$ converge in probability to $c_{Lu}$ and $\nu_{Lu}$ respectively.

Application of part (ii) of Theorems 17 and 11 to the numerator and denominator of $F_{L2}$ completes the proof of (ii) above. □

Theorem 23: (new "Likelihood Ratio" statistic, assuming $\Sigma_0 = \Sigma_{cs}$). An approximately $\alpha$-level test of $H_0$, as in (3.1), is provided by

$$L_3 = \frac{SSH_{Lu}/(c_{Lu}\nu_{Lu})}{[(np-q)\,s_{cs}^2]/[c_{cs}(\nu_{cs}-q)]}\,, \qquad (3.53)$$

in which

( i)

with

Then, under Ho

Page 103: by - stat.ncsu.eduboos/library/mimeo.archive/ISMS_19… · vii CHAPTER 5: SIMULATION STUDIES 5.1 Introduction 112 5.2 Models Simulated 113 5.3 Simulation Design 117 5.4 Results of

91

( ii)

Proof:

Proof of (i): Note that $\hat e'\hat e = \hat e_*'\hat e_*$ by Theorem 1(ii). Applying part (i) of Theorems 17 and 12 to the numerator and denominator of (3.53) respectively, noting that in large samples $\nu_{cs} \doteq \nu_{cs} - q$, and applying Slutsky's Theorem gives (i) above.

Proof of (ii): When $\Sigma_0 = \Sigma_{cs}$, $s_u^2 \equiv s_{cs}^2$. Hence independence of the numerator and denominator of $F_{L3}$ follows directly as in the proof of Theorem 20(ii) above. It is easy to show that $\hat c_{cs}$ and $\hat\nu_{cs}$ are consistent for $c_{cs}$ and $\nu_{cs}$. Finally, applying part (ii) of Theorems 17 and 12 to the numerator and denominator of $F_{L3}$, the proof is completed. □

In addition to the above test statistics, a test of independence of the untransformed data is easily obtained. Equivalently, this may be viewed as a test of

$$H_0: \rho = 0 \quad \text{vs.} \quad H_a: \rho \ne 0\,. \qquad (3.54)$$

All of the test statistics reported above reduce to the "usual" univariate statistics (for a nonlinear model) under sphericity, i.e. $\rho = 0$. Hence, a preliminary step to hypothesis testing might begin by testing for independence. If the null hypothesis of independence is accepted, one could proceed with a univariate statistic; alternately, where the independence test is rejected, one could choose to use a multivariate statistic. This approach is naive; it is hoped that one has a better understanding of a particular study than to resort to such a "data driven" practice. Furthermore, note that the distribution of the proposed test statistic for independence, reported below, is known only asymptotically, so that, at the very least, one should use caution in applying it to small samples.

A similar test statistic was reported in Arnold [1981; p. 228] for the linear,

repeated measures model. For the linear model, the distributional properties of the

test statistic are known exactly. Furthermore, Arnold uses numerator and


denominator degrees of freedom which, in effect, are corrected for the number of

"between" and "within" parameters estimated, respectively. As has been pointed out

earlier, this kind of model separability does not, in general, apply to multivariate

nonlinear models. Hence one is left with the rather crude asymptotic approximation

derived below in Theorem 24.

Theorem 24: Consider the hypothesis as in (3.54). An asymptotically level-$\alpha$ test of (3.54) is provided by

$$X = \hat\lambda_1/\hat\lambda_2\,, \qquad (3.55)$$

in which $X \sim F[n,\ n(p-1)]$ under $H_0$.

Proof: Lemma 2 establishes the asymptotic independence of $\hat\lambda_1$ and $\hat\lambda_2$, and hence the asymptotic independence of the numerator and denominator of (3.55). Then

$$X = \frac{\hat\lambda_1}{\hat\lambda_2} = \frac{n\hat\lambda_1/(n\sigma^2)}{n(p-1)\hat\lambda_2/[n(p-1)\sigma^2]} \sim \frac{\chi^2[n]/n}{\chi^2[n(p-1)]/[n(p-1)]} \sim F[n,\ n(p-1)]$$

by Theorem 9 and Slutsky's Theorem. □
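A minimal Monte Carlo sketch of the statistic in Theorem 24 under $H_0: \rho = 0$. Here $\hat\lambda_1$ and $\hat\lambda_2$ are taken to be the "between" (averaging-contrast) and "within" (trend-contrast) mean squares of orthonormally transformed residuals; the contrast construction below is illustrative, not the text's:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma2 = 200, 4, 1.0

# Independent errors (rho = 0): one p-vector of residuals per observational unit
E = rng.normal(scale=np.sqrt(sigma2), size=(n, p))

# Orthonormal basis whose first column spans the averaging contrast
V = np.linalg.qr(np.column_stack([np.ones(p)] + [rng.normal(size=p) for _ in range(p - 1)]))[0]
Z = E @ V
lam1_hat = np.sum(Z[:, 0] ** 2) / n                # "between" mean square
lam2_hat = np.sum(Z[:, 1:] ** 2) / (n * (p - 1))   # "within" mean square

X = lam1_hat / lam2_hat     # approximately F[n, n(p-1)] under H0
print(0.5 < X < 2.0)        # True: X hovers near 1 when rho = 0
```

With correlated errors ($\rho > 0$), the between mean square inflates and $X$ drifts above 1, which is what the test detects.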

3.5 Comparison of Test Statistics

Tables 2 and 3 provide a summary of the Wald and likelihood ratio based test

statistics under consideration here. Note that these statistics may be computed from

either AWLS or ITAWLS estimates of the expected value and covariance parameters.

Furthermore, because both the AWLS and ITAWLS parameter estimates are


consistent, the test statistics computed from either set of estimates possess the same asymptotic properties indicated in the previous section. $F$ and $\chi^2$ approximations are provided for both Wald and likelihood ratio based statistics.

$W_1$ and $L_1$ will be referred to as standardized Wald and likelihood ratio statistics, respectively, because they are computed using standardized versions of SSH and SSE. Development of these statistics is generally attributed to Gallant [1987; Ch. 5], although similar versions of these test statistics and analogous confidence procedures are used widely (see, for example, Donaldson and Schnabel [1987]). Alternate proofs of the approximate $F$ distributions of $W_1$ and $L_1$ are provided in §3.4.

$W_2$, $W_3$, $L_2$ and $L_3$ will be referred to as unstandardized statistics because they are computed from unstandardized versions of SSH and SSE. These statistics are modified versions of $W_1$ and $L_1$ respectively. The modifications involve using a very general univariate approach to repeated measures, in the sense that the term for the estimated covariance matrix, $\hat\Omega^{-1}$, may be omitted in the computation of $W_1$ and $L_1$ upon including estimates of the appropriate scale and degree of freedom parameters in the new statistics $W_2$, $W_3$, $L_2$ and $L_3$. These statistics also possess approximate $F$ distributions. It will be of interest to evaluate the performance of the modified statistics $W_2$, $W_3$, $L_2$ and $L_3$ in comparison to $W_1$ and $L_1$.

In addition, the unstandardized $\chi^2$ statistics $W_5$ and $L_5$ may be compared to the standardized $\chi^2$ statistics $W_4$ and $L_4$. Finally, comparison of the set of Wald

statistics to the set of likelihood ratio statistics is interesting for a given model and

hypothesis because it is generally true that for a model with large curvature the

likelihood ratio test performs more accurately than does the Wald test [Gallant, 1987;

p.84]. These comparisons will be evaluated using simulation studies reported in

Chapter 5.


3.6 Confidence Interval and Confidence Region Estimation

The current knowledge and practice of confidence interval and confidence region

estimation for parameters, or functions of parameters, from a nonlinear model is far

from satisfactory. This is particularly true for multivariate nonlinear regression

models. Recall that a review of the literature in Chapter 1 provided no solution that

was both easy to implement and reliable. Most notably, the Monte Carlo studies by

Donaldson and Schnabel [1987], with univariate nonlinear models, clearly demonstrated

that the commonly used and easy to compute Wald based confidence intervals often

provide poor coverage. In contrast, the more accurate likelihood based intervals are

difficult to compute and occassionally ill-behaved. Moreover, these authors related the

too small coverage of the Wald based intervals to the inadequacy of a linearizing

approximation to the nonlinear function. Despite its poor performance, the Wald

based confidence intervals continue to be widely used in practice. Given the practical

appeal of Wald based confidence intervals, it would be advantageous to find ways to

accurately implement them. This will be the approach taken here.

It is not within the scope of this research to provide a comprehensive solution

to the difficult problem of confidence interval and confidence region estimation.

Rather, Wald based confidence procedures will be discussed in the context of nonlinear

models possessing the compound-symmetric covariance structure. The emphasis, here,

will be on confidence interval estimation since there is some evidence to support the

notion that more accurate coverage is obtained for confidence intervals than for

confidence regions related to a given parameter set [Donaldson and Schnabel, 1987].

The Wald tests discussed in §3.4 will be inverted to provide approximate confidence

intervals. Confidence intervals computed by inverting a Wald test are asymptotically

correct although they are typically too narrow with small samples. In practice,

moderate sample sizes may be necessary to produce coverage close to the nominal level.


It is hoped that upon correct specification and modelling of the compound-symmetric

covariance structure, reasonable accuracy may be achieved for confidence interval

estimation as well as for the hypothesis tests provided in the previous two sections.

A confidence interval for some (possibly) nonlinear scalar-valued function of $\theta_0$, written $h(\theta_0)$, may be obtained by inverting the Wald test as follows. From Theorem 14 we may construct a Wald test (based on a $\chi^2$ distribution with one degree of freedom, equivalently a z-distribution) which accepts when

$$\big|h(\hat\theta) - h(\theta_0)\big| \le \big[\hat H'(\hat F'\hat\Omega^{-1}\hat F)^{-1}\hat H\big]^{1/2}\, z_{\alpha/2}\,,$$

in which $z_{\alpha/2}$ denotes the upper $\alpha/2$ critical point of the z-distribution. For small samples it is preferable to use the t-distribution, replacing $z_{\alpha/2}$ with $t_{np-q;\alpha/2}$ in the above. Adopting this practice, those points $\theta$ that satisfy the above inequality are in the interval

$$h(\hat\theta) \pm \big[\hat H'(\hat F'\hat\Omega^{-1}\hat F)^{-1}\hat H\big]^{1/2}\, t_{np-q;\alpha/2}\,. \qquad (3.56)$$

Thus (3.56) provides an approximate $100(1-\alpha)\%$ confidence interval for $h(\theta_0)$.

Frequently, a confidence interval for just one element of $\theta_0$, say $\theta_r$ with $r \in \{1, 2, \ldots, q\}$, is sought. This is easily obtained as a special case of (3.56), in which $h(\theta_0) = \theta_r$ and $\hat H' = (0'_{r-1},\ 1,\ 0'_{q-r})$. Then $\big[\hat H'(\hat F'\hat\Omega^{-1}\hat F)^{-1}\hat H\big]^{1/2} = \hat V_{rr}^{1/2}$, in which $\hat V_{rr}$ is the $rr$-th element of $(\hat F'\hat\Omega^{-1}\hat F)^{-1}$. This gives the following approximate $100(1-\alpha)\%$ confidence interval for $\theta_r$:

$$\hat\theta_r \pm \hat V_{rr}^{1/2}\, t_{np-q;\alpha/2}\,. \qquad (3.57)$$
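The interval in (3.57) is easy to assemble. The sketch below uses a linear mean function so that $\hat F = X$, a working $\hat\Omega = \hat\sigma^2 I$, and a hard-coded $t$ critical value; all three are illustrative assumptions, not the text's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 40, 2
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
y = X @ np.array([2.0, -1.0]) + 0.3 * rng.normal(size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
sigma2_hat = np.sum((y - X @ theta_hat) ** 2) / (n - q)
Omega_inv = np.eye(n) / sigma2_hat           # working Omega^-1 with Omega = sigma^2 I
V = np.linalg.inv(X.T @ Omega_inv @ X)       # estimated covariance of theta_hat

r = 1                                        # interval for the slope, theta_2
t_crit = 2.02                                # roughly t_{38; 0.025}; hard-coded for the sketch
half = np.sqrt(V[r, r]) * t_crit
lo, hi = theta_hat[r] - half, theta_hat[r] + half
print(round(lo, 2), round(hi, 2))
```

In the nonlinear case, $X$ would be replaced by the Jacobian $\hat F$ evaluated at $\hat\theta$, which is exactly where the linearizing approximation (and the coverage worries above) enter.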

For situations in which $a(\theta_0)$ is of dimension $s$, Theorem 14 suggests an approximate $100(1-\alpha)\%$ confidence region as the set of $\theta$ that satisfy

$$[a(\hat\theta) - a(\theta)]'\big[\hat H(\hat F'\hat\Omega^{-1}\hat F)^{-1}\hat H'\big]^{-1}[a(\hat\theta) - a(\theta)] \le \chi^2_\alpha[s]\,. \qquad (3.58)$$


For small samples it has been suggested that $s\,F_\alpha[s,\ np-q]$ be substituted for $\chi^2_\alpha[s]$ in (3.58) [see, for example, Gennings et al., 1989].

A straightforward extension of Theorem 15 provides an alternate, and possibly conservative, approximate $100(1-\alpha)\%$ confidence region as the set of $\theta$ that satisfy

(3.59)

Again, in small samples, $\nu_{Wu}\,F_\alpha[\nu_{Wu},\ np-q]$ or $\nu_{Wu}\,F_\alpha[\nu_{Wu},\ \nu_u]$ can be substituted for $\chi^2_\alpha[\nu_{Wu}]$. In theory, confidence intervals may be defined for any suitable one-dimensional $h(\theta_0)$ using (3.59). However, when applying (3.59) to obtain a confidence interval, estimation of a critical value for $\chi^2$ with so few degrees of freedom ($0 < \nu_{Wu} \le 1$, typically) is expected to be unreliable. Finally, as with the test statistics of the previous two sections, any of the confidence procedures just described may be equivalently (in the asymptotic sense) computed using either AWLS or ITAWLS estimation of the model parameters.


Chapter 4

ESTIMATION AND INFERENCE FOR INCOMPLETE DATA

4.1 Introduction

In Chapters 2 and 3, estimation and inference methods were developed under

the assumption that p repeated measurements were available for every observational

unit. In practice, it is often the case that one or more measurements are missing for

some observational units. In many situations it is reasonable to assume that the data

are missing completely at random [Little and Rubin, 1987]. This assumption will be

adopted here.

In §4.3, maximum likelihood estimation methods for nonlinear models with

compound-symmetric covariance will be extended to the case of incomplete data. The

approach to ML estimation taken here is to treat the problem in two steps in parallel

to ML estimation for complete data. Throughout this chapter, the convention of using

"0" to denote the true parameter value will be discontinued with repspect to estimation

of the covariance paramters. Hence, the nature of a "parameter" must be interpreted

from its context. The first step involves pseudo-maximum likelihood (PML) estimation

of two sets of parameters, those contained in ~ 0 and those contained in "1' = (u 2, p)'.

In a second step, the PML steps are iterated until convergence of the full set of

estimates (in practice, until some convergence criterion is reached). This is not to

suggest that this is the only, or even the best, way to proceed. In fact, many

algorithms for finding solutions to sets of nonlinear equations may be found in the

literature. Recall that with complete data, the ML estimation procedure discussed in

Chapter 2 involved a "singly nested iterative" procedure. The overall process was


referred to as I-iteration, where I-iteration referred to the cycling between estimation of $\theta_0$ and estimation of the covariance parameters. It is a "singly nested iterative" procedure in the sense that for any particular I-iteration, PML estimation of $\theta_0$ required an iterative algorithm. However, within a particular I-iteration, PML estimation of $\gamma = (\sigma^2, \rho)'$ via estimation of $\lambda$ was possible non-iteratively. It will be shown that with incomplete data, I-iteration is a "doubly nested iterative" procedure, since PML estimation of $\theta_0$ and PML estimation of the covariance parameters each require iterative procedures. Henceforth let $\gamma$-iteration refer to the PML estimation procedure for the covariance parameters when there are incomplete data.

For the case of complete data, estimation of the covariance parameters was greatly simplified by considering an orthonormal model transformation. This yielded alternate covariance parameters, $\lambda_1$ and $\lambda_2$, which are one to one functions of the original parameters of the model, $\rho$ and $\sigma^2$. PML estimators for $\lambda_1$ and $\lambda_2$ exist in closed form. Hence the MLE's for $\rho$ and $\sigma^2$, in the complete data case, were obtained as simple functions of the MLE's for $\lambda_1$ and $\lambda_2$. As will be shown in this chapter, orthonormal transformation of the model, with incomplete data, yields heterogeneous variances which are functions of the number of available repeated measures. The transformed data are independent, however, which permits a simple expression for the log likelihood function. Unfortunately, closed form expressions for the PML estimators of the covariance parameters do not exist. Thus, in the case of incomplete data covariance estimation, an iterative procedure must be employed to solve the likelihood equations for $\rho$ and $\sigma^2$ directly. Hence PML estimates of $\rho$ and $\sigma^2$, which may be obtained non-iteratively with complete data, must be obtained iteratively when the data are incomplete.

Alternately, estimators for $\rho$ and $\sigma^2$ may be found by using a method of moments (MOM) approach. These will be derived in §4.4. The method of moments


estimators will be shown to have the intuitively appealing property that they reduce to the familiar MLE's for $\rho$ and $\sigma^2$ when complete data are available. In addition, these estimators are non-iterative, so that the computational and numerical difficulties of obtaining the MLE's may be avoided. Subsequently, the MOM estimators of $\rho$ and $\sigma^2$ may be used to produce an AWLS estimate of $\theta_0$, as discussed in §4.5.

As for complete data, the orthonormal model transformation is a useful tool.

The next section includes a brief overview of this technique for incomplete data as well

as an introduction of some new notation.

4.2 The Orthonormal Model Transformation

The nonlinear repeated measurements model for the case of incomplete data

may be written as in (2.1)

As before, $i \in \{1, 2, \ldots, n\}$. However, an important distinction is that with incomplete data, $j \in \{1, 2, \ldots, p_i\}$, in which $1 \le p_i \le p$. Let $y_i' = (y_{i1}, y_{i2}, \ldots, y_{ip_i})$ denote the possibly incomplete data vector for the $i$-th unit, with corresponding predictor set $x_i$ and error covariance matrix $\Sigma_i = \sigma^2[\rho\,\mathbf{1}_{p_i}\mathbf{1}_{p_i}' + (1-\rho)I_{p_i}]$. As for complete data, there exist only two unique covariance parameters for this model, $\rho$ and $\sigma^2$. However, there are $p$ possible dimensions for the $\Sigma_i$, corresponding to having one, two, or as many as $p$ non-missing repeated measurements.

In order to simplify the notation somewhat, define $m \in \{1, 2, \ldots, p\}$ to be the number of non-missing repeated measurements available for some set of observations. Then $m$ may be used to index equivalence classes related to the number of repeated measurements available for observational units belonging to that class. Let $p_m$ denote the number of repeated measurements available for each observational unit in the $m$-th


equivalence class and let $n_m$ denote the number of observational units in the $m$-th equivalence class. Within the $m$-th equivalence class, the set of errors corresponding to the $i$-th observational unit has covariance matrix $\Sigma_m = \sigma^2[\rho\,\mathbf{1}_m\mathbf{1}_m' + (1-\rho)I_m]$. Define $p_* = \sum_m n_m(m-1)$. Without loss of generality, the data may be grouped by subject within the $p$ equivalence classes, allowing the model to be written in vector form

$$y_\cdot = f_\cdot(\theta) + e_\cdot\,, \qquad (4.1)$$

in which $n_\cdot = \sum_m n_m m = \big(n + \sum_m n_m(m-1)\big) = (n + p_*)$ is the dimension of the data vector in (4.1). The vector of errors for this data arrangement has $(n_\cdot \times n_\cdot)$ covariance

vector in (4.1). The vector of errors for this data arrangement has (n. x n.) covariance

matrix

Inl®~l Q Q

Q In2®~2 Q0·= (4.2) e

Q Q Inp®~p
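The structure of $\Sigma_m$ and of $\Omega_*$ in (4.2) can be sketched directly; the class sizes, $\sigma^2$ and $\rho$ below are hypothetical:

```python
import numpy as np

def sigma_cs(m, sigma2, rho):
    """Compound-symmetric covariance for a unit with m repeated measures."""
    return sigma2 * (rho * np.ones((m, m)) + (1.0 - rho) * np.eye(m))

sigma2, rho = 2.0, 0.4
n_m = {1: 3, 2: 2, 3: 4}   # hypothetical: n_m units in the class with m measures

# Omega_* = block-diagonal over classes of I_{n_m} (x) Sigma_m, as in (4.2)
blocks = [np.kron(np.eye(nm), sigma_cs(m, sigma2, rho)) for m, nm in n_m.items()]
dim = sum(b.shape[0] for b in blocks)
Omega_star = np.zeros((dim, dim))
i = 0
for b in blocks:
    k = b.shape[0]
    Omega_star[i:i + k, i:i + k] = b
    i += k

n = sum(n_m.values())
p_star = sum(nm * (m - 1) for m, nm in n_m.items())
print(dim == n + p_star)   # True: the dimension n_. equals n + p_*
```

The dimension check mirrors the identity $n_\cdot = \sum_m n_m m = n + p_*$ stated below (4.1).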

In parallel to the model transformation described in Chapter 2 for complete data, an orthonormal transformation may be constructed for the case of incomplete data. The varying dimensions of the $y_i$ must be accommodated by the transformation matrix. The following lemma defines the appropriate orthonormal transformation for incomplete data and proves the independence of the resulting data. Its presentation is somewhat brief since it uses principles thoroughly covered in Chapter 2. Various forms of this result have been reported previously (see Schwertman, 1978; Muller, 1989; and Hafner, 1988).

Lemma 3: Define the $V_m$ to be $(m \times m)$ matrices of eigenvectors associated with the eigenvalues of the $\Sigma_m$, with $V_m'V_m = V_mV_m' = I_m$. Define the following


transformation matrix

$$T_* = \begin{bmatrix}
I_{n_1}\otimes V_1' & 0 & \cdots & 0\\
0 & I_{n_2}\otimes V_2' & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & I_{n_p}\otimes V_p'
\end{bmatrix}. \qquad (4.3)$$

Note that $V_1 \equiv 1$ so that $I_{n_1}\otimes V_1' = I_{n_1}$. Transform model (4.1) by multiplying both sides by $T_*$, yielding the transformed model

$$y_{*\cdot} = f_{*\cdot}(\theta) + e_{*\cdot}\,. \qquad (4.4)$$

Then

(i) $e_{*\cdot} \sim N(0, \Lambda_*)$,

in which

$$\Lambda_* = \begin{bmatrix}
I_{n_1}\otimes\Lambda_1 & 0 & \cdots & 0\\
0 & I_{n_2}\otimes\Lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & I_{n_p}\otimes\Lambda_p
\end{bmatrix}, \qquad \Lambda_m = \mathrm{diag}\big(\lambda_{1m},\ \lambda_2 I_{m-1}\big), \qquad (4.5)$$

in which the $\lambda_{1m}$ occur with multiplicities of $n_m$ and $\lambda_2$ occurs with multiplicity $p_*$, and

$$\lambda_{1m} = \sigma^2[1 + (m-1)\rho]\,, \qquad \lambda_2 = \sigma^2(1-\rho)\,. \qquad (4.6)$$

Proof: Proof of assertions (i)-(iii) above is obtained from straightforward extension of Theorem 1 in Chapter 2.
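The content of (4.5)-(4.6) is that any orthonormal $V_m$ whose first column is the normalized "averaging" contrast diagonalizes $\Sigma_m$ into $\mathrm{diag}(\lambda_{1m}, \lambda_2 I_{m-1})$. This can be checked numerically; the construction of $V_m$ below (QR of a seeded random completion) is an illustrative stand-in, not the text's eigenvector matrix:

```python
import numpy as np

def orthonormal_with_average(m):
    """An orthonormal m x m matrix whose first column is ones(m)/sqrt(m)."""
    first = np.ones((m, 1)) / np.sqrt(m)
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(np.hstack([first, rng.normal(size=(m, m - 1))]))
    if Q[0, 0] < 0:          # QR determines the first column only up to sign
        Q[:, 0] *= -1.0
    return Q

sigma2, rho, m = 1.5, 0.3, 4
Sigma_m = sigma2 * (rho * np.ones((m, m)) + (1.0 - rho) * np.eye(m))
V = orthonormal_with_average(m)
D = V.T @ Sigma_m @ V        # should be diag(lambda_1m, lambda_2, ..., lambda_2)

lam_1m = sigma2 * (1 + (m - 1) * rho)   # 2.85 for these values
lam_2 = sigma2 * (1 - rho)              # 1.05 for these values
print(np.allclose(np.diag(D), [lam_1m] + [lam_2] * (m - 1)))  # True
print(np.allclose(D, np.diag(np.diag(D))))                    # True
```

The check works because the orthogonal complement of the ones vector is an entire eigenspace of $\Sigma_m$ with eigenvalue $\sigma^2(1-\rho)$, so any orthonormal completion of the averaging contrast will do.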

An important consequence of using the orthonormal transformation, for

incomplete data as well as complete data, is that the transformed data are


independent. For incomplete data (under a normality assumption), this is evident from the diagonal covariance structure in (4.5). This permits a simple expression for the log likelihood with incomplete data:

$$\ln L(y_{*\cdot} \mid \theta, \lambda_*) = C - \tfrac{1}{2}\sum_m n_m \ln\lambda_{1m} - \tfrac{p_*}{2}\ln\lambda_2 - \sum_m \frac{SS_{Bm}}{2\lambda_{1m}} - \frac{SS_W}{2\lambda_2}\,, \qquad (4.7)$$

in which $C$ is a constant not involving $\theta$ or $\lambda_*' = (\lambda_{11}, \lambda_{12}, \ldots, \lambda_{1p}, \lambda_2)'$,

$$SS_{Bm} = \sum_{i=1}^{n_m}\big[y_{i1*} - f_{i1*}(\theta)\big]^2 \qquad (4.8)$$

and

(4.9)

In the above, the subscript "$m$" is used to indicate the equivalence class from which an observation arose. It is important to distinguish the equivalence classes for the transformed responses corresponding to the "averaging" column of $V_m$, because the variance of these observations is a function of the number of available repeated measurements, $m$, so that $\lambda_{1m} = \sigma^2[1 + (m-1)\rho]$. This results in $p$ "between" sums of squares being defined. However, the "trends" transformed responses all have variance $\lambda_2 = \sigma^2(1-\rho)$, so that a single "within" sum of squares is defined. Thus the methods described in Chapter 2 for obtaining the MLE's for $\rho$ and $\sigma^2$ are not applicable. With incomplete data, the MLE's for $\rho$ and $\sigma^2$ must be obtained directly. Thus it is preferable to write the log likelihood as a function of $\rho$ and $\sigma^2$. Substituting the expressions in (4.6) for the $\lambda_{1m}$'s and $\lambda_2$ in (4.7), the log likelihood may be rewritten as

$$\begin{aligned}
\ln L(y_{*\cdot} \mid \theta, \sigma^2, \rho) = C &- \tfrac{1}{2}\sum_m n_m \ln\{\sigma^2[1 + (m-1)\rho]\} - \tfrac{p_*}{2}\ln[\sigma^2(1-\rho)]\\
&- \sum_m \frac{SS_{Bm}}{2\sigma^2[1 + (m-1)\rho]} - \frac{SS_W}{2\sigma^2(1-\rho)}\,.
\end{aligned} \qquad (4.10)$$

In the next section, ML estimation of ~o, p and 0'2 from incomplete data is discussed.


4.3 Maximum Likelihood Estimation

The regularity conditions outlined in Chapter 2 will be assumed here as well. Many results obtained in Chapter 2 are also applicable here upon making one important modification. This modification involves a new vector of variances for the transformed incomplete data, specifically $\lambda_*' = (\lambda_{11}, \lambda_{12}, \ldots, \lambda_{1p}, \lambda_2)'$ rather than the $\lambda' = (\lambda_1, \lambda_2, \ldots, \lambda_2)'$ associated with complete data. It should be emphasized that the $(p+1)$ elements of $\lambda_*$ are functions of only two unique covariance parameters, $\sigma^2$ and $\rho$. Hence the elements of $\lambda_*$ should not be confused with specification of a minimal set of covariance parameters; rather, they provide a complete set of heterogeneous variances induced by the model transformation. Keeping this in mind, maximum likelihood estimation for missing data is easily developed using concepts introduced in Chapter 2.

In Chapter 2, Theorem 2 defined the PML estimator of β₀. This result is applicable to the incomplete data case as well upon replacing the complete data estimator λ̂⁽ᵍ⁻¹⁾ with one for incomplete data, to be denoted λ̂∗⁽ᵍ⁻¹⁾. With respect to ML estimation, let λ̂∗⁽ᵍ⁻¹⁾ be computed as λ∗ evaluated at the PML estimates of ρ and σ² obtained at the (g−1)-th I-iteration. Subsequently, Δ̂∗⁽ᵍ⁻¹⁾ may be constructed from λ̂∗⁽ᵍ⁻¹⁾. Upon substituting Δ̂∗⁽ᵍ⁻¹⁾ for Δ̂⁽ᵍ⁻¹⁾ in Theorem 2, the basic form of the PML equations is essentially the same as for complete data. Thus the reader is referred to Chapter 2 for the form of the PML equations relating to β.

As indicated earlier, PML estimation of the covariance parameters for the case of incomplete data requires one to solve the PML equations for σ² and ρ. Note that as a special case this includes PML estimation of ρ and σ² for complete data as well. The form of the PML estimators for ρ and σ² will be stated here without proof.

Proposition 1: (η-iteration). The PML estimators of ρ and σ² which locally maximize ln L(y∗ | β̂⁽ᵍ⁾, σ², ρ) must satisfy the equations

−Σₘ nₘ(m−1)/[1 + (m−1)ρ] + p∗/(1−ρ) + Σₘ SSBₘ⁽ᵍ⁾(m−1)/{σ²[1 + (m−1)ρ]²} − SSW⁽ᵍ⁾/{σ²(1−ρ)²} = 0        (4.11)

and

σ² = { Σₘ SSBₘ⁽ᵍ⁾/[1 + (m−1)ρ] + SSW⁽ᵍ⁾/(1−ρ) } / (n + p∗) ,        (4.12)

in which SSBₘ⁽ᵍ⁾ = SSBₘ(β) and SSW⁽ᵍ⁾ = SSW(β), each evaluated at β = β̂⁽ᵍ⁾, where β̂⁽ᵍ⁾ is the PML estimate of β₀ at the g-th I-iteration.

The left sides of equations (4.11) and (4.12) are essentially the first partial derivatives of the log likelihood function (4.10). In order to obtain the MLE's, it is necessary to iterate the sequence of PML estimates of ρ and σ² until convergence. These nonlinear equations may have multiple solutions, only one of which will be the global maximum likelihood estimator. Thus for a nonlinear model with compound-symmetric covariance among repeated measurements and incomplete data, computation of the MLE's, as described here, requires a doubly nested iterative algorithm to solve the likelihood equations for β₀, ρ and σ². The reader is referred to Theorem 4 in Chapter 2 for a general argument regarding how the process of iterating between the two sets of PML estimates (i.e. I-iteration) leads to the MLE's.
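A single covariance-parameter step can be sketched as follows: for fixed β (hence fixed SSBₘ and SSW), σ² can be profiled out of the log likelihood analytically, and ρ located by a grid search over its admissible range. This is an illustration only (the author's code is not shown in the text), and the function and argument names are hypothetical:

```python
import numpy as np

def pml_variance_step(m_vals, n_m, ssb_m, ssw, n_grid=2000):
    """Maximize the incomplete-data log likelihood over (sigma^2, rho)
    with the sums of squares held fixed at the current beta estimate.
    sigma^2 is profiled out analytically; rho is found by grid search."""
    n = n_m.sum()
    p_star = np.sum(n_m * (m_vals - 1))
    m_max = m_vals.max()
    # admissible range for rho, using the largest class size as the bound
    grid = np.linspace(-1.0 / (m_max - 1) + 1e-4, 1.0 - 1e-4, n_grid)
    best_ll, best = -np.inf, (None, None)
    for rho in grid:
        lam1 = 1.0 + (m_vals - 1) * rho
        sigma2 = (np.sum(ssb_m / lam1) + ssw / (1.0 - rho)) / (n + p_star)
        # with the profiled sigma^2, the quadratic terms collapse to (n+p*)/2
        ll = (-0.5 * np.sum(n_m * np.log(sigma2 * lam1))
              - 0.5 * p_star * np.log(sigma2 * (1.0 - rho))
              - 0.5 * (n + p_star))
        if ll > best_ll:
            best_ll, best = ll, (sigma2, rho)
    return best
```

Each outer I-iteration would alternate one such variance step with a weighted least squares step for β.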

The Fisher information matrix for η′ = (σ², ρ) in the incomplete data case, h∗, may be found by taking minus the expectation of the second partial derivatives of the log likelihood as follows. Letting ln L∗ = ln L(y∗ | β, σ², ρ), the second partial derivative with respect to σ² is

∂²ln L∗/∂(σ²)² = (n + p∗)/(2σ⁴) − (1/σ⁶) Σₘ SSBₘ/[1 + (m−1)ρ] − SSW/{σ⁶(1−ρ)} ,

and the remaining second partial derivatives, ∂²ln L∗/∂σ²∂ρ and ∂²ln L∗/∂ρ², follow similarly by differentiating (4.10).


Let i(x, y) = −E(∂²ln L∗/∂x∂y) and note that considerable simplification results by using the fact that E(SSBₘ) = nₘσ²[1 + (m−1)ρ] and E(SSW) = p∗σ²(1−ρ). Taking minus the expected value of the second partial derivatives gives

i(σ², σ²) = (n + p∗)/(2σ⁴) ,

i(σ², ρ) = (1/(2σ²)) Σₘ nₘ(m−1)/[1 + (m−1)ρ] − p∗/{2σ²(1−ρ)} ,

and finally

i(ρ, ρ) = (1/2) Σₘ nₘ(m−1)²/[1 + (m−1)ρ]² + p∗/{2(1−ρ)²} .

As for complete data, it is easy to show that the full ((q+2) × (q+2)) information matrix, h∗, for incomplete data is block diagonal,

h∗ = ⎡ h_β∗    0  ⎤
     ⎣  0    h_η∗ ⎦ ,

in which h_β∗ is essentially the same as h_β for complete data upon making the important substitution of Δ∗ (for Δ₀). One may infer from the form of the information matrix that the covariance parameters are asymptotically independent of the expected value parameters. Under the assumption that the ML estimates are asymptotically efficient, one may use the inverse of the Fisher information matrix, h∗, evaluated at the MLE's as an asymptotic estimate of the covariance matrix for these incomplete data


estimators.
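The 2×2 information block h_η∗ can be computed and inverted numerically. The forms below are the author's reconstruction, obtained by differentiating the log likelihood twice and substituting E(SSBₘ) = nₘσ²[1 + (m−1)ρ] and E(SSW) = p∗σ²(1−ρ); the function and argument names are illustrative:

```python
import numpy as np

def info_eta(sigma2, rho, m_vals, n_m):
    """2x2 Fisher information for eta = (sigma^2, rho)' with incomplete data.

    Entries are minus the expected Hessian of the log likelihood
    (a reconstruction, not verbatim from the text)."""
    lam1 = 1.0 + (m_vals - 1) * rho
    n = n_m.sum()
    p_star = np.sum(n_m * (m_vals - 1))
    i_ss = (n + p_star) / (2.0 * sigma2 ** 2)
    i_sr = (np.sum(n_m * (m_vals - 1) / lam1)
            - p_star / (1.0 - rho)) / (2.0 * sigma2)
    i_rr = (0.5 * np.sum(n_m * (m_vals - 1) ** 2 / lam1 ** 2)
            + p_star / (2.0 * (1.0 - rho) ** 2))
    return np.array([[i_ss, i_sr], [i_sr, i_rr]])
```

Inverting `info_eta` evaluated at the MLE's gives the asymptotic covariance estimate for (σ̂², ρ̂) described above.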

It is clear from the above discussion that ML estimation with incomplete data is more computationally intensive than estimation with complete data. This is entirely attributable to the nature of the covariance parameters, which must be computed iteratively. Thus an alternate method of estimation for ρ and σ² is proposed in the following section.

4.4 Method of Moments Estimation of the Variance Components

Method of moments estimators of σ² and ρ are intuitively appealing and easy to compute. Furthermore, under quite general regularity conditions, they may be shown to be consistent, asymptotically normal and unbiased [Bickel and Doksum, 1977; pgs. 133-135]. In addition, they closely resemble the ML estimators of ρ and σ² for complete data.

In parallel to the complete-data estimators, λ̂₁ and λ̂₂, it is convenient to define incomplete-data analogs. Let λ̂₁∗ and λ̂₂∗ denote these analogs. Consider the orthonormal model transformation described in §4.2. Let ê∗⁽⁰⁾ denote the residuals obtained from fitting the transformed model (4.4) using OLS. These OLS residuals may be partitioned into those corresponding to the "average" transformation and those corresponding to the "trends" transformation, such that ê∗⁽⁰⁾′ = (ê_B∗⁽⁰⁾′, ê_W∗⁽⁰⁾′). Partition the vector of transformed errors, e∗, similarly. Furthermore, for convenience, consider that the "average" transformed residuals are grouped according to equivalence class. Then define

λ̂₁∗ = ê_B∗⁽⁰⁾′ ê_B∗⁽⁰⁾ / n        (4.13)

and

λ̂₂∗ = ê_W∗⁽⁰⁾′ ê_W∗⁽⁰⁾ / p∗ .        (4.14)


Recall that the OLS estimate of β₀ is consistent, so that λ̂₁∗ = (e_B∗′ e_B∗)/n + oₚ(1) and λ̂₂∗ = (e_W∗′ e_W∗)/p∗ + oₚ(1). Then the expectations of (4.13) and (4.14), in large samples, are

E(λ̂₁∗) ≈ (1/n) Σₘ nₘσ²[1 + (m−1)ρ] = σ²[1 + (p∗/n)ρ]        (4.15)

and

E(λ̂₂∗) ≈ σ²(1−ρ) .        (4.16)

Setting λ̂₁∗ and λ̂₂∗ equal to (4.15) and (4.16) respectively, and solving these equations simultaneously for σ² and ρ, gives the following MOM estimators:

ρ̂_mom = (λ̂₁∗ − λ̂₂∗) / [λ̂₁∗ + (p∗/n)λ̂₂∗]        (4.17)

and

σ̂²_mom = (n λ̂₁∗ + p∗ λ̂₂∗) / (n + p∗) .        (4.18)

Recall that −1/(p−1) < ρ < 1, so that in practice, one should determine whether an estimate of ρ using (4.17) is within this bound. If it is not, the estimate should be set to a proper value. When complete data are available, λ̂₁∗ = λ̂₁, λ̂₂∗ = λ̂₂ and p∗ = n(p−1),


so that (4.17) and (4.18) simplify to the complete data MLE's ρ̂ and σ̂². The MOM estimates, ρ̂_mom and σ̂²_mom, may be used to compute estimates of the (P+1) elements of λ∗, producing λ̂_mom as a vector of estimated heterogeneous variances. Similarly, define Δ̂_mom to be the (n∗ × n∗) estimated covariance matrix for the transformed incomplete data.
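The MOM estimates are available in closed form. This sketch assumes the large-sample moment relations E(λ̂₁∗) = σ²[1 + (p∗/n)ρ] and E(λ̂₂∗) = σ²(1−ρ) and solves them with the plug-in moments, truncating ρ̂ to its admissible range as described above (names are illustrative):

```python
def mom_estimates(lam1_star, lam2_star, n, p_star, p):
    """Method-of-moments estimates of (sigma^2, rho) from the "between"
    and "within" mean squares of the transformed OLS residuals."""
    sigma2 = (n * lam1_star + p_star * lam2_star) / (n + p_star)
    rho = (lam1_star - lam2_star) / (lam1_star + (p_star / n) * lam2_star)
    lo, hi = -1.0 / (p - 1), 1.0            # admissible range: -1/(p-1) < rho < 1
    rho = min(max(rho, lo + 1e-8), hi - 1e-8)
    return sigma2, rho
```

Unlike the ML estimates, no iteration is required, which is the computational appeal noted above.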

The asymptotic variances of ρ̂_mom and σ̂²_mom may be found using the delta method [see for example, Miller, 1981; pgs. 25-27]. First it is necessary to obtain the asymptotic covariance matrix, Va(λ̂₁∗, λ̂₂∗), as follows. In large samples,

Va(λ̂₁∗) = (2/n²) Σₘ nₘ λ₁ₘ² = (2/n²) Σₘ nₘ σ⁴[1 + (m−1)ρ]²

and

Va(λ̂₂∗) = 2σ⁴(1−ρ)²/p∗ .

Furthermore, it can be shown that λ̂₁∗ and λ̂₂∗ are asymptotically independent, so that

Va(λ̂₁∗, λ̂₂∗) = ⎡ Va(λ̂₁∗)      0     ⎤
               ⎣    0      Va(λ̂₂∗) ⎦ .

Define

g_σ²(λ₁∗, λ₂∗) = (n λ₁∗ + p∗ λ₂∗)/(n + p∗)

and

g_ρ(λ₁∗, λ₂∗) = (λ₁∗ − λ₂∗)/[λ₁∗ + (p∗/n)λ₂∗] .

Finally, let μ_λ₁∗ = E(λ̂₁∗) and μ_λ₂∗ = E(λ̂₂∗). Applying the delta method and making the appropriate substitutions gives

Va(σ̂²_mom) = {2σ⁴/(n + p∗)²} [ n + p∗ + p∗ρ² + Σₘ nₘ(m−1)²ρ² ]        (4.19)

and

Va(ρ̂_mom) = {2(1−ρ)²/(n + p∗)²} { [ n + 2p∗ρ + Σₘ nₘ(m−1)²ρ² ] + (n²/p∗)[1 + (p∗/n)ρ]² } .        (4.20)

Estimates of the asymptotic variances of the MOM estimators may be obtained by evaluating (4.19) and (4.20) at σ² = σ̂²_mom and ρ = ρ̂_mom. Additionally, (4.19) and (4.20) may be used to compute the asymptotic relative efficiencies of these estimators as compared to the MLE's. In general, MOM estimators are known to be less efficient than ML estimators.
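As a sanity check, note that an estimator of the form σ̂²_mom = (n λ̂₁∗ + p∗ λ̂₂∗)/(n + p∗) is linear in (λ̂₁∗, λ̂₂∗), so its large-sample variance follows exactly by linearity. The sketch below (assuming the large-sample variances Va(λ̂₁∗) = (2/n²)Σₘ nₘσ⁴[1 + (m−1)ρ]² and Va(λ̂₂∗) = 2σ⁴(1−ρ)²/p∗, as reconstructed here) shows that this route and the expanded closed form agree term by term:

```python
import numpy as np

def var_sigma2_mom(sigma2, rho, m_vals, n_m):
    """Two equivalent expressions for the large-sample variance of the
    MOM estimator of sigma^2 (forms assumed, not verbatim from the text)."""
    n = n_m.sum()
    p_star = np.sum(n_m * (m_vals - 1))
    v_lam1 = (2.0 / n ** 2) * np.sum(n_m * sigma2 ** 2 * (1 + (m_vals - 1) * rho) ** 2)
    v_lam2 = 2.0 * sigma2 ** 2 * (1 - rho) ** 2 / p_star
    # route 1: linearity of (n lam1* + p* lam2*)/(n + p*)
    direct = (n ** 2 * v_lam1 + p_star ** 2 * v_lam2) / (n + p_star) ** 2
    # route 2: the expanded closed form, cf. (4.19)
    expanded = (2.0 * sigma2 ** 2 / (n + p_star) ** 2) * (
        n + p_star + p_star * rho ** 2 + np.sum(n_m * (m_vals - 1) ** 2) * rho ** 2)
    return direct, expanded
```

The agreement of the two routes is an algebraic identity, so it holds for any configuration of equivalence classes.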


4.5 Approximate Weighted Least Squares Estimation

It is convenient to estimate β₀ from incomplete, transformed data using AWLS. In particular, let Δ̂_mom⁻¹ provide an approximate weight matrix. Define an AWLS estimate of β₀ as the β̃∗ that minimizes

[y∗ − f∗(β)]′ Δ̂_mom⁻¹ [y∗ − f∗(β)] .

Most importantly, note that because Δ̂_mom is a consistent estimate of Δ∗, the asymptotic results of §2.4 are applicable here as well. In particular, because Δ̂_mom provides a consistent estimate of Δ∗, a consistent estimate of the asymptotic covariance matrix for the AWLS estimate of β₀ may be computed using the expression of §2.4 evaluated at the AWLS estimate, β̃∗, and Δ̂_mom.

In addition, most of the asymptotic results reported in Chapter 3 are also applicable to the case of incomplete data, using Δ̂_mom in place of the complete-data covariance estimate in practice. The following section on inference will clarify one important modification to the results of Chapter 3 for applications involving incomplete data.

4.6 Inference

Most of the results of §3.2-3.4 may be transferred to the case of incomplete data with compound-symmetric covariance intact, replacing Δ₀ by Δ∗ in the theory. Similarly, W₁, W₂, L₁ and L₂ of §3.4 may be computed for incomplete data by replacing Δ̂ with Δ̂_mom or Δ̂∗, the MLE for incomplete data. While the test statistic for testing H₀ : ρ = 0 may be adapted in a straightforward fashion to include the case of incomplete data, it will not be discussed further because it is not likely to perform well even for complete data, as indicated earlier.

The proposed statistics, W_S and L_S, which make use of the compound symmetry assumption, must be altered to accommodate the somewhat different pattern

of variance heterogeneity inherent in the transformed model with incomplete data. These alterations are confined to estimation of the scale and degrees-of-freedom parameters, c_ε and ν_ε, as defined in Theorem 12 of Chapter 2. These will be rederived below for the case of incomplete data.

Theorem 12, part (i) holds for incomplete data as well, with "∗" appended to the relevant quantities to indicate an application involving incomplete data. Part (ii) of Theorem 12 for incomplete data is modified as follows:

(4.21)

and

(4.22)

Solving simultaneously for c_ε∗ and ν_ε∗ gives

(4.23)

and

(4.24)

In practice, (4.23) and (4.24) may be estimated using either the MLE's or the MOM estimates for ρ and σ² (and hence, estimates of λ₂ and the λ₁ₘ).

For the simulation studies reported in Chapter 5, the MOM estimates of σ² and ρ will be used for evaluating the performance of the estimation and inference methods with incomplete data.


Chapter 5

SIMULATION STUDIES

5.1 Introduction

Simulation studies provide an important means for evaluating small sample

properties of nonlinear models. Such studies provide a rational basis for constructing

sensible experimental designs when data are known to exhibit a nonlinear response.

However, there is no replacement for common sense and prudence in any particular

data analytic situation.

For univariate nonlinear models, a great deal of research has been done to show

that a lack of generalizability from one model to the next may be attributed to differing curvature properties of the models [Ratkowsky, 1983]. Recall from Chapter 1

that curvature is a geometric property of the solution locus for a model and data

combination. Curvature is what distinguishes a nonlinear model from a linear model

and therefore is generally agreed to be at least partially responsible for such properties

as bias of nonlinear parameter estimates. However, curvature is essentially a

small sample property since it is well known that under minimal regularity conditions a

nonlinear least squares parameter estimate is asymptotically unbiased and normally

distributed with variance close to the minimum variance bound.

With univariate models, the curvature measures of Bates and Watts [1980] can

often be used to predict when parameter estimates will be biased [Ratkowsky, 1983]. It

is generally agreed that of the two curvature measures, intrinsic and parameter effects

(PE) nonlinearity, the latter is far more important in predicting the accuracy of


estimation and confidence procedures in small samples. Donaldson and Schnabel [1987]

showed that higher PE curvature values were associated with lower coverage

probabilities for confidence procedures across a range of univariate nonlinear models.

However, even for situations in which nonlinear parameter estimates are biased only by

a negligible amount, hypothesis tests may still exhibit inflated Type I error rates,

particularly with small samples [Malott, 1985]. This raises the question of the

usefulness of curvature measures or bias in parameter estimation in predicting when

inference procedures will be anti-conservative, i.e. Type I error rates higher than

nominal α or confidence interval coverage smaller than 100(1−α)%. Furthermore, the interaction of these phenomena with multivariate data is currently not well

understood. Simulation studies with multivariate nonlinear models, such as the

following, help to lay the groundwork for future analytic work as well as provide

recommendations for researchers currently faced with such data.

5.2 Models Simulated

The multivariate models upon which the simulation studies are based were

constructed by considering n independent experimental units in which the p-dimensional

response vector for a single experimental unit may be described by a nonlinear model.

For these models, the p responses correspond to p design points. Furthermore, a

common correlation was induced among errors from a single experimental unit with a

common variance induced for each response. The same equicorrelation structure was

induced for each experimental unit.

The experiments chosen for these simulations provide interesting models which

are based on data which have appeared in the literature and for which information on

nonlinear behavior was available [Bates and Watts, 1988]. Two one-compartment

pharmacokinetic models, one with low PE curvature and one with moderate PE


curvature, were chosen for the simulation studies. This choice was made so that the

PE curvature measures obtained from a single experiment could be evaluated as a

predictor of accuracy of estimation and inference for a situation involving n sets of

independent experimental units and correlation among the p responses. It is important

to recognize that measures of multivariate curvature have not been proposed or

studied.

The one-compartment pharmacokinetic model function used for these simulations may be written

y_ij = θ₁[1 − exp(−θ₂ x_j)] + e_ij ,        (5.1)

in which i ∈ {1, 2, …, n} and j ∈ {1, 2, …, 6}. The simulation studies are based on

two sets of data from a Master's thesis entitled "Biochemical oxygen demand data

interpretation using the sum of squares surface," by Donald Marske [University of

Wisconsin, 1967]. These data were published in Draper and Smith [1981, p. 522] as part

of the exercises for nonlinear estimation. Subsequently, Bates and Watts [1988, p. 257]

fitted one-compartment pharmacokinetic models to these two sets of data and

estimated intrinsic and PE curvatures for them. Unfortunately, neither of the latter

two sources provides additional information regarding the experimental design which

would be helpful in adapting them to the multivariate setting sought here. Hence, each

set of data from Marske's thesis is used to provide a population response curve in the

simulation studies. Furthermore, consider the set of design points, x, to be fixed. The design points, parameter estimates and mean squared error, σ̂², for the two models

which formed the basis of the simulation studies are

MODEL 1:  x′ = (1, 2, 3, 5, 6, 7)′ ,  θ̂′ = (892.56, 0.245)′  and  σ̂² = 844.1 ,        (5.2)


MODEL 2:  x′ = (1, 2, 3, 5, 7, 10)′ ,  θ̂′ = (213.81, 0.547)′  and  σ̂² = 292.0 .        (5.3)
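For reference, the two population response curves can be evaluated directly from the mean function and the parameter values above (a minimal sketch; names are illustrative):

```python
import numpy as np

def one_compartment(theta1, theta2, x):
    # mean response m_j = theta1 * (1 - exp(-theta2 * x_j))
    return theta1 * (1.0 - np.exp(-theta2 * np.asarray(x, dtype=float)))

mu1 = one_compartment(892.56, 0.245, [1, 2, 3, 5, 6, 7])   # Model 1 population curve
mu2 = one_compartment(213.81, 0.547, [1, 2, 3, 5, 7, 10])  # Model 2 population curve
```

Both curves rise monotonically toward their asymptotes θ₁ = 892.56 and 213.81, respectively.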

Plots of the data and fitted curves for these models are provided in Figures 1a and b.

Both models possess scaled intrinsic curvature values which are sufficiently small to indicate that these models possess solution loci which are reasonably linear [Bates and

Watts, 1988]. However, both models possess scaled PE values which are sufficiently

large to suggest that estimation may be biased. Based on the sample of 67 models and

data combinations reported in Bates and Watts [1988, p. 256-259], these scaled

intrinsic and PE values are among the most frequently encountered in practice. The

scaled parameter effects values for models 1 and 2 are 3.09 and 1.17, respectively, with

values closer to zero (as would be found for a linear model) considered ideal. Upon

surveying a variety of model and data combinations, Donaldson and Schnabel [1987,

Figure 5] found that the decrement in coverage for a 95% Wald-based confidence

region dropped off linearly as a function of the logged scaled parameter effects values.

They observed, for several model and data combinations with scaled parameter effects

curvatures less than or equal to one, that confidence regions achieved approximately

95% coverage. In turn, interpolation on their Figure 5 indicates that for a scaled

parameter effects curvature of three, one might expect coverage which is five to ten

percent less than nominal.

It should be re-emphasized that these curvature measures are defined for data

obtained from completely independent observational units. Hence these may not be

appropriate for predicting estimation properties in a multivariate model with multiple

observational units. In the absence of suitable multivariate measures, these univariate

measures provide a rational, though possibly inadequate, basis for the choice of models

for these simulation studies. A final note in this regard is that even univariate analytic


curvature measures sometimes fail dramatically to predict estimation behavior so that

such measures are often used in conjunction with simulation studies. Ratkowsky [1983,

§9.5] provided several interesting examples of this phenomenon.

It is often of interest to test whether different treatment groups exhibit the

same response curve. For the purpose of evaluating hypothesis testing, the following

hypothesis of no difference between treatment groups was chosen:

H₀ : θ_g = θ_g′   vs.   H_a : θ_g ≠ θ_g′ ,        (5.4)

in which g ∈ {1, 2} indexes treatment group and g ≠ g′. For the models under consideration, θ₁′ = (θ₁₁, θ₂₁)′ and θ₂′ = (θ₁₂, θ₂₂)′. Note that this hypothesis is a

"neither A nor B" type hypothesis in the terminology of Arnold [1981]. Furthermore,

the model chosen for these simulation studies does not possess a form which permits

the methods of Hafner [1988] to be used. Hence this combination of nonlinear model

function and hypothesis test provides a testing ground which is inaccessible with the

methods of Hafner. However, for situations in which the methods of Hafner do apply,

they would be preferable due both to their simplicity of implementation and accuracy

with respect to Type I error rate.

Unconstrained covariance estimation may be used in practice despite the fact

that ignoring the nature of the compound-symmetric covariance structure amounts to a

model misspecification which may have implications in a nonlinear model for which

unbiasedness and efficiency of parameter estimates are only asymptotic properties.

Gallant [1987] proved that, even when the covariance structure is ignored in the estimation procedure, the usual inference methods are still asymptotically correct. However, with

small samples, correct covariance specification is expected to play an important role in

inference for multivariate nonlinear models. Further consider that a population

correlation of zero corresponds to a special case of compound symmetry, namely sphericity. For simulated cases with ρ = 0, subsequent data analysis using either the constrained or unconstrained covariance structure may be considered a second kind of model misspecification, since the true covariance structure involves only one parameter. To the extent that ρ can be accurately estimated, the constrained covariance estimation procedure is likely to fare better than one based on an assumption of a general, unspecified covariance structure. The inclusion of the ρ = 0 case essentially provides an evaluation of the effect of estimation of ρ on the analysis methods evaluated here. From work by Hafner [1988] it is reasonable to expect Type I error rates near the nominal level when the data possess spherical covariance and OLS is used. For data simulated with ρ = 0, and to the extent that ρ is estimated without bias, the AWLS method would approximately coincide with OLS.

5.3 Simulation Design

Two simulation studies were conducted to evaluate the methods developed in

Chapters 2, 3 and 4. Errors were generated to follow a multivariate Gaussian

distribution, with compound-symmetric covariance. One study was constructed to

address estimation and inference procedures when complete data are available while a

second study was designed to evaluate the methods of Chapter 4 for incomplete data.

For both studies, data were generated under the null hypothesis of no group difference.

The parameter estimates for models 1 and 2 above were used as the fixed

population parameters in the simulation studies. In addition it was assumed that the

same number of experimental units were present in each of the two groups so that

n_g = n/2 for g ∈ {1, 2}, and that the same fixed parameter values applied to both

groups within a given model.

For the complete data simulation study, a 2³×3² factorial design was used to

investigate the effect of the following factors on estimation and/or inference:


1) moderate vs. low parameter effects curvature in the underlying response curve,

2) constrained vs. unconstrained covariance estimation,

3) approximate vs. iterated approximate weighted least squares estimation,

4) sample sizes of n ∈ {10, 20, 40}, and

5) population correlation of ρ ∈ {0, 0.3, 0.6}.

In order to reduce the number of conditions to be simulated, an incomplete factorial design was constructed from the full design by eliminating consideration of some sample size and population correlation combinations. Figure 2 illustrates the

particular choice of conditions used in the complete data simulation design.

Until very recently [see Gennings and Chinchilli, 1989], the compound-

symmetric covariance structure in conjunction with a nonlinear model was typically

ignored in the estimation procedure. Subsequent analysis of such data also ignores the

compound symmetry [see for example Gallant, 1987, Chapter 5 or Seber and Wild,

1989, Chapter 11]. While Gennings and Chinchilli provided an example using

constrained covariance estimation, they did not compare their results to those which

would have been obtained using unconstrained covariance estimation. Furthermore,

they did not address the issue of small sample size. A major objective of this research

is to evaluate the small sample accuracy of inference for multivariate nonlinear models

with compound-symmetric covariance for which an estimation procedure addressing

this structure has been used. Hence an appropriate comparison is provided by the

unconstrained covariance method.

For the incomplete data simulation study, a 2³ complete factorial design was used to evaluate the effects of the following factors on estimation and inference:

1) sample size of n ∈ {10, 20},

2) population correlation of ρ ∈ {0.3, 0.6}, and

3) 5% vs. 10% randomly missing data.


Only model 1, with moderate parameter effects curvature, was used. The proposed

approximate weighted least squares estimation methods of Chapter 4 were evaluated,

which disallowed unconstrained covariance estimation. Furthermore, the proposed

incomplete data estimation procedure has no iterated analog, so that only an AWLS

procedure was evaluated. Hence, ML methods for incomplete data were not evaluated

here.

The simulations were accomplished using code written by the author in SAS's

PROC IML (available upon request). The steps of the algorithm used to generate the

data are outlined below. For each replicated data set, from a given model, the

following steps were taken:

STEP 1: An (n × p) matrix, Z, of i.i.d. N(0, 1) random variables was constructed using the RANNOR function.

STEP 2: An (n × p) matrix of correlated errors, E, was constructed by applying the following transformation to Z: E = Z[Dg(λ)]^{1/2}Γ′, where Γ was obtained as the set of p column-orthonormal vectors from the ORPOL function and the elements of λ are the ordered eigenvalues corresponding to the eigenvalue decomposition of a fixed population covariance matrix, Σ_c, obtained from the population values for ρ and σ². Thus, V[rowᵢ(E)] = Σ_c for i ∈ {1, 2, …, n}.

STEP 3: An (n × p) matrix of simulated responses was constructed by adding the error matrix from step 2 to the matrix of nonlinear expected values, M, in which M = {m_ij} with m_ij = θ₁[1 − exp(−θ₂x_j)]. This produced the matrix of simulated responses Y = M + E, consistent with the null hypothesis of no group effects.

(STEP 4): (for the missing data simulations only). "Missingness" was induced

for each simulated dataset as described below.

Page 132: by - stat.ncsu.eduboos/library/mimeo.archive/ISMS_19… · vii CHAPTER 5: SIMULATION STUDIES 5.1 Introduction 112 5.2 Models Simulated 113 5.3 Simulation Design 117 5.4 Results of

120

(STEP 5): (for the constrained estimation methods only). The simulated data were transformed a priori, using a columnwise orthonormal matrix. For complete data, the transformation matrix used was Γ obtained from the ORPOL function, producing Y∗ = YΓ. For incomplete data, refer to Chapter 4 for the form of the transformation matrices applied.

STEP 6: OLS was used to fit the full model to the data in order to obtain the

OLS residuals.

STEP 7: An estimate of the covariance matrix (constrained or unconstrained) was computed from the OLS (or, for iterated methods, AWLS) residuals.

STEP 8a: AWLS, using the estimated covariance structure from step 7, was used to fit the unrestricted model to the data, producing the expected value parameter vector θ̂_u′ = (θ̂₁₁, θ̂₂₁, θ̂₁₂, θ̂₂₂)′.

STEP 8b: AWLS, using the estimated covariance structure from step 7, was used to fit the restricted model to the data, producing the expected value parameter vector θ̂_r′ = (θ̂₁, θ̂₂)′.

STEP 9: A set of statistics sufficient to conduct the tests described in Chapters 3 and/or 4 was computed.

Expected value parameter estimates, test statistics and diagnostic measures of the

performance of the algorithm were stored. For all of the complete data simulations

with n = 10 or 20, steps 7 and 8 were iterated until the convergence criterion was

reached and a corresponding set of parameter estimates from this ITAWLS procedure

were stored.
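Steps 1-3 can be sketched as follows, with NumPy's generator standing in for RANNOR and `numpy.linalg.eigh` supplying the eigenvalue decomposition that ORPOL-based construction yields for a compound-symmetric matrix (an illustration, not the author's IML code):

```python
import numpy as np

def cs_errors(n, p, sigma2, rho, rng):
    """Steps 1-2: compound-symmetric errors via the eigenvalue decomposition
    of Sigma_c = sigma2 * [(1-rho) I + rho J]."""
    sigma_c = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
    lam, gamma = np.linalg.eigh(sigma_c)          # eigenvalues / eigenvectors
    z = rng.standard_normal((n, p))               # Step 1: i.i.d. N(0, 1)
    return z @ np.diag(np.sqrt(lam)) @ gamma.T    # Step 2: E = Z Dg(lam)^{1/2} Gamma'

def simulate_responses(theta1, theta2, x, n, sigma2, rho, rng):
    # Step 3: Y = M + E with m_ij = theta1 * (1 - exp(-theta2 * x_j))
    m = theta1 * (1.0 - np.exp(-theta2 * np.asarray(x, dtype=float)))
    return m + cs_errors(n, len(x), sigma2, rho, rng)
```

By construction the rows of the error matrix have covariance exactly Σ_c, so sample moments converge to the population values as n grows.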

Let SSE_k and SSE_{k+1} denote the sums of squares obtained at two successive β-iterations. β-iteration was terminated for ε_β = (SSE_k − SSE_{k+1})/SSE_k < 1 × 10⁻⁸. Recall that a modified Gauss-Newton algorithm, described in Chapter 1, was used for the β-iterations, so that a halving step was employed in cases where overshoot occurred.


However, if 10 halving steps did not permit the sum of squares to decrease at that

iteration, the algorithm was terminated and the replicate was omitted from further

analysis. As will be shown, this was a very rare occurrence.
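The modified Gauss-Newton step with halving can be sketched for the one-compartment mean function as follows. This is an illustration under simplifying assumptions (unweighted least squares, names chosen for clarity), not the author's IML code:

```python
import numpy as np

def gauss_newton_halving(y, x, theta0, tol=1e-8, max_iter=100, max_halvings=10):
    """Gauss-Newton for f(theta) = theta1*(1 - exp(-theta2*x)), with step
    halving when a full step fails to decrease the sum of squares."""
    x = np.asarray(x, dtype=float)
    theta = np.asarray(theta0, dtype=float)
    f = lambda th: th[0] * (1.0 - np.exp(-th[1] * x))
    sse = lambda th: float(np.sum((y - f(th)) ** 2))
    for _ in range(max_iter):
        s0 = sse(theta)
        if s0 <= 1e-12:                     # already at a (near) perfect fit
            return theta
        e = np.exp(-theta[1] * x)
        jac = np.column_stack([1.0 - e, theta[0] * x * e])   # df/dtheta
        step = np.linalg.lstsq(jac, y - f(theta), rcond=None)[0]
        lam = 1.0
        for _ in range(max_halvings):
            if sse(theta + lam * step) < s0:
                break
            lam /= 2.0
        else:
            return theta      # 10 halvings failed: replicate would be dropped
        theta = theta + lam * step
        if (s0 - sse(theta)) / s0 < tol:   # relative-decrease criterion
            break
    return theta
```

The `for`/`else` on the halving loop mirrors the termination rule described above: if ten halvings do not decrease the sum of squares, the iteration stops.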

Let g index successive I-iterations and define the following sum of squares:

SSE = S(β, Δ) = Σᵢ₌₁ⁿ [yᵢ − fᵢ(β)]′ Δ⁻¹ [yᵢ − fᵢ(β)] .        (5.5)

Gallant [1987] showed that I-iteration convergence could be monitored by evaluating ε_I = S(β̂⁽ᵍ⁾, Δ̂⁽ᵍ⁺¹⁾) − S(β̂⁽ᵍ⁺¹⁾, Δ̂⁽ᵍ⁺¹⁾) at each I-iteration. For these simulations, I-iteration was terminated for ε_I < 1 × 10⁻⁸. Inadvertently, ε_I was not scaled in the same fashion that ε_β was. Hence, the convergence criterion for I-iterations was much more stringent than for β-iterations.
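The monitored quantities are cheap to compute once S(β, Δ) of (5.5) is available; a minimal sketch, with the fitted-value matrix supplied directly and names chosen for illustration:

```python
import numpy as np

def S(y, fitted, delta):
    """Sum of squares (5.5): sum_i [y_i - f_i]' Delta^{-1} [y_i - f_i],
    with y and fitted as (n x p) matrices."""
    r = y - fitted
    return float(np.einsum('ij,jk,ik->', r, np.linalg.inv(delta), r))

def i_iteration_converged(s_old_beta, s_new_beta, eps=1e-8):
    # epsilon_I = S(beta_g, Delta_{g+1}) - S(beta_{g+1}, Delta_{g+1})
    return (s_old_beta - s_new_beta) < eps
```

Dividing ε_I by S(β̂⁽ᵍ⁾, Δ̂⁽ᵍ⁺¹⁾) would give the scaled criterion that, as noted above, was inadvertently not used.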

For the incomplete data simulations, an additional step (4) was included which "tagged" either 5% or 10% of the observations as missing. Missingness was generated completely at random, in such a way that the exact number of missing values desired was achieved for each case. Furthermore,

the selection of the missing data values was such that at least one observation

remained for each of the n experimental units. Missingness was generated without

regard to the specification of the two treatment groups, hence it was random with

respect to the full (n x p) matrix of simulated data. The RANUNI function was used to

generate indices identifying a particular data value to be tagged as missing.

Subsequently, these indices were carried throughout the algorithm to induce the

appropriate missingness structure where necessary. For example, the subsequent a

priori orthonormal model transformation in step 5 utilized the missingness indices to

produce the appropriate "customized" transformation required for each replicate.
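The tagging scheme can be sketched as follows, with NumPy's generator standing in for RANUNI; a simple rejection step enforces the constraint that every experimental unit retain at least one observed response (names are illustrative):

```python
import numpy as np

def tag_missing(n, p, frac, rng):
    """Return an (n x p) boolean mask with exactly round(frac*n*p) cells
    tagged missing completely at random, such that every experimental
    unit keeps at least one observed response."""
    n_miss = int(round(frac * n * p))
    while True:
        cells = rng.choice(n * p, size=n_miss, replace=False)
        mask = np.zeros(n * p, dtype=bool)
        mask[cells] = True
        mask = mask.reshape(n, p)
        if (~mask).any(axis=1).all():      # every row keeps an observation
            return mask
```

Because missingness is drawn over the full (n × p) grid, it is random with respect to the two treatment groups, matching the design described above.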

The seed values used to generate the errors for the simulated data and the

indices of missingness were five digit positive integers obtained by giving the RANUNI


function a single starter seed. The same seeds were used for corresponding evaluation

of the complete data models using the constrained and unconstrained covariance

estimation procedures. This was done to ensure the comparability of competing data

analytic techniques applied to identical data. In all other cases, unique seed values

were used for each simulation task.
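The same device, known in the simulation literature as common random numbers, is easy to illustrate outside of SAS. This Python sketch (names hypothetical) re-seeds a generator so that two competing analyses see literally identical simulated errors.

```python
import random

def error_stream(seed, n_draws):
    """Regenerate the identical stream of pseudo-random errors from a
    stored seed, so that competing estimation methods can be applied
    to the same simulated data set."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_draws)]

constrained_data = error_stream(18731, 8)
unconstrained_data = error_stream(18731, 8)  # same seed, identical "data"
```

Reusing the seed removes between-method sampling variation from the comparison, exactly as intended above.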

For the complete data simulation study, 1000 replications of each condition

were produced. For the incomplete data simulation study, where CPU time rapidly

increased with both sample size and percent of missing data, 500 replications of each

condition were produced. Refer to Table 4 for approximate 95% confidence intervals

associated with a range of observed Type I error rates and number of replications

relevant to these simulation studies.
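The intervals in Table 4 are presumably the usual normal-approximation confidence limits for a binomial proportion; under that assumption, the half-widths relevant to the two studies can be computed directly.

```python
import math

def type1_ci(p_hat, n_reps, z=1.96):
    """Approximate 95% confidence interval for an observed Type I
    error rate based on n_reps independent replications."""
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n_reps)
    return p_hat - half, p_hat + half

lo_1000, hi_1000 = type1_ci(0.05, 1000)  # complete data study
lo_500, hi_500 = type1_ci(0.05, 500)     # incomplete data study
```

At the nominal 0.05 level, 1000 replications resolve the Type I error rate to within about ±0.014, and 500 replications to within about ±0.019.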

5.4 Results of the Complete Data Simulation Study

The objectives of the complete data simulation study are outlined as follows. First, the computational efficiencies of the constrained and unconstrained covariance

estimation methods are to be compared. Second, determination of whether parameter

effects curvature measures for a single response curve predict nonlinear behavior in a

multivariate model will be made. Third, the effects on estimation and inference related

to the covariance estimation method (constrained or unconstrained) will be assessed.

Fourth, the effects on estimation and inference related to using an iterated (ITAWLS)

or one-step (AWLS) estimation procedure will be assessed. Fifth, the performance of

estimation and inference methods using small samples will be evaluated. Sixth, the

effects of population correlation on estimation and inference methods will be examined.

Seventh, a comparison of the Type I error rates for the new, unstandardized,

approximate F statistics to those for the existing, standardized, approximate F

statistics will be made. Finally, for the cases in which the Type I error rate is close to


the nominal level, the conformity of the associated observed statistics to their

hypothesized distributions will be evaluated.

The first objective will be met by examining the number of θ- and Σ-iterations

necessary to fit the models being studied. The second through sixth objectives will be

met by examining the bias in parameter estimates and their estimated asymptotic

covariance matrix, as well as examination of Type I error rates for the various

simulation conditions. The eighth objective will be achieved by conducting one-sample

Kolmogorov-Smirnov tests of the goodness of fit for the approximate F statistics as

compared to their hypothesized distributions. Additionally, F-plots corresponding to

tests which exhibit close to nominal Type I error rates will be used to compare the

observed to the hypothesized distributions of the associated statistics.

5.4.1 Evaluation of the Estimation Methods

It is apparent from Table 5a that the algorithms employed to fit the full and

reduced models worked very well for model 1. Failure to achieve the convergence

criteria occurred only rarely. Failure to converge was more frequent upon fitting the

reduced models than the full models. This may be related to the fact that the

estimated covariance matrix used for fitting a reduced model was that obtained from

the previous full model fit [see §3.3]. For model 2, failure to converge occurred only once (for a reduced model with n = 40, ρ = 0.3).

The average number of iterations until convergence criteria were reached, for model 1, are reported in Table 5b. Similar results were found for model 2; for brevity, these are not reported. For the ITAWLS estimation methods, the number of θ-iterations refers to the total number of θ-iterations accumulated across all of the Σ-iterations. In the AWLS procedures, the mean number of θ-iterations is 2.9 and remained relatively consistent across combinations of sample size and correlation. The


average number of iterations was generally slightly higher for the unconstrained

covariance estimation method than for the constrained covariance estimation method.

For the ITAWLS estimation procedures, the number of θ-iterations is larger than for AWLS due to the Σ-iteration process. The number of Σ-iterations increases with

correlation, but not sample size, for the constrained covariance estimation method.

Finally, the number of Σ-iterations required for the unconstrained covariance

estimation method is roughly three times that observed for the constrained covariance

estimation method. It is clear that the constrained covariance estimation method is

much more computationally efficient than the unconstrained covariance estimation

method when ITAWLS are used. In contrast, the computational efficiency of this

algorithm when AWLS are used is not dependent on the type of covariance estimation

used. Note that, despite a very stringent convergence criterion for the i-iteration

process (corresponding to eight digits of accuracy), relatively few iterations were

required to fit these models.

It is evident from Tables 6a and 6b that regardless of estimation procedure (ITAWLS vs. AWLS or constrained vs. unconstrained covariance estimation), sample size or population correlation, on the average, parameter estimates were only slightly biased. Estimation of θ1 (or θ11 and θ12) in model 1 is consistently positively

biased, though only by a negligible amount which never exceeds 0.3% of the population

value. A similar result was found for estimation of θ1 (or θ11 and θ12) in model 2. For both models 1 and 2, estimation of θ2 (or θ21 and θ22) is nearly unbiased. From Tables

6a and 6b it appears, in practical terms, that PE curvature was not evident in either

model 1 or 2. This may be due to the relatively much larger amount of data available

in this multivariate design as compared to estimation based on a single response curve.

It is well known that the effects of PE curvature disappear asymptotically. However, it

is not clear whether reduction of PE curvature, or bias in parameter estimation, is

more easily achieved by including more design points (corresponding to increasing the


number of repeated measures in these models) or simply providing additional

observations at each existing design point (corresponding to increasing sample size in

these models). Ratkowsky [1983, §6.2] provided an interesting example of the former.

The effectiveness of the former approach is related to directly improving the resolution

of the solution locus. The latter approach corresponds to simply reducing the pure

error variance.

In order to better understand the reduction in the bias of parameter estimation

incurred by increasing the sample size, a small simulation was conducted using the

reduced model 1, with spherical covariance, 500 replications, and sample sizes of one,

two and five (per design point). The sample size of one corresponds exactly to the

model upon which the PE curvature measure was computed. For that case, the

average of the estimates for θ1 was 900.98, which represents an average bias of about 1%. The average of the estimates for θ2 was 0.248, which represents an average bias of about 1.4%. It appears that for model 1, despite possessing a moderate PE curvature

value, bias in parameter estimation is not particularly problematic even without

duplicate design points. Doubling the sample size reduced the average bias in

estimation of θ1 and θ2 to 0.3% and 1.2%, respectively. For a sample size of five, the

average biases were further reduced to 0.3% and nearly 0% for these parameter

estimates. Increasing the number of independent observational units appears to be a very

effective way of reducing bias in parameter estimation.

Mean elements of the estimated asymptotic covariance matrix for θ̂' = (θ̂1, θ̂2)'

from the reduced model 1 appear in Table 7a. In addition, this table provides the

percent of the sample values achieved by these estimates as a relative measure of bias.

There is close agreement between the mean asymptotic variance estimates and the

sample values when constrained covariance estimation was used, although typically the mean asymptotic estimate is smaller than the sample estimate, as evidenced by percents of sample value achieved which are less than 100%. However, when unconstrained covariance estimation was used this discrepancy is much larger. With

constrained covariance estimation the mean asymptotic estimate is no worse than 90%

of the sample value while with unconstrained covariance estimation this falls as low as

51% and never achieves better than 82%. There is virtually no difference between

covariance estimates obtained from AWLS as compared to ITAWLS when constrained

covariance estimation was used. In contrast, when unconstrained covariance

estimation was used, ITAWLS produced mean asymptotic estimates which typically

achieve 10% less of the sample values than those produced using AWLS. Hence, it

appears that when unconstrained rather than constrained covariance estimation is

used, the iterated estimation method produces more biased estimates of the asymptotic

covariance among parameter estimates. The mean elements of the estimated

asymptotic covariance matrix for θ̂ in the reduced model 2 using constrained

covariance estimation appear in Table 7b. Comments similar to those regarding Table

7a apply to Table 7b as well.

The average estimated variance components and their percent of population

value achieved for the transformed models 1 and 2 are reported in Tables 8a and b,

respectively. The percents appearing in the last two columns, which are all less than

100%, consistently indicate negative bias. The largest biases are observed for the

smallest sample size, n = 10, as might be expected. Bias decreases with sample size

with generally less than 5% relative bias for estimates obtained from samples of size 40.

Most notably, the bias is much greater for λ̂1 than for λ̂2, even for the cases when ρ = 0, for which λ1 = λ2 = σ². This is most likely related to the fact that estimation of λ1 is based on only n observations while that of λ2 is based on n(p−1) observations. Improvement in the relative bias, particularly for estimation of λ1, is observed for AWLS and ITAWLS as compared to OLS when ρ = 0. However, the AWLS and ITAWLS estimates of λ1 and λ2 differ very little.


The average estimated variance components and their relative biases for models

1 and 2 are reported in Tables 9a and b, respectively. Estimation of σ² tends to be somewhat less biased than estimation of ρ, especially for the smallest sample size, n = 10, and ρ = 0.3. It appears that ρ̂ is more biased for smaller ρ using the relative scale for bias. Using an absolute scale for bias, i.e. bias = ρ̂ − ρ, estimation of ρ is actually much less biased when ρ = 0 than for larger ρ. In any case, estimation of

these variance components is consistently negatively biased.

In summary, estimation of θ is nearly unbiased for these models. Furthermore, the average bias is typically negligible. In contrast, estimation of the covariance matrix for θ̂ is consistently negatively biased, though generally by no more than 10%. This is

most likely related to the bias in estimation of the variance components. Estimates of

the variance components are consistently negatively biased, often by 10% or more of

the population values.

Some comments regarding the consistent negative bias of the elements of the

estimated asymptotic covariance matrix for θ̂ are warranted. Recall that the asymptotic covariance matrix for θ̂ is a function of the covariance matrix among repeated measurements, Σ. Estimation of Σ, throughout, was based on np degrees of freedom. Particularly in small samples, it is appropriate to multiply Σ̂ by a factor of (np)/(np−q) in order to produce a less biased estimate. For a sample size of n = 10

this factor is equal to 1.07 and it is equal to 1.03 for a sample size of n = 20. With

regard to estimation, such correction is clearly advisable with one important exception.

This exception concerns estimation of the covariance matrix among repeated

measurements when compound symmetry is assumed and the orthonormal

transformation has been applied to the model. In this case Σ = Dg(λ1, λ2, ..., λ2), and estimates of λ1 and λ2 are based on n and n(p−1) degrees of freedom, respectively. Estimators for both λ1 and λ2 depend on the q-dimensional estimate of θ from the full


model. However, if one were to correct both sets of degrees of freedom by q, the pooled

degrees of freedom would be np - 2q, rather than the desired np-q. Hence, it is

unclear what constitutes a small sample degree of freedom correction for these

estimators.
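The inflation factors cited above follow directly from (np)/(np − q). The values 1.07 and 1.03 are reproduced below with p = 6 repeated measures and q = 4 mean parameters; these are assumed illustrative values consistent with the reported factors, not quantities stated in this section.

```python
def df_inflation(n, p, q):
    """Small-sample multiplier (np)/(np - q) applied to the estimate
    of Sigma to reduce its negative bias."""
    return (n * p) / (n * p - q)

# hypothetical design values chosen to reproduce the reported factors
factor_n10 = df_inflation(10, 6, 4)  # about 1.07
factor_n20 = df_inflation(20, 6, 4)  # about 1.03
```

As the text notes, these worst-case multipliers shrink quickly as n grows, which is why their practical effect on inference proved negligible.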

Note that the inflation factors mentioned above, despite being "worst case"

estimates, are relatively small so that it is not clear how using them will affect

subsequent inference procedures. It should also be noted that for certain applications,

the small sample degree of freedom corrections to variance estimates are irrelevant. In

particular, these correction factors would cancel out of the numerator and denominator

of the standardized F statistics and would never appear in the unstandardized F

statistics so that hypothesis testing would remain unaffected. However, Wald based

confidence intervals are affected since they rely directly on the estimated covariance

matrix. It will be shown in the next section that for the situations studied here these

effects are negligible.

For situations in which the underlying covariance structure is modelled

correctly, there is little difference in the estimates obtained from either AWLS or

ITAWLS. However, ITAWLS provides more highly biased estimates than AWLS

under misspecification. In either case, it appears that there is little to be gained by

going to the extra trouble to do ITAWLS.

5.4.2 Evaluation of the Inference Methods

The Type I error rates for the approximate F-tests of no group difference for

models 1 and 2 are reported in Tables 10a and b, respectively. An important finding for this simulation study, which is well illustrated in Tables 10a and b, is that in many

cases Type I error rates close to the nominal 0.05 level may be achieved for the

approximate F-tests when the model covariance structure is correctly specified and, subsequently, the constrained covariance estimation method is used. This is particularly true for sample sizes of 20 or 40. However, when unconstrained covariance

estimation was used, all of the simulation conditions gave significantly inflated Type I

error rates which were often three to four times the nominal rate of 0.05.

Furthermore, when constrained covariance estimation was used, the Type I error rates

for ITAWLS are essentially the same as those for AWLS. In contrast, when

unconstrained covariance estimation is used, the Type I error rates for ITAWLS are

much worse than those for AWLS. The underlying population correlation appears to

have no effect on the Type I error rate.

It is interesting to observe that the unstandardized, approximate F statistics

(W2, W3, L2 and L3) appear to perform similarly to the standardized approximate F statistics (W1 and L1), under H0, for a variety of sample sizes and population

correlations when constrained covariance estimation is used. In contrast, when

unconstrained covariance estimation is used, the unstandardized statistics (W2 and L2) appear to have smaller Type I error rates than the standardized statistics (W1 and L1).

However the Type I error rates for all of the approximate F-tests are unacceptably

high when unconstrained covariance estimation is used. This indicates that the

unstandardized statistics possess some ability to compensate for the misspecification in

the modelling of the covariance structure. It is likely that in some circumstances this

compensation would be sufficient to allow the unstandardized statistics to provide

nearly correct Type I error rates. For a few of the cases presented here, the Type I

error rate of the unstandardized statistics used with unconstrained covariance

estimation approaches an acceptable level, particularly for the largest sample size of 40.

For the hypothesis of no group difference used here in conjunction with the one-compartment pharmacokinetic models 1 and 2, the set of Wald statistics perform very

similarly to the set of likelihood ratio statistics. The likelihood ratio statistic (L1) may


be expected, in some circumstances, to provide less biased Type I error rates than the

Wald statistic (W1), for example when PE curvature is a significant feature of the

model and data being considered. However, when PE curvature is small or absent (as

for a linear model) these two statistics may be expected to perform similarly. Hence,

this observation is consistent with the estimation results which do not support the

notion that PE curvature was a prominent feature for either model.

The Type I error rates for the approximate χ² tests of the no group difference hypothesis are presented in Tables 11a and b for models 1 and 2, respectively. These

tests were constructed by considering only the numerators of the approximate F tests.

Refer to Tables 2 and 3 for the form of these approximate χ² statistics relative to the

approximate F statistics. The approximate F tests generally provide better Type I

error control than do the χ² tests. This is supported by the comparison of Tables 11a and b to Tables 10a and b, respectively. As is true for the approximate F tests, when

unconstrained covariance estimation is used, the Type I error rate of the approximate χ² tests is badly inflated, with some improvement seen for the unstandardized versions

as compared to the standardized versions of the test statistics. Also, as is true for the

approximate F tests, with unconstrained covariance estimation, ITAWLS provides

much more inflated Type I error rates than does AWLS.

It is interesting to note that for model 1, with constrained covariance estimation and sample sizes of 20 or 40, the Type I error rates of the approximate χ² tests often approach or achieve the nominal rate of 0.05. However, it is generally recommended that a test based on an F approximation, as opposed to a χ² approximation, be used in practice. The inclusion of the approximate χ² statistics was

meant to provide a way to evaluate the effect of using an unstandardized numerator as

opposed to a standardized numerator in the approximate F statistics. In this regard,

recall that W4 and L4 may be considered standardized statistics while W5 and L5 are


unstandardized. It is clear from Tables 11a and b that the unstandardized statistics

may result in Type I error rates that are either more or less inflated than standardized

statistics. Therefore, it is not possible to claim that they consistently perform better

(or worse) than their standardized counterparts.

Slightly more accurate Type I error rates are observed for model 1 as compared

to model 2. This is most likely related to sampling variation or inter-model

differences which are not well understood. In any case, this observation is in the

opposite direction to that expected for PE curvature to have been responsible.

The percent coverage of approximate 95% Wald type confidence intervals

(obtained by inverting the W1 test) on θ1 and θ2 from model 1 are reported in Table

12. The coverage for all of the sample size or population correlation combinations is

excellent when constrained covariance estimation was used. However, when

unconstrained covariance estimation was used, the confidence interval coverage falls to

less than 90% in most cases. Some improvement is seen with a sample size of 40. As

might be expected, these findings are consistent with the corresponding Type I error

rates reported for the W1 statistic in Table 10a. In order to follow up on the comments made in §5.4.1 about small sample degree of freedom corrections, the 95%

Wald type confidence intervals were re-computed using variance estimates based on

np-q, rather than np, degrees of freedom. Improvement in coverage probabilities never

exceeded 1% so that with constrained covariance estimation the average coverage

probability for the smallest sample size went from about 94% to 95%. With

unconstrained covariance estimation the average coverage probability for n = 20 went

from about 84% to 85%. For brevity, the many exact numbers are not reported. In

practice, it is recommended that the small sample degree of freedom correction be used

in computing Wald based confidence intervals because this does provide some

improvement to the typically too small coverage probability.


One-sample Kolmogorov-Smirnov tests of the goodness of fit of the

approximate F statistics, W1 and L1, to an hypothesized F distribution with 2 and

np-q degrees of freedom are reported for models 1 and 2, with AWLS constrained

covariance estimation, in Table 13. In addition the observed F value associated with

the Kolmogorov-Smirnov statistic, Dmax, appears in Table 13. Note that Dmax is the

maximum difference in probability corresponding to a comparison of the observed vs.

hypothesized cumulative probability plots. A Bonferroni-corrected nominal α = 0.003

was used to evaluate the p-values from the set of 18 goodness of fit tests performed on

each model. The goodness of fit hypothesis was accepted for all but some of the cases

with the smallest sample size of 10. This provides confirmation of the appropriateness

of these statistics for applications involving small samples as well as a basis for

comparison to the unstandardized approximate statistics, W2, W3, L2 and L3. Noting that all of the critical values for these cases were between 3.00 and 3.70, only one instance

of a significantly large discrepancy between observed and hypothesized distributions

occurred in the neighborhood of the critical value. This occurred for the L1 statistic in

model 1 with n = 10 and ρ = 0, in which F = 3.79.
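The Dmax statistic itself is simple to compute once a hypothesized CDF is in hand. This sketch accepts any CDF as a callable (in practice the F CDF with 2 and np−q degrees of freedom, from a statistical library) and checks both sides of each jump of the empirical step function.

```python
def ks_dmax(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the maximum absolute
    gap between the empirical CDF of `sample` and a hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # the empirical CDF jumps from i/n to (i+1)/n at x
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# for example, against the Uniform(0,1) CDF:
d = ks_dmax([0.1, 0.4, 0.6, 0.9], lambda x: min(max(x, 0.0), 1.0))
```

The statistic is the largest vertical discrepancy between the observed and hypothesized cumulative probability plots, exactly as described above.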

F-plots were constructed to correspond to the test statistics evaluated in Table

13. An F-plot is constructed by plotting the hypothesized and observed F values, as

functions of the hypothesized values, for each replication. The observed F values were

obtained by evaluating the inverse F function at the sample quantile for that F

statistic. The approximately 45° line represents the hypothesized F distribution. The

starred "." points correspond to the observed F values as a function of the

hypothesized F values. A vertical reference line was drawn at the critical F value for

the test under consideration. The F-plot for the W1 test statistic from model 1 with

n = 20 and ρ = 0.6 is provided in Figure 3 in order to provide an example of the

conformity of the standardized approximate statistics to their hypothesized

distribution. Reasonable conformity is evidenced by slopes close to one for the observed


values. Note that this is seen for all but the largest values of F. For every sample size

and population correlation combination reported in Table 13, the F-plots for W1 and L1

appeared to be very similar so that only the one F-plot is shown for brevity.
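The plotting coordinates for such an F-plot can be produced with a generic quantile pairing. This sketch takes the hypothesized inverse CDF as a callable (for the standardized statistics, the inverse F with 2 and np−q degrees of freedom, e.g. scipy.stats.f.ppf) and uses centered plotting positions, an assumed convention since the exact positions used here are not stated.

```python
def fplot_pairs(stats, ppf):
    """Pair each ordered observed statistic with the hypothesized
    quantile at the same plotting position u_i = (i - 0.5)/n.
    Plotting the (hypothesized, observed) pairs gives points near the
    45-degree line when the statistics conform to the distribution."""
    xs = sorted(stats)
    n = len(xs)
    return [(ppf((i + 0.5) / n), x) for i, x in enumerate(xs)]
```

With an identity `ppf`, the construction reduces to an ordinary uniform Q-Q pairing, which makes the mechanics easy to check by hand.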

The unstandardized, approximate F statistics use degrees of freedom which are

estimated from each set of data generated so that the observed F statistics cannot be

compared to a single hypothesized F distribution. This makes a one-sample

Kolmogorov-Smirnov test inappropriate. Alternately, analogs to F-plots were

constructed for these unstandardized, approximate statistics so that judgement of

conformity of the computed statistics to their expected counterparts could be made

empirically. The F-plot analogs for the unstandardized statistics used the sample mean

degrees of freedom for evaluating the inverse F function. In general, the appearance of

these plots was largely indistinguishable from the F-plots obtained from the weighted

approximate F-statistics. Hence, it appears that for the range of sample sizes and

population correlations studied, under H0, the approximations provided by the

unstandardized F-statistics are reasonably accurate. Refer to Figures 4a and b for F-plot analogs of W2 and W3 in model 1 with n = 20 and ρ = 0.6. These F-plots are

representative of the full set of plots constructed.

Further understanding of the nature of the approximations obtained when using

the unstandardized statistics may be gained by examining the average correction

factors and degrees of freedom for the various sample size and population correlations

with model 1. The correction factors are ratios of the scale factors obtained for the

numerator and denominator approximate χ² variates. The correction factors may be

viewed as multipliers on the basic F statistic formed by the ratio of the hypothesis and

error sums of squares. The average correction factors are reported for model 1 using

unconstrained and constrained covariance estimation in Tables 14a and b respectively.

The estimated degrees of freedom for model 1 using unconstrained and constrained covariance estimation are reported in Tables 15a and b, respectively.

When constrained covariance estimation is used, the average correction factor

hovers around one for the various sample size and population correlation combinations.

In contrast, when unconstrained covariance estimation is used, the average correction

factor appears to be both more variable and typically larger than one. However, from

Tables 15a and b, it is seen that the mean estimated degrees of freedom are

consistently smaller than those used for the standardized statistics, using either

covariance estimation method. Furthermore, the maximum estimated degrees of

freedom never exceed those which would be used for the corresponding standardized

statistic. Specifically, numerator degrees of freedom never exceed two and denominator

degrees of freedom never exceed np−q for W2 and L2, or np for W3 or L3. As might be

expected, both numerator and denominator degrees of freedom decrease with increasing

population correlation. Hence, the modified degrees of freedom account for an increase

in correlation.

The average numerator degrees of freedom are comparable for similar sample

size and population correlation combinations for constrained and unconstrained

variance estimation. In contrast, the average denominator degrees of freedom with

unconstrained variance estimation are substantially smaller than those with constrained

variance estimation. Thus, the error variance approximation described in Chapter 2

seems to take into account the number of covariance parameters that were estimated.

In order to better understand the effect that these correction factors and

estimated degrees of freedom have on the Type I error rates of the respective tests it is

helpful to relate them to the work of Box [1954a and b] and Geisser and Greenhouse

[1958]. When using the univariate approach to repeated measures in an ANOVA

model, one fits the model using OLS and computes the usual F test for a

"between x within" hypothesis. This statistic is compared to a critical F with


numerator and denominator degrees of freedom multiplied by what has come to be

known as the "Geisser-Greenhouse epsilon" [Geisser and Greenhouse, 1958].

Straightforward application of their suggestion to the general class of nonlinear models

here unfortunately does not permit such simplification. Specifically, the ratio of the

scale factors multiplied by the ratio of the degrees of freedom does not equal the ratio

of the "usual" degrees of freedom and therefore appears in the calculation of the

approximate F statistics described in Chapter 3 and evaluated here. For example, in

the notation of Chapter 3 for the W2 statistic, (c_u ν_u)/(c_wu ν_wu) ≠ (np−q)/s.
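For reference, the Geisser-Greenhouse epsilon itself is a simple functional of the covariance matrix among repeated measures. The sketch below (pure Python, with orthonormal Helmert contrasts) confirms the familiar fact that a compound-symmetric Σ yields ε = 1, the setting in which the univariate-approach F test needs no degree of freedom adjustment.

```python
import math

def gg_epsilon(S):
    """Box/Geisser-Greenhouse epsilon of a p x p covariance matrix S:
    eps = tr(M)^2 / ((p-1) tr(M^2)), with M = C S C' for an
    orthonormal (p-1) x p contrast matrix C."""
    p = len(S)
    # orthonormal Helmert contrasts; each row sums to zero
    C = []
    for k in range(1, p):
        norm = math.sqrt(k * (k + 1))
        C.append([1.0 / norm] * k + [-k / norm] + [0.0] * (p - k - 1))
    CS = [[sum(C[a][j] * S[j][b] for j in range(p)) for b in range(p)]
          for a in range(p - 1)]
    M = [[sum(CS[a][j] * C[b][j] for j in range(p)) for b in range(p - 1)]
         for a in range(p - 1)]
    tr = sum(M[a][a] for a in range(p - 1))
    tr2 = sum(M[a][b] * M[b][a] for a in range(p - 1) for b in range(p - 1))
    return tr * tr / ((p - 1) * tr2)

# compound symmetry: sigma^2 [(1 - rho) I + rho J] gives epsilon = 1
rho, p = 0.5, 4
S_cs = [[1.0 if i == j else rho for j in range(p)] for i in range(p)]
```

For covariance matrices departing from compound symmetry, ε falls below one, shrinking both sets of degrees of freedom in the Box-corrected F test.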

5.5 Results for Incomplete Data Simulation Study

The main objective of the incomplete data simulation study was the evaluation

of the effect of missing data on estimation and inference procedures. From the

complete data study it is clear that modelling the compound-symmetric covariance

structure was the greatest determinant of the performance of any of the test statistics.

Furthermore, it was seen that ITAWLS provided no clear advantage over AWLS with

respect to reducing the bias in estimation of the variance of parameter estimates or

controlling the Type I error rates. Hence applying the AWLS incomplete data

estimation method of Chapter 4 and proceeding with the approximate F statistics of

Chapter 3 is expected to provide reasonable results. Given the comparability of results

for models 1 and 2 of the complete data study, only model 1 was used for the

incomplete data study.

5.5.1 Evaluation of the Estimation Method

The number of converged replications for each condition of the incomplete data study is reported in Table 16. As with complete data, non-convergence occurred only

infrequently. The incomplete data (AWLS) estimation method is comparable to that


for complete data with respect to the number of ~-iterations required to achieve the

convergence criterion as seen in Table 17. Roughly three ~-iterations are necessary to

achieve the convergence criterion with sample sizes of n = 10, 20, ρ = 0.3, 0.6 and 5%

or 10% missing data.

Estimates of the elements of θ from either a full or reduced model are

consistently positively biased though by a negligible amount (see Table 18). The

average bias never exceeds 0.5% of the population value. In Table 19 it appears that

the mean estimates of the asymptotic variances for θ1 and θ2 are negatively biased by

about 10% more than similar estimates from complete data; asymptotic variance

estimates from incomplete data typically achieve 85% of the sample variance as

compared to 95% when complete data are used. However, this bias in estimation of

the variance of the parameter estimates is not consistently greater for 10% than 5%

missing data. Furthermore, this bias is relatively consistent across the sample size and

population correlations examined here.

The averages of the estimates of the variance components across varying

conditions of sample size and correlation are reported in Table 20. Comparing Table

20 to Table 9a, it may be seen that estimates of the variance components computed

from incomplete data are no more biased than those computed from complete data.

The bias is larger for ρ = 0.6 than for ρ = 0.3 and it is somewhat larger for n = 10

than for n = 20. As for the complete data estimates of the variance components, some

improvement in the bias is seen for estimates computed from the AWLS residuals as

compared to those computed from the OLS residuals. A small increase in the bias of

σ² and ρ is seen for 10% missing data as compared to 5% missing data.
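The idea behind pooling a residual covariance matrix into the two compound-symmetry parameters σ² and ρ can be sketched in a few lines. This is a simple moment-style illustration only, not the AWLS estimator of Chapter 4; the function name and residual matrix are hypothetical:

```python
# Hedged sketch: moment-style estimates of (sigma^2, rho) from a
# residual matrix whose rows are subjects and whose columns are the p
# repeated measurements.  Pools the diagonal and off-diagonal elements
# of the sample covariance matrix of the residual columns.
def variance_components(resid):
    n, p = len(resid), len(resid[0])
    means = [sum(row[j] for row in resid) / n for j in range(p)]
    # sample covariance matrix of the residual columns
    s = [[sum((row[j] - means[j]) * (row[k] - means[k]) for row in resid) / (n - 1)
          for k in range(p)] for j in range(p)]
    sigma2 = sum(s[j][j] for j in range(p)) / p          # pooled variance
    off = [s[j][k] for j in range(p) for k in range(p) if j != k]
    rho = (sum(off) / len(off)) / sigma2                 # pooled correlation
    return sigma2, rho
```

With perfectly correlated columns the pooled correlation estimate equals one, as expected under compound symmetry with ρ = 1.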


5.5.2 Evaluation of the Inference Methods

The Type I error rates for the approximate F tests described in Chapter 3,

applied to incomplete data, are reported in Table 21. The most striking finding for the

incomplete data as compared to the complete data is the consistent appearance of

overly conservative Type I error rates for the W2 and L2 statistics. Otherwise,

comparison of Table 21 to Table lOa shows that the remaining statistics computed

from incomplete data perform similarly, though with slightly higher Type I error rates,

to those for complete data.

Consistent with the slight increase in bias for the mean asymptotic variance

estimates for θ1 and θ2, the corresponding coverages of Wald-based confidence intervals

are somewhat less for incomplete data than for complete data. This may be seen by

comparing Table 22 for incomplete data to Table 12. This amounts to only a 2-4%

loss of coverage for 5-10% missing data as compared to complete data.

One-sample Kolmogorov-Smirnov tests comparing the approximate F statistics,

W1 and L1, to an hypothesized F distribution with 2 and np − q − m degrees of freedom,

where m is the number of missing data values, were conducted. The Dmax statistic, its

p-value and the corresponding observed F statistic are reported in Table 23. In

general, good agreement between observed and hypothesized distributions were found

for all conditions when a Bonferroni corrected α = 0.003 was used for the set of 18

tests. However, it is clear from the set of p-values for these tests that conformity is

poorer for the smallest sample size, n = 10. Even for cases in which

the Dmax statistic is significant, the observed F value corresponding to the deviation

from the hypothesized cumulative probability plot is not in a region near the critical

value for that test. Note that all of the critical F values are greater than three.
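The one-sample Kolmogorov-Smirnov statistic used in these comparisons, D = sup |Fₙ(x) − F(x)|, is easy to compute directly. A minimal sketch, with the hypothesized CDF passed in as a callable (an F-distribution CDF would come from a statistical library and is not reproduced here):

```python
# Sketch of the one-sample Kolmogorov-Smirnov statistic against an
# arbitrary hypothesized CDF.  For a sorted sample x_(1) <= ... <= x_(n),
# D = max_i max(i/n - F(x_(i)), F(x_(i)) - (i-1)/n).
def ks_statistic(sample, cdf):
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        d = max(d, i / n - fx, fx - (i - 1) / n)
    return d
```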

As for complete data, F-plots for the W1 and L1 statistics and analogous plots

for the W2, W3, L2 and L3 statistics were constructed and examined. For brevity only


the plots of W1, W2 and W3 with n = 20, ρ = 0.6 and 5% missing data are included as

examples since there is little variability in the appearance of plots across simulation

conditions. Refer to Figure 5 for a plot of W1 and to Figures 6a and 6b for plots of W2

and W3. In general, F-plot analogs for the unstandardized statistics showed them to

follow their hypothesized distributions reasonably well as evidenced by slopes close to

one for the observed values. The F-plot for W2 shows that this statistic follows its

hypothesized distribution less well, with a slope somewhat greater than one.

The mean correction factors for the various combinations of sample size and

correlation are reported in Table 24. These are consistently less than or equal to one.

Although this indicates that on the average they make the test statistics smaller, for

any given analysis situation they may act to make them larger as evidenced by the

maximum values which are consistently greater than one. The mean correction factors

for the W2 and L2 statistics are notably consistently smaller than for the other

statistics. This appears to explain the excessively small Type I error rates seen for these statistics in Table 21.

As for complete data, the estimated degrees of freedom reported in Table 25 are

consistently less than those for the corresponding standardized statistics. Both numerator

and denominator degrees of freedom decrease with increasing ρ. However, only the

denominator degrees of freedom decrease for 10% as compared to 5% missing data;

the numerator degrees of freedom remain essentially unchanged.


Chapter 6

AN EXAMPLE

6.1 Overview of the Study

The theory developed and evaluated in the previous chapters can be applied to

many areas of research in the biological, chemical and physical sciences. One example

is provided by a recent study of blood levels of thyroid stimulating hormone (TSH) in

humans conducted by the Department of Psychiatry at the University of North

Carolina at Chapel Hill. There were 17 subjects (7 females and 10 males) representing

three different diagnosis groups. The three diagnosis groups included five alcoholic, six

depressed and six normal subjects. Each subject was given an injection of

thyrotropin-releasing hormone (TRH) on four separate occasions, three to seven days

apart. After each TRH

injection, blood was drawn immediately and then six more times at 15 minute

intervals. Each blood sample was divided in half. One half-sample was assayed for

TSH level, in μU/ml, soon after it was taken, while the other half-sample was assayed,

after a considerable delay of months, using a new method. The average of the blood

levels of TSH across the four occasions was computed for each subject at each time

point. These mean TSH blood levels provided the analysis variable of interest. Note

that for two subjects, both females with a diagnosis label of normal, the mean blood

levels of TSH at the last two time points are missing. It will be assumed that these

measurements are missing completely at random.

One objective for analysis of these data concerned comparison of the new assay

method to the old one. The new method of performing TSH assays is replacing the old

one, which will no longer be available. In particular, formulae for converting old TSH


results into new ones are being sought. In this regard, it was also of interest to

determine whether different formulae should be used for males and females, or for

subjects with different diagnoses. These latter objectives, involving comparisons

between the response curves for males vs. females and among the response curves for

the three diagnosis groups, will be addressed here using results obtained from the old

assay method. The analyses conducted here are strictly for the purpose of illustrating

the methods developed in this research. They are not meant to provide a

comprehensive analysis for these data.

6.2 Model Selection

Preliminary scatterplots of the data confirmed the compatibility of the response

curves with a simultaneous uptake and elimination pharmacokinetic model, i.e. roughly

the form of a concave (inverted) parabola. In constructing a plausible model, the nature of the

errors was considered with respect to whether they were multiplicative or additive and

with respect to their homogeneity across time. In order to simultaneously address

multiplicative errors and variance heterogeneity for these data, the natural logs of both

sides of the proposed model were taken.

The logged simultaneous uptake and elimination model appears as

(6.1)

in which y_ij is the mean blood level of TSH for the j-th time point from the i-th

subject, t_j is the j-th time point and e_ij is the random error associated with the j-th

repeated measurement from the i-th subject. For these data i ∈ {1, 2, …, 17} and

j ∈ {0, 1, …, 6}. In model (6.1), θ1 is essentially an "intercept" parameter in the sense

that it represents the mean TSH blood level at the time of TRH injection and θ2


elimination of this hormone from the bloodstream. Finally, θ3 is a rate parameter

characterizing both the uptake and the elimination of TSH in the bloodstream. A

more general model would have included separate rate parameters for the uptake and

elimination of TSH in the blood. Attempts to fit the more general model repeatedly

failed indicating that the data do not support different rate parameters. In particular,

examination of the parameter estimates at each iteration showed that estimation of the

two rate parameters, but not the intercept or asymptote parameters, was unstable.

Hence, model (6.1) was adopted for the following analyses.

In order to verify the appropriateness of the expectation function for these

data, and compute an estimate of the residual covariance matrix to be examined with

respect to an assumption of compound-symmetry, model (6.1) was fitted to these data

using OLS. Using OLS for this preliminary analysis made it possible to ignore the

missing data and permitted computations to be done in readily available software,

SAS's PROC NLIN. At this point in the data analysis, separate parameters were not

sought for the various membership groups, which simplified finding starting values for

the parameters. The starting value for θ1 was easily obtained as the mean TSH blood

level at t0 = 0, the time of injection with TRH. It is clear that θ2 must be larger than

the largest TSH level observed since it is an asymptote parameter. However, a good

starting value for θ3 was not obvious. A grid search using the following values was

conducted: θ1 = 4, θ2 = 20, 30, 40 and θ3 = 0.001, 0.01, 0.1, 1. Using the best

starting parameter set from the grid search, the algorithm rapidly converged to the

values shown in Table 26.
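A grid search of this kind is easy to sketch. The exponential-approach expectation function and the data below are hypothetical stand-ins for illustration (model (6.1) itself is not reproduced here), while the candidate grids mirror the values given in the text:

```python
import itertools
import math

def sse(theta, data, model):
    """Residual sum of squares for a candidate parameter vector."""
    return sum((y - model(t, theta)) ** 2 for t, y in data)

def grid_search(grids, data, model):
    """Return the grid point with the smallest residual sum of squares."""
    return min(itertools.product(*grids), key=lambda th: sse(th, data, model))

# Hypothetical stand-ins for the expectation function and the data:
model = lambda t, th: th[0] + th[1] * (1.0 - math.exp(-th[2] * t))
data = [(0.0, 4.0), (15.0, 10.0), (30.0, 14.0)]
grids = ([4.0], [20.0, 30.0, 40.0], [0.001, 0.01, 0.1, 1.0])
start = grid_search(grids, data, model)
```

The best grid point would then be handed to the nonlinear least-squares routine as its starting value.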

Model fit was evaluated in several ways. The asymptotic correlation matrix for

the parameter estimates also reported in Table 26 shows that there is moderate

correlation among the parameter estimates. The fact that no excessively large

correlations appear indicates the model is neither over-parameterized nor poorly


parameterized. The appropriateness of the expectation function was also substantiated

by a plot of the predicted values superimposed on the observed responses. This plot is

omitted for brevity. A plot of a final descriptive model for these data will be provided

later. Model fit was also assessed by plotting the residuals from the logged model (6.1)

as a function of time. These revealed generally random behavior with homogeneity of

variance across time.

The covariance and correlation matrices of residuals were computed separately,

for the females and males, and examined with respect to the compound-symmetry

assumption. Noting that the two gender groups each involve small samples, inspection

of the respective covariance matrices revealed them to be reasonably consistent with

each other and with the assumption of compound-symmetry. The variance of the

repeated blood measurements ranged from 0.11 to 0.22 for the females and 0.08 to 0.12

for the males. The correlations among repeated measurements ranged from 0.59 to

nearly 1 for the females and from 0.82 to 0.98 for the males. The median correlations

were 0.96 for the females and 0.94 for the males. Formal testing of the consistency of

these covariance matrices to each other and to one of compound-symmetry were not

undertaken because such tests tend to be too sensitive to departures from normality

[Morrison, §7.4]. Similar evaluation of the covariance and correlation matrices

computed separately for the alcoholic, depressed and normal subjects was done with

similar results found.

A normal probability plot of the residuals (omitted for brevity) showed that

their distribution is consistent with that of a normal random variable, substantiating

an assumption of Gaussian errors. Recall that the Gaussian error assumption is

necessary to ensure validity of the a priori orthonormal model transformation, which is

integral to the missing data estimation method.

It was surprising to find that the various sub-group covariance matrices more


closely followed a pattern of compound symmetry than an autoregressive pattern.

The latter pattern is more typical of observations made across time.

However, the correlations within a particular correlation matrix were relatively uniform

and, in this case, consistently high.

Armed with the OLS parameter estimates as starting values for the missing

data AWLS estimation method of Chapter 4 and the knowledge that the model

assumptions are plausible, subsequent analyses were accomplished using SAS code

written in PROC IML (written by the author and available upon request).

6.3 Research Hypotheses

Although it would have been desirable to evaluate the interaction of diagnosis

group by gender, this was not practical due to the small frequency of observations

within each cell of such a design, given only 17 subjects. Therefore, the example

analysis of the TSH response was directed at the following two "main effect"

hypotheses.

(H1) H01: response curves for males and females coincide

vs.

Ha1: response curves for males and females do not coincide

and

(H2) H02: response curves for the three diagnosis groups coincide

vs.

Ha2: response curves for the three diagnosis groups do not coincide.

A more general form of model (6.1) may be written in order to incorporate different

"between-subjects" groups as follows


(6.2)

in which g ∈ {f, m} indicating females and males for H1 and g ∈ {a, d, n} indicating

alcoholic, depressed and normal groups for H2. Hence the full model parameter set

corresponding to H1 is θ′ = (θ1f, θ2f, θ3f, θ1m, θ2m, θ3m)′ and to H2 is

θ′ = (θ1a, θ2a, θ3a, θ1d, θ2d, θ3d, θ1n, θ2n, θ3n)′. The null hypothesis for H1 may be

written analytically as

h1(θ) = (θ1f − θ1m, θ2f − θ2m, θ3f − θ3m)′ = 0

and the null hypothesis for H2 may be written analytically as

h2(θ) = (θ1a − θ1n, θ2a − θ2n, θ3a − θ3n, θ1d − θ1n, θ2d − θ2n, θ3d − θ3n)′ = 0.

A Bonferroni corrected nominal α = 0.05/2 = 0.025 will be used to evaluate the

significance of these two tests.
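In code, these coincidence hypotheses amount to constraint functions that vanish under the null. A sketch for H2, using the parameter ordering given above (the function name is illustrative):

```python
# Constraint function for H02: the three diagnosis-group curves coincide.
# theta is ordered (th1a, th2a, th3a, th1d, th2d, th3d, th1n, th2n, th3n).
def h2(theta):
    a, d, n = theta[0:3], theta[3:6], theta[6:9]
    # each alcoholic and depressed parameter minus its normal-group counterpart
    return [a[i] - n[i] for i in range(3)] + [d[i] - n[i] for i in range(3)]

# Under H02 all three groups share the same parameters, so h2 vanishes:
theta_null = [1.0, 2.0, 3.0] * 3

# Bonferroni-corrected level for the set of two tests:
alpha = 0.05 / 2
```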

6.4 Results and Conclusions

The full and reduced models related to testing HI were fitted using the AWLS

estimation method for missing data described in Chapter 4. This was done using code

written by the author in SAS's PROC IML. The converged full model estimates, their

asymptotic standard errors and asymptotic 95% confidence intervals appear in Table

27. In general, it is not recommended that multiple tests directed at the same

hypothesis be used. For the purposes of illustration of the new statistics, the full set of

six weighted and unweighted approximate F statistics described in Chapter 3, as well


as their degrees of freedom and p-values for testing HI, are presented in Table 28.

Tests based on W1 and L1 reject H01 while those based on W2, W3, L2 and L3

accept H01. Examination of the full model parameter estimates indicates that of the

three pairs of parameters involved in a test of coincidence, only the pair of asymptote

parameters appears to differ substantially, 53.89 for females vs. 41.14 for males.

However the asymptotic 95% confidence intervals for these asymptote estimates

overlap to a large extent, despite the fact that these may be too narrow given the small

sample size from which they were computed. Pairwise comparison of confidence

intervals in this fashion corresponds loosely to conducting Wald based stepdown tests

of the equality of pairs of parameter estimates.
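The pairwise comparison of intervals described here can be sketched directly. The estimates 53.89 and 41.14 come from the text, while the standard errors below are hypothetical placeholders, not the Table 27 values:

```python
# Sketch of asymptotic Wald intervals, theta_hat +/- z * SE, and an
# informal overlap check.  z = 1.96 gives asymptotic 95% intervals.
def wald_ci(est, se, z=1.96):
    return est - z * se, est + z * se

def intervals_overlap(ci1, ci2):
    return ci1[0] <= ci2[1] and ci2[0] <= ci1[1]

ci_f = wald_ci(53.89, 8.0)   # asymptote estimate, females; SE hypothetical
ci_m = wald_ci(41.14, 7.0)   # asymptote estimate, males; SE hypothetical
```

As in the text, overlapping intervals correspond loosely to a non-significant Wald-based stepdown comparison of the pair.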

The full and reduced models related to testing H2 were fitted as before, using

the AWLS method for missing data from Chapter 4. The full model parameter

estimates, their asymptotic standard errors and asymptotic 95% confidence intervals

are reported in Table 29. The full set of approximate F statistics, degrees of freedom

and p-values for testing H2 are reported in Table 30. For this test, all six statistics

reject H02. Furthermore, examination of the parameter estimates and their asymptotic

95% confidence intervals substantiates this finding.

Refer to Figure 7 for a plot of the data superimposed on prediction curves for

the three diagnosis groups, in the original scale of the data. A plot of the residuals as a

function of time revealed them to be randomly dispersed with homogeneous variance

across time. No remarkable outliers were found. A normal probability plot showed

that the distribution of residuals is consistent with that of a normal random variable,

substantiating the assumption of Gaussian errors. The asymptotic correlation matrix

among the parameter estimates is reported in Table 31. It appears that the fully

parameterized model for the three diagnosis groups fits the data very well. There is a

clear ordering of responses among the three groups. The alcoholic group exhibits a


very diminished TSH response curve as compared to the group of normals, with the

depressed group giving a response which is intermediate between the alcoholics and

normals.

Note that an alternative procedure for evaluating the above two hypotheses

could have been done using linear model methods available in standard software

packages. Quadratic polynomials could have been fitted to the various

"between-subjects" groups and tests of coincidence conducted. There are two potential

disadvantages to such an analysis. First, the coefficients from a quadratic model are

not particularly meaningful, while the nonlinear model parameters for a one-

compartment pharmacokinetic model are scientifically interesting. Second, many

widely used statistical software packages use listwise deletion of observations with

missing data. Such an analysis would be less powerful than the analysis presented here

by virtue of not using all of the available data. Note, however, that a careful choice of

software would permit alternate strategies for handling the missing data. In

conclusion, the example analysis of the TSH data clearly demonstrates the applicability

of the methods of Chapters 2, 3 and 4 to a real situation.


Chapter 7

SUMMARY AND SUGGESTIONS FOR FURTHER RESEARCH

7.1 Summary

The goal of this research was to provide accurate analysis methods for

nonlinear regression models with compound-symmetric error covariance for each

experimental unit and small samples of data. Two approaches were undertaken

simultaneously. First, improvement of existing estimation methods was sought by

incorporating the compound-symmetric covariance structure into the model. By

reducing the number of covariance parameters to be estimated, variability in

estimation of the covariance matrix among parameter estimates is reduced. In turn,

this reduced the negative bias in the standard errors of the parameter estimates. This

is especially relevant for small samples, where it is often not reasonable to expect

reliable estimation of a relatively large number of covariance parameters. Hence, when

the compound-symmetry assumption is tenable, estimation methods which incorporate

the covariance structure into the model are recommended. Second, modifications of

existing F approximations for Wald and likelihood ratio based statistics were sought.

The modifications used applications of Box's [1954a] results for characterizing

approximate χ² variates. The scale and degrees-of-freedom estimates supplied an

alternate way of incorporating the covariance among repeated measurements into the

computation of the approximate F statistics.
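The Box [1954a] moment matching referred to here can be sketched generically: a weighted sum of independent χ²(1) variates, Σᵢ λᵢXᵢ, is approximated by c·χ²(ν), with the scale c and degrees of freedom ν chosen so the first two moments agree. This is a generic illustration of the device, not the W or L statistics themselves:

```python
# Match the first two moments of sum_i lambda_i * chi2(1) with c * chi2(nu):
#   mean:     c * nu       = sum(lambda_i)
#   variance: 2 * c^2 * nu = 2 * sum(lambda_i^2)
def box_scale_df(lams):
    s1 = sum(lams)
    s2 = sum(l * l for l in lams)
    c = s2 / s1           # scale factor
    nu = s1 * s1 / s2     # approximate degrees of freedom
    return c, nu

# Equal weights recover the exact chi-square: scale 1, df = number of terms.
c, nu = box_scale_df([1.0, 1.0, 1.0])
```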

It was clearly demonstrated in the complete-data simulation studies that using

an estimation method which incorporates the compound-symmetric covariance

structure into the model can provide a dramatic improvement over methods which


ignore this structure. In particular, test statistics, computed from parameter estimates

obtained by using an estimation method which incorporates the covariance structure

into the model, exhibited only mildly inflated Type I error rates as compared to rates

which are three to four times the nominal α = 0.05 level, when the covariance

structure is misspecified. Similarly, Wald based confidence intervals provide reasonably

accurate coverage for a correctly specified model, even in very small samples. The

performance of the modified, unstandardized statistics was comparable to that of the

standardized statistics when the model was correctly specified. However, under

misspecification, the unstandardized statistics provided somewhat lower, though still

inflated, Type I error rates as compared to the standardized statistics. One-sample

Kolmogorov-Smirnov tests of the standardized statistics showed that these follow their

hypothesized F distributions in small samples when the covariance structure is

correctly specified. F-plots of the unstandardized statistics showed them to be

consistent with their hypothesized distributions.

The approximate equivalence of AWLS and ITAWLS (ML under correct model

specification) appears to hold even for samples as small as n = 10. Under a variety of

sample sizes and population correlations the non-iterated AWLS estimation method

appeared to perform at least as well as, and under model misspecification better than,

the iterated method. With regard to the application of these methods with small

samples, it appears that little is to be gained by using the iterated estimation method.

The incomplete-data estimation method, again by virtue of addressing the

compound-symmetric covariance structure, offers a statistically valid analysis when the

data are missing completely at random. This estimation method requires only slightly

more computational effort than the complete-data methods developed for

compound-symmetric covariance. Most importantly, with small samples and 5-10% missing data,

inference procedures applied in conjunction with the new estimation method appear to


work nearly as well as complete data methods.

The major recommendation to come out of this effort regards the careful design

of experiments involving small samples and response functions which are known to be

nonlinear. If at all possible, it is desirable to plan such experiments in a way which

makes the compound-symmetric error covariance structure plausible.

This clearly affords the opportunity to conduct a more accurate analysis than one

based on a general covariance structure. For example, compound symmetry may be

induced in the error structure by using counter-balanced stimulus or treatment

presentation within experimental units. Occasionally, the compound symmetry

assumption may be appropriate for repeated measures collected over time, although it

is likely that some other error structure may be more correct.

7.2 Suggestions for Further Research

First, the robustness of the estimation methods to violations of the compound

symmetry assumption should be evaluated. Similarly, evaluation of the robustness of

this method to violations of the normality assumption is recommended. Recall that

the normality assumption is a necessary condition for the orthonormal model

transformation to produce the convenient results described in §2.2.

Second, extensions of the constrained covariance estimation methods discussed

here to include other interesting covariance structures such as autoregressive or moving

average structures would greatly broaden the scope of applicability. Furthermore,

these alternate structures address situations which are simply inaccessible given the

compound-symmetry assumption. In particular, these include many types of

longitudinal data.

Third, given the equivalence in Type I error rates of the unstandardized and

standardized approximate F statistics, it would be most interesting to compare these


with respect to power. The derivation of the unstandardized statistics was directed at

Type I error control. However, a similar approach with specific types of linear models

is well known to produce more powerful test statistics. Hence, evaluation of the

performance of the unstandardized statistics with respect to this second criterion would

be most valuable.

Fourth, multivariate measures of curvature might provide a useful tool both for

statisticians attempting to better understand nonlinear multivariate models and for

researchers who routinely encounter and analyze such data. The effect of the covariance

among repeated measurements on the nonlinear response surface is not well

understood. Analytic measures of this phenomenon, generalizing those put forth by

Bates and Watts [1980] for univariate situations, might be similarly enlightening.


BIBLIOGRAPHY

Allen, D.M. (1967). Multivariate analysis of nonlinear models. Ph.D. Dissertation, University of North Carolina at Chapel Hill.

Allen, D.M. (1983). Parameter estimation for nonlinear models with emphasis on compartmental models. Biometrics, 39, 629-637.

Andrade, D.F. and Helms, R.W. (1984). Maximum likelihood estimates in the multivariate normal with patterned mean and covariance via the EM algorithm. Communications in Statistics, 13(18), 2239-2251.

Arnold, S.F. (1981). The Theory of Linear Models and Multivariate Analysis. New York, New York: John Wiley & Sons.

Bard, Y. (1974). Nonlinear Parameter Estimation. New York: Academic Press.

Barnett, W.A. (1976). Maximum likelihood and iterated Aitken estimation of nonlinear systems of equations. Journal of the American Statistical Association, 71, 354-360.

Barton, C.N. (1986). Hypothesis testing in multivariate linear models with randomly missing data. Ph.D. Dissertation, University of North Carolina at Chapel Hill.

Bates, D.M. and Watts, D.G. (1980). Relative curvature measures of nonlinearity (with discussion). Journal of the Royal Statistical Society, B 42, 1-25.

Bates, D.M. and Watts, D.G. (1988). Applied Nonlinear Regression. New York: John Wiley and Sons.

Beale, E.M.L. and Little, R.J.A. (1975). Missing values in multivariate analysis. Journal of the Royal Statistical Society, B 37, 129-145.

Benignus, V.A., Muller, K.E., Barton, C.N. and Bittikofer, J.A. (1981). Toluene levels in blood and brain of rats during and after respiratory exposure. Toxicology and Applied Pharmacology, 61, 326-334.

Berk, K. (1987). Computing for incomplete repeated measures. Biometrics, 43, 385-398.

Berkey, C.S. and Laird, N.M. (1986). Nonlinear growth curve analysis: estimating the population parameters. Annals of Human Biology, 13, 111-128.

Bickel, P.J. and Doksum, K.A. (1977). Mathematical Statistics. Oakland, CA: Holden-Day, Inc.


Binkley, J.K. (1982). The effect of variable correlation on the efficiency of seemingly unrelated regression in a two-equation model. Journal of the American Statistical Association, 77, 890-894.

Binkley, J.K. and Nelson, C.H. (1988). A note on the efficiency of seemingly unrelated regression. American Statistician, 42, 137-139.

Bock, R.D., Wainer, H., Petersen, A., Thissen, D., Murray, J. and Roche, A. (1973). A parameterization for individual human growth curves. Human Biology, 45(1), 63-80.

Box, G.E.P. (1954a). Some theorems on quadratic forms applied in the study of analysis of variance problems, I. Effect of inequality of variance in the one-way classification. Annals of Mathematical Statistics, 25, 290-302.

Box, G.E.P. (1954b). Some theorems on quadratic forms applied in the study of analysis of variance problems, II. Effects of inequality of variance and correlation between errors in the two-way classification. Annals of Mathematical Statistics, 25, 484-498.

Box, G.E.P. and Tiao, G.C. (1973). Bayesian Inference in Statistical Analysis. Reading, MA: Addison-Wesley.

Box, M.J. (1971). Bias in nonlinear estimation. Journal of the Royal Statistical Society, B 33, 171-201.

Carroll, R.J. and Ruppert, D. (1985). A note on the effect of estimating weights in weighted least squares. Institute of Statistics Mimeo Series #1570, Chapel Hill, North Carolina.

Charnes, A., Frome, E.L. and Yu, P.L. (1976). The equivalence of generalized least squares and maximum likelihood estimates in the exponential family. Journal of the American Statistical Association, 71, 169-171.

Christensen, R. (1984). A note on ordinary least squares methods for two-stage sampling. Journal of the American Statistical Association, 79, 720-721.

Clarke, G.P.Y. (1987). Marginal curvatures and their usefulness in the analysis of nonlinear regression models. Journal of the American Statistical Association, 82, 844-850.

Cook, R.D. and Goldberg, M.L. (1986). Curvature for parameter subsets in nonlinear regression. The Annals of Statistics, 14, 1399-1418.

Cook, R.D., Tsai, C.L. and Wei, B.C. (1986). Bias in nonlinear regression. Biometrika, 73(3), 615-623.

Cook, R.D. and Witmer, J.A. (1985). A note on parameter effects curvature. Journal of the American Statistical Association, 80(392), 872-877.


Corbeil, R.R. and Searle, S.R. (1976). Restricted maximum likelihood (REML) estimation of variance components in the mixed model. Technometrics, 18(1), 31-38.

Cox, D.R. (1984). Effective degrees of freedom and the likelihood ratio test. Biometrika, 71(3), 487-493.

Danford, M.B., Hughes, H.M. and McNee, R.C. (1960). On the analysis of repeated-measurements experiments. Biometrics, 16, 547-565.

Davidian, M. and Carroll, R.J. (1987). Variance function estimation. Journal of the American Statistical Association, 82, 1079-1091.

Davidson, M.L. (1972). Univariate versus multivariate tests in repeated-measures experiments. Psychological Bulletin, 77, 446-452.

Davies, R.B. (1980). Algorithm AS 155. The distribution of a linear combination of chi-squared random variables. Applied Statistics, 29, 323-333.

De Bruijn, N.G. (1981). Asymptotic Methods in Analysis. New York: Dover.

Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B(39), 1-38.

Donaldson, J.R., and Schnabel, R.B. (1987). Computational experience with confidence regions and confidence intervals for nonlinear least squares. Technometrics, 29(1), 67-82.

Donner, A. and Koval, J.J. (1980a). The estimation of intraclass correlation in the analysis of family data. Biometrics, 36, 19-25.

Donner, A. and Koval, J.J. (1980b). The large sample variance of an intraclass correlation. Biometrika, 67(3), 719-722.

Donner, A. and Wells, G. (1986). A comparison of confidence interval methods for the intraclass correlation coefficient. Biometrics, 42, 401-412.

Draper, N. and Smith, H. (1981). Applied Regression Analysis, Second Edition. New York: John Wiley and Sons.

Elashoff, J.D. (1986). Analysis of repeated measures designs. BMDP Technical Report #83.

Fisher, R.A. (1950). Statistical Methods for Research Workers. New York: Hafner.

Freedman, D.A. and Peters, S.C. (1984). Bootstrapping a regression equation: Some empirical results. Journal of the American Statistical Association, 79, 97-106.

Gallant, A.R. (1975a). Nonlinear regression. The American Statistician, 29(2), 73-81.


Gallant, A.R. (1975b). The power of the likelihood ratio test of location in nonlinear regression models. Journal of the American Statistical Association, 70, 198-203.

Gallant, A.R. (1975c). Testing a subset of the parameters of a nonlinear regression model. Journal of the American Statistical Association, 70, 927-932.

Gallant, A.R. (1975d). Seemingly unrelated nonlinear regressions. Journal of Econometrics, 3, 35-50.

Gallant, A.R. (1976). Nonlinear regression with autocorrelated errors. Journal of the American Statistical Association, 71, 961-967.

Gallant, A.R. (1979). A note on the interpretation of polynomial regressions. Institute of Statistics Mimeo Series, No. 1245, North Carolina State University.

Gallant, A.R. (1982). On unification of the asymptotic theory of nonlinear econometric models. Econometric Reviews, 1(2), 151-190.

Gallant, A.R. (1987). Nonlinear Statistical Models. New York, New York: John Wiley & Sons.

Gallant, A.R. and Goebel, J.J. (1976). Nonlinear regression with autocorrelated errors. Journal of the American Statistical Association, 71, 961-967.

Gallant, A.R. and Holly, A. (1980). Statistical inference in an implicit, nonlinear, simultaneous equations model in the context of maximum likelihood estimation. Econometrica, 48, 697-720.

Geisser, S. and Greenhouse, S.W. (1958). An extension of Box's results on the use of the F-distribution in multivariate analysis. Annals of Mathematical Statistics, 29, 885-891.

Gennings, C., Chinchilli, V.M. and Carter, Jr., W.H. (1989). Response surface analysis with correlated data: a nonlinear model approach. Journal of the American Statistical Association, 84, 805-809.

Giesbrecht, F.G. and Burns, J.C. (1985). Two-stage analysis based on a mixed model: Large-sample asymptotic theory and small-sample simulation results. Biometrics, 41, 477-486.

Gill, P.E., Murray, W. and Wright, M.H. (1981). Practical Optimization. NY: Academic Press.

Glasbey, C.A. (1979). Correlated residuals in non-linear regression applied to growth data. Applied Statistics, 28(3), 251-259.

Gong, G. and Samaniego, F.J. (1981). Pseudo maximum likelihood estimation: theory and application. The Annals of Statistics, 9, 861-869.


Greenhouse, S.W. and Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24(2), 95-112.

Hafner, K.B. (1988). Analysis of nonlinear regression models with compound symmetric covariance structures. Ph.D. Dissertation, University of North Carolina.

Hamilton, D.C., Watts, D.G. and Bates, D.M. (1982). Accounting for intrinsic nonlinearity in nonlinear regression parameter inference regions. The Annals of Statistics, 10, 386-393.

Hartley, H.O. (1961). The modified Gauss-Newton method for fitting of non-linear regression functions by least squares. Technometrics, 3(2), 269-280.

Hartley, H.O. and Booker, A. (1965). Nonlinear least squares estimation. Annals of Mathematical Statistics, 36, 638-650.

Harville, D.A. (1977). Maximum likelihood approaches to variance component estimation and to related problems. Journal of the American Statistical Association, 72, 320-340.

Hocking, R.R. (1985). The Analysis of Linear Models. Monterey, California: Brooks/Cole Publishing Company.

Huynh, H. and Feldt, L.S. (1970). Conditions under which mean square ratios in repeated measurements designs have exact F-distributions. Journal of the American Statistical Association, 65, 1582-1589.

Jennrich, R.I. (1969). Asymptotic properties of non-linear least squares estimators. The Annals of Mathematical Statistics, 40(2), 633-643.

Jennrich, R.I. and Ralston, M.L. (1978). Fitting nonlinear models to data. BMDP Technical Report #46.

Jennrich, R.I. and Sampson, P.F. (1976). Newton-Raphson and related algorithms for maximum likelihood variance component estimation. Technometrics, 18, 11-17.

Jennrich, R.I. and Schluchter, M.D. (1986). Unbalanced repeated-measures models with structured covariance matrices. Biometrics, 42, 805-820.

Johansen, S. (1984). Functional Relations, Random Coefficients, and Nonlinear Regression with Application to Kinetic Data. New York, New York: Springer-Verlag.

Johnson, N.L. and Kotz, S. (1970). Distributions in Statistics: Continuous Univariate Distributions - 2. New York: John Wiley and Sons.

Johnson, P. and Milliken, G.A. (1983). A simple procedure for testing linear hypotheses about the parameters of a nonlinear model using weighted least squares. Communications in Statistics, 12(2), 135-145.


Jorgensen, B. (1983). Maximum likelihood estimation and large-sample inference for generalized linear and nonlinear regression models. Biometrika, 70(1), 19-28.

Kendall, M.G. and Stuart, A. (1970). The Advanced Theory of Statistics, Vol. 1: Distribution Theory; Vol. 2: Inference and Relationship. London: Charles Griffin and Co. Ltd.

Kennedy, W.J. and Gentle, J.E. (1980). Statistical Computing. NY: Marcel Dekker, Inc.

Keselman, H.J., Rogan, J.C., Mendoza, J.L., and Breen, L.J. (1980). Testing the validity conditions of repeated measures F tests. Psychological Bulletin, 87, 479-481.

Kirk, R.E. (1982). Experimental Design. Monterey, California: Brooks/Cole Publishing Co.

Kleinbaum, D.G. (1973). Testing linear hypotheses in generalized multivariate linear models. Communications in Statistics, 1(5), 433-457.

Kmenta, J. and Gilbert, R.F. (1968). Small sample properties of alternative estimators of seemingly unrelated regressions. Journal of the American Statistical Association, 63, 1181-1200.

Kshirsagar, A.M. (1983). Multivariate Analysis. New York, New York: Marcel Dekker, Inc.

Laird, N.M. and Ware, J.H. (1982). Random-effects models for longitudinal data. Biometrics, 38, 963-974.

LaVange, L.M. and Helms, R.W. (1983). The analysis of incomplete longitudinal data with modeled covariance structures. Institute of Statistics Mimeo Series No. 1449, University of North Carolina.

Lee, J.C. (1988). Prediction and estimation of growth curves with special covariance structures. Journal of the American Statistical Association, 83, 432-440.

Lee, S.Y. (1979). Constrained estimation in covariance structure analysis. Biometrika, 66, 539-545.

Lightner, J.M. and O'Brien, R.G. (1984). The MDM model for repeated measures designs with repeated covariates. Proc. of the American Statistical Association, 126-131.

Lindstrom, M.J. and Bates, D.M. (1988). Nonlinear mixed effects models for repeated measures data. Technical Report #48, University of Wisconsin Clinical Cancer Center.

Little, R.J.A. and Rubin, D.B. (1987). Statistical Analysis with Missing Data. NY: John Wiley and Sons.


Looney, S.W. (1986). A comparison of estimators of a common correlation coefficient. Communications in Statistics - Simulation, 15(2), 531-543.

Malinvaud, E. (1970). The consistency of nonlinear regressions. The Annals of Mathematical Statistics, 41(3), 956-969.

Malott, C.M. (1985). An approximate weighted least squares method for repeated measurements in nonlinear models. Masters paper, University of North Carolina.

Marquardt, D.W. (1963). An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2), 431-441.

Maxwell, S.E. and Bray, J.H. (1986). Robustness of the quasi F statistic to violations of sphericity. Psychological Bulletin, 99(3), 416-421.

Miller, Jr., R.G. (1981). Survival Analysis. New York: John Wiley and Sons.

Milliken, G.A. and DeBruin, R.L. (1978). A procedure to test hypotheses for nonlinear models. Communications in Statistics, A7(1), 65-79.

Morrison, D.F. (1971). Expectations and variances of maximum likelihood estimates of the multivariate normal distribution parameters with missing data. Journal of the American Statistical Association, 66, 602-604.

Morrison, D.F. and Bhoj, D.S. (1973). Power of the likelihood ratio test on the mean vector of the multivariate normal distribution with missing observations. Biometrika, 60(2), 365-368.

Morrison, D.F. (1976). Multivariate Statistical Methods (2nd ed.). New York, New York: John Wiley & Sons.

Muller, K.E. and Barton, C.N. (1989). Approximate power for repeated measures ANOVA lacking sphericity. Journal of the American Statistical Association, 84, 549-555.

Muller, K.E. and Helms, R.H. (1984). Repeated measures in nonlinear models. Unpublished manuscript, University of North Carolina.

Muller, K.E. and Malott, C.M. (1988). Repeated measures in nonlinear models: exploiting compound symmetry via approximate weighted least squares. Paper in review.

Olkin, I. and Pratt, J.W. (1958). Unbiased estimation of certain correlation coefficients. Annals of Mathematical Statistics, 29, 201-211.

Orchard, T. and Woodbury, M.A. (1972). A missing information principle: Theory and applications. Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, 697-715.


Racine-Poon, A. (1985). A Bayesian approach to nonlinear random effects models. Biometrics, 41, 1015-1023.

Rao, B.L.S. Prakasa. (1984). The rate of convergence of the least squares estimator in a non-linear regression model with dependent errors. Journal of Multivariate Analysis, 14, 315-322.

Ratkowsky, D.A. (1983). Nonlinear Regression Modeling: A Unified Practical Approach. New York, New York: Marcel Dekker, Inc.

Rawlings, J.O. (1988). Applied Regression Analysis: A Research Tool. Belmont, CA: Wadsworth, Inc.

Rubin, D.B. (1976). Inference and missing data. Biometrika, 63, 581-592.

Sandland, R.L. and McGilchrist, C.A. (1979). Stochastic growth curve analysis. Biometrics, 35, 255-271.

Satterthwaite, F.E. (1941). Synthesis of variance. Psychometrika, 6(5), 309-316.

Satterthwaite, F.E. (1946). An approximate distribution of estimates of variance components. Biometrics, 2, 110-114.

Schaff, D.A., Milliken, G.A. and Clayberg. (1988). A method for analyzing nonlinear models when the data are from a split-plot or repeated measures design. Biom. J., 2, 139-146.

Sheiner, L.B. and Beal, S.L. (1980). Evaluation of methods for estimating population pharmacokinetic parameters. I. Michaelis-Menten model: routine clinical pharmacokinetic data. Journal of Pharmacokinetics and Biopharmaceutics, 8(6), 553-571.

Schwertman, N.C. (1978). A note on the Geisser-Greenhouse correction for incomplete data split-plot analysis. Journal of the American Statistical Association, 73, 393-396.

Schwertman, N.C., Flynn, W., Stein, S., and Schenk, K.L. (1985). A Monte Carlo study of alternative procedures for testing the hypothesis of parallelism for complete and incomplete growth curve data. J. Statist. Comput. Simul., 21, 1-37.

Scott, A.J. and Holt, D. (1982). The effect of two-stage sampling on ordinary least squares methods. Journal of the American Statistical Association, 77, 848-854.

Searle, S.R. (1971). Linear Models. New York, New York: John Wiley & Sons.

Searle, S.R. (1982). Matrix Algebra Useful for Statistics. New York, New York: John Wiley and Sons.

Seber, G.A.F. and Wild, C.J. (1989). Nonlinear Regression. New York: John Wiley and Sons.


Serfling, R. (1980). Approximation Theorems of Mathematical Statistics. NY: John Wiley and Sons.

Seth, A.K. and Mazumdar, S. (1986). Estimation of parameters of a polynomial model under intraclass correlation structure for incomplete longitudinal data. Communications in Statistics, 15(5), 1549-1559.

Shoukri, M.M. and Ward, R.H. (1984). On the estimation of the intraclass correlation. Communications in Statistics, 13(10), 1239-1255.

Srivastava, J.N. (1966). Some generalizations of multivariate analysis of variance. In Multivariate Analysis, ed. P.R. Krishnaiah. NY: Academic Press.

Srivastava, V.K. and Dwivedi, T.D. (1979). Estimation of seemingly unrelated regression equations. Journal of Econometrics, 10, 15-32.

Srivastava, V.K. and Giles, D.E.A. (1987). Seemingly Unrelated Regression Equations Models. NY: Marcel Dekker, Inc.

Swallow, W.H. and Monahan, J.F. (1984). Monte Carlo comparison of ANOVA, MIVQUE, REML, and ML estimators of variance components. Technometrics, 26(1), 47-57.

Szatrowski, T.H. (1980). Necessary and sufficient conditions for explicit solutions in the multivariate normal estimation problem for patterned means and covariances. The Annals of Statistics, 8(4), 802-810.

Velu, R. and McInerney, M. (1985). A note on statistical methods adjusting for intraclass correlation. Biometrics, 41, 533-538.

Ware, J.H. (1985). Linear models for the analysis of longitudinal studies. The American Statistician, 39(2), 95-101.

Williams, P.L. (1984). Estimation and testing accuracy of a method for nonlinear models with repeated measures. Unpublished Masters paper, University of North Carolina at Chapel Hill.

Winer, B.J. (1971). Statistical Principles in Experimental Design (2nd ed.). New York, New York: McGraw-Hill Book Company.

Wiorkowski, J.J. (1975). Unbalanced regression analysis with residuals having a covariance structure of intra-class form. Biometrics, 31, 611-618.

Wu, C.F.J., Holt, D. and Holmes, D.J. (1988). The effect of two-stage sampling on the F statistic. Journal of the American Statistical Association, 83, 150-159.

Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association, 57, 348-368.


Zellner, A. (1963). Estimators for seemingly unrelated regression equations: Some exact finite sample results. Journal of the American Statistical Association, 58, 977-992.


Table 1. Least squares terminology

Procedure   Properties of β̂                                MLE under normality

OLS         BLUE; asymptotically consistent and             yes
            normally distributed

EWLS        asymptotically unbiased, efficient,             no
            consistent and normally distributed

AWLS        asymptotically unbiased, efficient,             yes
            consistent and normally distributed


Table 2. Wald-based statistics for testing H0: h(β0) = 0 vs. Ha: h(β0) ≠ 0¹

                                                   Approximate Distribution
Description                                        under H0

F STATISTICS:
(W1) Wald statistic                                F[s, np-q]
(W2) new Wald statistic, general β0
(W3) new Wald statistic, β0 = βc

χ² STATISTICS:
(W4) Wald statistic, W4 = SSH_W                    χ²[s]
(W5) new Wald statistic

¹Primary references for the above statistics:
(W1) Gallant [1987; Ch. 5] or §3.4, Theorem 18, for an alternate proof
(W2) §3.4, Theorem 19
(W3) §3.4, Theorem 20
(W4) Gallant [1987; Ch. 5] or §3.3, Theorem 14, for an alternate proof
(W5) §3.3, Theorem 15


Table 3. Likelihood-ratio-based statistics for testing H0: h(β0) = 0 vs. Ha: h(β0) ≠ 0¹

                                                   Approximate Distribution
Description                                        under H0

F STATISTICS:
(L1) likelihood ratio statistic                    F[s, np-q]
(L2) new likelihood ratio statistic, general β0
(L3) new likelihood ratio statistic, β0 = βc

χ² STATISTICS:
(L4) likelihood ratio statistic                    χ²[s]
(L5) new likelihood ratio statistic, L5 = SSH_Lu / c_Lu

¹Primary references for the above statistics:
(L1) Gallant [1987; Ch. 5] or §3.4, Theorem 21, for an alternate proof
(L2) §3.4, Theorem 22
(L3) §3.4, Theorem 23
(L4) Gallant [1987; Ch. 5] or §3.3, Theorem 16, for an alternate proof
(L5) §3.3, Theorem 17


Table 4. Approximate 95% confidence intervals for various observed type I error rates and numbers of replications

number of replications   observed type I error rate   95% CI

500                      0.05                         (0.031, 0.069)
                         0.10                         (0.073, 0.127)
                         0.15                         (0.118, 0.182)
                         0.20                         (0.164, 0.236)

1000                     0.05                         (0.036, 0.064)
                         0.10                         (0.081, 0.119)
                         0.15                         (0.127, 0.173)
                         0.20                         (0.175, 0.225)
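The tabled intervals are consistent with the usual binomial normal approximation, p̂ ± 2·SE(p̂) with SE(p̂) = sqrt(p̂(1 − p̂)/N), matching the "2 standard errors" criterion used later for flagging type I error rates. A minimal sketch (the function name type1_ci is illustrative, not from the dissertation):

```python
import math

def type1_ci(p_hat, n_reps, z=2.0):
    """Approximate CI for an observed type I error rate over n_reps
    Monte Carlo replications: p_hat +/- z * sqrt(p_hat*(1-p_hat)/n_reps)."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n_reps)
    return (round(p_hat - z * se, 3), round(p_hat + z * se, 3))

print(type1_ci(0.05, 500))    # (0.031, 0.069), as tabled
print(type1_ci(0.20, 1000))   # (0.175, 0.225), as tabled
```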


Table 5a. Number of replications used for model 1 in the complete data study.

                              Estimation            Hypothesis Testing
     AWLS or
n    ρ    ITAWLS    Reduced   Full        Wald      Likelihood Ratio

Constrained covariance estimation:

10   0    A         999       1000        1000      999
          I         1000      1000        1000      1000
     0.3  A         1000      1000        1000      1000
          I         1000      1000        1000      1000
     0.6  A         1000      1000        1000      1000
          I         1000      1000        1000      1000

20   0    A         999       1000        1000      999
          I         999       998         998       997
     0.3  A         999       1000        1000      999
          I         1000      1000        1000      1000
     0.6  A         1000      1000        1000      1000
          I         1000      1000        1000      1000

40   0    A         1000      1000        1000      1000
     0.3  A         1000      1000        1000      1000
     0.6  A         1000      ~           999       999

NOT CONVERGED: 4 3

Unconstrained covariance estimation:

20   0    A         1000      1000        1000      1000
          I         997       1000        1000      997
     0.3  A         1000      1000        1000      1000
          I         997       1000        1000      997
     0.6  A         1000      999         999       999
          I         999       1000        1000      999

40   0    A         1000      999         999       999
     0.3  A         998       1000        1000      998
     0.6  A         ~         1000        1000      998

NOT CONVERGED: 11 2

TOTAL NOT CONVERGED: 15 5


Table 5b. Average number of iterations until convergence criteria were reached for estimation of the full model in the complete data study.

                    Model 1                   Model 2
n    ρ         β-iterations  γ-iterations    β-iterations  γ-iterations

AWLS, constrained covariance estimation:

10   0         2.2                           2.3
     0.3       2.7                           2.8
     0.6       3.0                           3.2
20   0         2.1                           2.1
     0.3       2.6                           2.6
     0.6       2.9                           3.0
40   0         2.0                           2.0
     0.3       2.5                           2.5
     0.6       2.9                           2.8

AWLS, unconstrained covariance estimation:

20   0         3.0                           3.2
     0.3       3.0                           3.1
     0.6       3.0                           3.1
40   0         2.9                           2.9
     0.3       2.9                           2.9
     0.6       2.9                           2.9

ITAWLS, constrained covariance estimation:

10   0         4.6           2.9             4.1           2.6
     0.3       6.4           3.6             5.7           3.2
     0.6       9.0           4.7             7.5           3.9
20   0         3.9           2.7             3.5           2.4
     0.3       5.6           3.3             5.0           3.0
     0.6       7.1           3.9             6.4           3.5

ITAWLS, unconstrained covariance estimation:

20   0         17.5          9.3             18.3          9.4
     0.3       17.7          9.5             18.2          9.3
     0.6       17.9          9.5             18.2          9.4


Table 6a. Average of the parameter estimates for model 1 in the complete data study.

                         reduced model     full model
     AWLS or
n    ρ    ITAWLS    θ1       θ2       θ11      θ21      θ12      θ22

Constrained covariance estimation:

10   0    A         894.40   0.245    895.83   0.245    895.08   0.245
          I         894.40   0.245    895.83   0.245    895.08   0.245
     0.3  A         893.14   0.245    894.78   0.245    893.39   0.246
          I         893.14   0.245    894.78   0.245    893.40   0.246
     0.6  A         892.48   0.246    893.53   0.246    893.98   0.245
          I         892.48   0.246    893.52   0.246    893.01   0.245

20   0    A         893.65   0.245    894.08   0.245    894.42   0.245
          I         893.68   0.245    894.11   0.245    894.40   0.245
     0.3  A         893.17   0.245    893.03   0.245    894.34   0.244
          I         893.20   0.245    893.03   0.245    894.33   0.245
     0.6  A         892.70   0.245    893.05   0.245    893.07   0.245
          I         892.70   0.245    893.06   0.245    893.07   0.245

40   0    A         893.99   0.245    893.06   0.245    893.53   0.245
     0.3  A         893.38   0.245    893.36   0.245    893.90   0.245
     0.6  A         893.15   0.245    893.11   0.245    893.59   0.245

Unconstrained covariance estimation:

20   0    A         893.65   0.245    894.34   0.245    894.30   0.245
          I         893.60   0.245    894.52   0.245    894.19   0.245
     0.3  A         893.41   0.245    893.28   0.245    894.60   0.244
          I         893.53   0.245    893.50   0.245    894.76   0.245
     0.6  A         892.66   0.245    892.97   0.245    893.20   0.245
          I         892.72   0.245    893.13   0.245    893.32   0.245

40   0    A         892.98   0.245    893.02   0.245    893.61   0.245
     0.3  A         893.41   0.245    893.52   0.245    893.82   0.245
     0.6  A         893.17   0.245    893.15   0.245    893.64   0.245

Population values:  892.56   0.245


Table 6b. Average of the parameter estimates for model 2 in the complete data study.

                         reduced model     full model
     AWLS or
n    ρ    ITAWLS    θ1       θ2       θ11      θ21      θ12      θ22

Constrained covariance estimation:

10   0    A         213.92   0.548    214.08   0.548    213.97   0.550
          I         213.93   0.548    214.08   0.548    213.97   0.550
     0.3  A         214.01   0.547    214.06   0.548    214.13   0.547
          I         214.01   0.547    214.06   0.548    214.13   0.547
     0.6  A         213.89   0.547    214.08   0.546    213.84   0.547
          I         213.90   0.546    214.08   0.546    213.85   0.547

20   0    A         213.91   0.547    213.90   0.549    214.02   0.547
          I         213.91   0.547    213.90   0.549    214.02   0.547
     0.3  A         213.85   0.548    213.90   0.549    213.89   0.548
          I         213.85   0.548    213.90   0.549    213.89   0.548
     0.6  A         213.89   0.547    213.75   0.547    214.09   0.547
          I         213.89   0.547    213.75   0.547    214.10   0.547

40   0    A         213.72   0.548    213.72   0.549    213.77   0.548
     0.3  A         213.89   0.547    213.90   0.546    213.93   0.547
     0.6  A         213.91   0.547    213.85   0.547    214.00   0.548

Unconstrained covariance estimation:

20   0    A         213.88   0.548    213.89   0.549    213.98   0.548
          I         213.85   0.548    213.86   0.549    213.96   0.549
     0.3  A         213.83   0.547    213.91   0.549    213.86   0.547
          I         213.83   0.547    213.92   0.549    213.87   0.547
     0.6  A         213.92   0.547    213.81   0.547    214.13   0.547
          I         213.96   0.547    213.81   0.547    214.20   0.547

40   0    A         213.70   0.548    213.90   0.547    214.01   0.548
     0.3  A         213.91   0.546    213.92   0.546    213.95   0.547
     0.6  A         213.94   0.547    213.90   0.547    214.01   0.548

Population values:  213.81   0.547


Table 7a. Mean estimates of the asymptotic covariance matrix for θ̂ in the reduced model 1 for the complete data study and corresponding percent of sample value achieved by the mean asymptotic covariance estimates. The variance estimates for θ̂2 are multiplied by 10^4.

                    Mean Estimated                       Percent Sample
                    Asymptotic Value                     Value Achieved
     AWLS or
n    ρ    ITAWLS    V̂0(θ̂1)   V̂0(θ̂2)   Ĉ0(θ̂1,θ̂2)    %V(θ̂1)   %V(θ̂2)   %Ĉ(θ̂1,θ̂2)

Constrained covariance estimation:

10   0    A         570      1.55     -0.287       100      91       97
          I         570      1.55     -0.287       100      91       97
     0.3  A         390      1.28     -0.204       93       91       95
          I         390      1.28     -0.204       93       91       95
     0.6  A         224      1.00     -0.125       97       91       96
          I         223      1.00     -0.125       101      91       96

20   0    A         283      0.79     -0.144       99       98       99
          I         283      0.79     -0.144       99       98       99
     0.3  A         199      0.67     -0.105       99       92       95
          I         199      0.67     -0.105       99       92       95
     0.6  A         115      0.51     -0.065       99       93       98
          I         114      0.50     -0.065       97       91       98

40   0    A         142      0.40     -0.073       96       93       95
     0.3  A         100      0.34     -0.053       93       94       95
     0.6  A         58       0.27     -0.027       100      93       97

Unconstrained covariance estimation:

20   0    A         208      0.58     -0.106       63       62       63
          I         203      0.57     -0.103       53       52       53
     0.3  A         147      0.50     -0.078       62       59       60
          I         143      0.49     -0.075       53       51       51
     0.6  A         84       0.40     -0.048       62       63       63
          I         82       0.38     -0.047       52       53       53

40   0    A         123      0.34     -0.063       79       76       79
     0.3  A         86       0.30     -0.046       82       81       81
     0.6  A         50       0.23     -0.028       78       70       74


Table 7b. Mean estimates of the asymptotic covariance matrix for θ̂ in the reduced model 2 for the complete data study and corresponding percent covariance achieved by the mean asymptotic covariance estimates. The variance estimates for θ̂2 are multiplied by 10^4.

                    Mean Estimated                       Percent Sample
                    Asymptotic Value                     Value Achieved
     AWLS or
n    ρ    ITAWLS    V̂0(θ̂1)   V̂0(θ̂2)   Ĉ0(θ̂1,θ̂2)    %V(θ̂1)   %V(θ̂2)   %Ĉ(θ̂1,θ̂2)

Constrained covariance estimation:

10   0    A         14.1     10.5     -0.092       99       96       103
          I         14.1     10.5     -0.092       99       96       103
     0.3  A         15.1     8.0      -0.046       78       91       98
          I         15.1     8.0      -0.046       78       91       98
     0.6  A         15.6     5.5      -0.001       85       92       ¹
          I         15.7     5.5      -0.001       85       92       ¹

20   0    A         7.4      5.4      -0.047       97       95       96
          I         7.4      5.4      -0.047       97       95       96
     0.3  A         8.1      4.2      -0.022       92       95       96
          I         8.1      4.2      -0.022       93       95       96
     0.6  A         8.4      2.9      0.002        90       94       100
          I         8.4      2.9      0.002        91       94       100

40   0    A         3.7      2.7      -0.023       109      100      110
     0.3  A         4.3      2.1      -0.011       96       95       92
     0.6  A         4.4      1.5      -0.001       88       93       50

¹The sample value is +0.009, so the percent sample value achieved could not be computed.


Table 8a. Average of the estimated variance components and the percent of population value achieved for transformed model 1 in the complete data study.

                         Mean               Percent of
                         Estimate           Population Value
     OLS, AWLS or
n    ρ    ITAWLS    λ̂1       λ̂2        %λ1      %λ2

10   0    O         694      822.2     82       97
          A         694      822.5     82       97
          I         694      822.5     82       97
     0.3  O         1673     567.4     79       96
          A         1705     564.9     81       97
          I         1709     564.7     81       97
     0.6  O         2788     327.0     83       97
          A         2918     321.9     86       95
          I         2955     321.3     88       95

20   0    O         772      827.2     91       98
          A         772      827.3     91       98
          I         772      827.4     91       98
     0.3  O         1908     579.7     90       98
          A         1920     578.3     91       98
          I         1921     578.2     91       98
     0.6  O         2973     332.0     88       98
          A         3039     329.3     90       98
          I         3048     329.2     90       98

40   0    O         805      836.8     95       99
          A         805      836.6     95       99
     0.3  O         1988     586.7     94       99
          A         1996     585.9     95       99
     0.6  O         3208     335.0     95       99
          A         3242     334.0     96       99


Table 8b. Average of the estimated variance components and the percent of population value achieved for transformed model 2 in the complete data study.

                         Mean               Percent of
                         Estimate           Population Value
     OLS, AWLS or
n    ρ    ITAWLS    λ̂1       λ̂2        %λ1      %λ2

10   0    O         237.6    279.1     81       96
          A         237.6    279.2     81       96
          I         237.6    279.2     81       96
     0.3  O         571.3    196.9     78       96
          A         576.9    196.4     79       96
          I         577.4    196.4     79       96
     0.6  O         945.9    112.6     81       96
          A         969.3    111.8     83       96
          I         973.4    111.7     83       96

20   0    O         265.9    287.8     91       99
          A         265.9    287.8     91       99
          I         265.9    287.8     91       99
     0.3  O         645.2    200.8     88       98
          A         648.0    200.5     89       98
          I         648.1    200.5     89       98
     0.6  O         1038     114.4     89       98
          A         1051     113.9     90       98
          I         1052     113.9     90       98

40   0    O         276.1    288.3     95       99
          A         276.1    288.3     95       99
     0.3  O         701.8    203.5     96       100
          A         703.4    203.3     96       100
     0.6  O         1105     115.6     96       100
          A         1111     115.8     96       100


Table 9a. Average of the estimated variance components and the percent population value achieved for model 1 in the complete data study.

                         Mean               Percent of
                         Estimate           Population Value
     OLS, AWLS or
n    ρ    ITAWLS    σ̂²       ρ̂         %σ²      %ρ

10   0    O         800.9    -0.026    95
          A         801.1    -0.027    95
          I         801.1    -0.027    95
     0.3  O         751.8    0.227     91       76
          A         754.9    0.233     91       78
          I         755.5    0.233     91       78
     0.6  O         737.1    0.519     87       86
          A         754.7    0.534     89       89
          I         760.2    0.536     90       89

20   0    O         818.0    -0.012    97
          A         818.0    -0.012    97
          I         818.2    -0.012    97
     0.3  O         800.3    0.267     95       89
          A         801.9    0.270     95       90
          I         802.1    0.270     95       90
     0.6  O         772.2    0.552     92       92
          A         780.9    0.560     93       93
          I         782.3    0.560     93       93

40   0    O         831.5    -0.006    98
          A         831.5    -0.006    98
     0.3  O         820.1    0.280     97       93
          A         821.0    0.280     97       94
     0.6  O         813.9    0.580     96       97
          A         818.5    0.584     97       97


Table 9b. Average of the estimated variance components and the percent population value achieved for model 2 in the complete data study.

                         Mean               Percent of
                         Estimate           Population Value
     OLS, AWLS or
n    ρ    ITAWLS    σ̂²       ρ̂         %σ²      %ρ

10   0    O         272.2    -0.025    93
          A         272.2    -0.025    93
          I         272.2    -0.025    93
     0.3  O         259.3    0.223     89       74
          A         259.9    0.226     89       75
          I         259.9    0.226     89       75
     0.6  O         251.5    0.516     86       86
          A         254.7    0.523     87       87
          I         255.3    0.523     87       87

20   0    O         284.1    -0.013    97
          A         284.1    -0.013    97
          I         284.1    -0.013    97
     0.3  O         274.8    0.261     94       87
          A         275.1    0.263     94       88
          I         275.1    0.263     94       88
     0.6  O         268.3    0.556     92       93
          A         270.1    0.560     92       93
          I         270.2    0.560     93       93

40   0    O         286.3    -0.007    98
          A         286.3    -0.007    98
     0.3  O         286.5    0.286     98       95
          A         286.7    0.287     98       96
     0.6  O         280.8    0.578     96       96
          A         281.7    0.580     96       97


Table 10a. Type I error rates for approximate F-tests of the joint hypothesis
H0: θg = θg′ at the .05 level of significance in model 1 of the complete
data study.

           AWLS or            WALD                  LIKELIHOOD RATIO
 n     ρ   ITAWLS    W1      W2      W3        L1      L2      L3

Constrained covariance estimation:

 10    0     A     0.052   0.040   0.037    0.076*  0.062   0.060
             I     0.053   0.041   0.037    0.077*  0.063   0.061
       0.3   A     0.068*  0.076*  0.077*   0.074*  0.082*  0.083*
             I     0.067*  0.076*  0.077*   0.074*  0.082*  0.083*
       0.6   A     0.069*  0.069*  0.064*   0.075*  0.093*  0.089*
             I     0.064   0.066*  0.061    0.070*  0.087*  0.084*
 20    0     A     0.046   0.036   0.037    0.057   0.048   0.047
             I     0.046   0.036   0.037    0.057   0.048   0.047
       0.3   A     0.045   0.047   0.047    0.050   0.055   0.055
             I     0.044   0.049   0.046    0.049   0.053   0.054
       0.6   A     0.052   0.052   0.049    0.054   0.061   0.057
             I     0.052   0.049   0.047    0.055   0.060   0.056
 40    0     A     0.063   0.058   0.058    0.067*  0.061   0.061
       0.3   A     0.060   0.045   0.042    0.057   0.070*  0.065*
       0.6   A     0.056   0.045   0.042    0.057   0.070*  0.065*

Unconstrained covariance estimation:

 20    0     A     0.189*  0.130*           0.202*  0.096*
             I     0.240*  0.191*           0.253*  0.083*
       0.3   A     0.163*  0.108*           0.168*  0.070*
             I     0.202*  0.152*           0.213*  0.074*
       0.6   A     0.161*  0.103*           0.165*  0.106*
             I     0.210*  0.147*           0.214*  0.089*
 40    0     A     0.121*  0.098*           0.126*  0.082*
       0.3   A     0.110*  0.090*           0.114*  0.072*
       0.6   A     0.108*  0.078*           0.107*  0.081*

 * denotes a type I error rate that is more than 2 standard errors from the nominal rate of .05
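The starring rule in the footnote is easy to reproduce: an empirical rejection rate based on N replications has Monte Carlo standard error √(.05 × .95 / N) under the null, and a rate is flagged when it falls more than two such standard errors from .05. A minimal sketch, assuming N = 500 replications per cell (500 is the count reported in the incomplete-data tally of Table 16; treating the complete-data cells as the same size is an assumption here):

```python
import math

def flag_rate(p_hat, n_reps=500, nominal=0.05):
    """Star an empirical type I error rate that sits more than two
    binomial standard errors away from the nominal level."""
    se = math.sqrt(nominal * (1.0 - nominal) / n_reps)
    return "*" if abs(p_hat - nominal) > 2.0 * se else ""

# Two standard errors with 500 replications is about 0.0195, so rates
# outside roughly (0.031, 0.069) are starred.
print(flag_rate(0.076))  # starred
print(flag_rate(0.052))  # not starred (prints an empty string)
```

This matches the pattern in the tables: 0.064 and 0.066 escape the star while 0.070 and above (and 0.030 and below) receive it.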


Table 10b. Type I error rates for approximate F-tests of the joint hypothesis
H0: θg = θg′ at the .05 level of significance in model 2 of the complete data
study.

           AWLS or            WALD                  LIKELIHOOD RATIO
 n     ρ   ITAWLS    W1      W2      W3        L1      L2      L3

Constrained covariance estimation:

 10    0     A     0.101*  0.060   0.058    0.106*  0.066*  0.065*
             I     0.102*  0.060   0.058    0.107*  0.066*  0.065*
       0.3   A     0.083*  0.079*  0.079*   0.084*  0.081*  0.082*
             I     0.083*  0.079*  0.079*   0.083*  0.081*  0.082*
       0.6   A     0.086*  0.079*  0.079*   0.084*  0.081*  0.078*
             I     0.085*  0.079*  0.075*   0.082*  0.078*  0.077*
 20    0     A     0.068*  0.058   0.058    0.070*  0.062   0.062
             I     0.068*  0.058   0.058    0.070*  0.062   0.062
       0.3   A     0.063   0.055   0.056    0.065*  0.060   0.060
             I     0.063   0.055   0.056    0.065*  0.059   0.060
       0.6   A     0.059   0.064   0.059    0.058   0.066*  0.063
             I     0.059   0.064   0.057    0.058   0.064   0.060
 40    0     A     0.062   0.050   0.050    0.063   0.052   0.052
       0.3   A     0.050   0.056   0.056    0.051   0.057   0.057
       0.6   A     0.053   0.046   0.046    0.053   0.060   0.057

Unconstrained covariance estimation:

 20    0     A     0.204*  0.137*           0.206*  0.095*
             I     0.240*  0.181*           0.241*  0.080*
       0.3   A     0.179*  0.135*           0.200*  0.100*
             I     0.244*  0.175*           0.246*  0.081*
       0.6   A     0.199*  0.117*           0.202*  0.091*
             I     0.249*  0.148*           0.252*  0.076*
 40    0     A     0.100*  0.094*           0.104*  0.068*
       0.3   A     0.097*  0.085*           0.097*  0.075*
       0.6   A     0.094*  0.083*           0.093*  0.072*

 * denotes a type I error rate that is more than 2 standard errors from the nominal rate of .05


Table 11a. Type I error rates for approximate χ²-tests of the joint hypothesis
H0: θg = θg′ at the .05 level of significance in model 1 of the complete data
study.

           AWLS or        WALD            LIKELIHOOD RATIO
 n     ρ   ITAWLS    W4      W5        L4      L5

Constrained covariance estimation:

 10    0     A     0.067*  0.055    0.088*  0.086*
             I     0.069*  0.055    0.088*  0.086*
       0.3   A     0.091*  0.112*   0.105*  0.115*
             I     0.091*  0.108*   0.104*  0.112*
       0.6   A     0.101*  0.112*   0.102*  0.127*
             I     0.094*  0.115*   0.097*  0.123*
 20    0     A     0.058   0.044    0.072*  0.061
             I     0.058   0.044    0.061   0.059
       0.3   A     0.054   0.062    0.057   0.064
             I     0.054   0.060    0.063   0.047
       0.6   A     0.066*  0.071*   0.066*  0.080*
             I     0.063   0.068*   0.079*  0.047
 40    0     A     0.073*  0.062    0.066*  0.066*
       0.3   A     0.058   0.055    0.081*  0.050
       0.6   A     0.058   0.055    0.081*  0.050

Unconstrained covariance estimation:

 20    0     A     0.204*  0.157*   0.215*  0.111*
             I     0.252*  0.201*   0.268*  0.094*
       0.3   A     0.176*  0.125*   0.181*  0.101*
             I     0.221*  0.165*   0.223*  0.086*
       0.6   A     0.178*  0.118*   0.182*  0.123*
             I     0.235*  0.173*   0.235*  0.104*
 40    0     A     0.127*  0.104*   0.134*  0.084*
       0.3   A     0.118*  0.101*   0.120*  0.084*
       0.6   A     0.113*  0.085*   0.115*  0.091*

 * denotes a type I error rate that is more than 2 standard errors from the nominal rate of .05


Table 11b. Type I error rates for approximate χ²-tests of the joint hypothesis
H0: θg = θg′ at the .05 level of significance in model 2 of the complete data
study.

           AWLS or        WALD            LIKELIHOOD RATIO
 n     ρ   ITAWLS    W4      W5        L4      L5

Constrained covariance estimation:

 10    0     A     0.126*  0.085*   0.133*  0.099*
             I     0.126*  0.085*   0.134*  0.099*
       0.3   A     0.104*  0.106*   0.105*  0.107*
             I     0.105*  0.105*   0.105*  0.107*
       0.6   A     0.107*  0.103*   0.108*  0.117*
             I     0.106*  0.100*   0.106*  0.112*
 20    0     A     0.080*  0.067*   0.084*  0.070*
             I     0.080*  0.067*   0.084*  0.070*
       0.3   A     0.075*  0.074*   0.076*  0.080*
             I     0.075*  0.074*   0.076*  0.080*
       0.6   A     0.069*  0.080*   0.067*  0.083*
             I     0.067*  0.079*   0.069*  0.081*
 40    0     A     0.066*  0.055    0.065*  0.057
       0.3   A     0.053   0.062    0.053   0.063
       0.6   A     0.056   0.052    0.057   0.066*

Unconstrained covariance estimation:

 20    0     A     0.219*  0.151*   0.220*  0.105*
             I     0.254*  0.199*   0.257*  0.085*
       0.3   A     0.212*  0.150*   0.213*  0.115*
             I     0.257*  0.191*   0.258*  0.100*
       0.6   A     0.217*  0.142*   0.218*  0.120*
             I     0.271*  0.168*   0.274*  0.098*
 40    0     A     0.111*  0.098*   0.111*  0.076*
       0.3   A     0.101*  0.093*   0.101*  0.080*
       0.6   A     0.098*  0.096*   0.099*  0.081*

 * denotes a type I error rate that is more than 2 standard errors from the nominal rate of .05


Table 12. Percent coverage for approximate 95% (Wald-based) confidence intervals
for reduced model parameter estimates in model 1 of the complete data study.

           AWLS or    constrained covariance    unconstrained covariance
 n     ρ   ITAWLS        θ1       θ2                θ1       θ2

 10    0     A          94.7     94.2
             I          94.7     94.2
       0.3   A          93.8     94.2
             I          93.8     94.2
       0.6   A          94.7     93.9
             I          94.7     94.2
 20    0     A          95.2     95.2              86.2     86.5
             I          95.2     95.2              83.6     83.0
       0.3   A          94.7     93.4              86.0     84.2
             I          94.8     93.4              82.9     80.6
       0.6   A          94.3     94.0              85.7     86.7
             I          94.0     94.0              82.4     83.1
 40    0     A          98.3     96.6              89.8     90.0
       0.3   A          95.5     96.9              91.4     92.0
       0.6   A          94.4     97.1              91.5     89.6


Table 13. Observed F at which Dmax occurs and corresponding p-value for Dmax
from the Kolmogorov-Smirnov goodness-of-fit test for approximate F statistics
used to test H0: θg = θg′ with constrained covariance and AWLS
estimation in the complete data study.

                       W1                        L1
 n     ρ       F     Dmax     p          F     Dmax     p

model 1:

 10    0     1.38   0.029   0.386      3.79   0.026   0.485
       0.3   1.09   0.046   0.027      1.85   0.057   0.003*
       0.6   1.20   0.062   0.001*     1.22   0.062   0.001*
 20    0     1.20   0.036   0.148      1.26   0.047   0.026
       0.3   0.96   0.018   0.901      1.84   0.018   0.885
       0.6   0.81   0.034   0.192      0.81   0.033   0.237
 40    0     1.34   0.036   0.154      1.37   0.044   0.041
       0.3   0.12   0.021   0.786      1.71   0.022   0.712
       0.6   0.58   0.021   0.746      0.58   0.021   0.767

model 2:

 10    0     1.61   0.070  <0.001*     2.05   0.076  <0.001*
       0.3   1.14   0.063   0.001*     1.14   0.065  <0.001*
       0.6   1.04   0.060   0.001*     1.04   0.059   0.002*
 20    0     1.21   0.034   0.189      1.19   0.035   0.178
       0.3   1.01   0.033   0.232      1.01   0.033   0.215
       0.6   1.29   0.026   0.492      1.27   0.028   0.329
 40    0     1.00   0.014   0.989      1.00   0.014   0.990
       0.3   0.33   0.032   0.241      0.33   0.033   0.231
       0.6   0.40   0.031   0.291      0.40   0.031   0.308

 * denotes rejection of the goodness-of-fit hypothesis at the Bonferroni corrected
   error rate α = .05/18 ≈ .003
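The Kolmogorov-Smirnov entries above compare the empirical distribution of the simulated test statistics with the approximating F distribution, and the footnote's Bonferroni bound spreads α = .05 over the 18 table cells per statistic. A sketch of one such test using SciPy's `kstest`; the degrees of freedom (2, 54) and the sample size of 500 are illustrative assumptions here, not values taken from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1990)

# Hypothetical draws standing in for 500 simulated test statistics;
# the reference distribution is F with (2, 54) df (illustrative values).
dfn, dfd = 2, 54
sims = stats.f.rvs(dfn, dfd, size=500, random_state=rng)

# Dmax is the largest gap between the empirical CDF of the simulated
# statistics and the hypothesized F CDF.
dmax, pval = stats.kstest(sims, stats.f(dfn, dfd).cdf)

alpha = 0.05 / 18          # Bonferroni correction over 18 tests
reject = pval < alpha
```

The "observed F at which Dmax occurs" column reports where along the F axis that largest gap falls, which helps diagnose whether an approximation fails in the body or the tail of the distribution.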


Table 14a. Average correction factors for Wald and likelihood ratio based tests of
H0: θg = θg′ in model 1 using constrained covariance estimation and
AWLS in the complete data study.

 n     ρ    factor¹   mean    std. dev.    min     max

 10    0    fW2       1.04      0.11       0.74    1.18
            fW3       1.04      0.10       0.74    1.18
            fL2       1.10      0.24       0.71    3.46
            fL3       1.10      0.24       0.74    3.45
       0.3  fW2       0.82      0.10       0.70    1.64
            fW3       0.85      0.10       0.74    1.75
            fL2       0.78      0.11       0.70    2.51
            fL3       0.81      0.10       0.74    2.49
       0.6  fW2       0.95      0.21       0.70    2.20
            fW3       1.01      0.23       0.74    2.34
            fL2       0.89      0.17       0.70    1.78
            fL3       0.95      0.18       0.74    1.89
 20    0    fW2       1.02      0.09       0.74    1.16
            fW3       1.02      0.09       0.75    1.16
            fL2       1.03      0.12       0.73    1.45
            fL3       1.03      0.12       0.75    1.44
       0.3  fW2       0.78      0.06       0.72    1.22
            fW3       0.80      0.06       0.74    1.26
            fL2       0.76      0.05       0.72    1.17
            fL3       0.77      0.04       0.74    1.17
       0.6  fW2       0.95      0.14       0.72    1.72
            fW3       0.98      0.15       0.74    1.73
            fL2       0.92      0.13       0.72    1.60
            fL3       0.95      0.13       0.75    1.64
 40    0    fW2       1.01      0.07       0.79    1.16
            fW3       1.01      0.07       0.80    1.15
            fL2       1.02      0.08       0.79    1.31
            fL3       1.02      0.08       0.80    1.31
       0.3  fW2       0.76      0.03       0.72    0.98
            fW3       0.77      0.03       0.74    1.00
            fL2       0.75      0.02       0.73    0.91
            fL3       0.76      0.02       0.74    0.91
       0.6  fW2       0.97      0.10       0.74    1.29
            fW3       0.98      0.10       0.74    1.31
            fL2       0.95      0.09       0.73    1.29
            fL3       0.97      0.10       0.74    1.31

 ¹ fW2 = cu/cWu,  fW3 = cc′/cWu,  fL2 = cu/cLu,  fL3 = cc′/cLu


Table 15a. Average degrees of freedom for Wald and likelihood ratio based tests of
H0: θg = θg′ in model 1 using constrained covariance estimation and
AWLS in the complete data study.

 n     ρ    df       mean     std. dev.    min      max

 10    0    νWu      1.90       0.13       1.15     2.00
            νLu      1.90       0.13       1.15     2.00
            νu      54.52       1.80      39.47    56.00
            νc′     58.20       2.13      41.55    60.00
       0.3  νWu      1.70       0.20       1.24     2.00
            νLu      1.70       0.20       1.24     2.00
            νu      44.03       9.35      18.45    56.00
            νc′     46.69      10.12      19.63    60.00
       0.6  νWu      1.36       0.16       1.13     2.00
            νLu      1.36       0.16       1.13     2.00
            νu      25.56       9.11      12.83    55.97
            νc′     27.03       9.60      13.38    59.96
 20    0    νWu      1.95       0.07       1.47     2.00
            νLu      1.95       0.07       1.47     2.00
            νu     114.19       2.29      94.58   116.00
            νc′    118.02       2.49      97.24   120.00
       0.3  νWu      1.63       0.15       1.21     2.00
            νLu      1.63       0.15       1.21     2.00
            νu      85.83      15.78      36.75   116.00
            νc′     88.26      16.36      37.91   120.00
       0.6  νWu      1.30       0.09       1.14     1.72
            νLu      1.30       0.09       1.14     1.72
            νu      47.54      11.37      26.89    98.19
            νc′     48.94      11.64      27.54   101.01
 40    0    νWu      1.97       0.04       1.72     2.00
            νLu      1.97       0.04       1.72     2.00
            νu     234.10       2.50     216.13   236.00
            νc′    238.01       2.61     219.41   240.00
       0.3  νWu      1.60       0.11       1.35     1.96
            νLu      1.60       0.11       1.35     1.96
            νu     170.21      23.41     109.99   233.03
            νc′    172.54      23.80     111.58   236.90
       0.6  νWu      1.27       0.06       1.16     1.57
            νLu      1.27       0.06       1.16     1.57
            νu      89.42      14.13      61.88   166.07
            νc′     90.77      14.30      62.77   168.28


Table 15b. Average degrees of freedom for Wald and likelihood ratio based tests of
H0: θg = θg′ in model 1 using unconstrained covariance estimation and
AWLS in the complete data study.

 n     ρ    df       mean     std. dev.    min      max

 20    0    νWu      1.82       0.14       1.27     2.00
            νLu      1.82       0.14       1.27     2.00
            νu      88.00       6.32      59.76   104.13
       0.3  νWu      1.53       0.20       1.10     2.00
            νLu      1.53       0.20       1.10     2.00
            νu      71.83      11.29      35.87   101.15
       0.6  νWu      1.26       0.13       1.03     1.88
            νLu      1.26       0.13       1.03     1.88
            νu      44.25       9.07      26.37    83.64
 40    0    νWu      1.91       0.08       1.55     2.00
            νLu      1.91       0.08       1.55     2.00
            νu     202.85       9.01     170.95   225.52
       0.3  νWu      1.55       0.15       1.18     2.00
            νLu      1.55       0.15       1.18     2.00
            νu     155.23      19.58     102.02   207.02
       0.6  νWu      1.25       0.09       1.09     1.85
            νLu      1.25       0.09       1.09     1.85
            νu      86.62      12.84      60.94   150.24


Table 16. Number of replications used for model 1 in the incomplete data study.

                            Estimation          Hypothesis Testing
 n     ρ    % missing    Reduced    Full      Wald    Likelihood Ratio

 10    0.3      5          499      500        500          499
               10          500      500        500          500
       0.6      5          499      500        500          499
               10          500      500        500          500
 20    0.3      5          500      500        500          500
               10          500      500        500          500
       0.6      5          500      498        498          498
               10          500      500        500          500

 NOT CONVERGED:                       2          2


Table 17. Average number of iterations until convergence criteria were reached for
estimation of the full model using AWLS and constrained covariance
estimation in the incomplete data study.

 n     ρ    % missing    θ-iterations

 10    0.3      5            2.8
               10            2.9
       0.6      5            3.0
               10            3.1
 20    0.3      5            2.8
               10            2.8
       0.6      5            3.0
               10            3.0


Table 18. Average of the parameter estimates for model 1 using AWLS and
constrained covariance estimation in the incomplete data study.

                          reduced model             full model
 n     ρ    % missing     θ1       θ2          θ1       θ2        θ1       θ2

 10    0.3      5       893.86   0.245       894.97   0.245     895.20   0.245
               10       894.46   0.245       896.22   0.245     895.15   0.245
       0.6      5       893.04   0.245       894.20   0.245     893.71   0.246
               10       892.87   0.245       893.20   0.246     894.12   0.245
 20    0.3      5       892.90   0.245       892.52   0.245     894.27   0.245
               10       893.26   0.245       894.07   0.245     893.59   0.245
       0.6      5       893.66   0.245       893.57   0.245     894.55   0.244
               10       892.94   0.245       894.24   0.245     892.51   0.246

 Population values:     892.56   0.245


Table 19. Mean estimates of the asymptotic covariance matrix for θ̂ in the reduced
model 1 for the incomplete data study and corresponding percent of sample
value achieved by the mean asymptotic covariance estimates. The variance
estimates for θ̂2 are multiplied by 10⁴.

                           Mean Estimated              Percent Sample
                          Asymptotic Value             Value Achieved
 n     ρ    % missing   Var θ̂1  Var θ̂2    Cov        Var θ̂1  Var θ̂2   Cov

 10    0.3      5         393     1.28   -0.205         85      81      82
               10         397     1.28   -0.205         89      87      88
       0.6      5         228     0.96   -0.127         95      81      91
               10         227     0.97   -0.126         88      81      88
 20    0.3      5         198     0.67   -0.105         91      86      89
               10         198     0.67   -0.105         90      89      91
       0.6      5         115     0.52   -0.065         82      87      83
               10         116     0.52   -0.066         80      80      80


Table 20. Average of the estimated variance components and the percent of
population value for model 1 using AWLS and constrained covariance
estimation in the incomplete data study.

                                       Mean Estimate      Percent of Population Value
 n     ρ    % missing   estimator*    σ̂²        ρ̂            %σ²     %ρ

 10    0.3      5          O        756.8     0.235           90      78
                           A        760.3     0.241           90      80
               10          O        745.1     0.221           88      74
                           A        748.7     0.228           89      76
       0.6      5          O        716.7     0.500           85      83
                           A        733.6     0.516           87      86
               10          O        692.3     0.493           82      82
                           A        707.9     0.510           84      85
 20    0.3      5          O        790.8     0.265           94      88
                           A        792.5     0.269           94      90
               10          O        779.0     0.249           92      83
                           A        780.6     0.253           92      84
       0.6      5          O        774.9     0.553           92      92
                           A        783.4     0.561           93      94
               10          O        755.8     0.539           90      90
                           A        765.4     0.549           91      92

 * O = OLS, A = AWLS


Table 21. Type I error rates for approximate F-tests of the joint hypothesis
H0: θg = θg′ at the .05 level of significance in model 1 of the incomplete
data study.

                             WALD                    LIKELIHOOD RATIO
 n     ρ    % missing   W1      W2      W3        L1      L2      L3

 10    0.3      5     0.064   0.028*  0.060    0.072*  0.028*  0.070
               10     0.068   0.028   0.058    0.084*  0.041   0.072
       0.6      5     0.082*  0.048   0.070*   0.084*  0.048   0.094*
               10     0.098*  0.052   0.086*   0.100*  0.066   0.100*
 20    0.3      5     0.066   0.030*  0.084*   0.072*  0.030*  0.076*
               10     0.080*  0.022*  0.064    0.084*  0.030*  0.070*
       0.6      5     0.062   0.024*  0.034    0.062   0.024*  0.054
               10     0.092*  0.048   0.070*   0.094*  0.070*  0.092*

 * denotes a type I error rate that is more than 2 standard errors from the nominal rate of .05


Table 22. Percent coverage for approximate 95% (Wald-based) confidence intervals
for reduced model parameter estimates in model 1 of the incomplete data
study.

 n     ρ    % missing     θ1      θ2

 10    0.3      5        92.0    91.8
               10        93.8    93.8
       0.6      5        94.2    91.2
               10        93.2    90.6
 20    0.3      5        94.0    92.4
               10        93.0    94.8
       0.6      5        92.2    94.0
               10        92.4    92.8


Table 23. Observed F at which Dmax occurs and corresponding p-value for Dmax
from Kolmogorov-Smirnov goodness-of-fit test for approximate F statistics
used to test H0: θg = θg′ with constrained covariance and AWLS
estimation in the incomplete data study.

                              W1                        L1
 n     ρ    % missing   F     Dmax     p          F     Dmax     p

 10    0.3      5     1.35   0.058   0.071      1.34   0.068   0.020
               10     0.86   0.062   0.043      0.85   0.067   0.022
       0.6      5     1.13   0.076   0.006      1.01   0.078   0.005
               10     1.50   0.066   0.027      1.50   0.068   0.019
 20    0.3      5     1.25   0.054   0.113      1.27   0.057   0.079
               10     0.20   0.040   0.401      1.80   0.042   0.348
       0.6      5     0.47   0.020   0.991      0.49   0.021   0.983
               10     2.01   0.045   0.271      2.00   0.044   0.285

 * denotes rejection of the goodness-of-fit hypothesis at the Bonferroni corrected
   error rate α = .05/18 ≈ .003


Table 24. Average correction factors for Wald and likelihood ratio based tests of
H0: θg = θg′ in model 1 using constrained covariance estimation and
AWLS in the incomplete data study.

 n     ρ    % missing   factor¹   mean    std. dev.    min     max

 10    0.3      5       fW2       0.76      0.13       0.56    1.15
                        fW3       0.88      0.11       0.75    1.61
                        fL2       0.72      0.14       0.55    1.51
                        fL3       0.83      0.10       0.75    1.51
               10       fW2       0.80      0.12       0.63    1.49
                        fW3       0.89      0.11       0.77    2.00
                        fL2       0.76      0.13       0.62    1.34
                        fL3       0.84      0.09       0.76    1.34
       0.6      5       fW2       0.73      0.13       0.58    1.47
                        fW3       1.01      0.24       0.76    2.21
                        fL2       0.68      0.09       0.58    1.39
                        fL3       0.94      0.19       0.75    2.09
               10       fW2       0.79      0.13       0.60    1.37
                        fW3       1.01      0.20       0.77    1.81
                        fL2       0.73      0.09       0.60    1.14
                        fL3       0.95      0.15       0.76    1.57
 20    0.3      5       fW2       0.72      0.06       0.62    1.02
                        fW3       0.81      0.05       0.75    1.12
                        fL2       0.69      0.06       0.62    1.03
                        fL3       0.78      0.03       0.75    1.03
               10       fW2       0.76      0.07       0.66    1.11
                        fW3       0.83      0.06       0.76    1.27
                        fL2       0.73      0.06       0.66    1.12
                        fL3       0.80      0.04       0.76    1.12
       0.6      5       fW2       0.76      0.09       0.63    1.30
                        fW3       1.00      0.16       0.76    1.76
                        fL2       0.73      0.07       0.63    1.04
                        fL3       0.97      0.14       0.75    1.53
               10       fW2       0.82      0.10       0.66    1.25
                        fW3       1.01      0.15       0.76    1.55
                        fL2       0.79      0.09       0.66    1.19
                        fL3       0.97      0.13       0.76    1.52

 ¹ fW2 = cu/cWu,  fW3 = cc′/cWu,  fL2 = cu/cLu,  fL3 = cc′/cLu


Table 25. Average degrees of freedom for Wald and likelihood ratio based tests of
H0: θg = θg′ in model 1 using constrained covariance estimation and
AWLS in the incomplete data study.

 n     ρ    % missing   df      mean     std. dev.    min      max

 10    0.3      5       νWu     1.70       0.19       1.21     2.00
                        νLu     1.70       0.19       1.21     2.00
                        νu     32.79       4.99      17.12    39.00
                        νc′    44.14       9.81      18.24    57.00
               10       νWu     1.73       0.19       1.28     2.00
                        νLu     1.72       0.19       1.27     2.00
                        νu     33.56       5.49      17.20    42.00
                        νc′    42.59       9.09      18.61    54.00
       0.6      5       νWu     1.39       0.17       1.12     2.00
                        νLu     1.39       0.17       1.12     2.00
                        νu     23.37       5.97      12.52    38.99
                        νc′    27.56       9.69      12.52    56.96
               10       νWu     1.40       0.18       1.13     2.00
                        νLu     1.40       0.18       1.13     2.00
                        νu     22.99       6.35      13.17    41.94
                        νc′    26.43       9.22      13.20    53.88
 20    0.3      5       νWu     1.64       0.15       1.31     2.00
                        νLu     1.64       0.15       1.31     2.00
                        νu     65.88       8.33      41.57    81.63
                        νc′    84.57      14.78      46.19   114.00
               10       νWu     1.67       0.16       1.26     2.00
                        νLu     1.67       0.16       1.26     2.00
                        νu     67.41       9.39      36.66    85.83
                        νc′    81.86      14.44      39.25   107.99
       0.6      5       νWu     1.31       0.10       1.15     1.81
                        νLu     1.31       0.10       1.15     1.81
                        νu     42.04       8.22      27.27    74.37
                        νc′    47.25      11.73      27.73   100.04
               10       νWu     1.33       0.11       1.14     1.82
                        νLu     1.33       0.11       1.13     1.82
                        νu     42.12       8.57      26.25    79.08
                        νc′    46.27      11.10      26.99    96.51


Table 26. Parameter summary for model (6.1) fitted to the TSH response data using
OLS.

                          asymptotic    asymptotic
 parameter   estimate     std. err.     correlation matrix

 θ1            3.46        80.30          1
 θ2           46.22         3.71         -0.289   1
 θ3            0.0318       0.0021        0.363   0.502   1

Table 27. Parameter summary for the full model (6.2) under Ha1 fitted to the TSH
response data using AWLS for incomplete data.

                          asymptotic    asymptotic
 parameter   estimate     std. err.     95% confidence interval

 θ1f           3.26         0.43        (  2.36,    4.16  )
 θ2f          53.89         7.03        ( 39.33,   68.45  )
 θ3f           0.0325       0.0013      (  0.0298,  0.0353)

 θ1m           3.62         0.40        (  2.78,    4.45  )
 θ2m          41.14         4.55        ( 31.71,   50.56  )
 θ3m           0.0310       0.0012      (  0.0286,  0.0334)


Table 28. Test statistics, degrees of freedom and p-values for testing H1.

 statistic     F      df num    df den       p

 W1          3.39      3.00      92.00     0.021*
 W2          0.27      1.14      22.13     0.639
 W3          0.40      1.14      23.45     0.563
 L1          3.71      3.00      92.00     0.014*
 L2          0.29      1.14      22.13     0.623
 L3          0.43      1.14      23.45     0.545

 * denotes significance at the 0.025 level of significance

Table 29. Parameter summary for the full model (6.2) under Ha2 fitted to the TSH
response data using AWLS for incomplete data.

                          asymptotic    asymptotic
 parameter   estimate     std. err.     95% confidence interval

 θ1a           2.50         0.24        (  1.98,    3.01  )
 θ2a          30.21         2.89        ( 24.14,   36.28  )
 θ3a           0.0307       0.0015      (  0.0275,  0.0339)

 θ1d           3.92         0.35        (  3.19,    4.65  )
 θ2d          45.78         4.10        ( 37.17,   54.39  )
 θ3d           0.0347       0.0017      (  0.0313,  0.0382)

 θ1n           4.03         0.36        (  3.27,    4.78  )
 θ2n          65.49         5.85        ( 53.21,   77.77  )
 θ3n           0.0290       0.0013      (  0.0263,  0.0316)


Table 30. Test statistics, degrees of freedom and p-values for testing H2.

 statistic     F      df num    df den       p

 W1          8.49      6.00      92.00    <0.001*
 W2          7.62      2.89      35.82    <0.001*
 W3         12.51      2.89      40.76    <0.001*
 L1          9.08      6.00      92.00    <0.001*
 L2         17.20      1.43      35.82    <0.001*
 L3         28.24      1.43      40.76    <0.001*

 * denotes significance at the 0.025 level of significance


Table 31. Asymptotic correlation among parameter estimates from the full model
(6.3) under Ha2 using AWLS for incomplete data.

 parameter     θ1g      θ2g      θ3g

 θ1a           1
 θ2a           0.448    1
 θ3a           0.242    0.302    1

 θ1d           1
 θ2d           0.461    1
 θ3d           0.298    0.311    1

 θ1n           1
 θ2n           0.482    1
 θ3n           0.147    0.457    1


Figure 1a. Model 1: mean response plotted against time.

Figure 1b. Model 2: mean response plotted against time.


Figure 2. Complete data simulation study design.

STUDY 1: complete data

Model 1 (moderate parameter effects) and Model 2 (low parameter effects),
each fit both modeling CS (constrained covariance estimation) and ignoring
CS (unconstrained covariance estimation):

  AWLS:    constrained     n = 10, 20, 40;   ρ = 0, .3, .6
           unconstrained   n = 20, 40;       ρ = 0, .3, .6

  ITAWLS (= ML under the constrained model):
           constrained     n = 10, 20;       ρ = 0, .3, .6
           unconstrained   n = 20;           ρ = 0, .3, .6


Figure 3. F-plot analog of the W1 statistic with n = 20, ρ = 0.6 and complete data
using AWLS estimation in model 1. (Observed value of F plotted against the
hypothesized value of F.)


Figure 4a. F-plot analog of the W2 statistic with n = 20, ρ = 0.6 and complete data
using AWLS estimation in model 1. (Observed value of F plotted against the
hypothesized value of F.)


Figure 4b. F-plot analog of the W3 statistic with n = 20, ρ = 0.6 and complete data
using AWLS estimation in model 1. (Observed value of F plotted against the
hypothesized value of F.)


Figure 5. F-plot of the W1 statistic with n = 20, ρ = 0.6 and 5% missing data using
AWLS estimation in model 1. (Observed value of F plotted against the
hypothesized value of F.)


Figure 6a. F-plot analog of the W2 statistic with n = 20, ρ = 0.6 and 5% missing data
using AWLS estimation in model 1. (Observed value of F plotted against the
hypothesized value of F.)


Figure 6b. F-plot analog of the W3 statistic with n = 20, ρ = 0.6 and 5% missing data
using AWLS estimation in model 1. (Observed value of F plotted against the
hypothesized value of F.)


Figure 7. TSH response curves for the alcoholic (A), depressed (D) and normal (N)
groups, plotted against time (in minutes).