
METHODOLOGICAL UNIFICATION*

JIM GRANATO

Center for Public Policy and Department of Political Science, University of Houston, USA

[email protected]

MELODY LO

Department of Economics, College of Business, University of Texas at San Antonio, USA [email protected]

SUNNY M. C. WONG

Department of Economics, University of San Francisco, USA [email protected]

* Versions of this paper have been presented at the 2004 EITM Summer Program (Duke University), the annual meeting of the Southern Political Science Association (January 6-8, 2005 in New Orleans, Louisiana), the annual meeting of the Canadian Political Science Association (June 2-4, 2005 in London, Ontario), Ohio State University, Penn State University, the University of Texas at Austin, the 2006 EITM Summer Program (University of Michigan), and the 2007 Essex Summer School in Social Science Data Analysis and Collection Program (University of Essex, Colchester, UK). We thank Jim Alt, Mary Bange, Harold Clarke, Michelle Costanzo, John Freeman, Mark Jones, Skip Lupia, David Primo, and Frank Scioli for their comments and assistance.


Abstract: An important disconnect exists between the current use of formal analysis and applied statistical techniques. In general, a lack of linkage between formal analysis and applied statistical procedures leads to a failure to identify invariant parameter estimates. Consequently, current methodological practice can produce statistically significant parameters of ambiguous origin that fail to assist in falsifying theories and hypotheses. We demonstrate that methodological unification, because it leverages the mutually reinforcing properties of formal and applied statistical analysis, contributes to identified and invariant parameter estimates. These estimates are suitable for hypothesis testing and enabling scientific cumulation. A framework for methodological unification is proposed. This framework includes: 1) connecting behavioral and applied statistical concepts; 2) developing and relating formal and applied statistical analogues of these concepts; and 3) linking the analogues. We demonstrate the various elements of this framework with examples from development, voting behavior, and the political economy of macroeconomic policy. We contrast this unified framework with common applied statistical procedures and demonstrate the added value methodological unification possesses for achieving scientific cumulation.



"Empirical observation, in the absence of a theoretical base, is at best descriptive. It tells one what happened, but not why it has the pattern one perceives. Theoretical analysis, in the absence of empirical testing, has a framework more noteworthy for its logical or mathematical elegance than for its utility in generating insights into the real world. The first exercise has been described as 'data dredging,' the second as building 'elegant models of irrelevant universes.' My purpose is to try to understand what I believe to be a problem of major importance. This understanding cannot be achieved merely by observation, nor can it be attained by the manipulation of abstract symbols. Real insight can be gained only by their combination." (Aldrich 1980, 4).

"...there is still far too much data analysis without formal theory – and far too much formal theory without data analysis." (Bartels and Brady 1993, 148).

1. Introduction

An important disconnect exists between the current use of formal analysis and applied statistical techniques.1 This discontinuity contributes to an overemphasis on attaining statistical significance through the manipulation of standard errors, exercises in data mining, and overall inattention to relating theoretical specifications to applied statistical tests (Achen 2002, 2005). It can also be demonstrated that decoupling formal analysis from applied statistical procedures results in a failure to identify invariant parameter estimates or to contribute to the falsification of theories and hypotheses.2

1 We center the discussion on formal analysis and applied statistical analysis. Formal analysis refers to deductive modeling that can include a theorem and proof presentation or computational modeling that requires the assistance of simulation. Applied statistical analysis involves data analysis using statistical tools. We use the terms analysis and modeling interchangeably.

In addition, the linkage of formal and applied statistical analysis — the form of methodological unification we develop in this paper — possesses important attributes that aid in falsification and, ultimately, scientific cumulation. Formal models, for example, force clarity about assumptions and concepts; they ensure logical consistency, and they describe the underlying mechanisms, typically behavioral, that lead to outcomes (Powell 1999, 23-39). The other component part of methodological unification — applied statistical models and tests — provides generalizations and rules out alternative explanations through multivariate analysis. Applied statistics assist in distinguishing between causes and effects, allow for reciprocal causation, and also help assess the relative size of the effects. When these respective approaches are unified a researcher has the opportunity to use their scientific attributes in mutually reinforcing ways to aid in developing transparent relations between theory and test.

2 The intuition behind the terms identification and invariance is as follows. For applied statistical models, identification relates to model parameters (e.g., $\hat{\beta}$) and whether they indicate the magnitude of the effect for that particular independent variable. Or, in more technical terms, "A parameter is identifiable if different values for the parameter produce different distributions for some observable aspect of the data" (Brady and Collier 2004, 290). In applied statistical practice, invariance refers to the (non)constancy of the parameters of interest. More generally, "the distinctive features of causal models is that each variable is determined by a set of other variables through a relationship (called 'mechanism') that remains invariant when those other variables are subjected to external influences. Only by virtue of its invariance do causal models allow us to predict the effect of changes and interventions..." (Pearl 2000, 63). In short, the terms invariant or invariance, when applied to applied statistical models, center on whether a relation (signified by a parameter) remains constant in the face of a treatment (or policy) shift. The argument made here is that identification and invariance are not guarantees that a model is "correct"; rather, the point is that the analytical transparency produced by models that meet these conditions has falsifiable properties that aid in cumulation.



Current practice, particularly applied statistical practice, is based in part on a tradition that borrows and applies statistical tools that improve upon the use of older techniques. But, as this process of replacing old with new technique took hold, the search for causal mechanisms was largely ignored. In more technical language, the creation of methodologies that isolated structural parameters, for the identification of these parameters, became secondary to the use of hand-me-down applied statistical techniques that end up doing things such as manipulating standard errors and their associated t-statistics.3 In this methodological environment we find ourselves with "results" that include statistically significant parameters of ambiguous origin — and little in the way of scientific cumulation.

The scientific consequences of this methodological status quo are far reaching. Because the usual methodological approach cannot produce identified and invariant parameter estimates, policy recommendations, for example, lack a scientific basis since there is no useful information regarding the mechanism(s) on policy and treatment effectiveness. Among other things, current practices do not typically model an agent's behavior and responses to alternative policies as well as other social, political, and economic factors. As a result, there is no way of knowing how citizens' behavior may affect the estimate of structural parameters when the behavioral response of citizens can influence the success or failure of a policy or treatment. The reason, as Lucas (1976) has argued, is that in-sample estimation provides little guidance to policymakers in predicting the effects of policy changes because the parameters of current econometric models are unlikely to remain stable under alternative policies.4

3 We adopt Heckman's (2000, 59) terminology below:

Structural causal effects are defined as the direct effects of the variables in the behavioral equations... When these equations are linear, the coefficients on the causal variables are called structural parameters (emphasis added), and they fully characterize the structural effects.

Heckman also notes there is some disagreement about what constitutes a structural parameter. The disagreement centers on whether one uses a linear model, a non-linear model or, more recently, a fully parameterized model. In the latter case, structural parameters can also be called "deep" to distinguish between "the derivatives of a behavioral relationship used to define causal effects and the parameters that generate the behavioral relationship" (page 60).

In this paper we argue for methodological unification between formal and empirical analysis to help ensure that identified and invariant parameter estimates are the consequence.5 We propose a unified approach that combines formal and empirical analysis — which is consistent with what has been termed empirical implications of theoretical models (EITM).6

Methodological unification is not new. In its earlier versions, scholars from organizations such as the Cowles Commission unified formal and empirical analysis to identify causal mechanisms through the use of structural parameters. What this version of EITM does is build on the Cowles Commission approach by placing an emphasis on developing behavioral and applied statistical analogues and linking these analogues.7 We argue that this version of methodological unification would further assist in scientific cumulation because it emphasizes operationalizing (modeling) behavioral mechanisms to foster a more transparent linkage between theory (prediction) and test. In sum, the methodological unification proposed here also places a focus on parameter identification and parameter invariance, but it leverages key aspects of human behavior in the modeling and testing process.

4 The Lucas critique is based on the following intuition: "...given that the structure of an econometric model consists of optimal decision rules ... and that optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models" (Lucas 1976, 41).

5 There is a large literature devoted to identification problems (see, for example, Fisher 1966; Manski 1995). Some researchers treat the issue of simultaneity and identification as one and the same. We consider identification in a broader sense that includes simultaneity, but not limited to simultaneity.

6 The linking of formal and empirical analysis is part of the Empirical Implications of Theoretical Models (EITM) initiative supported by the National Science Foundation. For more information see: http://www.nsf.gov/sbe/ses/polisci/reports/eitmreport.jsp and http://www.nsf.gov/pubs/2003/nsf03552/nsf03552.pdf.

7 An analogue can be thought of as a device in which a concept is represented by continuously variable — and measurable — quantities. Analogues serve as analytical devices for behavior and, therefore, provide for changes in behavior as well as a more transparent interpretation of the formal and applied statistical model. Analogues also emphasize measurement validity — the relation between a measure and the (unmeasured) concept it is meant to represent.



This paper is organized as follows. In the next section, we first outline how methodological unification contributes to a modeling dialogue that can result in scientific cumulation. We also discuss how some current research practices hinder scientific cumulation because they create separation between theory and test — and end up being nonfalsifiable. Section 3 contains a macro political economy example to demonstrate the weaknesses associated with some current methodological practices. Specifically, the example relates these practices to the estimated parameters (inference) and the predictions of variables of interest.8 Section 4 introduces an original and foundational approach to methodological unification — the Cowles Commission. We then introduce an early example of methodological unification, Robert Solow's model of economic development (Solow 1956). A strength of Solow's model, and methodological unification, is its ability to allow for the successive extensions of his initial model and ultimately support subsequent efforts to achieve scientific cumulation. In Section 5, we introduce an EITM framework. Section 5 provides a means of accomplishing falsification by treating social, behavioral, political, and economic concepts (analogues) and applied statistical concepts (analogues) as linked entities. Using examples from voting behavior and macroeconomic policy and outcomes, we show that the properties of the social, behavioral, political, and economic analogue(s) and the properties of the applied statistical analogue(s) can be mutually reinforcing. For each example the unifying properties also point to extensions of the respective models and tests. We discuss extensions or potential extensions that build on the original work. Section 6 summarizes these results and provides some concluding comments on how methodological unification, in general, and EITM, in particular, can change evaluation procedures as well as how it extends to other research areas.

2. Modeling Dialogues and Scientific Cumulation

An enormous amount of research has been devoted to what constitutes scientific cumulation. Methodological unification can assist in scientific cumulation because it creates a dialogue between theory and test. In more recent versions of methodological unification (e.g., EITM) the dialogue between model and test now takes on the form of exploiting the mutually reinforcing properties of formal and empirical analogues.

8 We will use the word inference to refer to a parameter in a regression or likelihood (b). We use the word prediction to refer to a model's forecast of a dependent variable (y). For a technical treatment of these two concepts see Engle, Hendry, and Richard (1983).

2.1. Non-Cumulative Research Practices

Unfortunately, the methodological status quo is one in which so-called technical work is only loosely connected to the fundamental scientific considerations of identification, invariance, and falsification. Current quantitative research practice places an overwhelming focus on whether significant results can be found via various applied statistical techniques. The reliance on statistically significant results means nothing when the researcher makes little attempt to identify the precise origin of the parameters in question. Absent this identification effort, it is not evident where the model is wrong.

We can also evaluate current practice in relation to how it contributes to a modeling dialogue. What we see is that the processes of prediction and validation are never directly applied. Instead, the empirical test(s) remains in a loop or dialogue with itself. An iterative process of data mining, overparameterization, and the use of statistical patches (Omega matrices) replaces prediction and validation. This inevitably follows because current practice does not attempt to identify model parameters (b's).9

If we narrow research practice to the use of applied statistics, then three common practices (data mining, overparameterization, and the use of statistical weighting and patching (e.g., "Omega Matrices")) impair scientific cumulation (Granato and Scioli 2004). We further refine this discussion by showing how these practices affect a widely used test indicator, the t-statistic, which is defined as the ratio of an estimated coefficient (b) to its standard error (s.e.(b)), that is, $b/\mathrm{s.e.}(b)$. We have argued that falsifiability is imperative to scientific cumulation and that falsifiability requires that the mechanisms relating cause and effect are identified. In the case of a t-statistic, this means linking the formal model to the test and focusing on the identification of the parameter b.

9 While this example uses linear procedures, non-linear procedures are subject to many of the same weaknesses. Achen (2002), for example, raises questions about the forms of statistical patching in various likelihood procedures and how these practices obscure identification.

Let us now briefly describe these three common practices and whether they have any relation to identifying b:

(1) Data Mining: Data mining involves putting data into a statistical package with minimal theory. Regressions (likelihoods) are then estimated until either statistically significant coefficients or coefficients the researcher uses in their "theory" are found. This step-wise search is not random and has little relation to identifying causal mechanisms (see Lovell 1983; Denton 1985).

(2) Overparameterization: This practice, related to data mining, is where a researcher includes, absent any systematic specification search, a plethora of independent variables into a statistical package and gets significant t-statistics somewhere. Efforts to identify an underlying causal mechanism are also ineffectual (Achen 2005).10

(3) Omega Matrices: Data mining and overparameterized approaches are virtually guaranteed to break down statistically. The question is what to do when these failures occur. There are elaborate ways of using (error) weighting techniques to correct model misspecifications or to use other statistical patches that influence s.e.(b). Many intermediate econometric textbooks contain chapters containing the Greek symbol Omega (Ω) (e.g., Johnston and DiNardo 1997, 162-164). This symbol is representative of the procedure whereby a researcher weights the data that are arrayed (in matrix form) so that the statistical errors, ultimately the standard error noted above, is altered and the t-statistic is manipulated.11

10 In the absence of formal models, purely applied statistical procedures that use systematic rules in a specification search can be cumulative. See Clarke et al. (2004, 79-129) for an example of building composite empirical models that follow a systematic specification search. Composite models can be crucial in the process of methodological unification.

11 By way of example (using OLS), consider the following model in scalar form (we drop the constant for simplicity):

$$y_t = \beta x_t + \eta_t,$$

and assume there is first-order serial correlation:

$$\eta_t = \rho \eta_{t-1} + \nu_t,$$

where $\nu_t$ is a white noise process. With this estimate of $\rho$ a researcher can "remove" the serial correlation:

$$y_t - \rho y_{t-1} = \beta\,(x_t - \rho x_{t-1}) + \nu_t.$$

Alternatively, in matrix form, we can express this transformation as:

$$\beta_{GLS} = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y,$$

as opposed to the OLS estimator:

$$\beta_{OLS} = (X'X)^{-1} X'Y.$$

The difference is the $\Omega$ matrix, which can be represented as:

$$\Omega = \begin{bmatrix} 1 & \rho & \rho^2 & \cdots \\ \rho & 1 & \cdots & \cdots \\ \rho^2 & \cdots & 1 & \cdots \\ \cdots & \cdots & \cdots & 1 \end{bmatrix},$$

and with the "$\rho$" correction:

$$\Omega^{-1} = \frac{1}{1-\rho^2}\begin{bmatrix} 1 & -\rho & \cdots & \cdots \\ -\rho & 1+\rho^2 & \cdots & \cdots \\ \cdots & \cdots & 1+\rho^2 & \cdots \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix}.$$
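To make the footnote concrete, here is a minimal numerical sketch (ours, not the authors'; it uses simulated data, only numpy, and treats ρ as known) that fits the same regression by OLS and by the Ω-weighted (GLS) formula above. The point to notice is that the weighting changes the reported standard error, and therefore the t-statistic, while saying nothing about where β comes from.

```python
# Illustrative sketch (not from the paper): how the AR(1) "rho" weighting in
# footnote 11 changes the reported standard error and t-statistic.
import numpy as np

rng = np.random.default_rng(0)
T, beta, rho = 200, 0.5, 0.7

# Data generating process: y_t = beta * x_t + eta_t, with AR(1) errors
x = rng.normal(size=T)
nu = rng.normal(size=T)
eta = np.zeros(T)
for t in range(1, T):
    eta[t] = rho * eta[t - 1] + nu[t]
y = beta * x + eta
X = x.reshape(-1, 1)
k = X.shape[1]

# OLS with the textbook (i.i.d.) standard error
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e_ols = y - X @ b_ols
s2_ols = e_ols @ e_ols / (T - k)
se_ols = np.sqrt(s2_ols * np.linalg.inv(X.T @ X)[0, 0])

# GLS with Omega_ij = rho^|i-j| (rho treated as known for the illustration)
i, j = np.indices((T, T))
Oinv = np.linalg.inv(rho ** np.abs(i - j))
b_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
e_gls = y - X @ b_gls
s2_gls = e_gls @ Oinv @ e_gls / (T - k)
se_gls = np.sqrt(s2_gls * np.linalg.inv(X.T @ Oinv @ X)[0, 0])

print(f"OLS: b = {b_ols[0]:.3f}, s.e. = {se_ols:.3f}, t = {b_ols[0] / se_ols:.2f}")
print(f"GLS: b = {b_gls[0]:.3f}, s.e. = {se_gls:.3f}, t = {b_gls[0] / se_gls:.2f}")
```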



In principle, there is nothing wrong with knowing the Omega matrix for a particular statistical model. The standard error(s) produced by an Omega matrix can serve as a check on whether inferences have been confounded to such an extent that a Type I or Type II error has been committed. Far too often, however, researchers treat the Omega weights (or alternative statistical patches) as the result of a true model. Given that the parameter "ρ" is based on estimating the error of the regression, researchers that decide to use this weight are taking the mistakes of their model to "fix" their standard errors. It is akin to painting over a crack on a bridge (Hendry 1995; Mizon 1995). This attitude hampers scientific progress because it uses a model's mistakes to obscure flaws.

Another issue is that, when using a ρ-restriction, researchers are imposing a restriction on variables in the estimation system. In the time series case, researchers assume all variables in the system have the same level of persistence. In the time series-cross-section case, the restriction is even more severe as researchers assume that all the cases and all the independent variables have the same level of persistence.

A final scientific problem with ρ-restrictions is that the algebraic manipulations involved with their imposition can fundamentally alter the original model so that there is no longer any relation between the theory and the test. A theory that assumes, for example, a certain level of persistence in behavior (e.g., party identification) can be fundamentally altered by tools used to "filter out" persistence.

3. Non-Cumulative Research Practice: An Example

We demonstrate the problem with current practice in the following macro political economy illustration. In this example we use a structural model to show a relation between parameters.12 This is contrary to the current practice, which typically ignores structural equation systems (and reduced forms), let alone identification conditions. We then show via simulation how the weaknesses of current practices lead to serious scientific consequences such as incorrect inference and misleading policy advice.

We examine the relation between a particular macroeconomic policy and outcome: countercyclical monetary policy and inflation. To keep things simple in this example, we have left out various political and social influences on the policy rule. While this research area holds great potential for explaining why policymakers behave in certain ways, it does not affect the methodological point of this example.13

The model incorporates a lagged expectations augmented Phillips curve, an IS curve (aggregate demand), and an interest rate policy rule (Taylor 1993). Each of these structural equations is a behavioral relation and they can be derived from microfoundations. In this example, however, we will also abstract out the microfoundations since they are not central to our demonstration. The model has the following structure:

$$y_t = y_t^n + \gamma\,(\pi_t - E_{t-1}\pi_t) + u_{1t}, \quad \gamma > 0, \qquad (1)$$

$$y_t = \lambda_1 + \lambda_2\,(i_t - E_t\pi_{t+1}) + \lambda_3 E_t y_{t+1} + u_{2t}, \quad \lambda_1 > 0,\ \lambda_2 < 0,\ \lambda_3 > 0, \qquad (2)$$

$$i_t = \pi_t + \alpha_y\,(y_t - y_t^n) + \alpha_\pi\,(\pi_t - \pi_t^*) + r_t^*, \qquad (3)$$

where $y_t$ is the log of output, $y_t^n$ is the natural rate of output, which follows a linear trend $\alpha + \beta t$, and $\pi_t$ is price inflation. $E_t$ is a conditional expectations operator such that $E_{t-1}\pi_t$ is expected inflation for period $t$ given the information available up to period $t-1$, $E_t\pi_{t+1}$ is expected inflation one period ahead, and $E_t y_{t+1}$ is expected output one period ahead.14 The variable $i_t$ is a nominal interest rate that the policymaker can influence, $\pi_t^*$ is the inflation target, $r_t^*$ is the real interest rate, $u_{1t}$ is an iid shock (demand shock), and $u_{2t}$ is an iid shock (supply shock).

12 Classic work in macro political economy does not rely on structural models (e.g., Hibbs 1977), but see Chappell and Keech (1983), Alesina and Rosenthal (1995), and Freeman and Houser (1998) as exceptions.

13 See Chappell and Keech (1983) for an application of a structural model that incorporates the length of a presidential term.

14 See Sargent (1979) for a detailed discussion of the rational expectations operator.

At issue is the relation between policy and inflation. The model posits that aggregate supply and demand depend on the expectations over the course of policy and this policy follows some stable probability. Furthermore, agents understand the policy rule and augment their behavior to include the expected gains or losses implied by the policy rule and policymaker behavior (Lucas 1976).

The coefficients $\alpha_y$ and $\alpha_\pi$ represent the aggressiveness that policymakers possess in stopping inflationary pressures. Positive values of $\alpha_y$ and $\alpha_\pi$ indicate an aggressive inflation-stabilizing policy tack. These positive parameter values reflect policymaker willingness to raise nominal interest rates in response to excess demand (inflation), whether it is when output is above its natural rate ($y_t > y_t^n$) or when inflation exceeds its prespecified target ($\pi_t > \pi_t^*$). The coefficients typically range between [0, 2] (Clarida, Gali, and Gertler 2000).15

With the relation between countercyclical monetary policy and inflation stated, we now solve for inflation using the method of undetermined coefficients. The minimum state variable (MSV) solution can be presented as:16

$$\pi_t = \left(\frac{J_0}{1-J_1-J_2} + \frac{J_2 J_3 \beta}{(1-J_1-J_2)^2}\right) + \left(\frac{J_3}{1-J_1-J_2}\right) y_t^n + X_t,$$

15 Clarida, Gali, and Gertler (2000) refer to (3) as a "backward looking" rule. They estimate $\alpha_y$ and $\alpha_\pi$ in (3) for the United States (1960:1-1996:4). They find that $\alpha_y$ ranges between 0.0 and 0.39 and $\alpha_\pi$ ranges between 0.86 and 2.55. They conclude that the United States monetary authority moved to nearly an exclusive focus on stabilizing inflation.

16 The MSV solution is the simplest parameterization one can choose using the method of undetermined coefficients (see McCallum 1983).



where:

$$J_0 = (\lambda_1 - \lambda_2\alpha_\pi\pi_t^* + \lambda_2 r_t^* + \lambda_3\beta)\,\Theta^{-1}, \quad J_1 = (\gamma - \lambda_2\alpha_y\gamma)\,\Theta^{-1}, \quad J_2 = \lambda_2\,\Theta^{-1}, \quad J_3 = (\lambda_3 - 1)\,\Theta^{-1},$$

$$X_t = \left[(\lambda_2\alpha_y - 1)u_{1t} + u_{2t}\right]\Theta^{-1}, \quad \text{and} \quad \Theta = \gamma\,(1-\lambda_2\alpha_y) - \lambda_2(1+\alpha_\pi).$$

In more compact form the solution is:

$$\pi_t = \Xi + \Psi\, y_t^n + X_t, \qquad (4)$$

where:

$$\Xi = \frac{J_0}{1-J_1-J_2} + \frac{J_2 J_3\beta}{(1-J_1-J_2)^2}, \qquad \Psi = \frac{J_3}{1-J_1-J_2}.$$

Equation (4) has the attribute that it can relate the parameters ($\alpha_y$, $\alpha_\pi$) with $\pi_t$. The problem, however, is that — absent added information — it is impossible to explicitly relate policy and treatment changes to outcomes.

3.1. Current Practice and Scientific Cumulation

There are important scientific consequences when we fail to link formal and applied statistical analysis for purposes of deriving structural parameters. First, consider the utility of a reduced form — which is usually not derived in the current methodological environment. A reduced form such as (4) possesses weaknesses when it comes to making inferences. The reduced form parameters (the $J_i$'s, $\Xi$, $\Psi$) cannot strictly identify the parameters that relate cause and effect (i.e., $\alpha_y$, $\alpha_\pi$). Where reduced forms have power is not in making inferences but in making predictions: reduced form estimates provide assistance in ex-ante forecasts.

How useful then are reduced forms in making predictions based on specific values of the independent variables — the so-called conditional forecast? With a reduced form a researcher cannot make conditional forecasts. The reason is that the system (described in equations (1)-(3)) depends on the behavior of the agents and whether responses from agents are invariant. If agents' behavior is not invariant, the parameters (in equations (1)-(3)) are not invariant. Without a mechanism detailing how structural parameters (and behavior) remain (in)variant as independent variables change, an applied statistical model fails in assessing alternative independent variable shifts.

While reduced forms lack the inferential power necessary to make conditional forecasts, they still have a linkage to theoretical foundations. Things can be far worse, however, if we use a single empirical equation similar to equation (4), with no formal-theoretical linkage, and rely on current applied statistical practices. The parameter(s) "identified" by these current practices lack any real meaning or use.

To see the flaws associated with current practice more straightforwardly, consider equation (5), which we treat as a single equation estimate of $\pi_t$:

$$\pi_t = d_0 + d_1 y_t^n + v_{1t}, \qquad (5)$$

where $v_{1t}$ is the error term. If we contrast the similarly situated parameters in equations (4) and (5), we note that $d_0$ would serve the purpose of $\Xi$ and $d_1$ serves the purpose of $\Psi$. But already we can see that equation (5) fails to make any structural statement about the relation between the policy rule in (3) and inflation in (5). There is no explicit relation between $d_0$ or $d_1$ and $\alpha_y$ or $\alpha_\pi$.

Assume the researcher chooses to ignore any process of methodological unification. Instead the analyst tries to estimate a shift in policy regime and reestimates (5) with new variables. One typical way is to add dummy variables signifying policy shifts due to factors such as partisanship (e.g., Hibbs 1977). We keep the empirical model small and rewrite (5) with just one added variable, a policy shift dummy variable ($\mathrm{SHIFT}_t$) signifying a change in the intercept or the level of inflation:

$$\pi_t = d_0 + d_1 y_t^n + d_2\,\mathrm{SHIFT}_t + v_{2t}, \qquad (6)$$

where $v_{2t}$ is the error term.



Does this really help with identification and falsification? Notice that the parameter $d_2$ in (6) does not actually reflect policy shifts relating policy parameters to inflation (such as those in equation (3)). This shortcoming is severe. It means we do not know what parameters in the system can lead to counterintuitive results and, ultimately, incorrect policy or treatment recommendations.

Consider the following simulation of the system expressed in (1), (2), and (3). In this simulation the structural explanation, relating parameters in the system, shows that aggregate demand, as represented by equation (2), must respond in a certain way to changes in real interest rates and the expected output level. This relation is represented by the parameter $\lambda_3$ in equation (2). What happens to the relation between inflation-stabilizing policy and inflation if $\lambda_3$ is not invariant and contains alternative responses?

(Figure 1 About Here)

In Figure 1, Panels 1a, 1b, and 1c report the results of a simulation of the system.17 Changes in inflation are represented in each panel, given changes to $\lambda_3$. In Panel 1a, when $\lambda_3 > 1$, an aggressive policy ($\alpha_y, \alpha_\pi = 0.5 > 0$) does not reduce inflation. On the other hand, in Panel 1b, when $0 < \lambda_3 < 1$, inflation falls. It is only in the third case, when $\lambda_3 = 1$, that we find an aggressive policy keeps inflation roughly around a 2 percent target (see Panel 1c).
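As a rough illustration of the simulation just described, the sketch below (ours, not the authors' code) plugs the parameter values reported in footnote 17 below into the reduced form (4), using the J-coefficient formulas given earlier and assuming the natural-rate trend $y^n_t = \alpha + \beta t$. It will not reproduce the exact levels in Figure 1, but it makes the comparative point transparent: with these parameter values the sign of $\Psi$, and hence whether inflation trends up, trends down, or stays flat under the same aggressive policy, is governed entirely by whether $\lambda_3$ is above, below, or equal to one.

```python
# Illustrative sketch (ours): the reduced form (4) under three values of lambda_3,
# using the J-coefficient formulas in the text and the parameter values of footnote 17.
import numpy as np

rng = np.random.default_rng(1)

alpha, beta_tr, gamma = 5.0, 0.05, 10.0       # AS block (trend y^n_t = alpha + beta*t assumed)
lam1, lam2, pi_star = 3.0, -1.0, 2.0          # IS block and inflation target
a_y, a_pi, r_star = 0.5, 0.5, 3.0             # "aggressive" policy rule
sig_u1, sig_u2, T = 1.0, 1.0, 100

def inflation_path(lam3):
    """Simulate pi_t = Xi + Psi * y^n_t + X_t from equation (4)."""
    Theta = gamma * (1.0 - lam2 * a_y) - lam2 * (1.0 + a_pi)
    J0 = (lam1 - lam2 * a_pi * pi_star + lam2 * r_star + lam3 * beta_tr) / Theta
    J1 = (gamma - lam2 * a_y * gamma) / Theta
    J2 = lam2 / Theta
    J3 = (lam3 - 1.0) / Theta
    denom = 1.0 - J1 - J2
    Xi = J0 / denom + J2 * J3 * beta_tr / denom**2
    Psi = J3 / denom
    t = np.arange(T)
    yn = alpha + beta_tr * t
    X_t = ((lam2 * a_y - 1.0) * rng.normal(0, sig_u1, T) + rng.normal(0, sig_u2, T)) / Theta
    return Xi + Psi * yn + X_t

for lam3 in (1.5, 0.5, 1.0):                  # the three cases shown in Panels 1a-1c
    pi = inflation_path(lam3)
    print(f"lambda_3 = {lam3}: inflation starts near {pi[:5].mean():.2f} "
          f"and ends near {pi[-5:].mean():.2f}")
```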

The implication of these simulations has direct significance for both reduced form estimation and current practice — with the consequences being especially direct for current practice. If a researcher were to estimate the reduced form of inflation as represented in (4) they would find this instability in the forecast and forecast error. However, they would have difficulty finding the source of the instability given the lack of information in identifying the relation(s) between the parameters in (2) and the remaining parameters (variables) in the model.

Yet, this is far better than what occurs using current practices. Specifically, depending on the value of $\lambda_3$, there are alternative outcomes that would not be illuminated using (6) and the practices associated with that equation. But now assume we estimate (6) and find a significant value for $d_2$ indicating that when policy shifts in an aggressive manner, the level of inflation falls. Does this mean that aggressive policy reduces inflation? We do not know using this research practice. Absent an explicitly posited relation, all we can say is that some parameter, which may or may not relate the independent variable to the dependent variable, is significant.

17 The simulation parameter values are the following for the AS, IS, and policy rule. AS: $\alpha = 5.0$; $\beta = 0.05$; $y^n_0 = 1.0$; $\gamma = 10$; $\sigma_{u_1} = 1.0$. IS: $\lambda_1 = 3.0$; $\lambda_2 = -1.0$; $\lambda_3 = (0.5, 1.0, 1.5)$; $\pi^* = 2.0$; $\sigma_{u_2} = 1.0$. Policy rule: $\alpha_y = 0.5$; $\alpha_\pi = 0.5$; $r^* = 3.0$.

What is more important is that a formal model would show the substantive findings are not generalizable when alternative and relevant exogenous conditions are considered. Indeed, the indeterminacy result shown in Figure 1 is not robust to the inclusion of nominal variables in the interest rate rule.18 But we would not know what we do not know under current practices that give us estimates of equations (5) and (6).

4. Methodological Unification: The Cowles Commission

For some social sciences, such as political science, the emphasis on applied statistical technique as opposed to identifying structural parameters can be traced back to a pedagogical tradition that involved an aversion to mathematical modeling (Arrow 1948). But this aversion came with a cost. Absent mathematical modeling, the discipline lacks a basic tool to identify causal mechanisms.

More importantly, academic research exists that can aid this break with current methodological practice. The Cowles Commission, for example, has established conditions in which structural parameters can be identified.19 It was the Cowles Commission that explored the differences between structural and reduced-form parameters. Conditions for identifiability were introduced to aid in this differentiation. Along with their work on structural parameters, the Cowles Commission also gave formal and empirical specificity to issues such as exogeneity and policy invariance (Aldrich 1989; Morgan 1990; Christ 1994; and Heckman 2000).20

18 Alternatively, McCallum and Nelson (1999) demonstrate that when output is not modeled as a constant, an IS curve of the form (2) can produce indeterminacies.

19 Morgan (1990) provides an extensive historical account of the contributions of the Cowles Commission. For further background on the Cowles Commission consult the following URL: http://cowles.econ.yale.edu/.

20 With the passage of time, the Cowles Commission's approach has been criticized. Ironically, the effort to satisfy mathematical conditions for parameter "identification" led to practices that undermined the promise of methodological unification. In particular, Sims (1980) pointed out the "incredible" theoretical restrictions that were imposed.



These contributions rested, in part, on a scientific vision that involved merging formal and applied statistical analysis. The basis for this linkage was the idea that random samples were governed by some latent and probabilistic law of motion (Haavelmo 1944; Morgan 1990). Further, this viewpoint meant that formal models, when related to an applied statistical model, could be interpreted as creating a sample draw from the underlying law of motion. A well-grounded test of a theory could be accomplished by relating a formal model to an applied statistical model and testing the applied statistical model. This methodological approach was seen, then, as a valid representation and examination of underlying processes in existence.

4.1. Example 1: Development

We demonstrate the process of scientific cumulation using the well-known Solow (1956) model of economic development. This example shows how the traditional approach to methodological unification can provide a basis for a modeling dialogue and scientific cumulation.

4.1.1. The Formal Model

In his seminal paper, Solow (1956) argues that capital accumulation and exogenous technological progress are fundamental mechanisms in economic development. Using a simple Cobb-Douglas production function with a dynamic process of capital accumulation, Solow concludes that a country experiences a higher transitory growth rate when the country increases its national saving to stimulate capital accumulation. The formal model has the following structure:

$$Y_t = B_t K_t^{\alpha} L_t^{1-\alpha}, \quad 0 < \alpha < 1, \qquad (7)$$

$$S_t = s Y_t, \quad 0 < s < 1, \qquad (8)$$

$$K_{t+1} - K_t = S_t - \delta K_t, \qquad (9)$$

$$L_{t+1} = (1 + n) L_t, \qquad (10)$$

$$A_{t+1} = (1 + g) A_t, \qquad (11)$$

where aggregate production in year $t$ ($Y_t$) is determined by capital ($K_t$), labor ($L_t$), and technological progress ($B_t$), which is defined as $B_t \equiv A_t^{1-\alpha}$ under the Cobb-Douglas specification. $S_t$ is gross domestic savings in year $t$ and the savings rate ($s$) is a constant proportion of total production. Equation (9) represents the capital accumulation ($K_{t+1} - K_t$), which equals the domestic savings minus capital depreciation ($\delta K_t$), where $\delta$ is the depreciation rate. Equations (10) and (11) show that both labor and technological progress are exogenously increasing at the rates of $n$ and $g$, respectively.
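Because the system (7)-(11) is a simple recursion, it can be iterated directly. The sketch below (ours; the parameter values are illustrative and not taken from the paper) shows the model's central implication: a higher saving rate raises the growth of output per worker temporarily, but the growth rate eventually returns to the rate of technological progress $g$.

```python
# Illustrative sketch (ours): iterate the Solow system (7)-(11) under two saving rates.
import numpy as np

def output_per_worker(s, alpha=0.3, delta=0.05, n=0.01, g=0.02,
                      T=200, K0=1.0, L0=1.0, A0=1.0):
    """Return the path of y_t = Y_t / L_t implied by equations (7)-(11)."""
    K, L, A = K0, L0, A0
    y = np.empty(T)
    for t in range(T):
        B = A ** (1.0 - alpha)                   # B_t = A_t^(1 - alpha)
        Y = B * K ** alpha * L ** (1.0 - alpha)  # production, equation (7)
        y[t] = Y / L
        K = K + s * Y - delta * K                # saving and accumulation, (8)-(9)
        L = L * (1.0 + n)                        # labor growth, (10)
        A = A * (1.0 + g)                        # technological progress, (11)
    return y

y_lo, y_hi = output_per_worker(s=0.15), output_per_worker(s=0.30)
g_lo, g_hi = np.diff(np.log(y_lo)), np.diff(np.log(y_hi))
print(f"early growth of y (t=1..10): s=0.15 -> {g_lo[:10].mean():.4f}, s=0.30 -> {g_hi[:10].mean():.4f}")
print(f"late growth of y (last 10):  s=0.15 -> {g_lo[-10:].mean():.4f}, s=0.30 -> {g_hi[-10:].mean():.4f}")
```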

4.1.2. Linkage to an Empirical Test

One important prediction from the Solow model is the conditional convergence hypothesis, which states that countries with lower initial levels of capital and output tend to grow faster when the countries' characteristics are held constant. Using the technique of linear approximation, we can derive the average growth rate of the output per worker level ($y_t \equiv Y_t / L_t$) as:

$$\frac{\ln y_T - \ln y_0}{T} \approx g + \frac{1-(1-\lambda)^T}{T}\left(\ln A_0 + \frac{\alpha}{1-\alpha}\bigl(\ln s - \ln (n + g + \delta + ng)\bigr) - \ln y_0\right), \qquad (12)$$

where $\lambda = (1-\alpha)(n+g+\delta)$ and $(\ln y_T - \ln y_0)/T$ represents the average (annual) growth rate between period $t = 0$ and $t = T$. $A_0$ and $y_0$ are the respective technological level and output per worker level in the initial period, $t = 0$. Equation (12) shows that the initial output per worker level $y_0$ is negatively associated with the growth rate of output per worker. It implies that a country tends to grow faster if its output per worker level is lower initially.

Empirical studies typically use a large sample of cross-country data to regress the average of the annual growth rate on the initial level of real GDP per capita and other related control variables suggested in the model, such as the national savings (or investment) rate and the population growth rate (for country $i$):

$$g^i_{T,0} = \beta_0 - \beta_1 \ln y^i_0 + \beta_2 Z^i, \qquad (13)$$

where $Z^i \equiv \ln s^i - \ln\bigl(n^i + g^i + \delta^i + n^i g^i\bigr)$ and $g^i_{T,0} \equiv (\ln y^i_T - \ln y^i_0)/T$. From equation (13) the parameters $\beta_0$, $\beta_1$ and $\beta_2$ correspond to $g + \frac{1-(1-\lambda)^T}{T}\ln A_0$, $\frac{1-(1-\lambda)^T}{T}$, and $\frac{\alpha}{1-\alpha}\left(\frac{1-(1-\lambda)^T}{T}\right)$, respectively.
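A minimal sketch of the mechanics of (13) follows. It is ours, not an estimate from the literature: it generates an artificial cross-section of countries whose growth rates follow the approximation (12), adds noise, and recovers the convergence coefficient by least squares.

```python
# Illustrative sketch (ours): the convergence regression (13) on artificial data
# generated from the approximation (12).
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 30
alpha, g, delta, ln_A0 = 0.35, 0.02, 0.05, 6.0

s = rng.uniform(0.10, 0.40, N)                  # country saving rates
n = rng.uniform(0.00, 0.03, N)                  # population growth rates
ln_y0 = rng.uniform(7.0, 10.0, N)               # initial (log) output per worker

lam = (1.0 - alpha) * (n + g + delta)
conv = (1.0 - (1.0 - lam) ** T) / T             # the (1 - (1 - lambda)^T) / T term
Z = np.log(s) - np.log(n + g + delta + n * g)
growth = g + conv * (ln_A0 + (alpha / (1.0 - alpha)) * Z - ln_y0) + rng.normal(0, 0.002, N)

# OLS on (13): growth = b0 - b1 * ln_y0 + b2 * Z
X = np.column_stack([np.ones(N), ln_y0, Z])
b, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(f"coefficient on ln y_0: {b[1]:.4f}  (model-implied -beta_1 is about {-conv.mean():.4f})")
print(f"coefficient on Z:      {b[2]:.4f}")
```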

4.1.3. Extending the Model

Using equation (13), a common empirical finding is that the initial level of GDP per capita possesses a significant negative correlation with the economic growth rate in a country. This finding supports the conditional convergence hypothesis that the country with a lower level of real GDP per capita tends to grow faster (Barro and Sala-i-Martin 1992; Mankiw, Romer, and Weil 1992). What role, then, does the Solow model play in cumulation? The typical estimated coefficient of initial real GDP per capita ($\hat{\beta}_1$) is significantly less than the Solow model prediction ($\frac{1-(1-\lambda)^T}{T}$). In other words, the Solow model theoretically overestimates the speed of convergence.

One theoretical alternative is to relax the assumption that output production depends on homogeneous labor and capital. Mankiw, Romer and Weil (1992) modify the Solow model by introducing the stock of human capital in the production function. By controlling for the level of human capital (education), the authors find that the rate of convergence is approximately two percent per year, which is closer to the prediction of the modified Solow model.

The Solow model and the extensions of it provide an example of a cumulative research process. The modifications in the theoretical model were due to the empirical tests, and this was feasible because of methodological unification. The modeling dialogue here gives researchers a better understanding of the regularities in economic development. If researchers do not consider the Solow model and solely apply an empirical procedure by regressing the average growth rate on the initial real GDP per capita, they could misinterpret the empirical results and fail to understand the mechanism(s) that influence economic development. On the other hand, if researchers only set up a theoretical model but do not test it empirically, they would not know that the model is actually inconsistent with the empirical observations.

5. Methodological Unification: EITM

We have been arguing that scientific cumulation is more likely to occur when efforts are made to identify parameters in the model. An emphasis on identifying structural parameters provides more transparent interpretation and allows for refutation. However, if one were to strictly adhere to the Cowles Commission approach we would, for example, forego the chance of modeling new uncertainty created by shifts in behavioral traits (e.g., public tastes, attitudes, expectations, and learning). The scientific consequence of this omission directly affects the issues of identification and invariance because these unaccounted behavioral shifts would not be linked with the other variables and specified parameters.

An EITM framework addresses these issues through the unification of formal and empirical analysis. This framework takes advantage of the mutually reinforcing properties of formal and empirical analysis to address the challenge(s) above. Another attribute of this framework is the emphasis on concepts that are quite general and integral to many fields of research but that are seldom modeled and tested in a direct way. EITM places emphasis on finding ways to model human behavior and action and, thereby, aids in developing realistic representations that improve upon simple socio-economic categorization.

Recall that we accept the Cowles Commission's focus on identification of structural parameters. We add the emphasis on modeling behavior to the Cowles Commission approach so that new uncertainty created by shifts in behavioral traits such as public tastes, attitudes, expectations, learning, and the like can be properly studied.

The framework we develop also can address the critique of the structural approach leveled by Sims (1980). It is well known that reduced form estimates are not identified. The practice of finding ways to identify models can lead to "incredible" theoretical specifications (Sims 1980; Freeman, Lin, and Williams 1989). Our framework, by way of adding behavioral concepts and analogues, holds the potential to strengthen the credibility of various identification restrictions. Analogues, in particular, are of scientific importance since they hold the promise of operationalizing mechanisms.21

This framework is summarized as follows:

21 We argue that operationalizing mechanisms can be thought of in much broader terms than having a variable "determined by a set of other variables" (see Footnote 2). This is particularly true in trying to use analogues for behavior. An example of operationalizing a mechanism can be seen in the work of Converse (1969). He advanced the theory that strength of party identification (and voting behavior) is primarily a function of intergenerational transmission plus the number of times one had voted in free elections. To operationalize his proposed mechanism — intergenerational transmission and the number of times one voted — he made use of the following analogue: the Markov chain. This particular analogue allowed for a particular dynamic prediction that he tested against actual voting data.



1. Unify Theoretical Mechanisms and Applied Statistical Concepts

Given that human beings are the agents of action, mechanisms should reflect overarching social and behavioral processes. Examples include (but are not limited to):

• decision making

• bargaining

• expectations

• learning

• social interaction

It is also important to find an appropriate statistical concept to match the theoretical concept. Examples of applied statistical concepts include (but are not limited to):

• persistence

• measurement error

• nominal choice

• simultaneity

2. Develop Behavioral (Formal) and Applied Statistical Analogues

To link concepts with tests, we need analogues. An analogue can be thought of as a device in which a concept is represented by continuously variable — and measurable — quantities. Analogues serve as analytical representations for behavior and, therefore, provide for changes in behavior as well as a more transparent interpretation of the formal and applied statistical model. Analogues also emphasize measurement validity, the relation between a measure and the (unmeasured) concept it is meant to represent. Examples of analogues for behavioral (formal) concepts such as expectations, nominal choice, and learning include (but are not limited to):

• conditional expectations procedures

• decision theory

• Bayesian and adaptive learning procedures

On the other hand, examples of applied statistical analogues for the applied statistical concepts of persistence, measurement error, nominal choice, and simultaneity include:



• autoregressive estimation

• error-in-variables regression

• discrete choice modeling

• multi-stage estimation (e.g., two-stage least squares)

3. Unify and Evaluate the Analogues

The third step uses the mutually reinforcing properties of the formal and empirical analogues and helps identify the parameters of interest. Examples of this linkage exist in social science disciplines such as political science and economics. In political science, Kramer (1983) focuses on economic voting, where he makes the assumption that voters would respond to changes in their income. He further assumes that voters respond to government induced changes in income, but researchers only see total changes in income. This "signal extraction" problem creates a measurement error challenge. The applied statistical consequence (error-in-variable result) is that aggregate level data can mitigate this measurement error, but individual level data still possesses the error.

In economics, one can also find a "behavioral" argument linking expectations (uncertainty) and measurement error. On different substantive topics, both Friedman (1957) and Lucas (1973) unify "error in variables" regressions with formal models involving expectations and uncertainty. Their linkage provides a behavioral explanation of why the parameters in their empirical models mimic error-in-variables regressions.
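The error-in-variables logic behind these arguments can be illustrated in a few lines. The sketch below is ours and deliberately generic: the component voters respond to is common within a period, but individuals observe it only inside a noisy total, so the individual-level slope is attenuated while period averages largely recover the true coefficient.

```python
# Illustrative sketch (ours): error-in-variables attenuation at the individual level
# versus aggregated (period-level) data.
import numpy as np

rng = np.random.default_rng(3)
G, m, beta = 200, 250, 1.0                      # G periods, m individuals per period

signal_g = rng.normal(0.0, 1.0, G)              # component voters respond to (common in a period)
signal = np.repeat(signal_g, m)
noise = rng.normal(0.0, 2.0, G * m)             # idiosyncratic income changes
x_obs = signal + noise                          # only the total change is observed
y = beta * signal + rng.normal(0.0, 0.5, G * m)

def slope(x, z):
    """OLS slope of z on x."""
    return float(np.cov(x, z)[0, 1] / np.var(x, ddof=1))

# Individual-level slope is attenuated by roughly var(signal) / var(x_obs)
print(f"individual-level slope: {slope(x_obs, y):.3f}   (true beta = {beta})")

# Period averages wash out the idiosyncratic noise, so the attenuation mostly disappears
x_bar = x_obs.reshape(G, m).mean(axis=1)
y_bar = y.reshape(G, m).mean(axis=1)
print(f"aggregate-level slope:  {slope(x_bar, y_bar):.3f}")
```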

5.1. Example 2: Voting with Compensational and Representational Preferences

The act of voting provides a useful window into methodological unification. Hotelling (1926) and Downs (1957) argue that voters choose one party over the others based on the relative political positions of parties – proximity voting theory. Voters are more likely to vote for a political party if the position of the party is closer to voters' ideal position. As the party's position deviates further from a voter's ideal position, the voter receives less utility and is less likely to vote for it.22

22 Applications of this particular utility function abound. Erikson, Mackuen and Stimson (2002), for example, assume that voters' utility is an inverse function of the squared distance between the party's political position and the voters' ideal position.



5.1.1. The Relation Between Decision Theory and Discrete Choice Models

Kedar (2005) questions the validity of this assumption. She argues that, besides the proximity of parties' positions, voters are also concerned about each party's contribution to the aggregate policy outcome. Kedar (2005) starts with the proximity model:

$$U_{ij} = -\beta_1 (v_i - p_j)^2, \qquad (14)$$

where $U_{ij}$ is the disutility of voter $i$ for party $j$'s policy position, $v_i$ is the ideal point of voter $i$, $p_j$ is the policy position of party $j$, and $\beta_1$ is a scalar. Equation (14) states that voter $i$ perceives disutility for party $j$ when the position of party $j$ deviates from voter $i$'s ideal point. On the other hand, if the position of party $j$ is equivalent to his ideal point (i.e., $v_i = p_j$), no disutility is perceived to result from party $j$.

Assuming that party positions can affect policy outcomes, Kedar (2005) argues that the policy outcome is the weighted average of the policy positions of the respective parties:

$$P = \sum_{j=1}^{m} s_j p_j, \qquad (15)$$

where $0 < s_j < 1$ is the relative share of party $j$ in the legislature and $\sum_{j=1}^{m} s_j = 1$.

If voters are policy-outcome oriented, and concerned that the policy outcome may deviate from their ideal point if party $j$ is not elected, then the utility of voter $i$ pertaining to party $j$ becomes:

$$U_{ij} = -\beta_2\left[(v_i - P)^2 - \left(v_i - P_{-p_j}\right)^2\right], \qquad (16)$$

and

$$P_{-p_j} = \left(\frac{1}{\sum_{k \neq j} s_k}\right)\sum_{k \neq j} s_k p_k, \qquad (17)$$

where $\beta_2$ is a scalar. Equation (17) represents the policy outcome if party $j$ is not in the legislature. Equation (16) provides an important insight on how voters view the contribution of party $j$ to the policy outcome affecting their utility. If party $j$ takes part in policy formulation and moves the policy closer to voter $i$'s ideal point $v_i$, then $\left(v_i - P_{-p_j}\right)^2 > (v_i - P)^2$. Therefore, voter $i$ gains positive utility from having party $j$ in the process of policy formation (i.e., $U_{ij} > 0$). However, if the inclusion of party $j$ makes the policy outcome increase in distance from voter $i$'s ideal point, such that $\left(v_i - P_{-p_j}\right)^2 < (v_i - P)^2$, then the utility of voter $i$ for party $j$ is negative.

Now assume that voter $i$ has expectations concerning party $j$ that are based on the weighted average of both the party's relative position and its contribution to policy outcomes. Voter $i$'s utility for party $j$ can now be written as:

$$U_{ij} = \theta\left\{-\gamma (v_i - p_j)^2 - (1-\gamma)\left[(v_i - P)^2 - \left(v_i - P_{-p_j}\right)^2\right]\right\} + \delta_j z_i, \qquad (18)$$

where $\theta$ is a scalar, $\delta_j$ is a vector of coefficients on voter $i$'s observable variables $z_i$ for party $j$, and $\beta_1$ and $\beta_2$ correspond to $\gamma$ and $1-\gamma$, respectively, such that $\beta_1 + \beta_2 = \gamma + (1-\gamma) = 1$. When $\gamma \to 1$, it implies that voters are solely concerned with a party's positions. This situation is called representational voting behavior. On the other hand, $\gamma \to 0$ implies that voters vote for a party such that the policy outcome can be placed at the voter's desired position(s). This outcome is called compensational voting behavior.

From equation (18), we obtain voter $i$'s optimal or "desired" position for party $j$ by solving the first order condition of $U_{ij}$ with respect to $p_j$:

$$p_j^* = v_i\left[\frac{\gamma\,(1-s_j) + s_j}{\gamma\,(1-s_j^2) + s_j^2}\right] - (1-\gamma)\,\frac{s_j \sum_{k=1, k\neq j}^{m} s_k p_k}{\gamma\,(1-s_j^2) + s_j^2}. \qquad (19)$$

When $\gamma \to 1$ (representational voting), we have:

$$p_j^* = v_i \quad \text{for } \gamma \to 1. \qquad (20)$$

When $\gamma \to 0$ (compensational voting), we have:

$$p_j^* = \frac{v_i - \sum_{k=1, k\neq j}^{m} s_k p_k}{s_j} \quad \text{for } \gamma \to 0, \qquad (21)$$

and the policy outcome would be:

$$P\big|_{\gamma\to 0,\; p_j = p_j^*} = \sum_{k=1}^{m} s_k p_k = s_j p_j^* + \sum_{k=1, k\neq j}^{m} s_k p_k = s_j\,\frac{v_i - \sum_{k\neq j} s_k p_k}{s_j} + \sum_{k=1, k\neq j}^{m} s_k p_k = v_i. \qquad (22)$$



From (20)-(22), we see that voters make an optimal voting decision based on representational (proximity) and compensational voting considerations. Kedar (2005) argues that the representational and compensational voting considerations can reflect the levels of political bargaining in different institutional systems. In majoritarian systems, where the winning party is able to implement its ideal policy with less need for compromise, voters would place greater value on $\gamma$, so that they would vote for the party positioned closest to their ideal position. However, where institutional power sharing exists ($\gamma$ is small), voters would vote for a party whose position is further from their ideal positions to draw the collective outcome closer to the voter's ideal point. While the voting literature generally supports the proximity model, where voters choose the party which is closer to their political positions, Kedar (2005) argues that such a consideration would be reduced if the institutional environment involves more power-sharing.
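The mechanics of (19)-(22) are easy to verify numerically. The following sketch (ours; the voter ideal point, party positions, and seat shares are made up) evaluates the optimal position (19) for several values of $\gamma$ and confirms the two polar cases: as $\gamma \to 1$ the desired position collapses to the voter's ideal point, and as $\gamma \to 0$ it moves to the compensational position (21) that pulls the seat-weighted outcome (15) exactly back to the ideal point, as in (22).

```python
# Illustrative sketch (ours): the optimal party position (19) and its polar cases (20)-(22).
import numpy as np

v_i = 0.0                                   # voter's ideal point (hypothetical)
p = np.array([-1.0, 0.5, 1.5])              # party positions (hypothetical); party j = 0
s = np.array([0.40, 0.35, 0.25])            # seat shares, summing to one
j = 0
others = float(np.sum(np.delete(s, j) * np.delete(p, j)))   # sum over k != j of s_k * p_k

def p_star(gamma):
    """Voter i's desired position for party j, equation (19)."""
    denom = gamma * (1.0 - s[j] ** 2) + s[j] ** 2
    return (v_i * (gamma * (1.0 - s[j]) + s[j]) - (1.0 - gamma) * s[j] * others) / denom

for gamma in (1.0, 0.5, 0.0):
    pj = p_star(gamma)
    outcome = s[j] * pj + others            # policy outcome (15) if party j sat at p*_j
    print(f"gamma = {gamma}: p*_j = {pj:+.3f}, implied outcome P = {outcome:+.3f}")
```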

5.1.2. Unifying and Evaluating the Analogues

Kedar (2005) tests the following two hypotheses with voting survey data from Britain, Canada, the Netherlands, and Norway:

Hypothesis 1: Voters' behavior in the countries with a majoritarian system follows the proximity model more closely (larger $\gamma$) than those in the countries with a consensual system (smaller $\gamma$).

Hypothesis 2: The pure proximity model ($\gamma = 1$) does not sufficiently represent voting behavior.

To test Hypothesis 1, Kedar (2005) first identifies the institutional features of Britain, Canada, Norway and the Netherlands. Using the indicators of majoritarianism and power-sharing in Lijphart (1984), she concludes that Britain and Canada are more unitary whereas the Netherlands and Norway are more consensual. Methodological unification occurs when Kedar derives the log-likelihood multinomial model based on equation (18) and estimates issue voting in four political systems using three measures: (i) seat shares in the parliament; (ii) vote shares; and (iii) portfolio allocation in government. The empirical results support the first theoretical hypothesis: voting behavior in the majoritarian systems (i.e., Britain and Canada) is more consistent with the proximity model relative to that in the consensual systems (i.e., the Netherlands and Norway). Hypothesis 2 is tested using the likelihood ratio test. The results show that, in all four political systems, compensational voting behavior exists in the survey data. The pure proximity model is an insufficient explanation.
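The unification step, carrying the utility in (18) into a discrete choice likelihood, can be sketched schematically. The code below is ours and is not Kedar's estimator: it builds multinomial logit choice probabilities from the deterministic part of (18) (omitting the $\delta_j z_i$ covariates), generates hypothetical survey choices at a known $\gamma$, and evaluates the log likelihood over a grid of $\gamma$ values.

```python
# Illustrative sketch (ours, not Kedar's estimator): a multinomial logit built on the
# deterministic part of utility (18), evaluated over a grid of gamma values.
import numpy as np

rng = np.random.default_rng(4)
p = np.array([-1.0, 0.2, 1.2])                  # hypothetical party positions
s = np.array([0.45, 0.30, 0.25])                # hypothetical seat shares
m = len(p)
P = float(np.sum(s * p))                        # aggregate policy outcome (15)
P_minus = np.array([np.sum(np.delete(s, j) * np.delete(p, j)) / np.sum(np.delete(s, j))
                    for j in range(m)])         # counterfactual outcomes (17)

def choice_probs(v, gamma, theta=1.0):
    """Logit probabilities from the deterministic part of (18), no z_i covariates."""
    prox = -(v - p) ** 2
    comp = -((v - P) ** 2 - (v - P_minus) ** 2)
    u = theta * (gamma * prox + (1.0 - gamma) * comp)
    e = np.exp(u - u.max())
    return e / e.sum()

def log_likelihood(gamma, ideal_points, choices):
    return sum(np.log(choice_probs(v, gamma)[c]) for v, c in zip(ideal_points, choices))

# Hypothetical survey: ideal points drawn at random, choices generated at gamma = 0.6
ideal_points = rng.uniform(-1.5, 1.5, 500)
choices = [rng.choice(m, p=choice_probs(v, 0.6)) for v in ideal_points]

grid = np.linspace(0.0, 1.0, 11)
lls = [log_likelihood(g, ideal_points, choices) for g in grid]
print("grid value of gamma with the highest log likelihood:", grid[int(np.argmax(lls))])
```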

5.1.3. Extending the Model

The model and tests that Kedar presents provide the foundation for extensions that can examine the robustness of her findings. One way to build on her formal model is to relax the assumption that voters' expectations are error free. It is well known that equilibrium predictions change or are fragile when expectations are based on imperfect or limited information. In addition to this extension on expectations, one can determine whether voters (in Kedar's model) learn equilibria consistent with rational expectations.

5.2. Example 3: Economic Voting

Starting with Kramer (1983) and continuing with work that includes (but is not limited to) Alesina and Rosenthal (1995), Suzuki and Chappell (1996), and Lin (1999), there is a substantial literature on how the economy affects voting behavior. For our purposes we demonstrate how the combined work of Alesina and Rosenthal, Suzuki and Chappell, and Lin has, in its own way, contributed constituent parts that allow for the study of economic voting within a unified methodological framework.

5.2.1. The Relation Between Expectations, Uncertainty, and Measurement Error

We first start with the formal model that has behavioral attributes of expectations and uncertainty.

Of the studies of economic voting mentioned above, Alesina and Rosenthal (1995) provide the

formal model (pages 191-195). Their model of economic growth can be derived using an expec-

tations augmented aggregate supply curve (see equation (1)):

y_t = y_n + γ(π_t − π^e_t) + ε_t,  (23)

where y_t represents the rate of economic growth (GDP growth) in period t, y_n is the natural level of the economic growth rate,23 π_t is the inflation rate at time t, and π^e_t is the expected inflation rate

23 This variable has similar properties to equation (1), where the natural rate of output (y^n_t) is assumed to grow at a constant rate of β. This follows from y_n = y^n_t − y^n_{t−1} = β.


at time t formed at time t− 1.

With voter expectations over inflation established we now turn to uncertainty in those ex-

pectations. Let us assume that voters want to determine whether to attribute credit or blame for

economic growth (yt) outcomes to the incumbent administration. Yet, voters are faced with the

challenge of determining what part of the economic outcomes is due to incumbent “competence”

(i.e., policy acumen) or simply good luck. Based, in part, on equation (23), equation (24) presents

this “signal extraction” or measurement error problem:

ε_t = η_t + ξ_t.  (24)

The variable ε_t represents the shock that is composed of the two unobservable characteristics noted above. The first, represented by η_t, reflects “competence” that can be attributed to the incumbent administration. The second, symbolized as ξ_t, represents shocks to growth that are beyond administration control (and competence). Both η_t and ξ_t have zero mean and second moments σ²_η and σ²_ξ, respectively.

Note also that competence is argued to have a persistent property, which can allow for reelec-

tion. This feature can be characterized as an MA(1) process:

η_t = μ_t + ρμ_{t−1},  0 < ρ ≤ 1,  (25)

where μ_t is iid(0, σ²_μ). The parameter ρ represents the strength of the persistence. The lag or lags allow for retrospective voter judgements.

Next, let us assume that voters’ judgements include a general sense of the average rate of

growth (yn) and that they also observe actual growth (yt). Equation (23) implies that when voters

predict inflation with no systematic error (i.e., π^e_t = π_t), the result is non-inflationary growth that

does not adversely affect real wages. Now we can link economic growth performance with voter

uncertainty at the source. We formalize how economic growth rate deviations from the average

can be attributed to administration competence or fortuitous events:

y_t − y_n = ε_t = η_t + ξ_t.  (26)

Equation (26) shows that when the actual economic growth rate is greater than its average or


“natural rate” (i.e., y_t > y_n), then ε_t = η_t + ξ_t > 0. Yet voters are unable to distinguish the incumbent’s competence (η_t) from the stochastic economic shock (ξ_t) when they make their forecasts.

Because competence can persist, voters use this property to make forecasts, thereby providing

greater or less weight to competence over time. To demonstrate this behavioral effect, substitute

equation (25) in (26):

μ_t + ξ_t = y_t − y_n − ρμ_{t−1}.  (27)

With these equations in place, we can now determine the optimal estimate of competence, ηt+1,

when the agents only see yt. We demonstrate this result by making a one-period forecast of

equation (25) and then solving for its expected value (conditional expectation) at time t:

E_t(η_{t+1}) = E_t(μ_{t+1}) + ρE(μ_t | y_t) = ρE(μ_t | y_t),  (28)

where E_t(μ_{t+1}) = 0. In equation (28), we see that competence, η_{t+1}, can be forecasted by predicting μ_{t+1} and μ_t. Since there is no information available for forecasting μ_{t+1}, voters can only forecast μ_t based on observable y_t (at time t) from equation (27).

Using the method of recursive projection and equation (27) we demonstrate how the be-

havioral analogue for expectations is linked to the empirical analogue for measurement error (an

error-in-variables “equation”24):

E_t(η_{t+1}) = ρE(μ_t | y_t) = ρ[σ²_μ/(σ²_μ + σ²_ξ)](y_t − y_n − ρμ_{t−1}),  (29)

where 0 < ρσ²_μ/(σ²_μ + σ²_ξ) < 1. Equation (29) shows that voters can forecast competence using not only the difference between y_t and y_n, but also the “weighted” lag of μ_t (i.e., ρμ_{t−1}).
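The projection behind equation (29) is the standard linear projection of one mean-zero random variable on another; a brief sketch using only the assumptions already stated: let s_t ≡ y_t − y_n − ρμ_{t−1} = μ_t + ξ_t from equation (27). Because μ_t and ξ_t are mean zero and (by assumption) uncorrelated,

E(μ_t | s_t) = [Cov(μ_t, s_t)/Var(s_t)] s_t = [σ²_μ/(σ²_μ + σ²_ξ)] s_t,

and multiplying by ρ, as in equation (28), yields equation (29).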

In equation (29), the expected value of competence is positively correlated with the economic

24 The traditional applied statistical view of measurement errors is that the correlated signs of the measurement errors among independent variables can lead to inappropriate signs for regression coefficients. This is exactly what methodological unification accomplishes. The theory — the formal model — implies an applied statistical model with measurement error. Consequently, one can examine, with a unified approach, the joint effects and identify the cause. Applied statistical tools alone cannot untangle conceptually distinct effects on a dependent variable.

We also hasten to add that Friedman's (1957) and Lucas's (1973) substantive findings would not have been achieved had they just treated their research questions as pure measurement error problems requiring only an applied statistical analysis (and "fix" for the measurement error) that focused on a significant t-statistic.


growth rate deviations. Voter assessment is filtered by the coefficient σ²_μ/(σ²_μ + σ²_ξ), which represents the proportion of competence that voters are able to interpret and observe. The behavioral implications are straightforward. If voters perceive that the variability of economic shocks comes solely from the incumbent’s competence (σ²_ξ → 0), then σ²_μ/(σ²_μ + σ²_ξ) → 1. On the other hand, an increase in the variability of uncontrolled shocks, σ²_ξ, confounds the observability of incumbent competence since the signal coefficient σ²_μ/(σ²_μ + σ²_ξ) decreases. In this case, voters assign less weight to economic performance in assessing the incumbent’s competence.
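To make this comparative static concrete, the short simulation below (a minimal sketch, not the authors' code; parameter values and variable names are our own choices for illustration) generates growth deviations from equations (25)-(27) and computes the forecast in equation (29). The correlation between the forecast and realized competence rises as σ²_ξ falls.

import numpy as np

def competence_forecast(rho=0.5, sigma_mu=1.0, sigma_xi=1.0, T=5000, seed=0):
    # Simulate equations (25)-(27) and the voter forecast in equation (29).
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, sigma_mu, T)        # competence innovations, iid(0, sigma_mu^2)
    xi = rng.normal(0.0, sigma_xi, T)        # shocks outside the administration's control
    eta = mu.copy()
    eta[1:] += rho * mu[:-1]                 # equation (25): eta_t = mu_t + rho*mu_{t-1}
    growth_dev = eta + xi                    # equation (26): y_t - y_n = eta_t + xi_t
    weight = sigma_mu**2 / (sigma_mu**2 + sigma_xi**2)
    forecast = np.zeros(T)
    # equation (29), using the lagged mu_{t-1} for illustration
    forecast[1:] = rho * weight * (growth_dev[1:] - rho * mu[:-1])
    corr = np.corrcoef(forecast[1:-1], eta[2:])[0, 1]   # forecast of eta_{t+1} vs. realized eta_{t+1}
    return weight, corr

for s in (0.1, 1.0, 3.0):
    w, c = competence_forecast(sigma_xi=s)
    print(f"sigma_xi = {s}: signal weight = {w:.2f}, corr(forecast, eta) = {c:.2f}")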

5.2.2. Unifying and Evaluating the Analogues

Alesina and Rosenthal (1995) test the empirical implications of their model with data on economic

outcomes and political parties for the period 1915 to 1988. To perform these tests, they first

estimate the theoretical specification for economic growth in equation (23) and collect the esti-

mated exogenous shocks (εt) in the economy. With these estimated exogenous shocks, they then

construct their variance-covariance structure. Since competence (ηt) in equation (25) follows an

MA(1) process, they hypothesize that a test for incumbent competence, as it pertains to economic

growth, can be performed using the covariances between the current and preceding year. The spe-

cific test centers on whether the covariances observed when the presidential party remains in office are statistically larger than the covariances associated with a change in presidential parties. They report null findings (i.e., equal covariances) and conclude that there is little evidence that voters are retrospective and use incumbent competence as a basis for support.
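The mechanics of that comparison are easy to sketch. The snippet below is our own illustration with synthetic data: the shock process follows equations (24)-(25), and the party-change indicator is a random placeholder. Because the indicator is generated independently of competence, the two covariances estimate the same quantity (ρσ²_μ), which corresponds to the null case Alesina and Rosenthal report.

import numpy as np

rng = np.random.default_rng(1)
T, rho = 74, 0.8                              # roughly the length of an annual 1915-1988 sample
mu = rng.normal(0.0, 1.0, T)                  # competence innovations
xi = rng.normal(0.0, 1.0, T)                  # non-policy shocks
eps = mu + xi
eps[1:] += rho * mu[:-1]                      # equations (24)-(25): eps_t = mu_t + rho*mu_{t-1} + xi_t
same_party = rng.random(T) < 0.75             # placeholder: True when the presidential party is unchanged

def lag_cov(pairs):
    x = np.array(pairs)
    return np.cov(x[:, 0], x[:, 1])[0, 1]     # sample covariance of (eps_t, eps_{t-1})

same = [(eps[t], eps[t - 1]) for t in range(1, T) if same_party[t]]
change = [(eps[t], eps[t - 1]) for t in range(1, T) if not same_party[t]]
print("lag-1 covariance, same party:  ", round(lag_cov(same), 3))
print("lag-1 covariance, party change:", round(lag_cov(change), 3))

In an application to actual data, eps would be the estimated residuals from equation (23) and same_party would be coded from election outcomes.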

5.2.3. Extending the Model

Alesina and Rosenthal use equations (23) and (25) to form their tests. However, the empirical

models of both Suzuki and Chappell (1996) and Lin (1999) provide alternative tests. While

neither Suzuki and Chappell nor Lin provide formalization, their results clearly could be linked with the formalization set forth by Alesina and Rosenthal. The key attribute of both Suzuki and Chappell (1996) and Lin (1999) is that they treat economic voting as having a dynamic or

episodic nature. Parameters fluctuate based on changes in circumstance — depending (in this line


of research) on how much variability there is in economic growth.

The “separate” studies of Alesina and Rosenthal (1995), Suzuki and Chappell (1996), and

Lin (1999) all focus on the issue of economic voting. These studies have addressed the matter

with alternative approaches. With a unified approach, the potential linkage between these studies would lead to a direct relation between the parameters of the formal and empirical models. An

EITM version of such work is to test the competence equation (29), which ultimately would unify

the formal model of Alesina and Rosenthal with the empirical tests of Suzuki and Chappell and

Lin.

5.3. Example 4: The Political Economy of Monetary Policy

In the past twenty-five years, due in part to social, political, and economic factors, there has been a regime shift in monetary policy. Monetary authorities now place greater emphasis on stabilizing domestic price levels (e.g., through inflation targeting) than they did during the advent of the Phillips curve.

Accompanying this regime shift has been a renewed emphasis on the study of overall policy ef-

fectiveness (see Bernanke et al., 1999 and Taylor 1999) and the use of interest rate rules in new

Keynesian models (see Clarida, Gali, and Gertler 2000). More recently, Granato and Wong (2006)

have used a model with new Keynesian properties to determine the relation between inflation sta-

bilizing policy, inflation persistence and volatility, and business cycle fluctuations.

5.3.1. The Relation between Expectations, Learning, and Persistence

In this example we establish a relation between expectations, learning, and the persistence of a

variable of interest (inflation). Substantively speaking, we demonstrate how the implementation

and aggressiveness of maintaining an inflation target affects inflation persistence. The intuition

of the model is as follows: policy influences public expectations by encouraging the public to

substitute an inflation target for past inflation. Therefore, a negative relation should exist between

periods of aggressive inflation-stabilizing policy and inflation persistence.25

The model has two components. As in the example in Section 3, we rely on a standard model

25 We define an aggressive inflation-stabilizing policy as one that includes a willingness to respond forcefully to deviations from a prespecified implicit or explicit inflation target.


of macroeconomic outcomes and policy. The second component assumes that agents learn in an

adaptive manner and that they form expectations as new data becomes available over time. The

model, which contains behavioral analogues for expectations and learning, has a unique and stable rational expectations equilibrium (REE) (Evans and Honkapohja 2001). The stability (E-stability)

conditions have direct implications for the relation between agents and policymakers.26 Under the

REE, aggressive implementation of an inflation target guides agents to the stable equilibrium and

reduces inflation persistence.

The model assumes a two-period contract. For simplicity, prices reflect a unitary markup

over wages. The price at time t, pt, is expressed as the average of the current (xt) and the lagged

(xt−1) contract wage:27

p_t = (1/2)(x_t + x_{t−1}),  (30)

where pt is the logarithm of the price level, and xt is the logarithm of the wage level at period t.

In addition, agents are concerned with their real wages over the lifetime of the contract:

x_t − p_t = (1/2)[x_{t−1} − p_{t−1} + E_t(x_{t+1} − p_{t+1})] + θz_t,  (31)

where xt − pt represents the real wage rate at time t, Et (xt+1 − pt+1) is the expectation of the

future real wage level at time t + 1 formed at time t, and zt = yt − ynt is the excess demand for

labor at time t. The inflation rate (πt) is defined as the difference between the current and lagged

price level (pt − pt−1). With this definition we substitute equation (31) into equation (30) and

obtain:

π_t = (1/2)(π_{t−1} + E_tπ_{t+1}) + θz_t + u_{3t},  (32)

where E_tπ_{t+1} is the expected inflation rate over the next period and u_{3t} is iid(0, σ²_{u3}). Equation

26 Adaptive learning allows agents to achieve the REE through a stochastic (updating) process. Agents do not initially obtain the REE, but they attempt to learn the stochastic process by updating their forecasts (expectations) over time as new information becomes available.

In more technical terms, adaptive learning is used so that agents update the parameters of a forecasting rule — the perceived law of motion (PLM) — associated with the stochastic process of the variable in question in order to learn an REE. This process requires a condition establishing convergence to the REE — the E-stability condition. The E-stability condition determines the stability of the equilibrium in which the PLM parameters adjust to the implied actual law of motion (ALM) parameters. See Evans and Honkapohja (2001) for details.

27 See Wang and Wong (2005) for the details of the general theoretical framework.


(32) captures the main characteristic of inflation persistence. Since agents make plans about their

real wages over both past and future periods, the lagged price level (pt−1) is taken into considera-

tion as they adjust (negotiate) their real wage at time t. This model feature allows the inflation rate to depend on both the expected inflation rate and past inflation.

Similar to equation (2), equation (33) represents a standard IS curve where the quantity of output demanded relative to natural output (z_t) is negatively associated with changes in the real interest rate:

z_t = −ϕ(i_t − E_tπ_{t+1} − r∗) + u_{4t},  (33)

where i_t is the nominal interest rate, r∗ is the target real interest rate, u_{4t} is iid(0, σ²_{u4}), and ϕ > 0.

As in equation (3), we assume that policymakers use an interest rate rule — the Taylor rule — when conducting monetary policy:

i_t = π_t + α_y z_t + α_π(π_t − π∗) + r∗,  (34)

where positive values of απ and αy indicate a willingness to raise (lower) nominal interest rates

in response to the positive (negative) deviations from either the target inflation rate (πt − π∗), the

output gap (zt), or both.

The equilibrium inflation rate can be found by taking the reduced form of the system. It is

derived by substituting equation (34) into equation (33). Next solve for zt and then put that result

into equation (32). If we solve that expression for πt we have:

π_t = Γ_0 + Γ_1π_{t−1} + Γ_2E_tπ_{t+1} + ξ_t,  (35)

where:

Γ_0 = (θϕα_π π∗)Φ^{−1},
Γ_1 = (1 + ϕα_y)(2Φ)^{−1},
Γ_2 = (1 + ϕα_y + 2θϕ)(2Φ)^{−1},
ξ_t = [θu_{4t} + (1 + ϕα_y)u_{3t}]Φ^{−1},
and Φ = 1 + ϕα_y + θϕ(1 + α_π).
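For completeness, the intermediate step can be reproduced directly. Substituting the rule (34) into the IS curve (33) and collecting the z_t terms gives

z_t(1 + ϕα_y) = −ϕ(1 + α_π)π_t + ϕα_π π∗ + ϕE_tπ_{t+1} + u_{4t};

inserting this expression for z_t into the contract equation (32) and gathering the π_t terms then yields equation (35) with the coefficients Γ_0, Γ_1, Γ_2 and the composite shock ξ_t listed above.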


Equation (35) shows that current inflation depends on the first-order lag of inflation and also expected future inflation. When we “close” (35) we find that the minimum state variable (MSV) solution can be expressed as an AR(1) process, thereby establishing methodological unification.

Methodological unification occurs when we solve for the REE since this will involve merging

the behavioral analogue of expectations with the empirical analogue for persistence. Simply take

the conditional expectations at time t + 1 of equation (35) and substitute this result into equation

(35) :

π_t = A + Bπ_{t−1} + ξ̃_t,  (36)

where A = Γ_0(1 − Γ_2B − Γ_2)^{−1}, B = (1 ± √(1 − 4Γ_1Γ_2))(2Γ_2)^{−1}, and ξ̃_t ≡ ξ_t(1 − Γ_2B)^{−1}.
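The mapping from (35) to (36) can also be verified by the method of undetermined coefficients. Conjecture the MSV form π_t = A + Bπ_{t−1} + ξ̃_t, so that E_tπ_{t+1} = A + Bπ_t. Substituting this expectation into (35) gives

(1 − Γ_2B)π_t = (Γ_0 + Γ_2A) + Γ_1π_{t−1} + ξ_t,

and matching coefficients with the conjectured form requires Γ_2B² − B + Γ_1 = 0, the quadratic whose roots B_+ and B_− appear below, together with the expressions for A and ξ̃_t just given.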

Equation (36) is the MSV solution of inflation — which depends solely on the lagged inflation

rate. Because we are using a formal model, one complication we are alerted to, with important empirical implications, is the nature of the coefficient of lagged inflation, B. This parameter solves a quadratic whose two roots are defined as:

B_+ = [1 + √(1 − 4Γ_1Γ_2)] / (2Γ_2),

B_− = [1 − √(1 − 4Γ_1Γ_2)] / (2Γ_2).

Based on the behavior of the model’s parameters, we determine the properties of the quadratic

solutions and solve for the relation between aggressive inflation-stabilizing policy (απ) and infla-

tion persistence (B). Granato and Wong (2006) show that B− is a unique stationary solution when απ ≥ 0. When policymakers adopt an aggressive inflation-stabilizing policy, a stationary AR(1) solution (i.e., B−) can be obtained, but an explosive AR(1) solution (i.e., B+) is also possible. However, the technique of adaptive learning (McCallum 2003) serves as an important selection criterion (i.e., for determining stable solutions): only the stationary solution (i.e., B−) is attainable; the explosive solution (i.e., B+) is not. In other words, if agents learn the equilibrium in an adaptive manner and form expectations as new data becomes available over time, B− is the only learnable (E-stable) equilibrium when policymakers aggressively stabilize inflation (i.e., απ > 0). Finally, Granato and Wong (2006) point out that equation (36) represents the AR(1) process of the inflation rate. The empirical implications of the model as represented in


equation (36) show that an increase in απ reduces persistence under B−.
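A quick numerical check makes this comparative static concrete. The parameter values below are purely illustrative choices of our own (they are not calibrated values from Granato and Wong); the point is only that the stationary root B− declines monotonically as απ rises from 0 to 2.

import numpy as np

def stationary_persistence(alpha_pi, alpha_y=0.5, theta=0.05, phi=1.0):
    # B-: the stationary root of Gamma2*B^2 - B + Gamma1 = 0 implied by equation (36)
    Phi = 1 + phi * alpha_y + theta * phi * (1 + alpha_pi)
    gamma1 = (1 + phi * alpha_y) / (2 * Phi)
    gamma2 = (1 + phi * alpha_y + 2 * theta * phi) / (2 * Phi)
    return (1 - np.sqrt(1 - 4 * gamma1 * gamma2)) / (2 * gamma2)

for a in (0.0, 0.5, 1.0, 2.0):
    print(f"alpha_pi = {a}: B- = {stationary_persistence(a):.3f}")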

5.3.2. Unifying and Evaluating the Analogues

To test the relation between the policy parameter(s) and inflation persistence — the equilibrium

predictions — we use quarterly United States data (for the period 1960:I to 2000:III). According

to the model, inflation persistence should be reduced significantly under an aggressive inflation-

stabilizing policy. From equation (36) we estimate a first-order autoregressive process (i.e., AR(1)) of the United States inflation rate. As a consequence of the more aggressive inflation-stabilizing policy stance during the Volcker-Greenspan period (August 1979 through August 2000), we expect the inflation-persistence parameter (Bt) in the Volcker-Greenspan period to be smaller (statistically) relative to the pre-Volcker period.

(Figure 2 About Here)

The formal model predicts that a positive inflation stabilization policy parameter (απ) re-

duces inflation persistence, Bt. We also estimate equation (34) in order to contrast the parameter

movements in απ and αy.28 Figure 2 provides point estimates of inflation persistence (Bt) and

policy rule parameters, απ and αy, for a 15-year rolling sample starting in the first quarter of 1960

(1960:I). The results show that after 1980, inflation persistence starts falling. Figure 2 also shows that both απ and αy indicate a de-emphasis of inflation and output stability beginning in approximately 1968. Prior to 1968, countercyclical policy emphasized output stability (αy > 0). Aggressive inflation-stabilizing policy occurs only after 1980, when απ > 0.
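The rolling estimation can be sketched as follows. The code below is our own illustration, not the authors' programs: the input series are placeholders to be supplied by the user, the rule is fit by simple OLS, and r∗ and π∗ are treated as fixed constants for simplicity.

import numpy as np

def ols(y, X):
    # least-squares coefficients for y = X b + e
    return np.linalg.lstsq(X, y, rcond=None)[0]

def rolling_estimates(infl, gap, rate, r_star=2.0, pi_star=2.0, window=60):
    # 15-year (60-quarter) rolling estimates of the AR(1) persistence coefficient in
    # equation (36) and of the rule parameters alpha_y and alpha_pi in equation (34).
    # infl, gap, rate: equal-length quarterly series (placeholder inputs).
    B, a_pi, a_y = [], [], []
    for start in range(len(infl) - window + 1):
        s = slice(start, start + window)
        pi, z, i = infl[s], gap[s], rate[s]
        X_ar = np.column_stack((np.ones(window - 1), pi[:-1]))
        B.append(ols(pi[1:], X_ar)[1])                      # equation (36): pi_t on pi_{t-1}
        X_rule = np.column_stack((z, pi - pi_star))
        b = ols(i - pi - r_star, X_rule)                    # equation (34) rearranged
        a_y.append(b[0])
        a_pi.append(b[1])
    return np.array(B), np.array(a_pi), np.array(a_y)

# usage (with user-supplied arrays): B, a_pi, a_y = rolling_estimates(infl, gap, rate)

Estimating the rule by OLS in this way abstracts from the specification and instrumenting details in Granato and Wong (2006); the sketch is only meant to show how the rolling-window point estimates in Figure 2 can be organized.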

5.3.3. Extending the Model

The model presented here, in some respects, is a mirror of the model proposed by Kedar. Kedar’s

model emphasizes microfoundations but makes assumptions about voter expectations. In this example, the behavioral concepts and analogues focus on expectations and learning, but there is little in the way of microfoundations or of strategic interaction between policymakers and the

public. These extensions would help determine the robustness of the equilibrium predictions

28 See also Granato and Wong (2006, 198–211).


pertaining to inflation persistence and inflation volatility.

6. Summary and Discussion

In this paper we describe the weaknesses of current research practices and how their nonfalsifiable nature fails in building a cumulative science. We argue that identifying causal mechanisms can

be achieved through a modeling dialogue that harnesses the unification of formal and empirical

analysis. In particular, we share the Cowles Commission emphasis on structural parameters and

devising ways to identify these structural parameters. We build on the Cowles Commission’s

work to recover a model’s parameters, but we also offer an alternative framework that can help

address the Sims (1980) critique of conventional structural estimation practice. In sum, our goal

is to develop a framework that provides a behavioral explanation of why parameters in an applied

statistical model act the way they do.

The EITM examples illustrate the way methodological unification can be extended beyond

more traditional approaches. We use analogues, work through the properties of the analogues,

and focus on the relation between the formal-theoretical parameter(s) and the applied statistical

parameter(s). Explicit emphasis on the parameters allows for greater likelihood of knowing what

is being tested. We account for the problems faced when using a reduced form, and we maintain that the reduced form is far superior to the current practices of data mining, overparameterization, and Omega matrices.

Having said this, it is also clear that there are further challenges in establishing linkages from parameters to tests. One ongoing issue is how to deal with overparameterization.

In our policy example, we work out a mechanism between the policy parameter (απ) and the

persistence parameter (Bt), but also note there are other free parameters in the expression for the

persistence parameter. The potential shifts in the remaining free parameters may be influencing

the results.

The fact that we are now focused on overparameterization and the issue of free parameters

as opposed to unscientific diversions is indicative of the scientific potential of methodological


unification. And yet, the existence of free parameters means that nonfalsification is still a threat.

Consider the policy example again. Because we have not worked out the behavior of certain

parameters in the system, we are open to the criticism that an autonomous shift in one of the free

parameters confounds the link between theory and test. Moreover, the policy example suggests

the framework must be extended to include a means “to fix” as many parameters in the system as

possible. The framework is flexible enough to accomplish this task. The issue will be making

appropriate use of additional formalization — informed by empirical and contextual evidence —

to provide greater specificity on the range of magnitudes that the previously free parameters take

and whether all free parameters are subject to some restriction.

From this we conclude that methodological unification can be thought of as having two levels. The first level involves results that are consistent with the formal and applied statistical analogues’ predictions.29 The second level requires even greater emphasis on identifying every structural relation in the formal model and analogue so that every estimate in the reduced form and applied statistical analogue is isolated. It follows from the inferential challenges posed by overparam-

eterization that efforts be directed to developing metrics on the degree to which free parameters

can confound the results. Our conjecture is to think along the lines of creating some “tolerance”

criteria on the relation between the structural equations, the number of free parameters, and the po-

tential to maximize falsification. One could envision a mix of rank and order type conditions in

conjunction with Extreme Bounds Analysis (EBA) (see Leamer 1983 and Bamber and van Santen

1985, 2000).

The framework we use is part of an ongoing process geared toward methodological unification

(see Morton 1999). What we describe can be extended in many ways. Unified methodological

frameworks that foster links to different levels of analysis (Kydland and Prescott 1982; Freeman

and Houser 1998) or make explicit use of game theory (Signorino 1999; Mebane and Sekhon

2002) are just two more examples. We also think the use of Bayesian analysis is a promising route

29 Analogues are central to our framework. Some have been created but others will have to be invented as the research questions demand them. For example, one can have a model and test that has changing parameters, but not follow the path of Lucas (1973).


for unification (Achen 2006). Experiments, too, provide a rich alternative or complement to the

secondary data analysis we use in this paper and are a natural outlet for methodological unification.

People can sit in different pews but still be members of the same congregation.

Ultimately, what methodological unification means — whether by using our framework or

some alternative — is a clean break from current practices such as data mining, overparameter-

ization, and Omega matrices. No half-measures will suffice if the goal is to build a cumulative

science that rests on falsifiable research practices. The usual conception of treating applied sta-

tistical problems as simply requiring statistical patches will need to give way to viewing these

nuisances as having a theoretical basis. With that new viewpoint a researcher can take advantage

of the mutually reinforcing properties of both formal and applied statistical analysis, and test the

linkage with data.


Achen, Christopher H. 1983. “Toward Theories of Data: The State of Political Methodology.” In Political Science: The State of the Discipline, ed. Ada Finifter. Washington, D.C.: American Political Science Association, pp. 69-93.

Achen, Christopher H. 2002. “An Agenda for the New Political Methodology: Microfoundations and ART.” Annual Review of Political Science 5: 423-450.

Achen, Christopher H. 2005. “Let’s Put Garbage-Can Regressions and Garbage-Can Probits Where They Belong.” Conflict Management and Peace Science 22: 327-339.

Achen, Christopher H. 2006. “Expressive Bayesian Voters, their Turnout Decisions, and Double Probit: Empirical Implications of a Theoretical Model.” Typescript. Princeton University.

Aldrich, John. 1989. “Autonomy.” Oxford Economic Papers 41: 15-34.

Aldrich, John H. 1980. Before the Convention: Strategies and Choices in Presidential Nomination Campaigns. Chicago: University of Chicago Press.

Alesina, Alberto, and Howard Rosenthal. 1995. Partisan Politics, Divided Government, and the Economy. New York: Cambridge University Press.

Arrow, Kenneth. 1948. “Mathematical Models in the Social Sciences.” In Advances in Experimental Psychology, eds. Daniel Lerner and Harold D. Lasswell. Stanford: Stanford University Press.

Bamber, Donald, and Jan P. H. van Santen. 1985. “How Many Parameters Can a Model Have and Still Be Testable.” Journal of Mathematical Psychology 29: 443-473.

Bamber, Donald, and Jan P. H. van Santen. 2000. “How to Assess a Model’s Testability and Identifiability.” Journal of Mathematical Psychology 44: 20-40.

Bartels, Larry M., and Henry E. Brady. 1993. “The State of Quantitative Political Methodology.” In Political Science: The State of the Discipline II, ed. Ada Finifter. Washington, D.C.: American Political Science Association, pp. 121-159.

Bernanke, Ben S., Thomas Laubach, Frederick S. Mishkin, and Adam S. Posen. 1999. Inflation Targeting: Lessons from the International Experience. Princeton, New Jersey: Princeton University Press.

Brady, Henry E., and David Collier. 2004. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, Maryland: Rowman and Littlefield.

Chappell, Henry W., Jr., and William R. Keech. 1983. “Welfare Consequences of the Six-Year Presidential Term Evaluated in the Context of a Model of the U.S. Economy.” American Political Science Review 77, 1: 75-91.

Christ, Carl F. 1994. “The Cowles Commission’s Contributions to Econometrics at Chicago, 1939-1955.” Journal of Economic Literature 62: 30-59.

Clarida, Richard H., Jordi Gali, and Mark Gertler. 2000. “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory.” Quarterly Journal of Economics 115: 147-180.


Clarke, Harold D., David Sanders, Marianne C. Stewart, and Paul Whiteley. 2004. Political Choice in Britain. New York: Oxford University Press.

Converse, Philip. 1969. “Of Time and Partisan Stability.” Comparative Political Studies 2: 139-171.

Denton, Frank T. 1985. “Data Mining as an Industry.” The Review of Economics and Statistics 67: 124-127.

Engle, Robert F., David F. Hendry, and Jean-Francois Richard. 1983. “Exogeneity.” Econometrica 51: 277-304.

Erikson, Robert S., Michael B. MacKuen, and James A. Stimson. 2002. The Macro Polity. Cambridge: Cambridge University Press.

Evans, George W., and Seppo Honkapohja. 2001. Learning and Expectations in Macroeconomics. Princeton: Princeton University Press.

Fisher, Franklin. 1966. The Identification Problem in Econometrics. New York: McGraw-Hill.

Freeman, John, and Daniel Houser. 1998. “A Computable Equilibrium Model for the Study of Political Economy.” American Journal of Political Science 42: 628-660.

Freeman, John, Tse-min Lin, and John Williams. 1989. “Vector Autoregression and the Study of Politics.” American Journal of Political Science 33: 842-877.

Friedman, Milton. 1957. A Theory of the Consumption Function. Princeton: Princeton University Press.

Granato, Jim, and Frank Scioli. 2004. “Puzzles, Proverbs, and Omega Matrices: The Scientific and Social Consequences of the Empirical Implications of Theoretical Models (EITM).” Perspectives on Politics 2: 313-323.

Granato, Jim, and M. C. Sunny Wong. 2006. The Role of Policymakers in Business Cycle Fluctuations. Cambridge: Cambridge University Press.

Haavelmo, Trygve. 1944. “The Probability Approach in Econometrics.” Supplement to Econometrica 12: S1-115.

Heckman, James. 2000. “Causal Parameters and Policy Analysis in Economics: A Twentieth Century Retrospective.” Quarterly Journal of Economics 115: 45-97.

Hendry, David. 1995. Dynamic Econometrics. New York: Oxford University Press.

Hibbs, Douglas. 1977. “Political Parties and Macroeconomic Policy.” American Political Science Review 71: 1467-1487.

Hodrick, Robert J., and Edward C. Prescott. 1980. “Postwar U.S. Business Cycles: An Empirical Investigation.” Typescript. Carnegie-Mellon University.

Johnston, Jack, and John DiNardo. 1997. Econometric Methods. New York: McGraw-Hill.


Kedar, Orit. 2005. “When Moderate Voters Prefer Extreme Parties: Policy Balancing in Parliamentary Elections.” American Political Science Review 99: 185-199.

Kramer, Gerald. 1983. “The Ecological Fallacy Revisited: Aggregate Versus Individual-Level Findings on Economics and Elections and Sociotropic Voting.” American Political Science Review 77: 92-111.

Kydland, Finn, and Edward Prescott. 1982. “Time to Build and Aggregate Fluctuations.” Econometrica 50: 1345-1370.

Leamer, Edward. 1983. “Let’s Take the ‘Con’ Out of Econometrics.” American Economic Review 73: 31-43.

Lin, Tse-min. 1999. “The Historical Significance of Economic Voting, 1872-1996.” Social Science History 23: 561-591.

Lovell, Michael C. 1983. “Data Mining.” The Review of Economics and Statistics 65: 1-12.

Lucas, Robert E., Jr. 1973. “Some International Evidence on Output-Inflation Tradeoffs.” American Economic Review 63: 326-334.

Lucas, Robert E., Jr. 1976. “Econometric Policy Evaluation: A Critique.” Carnegie-Rochester Conference on Public Policy 1: 19-46.

Manski, Charles. 1995. Identification Problems in the Social Sciences. Cambridge: Harvard University Press.

Mankiw, N. Gregory, David Romer, and David Weil. 1992. “A Contribution to the Empirics of Economic Growth.” Quarterly Journal of Economics 107 (May): 407-437.

McCallum, Bennett T. 1983. “On Nonuniqueness in Linear Rational Expectations Models: An Attempt at Perspective.” Journal of Monetary Economics 11: 134-168.

McCallum, Bennett T. 2003. “Multiple-Solution Indeterminacies in Monetary Policy Analysis.” Journal of Monetary Economics 50: 1153-1175.

McCallum, Bennett T., and Edward Nelson. 1999. “An Optimizing IS-LM Specification for Monetary Policy and Business Cycle Analysis.” Journal of Money, Credit and Banking 31: 296-316.

McKelvey, Richard D., and Thomas R. Palfrey. 1995. “Quantal Response Equilibria for Normal Form Games.” Games and Economic Behavior 10: 6-38.

McKelvey, Richard D., and Thomas R. Palfrey. 1996. “A Statistical Theory of Equilibrium in Games.” Japanese Economic Review 47: 186-209.

McKelvey, Richard D., and Thomas R. Palfrey. 1998. “Quantal Response Equilibria for Extensive Form Games.” Experimental Economics 1: 9-41.

Mebane, Walter, and Jasjeet Sekhon. 2002. “Coordination and Policy Moderation at Midterm.” American Political Science Review 96: 141-157.


Mizon, Grayham E. 1995. “A Simple Message for Autocorrelation Correctors: Don’t.” Journal of Econometrics 69: 267-288.

Morgan, Mary. 1990. The History of Econometric Ideas. Cambridge: Cambridge University Press.

Morton, Rebecca B. 1999. Methods and Models: A Guide to the Empirical Analysis of Formal Models in Political Science. New York: Cambridge University Press.

Pearl, Judea. 2000. Causality: Models, Reasoning, and Inference. Cambridge: Cambridge University Press.

Peltzman, Sam. 1990. “How Efficient is the Voting Market?” Journal of Law and Economics 33: 27-63.

Powell, Robert. 1999. In the Shadow of Power. Princeton: Princeton University Press.

Sargent, Thomas. 1979. Macroeconomic Theory. New York: Academic Press.

Signorino, Curt. 1999. “Strategic Interaction and the Statistical Analysis of International Conflict.” American Political Science Review 93: 279-297.

Sims, Christopher. 1980. “Macroeconomics and Reality.” Econometrica 48: 1-48.

Solow, Robert M. 1956. “A Contribution to the Theory of Economic Growth.” Quarterly Journal of Economics 70 (February): 65-94.

Suzuki, Motoshi, and Henry W. Chappell Jr. 1996. “The Rationality of Economic Voting Revisited.” The Journal of Politics 58: 224-236.

Taylor, John. 1993. “Discretion Versus Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public Policy 39: 195-214.

Taylor, John. 1999. “An Historical Analysis of Monetary Policy Rules.” In Monetary Policy Rules, ed. J. B. Taylor. Chicago: University of Chicago Press.

Wang, Miao, and M. C. Sunny Wong. 2005. “Learning Dynamics in Monetary Policy: The Robustness of an Aggressive Inflation Stabilizing Policy.” Journal of Macroeconomics 27: 143-151.


Figure 1. Time paths of inflation: Panel 1a, λ = 1.5 > 1; Panel 1b, λ = 0.5 < 1; Panel 1c, λ = 1. (Horizontal axis: time, 0-100; vertical axis: inflation.)


Figure 2. Fifteen-year rolling-sample estimates of the inflation persistence coefficient, the inflation aggressiveness parameter (απ), and the output aggressiveness parameter (αy). Windows run from 1960:I–1975:I through 1985:I–2000:I; the horizontal axis marks the end quarter of each window (75:1 through 00:1).