
Page 1: Master's Thesis Alexandre Lauwers

The Federal Reserve’s response to asset prices: A step-by-step approach

towards an augmented Taylor rule

ESL supervisor: Christian HAFNER

UNIGE supervisor: Charles WYPLOSZ

Master’s thesis submitted by

Alexandre LAUWERS

in partial fulfillment of the requirements for the double degree of the

“Master en sciences économiques orientation générale à finalité spécialisée” UCL - FUNDP and the

“Maîtrise universitaire en sciences économiques – Master of Science in Economics” UNIGE.

Academic year 2012-2013

Université Catholique de Louvain/ESL – Place Montesquieu 3 – BE-1348 Louvain-la-Neuve

Université de Namur/ESL – Rempart de la Vierge 8 – BE-5000 Namur

University of Geneva/SES – Uni Mail, Bd du Pont d’Arve 40 – CH-1211 Genève


I would like to sincerely thank Professor Christian Hafner for his availability and advice in the drafting of this master’s thesis.

I would also like to express my deep gratitude to Professor Charles Wyplosz, the co-supervisor of this thesis, for his encouragement and continuous support throughout this research.

I am particularly grateful to the University of Geneva for welcoming me during the year 2012, as well as to my fellows and friends Simon Derauw and Antoine Nyssen, who accompanied me throughout the year.

And on a more humorous note, I would like to thank my parents for their patience and for lending me a part of their cellar from which I could not escape.


Contents

Introduction 1

1 Monetary Policy and Asset Prices: A should-(can)-do dimensions debate 5

1.1 Should central banks respond to asset price changes? . . . . . . . . . . 5

1.2 Do central banks respond to asset price changes? . . . . . . . . . . . . 8

2 Model specification and estimation method 11

2.1 Model specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.1.1 The original Taylor Rule . . . . . . . . . . . . . . . . . . . . . . 11

2.1.2 A standard forward-looking monetary policy rule . . . . . . . . 12

2.1.3 The real interest rate equation and the Taylor Principle . . . . . 14

2.1.4 A dynamic specification: interest rates smoothing . . . . . . . 15

2.1.5 The rational expectations assumption . . . . . . . . . . . . . . . 17

2.2 An adequate estimation method . . . . . . . . . . . . . . . . . . . . . . 18

2.2.1 Avoiding OLS: a close inspection of the error term . . . . . . . 18

2.2.2 An instrumental variables estimation: the GMM procedure . . 19

2.2.3 Specification tests: on the validity and relevance of instruments 21

2.3 An augmented version of the Taylor rule: including the potential role of asset prices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3 Empirical evidence and implications 26

3.1 Choice of the baseline model . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1.1 The baseline instrument set . . . . . . . . . . . . . . . . . . . . 26

3.1.2 Implementation of an AR(3) interest rate structure . . . . . . . 27

3.1.3 Choice of the baseline forecasting horizon . . . . . . . . . . . . . 28

3.1.4 Recover the inflation target . . . . . . . . . . . . . . . . . . . . 32

3.2 A standard forward-looking Taylor rule . . . . . . . . . . . . . . . . . . 33

3.2.1 Estimation results for the baseline case . . . . . . . . . . . . . . 33

3.2.2 Historical performance . . . . . . . . . . . . . . . . . . . . . . . 36

3.2.3 Alternative measures and robustness of recommendations . . . . 40

3.2.4 Subsample stability analysis and an issue of identification . . . . 43

3.3 An augmented forward-looking Taylor rule . . . . . . . . . . . . . . . . 49

3.3.1 Initial empirical results on the augmented reaction functions . . 50

3.3.2 Some robustness tests for the Stock price augmented rule . . . . 55

3.3.3 Historical Performance and Decomposition . . . . . . . . . . . . 62


3.3.4 A recursive window estimation . . . . . . . . . . . . . . . . . . . 66

3.3.5 A presumed asymmetric response towards stock prices . . . . . 71

Conclusion 78

Appendix A Data Selection i

Appendix B Time series properties of the data x

Appendix C Further results in reference to Section 3.2 xv

Appendix D Further results in reference to Section 3.3.1 xx

Appendix E The normative debate: a decision tree xxii


List of Figures

1 Actual and Fitted values of the U.S. Federal funds rate . . . . . . . . . . . . 37

2 Actual and Target values of the U.S. Federal funds rate . . . . . . . . . . . 38

3 Range of rule prescriptions for various measures of inflation and the output gap 41

4 Isolating the respective influence of inflation and the output gap . . . . . . . 43

5 Reverse Recursive Coefficient Estimates . . . . . . . . . . . . . . . . . . . . . 47

6 Actual and target values of the U.S. Federal funds rate: augmented Taylor rule 63

7 Interest rates target decomposition . . . . . . . . . . . . . . . . . . . . . . . . 65

8 Recursive GMM results for the augmented policy rule . . . . . . . . . . . . . 69

9 Rolling GMM t-statistics for the asymmetric term . . . . . . . . . . . . . . . 74

A.1 Inflation rate measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

A.2 Output gap measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

A.3 Federal funds rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

A.4 Yearly change in the Stock and House price indexes . . . . . . . . . . . . . . vii

A.5 Alternative measures of stock price misalignments . . . . . . . . . . . . . . . viii

A.6 Time paths and correlograms . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

List of Tables

1 Sensitivity analysis to varying inflation forecast horizons . . . . . . . . . . . 30

2 OLS vs. GMM Baseline Estimates . . . . . . . . . . . . . . . . . . . . . . . . 34

3 Standard Forward-Looking Taylor rule in High and Low Inflation Subperiods 45

4 GMM Baseline Estimates: Augmented Forward-Looking Taylor Rule . . . . 52

5 Robustness: alternative horizons for the output gap . . . . . . . . . . . . . . 55

6 Robustness: alternative measures for the stock market variable . . . . . . . . 57

7 GMM alternative estimations: Augmented Forward-Looking Taylor Rule . . 61

8 Summary statistics: actual vs. Taylor rule target rates . . . . . . . . . . . . 64

9 Asymmetric monetary policy with respect to stock price . . . . . . . . . . . . 73

B.1 Unit-root tests on the full sample period . . . . . . . . . . . . . . . . . . . . xiii

C.1 Standard Forward-Looking Taylor Rule: alternative samples . . . . . . . . . xv

C.2 Standard Forward-Looking Taylor Rule: alternative horizons for the output gap xvi

C.3 Standard Forward-Looking Taylor Rule: alternative measures of inflation and economic slack . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

C.4 Subsample stability analysis: Volcker vs. Greenspan eras . . . . . . . xix

D.1 Redundant information with the business cycle index NAPM? . . . . . xx

D.2 Different ways for stock returns to enter the augmented rule . . . . . . . . . xxi


Introduction

Ever since Chairman Greenspan’s speech in the fall of 1996 addressing the Federal Reserve’s concern about the “irrational exuberance” in the stock market, a spirited debate has erupted over the appropriate role for asset prices in monetary policy deliberations. This debate can be divided into two main research fields: the normative debate, which considers the complex question 1) Should central banks respond to asset price movements?, and the positive debate, which studies the controversial question 2) Have central banks responded to asset price movements? This study does not attempt to answer the first question. Instead, we focus on the second debate, that is, we investigate whether and how the U.S. Federal Reserve has reacted to asset price developments.

It is now standard practice to represent monetary policy as a powerful stabilization tool, which can offset potentially inefficient economic fluctuations, via the implementation of an interest rate rule – namely a Taylor rule. The benchmark rule of Taylor [1993] states that monetary authorities should set the short-term nominal interest rate in proportion to the deviations of two key state variables, price inflation and the real output gap, from their equilibrium levels. Whereas the main advantage of the Taylor rule lies in its simplicity and intuitiveness, an extensive empirical literature has been directed at establishing to what extent this kind of rule can adequately represent a process as complex as that of monetary policy [Chadha, Sarno & Valente, 2004].
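For reference, the benchmark rule of Taylor [1993] can be written as follows (the standard textbook formulation with Taylor’s equal weights of 0.5; the notation here is generic and not that of the specification estimated later in this thesis):

```latex
i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,y_t
```

where $i_t$ is the nominal federal funds rate, $r^{*}$ the equilibrium real interest rate, $\pi_t$ inflation over the previous four quarters, $\pi^{*}$ the inflation target, and $y_t$ the real output gap.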

Accordingly, in an effort to enhance the historical performance of the benchmark rule, a number of important extensions have been introduced – two of which deserve particular attention. The first attributes a role to past interest rate values, with the intention of capturing the observed gradualism in interest rate setting decisions [Clarida et al., 1999, 2000]. The second extension holds that monetary policy is forward-looking, as central banks aim to anticipate trends in inflation and output [Clarida et al., 1999, 2000; Batini & Haldane, 1999].¹ Not least because monetary policy actions take time to have their full impact on the economy, central bank decisions should be based on anticipating the economic conditions that will prevail at a certain future horizon. This prospective aspect of monetary policy is not without significant practical difficulties, insofar as predicting future risks to price and output stability is anything but an easy task. Central banks invest considerable resources to collect sufficiently broad information, based on statistical and spreadsheet forecasts, on the results drawn from macroeconometric simulation exercises, on data provided in business and consumer surveys, etc. To these could also be added information from certain key financial variables with strong predictive power, such as stock prices, real estate prices and exchange rates.
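These two extensions are commonly combined as follows (a generic formulation in the spirit of Clarida et al.; the exact specification estimated in this thesis is laid out in Section 2):

```latex
i_t^{*} = \bar{r} + \beta\,\bigl(E_t[\pi_{t+k}] - \pi^{*}\bigr) + \gamma\,E_t[y_{t+q}],
\qquad
i_t = \rho\,i_{t-1} + (1-\rho)\,i_t^{*} + \varepsilon_t
```

where $i_t^{*}$ is the desired target rate, $k$ and $q$ the forecast horizons for inflation and the output gap, $\rho \in [0,1)$ the degree of interest rate smoothing, and $\varepsilon_t$ a policy shock. The Taylor principle corresponds to $\beta > 1$, so that the real rate rises when expected inflation rises.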

Since the financial liberalization in the United States in the 1980s, asset prices have grown in importance, allocating resources across sectors and time, and increasingly affecting the course of the economy. Consequently, an important issue for central banks in recent years has been what weight to place on asset prices in setting interest rates. Asset prices could assume several related roles relevant to the conduct of monetary policy: they can act as leading indicators of inflation, output and financial distress, they can serve as a good proxy for households’ financial wealth, and they can provide information about market expectations and risk-taking behaviour. It has been suggested that central banks pay increasing attention to asset price developments because they convey useful information that improves forecasts and helps guide policy decisions.² From this perspective, asset prices might have an indirect role to play in the monetary policy process: central banks should respond to asset prices insofar as they provide information about the future course of the central bank’s ultimate objectives – for the Federal Reserve’s dual mandate, inflation and output. In this case, asset prices would constitute forward-looking indicators for the central bank’s traditionally accepted goal variables.

¹ As it explicitly involves the presence of expected inflation and the output gap in the policy rule, the Generalized Method of Moments (GMM) has become a common estimation approach since the landmark paper by Clarida et al. [1999].

There might be other reasons for policymakers to carefully monitor asset price developments. Officially, the main emphasis of monetary policy at many central banks is on inflation control, as inflation in the price of goods and services can distort the allocation of resources and can be seriously harmful to economies. But just as with conventional inflation, booms and busts in asset prices can represent a major threat to economic stability. In a nutshell, asset price developments can have important effects on the real economy through: the wealth effect on consumption, the confidence effect on private spending, the cost of raising equity capital to finance corporate investment (Tobin’s Q effect) and the borrowing capacity of households and firms using asset values as collateral (the credit channel). These transmission channels have taken on a new importance in recent years, with occasionally large and unusual developments in asset prices, such as strong booms and busts in the stock market and an exceptionally long upward trend in house prices followed by a dramatic plunge. This tendency can be a rationale for central bank intervention.

As a result, Chadha, Sarno & Valente [2004] introduce a useful distinction: “A useful dichotomy is whether asset price movements are a concern for central banks only because they contain information about future inflation – they are used as informational variables – or whether they should be seen as variables to which central banks should react in addition to expected inflation and possibly the output gap.” This distinction introduces a more controversial role for asset prices in the conduct of monetary policy: whether the central bank should or should not take a direct concern in asset prices, independent of their influence on the central bank’s ultimate objectives. In the context of an interest rate rule, one should ask whether, in addition to an inflation and an output gap term, a term representing stock price movements should also be included. This normative issue of the appropriate monetary policy response to asset price fluctuations is discussed in the literature pioneered by Bernanke & Gertler [1999, 2001] and Cecchetti, Genberg, Lipsky & Wadhwani [2000]. Using a New-Keynesian model augmented with wealth and financial accelerator effects, Bernanke & Gertler made a clear prescription: monetary policy should not respond to changes in asset prices, except insofar as they have implications for future inflation. According to the authors, the optimal rule in terms of macroeconomic stability is of an inflation-targeting type without any explicit tribute to asset prices – stock prices in this case. Drawing on the same model but with the introduction of the output gap into the policy rule, Cecchetti et al. made, inversely, a case for a measured “leaning against the wind” strategy with respect to asset prices; their numerical simulations indicate that including an asset price term in the rule can actually reduce inflation and output volatility. Now that we face the tangible and destructive aftermath of the bursting of the house price bubble, the triumph of price stability as a strategy which produced the “Great Moderation” years is being reconsidered. The 2007-2009 financial crisis has understandably challenged the Bernanke & Gertler prescription; price stability does not guarantee financial stability. In any event, whether either of these authors’ prescriptions plays a role in the Federal Reserve’s interest rate setting is an empirical question.

² Bernanke indicates that “Central Bankers naturally pay a close attention to interest rates and asset prices... [they] are potentially valuable sources of timely information about economic and financial conditions ... [and] should embody a great deal of investors’ collective information and beliefs about the future course of the economy.”, What policymakers can learn from asset prices, speech before the Investment Analyst Society of Chicago, 2004.
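In interest-rate-rule terms, the disagreement is over whether an asset price term $s_t$ belongs in the target rate (a generic augmented form, using illustrative notation; the thesis’s precise specification is given in Section 2.3):

```latex
i_t^{*} = \bar{r} + \beta\,\bigl(E_t[\pi_{t+k}] - \pi^{*}\bigr) + \gamma\,E_t[y_{t+q}] + \xi\,s_t
```

The Bernanke & Gertler prescription corresponds to $\xi = 0$, with asset prices entering only through the inflation and output forecasts, while the Cecchetti et al. “leaning against the wind” view corresponds to $\xi > 0$.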

This study does not take a stand on whether it would be appropriate for the Federal Reserve to respond directly to asset prices. No less importantly, account must be taken of what the Federal Reserve actually did. It is essential to understand the role asset prices played in monetary decision making in order to evaluate the policy implications of the last house price bubble and to reconsider the optimal response of monetary policy to asset prices. Hence, we modestly attempt to offer an empirical assessment of the following question: what has been the response of the U.S. Federal Reserve to movements in asset prices? This empirical issue has been investigated by an extensive literature, yet the answers to the question are not unified. With respect to the Federal Reserve’s behaviour, among others, Chadha, Sarno & Valente [2004] and Rigobon & Sack [2003] find evidence of a direct response to stock price movements, while Bernanke & Gertler [1999] and Fuhrer & Tootell [2008] show that the policy response to a stock price change is not evident.

In this master’s thesis, we consider the inclusion of both stock prices and house prices in a standard forward-looking and inertial Taylor rule framework, based on ex-post realized data. Standard and augmented policy rules are estimated with the Generalized Method of Moments, using monthly data from October 1979 to November 2011. Within this framework, asset prices are allowed to act both as monetary policy targets and as information variables. Hence, this procedure can assess whether the U.S. Federal Reserve has responded to asset prices during the last decades or whether it has used asset prices only as information variables that can help predict future inflation or output. While the main purpose is to consider whether and how the Central Bank reacts to asset price movements by means of an augmented interest rate rule, we address this issue with a step-by-step approach. Accordingly, a large part of the empirical investigation is first dedicated to the study of a very general monetary policy rule comprising inflation and the output gap – referred to as the standard forward-looking Taylor rule. Before studying the ambiguous role of asset prices, it is important to highlight the various assumptions, the arbitrary choices imposed by the methodology and the lessons drawn from these initial estimations. Given the complexity and controversial aspect of this subject, emphasis has been placed on providing a rigorous and cautious interpretation of the evidence. We try to present some insights regarding the opportunities and limitations of incorporating financial variables in a linear Taylor rule at a monthly frequency.
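To illustrate the mechanics of this kind of estimation (this is not the thesis’s actual code or data: the series are simulated, and the parameter values and instrument set are invented for the example), a two-step GMM estimation of a forward-looking rule with interest rate smoothing might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400

# Simulated inflation and output gap: AR(1) stand-ins for real data.
pi = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    pi[t] = 0.9 * pi[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# "True" rule: the bank reacts to its forecast E_t[pi_{t+1}] = 0.9*pi_t,
# with smoothing parameter rho. All parameter values are hypothetical.
rho, const, beta, gamma = 0.8, 1.0, 1.5, 0.5
i = np.zeros(T)
for t in range(1, T - 1):
    target = const + beta * (0.9 * pi[t]) + gamma * y[t]
    i[t] = rho * i[t - 1] + (1 - rho) * target + rng.normal(scale=0.1)

# The econometrician regresses i_t on the *realized* pi_{t+1}; the forecast
# error then sits in the residual, so variables dated t and earlier, which
# are orthogonal to that error, serve as instruments.
idx = np.arange(2, T - 1)
Y = i[idx]
X = np.column_stack([np.ones(idx.size), i[idx - 1], pi[idx + 1], y[idx]])
Z = np.column_stack([np.ones(idx.size), i[idx - 1],
                     pi[idx], pi[idx - 1], y[idx], y[idx - 1]])

def gmm_step(W):
    """Solve min_b (Z'(Y - Xb))' W (Z'(Y - Xb)) in closed form."""
    ZX, ZY = Z.T @ X, Z.T @ Y
    return np.linalg.solve(ZX.T @ W @ ZX, ZX.T @ W @ ZY)

b1 = gmm_step(np.linalg.inv(Z.T @ Z))             # first step (2SLS weights)
u = Y - X @ b1
Zu = Z * u[:, None]
b2 = gmm_step(np.linalg.inv(Zu.T @ Zu / Y.size))  # efficient second step

rho_hat = b2[1]                     # smoothing parameter
beta_hat = b2[2] / (1 - rho_hat)    # long-run inflation response
gamma_hat = b2[3] / (1 - rho_hat)   # long-run output-gap response
```

With the hypothetical values above, the second-step estimates should land roughly near the true smoothing parameter and long-run responses; in practice one would also report robust standard errors and Hansen’s J test of the overidentifying restrictions.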

The empirical evidence confirms the main results of the literature on forward-looking policy rules: namely, that over the full sample period the Federal Reserve has been aggressive towards inflation while also assigning a positive weight to the stabilization of the real economy, and that there is considerable gradualism in interest rate setting. Of special interest, the GMM estimations of the augmented policy rule indicate that the Federal Reserve considered stock price developments in setting interest rates, while the role of house prices could not be identified properly. Specifically, we find that the estimated parameter associated with stock market developments is statistically significant, suggesting that stock prices are relevant not only as instruments but also as arguments in the Federal Reserve’s policy rule. We then expose these results to several alternative interpretations. Further refinements suggest that the Federal Reserve does not target stock prices per se, that is, as a systematic goal; rather, it seems to have reacted to stock price developments on a few occasions during the full sample period when misalignments were relatively large, an interpretation in line with Chadha et al. [2004]. Although our results have been shown to be robust to different specifications and proxy variables, the simple structure of Taylor rules clearly has disadvantages, hiding the fact that many aspects of rule specification are subject to considerable uncertainty and ignoring the potential for non-linearities or asymmetries in the Central Bank’s response. Moreover, a monthly frequency setting yields considerable inertia in the interest rate series and poses significant problems when studying the stability of the estimated parameters. Therefore, our empirical investigation should be considered a first rough attempt to describe the Federal Reserve’s behaviour and the potential link between asset prices and monetary policy setting.

The remainder of the master’s thesis is structured as follows. Section 1 briefly discusses some key insights into the normative and positive debates on the role of asset prices in the Federal Reserve’s monetary policy. Section 2 lays down the theoretical framework of the monetary policy rules estimated in this study. Specifically, we expose Clarida, Galí & Gertler [1999]’s econometric design of the standard forward-looking Taylor rule together with the adequate estimation method, and finally introduce the forward-looking Taylor rule augmented with stock and house prices. Before moving to the empirical investigation, we recommend referring to Appendix A for a description of the selected data and some relevant unit root tests. Section 3 is dedicated to the presentation and discussion of our main empirical results. First, we describe the relevant arbitrary choices imposed by the estimation procedure. Then, we examine in depth the estimation of the standard rule. Building on the lessons drawn from this initial analysis, we use an augmented rule to investigate whether and how the Federal Reserve reacts to asset price developments. The last section summarizes our results.


1 Monetary Policy and Asset Prices:

A should-(can)-do dimensions debate

This section briefly reviews some key insights into the normative and positive debates on the role of asset prices in the Federal Reserve’s monetary policy.

1.1 Should central banks respond to asset price changes?

Most central banks have adopted an approach of inflation targeting, taking account of the output gap as well. This common monetary policy set-up can best be characterized by a standard Taylor rule in which the target interest rate is determined in reaction to inflation and the output gap. Within this approach, asset prices enter the central bank’s decision-making process only insofar as they convey important signals for future inflation and affect output. In particular, an inflation-targeting rule guides the central bank to tighten its policy stance in the face of a run-up in asset prices if this increase puts pressure on inflation. Conversely, if an asset price burst negatively affects output and causes inflation expectations to decline, then the central bank should reduce its instrument rate. Thus, to the extent that asset prices affect inflation and output, a central bank dedicated to achieving the desired inflation target will stabilize asset prices as well [Botzen & Marey, 2006]. This policy of “flexible inflation targeting” has been advocated by Bernanke & Gertler [1999, 2001]. The authors “view price stability and financial stability as highly complementary and mutually consistent objectives to be pursued within a unified policy framework”. Extending the Bernanke, Gertler & Gilchrist [1999] financial accelerator model by incorporating exogenous bubbles in asset prices, Bernanke & Gertler conclude from their stochastic simulations that central bankers should not respond to changes in asset prices as such, except insofar as asset price fluctuations carry additional information about expected inflation.³ Specifically, their simulations reveal that including stock prices in a Taylor-type monetary policy rule would lead to more volatile output and inflation, independent of whether asset price developments were known to be caused by “bubbles” – i.e. non-fundamental forces – or not. In other words, the existence of a bubble should not cause the central bank to change its policy of targeting inflation.

In contrast to Bernanke & Gertler [1999, 2001] and other authors who recommend that monetary policy should not respond directly to asset price bubbles, Cecchetti et al. [2000] argue that central banks should actively try to stabilize asset prices around fundamentals. This alternative view can be characterized by an augmented Taylor rule in which the target interest rate is determined in reaction to inflation, the output gap and asset prices. In particular, from a slightly adapted version of the Bernanke & Gertler [1999] model, Cecchetti et al. [2000] argue that a monetary policy rule which directly includes a reaction to stock prices is optimal, as macroeconomic performance can be improved when policymakers pay attention to asset prices. Since asset price busts can cause serious damage to the economy and asset price booms can encourage over-investment and excessive borrowing by households and firms, their tendency to amplify the business cycle can be a rationale for central banks to intervene in reaction to asset price developments.

³ The authors use a dynamic New-Keynesian model modified to allow for financial market imperfections. These introduce the presence of a premium on external finance for firms and households, which ultimately produces a “financial accelerator” mechanism that magnifies the effects of exogenous shocks. Moreover, their model allows for stock prices to be driven not only by fundamental shocks but also by pure bubbles that emerge and burst stochastically. This exogenous asset price bubble plays a role in the model economy via the wealth effect on consumption and via the balance-sheet effects on firms’ cost of capital and thus on investment decisions. Note that the assumption of imperfect capital markets is a crucial condition for a potential monetary policy response to asset prices; otherwise there is no scope for asset prices to move away from fundamentals. But, as their findings illustrate, it is not a sufficient condition.

It has been widely recognized that asset price fluctuations are transmitted to the real economy through a variety of channels. We briefly present the three main channels that dominate the literature.

The first main channel covers wealth effects on households. Higher stock prices increase the wealth of households, who in turn allocate more resources to consumption [Mishkin, 2001]. This effect can also materialize via real estate prices.⁴

Another major channel involving households and firms concerns the “balance-sheet” or “net-worth” channel. It relates to the effects of asset prices on the value of the collateral underlying loans and mortgages, which in turn affects agents’ borrowing capacity. Specifically, an increase in stock and real estate prices improves the balance sheets of households and firms. Higher net worth translates into higher collateral available for borrowing by companies and households. This in turn lowers borrowing costs and eases lending, thus increasing spending and productive investment.⁵ The case is reversed when asset prices collapse, which can cause serious economic distress, as occurred in 2008-09.

A last important channel operates through the companies issuing stock, according to Tobin’s Q ratio [1969], which relates the market capitalization of a firm to the replacement cost of its capital. In response to a rise in stock prices, the market capitalization expressed by the cumulative value of the stock improves, which increases the ratio Q. This implies that capital is more valuable inside the company, and consequently new investments are profitable. Aggregating across companies, the theory concludes that an equity price increase will raise investment and a decrease will depress it.

To conclude, since asset prices are a central element in the transmission mechanisms of monetary policy, Mishkin [2011] argues that “the issue of how monetary policy might respond to asset-price movements is not whether it should respond at all but whether it should respond over and above the response called for in terms of objectives to stabilise inflation and employment.”
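For reference, the Tobin’s Q channel described above can be stated compactly (a textbook formulation, not notation used elsewhere in this thesis):

```latex
Q_t = \frac{\text{market value of installed capital}}{\text{replacement cost of capital}}
```

When rising stock prices push $Q_t$ above one, installed capital is worth more than it costs to replace, so financing new investment by issuing equity becomes attractive; falling stock prices work in reverse.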

The evident dangers of asset price bubbles and their tendency to magnify the business cycle have led some economists and central bankers, such as Cecchetti et al. [2000] and Lowe & Borio [2002] among others, to advocate a pre-emptive monetary policy response of tightening to prevent or “pop” bubbles. In other words, central banks should “lean against the wind” of asset price developments, especially during expansionary phases. The main rationale is that the sooner an asset price bubble bursts, the easier it is for policymakers to deal with its aftermath, with far less damage to the economy.

⁴ It is noteworthy that the United States has one of the highest percentages of households owning shares – 45 percent in 2012 – and a homeownership rate above 60 percent. Thus, share and real estate prices clearly act as proxies for the financial wealth of US households.

⁵ Higher collateral values decrease the external finance premium, since the moral hazard and adverse selection problems are less severe; see Bernanke et al. [1999].

In clear-cut contradiction with this proactive monetary policy strategy, several authors have promoted what can be referred to as the “Greenspan doctrine”: the central bank should not fight asset price bubbles, but rather should just “clean up” after they burst [Greenspan, 2002]. This asymmetric policy approach to bubbles became known as the “Jackson Hole Consensus” [Issing, 2009]. The arguments behind this opposing view introduce a parallel and more practical part of the discussion, oriented towards the question “can they react?”, i.e. how a policy of leaning against the wind can be carried out in practice [Ravn, 2012]. Several authors are sceptical about the ability of a central bank to implement a policy that is responsive to asset prices. To give an overview of this discussion, a figure from Rudebusch [2005] is included in Appendix E, which summarizes, as a decision tree, the main practical issues relative to a monetary policy response to asset-price misalignments. The major arguments are presented briefly below.

First, one can argue that central banks cannot identify asset price bubbles in real time with certainty [Bernanke & Gertler, 1999]. In other words, in practice it is unlikely that monetary policymakers have the ability to discern whether changes in asset prices result from economic fundamentals, such as improvements in productivity and earnings, or from non-fundamental shocks, due to “animal spirits” or “irrational exuberance”.

Second, even if central banks could, the issue of how to “prick” an identified bubble still remains. More specifically, marginal interest rate increases in an attempt to curtail an asset price bubble give no guarantee that the bubble will actually burst. Posen [2006] stresses that “monetary policy is too blunt a tool to curb excessive growth in asset prices”.

Third, while the impact of a monetary contraction in pricking asset price inflation is highly uncertain, such a policy action is sure to have negative effects on the output and volatility of the economy. Thus, monetary policy should adopt an asymmetric behaviour towards asset price developments: that is, waiting for bubbles to burst on their own rather than actively attempting to pop them and risking greater economic instability.

Each argument could be challenged. In particular, the third argument is naturally questioned as we emerge from the current crisis and the brutal aftermath of the housing bubble burst. While the recent financial crisis has challenged or consolidated many of the arguments presented throughout this section, this subject is unfortunately beyond the scope of our research topic.⁶

To sum up, the optimal monetary policy reaction to asset price developments is far from commanding a consensus. But even if there were a consensus about the optimality of responding to asset prices, the practical conduct of such a policy would pose considerable challenges for monetary policy.

6 For those interested in this crucial subject, we recommend the book by Evanoff, Kaufman & Malliaris [2012], which reexamines and updates a number of leading pre-crisis articles on asset price bubbles.


Page 14: Master's Thesis Alexandre Lauwers

1.2 Do central banks respond to asset price changes?

Other researchers have focused on the positive part of the discussion: do central banks respond to asset price developments? In contrast to the inconclusiveness of the normative question, this issue can be answered by exploring the data. Yet the answer here, too, is not unified.

The first empirical study that explicitly addresses the response of the Federal Reserve to asset price developments is that of Bernanke & Gertler [1999]. Within a monthly frequency analysis, they extend the standard forward-looking policy rule framework proposed by Clarida et al. [1999] and estimate an augmented version where they add to the regressors – expected inflation and the output gap – the contemporaneous and lagged changes in stock prices. The current change in stock prices is then instrumented so as to help control for the simultaneity bias in the relationship between monetary policy and stock prices. Using the Generalized Method of Moments to estimate this augmented Taylor rule for the period 1979:10-1997:12, they find that the reaction of the Federal Reserve to stock price movements is negative and insignificant. As an interpretation, they suggest that, within a forward-looking policy rule, the impact of stock prices is already incorporated in the forecasts of output and inflation. Relying instead on the actual forecasts examined by monetary policymakers before every interest rate decision – the "Greenbook" forecasts – Fuhrer & Tootell [2008] find little evidence that the Federal Reserve responds directly to stock prices over the period 1987:3-2002:3, also suggesting that the response to asset prices is filtered through the forecasts of inflation and output, the two ultimate goals of monetary policy. They insist that methods using ex-post datasets, such as in Bernanke & Gertler [1999], fail to adequately identify the information that actually enters the central bank's decision-making process and may result in incorrect inference about the role of asset prices in setting the Federal funds rate.

Using the same econometric approach as Bernanke & Gertler [1999], Chadha, Sarno & Valente [2004] have also examined whether asset prices can be included in an augmented Taylor rule, based on quarterly data for the United States, the United Kingdom and Japan over the 1979-2000 period. In particular, the dividend-price ratio and the real exchange rate are introduced in the extended rule once-lagged, in order to eliminate by definition the endogeneity problem. In contrast with Bernanke & Gertler's results, Chadha et al. [2004] do find that both asset price and real exchange rate deviations from their equilibrium values enter the Federal Reserve's reaction function significantly and robustly. Their results suggest that monetary policymakers use asset prices not only as part of the information set they use to set interest rates, but also as direct arguments in their reaction function. These results are, however, open to several nuances. The authors argue that the significance might come from the asset prices proxying the part of expected inflation and output which cannot be explained by the instruments used in the GMM estimation. Moreover, in conclusion to their study, the authors stress that, despite the significance of asset prices in the augmented policy rule, central bankers might not target asset prices per se, i.e. react systematically to any disequilibria, but rather react occasionally, when misalignments are large.



Relying on the same framework with quarterly data from 1990:1 to 2003:1, Cecchetti [2003] provides evidence that the Federal Reserve's words and actions were influenced by the stock market bubble of the late 1990s. The author finds a significant response of monetary policy to a measure of the presence of a bubble in the stock market – defined as the excess equity premium constructed from the dividend discount model – and to an indicator of stress in the banking system. He concludes that the debate should not be about whether the Federal Reserve should have reacted, but whether it did enough. Also concerned with the behaviour of the Federal Reserve during the new-economy bubble, Hayford & Malliaris [2005], however, find no empirical evidence that monetary policymakers attempted to deflate an apparent speculative bubble in the stock market; a result confirmed for three different measures of stock market overvaluation – the P/E ratio, the implied equity premium and Yardeni's stock valuation model. Instead, their findings from static and dynamic augmented Taylor rules over 1987:3 to 2000:2 suggest that, perhaps unintentionally, the Federal Reserve might have accommodated the growing valuations of the stock market during that period. Drescher, Erler & Krizanac [2010] also report findings that question whether the Federal Reserve promoted the real estate market by means of loose monetary policy during the period 1985:1-2007:1. Using an asset cycle dating procedure that reveals real-time developments in real estate prices, the authors show that the Federal Reserve does implicitly respond to real estate prices. But these responses are pro-cyclical and may thus provide a setting for the build-up of asset price bubbles. Their rolling subsample estimations point out that this pro-cyclical monetary policy response changes just prior to the peaks of asset price bubbles. And when the bubble effectively bursts, the central bank intervenes to stabilize the depressed asset prices. According to the authors, "this stabilization is the first step into a next trap".

Another approach to identifying the relationship between monetary policy and asset prices is found in Rigobon & Sack [2003], who concentrate on the endogeneity issue that arises when one tries to measure the reaction of monetary policy to the stock market. The authors criticize the standard solution procedure that consists of instrumenting for the change in stock prices with lags of macroeconomic variables and stock returns. In order to properly solve the simultaneity bias, Rigobon & Sack introduce a two-variable VAR model and propose an identification technique based on the heteroscedasticity found in high-frequency interest rates and stock market returns. Following this approach and using daily and weekly US data over the period 1985-1999, they find that the Federal Reserve does react significantly to changes in stock market valuations when adjusting its instrument rate. In particular, they estimate that a 5 percent rise in the Standard & Poor's 500 stock index increases the likelihood of a 25-basis-point tightening by 57 percent. Finally, the authors suggest that this estimated response is approximately of the magnitude required to offset the effect of increases in stock prices on aggregate demand. Whereas Rigobon & Sack's findings are symmetric with respect to a decrease in the level of stock prices, Ravn [2012] reports that the U.S. central bank in fact pursued an asymmetric reaction towards movements in stock prices over the period 1998-2008: while a 5 percent drop in the S&P 500 index increases the probability of a 25-basis-point interest rate cut by one third, no significant reaction of monetary policy to stock price increases can be identified.



As the empirical literature has grown quite substantial since Bernanke & Gertler [1999]'s estimations, we have only presented some views on the debate, illustrating that this positive question may seem mundane at first sight, but the answers are not unified and interpretations must be cautious. Recent contributions to this literature include, among others, Mattesini & Becchetti [2008], who concentrate on a more appropriate proxy for stock price disequilibria; Dupor & Conley [2004], who focus on the "Great Moderation" period; Pamela [2011], who examines the existence of a "Greenspan Put"; and Botzen & Marey [2006], who investigate the role of stock prices in the ECB's interest rate setting.



2 Model specification and estimation method

In this section, we review the theoretical framework for the monetary policy rules that are estimated in Section 3. Starting from the original rule of Taylor [1993], we successively introduce and attempt to rationalize the extensions proposed by Clarida et al. [1999]. Next, we present an adequate estimation method to overcome the endogeneity concerns induced by the rational expectations assumption. Finally, we generalize the standard framework to an interest rate rule that explicitly allows asset prices to act both as monetary policy targets and as information variables.

2.1 Model specification

2.1.1 The original Taylor Rule

At the November 1992 Carnegie-Rochester Conference on Public Policy, John Taylor established a rule that gave rise to a revolutionary approach to both monetary theory and monetary policy in practice. Taylor's [1993] original formulation relates the setting of the short-term nominal interest rate to the evolution of two key economic variables, lagged price inflation and the output gap, as shown in Eq. (1)7:

r_t = rr + π_{t-1} + f_π(π_{t-1} − π*) + f_y(y_t − y*_t)_{t-1}    (1)

where r_t represents the short-term nominal interest rate, rr the equilibrium real interest rate, π_{t-1} the inflation rate over the previous four quarters, π* the central bank's inflation target, and (y_t − y*_t)_{t-1} the output gap, i.e. the difference between actual output and potential output. Rewriting Eq. (1) slightly, the rule can be expressed as,

r_t = rr + π* + β_π(π_{t-1} − π*) + β_y y_{t-1}    (2)

where y_{t-1} ≡ (y_t − y*_t)_{t-1}, and β_π ≡ 1 + f_π and β_y ≡ f_y are the inflation and output response coefficients, respectively.

As argued by Kahn [2012], the broad appeal of the Taylor rule comes from its simplicity, intuitiveness and realism. The rule is simple in the sense that it expresses the policy rate directly as a function of the goals of monetary policy, that is, stabilizing inflation relative to its target and output relative to potential output. In addition, the rule is intuitive, as it recommends that policymakers move the nominal short-term interest rate to lean against the wind in response to aggregate demand shocks and take a balanced approach in reaction to aggregate supply shocks. Finally, the rule is realistic because it approximates the conduct of monetary policy. Although the rule was first intended to show the desirability of systematic policy over discretion, a rule with parameters set arbitrarily to rr = 2, π* = 2, β_π = 1 + 0.5 and β_y = 0.5 describes fairly well the setting of U.S. Federal Reserve policy between 1987 and 1992. With Taylor's parametrization, the rule took the following form:

r_t = 2 + π_{t-1} + 0.5(π_{t-1} − 2) + 0.5 y_{t-1}
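To make the arithmetic concrete, Taylor's parametrization can be sketched in a few lines of Python. This is a minimal illustration; the function name `taylor_rate` and the sample inputs are ours, not the thesis's.

```python
def taylor_rate(inflation, output_gap, rr=2.0, pi_star=2.0, f_pi=0.5, f_y=0.5):
    """Original Taylor [1993] rule, Eq. (1):
    r = rr + pi + f_pi * (pi - pi_star) + f_y * output_gap."""
    return rr + inflation + f_pi * (inflation - pi_star) + f_y * output_gap

# At target inflation (2%) with a closed output gap, the prescribed nominal
# rate is the neutral 4% (= 2% equilibrium real rate + 2% inflation).
print(taylor_rate(2.0, 0.0))  # 4.0
# One point of excess inflation raises the prescribed rate by 1.5 points,
# i.e. a more-than-one-for-one response.
print(taylor_rate(3.0, 0.0))  # 5.5
```

The second call already hints at the Taylor Principle discussed below: the nominal rate moves by more than the inflation deviation itself.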

7 Actually, the left-hand side variable should be the interest rate targeted by the central bank, rather than the actual short-term nominal interest rate. But the actual interest rate is assumed to adjust completely to the target each period.



Because the period from 1987 to 1992 was characterized by relative macroeconomic stability, the rule has been accepted as a feasible prescription and useful guidance for conducting monetary policy.

Given its practical appeal, the Taylor rule has become the standard for explaining how policy has been set in the past and how policy should be set going forward8. As the implicit objective of this study is to explain the determinants of Federal Reserve policy, the Taylor rule will naturally constitute the basis of our empirical investigation. However, the original version has become slightly "obsolete" over the years, as it assumes that the Federal Reserve is backward-looking and follows a non-inertial policy rule. The following subsections expose some extensions and the relevant justifications for these refinements.

2.1.2 A standard forward-looking monetary policy rule

In this section, we lay out the central bank's monetary policy reaction function as a forward-looking Taylor rule. This version of the Taylor rule was presented in Clarida, Gali & Gertler's landmark papers [1999; 2000] and has been frequently applied in the empirical literature on monetary policy decisions.

As a starting point of the formal derivation, Clarida et al. assume the following monetary policy setting: within each operating time interval – here on a monthly basis – the central bank targets a specific level for the nominal short-term interest rate, r*_t, depending on future expectations of macroeconomic conditions. Here, we specify the standard case, in which the target interest rate is solely determined by departures of both expected future inflation and output from their respective targets. Precisely,9

r*_t = r + β_π(E[π_{t+k}|Ω_t] − π*) + β_y E[y_{t+q}|Ω_t],

r*_t = r − β_π π* + β_π E[π_{t+k}|Ω_t] + β_y E[y_{t+q}|Ω_t],    (3)

where r denotes the long-run equilibrium nominal rate of interest (namely, the desired nominal rate when both inflation and output departures are neutralized); π* is the target rate of inflation; π_{t+k} is the percent change in the price level between periods t and t+k; y_{t+q} is the average output gap between periods t and t+q, defined as the difference between actual output and its potential level in a perfect price-flexibility framework; the parameters k and q designate the forecast horizons, corresponding to the degree to which the central bank is forward-looking; E[.] is the expectation operator and Ω_t represents the information set available to the central bank at time t, at the time the target interest rate is determined. Hence, E[π_{t+k}|Ω_t] and E[y_{t+q}|Ω_t] are the conditional expectations of the inflation rate, k periods ahead, and of the output gap, q periods ahead. The monetary policy rule described by Eq. (3) prescribes that the central bank raise the target interest rate r*_t if expected inflation, within a k-period time horizon, rises above the target level π* and if the output gap is expected to be

8 For an extensive discussion of the breadth and scope of Taylor's impact on monetary theory and policy, we recommend the collection of papers edited by Koenig, Leeson & Kahn [2012].

9 Note that the forward-looking specification nests the original Taylor rule as a special case.



positive. Conversely, the target interest rate is lowered if expected inflation drops below its target and if the expected output gap is negative.
As a simplification, one can rewrite Eq. (3) by defining α ≡ r − β_π π*:

r*_t = α + β_π E[π_{t+k}|Ω_t] + β_y E[y_{t+q}|Ω_t].    (4)

Departing from the original Taylor rule, Clarida et al. stress the importance for a central bank of adjusting the target interest rate in response to expected future movements in both inflation and the output gap. Such a forward-looking specification has its merits.
The most obvious benefit is that the forecast horizons explicitly embody the "lag-encompassing" phenomenon affecting monetary policy transmission. It is generally recognized that monetary policy actions take time to have their full impact on the economy, making them difficult to conduct. To illustrate the need for central banks to be forward-looking in the presence of long-lasting lags, consider the following example. Imagine an economy in which the current level of inflation is low, but in which the central bank expects an imminent positive aggregate demand shock that will cause undesirable inflationary pressures. Even though current developments do not suggest any inflation hike, if policymakers recognize the presence of long lags, they will raise the target interest rate. However, if they keep the policy rate unchanged until inflation signs actually appear, it will already be too late to guarantee price stability – "inflation expectations will already be embedded into the wage- and price-setting process, creating an inflation momentum that will be hard to contain" [Mishkin, 2000]. Hence, in order to prevent an inflationary surge, central banks need to be forward-looking and act in a pre-emptive and proactive manner10.
Furthermore, explicitly accounting for expected inflation in a forward-looking reaction function facilitates the interpretation of the estimated coefficients. With the central bank reacting to expected inflation, it is easier to disentangle the role the central bank assigns to output – and later also to asset prices – either as a leading indicator the central bank relies on to predict the future evolution of prices, or as a fully-fledged objective added to the price stability objective.

Contemporaneous or backward-looking monetary policy rules, such as the original Taylor rule, in which monetary authorities focus on current or past inflation rather than future price developments, thus seem inappropriate. The policy rate should therefore be set in such a forward-looking environment, in which the feedback variables – expected inflation and the output gap – are directly managed by the central bank.
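As a minimal numerical sketch of the forward-looking rule, Eq. (4) can be evaluated directly. The function name and the default values of r, π* and the response coefficients below are illustrative assumptions, not estimates from the thesis.

```python
def target_rate(exp_inflation, exp_gap, r_bar=4.0, pi_star=2.0,
                beta_pi=1.5, beta_y=0.5):
    """Forward-looking target rate, Eq. (4):
    r* = alpha + beta_pi * E[pi_{t+k}] + beta_y * E[y_{t+q}],
    with alpha = r_bar - beta_pi * pi_star."""
    alpha = r_bar - beta_pi * pi_star
    return alpha + beta_pi * exp_inflation + beta_y * exp_gap

# When expected inflation sits at target and the expected gap is closed,
# the target rate equals the long-run equilibrium nominal rate r_bar.
print(target_rate(2.0, 0.0))   # 4.0
# An expected negative output gap calls for a lower target rate.
print(target_rate(2.0, -1.0))  # 3.5
```

The only inputs are conditional expectations, which is precisely why an estimation strategy for the unobserved forecast terms is needed later in the chapter.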

10 Batini & Haldane [1999] found that inflation forecast-based rules deliver clear welfare improvements over standard Taylor rules. See also Svensson [1997], who demonstrates that inflation-targeting rules, which embody this lag-encompassing aspect, help improve the control of inflation as they prevent the central bank from intervening too late.



2.1.3 The real interest rate equation and the Taylor Principle

The Taylor rule embeds a central tenet of sound monetary policy that has subsequently been referred to as the "Taylor Principle"11. According to this principle, when a shock provokes a shift in inflation away from its target rate, monetary policymakers must adjust the nominal interest rate by more than one-for-one. This ensures that the real interest rate evolves in the right direction to restore price stability. For instance, in the face of increases in inflation expectations, the nominal interest rate should rise more than proportionally to the movement in the inflation rate, so that the real interest rate increases and leads to a negative output gap, which, in turn, exerts downward pressure on inflation and keeps the economy stable. Otherwise, the initial increase in the inflation rate is further magnified and the shock may cause an unstable explosive spiral. Hence, the Taylor Principle embeds important stability implications and provides important guidance to monetary policymakers in their objective of fostering stable inflation and growth – and, in a forward-looking context, of anchoring long-run inflation expectations.

In order to evaluate whether policymakers adhere to the Taylor Principle and how this condition is reflected within our framework, it is important to consider the implied target for the ex-ante real rate of interest (rr*_t). The expectations-augmented Fisher equation defines the real interest rate as rr_t ≡ r_t − E[π_{t+k}|Ω_t]. Introducing it in Eq. (3), we obtain:

r*_t − E[π_{t+k}|Ω_t] = r − E[π_{t+k}|Ω_t] + β_π(E[π_{t+k}|Ω_t] − π*) + β_y E[y_{t+q}|Ω_t].

Knowing that the long-run equilibrium real rate of interest is defined as rr = r − π* ⇔ r = rr + π*, it follows that:12

rr*_t = rr + π* − E[π_{t+k}|Ω_t] + β_π(E[π_{t+k}|Ω_t] − π*) + β_y E[y_{t+q}|Ω_t].

Finally, rearranging the equation by factoring the right-hand side, we obtain the final expression for the target real interest rate:

rr*_t = rr + (β_π − 1)(E[π_{t+k}|Ω_t] − π*) + β_y E[y_{t+q}|Ω_t].    (5)

According to Equation (5), the target real rate of interest fluctuates around its long-run equilibrium value in response to departures of either expected inflation or output from their respective targets. From this expression, it is immediately clear that the magnitude of the parameter β_π is decisive [Clarida et al., 1999].
If β_π > 1, the central bank would act counter-cyclically, as the change in expected inflation would be more than offset by the change in the real interest rate. This result would typically be interpreted as monetary policy being active or aggressive towards inflation and thus satisfying the aforementioned Taylor Principle.
On the other hand, if β_π < 1, the central bank would accommodate changes in inflation – interpreted as a passive monetary policy. Even though the nominal interest

11This principle is discussed in Taylor [1999]. The term “Taylor principle” was first introducedby Woodford [2001], who demonstrated analytically this principle in a stylized New-Keynesian model.

12rr is determined by purely real economic factors and is therefore unaffected by monetary policy[Clarida et al., 1999].



rate is raised in reaction to an expected increase in inflation, the policy rate would not increase sufficiently to keep the real rate from declining. The Taylor Principle would then be violated and, as Clarida et al. [2000] highlight, this accommodative regime could produce self-fulfilling inflationary spirals and an explosive output path as well.
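The active/passive dichotomy can be illustrated numerically with Eq. (5). The helper name and the parameter values below are hypothetical, chosen only to show the sign switch around β_π = 1.

```python
def real_rate_deviation(exp_inflation, exp_gap, beta_pi, beta_y=0.5, pi_star=2.0):
    """Eq. (5): rr* - rr = (beta_pi - 1) * (E[pi] - pi_star) + beta_y * E[y]."""
    return (beta_pi - 1.0) * (exp_inflation - pi_star) + beta_y * exp_gap

# Active policy (beta_pi > 1): one point of expected excess inflation
# pushes the target real rate up, cooling the economy.
print(real_rate_deviation(3.0, 0.0, beta_pi=1.5))  # 0.5
# Passive policy (beta_pi < 1): the target real rate falls instead,
# violating the Taylor Principle.
print(real_rate_deviation(3.0, 0.0, beta_pi=0.8))  # negative (about -0.2)
```

The sign of (β_π − 1) alone determines whether the real rate leans against or accommodates an inflation shock, which is why the estimated β_π is the key stability criterion in the empirical section.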

The Taylor Principle seems evident nowadays, but Clarida et al. [2000] brought evidence that the Federal Reserve did not satisfy it during the 1970s. The pre-1979 policy in the U.S. seems consistent with a rule in which the estimated parameter β_π was less than one. The authors argued that this failure to adhere to the Taylor Principle could have been responsible for the poor macroeconomic performance of the 1970s – the stagflation episode, a combination of high inflation and recession.

Therefore, the estimated magnitude of the parameter β_π will constitute a crucial criterion for evaluating the Federal Reserve's policy rule and will deserve particular attention in the empirical section.

2.1.4 A dynamic specification: interest rate smoothing

Following the original reaction function first estimated by Taylor, a widely accepted extension advocates the inclusion of an interest rate smoothing parameter. According to Goodfriend [1991], interest rate smoothing is understood as the tendency of central banks to adjust their policy rate gradually, in a sequence of small steps, towards the desired level.

The motivations behind this tendency to smooth changes in interest rates, and thus to avoid sudden and radical policy reversals, have been extensively reviewed in the literature. In a nutshell, interest rate smoothing enhances the effectiveness and optimality of monetary policy. Woodford [2003] argues that including a smoothing objective in the loss function the central bank seeks to minimize is highly desirable. The effects of monetary policy depend heavily on private sector expectations. In particular, long rates in the economy are driven by market expectations of the future path of short rates. When monetary policy acts with a certain degree of interest rate inertia, decisions are more predictable and forward-looking market participants expect a small initial step to be followed by further moves in the same direction13. This enhances control over long-term interest rates and achieves greater effects upon aggregate demand, that is, delivering more effective control over current output and inflation without requiring excessive variation in short-term interest rates [Sack & Wieland, 2000]. In addition, financial markets, and especially commercial banks, are wary of abrupt changes in the policy rate, since high interest rate volatility exacerbates the volatility of all other financial assets [Lowe & Ellis, 1997]. Smith & Van Egteren [2005] particularly highlight its adverse effect on systemic risk through banks' exposure and the volatility in financial markets. An abrupt unanticipated movement in the policy rate can lead to considerable financial losses, a situation that monetary

13 Rudebusch [2002, 2006], however, claims that if a central bank actually smooths its policy, then future adjustments in the interest rate should be largely predictable. Unfortunately, he shows that this is not supported by financial data.



authorities must avoid in order to preserve financial stability. Finally, the observed sluggish moves in the instrument rate can be explained by the great uncertainty surrounding not only the quality of released macroeconomic data – the true state of the world – but also the structure and parameters of the economy and the transmission mechanism of the true macroeconomic model [Sack & Wieland, 2000]14.

In line with Clarida et al. [1999, 2000], it is assumed that the actual short-term interest rate adjusts only partially to the central bank's desired interest rate. Specifically, the interest rate inertia can be expressed as follows:

r_t = (1 − ρ) r*_t + ρ(L) r_{t−1} + υ_t,    (6)

ρ(L) = ρ_1 + ρ_2 L + … + ρ_p L^{p−1},    ρ = Σ_{i=1}^{p} ρ_i,    r_{t−i} ≡ L^i r_t,    υ_t ∼ i.i.d. N(0, σ²_υ),

where r_t denotes the actual short-term interest rate at time t; r*_t still represents the target interest rate the central bank would set if it were unconstrained by a desire to adjust the rate gradually; the parameter ρ ∈ [0, 1] represents the degree of interest rate smoothing; ρ(L) is the pth-order partial adjustment mechanism; υ_t is an exogenous random shock to the actual interest rate, which is assumed to be identically and independently normally distributed with zero mean and constant variance15. It is intended to capture a pure random shock to monetary policy and thus the imperfect control of the monetary authority over the instrument rate. Alternatively, it could also represent discretionary adjustments by policymakers, apart from the determinants captured by the Taylor rule. It is referred to hereinafter as an "interest rate shock".

This partial adjustment mechanism states that, every period, the central bank adjusts the actual nominal short-term interest rate r_t so as to close only a fraction (1 − ρ) of the gap between the desired interest rate r*_t and some linear combination of the rates inherited from previous periods. Monetary policy therefore continues to be determined by past conditions, even though they may be irrelevant to the determination of current and future realizations of the target variables16.
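A short simulation makes the gradualism visible. The sketch below takes Eq. (6) in its simplest form – p = 1 and the shock υ_t suppressed – with a constant target; the function name and numbers are ours.

```python
def smooth_path(r_start, r_target, rho, n_periods=8):
    """Partial adjustment, Eq. (6) with p = 1 and no shocks:
    r_t = (1 - rho) * r_target + rho * r_{t-1}."""
    path = [r_start]
    for _ in range(n_periods):
        path.append((1 - rho) * r_target + rho * path[-1])
    return path

# With rho = 0 the actual rate jumps to the target immediately (no gradualism);
# with rho = 0.9 only 10% of the remaining gap is closed each period.
print(smooth_path(2.0, 4.0, rho=0.0, n_periods=2))  # [2.0, 4.0, 4.0]
print(smooth_path(2.0, 4.0, rho=0.9, n_periods=2))  # approximately [2.0, 2.2, 2.38]
```

The remaining gap shrinks geometrically at rate ρ, which is why large estimated values of ρ imply the very slow adjustment discussed below.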

Based on historical data, the estimated coefficient ρ is often found to be relatively large – in the range of 0.9, or 0.8 at quarterly frequency – and highly significant, for a

14 The behaviour in the face of uncertainty is often referred to as the "Brainard Principle", which advises moving policy incrementally when policymakers are uncertain about how strong their tools are.

15 Cochrane [2011] argues that assuming υ_t is i.i.d. – that is, uncorrelated with lags of itself and past values of the right-hand-side variables – is a strong, yet restrictive, assumption. This hypothesis implies that any shock entering into the monetary policy error should be i.i.d. In addition, the author points out that this assumption contradicts the common criticism that the Federal Reserve deviated from the Taylor rule for years in the mid-2000s, a persistent discretionary choice.

16 It is noted that, in the extreme case where ρ = 0, the instrument adjusts instantaneously to the target policy rate: it is driven solely by the fundamentals in the forward-looking Taylor rule, Eq. (4), and there is no gradualism. Conversely, when ρ = 1, the lagged interest rates are the best predictor of the contemporaneous interest rate.



wide variety of samples and countries. This implies a very slow speed of adjustment of the short-term interest rate to its target rate, commonly interpreted as evidence of deliberate policy inertia by the central bank. However, Rudebusch [2002, 2006] is critical of the interpretation given to findings of large partial adjustment coefficients in the context of quarterly data. He argues that this is not a sign of actual gradualism by central banks but rather an illusion stemming from the presence of serially correlated shocks. Estimated policy rules may suffer from misspecification, i.e. the omission of a persistent and serially correlated variable that influences interest rate setting decisions. This might account for the significant estimated value of ρ and would produce the illusion of apparent monetary gradualism. Such a misspecified reaction function can also come from instability, in the form of structural shifts in the parameters of the policy rule.

Combining the target equation (4) with the partial adjustment rule, Eq. (6), yields the final specification of the standard forward-looking Taylor rule to be estimated:

r_t = (1 − Σ_{i=1}^{p} ρ_i)(α + β_π E[π_{t+k}|Ω_t] + β_y E[y_{t+q}|Ω_t]) + Σ_{i=1}^{p} ρ_i r_{t−i} + υ_t.    (7)

In this form, the reaction function expresses the actual policy rate in terms of expected inflation, the output gap, and an additional explanatory variable, a lagged interest rate term. In order to estimate such a reaction function, one also has to make an additional crucial assumption.

2.1.5 The rational expectations assumption

For the model to be estimable, the unobserved expected values of inflation, E[π_{t+k}|Ω_t], and of the output gap, E[y_{t+q}|Ω_t], must be proxied. Clarida et al. [1999, 2000] recommend eliminating the unobserved forecast variables by rewriting the reaction function in terms of actual realized values – the observed future inflation π_{t+k} and output gap y_{t+q} – while including the forecast errors in a new composite error term. In line with the authors' implicit assumption of rational expectations, the following estimable equation is obtained:

r_t = (1 − Σ_{i=1}^{p} ρ_i)(α + β_π π_{t,k} + β_y y_{t,q}) + Σ_{i=1}^{p} ρ_i r_{t−i} + ε_t.    (8)

It is noted that the new error term ε_t captures a linear combination of the forecast errors of inflation and the output gap with the exogenous disturbance υ_t:

ε_t = −(1 − Σ_{i=1}^{p} ρ_i)[β_π(π_{t,k} − E[π_{t,k}|Ω_t]) + β_y(y_{t,q} − E[y_{t,q}|Ω_t])] + υ_t.    (9)

To be consistent with the hypothesis of rational expectations of the monetary authority, the forecast errors contained in ε_t must be white noise processes, that is, mean-zero and serially uncorrelated random variables. Agents forming their expectations rationally do not make systematic errors and their expectations are on average



correct, so that the expected value of the forecast error is zero and the error today is not correlated with the error made at any other time.

To guarantee that the forecast errors are white noise and that the subsequent econometric approach is valid, one has to posit that the variables entering Eq. (8) are stationary within the analysed sample period. For this purpose, Clarida et al. regard inflation, the nominal interest rate and the output gap as stationary. This necessary condition is discussed in detail in Appendix B, where the usual unit root and stationarity tests are developed.

Accordingly, the rational expectations assumption implies that the forecast errors – made by the central bank regarding future values of inflation and the output gap – are uncorrelated with the elements contained in the information set of the monetary authority at the time of setting the interest rate, Ω_t. In other words, if expectations are formed rationally, the forecast errors should not be predictable by any variable contained in Ω_t. As will be seen in Section 2.2.2, this assumption will provide us with the opportunity to identify a set of orthogonality conditions, which are the centrepiece of the Instrumental Variable estimation procedure.
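The use of such orthogonality conditions can be sketched with a simple two-stage least squares estimator, a special case of the GMM procedure applied later. The synthetic data below are purely illustrative: a regressor contaminated by a forecast error, and an instrument that belongs to the information set and is therefore orthogonal to that error.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: impose the orthogonality conditions
    E[Z'eps] = 0, with instruments Z drawn from the information set."""
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)    # first stage: fitted X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)  # second stage

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)      # instrument: known at time t, hence in Omega_t
e = rng.normal(size=n)      # forecast error: orthogonal to Omega_t
x = z + 0.5 * e             # regressor contaminated by the forecast error
y = 1.5 * x + e             # true coefficient on x is 1.5

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)  # biased upward (around 1.9)
beta_iv = tsls(y, X, Z)                       # consistent (around 1.5)
```

OLS is biased because the regressor co-moves with the error; projecting the regressor on the instrument first removes exactly the component responsible for the bias.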

Substituting the unobserved forecast variables with their ex-post realized values is a common approach for models with expectations, and was largely influenced by the pioneering paper of Clarida et al. [1999]. However, an alternative method for models with expectation terms involves using actual forecasts, i.e. real-time data. Indeed, the major shortcoming of a monetary policy rule based on ex-post revised data is that it uses more information than was available to monetary policymakers at the time the interest rate decisions were made. Estimating reaction functions with ex-post data can thus yield misleading descriptions of historical monetary policy and raise doubts about their usefulness for evaluating past monetary policy, as noted in Orphanides [2001].

2.2 An adequate estimation method

2.2.1 Avoiding OLS : a close inspection of the error term

To properly estimate the central bank's reaction function implied by Eq. (8), one first needs to pay close attention to the innovation term εt. The forward-looking specification of the Taylor rule, based on ex-post data, calls for some econometric caveats, given the issue of serial correlation and, more importantly, the potential endogeneity of the explanatory variables.

First, a time-overlapping problem might arise in the computation of the future inflation rate and output gap if the central bank's target horizon surpasses the frequency of the data. By construction, the errors from predicting the k-step-ahead inflation and the q-step-ahead output gap at time t, i.e. (πt,k − E[πt,k|Ωt]) + (yt,q − E[yt,q|Ωt]), have a moving-average (MA, henceforth) stochastic structure of order (max[k, q] − 1). This inevitably induces an MA(max[k, q] − 1) component in the disturbance εt, unless k = q = 1 [Clarida et al., 2000]. For instance, if monthly observations are considered and expectations are formed up to 12 months ahead, then the forecast errors for 1 month ahead will be correlated with those for 2 months ahead, and so on. This particular overlap induces an MA(11) structure in the forecast errors.
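A minimal simulation illustrates why overlapping forecast horizons induce this MA structure (a sketch with numpy only; the white-noise shock process and the 12-month horizon are illustrative assumptions, not the thesis's data):

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 20_000, 12                       # monthly data, 12-month-ahead forecasts
shocks = rng.normal(size=T + k)

# k-step-ahead forecast error at time t: under a driftless random walk, the
# optimal forecast is the current level, so the error is the sum of the next
# k shocks -- consecutive errors share k-1 of those shocks (overlap)
fe = np.array([shocks[t + 1 : t + 1 + k].sum() for t in range(T)])

def autocorr(x, lag):
    x = x - x.mean()
    return float((x[:-lag] * x[lag:]).sum() / (x * x).sum())

# correlation persists up to lag k-1 and vanishes at lag k: an MA(k-1) pattern
print(round(autocorr(fe, 1), 2))    # close to (k-1)/k
print(round(autocorr(fe, k), 2))    # close to zero
```

The first autocorrelation is large because adjacent 12-step errors share 11 shocks, while at lag 12 no shocks are shared.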

Second, and most importantly, the forward-looking reaction function, such as in Eq. (8), suffers from an endogeneity problem, that is, a correlation between the explanatory variables and the disturbance term. In this case, such a correlation stems from both an "error-in-variable" issue and the interaction with the interest rate shock. As discussed in Section 2.1.5, actual future realizations must be used instead of the unobserved expected terms. Expected inflation and the expected output gap are then measured with error by their observed ex-post counterparts. The future realized values are equal to their multi-step-ahead forecasts plus their forecast errors:

πt+k = E[πt+k|Ωt] + ξπt+k,
yt+q = E[yt+q|Ωt] + ξyt+q.

Since the measurement error ξ becomes part of the new composite error term εt, the mis-measured explanatory variables πt+k and yt+q are negatively correlated with the associated disturbance term (Murray, 2006). Moreover, the endogeneity issue clearly lies in the relationship between the interest rate shock and the future realized inflation and output gap. Consider a pure random shock to monetary policy, captured in the term υt. The shock enters the interest rate setting at time t and the new target rate will influence the economy, through the transmission channels of monetary policy, with a lag of k or q periods. Consequently, the explanatory variables πt+k and yt+q will be affected by the contemporaneous shock, violating the orthogonality condition with the innovation εt.

Since, on both counts, the ex-post observed macroeconomic variables are correlated with the error εt in the monetary policy rule, it is not possible to apply simple linear regression methods such as Ordinary Least Squares (OLS, henceforth) estimation: the fundamental assumption of orthogonality between the error and the independent variables is violated. As a result, applying OLS to Eq. (8) would produce biased and inconsistent estimates of the relevant parameters. To control for the endogeneity bias, the empirical implementation of such forward-looking rules thus calls for the use of an instrumental variables methodology (hereinafter, IV).
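The attenuation caused by the errors-in-variables problem, and its IV remedy, can be seen in a stylized simulation (a sketch with invented parameters; the single regressor stands in for the expected-inflation term):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 1.5

x_true = rng.normal(size=n)                 # the unobserved expectation term
z = x_true + rng.normal(size=n)             # instrument: correlated with x_true,
                                            # uncorrelated with both error terms
x_obs = x_true + rng.normal(size=n)         # ex-post realization = expectation + forecast error
y = beta * x_true + rng.normal(size=n)

b_ols = float(x_obs @ y / (x_obs @ x_obs))  # attenuated toward zero
b_iv = float(z @ y / (z @ x_obs))           # simple IV estimator, consistent

print(round(b_ols, 2))   # about beta/2 here, since Var(forecast error) = Var(x_true)
print(round(b_iv, 2))    # about beta
```

With equal variances for the latent regressor and its measurement error, OLS converges to half the true coefficient, while the IV ratio recovers it.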

2.2.2 An instrumental variables estimation : the GMM procedure

Aside from the presence of endogenous independent variables, the very nature of the model prevents naively regressing Eq. (8). At the time of its interest rate decision, the central bank cannot observe the ex-post realized explanatory variables πt+k and yt+q; none of them is known at time t. Hence, the central bank has to base its decisions on information known at time t − 1 or earlier, i.e. on lagged variables only.

The orthogonality condition between the forecast errors and any variable in the information set at time t plays a crucial role. It allows for constructing valid – i.e. orthogonal to the error term – instruments for the relevant unobserved independent variables. The assumption of rational expectations ensures that the forecast errors, derived from conditional expectations, are uncorrelated with any information available to the central bank at the time it chooses the target policy rate. In addition, given the i.i.d. property of the policy shock υt, the latter is also orthogonal to Ωt. Therefore, orthogonality is satisfied between the composite disturbance εt and all the variables contained in the information set of the central bank at time t. This leads to the following zero conditional expectation, which, in turn, implies the subsequent unconditional moment condition:

E[εt|Ωt] = 0 =⇒ E[εt · z′t] = 0, (10)

where zt ∈ Ωt is an I × 1 vector containing all the variables included in the central bank's information set. Inside zt are all the instruments known at the time of setting the interest rate, zt = [z1t, z2t, . . . , zIt]′. As potential "good" instruments, they must meet two crucial criteria, namely validity and relevance: are they 1) orthogonal to the error process, and are they 2) correlated with the endogenous variables? In other words, one must select instruments that help the central bank forecast inflation and the output gap, while being orthogonal to the innovation εt – upon which the central bank does not react directly. In such time series models, it is quite common to use, as instruments, lagged values of the regressors and of the dependent variable, in conjunction with other variables.

Rewriting this orthogonality condition in terms of the model specification in Eq.(8) yields,

E[(rt − (1 − ∑_{i=1}^{p} ρi)(α + βππt,k + βyyt,q) − ∑_{i=1}^{p} ρirt−i) · z′t] = 0. (11)

This set of orthogonality conditions will provide the basis for the IV’s estimation ofthe parameter vector [βπ, βy, ρ, α].
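Before turning to the full GMM machinery, the IV logic implied by these moment conditions can be sketched as a generic 2SLS solver (a simulation with invented variable names and data-generating process, not the thesis's series):

```python
import numpy as np

def tsls(y, X, Z):
    """2SLS: the sample analogue of E[(y - X b) z] = 0 with more instruments
    than parameters. First stage projects X on Z; second stage regresses y
    on the fitted values."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

# toy check: one endogenous regressor, two instruments, constant included
rng = np.random.default_rng(2)
n = 100_000
z1, z2, u, e = rng.normal(size=(4, n))
x = z1 + 0.5 * z2 + u + e              # endogenous: correlated with the error u
y = 2.0 + 1.0 * x + u
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z1, z2])
b = tsls(y, X, Z)
print(np.round(b, 2))                  # recovers the intercept and slope
```

OLS on this data would overstate the slope because x and u move together; the instruments break that link.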

Considering the forward-looking nature of the model and the orthogonality conditions derived above, an IV methodology appears to be the soundest option. This methodology can consistently estimate the parameter vector [βπ, βy, ρ, α], while OLS cannot 17. Therefore, to estimate the reaction function implied by Eq. (8), in line with Clarida et al. (1998, 2000) and many others inspired by it, we adopt an IV estimation procedure and, more specifically, the Generalized Method of Moments (GMM, henceforth), as the latter accounts for the simultaneity bias as well as for heteroscedastic errors 18. Furthermore, we take into account the MA(max[k, q] − 1) component in the error term – due to the overlapping nature of the data – by using a Heteroscedasticity and Autocorrelation Consistent (HAC, hereinafter) estimator for the weighting matrix, based on a Bartlett kernel with autocorrelation of order (bandwidth) max[k, q].

17 As the sample size grows to infinity, the instrumental variable estimate of the coefficient converges in probability to the true value of the coefficient. It is thus important to have a sufficiently large sample in order to ensure the consistency of the IV estimator – which is the case here, with monthly data over more than 30 years.

18 If the errors are homoscedastic, we can use the 2-stage least squares (2SLS) method. However, if the errors are diagnosed to be heteroscedastic, as will be the case, GMM is preferred. In particular, the GMM estimation is performed by a "2-step 2-stage least squares", implemented in the Stata routine by the command ivreg2, gmm2s robust bw(12).
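The HAC weighting-matrix step can be sketched as follows (the Bartlett-kernel weights follow the Newey-West form; the exact bandwidth convention of the thesis's Stata routine may differ, so this is an illustrative assumption):

```python
import numpy as np

def hac_long_run_cov(g, bandwidth):
    """HAC estimate of the long-run covariance of a moment series g (T x I):
    sum of autocovariances, downweighted by Bartlett weights 1 - j/(bandwidth+1)
    as in Newey-West; its inverse serves as the GMM weighting matrix."""
    T = g.shape[0]
    g = g - g.mean(axis=0)
    S = g.T @ g / T
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1)
        Gj = g[j:].T @ g[:-j] / T
        S += w * (Gj + Gj.T)
    return S

# sanity check on an MA(1) series: long-run variance is (1 + 0.5)^2 = 2.25
rng = np.random.default_rng(5)
e = rng.normal(size=200_001)
g = (e[1:] + 0.5 * e[:-1]).reshape(-1, 1)
S = hac_long_run_cov(g, 50)
print(round(float(S[0, 0]), 2))        # close to 2.25
```

The Bartlett kernel guarantees a positive semi-definite estimate, which is why it is the standard choice for GMM weighting under serially correlated moments.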

2.2.3 Specification tests : on the validity and relevance of instruments

As discussed above, the estimation of Eq. (8) by OLS, using actual future realizations instead of their unobserved expectations, would yield inconsistent estimators, because the relevant disturbance term would be correlated with endogenous right-hand-side variables. GMM, however, can circumvent this endogeneity issue and produces strongly consistent and asymptotically normal estimators [Hansen, 1982]. The GMM approach is therefore very appealing in this context, since it does not require strong assumptions about the underlying model. Nonetheless, the choice of "good instruments" is a crucial condition for this technique. Adequate instruments must be orthogonal to the error term and, at the same time, sufficiently correlated with the included endogenous regressors.

For the validity condition to be testable, the model must be over-identified, in the sense that the number of orthogonality conditions exceeds the number of parameters to be estimated. To this end, one can implement the test of over-identifying restrictions, the so-called "Hansen J-test" [Hansen, 1982]. It can be interpreted as a joint test of, on the one hand, the overall significance of the estimated coefficients and, on the other hand, the exogeneity of the selected instruments. In other words, the test evaluates whether all sample moments are close to zero, which would validate the IV approach. A rejection of the null hypothesis implies that some instruments fail to satisfy the orthogonality conditions required for their employment. Such a rejection can be the consequence of instruments that are not appropriately exogenous or that are incorrectly excluded from the regression. Conversely, a failure to reject the null hypothesis of valid over-identifying restrictions would be a strong validation of the IV approach.
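The J-statistic itself is a quadratic form in the sample moments. The sketch below uses a homoscedastic Sargan/Hansen-type version on invented data (the thesis's version would use the HAC weighting instead), contrasting a valid instrument set with one containing a wrongly excluded variable:

```python
import numpy as np

def tsls_coef(y, x, Z):
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fitted values
    return float(xhat @ y / (xhat @ x))

def j_stat(resid, Z):
    """T * gbar' S^-1 gbar, approximately chi-squared with
    (#instruments - #parameters) d.o.f. when instruments are valid."""
    T = len(resid)
    g = Z * resid[:, None]                  # moment contributions, T x I
    gbar = g.mean(axis=0)
    S = (g - gbar).T @ (g - gbar) / T
    return float(T * gbar @ np.linalg.solve(S, gbar))

rng = np.random.default_rng(3)
T = 50_000
z1, z2, u, e = rng.normal(size=(4, T))
x = z1 + 0.5 * z2 + u + e
Z = np.column_stack([z1, z2])

y_valid = x + u                       # both moment conditions hold
y_invalid = x + u + z2                # z2 belongs in the equation: moments violated

j_valid = j_stat(y_valid - tsls_coef(y_valid, x, Z) * x, Z)
j_invalid = j_stat(y_invalid - tsls_coef(y_invalid, x, Z) * x, Z)
print(round(j_valid, 1))              # a chi2(1)-scale value: no rejection
print(round(j_invalid, 1))            # grows with T: strong rejection
```

With one over-identifying restriction, the valid case yields a draw on the chi-squared(1) scale, while the invalid exclusion makes the statistic explode with the sample size.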

Nonetheless, instrument exogeneity is only one of the two necessary conditions for an instrument set to be "proper". An instrument set satisfying the orthogonality conditions does not necessarily contain relevant instruments, i.e. instruments that help to predict our multi-step-ahead forecast variables. Unfortunately, instrument relevance is rarely discussed in the empirical literature on instrumented monetary policy rules – with the exception of Consolo & Favero [2009], for instance. The emphasis is almost solely placed on instrument validity. Yet, as Stock, Wright & Yogo [2002] point out, "weak instruments" – or, more generally, "weak identification" – are a serious concern in many GMM and IV applications 19. A growing body of theoretical and empirical literature highlights a variety of problems posed by instruments that are only weakly correlated with the included endogenous variables. As reviewed by Stock et al. [2002], "if instruments are weak, then the sampling distributions of GMM and IV statistics are in general non-normal, and standard GMM and IV point estimates, hypothesis tests, and confidence intervals are unreliable". First, weak identification worsens the finite-sample bias inherent in IV regression [Murray, 2006]: although inference based on IV is consistent, IV estimates are always biased in finite samples. This raises the concern of whether this finite-sample bias is larger than the one incurred by OLS under endogeneity. Hahn & Hausman [2005] demonstrate that the relative bias of 2SLS fades when both the sample size grows and the number of instruments increases, but rises when the instruments are weakly correlated with the endogenous variables – a result that also holds for GMM. Hence, when instruments are weak, there is a risk that the remedy might be worse than the disease. Moreover, the lower the covariance between the instruments and the endogenous explanators, the larger the asymptotic variance of the estimator. This can create serious distortions in tests of significance, thus invalidating conventional inference – confidence intervals become inaccurate. In particular, tests such as Hansen's J-statistic can yield misleading results.

19 Instruments are "weak" in the sense that they have a low correlation with the included endogenous variables, i.e. they do not contribute fully to explaining the instrumented variable.

Given these devastating consequences for IV regression, even in large samples, the problem of weak instruments needs to be addressed by practitioners. A straightforward approach to evaluating whether weak instruments are a potential concern is to inspect the first-stage regressions, that is, the reduced-form regressions of the problematic independent variables on the instrumental variables. To this end, an F-test of the joint significance of the instruments can be performed. A commonly used rule of thumb is to suspect a weak instrument issue when the first-stage F-statistic is below 10. However, this simple approach cannot be used in the presence of multiple endogenous variables. Nonetheless, Stock & Yogo [2002] provide useful threshold values to assess the 2SLS relative bias, based on a multivariate generalization of the F-statistic, the Cragg-Donald statistic: they evaluate how much of the bias suffered by OLS has been overcome by 2SLS and also assess the significance level of 2SLS-based hypothesis tests. In the worst-case scenario, the bias of the IV estimator is the same as that of OLS: IV becomes inconsistent and nothing is gained by instrumenting.
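The rule-of-thumb diagnostic can be sketched as a plain first-stage F-test (homoscedastic version, on simulated strong and weak instruments; the coefficients are invented to illustrate the two regimes):

```python
import numpy as np

def first_stage_F(x, Z_excl):
    """F-test that the excluded instruments jointly enter the first-stage
    regression of the endogenous x on a constant and the instruments.
    The common rule of thumb flags weak instruments when F < 10."""
    T = len(x)
    X0 = np.ones((T, 1))                           # restricted: constant only
    X1 = np.column_stack([X0, Z_excl])             # unrestricted: add instruments
    rss = lambda A: float(np.sum((x - A @ np.linalg.lstsq(A, x, rcond=None)[0]) ** 2))
    q = Z_excl.shape[1]
    return ((rss(X0) - rss(X1)) / q) / (rss(X1) / (T - X1.shape[1]))

rng = np.random.default_rng(4)
T = 2_000
Z = rng.normal(size=(T, 3))
x_strong = Z @ np.array([1.0, 0.5, 0.5]) + rng.normal(size=T)
x_weak = Z @ np.array([0.02, 0.01, 0.01]) + rng.normal(size=T)
F_strong = first_stage_F(x_strong, Z)
F_weak = first_stage_F(x_weak, Z)
print(round(F_strong, 1))    # far above the threshold of 10
print(round(F_weak, 1))      # below 10: weak-instrument territory
```

With several endogenous regressors this univariate F is no longer sufficient, which is precisely where the Cragg-Donald statistic discussed above comes in.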

Unfortunately, most measures for diagnosing instrument relevance apply only within the linear IV regression model, that is, when the errors are i.i.d. – homoscedastic and serially uncorrelated. However, in the case of non-linear GMM, for instance in forward-looking models estimated by GMM such as the one discussed above, errors are heteroscedastic and serially correlated. The literature remains inconclusive on how to test candidate instruments for weakness in this setting [Stock et al., 2002]. Nonetheless, the critical values of Stock and Yogo will be used. The test will be biased upwards, so weak instruments will appear less likely 20.

20 Thus, if the test indicates a problem of weak identification, in reality the problem could be even worse.


2.3 An augmented version of the Taylor rule : including thepotential role of asset prices

The purpose of this master's thesis is to measure the extent to which information on asset prices is taken into account by the monetary authorities. Note that the target rate given by Eq. (4) considers solely the response of central banks to expected inflation and the output gap. Insofar as asset prices are contained in the information set, they help to predict future realizations of inflation and the output gap. Eq. (4) is therefore incomplete but will still serve as the reference model in the first part of the empirical analysis. The rationale for explicitly including asset market variables in the monetary policy rule is to investigate whether these variables have influenced rate setting directly, independently of the information they contain about future inflation and output. For this purpose, the standard forward-looking target rule is extended by including asset variables. In this study, two classes of asset prices are considered, namely stock prices and housing prices. As already discussed in Section 1.1, these two asset market variables might have a role to play in interest rate setting decisions, and their potential importance should be determined. The "augmented Taylor rule" is thus given by:

r⋆t = α + βπE[πt+k|Ωt] + βyE[yt+q|Ωt] + βsst−1 + βhE[ht+w|Ωt], (12)

where st−1 denotes the lagged stock price and E[ht+w|Ωt] the conditional expectationof housing price w-periods ahead. The parameters βs and βh capture the central bank’sresponse to stock and housing prices, respectively.

Following Chadha et al. [2004] and Bernanke & Gertler [1999], it is assumed that, unlike for inflation and the output gap, the central bank responds only to past stock price developments. Even though Eq. (12) embodies a forward-looking representation of monetary policy, the use of a lagged stock price variable is founded upon the plausible behaviour of monetary authorities. The motivation is twofold. First, given the already complicated task for monetary policy of distinguishing whether a change in asset prices results from fundamental factors, non-fundamental factors or both, central banks are more likely to respond to stock market movements once stock prices have been observed, rather than on the basis of their expected future values. Second, the rationale for not expressing the stock price as a forward-looking variable is that monetary policy can affect the stock market simultaneously, whereas, as mentioned earlier, the forecast horizons for inflation and the output gap were explicitly designed to capture the lags with which monetary policy exerts its full impact on those macroeconomic variables. In addition, the resulting simultaneity bias between the current stock price and current interest rate decisions poses the quite complex task of finding instruments that affect the stock market variable while being uncorrelated with movements in the instrument rate. In order to avoid the endogeneity concerns caused by these interactions, we introduce lagged values of stock prices rather than the current value. Still, the latter case will be considered as a robustness check for our results. Note that the assumption of considering lagged/current values for the stock market variable is aligned with the intuition of the efficient market hypothesis. One of its central tenets is that rational financial market participants use all the available information to price a stock, so that stock price changes are impossible to predict from available information – which is inconsistent with the inclusion of future values of stock prices in the target rule. However, we prefer not to rely too heavily on the market-efficiency arguments of Fama [1965], since, at their root, they postulate that stock prices cannot deviate from their fundamental values given the stabilizing role of rational profit-maximizing agents (arbitrageurs) – which challenges the very existence of any asset price bubble and thus any role for central banks 21.

By contrast, the real estate variable is assumed to enter Eq. (12) in a forward-looking form, with a forecasting horizon set to w. As for inflation and the output gap,the lag encompassing aspect is again essential. Using a structural VAR model thatallows for simultaneity between monetary policy and house and stock prices, Bjornland& Jacobsen [2012] found for U.S. data that a contractionary monetary policy shockhas an immediate negative effect on real stock prices, while the effect on house pricesis much more persistent, gradual, yet more substantial – attaining their peak 3 yearslater 22. Hence, it seems appropriate to consider the forecast future value of real estateprices in the interest rate target rule specification.

Accordingly, particular attention will be paid in Section 3.3 to the estimation of the augmented forward-looking reaction function given by:

rt = (1 − ∑_{i=1}^{p} ρi)(α + βππt+k + βyyt+q + βsst−1 + βhht+w) + ∑_{i=1}^{p} ρirt−i + εt. (13)

with,

εt = −(1 − ∑_{i=1}^{p} ρi)[βπ(πt+k − E[πt+k|Ωt]) + βy(yt+q − E[yt+q|Ωt]) + βh(ht+w − E[ht+w|Ωt])] + υt.

Note that the estimable Eq. (13) nests not only the standard forward-looking Taylor rule estimated by Clarida et al., but also its augmented variants, which allow only for stock price information, as in Bernanke & Gertler [1999], Chadha et al. [2004] and Dupor & Conley [2004] among others, only for house price information, as in Drescher et al. [2010], or for both, as in Kontonikas & Montagnoli [2004] and Levieuge [2002] among others.

This augmented version will be estimated in the same fashion as the baseline one, namely with the GMM procedure. Accordingly, the parameter vector to be estimated includes the coefficients βs and βh for the respective additional variables st−1 and ht+w. Furthermore, we expand the instrument list to include lagged values of those additional regressors. This latter consideration is, in fact, crucial for interpreting future results on the significance of those two asset price variables. As Fuhrer & Tootell [2008] point out, insight can be gained from an instrumental variable regression, compared to an OLS regression. For instance, if lagged stock prices are statistically significant in an estimation of Eq. (13) by OLS, the researcher faces a dual interpretation: either the central bank truly smooths the evolution of equity prices, which constitutes a supplementary objective and validates its incorporation into the target equation (12), or there is useful information inside the stock market variable for forecasting future observations of the output gap and/or inflation, which the central bank may, in turn, systematically react to. In order to discriminate between the two explanations, one should rely on an instrumental variable technique. The information that stock prices convey about future inflation and the output gap is already accounted for by including lags of the stock market variable in the instrument set. Thus, if a significant coefficient βs is found in the GMM estimation, it implies that the stock market variable constitutes a direct policy objective for the central bank. Conversely, if the coefficient is insignificantly different from zero, this additional variable merely constitutes a forward-looking indicator for the central bank. It is then straightforward to evaluate the role of these additional variables, either as supplementary objectives that policymakers seek to achieve, which would be consistent with Eq. (12), or as mere instrumental variables, which would actually belong to the information set Ωt.

21 It should be noted also that many asset market trading techniques, such as the widespread "technical analysis", base investment decisions upon the history of stock price movements to predict future prices.

22 See also Del Negro & Otrok [2007] for roughly comparable results for house prices.


3 Empirical evidence and implications

In this section, we report our main empirical results from estimating both a standard forward-looking Taylor rule and an augmented forward-looking Taylor rule, which allows asset prices to enter the Federal Reserve's reaction function. The estimation strategy proceeds as follows. An immediate concern is the choice of the baseline model, namely the necessary discretionary decisions imposed by the model specification and estimation method adopted [Section 3.1]. Then, we proceed to the analysis of the standard forward-looking Taylor rule, in which the initial estimation results are discussed, the historical performance of the estimated rule is gauged and some robustness checks are carried out [Section 3.2]. Drawing on the lessons from this initial analysis, we finally concentrate on the rule extended to two variables related to asset prices [Section 3.3]. We recommend referring to Appendix A for a description of the selected data and some relevant unit root tests.

3.1 Choice of the baseline model

To begin the empirical analysis, the baseline model must be chosen, which requires a number of decisions. More specifically, one must choose the proxies that will approximate the inflation and output gap measures, the instrument set that will help to predict future inflation and the output gap, the number of lags in the interest rate function and, finally, the forecasting horizons.

3.1.1 The baseline instrument set

First, the instrument set used for the GMM estimation of the baseline model is close to the one preferred by Clarida, Gali & Gertler [1999, 2000], with minor additions. More specifically, the basic set of instruments includes a constant and lags 1-6, 9 and 12 of the log-differenced CPI, the output gap 23 and the interest rate 24, as well as the same number of lags of the log-differenced world commodity price index, the growth rate of the monetary aggregate M2 and the term structure spread 25. These instruments provide valuable information to monetary policymakers to build their forecasts of future price developments and of the future output gap path. Moreover, they only convey information available to the central bank at the time of its rate setting decisions 26.

23 Following Clarida et al. [1999], the baseline output gap measure is constructed on a quadratic time trend, that is, as the residuals of a regression of U.S. industrial production on a constant, time and time squared for the sample period 1974:1 to 2012:11.

24 Including lags of the endogenous variable among the instruments is common with time series data. The rationale is that, while there exists reverse causality between the policy rate and future inflation and the output gap, it is less likely that the interest rate influences past values of inflation and the output gap.

25 The interest rate spread is defined as the difference between the 10-year government bond yield and the 3-month Treasury Bill rate (money market rate). Term spread lags are included among the other instruments since the spread has been shown to forecast output growth well (see, for instance, Stock & Watson [2003]).

26When considering the augmented policy rule, the instrument set will be extended by includinglags of the two additional asset prices up to six periods, when necessary.


However, it is important to draw a caveat here concerning over-instrumenting. With this baseline set of instruments for the standard reaction function, the total number of instruments amounts to 49. This represents more than 25 instruments per endogenous parameter, considerably more than what is needed for identification. As a general rule, it is recommended to be sparing when assembling an instrument set. By increasing the number of instruments, the researcher increases the risk of selecting weak instruments, leading to biased estimates. This risk is especially acute in small samples and can lead to the devastating consequences discussed in Section 2.2.3. The researcher is thus confronted with a trade-off between a smaller standard deviation and an increasing bias in point estimates [Tauchen, 1986]. Tauchen concludes that the gain in precision obtained by using longer lags may be outweighed by the potentially biased results. Nevertheless, a robustness analysis considering instrument sets with shorter lags reveals no considerable change in the estimation results.

3.1.2 Implementation of an AR(3) interest rate structure

As argued in Section 2.1.4, it is often believed that policymakers follow a sort of "Brainard conservatism" when implementing their monetary policy decisions; in this line, the central bank operates in a series of incremental steps. Given the observed sluggishness, this research implements an auto-regressive interest rate structure in the regression, as performed by Clarida et al. [1999, 2000], who advocate a partial adjustment of order two for their reaction function of the Federal Reserve. However, an estimation over the entire sample, ranging from 1979:11 to 2011:11, suggests a more complex smoothing function 27. Diagnostic checks, by means of a Portmanteau (Q) test for white noise, still indicate the presence of serial correlation in the residuals when assuming an AR(2) dynamic. Three lags, however, seem sufficient to remove any serial correlation – at least up to six autocorrelations, Q(6). In addition, we compute the BIC criterion associated with the same specification over the entire sample period, while allowing the number of lags for interest rate smoothing, p, to range from zero to four. The results – not presented in the study – first strongly favour the interest rate smoothing rationale and, second, show that the rule estimated with three interest rate lags achieves the lowest BIC. In light of this evidence, this study adopts a partial adjustment for the interest rate of order three, i.e. AR(3), which better fits the evolution of the Federal funds rate. The latter is defined in the following way:

rt = (1 − ρ1 − ρ2 − ρ3)r⋆t + ρ1rt−1 + ρ2rt−2 + ρ3rt−3 + υt.

The standard reaction function in Eq. (8) has therefore been rewritten with thisparticular interest-rate smoothing model, as follows :

rt = (1 − ρ1 − ρ2 − ρ3)(α + βππt+k + βyyt+q) + ρ1rt−1 + ρ2rt−2 + ρ3rt−3 + εt. (14)

Finally, by defining φ0 ≡ (1 − ρ)α, φπ ≡ (1 − ρ)βπ, φy ≡ (1 − ρ)βy, φ1 ≡ ρ1, φ2 ≡ ρ2, φ3 ≡ ρ3 and ρ ≡ ρ1 + ρ2 + ρ3, this last specification can be converted into an estimable equation in the following reduced form:

rt = φ0 + φππt+k + φyyt + φ1rt−1 + φ2rt−2 + φ3rt−3 + εt. (15)

27Note that their estimation period covers the period from 1979 to 1997.


Equation (15) is the short-run monetary policy reaction function, where the Federal funds rate in period t is determined partly by the expected future values of the macroeconomic variables and partly by its own past realizations. In particular, estimation of this equation yields the sample reduced-form parameters – or short-run response coefficients – φ = [φπ φy φ0 ρ]′. However, we are interested in recovering and interpreting estimates, along with their standard errors, of the structural-form parameters of the implied target equation (4), i.e. the long-run response coefficients [βπ βy α ρ]′. From the sample reduced-form parameters, the sample structural-form parameters can be recovered via [βπ βy α]′ = (1 − ρ)−1 [φπ φy φ0]′. To compute approximate standard errors for these estimates of the structural parameters, we implement the so-called "Delta method" 28.
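The Delta-method computation can be sketched directly for the mapping β = φ/(1 − ρ1 − ρ2 − ρ3) (the numbers below are hypothetical placeholders for illustration, not the thesis's estimates):

```python
import numpy as np

def longrun_beta_se(phi, rho, V):
    """Delta method for beta = phi / (1 - rho1 - rho2 - rho3).
    phi: a short-run coefficient; rho: the three smoothing coefficients;
    V: 4x4 covariance of (phi, rho1, rho2, rho3) from the estimation step.
    The s.e. is sqrt(grad' V grad) with grad the gradient of the mapping."""
    s = 1.0 - float(np.sum(rho))
    beta = phi / s
    grad = np.array([1.0 / s, phi / s**2, phi / s**2, phi / s**2])
    return beta, float(np.sqrt(grad @ V @ grad))

# hypothetical reduced-form estimates, for illustration only
phi_pi = 0.15
rho = np.array([1.10, -0.30, 0.10])            # sums to 0.90: strong smoothing
V = np.diag([0.01, 0.02, 0.02, 0.01])
beta_pi, se = longrun_beta_se(phi_pi, rho, V)
print(round(beta_pi, 2), round(se, 2))         # 1.5 3.5
```

Note how heavy smoothing (ρ near one) inflates both the long-run coefficient and its standard error, since both scale with 1/(1 − ρ); Stata's nlcom performs this same computation automatically.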

3.1.3 Choice of the baseline forecasting horizon

As discussed in Section 2.1.2, this research assumes a forward-looking monetary policy rule, as it explicitly involves expected inflation and the expected output gap. Monetary policy is reasonably regarded as prospective, that is, k and/or q are positive.

Nevertheless, the exact forecast horizon of monetary policy action remains uncertain. Clearly, the horizons over which central banks target inflation and the output gap should be consistent with the lags affecting monetary policy transmission. Empirical evidence based on impulse response studies suggests that monetary policy actions attain their peak effect on inflation 6 to 24 months after an unexpected interest-rate shock (see, among others, Batini & Haldane [2001]). Moreover, it is widely accepted to assume a differential horizon between the two: the output forecast horizon should be shorter than, or at most equal to, the inflation one 29.

Concerning expectations of the output gap, monetary policymakers may not pay as much attention to its future values. As mentioned in Appendix A, the output gap measure is riddled with uncertainty. Since potential output cannot be observed, it must be proxied, possibly with a substantial margin of error. This may explain why the empirical literature on monetary policy rules often adopts the current level of the output variable rather than a forward-looking output gap. Kuttner [1992] argues that the best response to uncertainty is often to adopt a "wait-and-see" approach until more information becomes available. Hence, in the baseline model, this analysis assumes that monetary policymakers react to the contemporaneous output gap, that is, q = 0. As explained in Clarida et al. [1999], using a contemporaneous output gap in the rule, alongside a forward-looking inflation variable, reveals whether the central bank responds to the output gap independently of concerns for future inflation. Nonetheless, the current output gap is still treated as an endogenous regressor in the GMM estimation, i.e. it is instrumented, since the central bank does not have access to current-period realizations of the output gap.

28This technique yields an approximate variance for a non-linear transformation of a vector of parameter estimates. The Delta method is implemented in Stata via the post-estimation command nlcom.

29Friedman [1961] suggests that "monetary changes take much longer to affect prices than to affect output". Moreover, in the Phillips Curve, output affects inflation with a lag.
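As an illustration of the Delta-method computation described in footnote 28, the sketch below propagates the covariance of the short-run estimates (φπ, ρ) into a standard error for the long-run coefficient βπ = φπ/(1 − ρ). The point estimates are taken from the k = 12 column of Table 1, but the zero off-diagonal covariance is an assumption made for exposition only, so the resulting standard error does not reproduce the one reported in the table.

```python
import numpy as np

# Short-run estimates (k = 12 column of Table 1); the diagonal covariance
# matrix is a simplifying assumption, not an estimate from this study.
phi_pi, rho = 0.117, 0.956
V = np.array([[0.015**2, 0.0],
              [0.0,      0.006**2]])

# Gradient of g(theta) = phi_pi / (1 - rho) with respect to (phi_pi, rho).
grad = np.array([1.0 / (1.0 - rho),
                 phi_pi / (1.0 - rho)**2])

# Delta method: Var[g(theta)] ~= grad' V grad.
beta_pi = phi_pi / (1.0 - rho)
se_beta_pi = np.sqrt(grad @ V @ grad)

print(round(beta_pi, 2))     # long-run inflation coefficient
print(round(se_beta_pi, 3))  # Delta-method standard error
```

The off-diagonal term matters in practice: a negative covariance between φπ and ρ would shrink the implied standard error, which is why the reported long-run standard errors can be much smaller than this diagonal-only sketch suggests.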


Regarding the selection of the baseline inflation forecast lead k, the choice should first be based on estimates of the lag affecting the monetary policy transmission mechanism. For this reason, the literature's standard is often one year: a twelve-month projected inflation rate corresponds approximately to the effective targeting horizon. It also limits the loss of degrees of freedom and the increase in the prediction error, an important consideration given the asymptotic properties of the GMM estimators. However, the choice of k should not rest entirely on the transmission lag. We suspect a weak identification problem when using long-horizon forecasts of the inflation rate. More specifically, it was noted during the analysis carried out for this study that the longer the time horizon k, the more substantial the decay in the correlation between the lags of the instruments and the endogenous inflation variable. As discussed in Section 2.2.3, a badly instrumented endogenous variable can have seriously adverse consequences for the GMM estimations.

Therefore, the empirical investigation for this study started with a sensitivity analysis. The standard forward-looking regression, as described by Eq. (15), is estimated by GMM over the full sample period, but with varying inflation forecast horizons: the lead for inflation k ∈ {0, 3, 6, 9, 12}, while that for the output gap is q = 0 30. As mentioned in Section 2.2, due to the overlapping observations structure, the disturbance εt follows an MA(k − 1) process, unless k = 1. Thus, GMM is implemented with the HAC estimator for the weighting matrix, which is robust to autocorrelation and heteroscedasticity, based on a Bartlett kernel with a bandwidth of order k.
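The Bartlett-kernel correction referred to above can be sketched as follows. This is a minimal Newey-West-style long-run variance calculation for a single demeaned series, not the exact weighting-matrix routine used in the estimations; the function name and the white-noise check are illustrative.

```python
import numpy as np

def bartlett_lrv(u, bandwidth):
    """Long-run variance of a demeaned series u with Bartlett-kernel
    (Newey-West) weights w_j = 1 - j/(bandwidth+1), which down-weight
    higher-order autocovariances and guarantee a non-negative estimate.
    This is the kind of correction needed when overlapping observations
    induce an MA structure in the errors."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    lrv = u @ u / n                       # gamma_0
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)   # Bartlett weight
        gamma_j = u[j:] @ u[:-j] / n      # j-th sample autocovariance
        lrv += 2.0 * w * gamma_j
    return lrv

# Sanity check: for white noise the HAC estimate should be close to the
# plain sample variance, since the true autocovariances are zero.
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
e -= e.mean()
print(bartlett_lrv(e, 12), e.var())
```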

Table 1 compares the fitted coefficients of interest – both short-run and long-run parameters – corresponding to the different inflation forecasting horizons, for the full sample period. While the magnitude of the point estimates will be discussed later, some interesting regularities can be observed at this point.

A close inspection of the short-run coefficients suggests that the longer the horizon k for inflation, the larger the point estimates; this is particularly true for the inflation coefficient φπ. In addition, for horizons longer than six months, the inflation coefficient's robust standard errors rise. As mentioned in Section 2.2.3, these two informal observations can be a sign of weak identification. The Kleibergen-Paap F-statistic – K-P stat – falls considerably once a forward-looking specification is adopted, and the statistic drops below the informal threshold of 10 for horizons longer than 6 months. Applying the Stock & Yogo [2002] test at a 6-month horizon, the null hypothesis of a bias larger than 20% of the OLS bias can be rejected, but one cannot reject the null that a test with a nominal size of 5% has an actual rejection rate exceeding 25%. More serious bias problems arise at longer horizons. For a twelve-month-ahead horizon – as Clarida et al. [1999] specify –, the Stock and Yogo test cannot reject the hypotheses that the bias in the estimators is larger than 30% of the OLS bias and that a nominal 5% test has an actual size exceeding 25%. It therefore appears that long horizons suffer from a weak identification shortcoming, with the bias problems exacerbated for horizons longer than six months.

30We also experimented with various choices of q, but the time horizon for the output gap does not play an important role, at least in this preliminary analysis.


Table 1
GMM Baseline Estimates (full sample):
Sensitivity analysis to varying inflation forecast horizons

Inflation horizons       k = 0      k = 3      k = 6      k = 9      k = 12

Short-run coefficients
φπ                       .035**     .045***    .048***    .083***    .117***
                         (.014)     (.013)     (.012)     (.013)     (.015)
φy                       .002       .003*      .005***    .005**     .005***
                         (.002)     (.002)     (.002)     (.002)     (.002)
ρ                        .981***    .976***    .976***    .966***    .956***
                         (.006)     (.006)     (.006)     (.005)     (.006)

Long-run coefficients
βπ                       1.85***    1.85***    1.99***    2.38***    2.68***
                         (.478)     (.391)     (.343)     (.284)     (.222)
βy                       .090       .128*      .204**     .137**     .120***
                         (.078)     (.080)     (.090)     (.058)     (.043)

Sample period: 1979:10 - 2011:11
RMSE                     .560       .543       .548       .543       .544
Q-test                   .050       .673       .592       .964       .970
J-test                   .387       .943       .989       .978       .996
K-P stat                 267.4      19.29      10.81      5.451      3.231

Notes: The estimated parameters refer to the standard equation (15), with the Federal funds rate as the dependent variable and, as explanatory variables, the CPI inflation rate (πCPIt) and the quadratic-trend-based output gap (ytQ). The table displays the short-run and the implied long-run coefficients. ρ, βπ and βy denote the policy inertia, inflation and output gap coefficients respectively. Estimates are obtained by GMM with an HAC estimator of the variance-covariance matrix and a Bartlett kernel with autocorrelation of order 12. J-test is the test for overidentifying restrictions [Hansen, 1982], distributed as χ2 under the null; only p-values are reported. RMSE is the root mean square deviation of the estimated interest rate from the actual interest rate. Q-test denotes the Ljung-Box autocorrelation test p-value for up to 6th-order autocorrelation. K-P stat reports the Kleibergen-Paap rk Wald F statistic, used to assess weak identification in the case of non-i.i.d. errors; the critical values are the Stock-Yogo IV critical values for the Cragg-Donald i.i.d. case, reported below. HAC-corrected standard errors are reported in parentheses; those of the long-run coefficients are computed with the Delta method. See notes in Table 2 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.

Stock-Yogo weak ID test critical values (source: Stock & Yogo [2002]):
 5% maximal IV relative bias   19.98      10% maximal IV size   38.08
10% maximal IV relative bias   10.93      15% maximal IV size   20.60
20% maximal IV relative bias    6.19      20% maximal IV size   14.65
30% maximal IV relative bias    4.50      25% maximal IV size   11.58
NB: Critical values are strictly intended for the Cragg-Donald F statistic and i.i.d. errors.
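As a mechanical check, the reported K-P statistics can be compared with the quoted Stock-Yogo thresholds. The comparison is informal, since those critical values strictly apply to the Cragg-Donald statistic under i.i.d. errors; the dictionary names below are illustrative.

```python
# Kleibergen-Paap rk Wald F statistics from Table 1, keyed by horizon k.
kp_stat = {0: 267.4, 3: 19.29, 6: 10.81, 9: 5.451, 12: 3.231}
cv_bias_20 = 6.19    # 20% maximal IV relative bias critical value
cv_size_25 = 11.58   # 25% maximal IV size critical value

# A statistic below the critical value means the corresponding null
# (large bias, or large size distortion) cannot be rejected.
bias_concern = {k: s < cv_bias_20 for k, s in kp_stat.items()}
size_concern = {k: s < cv_size_25 for k, s in kp_stat.items()}

print(bias_concern)  # bias concerns only at the 9- and 12-month horizons
print(size_concern)  # size distortions already from the 6-month horizon
```

This reproduces the pattern discussed in the text: the 6-month horizon passes the 20%-bias threshold but not the 25%-size one, while the 9- and 12-month horizons fail both.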



Regarding the long-run coefficient estimates, as with their short-run counterparts, the coefficient reflecting the response to inflation, βπ, generally rises as the horizon extends 31. However, the long-run coefficients' standard errors display the opposite pattern to their short-run counterparts: the longer the forecast horizon, the higher the precision of the inflation coefficient estimates. Aside from the weak identification issue, another driving force causes this opposite trend: the interest rate smoothing parameter ρ. Irrespective of the horizon k, the estimate of the smoothing parameter is very high and close to unity, but it generally declines as the horizon lengthens. A large ρ inevitably impacts the significance of the inflation and output gap parameters. Therefore, the lower the value of ρ, the higher the precision of the relevant estimates, resulting in lower standard errors. For that reason, the short-run coefficients do not show the same pattern: for the long-run coefficients, precision increases with the degree of forward-lookingness. As discussed above, a large ρ could be a symptom of misspecification, either an omitted variable problem or serial correlation in the error term. In this case, a too-short horizon clearly suffers from a misspecification issue, reflected in persistence in the error term. In other words, the potential source of persistence in the error, which results in a high ρ, can be explained partly by the fact that, for instance, the six-month forecasting horizon assumed in the estimation of the rule is not the effective horizon that policymakers actually use.

To conclude, two issues stand out from the research concerning the lead for inflation: on the one hand, long forecasting horizon specifications seem to suffer from problems related to weak instruments; on the other hand, shorter horizons suffer from misspecification problems, materializing in a very high smoothing parameter. Even though the long-run point estimates are all plausible for the different leads, in the

31This increased sensitivity of the rule to inflation could be interpreted as the Federal Reserve assigning a larger weight to long-term developments in inflation. Short-term developments receive less attention as they may be only transitory.


sense that they all have the theoretically expected signs and are all statistically different from unity at conventional levels, a 12-month forecast horizon has been selected for this study, as it makes the most economic sense and allows a reasonable interpretation. First, the lower weight on the interest rate smoothing parameter gives the long-run coefficient estimates higher precision, i.e. lower standard errors. Second, it roughly corresponds to the empirical evidence on monetary transmission lags. Third, it is the industry standard, thus facilitating comparison with other studies. Nevertheless, this discussion raises concerns about a potential weak identification problem underlying long forecasting horizons, which must be kept in mind throughout the ensuing analysis.

The selected specification of the standard forward-looking policy rule, which will serve as the baseline case throughout the empirical analysis 32, is presented below:

rt = φ0 + φπ πt+12 + φy yt + φ1 rt−1 + φ2 rt−2 + φ3 rt−3 + εt,    (16)

where it is assumed that the central bank responds to year-ahead movements in inflation, that is, k = 12, and responds contemporaneously to the output gap, that is, q = 0.
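The alignment of variables implied by Eq. (16) – a 12-month lead on inflation, a contemporaneous gap, and three lags of the policy rate – can be sketched with placeholder data; the random series below are purely illustrative, not the thesis data.

```python
import numpy as np
import pandas as pd

# Illustrative monthly sample over the thesis period (random placeholders).
rng = np.random.default_rng(1)
idx = pd.period_range("1979-10", "2011-11", freq="M")
df = pd.DataFrame({"r":  rng.normal(5, 2, len(idx)),   # policy rate
                   "pi": rng.normal(3, 1, len(idx)),   # annual inflation
                   "y":  rng.normal(0, 1, len(idx))},  # output gap
                  index=idx)

# Regressors of Eq. (16): k = 12 lead on inflation, q = 0 on the gap,
# three lags of the policy rate.
reg = pd.DataFrame({
    "pi_lead12": df["pi"].shift(-12),  # pi_{t+12}: realized a year later
    "y_gap":     df["y"],              # contemporaneous output gap
    "r_lag1":    df["r"].shift(1),
    "r_lag2":    df["r"].shift(2),
    "r_lag3":    df["r"].shift(3),
}).dropna()

# Three observations are lost to the lags at the start of the sample
# and twelve to the lead at the end.
print(len(df), len(reg))
```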

3.1.4 Recovering the inflation target

In the discussion of the model specification, this study laid out a Taylor-type rule, Eq. (4), consisting of an unobserved equilibrium real interest rate (rr) and an unobserved target rate of inflation (π?). While it is impossible to identify estimates for both, the regression model does supply a relationship between the two variables of interest, which depends on the parameter estimates βπ and α. Combining the constant term α ≡ r − βπ π? with the Fisher relation r = rr + π? yields α ≡ rr + (1 − βπ)π?, which implies the following relation for the central bank's target inflation rate:

π? = (rr − α) / (βπ − 1)    (17)

To recover an estimate of π?, one must then proxy the unobserved equilibrium real interest rate rr. A commonly used approach consists of estimating rr as the sample average real rate, i.e. the difference between the average Federal funds rate and the average inflation rate 33. Since the equilibrium real rate is a long-term concept, both averages should be calculated over a long sample period. As Kozicki [1999] underlines, a long sample period not only averages out cyclical swings in the real rate, but also avoids capturing short-term trends in inflation that may cause misleading estimates of rr. However, Kozicki also concedes that, if the equilibrium real rate is not stable over time, the long estimation period "may include information from periods characterized by different equilibrium real rates". Given this estimate of the equilibrium real interest rate and the regression results βπ and α, it is therefore possible to recover an estimate of the Federal Reserve's inflation target over the estimation period. Note that this estimate of π? can serve as an additional check on the viability of this study's empirical results.
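As a worked example of Eq. (17), the implied target can be backed out from the full-sample GMM estimates reported later in Table 2 (α = −3.193, βπ = 2.679) together with the sample-average real rate of 2.127 percent used as the proxy for rr:

```python
# Back out the implied inflation target: pi* = (rr - alpha) / (beta_pi - 1).
alpha, beta_pi = -3.193, 2.679  # constant and long-run inflation response
rr = 2.127                      # sample-average real rate, proxy for rr

pi_star = (rr - alpha) / (beta_pi - 1)
print(round(pi_star, 2))        # implied target, in percent
```

This reproduces the 3.17 percent figure reported with the baseline GMM estimates.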

32It uses the annual CPI growth rate and the deviation of the industrial production growth rate from its fitted quadratic time trend as the measures of inflation and the output gap, respectively.

33Examples of this approach are found in Clarida et al. [1999, 2000], Kozicki [1999], and Judd & Rudebusch [1998].


3.2 A standard forward-looking Taylor rule

3.2.1 Estimation results for the baseline case

The GMM technique is used to estimate the parameter vector. As discussed in Section 2.2 above, the GMM procedure is favoured in order to avoid a possible simultaneity bias in the regression, i.e. a possible correlation between the right-hand-side variables and the residuals. Essentially, if some of the regressors are endogenous, then OLS estimates are biased and inconsistent. GMM results, however, remain consistent, provided of course that the instruments are valid and relevant. On the other hand, if the regressors are exogenous, then OLS is consistent and, under the usual assumptions, more efficient than the GMM estimator, resulting in smaller standard errors and more precise estimates. While there might be reasons to suspect non-orthogonality between regressors and errors, the use of IV estimation to address this problem must be balanced against the inevitable loss of efficiency vis-à-vis OLS. This loss of efficiency is undoubtedly justified if the OLS estimator is biased and inconsistent [Baum et al., 2003]. It is therefore advisable to test whether the interest rate is endogenously determined by expected inflation and the output gap in the standard forward-looking Taylor rule.

For this purpose, Hausman's [1978] specification test was used. The test is formed by choosing OLS as the efficient estimator and GMM as the inefficient but consistent estimator of the true parameters 34. Under the null hypothesis, the GMM and OLS estimators are both consistent – as the explanatory variables are uncorrelated with the residuals – and efficiency would be lost by turning to a GMM estimation technique. Conversely, under the alternative hypothesis, the OLS estimator is not consistent – as some regressors are endogenous – and, despite its lower efficiency, GMM would be the appropriate estimation technique. Before conducting this test, two sources of non-spherical errors should be accounted for: the Breusch-Pagan test indicates that the residuals do not have a constant variance and, due to the overlapping nature of the data, the residuals contain an MA(11) component. Accordingly, both the OLS and GMM variance-covariance matrices are corrected using an HAC estimator that is (asymptotically) robust to heteroscedasticity of arbitrary and unknown form, based on a Bartlett kernel with autocorrelation of order 12. Unfortunately, the Hausman specification test generates a negative statistic, which is not interpretable since the support of the test is positive. In fact, the variance-covariance matrix of the difference of the contrasted estimators [VGMM − VOLS] is non-positive definite, leading to a negative Hausman statistic. This outcome is rather surprising since the OLS estimator, even after accounting for the presence of heteroscedasticity, should in principle remain more efficient – smaller standard errors – than the GMM estimator. Put differently, neither estimator is fully efficient under heteroscedasticity, but the relative efficiency of OLS over GMM should remain valid. Regrettably, despite a thorough search of the literature on endogeneity tests in the presence of heteroscedasticity, we were unable to solve this issue with the Hausman test. As an alternative, a regression-based

34The Hausman statistic is distributed as χ2 and is computed as (βc − βe)′[Vc − Ve]−1(βc − βe), where βc and βe are the coefficient vectors from the consistent and the efficient estimators, respectively, and Vc and Ve are their covariance matrices. Here, the Hausman statistic is defined as (βGMM − βOLS)′[VGMM − VOLS]−1(βGMM − βOLS).


Table 2
OLS vs. GMM Baseline Estimates:
Standard Forward-Looking Taylor Rule (Full Sample)

                      α           βπ          βy         ρ          π?     RMSE    Q
OLS estimation 1   −2.197      2.259***    .162**     .963***     3.43    .541   .097
                   (2.105)     (.694)      (.069)     (.011)
GMM estimation 2   −3.193***   2.679***    .120***    .956***     3.17    .544   .235
                   (.792)      (.222)      (.043)     (.006)

Endogeneity test result: Prob > χ2 = 0.1330
Sample period: 1979:10 - 2011:11

Notes: The estimated parameters refer to the standard baseline equation (16), with the Federal funds rate as the dependent variable and, as explanatory variables, the CPI inflation rate (πCPIt) and the quadratic-trend-based output gap (ytQ). The table displays the implied long-run coefficients. α, ρ, βπ, βy and π? denote the constant term, the policy inertia (the sum of the autoregressive parameters associated with the lagged interest rate instrument), the inflation and output gap coefficients, and the implied inflation target, respectively. RMSE is the root mean square deviation of the estimated interest rate from the actual interest rate. Q denotes the Ljung-Box autocorrelation test p-value for up to 6th-order autocorrelation. *** p < 0.01, ** p < 0.05, * p < 0.1.
1 Estimates are obtained by OLS with an HAC estimator of the variance-covariance matrix and a Bartlett kernel with autocorrelation of order 12. HAC-corrected standard errors are reported in parentheses.
2 Estimates are obtained by GMM with an HAC estimator of the variance-covariance matrix and a Bartlett kernel with autocorrelation of order 12, using two-step nonlinear two-stage least squares [Hansen, 1982]. The optimal weighting matrix is obtained from the first-step two-stage least squares parameter estimates. The GMM instrument set includes a constant and lags 1-6, 9 and 12 of the log-differenced inflation rate, the output gap, the interest rate, the log-difference of a world commodity price index, the growth rate of the monetary aggregate M2, and the term structure spread. The J-test for overidentifying restrictions [Hansen, 1982], distributed as χ2 under the null, is easily passed (p > 0.99), evidence in favour of the validity of the instruments. HAC-corrected standard errors of the long-run coefficients are computed with the Delta method and reported in parentheses.

procedure is conducted as an F-test in an "auxiliary regression". Specifically, we first regress each of the two potentially endogenous variables – expected inflation and the output gap – on all the available instruments and save the residuals of these two reduced-form regressions. We then include these reduced-form residuals as additional regressors in the original model and estimate this "artificial" regression by OLS with a robust variance-covariance matrix. Table 2 shows the result of the robust F-test of joint significance of the coefficients on the two included residuals. Although we are unable to reject the null hypothesis at the 10 percent significance level, the p-value is sufficiently low to raise concern that at least one suspected explanatory variable is potentially endogenous. To conclude, despite the large uncertainty surrounding this endogeneity test, we follow the empirical literature and estimate both the standard and the augmented forward-looking Taylor rules by means of the GMM estimation technique.
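The regression-based procedure just described can be sketched on simulated data. The variable names and data-generating process below are illustrative, and a plain (non-robust) F-statistic is used for simplicity, whereas the text employs a robust variance-covariance matrix.

```python
import numpy as np

def ols_resid(y, X):
    """OLS residuals via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(2)
n = 300
z = rng.standard_normal((n, 4))         # instruments
const = np.ones((n, 1))
Z = np.hstack([const, z])

# Simulated regressors standing in for expected inflation (x1, made
# endogenous through v1) and the output gap (x2, exogenous here).
v1, v2 = rng.standard_normal(n), rng.standard_normal(n)
x1 = z[:, 0] + z[:, 1] + v1
x2 = z[:, 2] + v2
r = 1.0 + 0.5 * x1 + 0.2 * x2 + 0.8 * v1 + rng.standard_normal(n)

# Step 1: reduced-form residuals of each suspect regressor on all instruments.
res1 = ols_resid(x1, Z)
res2 = ols_resid(x2, Z)

# Step 2: augment the structural regression with those residuals; a joint
# F-test on their coefficients is the regression-based endogeneity test.
X_aug = np.column_stack([const[:, 0], x1, x2, res1, res2])
X_res = np.column_stack([const[:, 0], x1, x2])
ssr_u = np.sum(ols_resid(r, X_aug) ** 2)
ssr_r = np.sum(ols_resid(r, X_res) ** 2)
q, k = 2, X_aug.shape[1]
F = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k))
print(F > 3.0)  # large F rejects exogeneity; x1 is endogenous by construction
```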


Next, this study examines in more depth the long-run GMM coefficient estimates of this standard baseline reaction function.

First, the estimated reaction of the Federal Reserve to expected future inflation is given by the coefficient βπ, equal to 2.679 with a standard error of 0.222. This coefficient indicates that, in the event of a one percentage point increase in expected annual inflation, the Federal funds rate will be raised by 2.679 percentage points, everything else being equal. Given that the estimated coefficient is significantly greater than one at any conventional significance level, one can easily reject the hypothesis that monetary policy has not been active during the full sample period. As mentioned in Section 2.1.3, for the central bank to meet its objective of maintaining price stability, the nominal interest rate must rise more than proportionally to the increase in expected inflation, ensuring an increase in the real interest rate in response to higher inflation. The Taylor principle is indeed satisfied – with a 1.679 percentage point rise in the real rate –, implying a stabilizing policy on future inflationary expectations. This aggressive commitment to inflation stabilization is in line with empirical findings and conventional thinking that policy has been active over this sample period.

Second, another interesting result concerns the estimated weight on the output gap, βy, equal to 0.120 with a standard error of 0.043. The coefficient represents the strength of the central bank's response to deviations of the industrial production growth rate from its potential level. As indicated by the positive and statistically significant coefficient estimate, the Federal Reserve appears to have implemented a stabilizing policy with regard to the real economy, independently of its concerns for future inflation. Controlling for the effect of expected inflation, a one percentage point increase in the current-month output gap triggers, on average, a 12-basis-point increase in the policy rate. Accordingly, the Federal Reserve did not consider price stability as its sole policy objective, but also aimed to influence the performance of the real economy. This result is well aligned with the Federal Reserve's dual mandate, balancing the goals of inflation and output stabilization. It differs, though, from the evidence presented by Clarida et al. [1999] and Chadha et al. [2004], who found a βy insignificantly different from zero at conventional levels of significance. It should be noted, however, that the sample period covered by Clarida et al. [1999] ranges from 1979:10 to 1994:12. In Appendix C, the first row of Table C.1 attempts to recover their results for their baseline specification. Even though the output gap measure used by Clarida et al. is constructed in a similar fashion, and despite using the same sample period, the findings of this analysis still reveal a significant and positive estimate for the output gap. This result might be explained by the fact that the current study uses a more recent version of the real industrial production series, which has been considerably revised since Clarida et al. published their study. For further empirical evidence on the output gap coefficient response, please refer to Appendix C, in which alternative study cases are considered: in particular, the recent financial crisis sample is removed (Table C.1) and alternative target horizons for the output gap are considered (Table C.2).


Third, the sum of the estimated coefficients on the lagged policy rates, embodied in the parameter ρ, is fairly close to one (0.956) and statistically different from zero at any conventional significance level. This very high level of sluggishness in the policy rates suggests considerable gradualism in the Federal Reserve's decision making. Each month, the policy rate typically adjusts enough to eliminate approximately 4.4 percent (1 − ρ) of the difference between the lagged actual rate and the target interest rate, so that the actual policy rate reaches the rule-recommended funds rate in very small steps over time.
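The implied speed of adjustment can be checked with a line of arithmetic; the half-life formula below is a standard property of partial-adjustment models, not a statistic reported in this study.

```python
import math

# With smoothing parameter rho, the gap between actual and target rate
# decays geometrically: a fraction (1 - rho) is closed each month, and
# the remaining gap after n months is rho**n.
rho = 0.956
monthly_adjustment = 1 - rho

# Half-life: solve rho**n = 0.5 for n.
half_life = math.log(0.5) / math.log(rho)

print(round(monthly_adjustment, 3))  # fraction of the gap closed per month
print(round(half_life, 1))           # months until half the gap is closed
```

With ρ = 0.956 this gives roughly a 4.4 percent monthly adjustment and a half-life of about 15 months, which makes the "very small steps" interpretation concrete.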

Finally, we obtain a rather plausible estimate of the long-run inflation target π?. Given a sample average real rate of 2.127 percent – taken as the estimate of the real equilibrium funds rate, and interestingly not far from Taylor's assumption of 2 percent –, Eq. (17) yields a value of 3.17 percent for the estimated inflation target. This fully plausible number lends credibility to the estimates. We do recognize, however, that it is highly implausible that the Federal Reserve pursued one unique and invariant target inflation rate over the whole sample. While over the full sample period the Federal Reserve was committed to price stability, nowadays π? is seen as around 2 percent, whereas, for the Volcker period, Judd & Rudebusch [1998] found a wide range of estimates for the implicit inflation target, going up to 6.4 percent.

To conclude, the estimates of the baseline standard forward-looking reaction function yield theoretically plausible results, mostly consistent with the forward-looking rules estimated in other studies. This analysis found that, over the entire sample period, the Federal Reserve implemented an active monetary policy and attributed a much larger weight to inflation stabilisation than to the stabilization of the real economy. The divergence between the two estimated parameters can nonetheless be seen as rather large, given the Federal Reserve's dual mandate.

3.2.2 Historical performance

Even though the estimation results are satisfactory and the root mean square error is small, it is necessary to ensure the quality of the model. Since the econometric method used does not provide a measure of R2, a graphical check can still be conducted.

Figure 1 compares the actual and fitted values of the Federal funds rate. Unsurprisingly, the estimated reaction function tracks the historical rate very closely. This is often the case when estimating a model that accommodates lagged adjustment of the actual rate to the target rate and when the interest rate smoothing coefficient is very high. It is noted, however, that the estimated rate is quite far from the actual rate at the beginning of the sample period: deviations averaged 1.16 percentage points during the 1979:10-1982:10 period. The early 1980s covers Volcker's drastic disinflationary period, when the Federal Reserve put special emphasis on the control of monetary aggregates and non-borrowed reserves, rather than on the interest rate, as the instrument of monetary policy. This may explain the apparent


Figure 1: Actual and Fitted values of the U.S. Federal funds rate

[Figure: monthly time series over the full sample of the Actual and Fitted U.S. Fed funds rate (%), together with the Deviation (Actual − Fitted); vertical axis from −5 to 20.]

Notes: The chart shows the actual and fitted values of the U.S. Federal funds rate, with fitted values (shaded line) derived from the estimated regression reported in Table 2, which is based on a model that accommodates lagged adjustment of the actual rate to the target rate.

mismatch for this short period of time. In particular, the deviation reaches a peak of approximately 6 percentage points early in 1980. A closer look reveals that during the first months of 1980, the short-term interest rate rose sharply and persistently, but suddenly reversed in May 1980. This indecision cannot be captured by the estimated rule.

To gauge the historical performance of the baseline estimated reaction function, Figure 2 is relied on instead. This figure plots the actual Federal funds rate against the estimated target rate implied by the baseline estimation, for the period 1984:01 to 2011:11 35. The difference between the target value and the fitted value lies in the fact that the former corresponds to the predicted target policy rate of Eq. (4), in which the interest rate smoothing parameters are set to zero 36. It is also noted that the construction of the target values requires assuming that the Federal Reserve had, for each month, perfect foresight of the 12-month projected inflation and the current output gap.

The quality of the specification can thus be assessed in light of Figure 2. Upon

35For the sake of clarity, the early sample period is explicitly removed from the figure; the scale would otherwise have been distorted.

36The target values are thus obtained as follows: r?t = α + βπ πt+12 + βy yt.
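The target-rate construction in footnote 36 can be sketched directly from the Table 2 GMM estimates; the inflation and gap values passed in below are illustrative inputs, not the actual series.

```python
# Target rate with the smoothing terms switched off:
#   r*_t = alpha + beta_pi * pi_{t+12} + beta_y * y_t,
# using the full-sample GMM coefficients from Table 2.
alpha, beta_pi, beta_y = -3.193, 2.679, 0.120

def target_rate(pi_ahead, gap):
    """Target Fed funds rate (%) for given year-ahead inflation (%)
    and contemporaneous output gap (%)."""
    return alpha + beta_pi * pi_ahead + beta_y * gap

# Illustrative inputs: 2% expected inflation with a closed output gap,
# versus low inflation with a deep negative gap (crisis-like conditions).
print(round(target_rate(2.0, 0.0), 2))
print(target_rate(0.5, -6.0) < 0)  # the rule can prescribe negative rates
```

The second line illustrates the point made below about the post-2008 period: for sufficiently weak inflation and a deep negative gap, the estimated rule calls for a negative funds rate.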


examination, it is found that the baseline specification model is "satisfactory" in tracing the dynamics of the instrument rates. Recommendations from the estimated rule essentially follow the same general pattern as the historical rates. This is the case with the monetary tightening in the late 1980s, the monetary easing in the early-1990s recession, the monetary tightening at the very end of the decade during the economic expansion, the monetary easing in the early-2000s recession (following the dot-com bubble burst) and, finally, the substantial monetary easing in the aftermath of the severe 2008 financial crisis.

Figure 2: Actual and Target values of the U.S. Federal funds rate

[Figure: monthly time series, 1984:01-2011:11, of the Actual and Target U.S. Fed funds rate (%); vertical axis from −10 to 15.]

Notes: The chart shows the actual and estimated target values of the U.S. Federal funds rate, derived from the estimated regression reported in Table 2 (shaded line). The target value differs from the fitted value in that the former implicitly sets the interest rate smoothing parameters to zero.

Nevertheless, deviations of the target rate from the actual Federal funds rate remain numerous, persistent and sizeable. Five periods of deviation can be distinguished.

A first deviation appears in 1985-86, when the prescription of the rule suggests a more powerful monetary easing than the one followed by the Federal Reserve. This period saw a sharp decline in headline inflation due to a significant fall in oil prices. Bernanke & Gertler [1999] argue that much of the deviation of actual rates from target in 1985 could be explained by the fact that this negative inflation shock was unanticipated by the central bank, "contrary to the perfect foresight assumption made in constructing the figure".

Second, the Federal Reserve kept its policy rate significantly above target from late 1996 until 1999, even though inflationary pressures diminished substantially during those years. It is likely that the Federal Reserve pursued a more stabilizing role with


regard to the real economy by attaching a larger weight on output gap than the oneprescribed by the rule.Third, a relatively strong and persistent deviation of actual rates below target appearsafter the 2001 recession until mid-2004, even though inflation had started to rise and theeconomy was recovering fast. It appears that the Federal Reserve eased its monetarystance by a period longer than necessary, with a 3.06 percentage points deviation onaverage over this period. Some observers have argued that the Federal Reserve keptrates “too low for too long” after the 2001 recession, and that the abundance of easycredit contributed to fuel the real estate bubble and entice risk-taking among investors[Taylor, 2007]. This “overly” accommodating stance of U.S. monetary policy couldstill be explained by a series of negative shocks, which could not be captured by thissimple policy rule. These include, for example, the 11 September 2001 attacks, theEnron financial scandal or the 2003 Iraq war.Fourth, a short deviation started in 2007 when the Federal Reserve did not tightenmonetary policy even though a sharp rise in oil prices increased the headline inflationrate by mid 2008. It is possible that the surge in inflation was also unanticipated –as with the first deviation case – or that policymakers did not respond to this supplyshock because it was seen as transitory.Finally, the last period of significant deviation concerns the time after the 2008Lehman Brothers’ collapse. The economic conditions were such that the Federal Re-serve strongly moved the short term interest rates down as the rate recommended bythe rule, in a series of short steps that ended in when it actually hit the zero-lowerbound. However, it is noteworthy that the rule even calls for negative rates, whichtends naturally to deepen the divergence between the estimated series and the effec-tive one 37. 
Although nominal rates cannot, in principle, go below zero, this singular situation could indicate that, facing such economic turmoil, the Federal Reserve not only had to install a zero-interest-rate policy but also had to find other unconventional instruments to reduce stress in financial markets, such as quantitative easing 38. From the beginning of 2010, it is noteworthy that the rule prescribed an end to this accommodating monetary policy, owing to a resurgence of inflationary pressures. Instead, the monetary authorities have striven to maintain rates close to zero, partly moving away from traditional objectives by putting more emphasis on the recovery of the economy (i.e. a higher βy) and on financial stability. The unusual monetary conditions of a zero rate, coupled with a second round of quantitative easing (QEII) in late 2010, allowed the Federal Reserve to avoid a double-dip recession and to counterbalance the ongoing effects of the financial crisis 39.

37 Alternative specifications of empirical policy rules, such as in Rudebusch [2009], also generally recommend a negative Federal funds rate.

38 As the ability of the Central Bank has been heavily constrained by the zero lower bound, the Federal Reserve has introduced a number of non-traditional programs since December 2008, which have been termed quantitative easing: QEI was initiated in December 2008, QEII in November 2010, Operation Twist in September 2011 and QEIII in September 2012. Because these large asset purchases have not been offset by liquidating its portfolio of Treasury securities, these programs have sharply increased the amount of assets held by the Federal Reserve, and in turn much recent attention has been focused on the monetary base. Since the fall of 2008, the latter has expanded by almost 280 percent.

39 A major threat was the phenomenon of a credit crunch, leaving households and businesses more reluctant to borrow and spend, and lenders less willing to extend credit.


To conclude, the historical performance analysis demonstrates the poor fit of the estimated rule, evident in the sizeable differences between the rule recommendations and the effective policy rate.

3.2.3 Alternative measures and robustness of recommendations

The estimation of the baseline forward-looking specification, illustrated in Table 2, uses the annual headline CPI growth rate and the deviation of the industrial production growth rate from its fitted quadratic time trend as the measures of inflation and the output gap, respectively. The Federal Reserve, however, may have actually focused on alternative measures during monetary policy deliberations. This could partly explain the sizeable divergence discussed in Figure 2. For instance, it seems that the measure of inflation used in monetary policy setting was refined many times during Greenspan's tenure. Mehra & Sawhney [2010] highlight that, in the presentation of its semi-annual monetary policy reports to Congress, the Federal Reserve Board presented the inflation projections of Federal Open Market Committee (FOMC) participants using different measures of inflation over time. Through July 1988, the FOMC's inflation outlooks were presented in terms of the implicit deflator of the gross national product. It then switched to the CPI. In February 2000, inflation forecasts were reframed in terms of the personal consumption expenditure (PCE) price index and, since July 2004, the FOMC has used the core PCE deflator, which excludes food and energy prices. In addition, as a measure of economic slack, the Federal Reserve might have focused on the unemployment gap rather than the output gap. This uncertainty surrounding the measures entering the rule specification is likely to affect the coefficient estimates reported in Table 2, and as a result to alter the policy recommendation path examined previously in Figure 2. Kozicki [1999] presented evidence on how sensitive Taylor-type rules are to changes in rule specification, when some assumptions about the rule are altered.
The same author found the range of policy recommendations to be extremely wide, depending on how the output gap is measured and which measures of inflation and the equilibrium interest rate are used.

Therefore, in the spirit of the robustness analysis of Kozicki [1999], the robustness of the baseline model across various measures of the output gap and inflation gauges is investigated below. Specifically, the analysis considers three alternative measures of economic slack: 1) the deviation of the industrial production growth rate from an HP trend, 2) the deviation of the unemployment rate from a fitted quadratic time trend and 3) its deviation from a similar HP trend – note that the signs of the unemployment series are switched in order to preserve the sign interpretation of βy. Similarly, the analysis considers three alternative measures of inflation: the year-over-year rate of change in 1) the core CPI, 2) the PCE price index and 3) the core PCE price index. For a description of the data and a visual inspection, please refer to Appendix A.
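As an illustration of these detrending choices, the quadratic-trend and HP-trend gap measures can be sketched as follows. This is a minimal numpy sketch on toy data, not the thesis' actual code; in particular, the smoothing value λ = 129,600 is the Ravn-Uhlig convention for monthly series and is an assumption here, since the text does not state the value it used.

```python
import numpy as np

def quadratic_trend_gap(y):
    """Gap of a series from its fitted quadratic time trend."""
    t = np.arange(len(y))
    coeffs = np.polyfit(t, y, deg=2)      # fit y_t = a*t^2 + b*t + c by OLS
    return y - np.polyval(coeffs, t)

def hp_trend_gap(y, lamb=129_600):
    """Gap from a Hodrick-Prescott trend, obtained by solving
    (I + lamb * D'D) tau = y, with D the second-difference matrix.
    lamb = 129,600 is the Ravn-Uhlig value often used for monthly data."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    return y - trend

# Toy example: a noisy series with a slow trend (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(240)
series = 0.0001 * t**2 + 0.01 * t + rng.normal(0, 0.5, 240)
gap_q = quadratic_trend_gap(series)
gap_hp = hp_trend_gap(series)
```

Both gap series are mean zero by construction, since the quadratic trend is fitted with an intercept and the HP first-order conditions annihilate constants.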

Table C.3 in Appendix C reports the sixteen estimated rule results for the full sample period, the baseline case included. At first glance, both the signs and the significance of


the estimated parameters remain largely unchanged. Even though the orders of magnitude are fairly similar, some coefficients lie outside the confidence intervals implied by the standard errors of the coefficients estimated in the baseline regression. The main observations are detailed in the Appendix.

Figure 3: Range of rule prescriptions for different measures of inflation and theoutput gap

[Figure: shaded band showing the range of the 16 rule recommendations against the actual U.S. Fed funds rate (%), 1985-2010, together with the "Baseline Target" and "Alternative Target" paths.]

Notes: The shaded area represents the range of rule recommendations based on prescriptions calculated for each of the four measures of slack and each of the four measures of inflation discussed previously – making a total of 16 rule recommendations. "Baseline Target" reproduces the path of the target rate represented in Figure 2, for the sake of comparison. "Alternative Target" shows the target rate implied by the rule where the core CPI and the deviation of the industrial production growth rate from its HP trend are used as the measures of inflation and the output gap, respectively.

The robustness of the rule recommendations implied by these alternative measures of inflation and the output gap was then evaluated 40. The analysis examined whether or not there is considerable variation in the prescriptions of the various estimated rules.

Figure 3 shows the range of rule recommendations for the period 1984:01 to 2011:11, obtained across the alternative inflation and economic slack measures, and relative to

40 Unlike Kozicki's [1999] procedure, the weights are unrestricted and based on a given choice of inflation and output gap measures – the weights reported in Table C.3.


the historical path of the Federal funds rate. The envelope of rule recommendations is based on prescriptions calculated for each of the four measures of slack and each of the four measures of inflation discussed previously – making a total of 16 rule recommendations. More specifically, in each month the upper bound corresponds to the maximum of the 16 rule recommendations for that month and, conversely, the lower bound represents the minimum of the 16 rule prescriptions. Therefore, the wider the range, the less robust the rule recommendations are; inversely, the narrower the range, the more robust they are. Overall, there is substantial variation in the prescriptions of the various rule specifications – from 1984:01 to 2011:11, the average range is 4.25 percentage points. However, the range fluctuates significantly over the period, reaching its narrowest at 1.35 percentage points in September 1996 and its widest at 11.86 percentage points in July 2008.
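The envelope construction described above – a month-by-month maximum and minimum over the 16 prescriptions – can be sketched as follows. The `prescriptions` array below is a random placeholder standing in for the 16 estimated rule paths, which are not reproduced here.

```python
import numpy as np

# Hypothetical matrix of rule prescriptions: rows are months, columns are the
# 16 (4 slack x 4 inflation) specifications; values are random placeholders.
rng = np.random.default_rng(1)
months, n_rules = 335, 16                 # 1984:01 - 2011:11 is 335 months
prescriptions = 4 + rng.normal(0, 2, size=(months, n_rules))

upper = prescriptions.max(axis=1)         # month-by-month maximum of the 16 rules
lower = prescriptions.min(axis=1)         # month-by-month minimum
range_width = upper - lower               # a wide band = less robust prescriptions

print(f"average range: {range_width.mean():.2f} pp, "
      f"widest: {range_width.max():.2f} pp")
```

Plotting `lower` and `upper` against the actual Federal funds rate reproduces the shaded band of Figure 3.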

This analysis also plotted the target rate implied by the rule where the core CPI and the deviation of the industrial production growth rate from its HP trend are used as the measures of inflation and the output gap, respectively – Specification 6. This specification achieves the lowest root mean square error of the deviations from the actual rate, and tracks the actual path fairly closely. Briefly, we observe that the gap between the baseline target rate and the actual rate is greatly reduced with this new specification for some of the episodes discussed in Section 3.2.2. This is true during the negative oil price shock of 1985-86, the accommodating period of 2001-2004, and during the 2007-08 oil price hike. Notably, the choice of inflation and output gap measures seems to matter for the interpretation of the accommodating episode after 2001 41.

Panels A and B in Figure 4 isolate the sensitivity of the rule recommendations to the choice of inflation and output gap measures, respectively. For each month, only the maximum and minimum recommendations over the four inflation measures are displayed in Panel A, ensuring that all prescriptions use the baseline measure of economic slack. Similarly, Panel B shows the range of recommendations for the four measures of the output gap, ensuring that all recommendations use the baseline inflation gauge. We can observe that the sensitivity of the rule recommendations stems from both sides, with a slightly wider average range from 1984:01 to 2011:11 for the inflation gauges, 2.48 percentage points, compared to 2.42 percentage points for the output gap measures. Both panels reach their widest range in 2008, differing by as much as 10.27 percentage points in July 2008 for Panel A and by 4.43 percentage points in April 2008 for Panel B 42.

41 But, as stressed earlier, the prescribed policy interest rates cannot serve to evaluate past policies, since they are based on ex-post data. Regarding the use of the Taylor rule as a policy benchmark, it is more appropriate to use real-time data, that is, the inflation rate and output gap that correspond to the same period in which the policy decision was made [Bernanke, 2010]. In that spirit, one may use the FOMC real-time economic projections compiled in the Greenbook database.

42 The range becomes very wide in July 2008, as the 12-month-ahead inflation forecasts differed considerably between the headline CPI and the core CPI, due to the sharp fall in oil prices. In nominal terms, oil prices slumped from $126.33 in June 2008 to $31.04 in February 2009, but by June 2009 they had risen back to $61.4.


Figure 4: Isolating the respective influence of inflation and the output gap

[Figure: two panels plotting the maximum and minimum rule recommendations against the actual U.S. Fed funds rate (%), 1985-2010. Panel A - for different inflation measures and the baseline output gap. Panel B - for different measures of the output gap and the baseline inflation.]

Notes: In Panel A the range of rule recommendations is based on prescriptions calculated for the quadratic-trend-based output gap and each of the four measures of inflation, while in Panel B the range is based on recommendations derived for the CPI inflation rate and each of the four measures of economic slack.

To sum up, the discussion above demonstrated that, although point estimates remain fairly stable across measures of inflation and the output gap, rule recommendations are not at all robust. On average, different assumptions about the measure of inflation and different assumptions about the measure of the output gap independently lead to a range of rule recommendations that is roughly 2.4 percentage points wide. This number is strikingly large when compared to the average absolute change in the Federal funds rate from 1984:10 to 2008:12, of roughly 0.15 percentage points. Therefore, this brief robustness analysis showed that the choice of measures employed in estimated rules is far from unimportant, and this additional source of uncertainty surrounding the estimation of monetary policy rules will be kept in mind.

3.2.4 Subsample stability analysis and an issue of identification

Before proceeding further with the estimation of an augmented reaction function, it is necessary to explore the stability of the parameters estimated over the entire sample. Estimation of Eq. (16) over the full sample period yields plausible results. However, the historical performance analysis revealed significant gaps with the actual series. Even though estimation over a long sample period enhances the quality of the regression, it is highly implausible that the Federal Reserve's reaction function was stable throughout the entire analysed period. It is therefore suspected that the


point estimates of the parameter vector [βπ, βy, ρ, α] have changed. It seems rather inconceivable to assume that a central bank would respond to inflationary pressures, for example an expected future inflation rate of 4%, with exactly the same determination whether average inflation has been low or high over the recent past. Moreover, the assumption of a constant inflation target within the period in question is highly restrictive, as π⋆

may not remain constant. A significant illustration of parameter drift is found in the contribution of Clarida, Gali & Gertler [2000]. By dividing their sample into two periods – before and after Volcker's appointment as Federal Reserve Chairman in 1979 – Clarida et al. document a crucial shift in the response of monetary policy towards inflation: the inflation coefficient more than doubles after 1979, reflecting a shift in policymakers' behaviour from a passive monetary policy towards an active and aggressive one. Prior to the Volcker era, a period of macroeconomic instability, the Federal Reserve Chairmen focused on fighting unemployment, while leaving the inflation problem aside.

A common practice when estimating policy rules is to decompose the observation period in accordance with the incumbencies of the respective Chairmen [Clarida et al., 2000; Judd & Rudebusch, 1998]. The reaction function is thus implicitly assumed to be stable during their tenures, but may vary across them. For our estimation sample, this implies, for example, distinguishing the Volcker era, from November 1979 to August 1987, from the Greenspan era, from September 1987 to May 2008 43.

As will be investigated below, this subsample stability analysis might, however, not yield the expected results. A potential issue may arise in the identification of the parameter vector. The suspicion stems from the fact that the literature has rarely estimated, at monthly frequency, a standard forward-looking reaction function that distinguishes the Volcker from the Greenspan tenure. Studies usually mix both incumbencies together. Yet the practice is more widespread in the case of quarterly frequency. Quarterly studies actually often observe a jump in the interest rate smoothing coefficient from one tenure to the other. For instance, Clarida et al. [2000] report a surge in the parameter ρ, from 0.74 under Volcker to 0.91 under Greenspan. This surge is seen as an increase in the degree of gradualism between the two periods and does not pose any problem for the identification of the policy parameters. The issue here is that the point estimate of ρ is usually higher with monthly data than with quarterly data: it is rather logical to expect more apparent gradualism when decisions are observed at a higher frequency. Estimation over the Volcker subsample yields an interest rate smoothing coefficient of 0.88. If the estimation is made instead over the Greenspan era, this should lead to a surge in ρ of roughly the same magnitude as with quarterly data, which would push ρ dangerously close to unity (see Table C.4 in Appendix C). This would have devastating consequences for the precision of the other parameter estimates, particularly for the inflation coefficient, since the correlation between the lagged interest rates and the expected inflation measure is on average approximately 0.50.

43 Bernanke's incumbency actually started in January 2006, but in order to obtain an estimation period ending before the financial crisis, it is preferable to group Greenspan's terms with the first two years of Bernanke's tenure. A subsample dedicated to Bernanke's tenure alone is not recommended, as it may suffer from a small-sample bias and the simple Taylor rule framework might not capture well the unconventional times during his incumbency.


The theoretically implausible point estimates and the noisy value of ρ might explain the common practice, among studies using monthly data, of omitting the subsample stability analysis.

This study will further explore this potential identification problem and investigate the rationale behind the run-up in the coefficient ρ, using a striking illustration. To this end, the analysis adopts the uncommon subsample decomposition of Dupor & Conley [2004]. We estimate the standard forward-looking reaction function, as described in Eq. (16), over two subperiods: the "high-inflation period" (1979:10 - 1991:07) and the "low-inflation period" (1991:08 - 2005:08) 44.

Table 3: Standard forward-looking Taylor rule in high- and low-inflation subperiods

                          α           βπ          βy         ρ          π⋆       RMSE
  1979:10 - 1991:07 (1)   4.576***    1.245***    .285***    .907***   −1.404    .847
                          (.621)      (.102)      (.048)     (.006)
  1991:08 - 2005:08 (2)  −9.102*      4.757***    .120**     .991***    2.799    .143
                          (4.853)     (1.809)     (.061)     (.002)

Notes: The estimated parameters refer to the standard baseline equation (16), with the Federal funds rate as the dependent variable and, as explanatory variables, the CPI inflation rate (πCPI_t) and the quadratic-trend-based output gap (yQ_t). The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982], distributed as χ2 under the null, is easily passed (p > 0.99) for both specifications. The Q-test for serial correlation indicates no pattern of correlation in the error term. HAC-corrected standard errors, computed with the Delta method, are reported in parentheses. (1) High-inflation subperiod. (2) Low-inflation subperiod. See notes in Table 2 for further explanation of the instruments and the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.

Table 3 reports the estimates for the two subperiods. The first row contains results for the high-inflation subperiod. The analysis reveals that all estimates are significantly different from zero at any conventional significance level. The inflation coefficient βπ is 1.245, with a standard error of 0.102. Policy is thus active. The output gap coefficient βy equals 0.285. The sum of the coefficients ρ1, ρ2 and ρ3 (0.907) indicates that the Federal Reserve was operating on an incremental basis during this subperiod: it takes approximately 11 months to fully adjust interest rates to the target interest rate implied by the Taylor rule. The second row provides estimates for the low-inflation subperiod. Note that every point estimate changes considerably. The most significant finding is the dramatic loss of precision for all parameters, with the

44 Those subperiods closely replicate the selection made by Dupor & Conley, with minor differences: the chosen breakpoint is July 1991, i.e. the final month during which annual CPI inflation was greater than 4%, and the chosen end point of the second subsample is August 2005, the final month during which CPI inflation remained below the 4% threshold.


exception of ρ. The latter now equals 0.99, with a standard error of 0.002. Practically speaking, this means that current policy is almost solely determined by lagged policy. The very high degree of interest rate smoothing results in a higher implied policy parameter for inflation, βπ = 4.757, and significantly undermines its precision, with a standard error of 1.809. Unlike Dupor & Conley, we are still able to reject the null hypothesis that policy is not active with respect to inflation at the 5% significance level in this subperiod. However, having a ρ close to unity remains highly uncomfortable.
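The mapping from the reduced-form estimates to the implied long-run coefficients, and the "approximately 11 months" adjustment figure, can be reproduced with a small sketch. The φ values below are back-derived from Table 3 purely for illustration (e.g. φπ = βπ(1 − ρ) ≈ 0.116); reading the mean adjustment lag as 1/(1 − ρ) is one common interpretation of the smoothing term, assumed here.

```python
def implied_long_run(phi_pi, phi_y, rho):
    """Recover long-run Taylor-rule coefficients from the partial-adjustment
    reduced form r_t = (1-rho)*(alpha + beta_pi*pi + beta_y*y) + rho*r_{t-1}."""
    beta_pi = phi_pi / (1.0 - rho)
    beta_y = phi_y / (1.0 - rho)
    mean_lag = 1.0 / (1.0 - rho)   # mean number of months to reach the target
    return beta_pi, beta_y, mean_lag

# High-inflation subperiod of Table 3: rho = 0.907 implies a mean adjustment
# lag of 1/(1 - 0.907) = 10.75, i.e. roughly 11 months.
beta_pi, beta_y, lag_high = implied_long_run(phi_pi=0.116, phi_y=0.027, rho=0.907)
print(f"long-run beta_pi = {beta_pi:.3f}, mean adjustment lag = {lag_high:.2f} months")
```

With ρ = 0.991 in the low-inflation subperiod, the same formula yields a mean lag of about 111 months, which illustrates why a near-unity ρ makes the implied long-run coefficients so imprecise.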

The large run-up in the smoothing parameter from one subsample to the other is at first sight not surprising. It could actually stem from an increase in the degree of gradualism pursued by the Federal Reserve. Indeed, the first subperiod is essentially marked by the Volcker disinflation era. Not only does the sample cover the drastic disinflationary period of the early 1980s, but the Federal Reserve's emphasis on monetary aggregates also heightened the volatility of short-term interest rates. As for the second subperiod, a more inertial approach to policy prevailed during Greenspan's incumbency. It is telling that former chairman Alan Greenspan earned the nickname of "Quarter Point Al" for his habit of moving in quarter-point increments, that is, raising or cutting interest rates by no more than 25 basis points at a time.

However, this large increase in the parameter ρ, and the subsequent drop in estimation precision, might not stem only from increased gradualism. It could also be explained by differences in variation across the two subsamples. Sufficient variation in the sample is indeed a necessary condition to properly identify the slope coefficients in the policy reaction function, as well as the target inflation rate [Clarida et al., 2000]. The second sample considered here, however, appears to contain insufficient variation, particularly in inflation. The average annual CPI growth rate decreased from 5.462 percent, with a standard deviation of 3.163, in the first subsample to 2.562 percent, with a standard deviation of merely 0.623, in the second subperiod 45. The suspicion that insufficient variation may account for some of the increase in the coefficient ρ is in fact strongly influenced by the evidence presented in the original Clarida et al. paper [1999]. Their analysis of monthly data for the U.S. reveals that the passage from the original sample period, running from 1979:10 to 1994:12, to the reestimation of the model for the post-1982 sample period results in a jump in the sum of the coefficients ρ1 and ρ2 from 0.92 to 0.97. Therefore, removing the part of the sample that contains the early Volcker disinflation period seems to account largely for the increase in the coefficient ρ and the corresponding loss in the estimates' precision.

To gain some insight into the possible impact of the loss of sample variation on the smoothing parameter, we proceed to a reverse recursive estimation of Eq. (16), that is, a forward reduction of the estimation window. Generally speaking, a rolling regression estimates a particular relationship over many different sample periods, each regression producing a set of estimated coefficients. For the purpose of this analysis, a reverse recursive estimation is applied, since the aim is to trace the evolution of the coefficients as the sample shrinks by one observation per estimation. More specifically,

45 The standard deviation of the output gap rose from 3.696 to 6.273. But when measured by the percentage deviation of industrial production from its HP trend, the deviation fell from 2.819 to 1.882 percent.


Figure 5: Reverse Recursive Coefficient Estimates

[Figure: three panels tracing the reverse recursive estimates over subsample start dates 1980-2000: the sum of the lagged interest rate coefficients (ρ), the inflation gap coefficient (βπ) and the output gap coefficient (βy).]

Notes: This figure shows the results from a reverse recursive estimation which starts by estimating Eq. (16) for the whole sample period (i.e. 1979:10 - 2011:11), then moves the start of the estimation window forward month by month, until the last period, ranging from 2001:12 to 2011:11 (i.e. a window of 120 months), is left. The starting date of each subsample is shown on the x-axis.

the regression starts by estimating the whole sample period (i.e. 1979:10 - 2011:11), then moves the start of the window forward month by month, until the last period, ranging from 2001:12 to 2011:11 (i.e. a window of 120 months), is left. Figure 5 outlines the results from this reverse recursive estimation of Eq. (16). The visual assessment of the stability of the coefficient ρ is striking. As the sample departs from the early part of the Volcker disinflation, its point estimate jumps significantly. The coefficient then climbs to uncomfortable levels once the sample is shortened enough to exclude the complete Volcker era. The impact of a close-to-unity ρ on the implied policy parameters for inflation and the output gap in the early 1990s is also noteworthy. As Dupor & Conley [2004] recognize, it seems that most of the estimates' precision comes from the initial 2-4 years of the sample, since that period brings sufficient variation in inflation to keep ρ small and thus allows the inflation coefficient to be estimated with greater precision.
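The reverse recursive scheme can be sketched as follows. This is a toy illustration: it uses OLS on simulated data as a stand-in for the GMM estimation of Eq. (16) actually performed in the text, and the variable names are hypothetical.

```python
import numpy as np

def reverse_recursive_ols(y, X, min_window=120):
    """Re-estimate y = X b + e on samples whose start date moves forward one
    month at a time, keeping the same end date, until only `min_window`
    observations remain. Returns one coefficient vector per start date."""
    n = len(y)
    estimates = []
    for start in range(0, n - min_window + 1):
        b, *_ = np.linalg.lstsq(X[start:], y[start:], rcond=None)
        estimates.append(b)
    return np.array(estimates)

# Toy data: a rate driven by a constant and one regressor (illustrative only).
rng = np.random.default_rng(2)
n = 386                                   # 1979:10 - 2011:11 is 386 months
x = rng.normal(size=n)
y = 1.0 + 0.9 * x + rng.normal(0, 0.1, n)
X = np.column_stack([np.ones(n), x])
paths = reverse_recursive_ols(y, X)       # 267 coefficient vectors
```

Plotting each column of `paths` against the subsample start dates reproduces the kind of coefficient trajectories shown in Figure 5.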

In conclusion, dealing with monthly data renders the stability analysis of the policy parameters problematic. The smoothing coefficient, inherently large at monthly frequency, grows worse as soon as the sample moves away from the Volcker disinflation period: this is due not only to a greater tendency to smooth the interest rate, but perhaps also to the loss of precious variation in inflation. Hence, while it may seem surprising 46, the very brief period of the Volcker era needs to be included to

46 Over the period 1979:10 - 1982:12, non-borrowed reserves were the operating instrument of


compensate for the increased gradualism of the following years and the Great Moderation period. If this period were not included, the estimation would suffer a serious loss of precision. This might explain why subsample analyses, for instance discriminating between the Volcker and Greenspan eras, or any search for break points, are generally discarded in a monthly setting.

To sum up, the sections on the choice of the baseline model and the estimation of the standard forward-looking interest rate rule have provided some interesting lessons. Over the full sample period, our estimation results suggest that the Federal Reserve has been aggressive towards inflation, fulfilling the Taylor principle, and also very attentive to developments in the real economy, in line with its dual mandate of stabilizing inflation as well as output. In addition, it seems that the Federal Reserve puts significant effort into smoothing interest rates, which is commonly interpreted as a sign of a strong preference for gradualism. The historical performance analysis reveals, however, a poor fit of the baseline estimated rule to the actual behaviour of the Central Bank, although this observation is greatly sensitive to marginal modifications in the proxy variables used to measure inflation and the output gap. Nevertheless, a few caveats are in order. A problem of weak identification might exist when considering instruments for the inflation variable at long forecasting horizons. Also, the choice of proxies, although it does not affect our general conclusions, can greatly influence the rule recommendations. Finally, a monthly frequency setting yields considerable inertia in the interest rate series and poses significant problems when studying the stability of the estimated parameters. Therefore, even in this standard rule, many aspects of rule specification are subject to considerable uncertainty. As a result, the augmented rule should be carefully addressed in the next section and should be considered as a first rough attempt to describe the Federal Reserve's behaviour and the potential link between asset prices and monetary policy.

monetary policy, as opposed to the Federal funds rate.


3.3 An augmented forward-looking Taylor rule

Ever since Chairman Greenspan's speech in the fall of 1996, addressing the Federal Reserve's concern about the "irrational exuberance" in the stock market, a spirited debate has raged over the appropriate role for asset prices in monetary policy deliberations. The Federal Reserve's position in this debate appears relatively clear. The Central Bank does not officially target asset prices, either because smoothing asset prices does not directly constitute one of its fully-fledged objectives, or because asset prices are only important insofar as they provide useful information about its ultimate objectives, the stabilization of inflation and output [Chadha, Sarno & Valente, 2004]. Instead of attempting to review the crucial and yet unresolved Hamletian dilemma – "to respond" or "not to respond" to asset prices – this section will address one aspect of that debate: what has been the response of the Federal Reserve to movements in asset prices? To this end, we will use the augmented forward-looking reaction function, Eq. (13) presented in Section 2.3, which allows asset prices to act both as monetary policy targets and as information variables.

We begin this section by investigating empirically whether and how the Federal Reserve has responded to stock and house price developments. In other words, we test whether the coefficients of the stock market (βs) and real estate (βh) variables are statistically different from zero and economically plausible. We then report a battery of robustness checks to assess to what extent our initial findings are driven by arbitrary choices in the model specification. Next, we try to refine our preliminary results. In particular, we examine the contribution of stock prices to the standard Taylor rule, then consider how the parameters of the rule evolve over time and, finally, explore whether or not the estimated reaction to stock prices has been asymmetric.

As to the investigation of the role of asset prices, a natural question concerns the appropriate measure of a possible asset price misalignment on which the Federal Reserve may focus.
Regarding the stock market variable, we decided to follow Bernanke & Gertler [1999] and augment the standard Taylor rule with the one-month lag of stock market returns, that is, the logarithmic difference in the S&P 500 index, considered as a representative stock market index for the U.S. The reason for applying this straightforward approach is twofold. First, it could be argued that the potential significance of the stock market variable is the result of spurious correlations. Unit root tests, performed in Appendix B, indicate that the S&P 500 index is non-stationary, whereas the changes in that index have no unit root and thus are stationary. Accordingly, this possible problem of non-stationarity is taken into consideration, and the stochastic trend is naturally eliminated when considering stock market returns, as noted by Fuhrer & Tootell [2008]. Second, it could also be argued that such a measure does not directly indicate a probable stock price misalignment, in the sense that there is no distinction between fundamental and non-fundamental stock price shocks. Yet we motivate this choice by noting that, in practice, it is unlikely that the Federal Reserve knows the fundamental value of the S&P 500 index and that it is able to discern a genuine


misalignment from a change in stock prices driven by fundamental factors 47 [Bernanke & Gertler, 1999]. Still, the baseline results will be tested for different measures of the stock market variable in Section 3.3.2.
Concerning the real estate variable, we favour the log-difference in the S&P Case-Shiller Home Price Index, for the same reasons explained above with regard to the stock market variable. Note that the series starts in 1988 due to data availability, so the estimation of the rule augmented with house price inflation will run from 1988:01 to 2011:11.

The two asset measures are presented in the form of a yearly growth rate, that is, the year-on-year log-difference of the two considered indices. As argued by Dupor & Conley [2004] and Levieuge [2002], it seems more likely that the Federal Reserve would react to lower-frequency changes in these financial variables than to their very short-term evolutions. A systematic reaction of the Central Bank to high-frequency changes would have the adverse effect of making the Federal funds rate as volatile as the asset prices themselves.
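Concretely, the year-on-year log-difference can be computed as in the following sketch (Python with pandas; the monthly index used here is hypothetical, not the actual S&P500 or Case-Shiller series):

```python
import numpy as np
import pandas as pd

def yoy_log_growth(index: pd.Series) -> pd.Series:
    """Year-on-year growth rate of a monthly price index,
    measured as the 12-month difference in natural logs."""
    return np.log(index).diff(12)

# Hypothetical monthly index growing at a constant log rate of 0.5 per year
index = pd.Series(100.0 * np.exp(np.linspace(0.0, 1.0, 25)))
returns = yoy_log_growth(index)  # first 12 observations are NaN by construction
```

The same transformation, applied to the S&P500 and to the Case-Shiller index, yields the two asset variables entering the augmented rule.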

3.3.1 Initial empirical results on the augmented reaction functions

It is now time to begin the empirical investigation of the augmented monetary policy rule and to assess the extent to which the Federal Reserve takes into account developments in these two asset variables while setting its instrument rate. To do so, we define the estimable reduced form of the augmented forward-looking policy rule that will serve as the baseline case throughout this investigation. Specifically,

r_t = φ_0 + φ_π π_{t+12} + φ_y y_t + φ_s s_{t−1} + φ_h h_t + φ_1 r_{t−1} + φ_2 r_{t−2} + φ_3 r_{t−3} + ε_t,    (18)

where the Central Bank is assumed to respond to year-ahead movements in inflation, the contemporaneous output gap, lagged interest rate terms, and two additional explanators: the once-lagged stock price returns and contemporaneous house price inflation. Estimation of Eq. (18) is done by GMM, again implemented with a Bartlett kernel of bandwidth q = 12 to account for the moving average error induced by overlapping forecasts. Importantly, we expand the instrument set by including lags of the two additional asset prices up to six periods, when necessary. Thus, in the following estimates, the part of the Central Bank's response to stock market returns and housing price growth that may arise from their predictive power for output and expected inflation is fully accounted for 48.
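For intuition, the Bartlett (Newey-West) weighting underlying the HAC step can be sketched as follows; this is an illustrative scalar version of the long-run variance estimate, not the exact multivariate weighting matrix used in the GMM estimation:

```python
import numpy as np

def bartlett_weights(q: int) -> np.ndarray:
    """Bartlett kernel weights w_j = 1 - j/(q+1), for lags j = 0, ..., q."""
    j = np.arange(q + 1)
    return 1.0 - j / (q + 1.0)

def long_run_variance(u: np.ndarray, q: int = 12) -> float:
    """Newey-West long-run variance of a scalar moment series u_t,
    with Bartlett kernel and bandwidth q (q = 12 in the text, matching
    the 12-month overlap of the inflation forecasts)."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = u.size
    w = bartlett_weights(q)
    s = np.dot(u, u) / T  # lag-0 autocovariance, weight w_0 = 1
    for j in range(1, q + 1):
        gamma_j = np.dot(u[j:], u[:-j]) / T
        s += 2.0 * w[j] * gamma_j
    return float(s)
```

Down-weighting the autocovariances linearly up to lag 12 keeps the estimate positive semi-definite while absorbing the MA(12) error structure induced by the overlapping forecasts.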

The GMM results for the augmented forward-looking Taylor rules are shown in Table 4. To allow for ease of comparison, we include in the first row the results we previously found for the standard forward-looking Taylor rule reported in Table 2, for the full sample period. Similarly, the third row contains the long-run coefficients of the standard rule for the reduced sample period from 1988:01 to 2011:11. Generally speaking, the J-test for overidentifying restrictions is easily passed for all the specifications; the selected sets of instruments thus satisfy the exogeneity property with respect to monetary policy decisions. In addition, the Q-test for serial correlation indicates no pattern of correlation in the error term for any specification.

47 If the Central Bank is actually able to detect non-fundamental developments, then the stock price growth series might include some measurement error, biasing upwards the estimates of the response of interest rates to the stock market variable [Dupor & Conley, 2004].

48 In fact, in the baseline specification (18), the information that asset prices convey is only accounted for with respect to future inflation. But we will also test further whether the horizon q for the output gap has any influence on our preliminary results.
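The long-run coefficients reported in Table 4 are implied by the short-run GMM estimates of Eq. (18); the mapping and a Delta-method standard error can be sketched as follows (the numerical values below are hypothetical, for illustration only):

```python
import numpy as np

def long_run_coef(phi_x: float, phi1: float, phi2: float, phi3: float) -> float:
    """Implied long-run response beta_x = phi_x / (1 - rho), where
    rho = phi1 + phi2 + phi3 is the interest-smoothing weight in Eq. (18)."""
    return phi_x / (1.0 - (phi1 + phi2 + phi3))

def delta_method_se(phi_x, phi1, phi2, phi3, cov) -> float:
    """Delta-method SE of beta_x; cov is the 4x4 covariance of
    (phi_x, phi1, phi2, phi3)."""
    d = 1.0 - (phi1 + phi2 + phi3)
    g = np.array([1.0 / d] + [phi_x / d**2] * 3)  # gradient of beta_x in the phis
    return float(np.sqrt(g @ np.asarray(cov) @ g))

# Hypothetical short-run estimates: phi_pi = 0.10, smoothing weights summing to 0.95
beta_pi = long_run_coef(0.10, 0.5, 0.3, 0.15)  # = 0.10 / 0.05 = 2.0
cov = np.zeros((4, 4)); cov[0, 0] = 0.01       # only var(phi_pi) non-zero, for clarity
se = delta_method_se(0.10, 0.5, 0.3, 0.15, cov)
```

Note how a smoothing weight close to one inflates both the implied long-run coefficient and its standard error, a point that becomes important for the house price regressions below.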

In the results reported in the second row of Table 4, we explore the possibility that the Federal Reserve responded to stock market returns by including the once-lagged stock price variable. The estimated reaction of the Federal funds rate to stock returns is 0.067 for the full sample period, with a standard error of 0.015. The coefficient is highly significant and our main hypothesis of interest, that βs equals zero, can be rejected at the 1 percent significance level. The magnitude and the positive sign imply that, for a yearly 10 percent rise in stock prices, the Federal Reserve raises the target interest rate in the next month by about 67 basis points, everything else being equal. Hence, the regression reveals that the instrument rate was raised when the growth rate of the stock index was booming, while monetary policy was eased when stock market returns were falling sharply. In sum, it is an indication that the monetary authorities were stabilizing the stock market over the whole estimation period, an economically meaningful result. This response might appear feeble at first sight but, considering the values that the yearly change in stock prices can reach – on average 15.670 percentage points, ranging mostly from 0 to 55, over the period 1979:10 to 2011:11 – its contribution to the interest rate could represent 3.685 percentage points in extreme cases 49. Contrary to what was found in Bernanke & Gertler [1999], our result suggests that the Federal Reserve considered stock price developments in setting its interest rate target – in addition to their implications for inflation forecasts – so that the stock market would constitute a fully-fledged objective. However, it should be noted that a statistically significant coefficient for stock returns may merely be broad evidence that monetary policy has other objectives, and that there is information about these objectives inside the stock price variable [Bernanke & Gertler, 1999].
In this case, the Federal Reserve would not directly react to stock market returns strictly speaking, but rather to some information contained in the stock price series about potentially relevant variables absent from the standard reaction function 50. Compared to the standard rule estimates, the inclusion of the stock market variable very slightly reduces the implied parameters for expected inflation and the output gap. They remain precisely estimated, with the theoretically expected sign, and are still highly significant. It is worth noting that augmenting the policy rule with this additional regressor leaves the interest rate smoothing parameter unchanged: stock returns are not the omitted serially correlated variable that biased upwards the parameter ρ in the standard specification.

49 In other words, stock returns may account for as much as a 3.7 percentage point deviation of the interest rate from the path outlined only by inflation and the output gap.

50 Accordingly, there would be a risk of misspecification, namely some relevant information omitted from the standard rule that is also contained in the stock price series. We believe this hypothesis can be fairly examined by introducing into the rule, as in Levieuge [2002], the NAPM business cycle index. For details on the results and on the NAPM index, please consult Table D.1 in Appendix D. Overall, it seems that stock prices do have a direct influence on the Federal Reserve's interest rate setting.


Table 4
GMM Baseline Estimates: Augmented Forward-Looking Taylor Rule

                      α          βπ          βy         ρ          βs         βh        π*      RMSE
Standard          −3.193***   2.679***    .120***    .956***     n.a.       n.a.     3.17     .544
                   (.792)      (.222)     (.043)     (.006)
Stock 1           −3.250***   2.494***    .109**     .959***    .067***     n.a.     3.599    .540
                   (.762)      (.209)     (.048)     (.006)     (.015)

Sample period: 1979:10 – 2011:11

Standard          −4.548***   3.036***    .128***    .983***     n.a.       n.a.     2.928    .151
                  (1.087)      (.434)     (.040)     (.002)
House 2            −0.408     1.132**    −.147       .992***     n.a.      .381**   16.36     .148
                  (1.254)      (.493)     (.131)     (.002)                (.169)
Stock + House 3    1.505       .334**    −.201       .993***    .083***    .441**   −.934     .148
                  (1.368)      (.569)     (.143)     (.002)     (.027)     (.189)

Sample period: 1988:01 – 2011:11

Notes: The "Standard" specifications report the estimated parameters previously found for the standard forward-looking Taylor rule for the full sample period (reported in Table 2) and for the reduced sample period from 1988:01 to 2011:11. See the notes to Table 2 for further explanations of the instruments and of the GMM estimation procedure for these standard Taylor rules. In the other specifications, the estimated parameters refer to the augmented baseline equation (18), with the Federal funds rate as the dependent variable. These specifications were estimated using the CPI inflation rate (π_t^CPI) and the quadratic-trend-based output gap (y_t^Q). The table displays the implied long-run coefficient estimates obtained by GMM with an HAC estimator of the variance-covariance matrix and a Bartlett kernel with autocorrelation of order 12. βs and βh denote the stock market and real estate variable coefficients, respectively. The extended GMM instrument set includes a constant and lags 1-6, 9 and 12 of the log-differenced inflation rate, the output gap, the interest rate, the log-difference of a world commodity price index, the growth rate of the monetary aggregate M2, and the term structure spread, as well as lags 1-6 of the log-differences in stock prices and in real house prices, when necessary. The J-test for overidentifying restrictions [Hansen, 1982], distributed as χ2 under the null, is easily passed (p > 0.99) for all specifications. The Q-test for serial correlation indicates no pattern of correlation in the error term for any specification. βπ coefficients in bold are not statistically different from 1 at any conventional significance level. HAC-corrected standard errors, computed with the Delta method, are reported in parentheses. n.a. stands for not applicable. *** p < 0.01, ** p < 0.05, * p < 0.1.
1 Estimates are obtained from the baseline Taylor rule augmented with stock market returns, that is, the yearly percentage change of the S&P500 index, evaluated as the natural log-difference of the S&P500 index over the prior twelve months.
2 Estimates are obtained from the baseline Taylor rule augmented with the yearly percentage change of the S&P Case-Shiller Home Price Index in real terms (adjusted for inflation using CPI inflation), evaluated as the natural log-difference of the home price index over the prior twelve months. The sample period is 1988:01 – 2011:11.
3 Estimates are obtained from the baseline Taylor rule augmented with both stock market returns and the yearly growth in house prices. The sample period is 1988:01 – 2011:11.


Moreover, when stock returns are added to the rule, the overall fit of the model improves, as can be seen from the reduction in the root mean squared error – based on the difference between the actual and the implied target rate – from 0.544 to 0.540. This possible gain in realism will be further discussed in Section 3.3.3, where we analyse the historical performance of the augmented rules.

Finally, the consistency of these first estimates can be tested, as was done previously, by computing the Federal Reserve's long-term inflation target on the basis of the economy's long-term average real interest rate. The average over the period 1979:11 – 2011:11 gives a long-term real rate rr of 2.127, such that π* takes a value of 3.599. Although the value derived is slightly greater than that implied by the standard rule, it confirms that the regression is plausible.
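As a quick check, the implied target follows from the usual decomposition of the intercept, α = rr − (βπ − 1)π*, used earlier in the thesis; in Python:

```python
# Values from the "Stock" row of Table 4 and the sample-average real rate
rr = 2.127        # long-term average real interest rate, 1979:11-2011:11
alpha = -3.250    # implied long-run intercept
beta_pi = 2.494   # implied long-run inflation response

# alpha = rr - (beta_pi - 1) * pi_star  =>  solve for pi_star
pi_star = (rr - alpha) / (beta_pi - 1.0)  # ~ 3.599, as reported in the text
```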

Note that we augment the policy rule with stock returns entering once-lagged. This choice was influenced by a number of arguments presented in Section 2.3. Nevertheless, we checked whether having stock market returns enter the augmented policy rule contemporaneously or in a forward-looking form made a difference. The empirical results, reported in Table D.2 in Appendix D, are qualitatively and quantitatively similar to our baseline estimates recorded in the second line of Table 4. This confirms that the coefficient on stock market returns in the monetary policy rule has a positive sign, lies within the range 0.067 to 0.150 and is highly significant at conventional statistical levels. Overall, our estimates suggest that the Federal Reserve may have been attempting to stabilize the stock market while, at the same time, pursuing the traditional objectives of stabilizing output and expected inflation.

The fourth row of Table 4 explores the possibility that the Federal Reserve responded to the yearly growth in real house prices by including the contemporaneous real estate variable. Our main hypothesis of interest, that βh equals zero, can be rejected at the 5 percent significance level. At first sight, this finding would suggest not only that the U.S. Central Bank closely observes developments in the real estate market as a forward-looking indicator for its ultimate objectives, but also that policymakers consider house price changes in setting their interest rates, as a fully-fledged argument in the interest rate rule. The estimated response to real house price movements is 0.381 in the reduced sample, with a standard error of 0.169. The positive sign follows economic reasoning: it says that a yearly 10 percent increase in housing price growth is associated with a 381-basis-point rise in the instrument rate. This is a clear-cut indication that monetary policy was stabilizing the real estate market from 1988 to 2011. Note that these results are qualitatively and quantitatively similar for longer forecasting horizons w of the housing price variable.

However, we must not leave this first finding without some critical observations. First, a 381-basis-point interest rate response to a year-over-year 10 percent house price shock is clearly too large a number to be taken seriously: the yearly growth in house prices, compounded over February 1998 to May 2006 – when yearly house price growth remained above 5 percentage points – reached 101 percent. This would have implied an unrealistic cumulated rise of 38.5 percentage points in the policy rate.
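The back-of-envelope calculation behind that figure, using the numbers from the text:

```python
beta_h = 0.381       # long-run response to yearly house price growth (Table 4)
cum_growth = 101.0   # cumulated house price growth over 1998:02-2006:05, in percent

# A response of beta_h per point of yearly growth, accumulated over the boom:
implied_rise = beta_h * cum_growth  # in percentage points of the policy rate
```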


Second, with the inclusion of the real estate variable, the realism of the other point estimates dies out. Although the GMM estimates for the augmented rule provide a good fit, with a very small root mean squared error, this is mainly due to the weight of the lagged policy rates. Indeed, ρ is dangerously close to one (0.992) and is highly significant. As the process of the Federal funds rate approaches a near-unit root, the augmented version of the policy rule produces very imprecise estimates, far less accurate than in the standard case. The size of the expected inflation coefficient decreases substantially, from 3.036 to 1.132, and the standard error remains high at 0.493. We believe this loss of significance for the impact of expected inflation is due to the extremely high weight of the smoothing coefficient, since expected inflation and the lagged interest rate are highly correlated (0.422) and strongly collinear. The combination of a higher standard error with a far smaller coefficient yields a response to inflation that is statistically no greater than one-to-one, i.e. an accommodative monetary policy. This is confirmed by a Wald test, in which the hypothesis that βπ equals one cannot be rejected at any conventional significance level. This contradiction with the Taylor principle, previously satisfied, is surprising. It should be recalled that the principle posits that an active monetary policy is characterized by an interest rate response to inflation greater than one-to-one, which translates into price determinacy or stable inflation. Here we find the opposite, combined with extremely low inflation and output volatility over the analysed sample period, which seems counter-intuitive 51.

Finally, the introduction of property prices renders the response to the output gap negative and insignificant, implying that monetary policy was destabilizing with respect to the real economy.
In fact, this loss of significance can be explained by the high correlation (0.595) between the output gap and the real estate series. House prices are indeed likely to contain some information about the output gap objective. It is conceivable, therefore, that what appears to be a direct response to house prices is rather a reaction to the expected output gap. We examine this possibility by including the forward-looking output gap instead of its contemporaneous specification 52. Results – not reported here – indicate that the significance of the house price coefficient dies out as the forecasting horizon of the output gap becomes longer – the p-value rises to 0.148 with the six-month-ahead output gap measure. Thus, house prices may contain precious information on future output gaps and, as such, developments in real estate markets are presumably scrutinized by the Federal Reserve to refine its projections of future economic activity.

In conclusion, we do not necessarily interpret these results literally as saying that the Federal Reserve was actively attempting to stabilize real house prices during the past 20 years. Rather than playing a separate role in the setting of the policy rate, it appears that house prices act more like a leading indicator from which the U.S. Central Bank can detect future output developments. Moreover, adding the real estate variable seriously affects the estimates of the inflation and output gap parameters and makes the regression less stable. In light of all these unpleasant signs, we should admit that our monthly-frequency analysis does not offer a suitable framework to fully capture the role assigned by monetary policy to boom/bust cycles in house prices. It would be interesting, therefore, to apply this analysis to quarterly data and, in particular, to use the Greenbook forecasts, which have the advantage of compiling the FOMC's real-time expectations of housing prices from 1980 to 2007. Unfortunately, this route was not pursued here. Instead, we focus on the role assigned to stock price fluctuations and try to provide, in the next subsections, robustness checks and further refinements of our baseline results.

51 We are aware that there are empirical and theoretical arguments consistent with this counter-intuitive result. For instance, Cochrane [2011] provides theoretical conditions under which a passive monetary policy is associated with stable inflation and an active monetary policy with unstable inflation. Nevertheless, we cannot give support to this last finding given all the signs of misspecification.

52 To disentangle the two contributions, we could also have imposed a restriction on the output gap coefficient, re-estimated the equation and seen whether βh remains significant. Unfortunately, to our knowledge, Stata does not allow the imposition of coefficient restrictions alongside a GMM regression.

3.3.2 Some robustness tests for the Stock price augmented rule

To investigate the robustness of the results reported in the second line of Table 4, we re-estimate Eq. (18) using (a) alternative target horizons for the output gap variable, (b) different measures of stock price disequilibrium, and (c) alternative measures of inflation and the output gap. As indicated below, this three-dimensional robustness analysis largely confirms the findings of our baseline estimates.

Table 5
Stock price Augmented Forward-Looking Taylor Rule
Robustness: alternative horizons for the output gap

              α          βπ         βy        ρ          βs        π*      RMSE
k=12, q=3  −3.359***   2.552***   .147***   .959***   .059***   3.536    .540
            (.787)      (.214)    (.051)    (.006)    (.015)
k=12, q=6  −3.471***   2.595***   .138**    .959***   .056**    3.510    .541
            (.795)      (.219)    (.055)    (.006)    (.014)
k=12, q=9  −3.627***   2.649***   .161***   .960***   .052***   3.490    .542
            (.820)      (.227)    (.060)    (.006)    (.015)

Sample period: 1979:10 – 2011:11

Notes: The estimated parameters refer to the augmented baseline equation (18), with the Federal funds rate as the dependent variable. These specifications were estimated using the CPI inflation rate (π_t^CPI), the quadratic-trend-based output gap (y_t^Q) and the stock market returns. The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99) for all specifications. The Q-test for serial correlation indicates no pattern of correlation in the error term for any specification. HAC-corrected standard errors, computed with the Delta method, are reported in parentheses. See the notes to Table 4 for further explanations of the instruments and the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.


In the baseline estimation results introduced in the previous subsection, the Central Bank was assumed to have a target horizon of up to 12 months for inflation (i.e. k = 12) and to respond to the contemporaneous output gap (i.e. q = 0). Here, we want to examine whether stock market returns remain a statistically significant determinant of interest rate policy when the output gap enters the rule as a forward-looking variable – that is, with a strictly positive forecasting horizon q > 0. In doing so, we can test whether the stock market variable ceases to be significant after accounting for the future output gap. In that case, it would only constitute a reliable forecasting indicator, actively monitored by the Federal Reserve to predict both future output and inflation – in other words, merely an instrument for forecasting y_{t+q} and π_{t+q}. For this purpose, we consider in turn target horizons q of 3, 6 and 9 months for the output gap. The regression results from estimating Eq. (18) at these different horizons are reported in Table 5. They show that the parameter estimates are virtually unaffected across the various specifications: the new implied policy parameters are always located within the confidence intervals implied by the standard errors of the coefficients estimated in the baseline regression. Thus, the stock market variable survives the introduction of the output variable in forward-looking instead of contemporaneous form. These robustness results indicate that the Federal Reserve might consider stock market movements in setting interest rates, independently of the information they contain not only about future inflation, but also about the future output gap 53.

Next, we examine the robustness of the significance of the parameter βs to alternative measures of stock price misalignment. In the baseline case, as in Bernanke & Gertler [1999], we included the stock market variable in a straightforward way, namely the yearly percentage change of the S&P500 index – the stock market returns. The literature, however, is fairly diverse, with a multitude of proxies used to represent stock price disequilibrium, such as valuation ratios or stock price gaps. Also, it may be argued that using stock returns does not directly indicate a possible stock price disequilibrium, as it takes no account of the distinction between fundamental values and non-fundamental shocks. Accordingly, we re-estimate Eq. (18) using four different measures of the stock variable, as compiled in Table 6 54.

53 Also, we tested the robustness of our main results against shorter leads for future inflation. Broadly speaking, the stock market variable remains statistically significant and the implied coefficient estimates are quantitatively similar to the baseline case reported in the second row of Table 4 – ranging from 0.068 to 0.077. But, as already mentioned, shorter forecasting horizons for inflation may bring spurious significance to our asset price variable: if in practice the central bank targets inflation up to 12 months ahead, then estimating a rule targeting inflation at, say, 6 months may increase the significance of our asset variable simply because it contains information about 12-month-ahead inflation. Regarding the other policy parameters, we invite the reader to return to Section 3.1.3 for a description of similar observed regularities.

54 Note that the stock market variable enters all the regressions in once-lagged form. But, as we already observed in the previous subsection, the assumed form has little incidence on the significance of the parameter of interest, or on its magnitude.

55 We construct the gap from growth rates. It should be noted, however, that the standard two-sided HP filter uses observations at t + i, i > 0 to construct the trend at time t. Thus the filter uses ex-post information to extract the trend from the series and to determine whether a boom exists. To overcome this shortcoming, Detken & Smets [2004] and Lowe & Borio [2002] work with a rolling/recursive HP filter (also called a one-sided HP filter), intended to use only the information available to the Central Bank at the time it assesses a potential boom/bust in stock prices. Unfortunately, a routine for running such a filter is not readily available in Stata or Excel.

Table 6
Stock price Augmented Forward-Looking Taylor Rule
Robustness: alternative measures for the stock market variable

                    α          βπ         βy        ρ          βs         π*      RMSE
Stock returns 1  −3.250***   2.494***   .109**    .958***    .067***    3.599    .540
                  (.762)      (.209)    (.048)    (.006)     (.015)
Stock gap 2      −3.113***   2.619***   .117**    .961***    .042**     3.236    .541
                  (.802)      (.225)    (.046)    (.006)     (.018)
P/E gap 3        −3.108***   2.637***   .129*     .954***   −.015       3.199    .544
                 (1.429)      (.356)    (.068)    (.006)     (.072)
P/E change 4     −3.113***   2.550***   .132***   .960***    .069***    3.381    .539
                  (.777)      (.215)    (.049)    (.006)     (.015)
D/P change 5     −3.337***   2.636***   .121***   .959***   −.054***    3.340    .541
                  (.761)      (.208)    (.043)    (.006)     (.014)

Sample period: 1979:10 – 2011:11

Notes: The estimated parameters refer to the augmented baseline equation (18), with the Federal funds rate as the dependent variable. These specifications were estimated using the CPI inflation rate (π_t^CPI), the quadratic-trend-based output gap (y_t^Q) and various measures of the stock market variable. The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99) for all specifications. The Q-test for serial correlation indicates no pattern of correlation in the error term for any specification. HAC-corrected standard errors, computed with the Delta method, are reported in parentheses. See the notes to Table 4 for further explanations of the instruments and the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.
1 Baseline measure: stock market returns, that is, the yearly percentage change of the S&P500 index.
2 A stock price gap measure, constructed by subtracting from the year-over-year log-differences of the S&P500 index their trend, approximated by the Hodrick-Prescott filter with smoothing parameter λ = 129600.
3 The price-earnings ratio series minus its historical mean (18.47).
4 The yearly growth rate of the price-earnings ratio.
5 The yearly growth rate of the dividend-price ratio. Note that negative growth in the D/P ratio can be considered evidence that the stock price index is overpriced, and vice versa.

First, as in Hoffmann [2012], we employ a stock price gap measure, constructed by subtracting from the year-over-year log-differences of the S&P500 index their trend, approximated by the Hodrick-Prescott filter with smoothing parameter λ = 129600 55. This measure is supposed to capture the deviations of stock market returns from their trend, the latter seen as their optimum/fundamental value – just as with the

estimate of potential output. Hence, a deviation of the price series from its trend is automatically seen as the sign of an overvalued market driven by non-fundamentals, to which the Central Bank should react. Although it is unlikely that the Federal Reserve has the ability to discern the fundamental value of the stock market, we regard the HP-filtered stock price gap as a rough indicator of the presence of a stock market boom. The estimated parameters, shown in the "Stock gap" row of Table 6, are not qualitatively different from the estimates recorded in the baseline case. In particular, the implied policy coefficient of the HP-filtered stock price gap is statistically significant and equal to 0.042. This implies that the Federal Reserve raises the target interest rate by 42 basis points when yearly stock returns are 10 percentage points above their trend, over the whole sample period. Hoffmann [2012], however, found a positive but insignificant coefficient for the period 1979:10 to 2008:12. Despite a common monthly-data analysis over the same period, we still obtain a highly significant point estimate 56. Estimation with this alternative measure therefore does not change our conclusions and confirms the potential role of stock prices in Federal Reserve monetary policy.
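The stock price gap can be reproduced with a few lines of linear algebra; the sketch below is a dense-matrix implementation of the two-sided HP filter (statsmodels' `hpfilter` would serve equally well), applied to a hypothetical returns series rather than the actual S&P500 data:

```python
import numpy as np

def hp_filter(y: np.ndarray, lam: float = 129600.0):
    """Two-sided Hodrick-Prescott filter. Returns (trend, cycle), where the
    trend minimizes ||y - tau||^2 + lam * ||second differences of tau||^2.
    lam = 129600 is the conventional smoothing parameter for monthly data."""
    y = np.asarray(y, dtype=float)
    T = y.size
    K = np.zeros((T - 2, T))  # second-difference operator
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * K.T @ K, y)
    return trend, y - trend

# Hypothetical year-on-year stock returns: slow swings plus noise
rng = np.random.default_rng(1)
returns = 0.05 + 0.1 * np.sin(np.linspace(0.0, 4.0, 240)) + rng.normal(0, 0.05, 240)
trend, stock_gap = hp_filter(returns)  # stock_gap is the misalignment proxy
```

A positive `stock_gap` flags returns above trend, read as a rough boom signal; the one-sided variant discussed in footnote 55 would instead re-estimate the trend recursively on data available up to each date.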

As a second alternative measure, following Hayford & Malliaris [2004], we employ the real S&P500 stock price index relative to after-tax corporate earnings as a measure of the stock market's valuation 57. In equity pricing theory, the price-earnings (P/E, henceforth) ratio is a well-accepted and commonly used metric for valuing both individual stocks and the market as a whole. According to this approach, the stock market is considered appropriately valued if the P/E ratio equals the inverse of some appropriately risk-adjusted return [Hayford & Malliaris, 2005]. In practice, the P/E ratio is typically compared to its historical mean; a P/E ratio in excess of its average is thus a signal of potential stock market overvaluation. Figure A.5 in Appendix A compares, for the period 1979:11 to 2011:11, the S&P500 P/E ratio with its historical average, the latter computed over the period 1948-2013 (18.47). It suggests that the market was continuously overvalued from 1992 until the 2008 financial crisis, despite a sharp correction after the burst of the Internet bubble in late 2000. Results shown in the "P/E gap" row of Table 6 contrast considerably with the conclusions drawn from the baseline regression: the stock price coefficient is negative, but statistically insignificant (−0.015), over the whole estimation period. As in Hayford & Malliaris [2004], we also ran the regression for the reduced period 1987:08 to 2001:12, and found a negative βs (−0.175), this time highly significant, a result in line with the authors' findings. It says that a rise of 2.5 in the P/E ratio – approximately 10 percent of the average P/E value from 1987 to 2001 – was associated with a decrease in the Federal funds rate of 44 basis points. As for the interpretation, the authors argue that the Federal Reserve not only avoided neutralizing the apparent stock price bubble but also, perhaps unintentionally, may have contributed to the bubble's growth from 1995 on.
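The quoted magnitude can be verified directly from the subsample estimate:

```python
beta_s = -0.175   # P/E-gap coefficient, subsample 1987:08-2001:12
pe_rise = 2.5     # rise in the P/E ratio, ~10 percent of its 1987-2001 average

delta_ffr_bp = beta_s * pe_rise * 100.0  # policy rate response, in basis points
```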
The contrast with our baseline findings in Table 4 probably comes from the over-representation of the estimated stock market bubble in their analysed sample, while our full estimation sample takes into account the subsequent decade, marked by profound declines in stock prices. The contrast could thus be a sign that the response of monetary policy to the stock market has changed over time; this crucial question of stability will be further discussed in Section 3.3.4. However, we note two related drawbacks of this approach. One lies in the use of a constant arithmetic average to construct the stock price gap. As shown in Figure A.5 in Appendix A, the P/E ratio has grown considerably since the 1970s, so the level at which the stock market was considered high is likely to have changed over time. The other drawback is a potential problem of non-stationarity of the P/E ratio series: all three unit-root tests unanimously support the presence of a unit root over our full sample period 58. The non-stationarity of the P/E ratio implies that it no longer exhibits mean-reverting properties. Thus, confronting the P/E ratio with its historical average may not be coherent, since non-stationary series bear no relation to any past average value [Weigand et al., 2006]. Also, it may be argued that the significance of the P/E ratio coefficient is caused by its apparent non-stationarity and is thus the result of spurious correlation. That is why, in the third alternative measure below, we make use of the fact that the P/E ratio series is an integrated process, i.e. it can be made stationary by differencing.

56 Unfortunately, the author is not specific about which smoothing parameter λ he used. We believe the difference might come from this arbitrary parameter, as it determines the size of stock price misalignments.

57 The normalized – or cyclically adjusted – price-earnings ratio was taken from Robert Shiller's database and is calculated as the inflation-adjusted S&P500 index at the start of each year divided by an average of the most recent 10 years of real S&P500 earnings.
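The unit-root logic can be illustrated with a stripped-down Dickey-Fuller slope on simulated data (no constant, no lag augmentation; the actual analysis relies on the three formal unit-root tests mentioned above):

```python
import numpy as np

def df_slope(y: np.ndarray) -> float:
    """Slope rho of the no-constant Dickey-Fuller regression
    dy_t = rho * y_{t-1} + e_t. A rho near 0 is symptomatic of a unit
    root; a rho clearly below 0 indicates mean reversion."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    return float(np.dot(ylag, dy) / np.dot(ylag, ylag))

# A random walk mimics the unit-root behaviour of the P/E level;
# its first difference behaves like the (stationary) P/E growth rate.
rng = np.random.default_rng(42)
level = np.cumsum(rng.normal(size=2000))
rho_level = df_slope(level)          # close to 0: unit root not rejected
rho_diff = df_slope(np.diff(level))  # clearly negative: differencing restores stationarity
```

This is why the next measure works with the differenced series: the level may wander without ever returning to its mean, while its growth rate is mean-reverting by construction.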

To remove any doubt concerning the dubious stationarity of the P/E ratio series, we augment the policy rule with the yearly P/E growth rate, as was done by Dupor & Conley [2004] and Lee & Son [2011] 59. It is computed as the year-over-year log-difference of the S&P 500 P/E ratio. Since the earnings part of the ratio can represent the fundamental factors, positive growth in the ratio – the growth in prices exceeding the growth in earnings – can be regarded as a positive stock price gap. The estimation results are reported in the third line of Table 6. The Federal Reserve's response to P/E growth is positive and statistically different from zero, confirming our baseline findings. In comparison with the two cited studies, Dupor & Conley find a negative and insignificant coefficient for the full period 1979:4 to 2002:4, but they do find a positive and significant response in their low-inflation subsample, from 1991:2 to 2002:4. Lee & Son also find a positive and significant stock price coefficient for the low-inflation period from 1991:2 to 2008:2, as well as in the full sample period running from 1979:4 to 2011:1, but they get a negative coefficient, statistically close to zero, in the high-inflation period from 1979:4 to 1991:1. Hence, it seems that the high-inflation subperiod exerts downward pressure on the significance of the stock price parameter, whereas the 2000s decade, marked by two profound corrections in the stock market,

58 The depiction of the P/E ratio as a non-stationary series implies that it can stay above trend for extended periods, and possibly forever, at least theoretically. This stands in sharp contradiction to the evidence provided by Campbell & Shiller [2001], who find that the P/E ratio, and valuation ratios in general, are mean-reverting and form a stationary combination of two cointegrated variables. Basically, it means that when the ratio drifts far from the average, as was the case with the unprecedented level reached during the dot-com bubble, stock prices – the numerator – would eventually fall in the future to bring the ratio back to more normal historical levels, a correction predicted by Shiller in his book "Irrational Exuberance" (2001). Harney & Tower [2002] argue that "The existence of a fundamental relationship between equity valuations and corporate profits, however, constrains the degree to which PEs can rationally fluctuate, so these ratios should revert to their means".

59 These authors actually use a two-year price-earnings growth rate and perform their analysis at quarterly frequency.


tended to increase its significance. The recursive analysis in Section 3.3.4 will provide more insight into these evolutions.
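The construction of the year-over-year P/E growth proxy described above can be sketched in a few lines. This is an illustrative Python snippet of our own; the toy P/E path (growing at a constant 5 log-percent per year) is invented for demonstration:

```python
import numpy as np

def yoy_log_growth(series, periods_per_year=12):
    """Year-over-year log growth of a monthly series:
    g_t = log(x_t) - log(x_{t-12}); the first 12 values are undefined (NaN)."""
    x = np.asarray(series, dtype=float)
    g = np.full_like(x, np.nan)
    g[periods_per_year:] = np.log(x[periods_per_year:]) - np.log(x[:-periods_per_year])
    return g

# A P/E path growing 5 log-percent per year: prices outpace earnings,
# so the proxy flags a constant positive "stock price gap" of 0.05.
pe = 20.0 * np.exp(0.05 * np.arange(36) / 12.0)
growth = yoy_log_growth(pe)
```

The same transformation, applied to the dividend-price ratio instead of the P/E ratio, yields the D/P growth measure used below.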

As a final alternative measure of stock price disequilibrium, we rely on the benchmark proxy used by Chadha et al. [2004], namely the year-over-year growth rate in the dividend-price ratio (D/P, henceforth) 60. The D/P ratio, or dividend yield, can be used as an alternative indicator of the overall value of the market: negative growth in the D/P ratio can be considered evidence that the stock price index is overpriced, and vice versa [Pamela, 2011]. The estimation results in the final row of Table 6 indicate that the coefficient on the growth in dividend yields is negative and highly significant. This is in line with the findings of Chadha et al. for the period from 1979:4 to 2000:4.

Concluding this robustness check across alternative measures of stock price disequilibrium: except for the P/E ratio expressed in level, all specifications yield estimates qualitatively and quantitatively similar to our baseline results reported in the second line of Table 4. The differences in the coefficient βs across most specifications are small, and in almost every alternative regression the parameter is statistically significant at conventional levels. These results therefore reasonably suggest that the Federal Reserve adjusts interest rates according to stock market developments, in a stabilizing manner and independently of its concern for future inflation and output. Despite our efforts to confront several proxies for possible stock price misalignments, these measures suffer from a serious drawback: they present an overly restrictive view of the fundamental value of the stock market. Even though the valuation ratios decompose fundamental factors and non-fundamental shocks, the equilibrium value of the ratio is assumed to be constant. Only if the ratio remains constant – i.e. the growth in earnings or dividends and the growth in prices are in the same proportion – is there no stock price misalignment. As a result, any variation, positive or negative, is systematically interpreted as the result of non-fundamental shocks, even though it could simply represent a correct revision of the ratio towards its equilibrium level [Pepin, 2010]. The estimated response of the Central Bank might thus be biased upwards. As with the measurement uncertainties surrounding potential output, estimating stock price misalignments is an ambitious task and carries some degree of arbitrariness. Beyond the simple measures used in this analysis, more refined methodologies exist to compute stock price misalignments, such as studies based on discounted cash-flow models. Unfortunately, to our knowledge, these cannot be applied within a monthly frequency study 61.

60 The normalized price-dividend ratio also comes from Robert Shiller's database and is computed using an average of the most recent 10 years of S&P 500 dividends. Note that Chadha et al. do not specify whether the dividend-price ratio is used in growth rates or in levels. Nevertheless, we decided to use growth rates, given the questionable stationarity of valuation ratios.

61 For instance, Mattesini & Becchetti [2008] construct an index of stock price misalignment computed as the ratio between the observed S&P 500 index and the aggregate discounted cash-flow fundamental of the index. Pepin [2010] develops a methodology based on a dividend discount model, in which the fundamental value is determined by the dividend remuneration, the forecasted dividend growth rate and the discount factor. Unfortunately, we were not able to introduce these measures into our framework due to the unavailability of certain key variables at the monthly frequency, such as the I/B/E/S consensus forecasts of earnings/dividends.


Table 7
Stock price Augmented Forward-Looking Taylor Rule
Alternative preferred specification

              α           βπ          βy          ρ           βs          π*       RMSE
Standard 1   −1.974**    1.981***    .378***     .929***     n.a.       3.141    .532
             (.396)      (.093)      (.081)      (.007)
Stock 2      −0.997***   1.874***    .364***     .936***     .034***    3.550    .530
             (.416)      (.095)      (.080)      (.006)      (.011)

Sample period: 1979:10 - 2011:11

Notes: The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99) for both specifications. The Q-test for serial correlation indicates no pattern of correlations in the error term for all specifications. HAC-corrected standard errors are computed with the Delta method and reported in parentheses. n.a. stands for not applicable. *** p < 0.01, ** p < 0.05, * p < 0.1.
1 The estimated parameters refer to the standard baseline equation (16). This alternative specification was estimated using the CPI core inflation rate (π^CPIcore) and the HP output gap (y^HP). See notes in Table 2 for further explanations on the instruments and on the GMM estimation procedure.
2 The estimated parameters refer to the augmented baseline equation (18). This alternative specification was estimated using the CPI core inflation rate (π^CPIcore), the HP output gap (y^HP) and the stock market returns. See notes in Table 4 for further explanations on the instruments and on the GMM estimation procedure.

Lastly, we extend the robustness analysis a step further, exploring whether alternative measures of inflation and the output gap have any impact on the magnitude and significance of our baseline findings reported in Table 4. Because this robustness analysis has already been carried out for the standard forward-looking policy rule in Section 3.2.3, we focus here on the coefficient of interest, βs. For the sake of brevity, the 16 regressions are not reported. Nevertheless, after an in-depth inspection, they unanimously confirm the statistical relevance of the stock price variable in the Federal Reserve's reaction function over the full estimation period analysed. As for the magnitude of βs, it ranges between 0.034 and 0.097 across all specifications – a range approximately identical to the confidence interval implied by the standard errors of βs estimated in the baseline specification. We report in Table 7 below the estimation results for the specification that achieves the smallest root mean square error (0.530). It uses the annual CPI core growth rate and the deviation of the industrial production growth rate from its HP-filtered trend as measures of the inflation rate and the output gap, respectively. This alternative specification will hereafter be used as the preferred one, since it tracks the actual path fairly closely.
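The HP-filtered trend behind this output gap measure can be computed directly from its penalized least-squares definition. The sketch below is a minimal numpy implementation of our own (the thesis's estimations were run in Stata); the smoothing parameter λ = 129,600 is the conventional Ravn–Uhlig choice for monthly data and is an assumption here, since the exact value used is not restated.

```python
import numpy as np

def hp_filter(y, lam=129600.0):
    """Hodrick-Prescott filter via its normal equations:
    trend = argmin sum (y - t)^2 + lam * sum (second differences of t)^2,
    i.e. solve (I + lam * K'K) trend = y, with K the second-difference operator.
    lam = 129600 is the usual choice for monthly data (Ravn-Uhlig)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (K.T @ K), y)
    return trend, y - trend  # trend and cyclical component (the "gap")

# Sanity check: a perfectly linear series has zero second differences,
# so the filter returns it unchanged and the implied gap is zero.
y = 2.0 + 0.1 * np.arange(120)
trend, gap = hp_filter(y)
```

Applied to the industrial production growth series, the second return value is the gap measure used in the preferred specification.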

To summarize, the robustness checks carried out in this subsection reveal thatthe baseline estimation results for the stock price augmented Taylor rule, recordedin Table 4, appear to be fairly robust to different specifications and proxy variables.


Indeed, the significance and magnitude of the estimated coefficient on the stock market variable remain reasonably unaffected by the choice of the horizon over which the Federal Reserve forms its expectations of the output gap, by changes in the measure of stock price disequilibria, and by the proxies for inflation and the output gap. So far, our results suggest that stock prices do represent an important aspect of the Federal Reserve's monetary policy design. While committed to implementing monetary policy to keep inflationary pressures under control and bring output in line with potential, the Federal Reserve also seems to act systematically in response to stock price misalignments. As Botzen & Marey [2010] emphasize, in the absence of perfect knowledge about Central Bank policy frameworks, this result may represent valuable information for participants in financial markets. Indeed, where interest rates are concerned, the future course of policy rates is often much more crucial than their current level. Since unexpected interest rate changes usually have a large impact on trading decisions, financial market traders – such as Forex traders – will try to anticipate the Central Bank's rate setting, as doing so can lead to more profitable positions. It is thus crucial for them to be well informed about the principles guiding central bank decision-making. Accordingly, discerning whether the Central Bank aims to influence stock prices can result in more accurate forecasts of subsequent decisions regarding the official rate. Hence, our findings suggest that participants in financial markets should consider stock price developments when they aim to predict the future path of the Federal funds rate. Nevertheless, it would be premature to draw any firm conclusions at this stage of the analysis. We will continue to investigate the potential impact stock prices could have on central bank decisions, and will try to provide several refinements of the results obtained so far.

3.3.3 Historical Performance and Decomposition

Using a Taylor rule augmented with a reaction to the stock market, we were able to identify a significant positive relation between policy interest rates and stock prices over the full estimation period from 1979:10 to 2011:11. Thus, our econometric approach supports the evidence of a direct and systematic consideration of stock price developments in the setting of the Federal funds rate. In view of this finding, two natural questions are introduced in this subsection. First, we would like to evaluate whether taking this financial activism into account helps to capture the Federal Reserve's behaviour more closely – in other words, whether the augmented rule produces interest rate predictions that are much closer to actual behaviour than those of the standard rule. Second, we believe it may be useful to make a subtle distinction between the notion of an estimated reaction function and that of a policy rule. In the perspective of Taylor [1993], a policy rule can be seen as a description of how the instrument rate is adjusted in response to inflation, the output gap and, in our case, stock price developments. Seen in this way, should we then consider the rule as a systematic approach followed by the Central Bank to cope with stock price disequilibria, or rather as an occasional discretionary practice? Again, the interpretation of our findings calls for prudence. Accordingly, the following subsection will attempt to clarify this important question by means of a visual decomposition of the target interest rate.


Figure 6: Actual and Target values of the U.S. Federal funds rate (alternative preferred specification)

[Chart: U.S. Fed funds rate (%), 1985–2010 – actual rate, target rate, and target rate with stock prices.]

Notes: The chart shows the actual and estimated target values of the U.S. Federal funds rate, derived from the estimated regressions reported in Table 7. The alternative specification uses the CPI core to measure inflation (π^CPIcore), the HP-filtered trend to construct the output gap (y^HP) and the stock market returns as the stock market variable. The target value differs from the fitted value in that the former implicitly sets the interest rate smoothing parameters to zero.

In Figure 6, we plot the actual path of the policy rate together with the estimated target value of the interest rate implied by the alternative preferred specification 62. It is clear from Section 3.2.3 that this alternative model outperforms the baseline specification in tracing the dynamics of the historical rates. Figure 6 shows in a dotted line the target interest rate implied by the standard model in the first row of Table 7, while the estimated target rate implied by our stock price augmented reaction function, reported in the second row of Table 7, is represented by a thin solid line.
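The distinction between the target and fitted rates can be sketched numerically with the long-run coefficients from the second row of Table 7. The snippet below is illustrative only: it uses realized rather than expected inflation, invented input values, and collapses the rule's three interest-rate lags into a single smoothing term.

```python
# Long-run coefficients from the stock-augmented rule, second row of Table 7.
ALPHA, B_PI, B_Y, B_S, RHO = -0.997, 1.874, 0.364, 0.034, 0.936

def target_rate(pi, gap, stock):
    """Target (smoothing-free) Fed funds rate implied by the long-run rule."""
    return ALPHA + B_PI * pi + B_Y * gap + B_S * stock

def fitted_rate(prev_rate, pi, gap, stock):
    """One-step partial adjustment toward the target with smoothing RHO.
    (The estimated rule has three interest-rate lags; one lag is a simplification.)"""
    return RHO * prev_rate + (1.0 - RHO) * target_rate(pi, gap, stock)

tgt = target_rate(pi=3.0, gap=0.0, stock=0.0)  # -0.997 + 1.874*3 = 4.625
fit = fitted_rate(prev_rate=5.0, pi=3.0, gap=0.0, stock=0.0)
```

With ρ as high as 0.936, the fitted rate stays close to last month's rate, which is why the target series in the figure is much more volatile than the fitted one.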

The two estimated reaction functions both track the general trend of actual interest rates for most of the period under investigation, and seem to explain interest rate settings in the U.S. well. However, the difference in interest rate predictions between the standard forward-looking Taylor rule and the augmented version is not evident. In fact, summary statistics reported in Table 8 show that the standard rule produces a target rate closer on average to the actual behaviour of the Federal Reserve, whereas the augmented rule slightly outperforms it in terms of median and variability. In addition, from 1982:01 to 2008:03, the deviation of the target rates from the actual path amounts

62It uses the annual CPI core growth rate and the deviation of the industrial production growthrate from its HP-filtered trend, as measures of inflation and the output gap, respectively.


on average to 1.341 percentage points for the standard model and to 1.382 percentage points for the augmented one. Yet, the augmented rule seems to track the Federal Reserve's behaviour better in some isolated episodes, such as the accommodative move of the Federal Reserve in the period following the stock market crash of October 1987, or the tighter monetary policy of the mid- to late 1990s. Still, within this simple visual analysis, the inclusion of stock prices as one of the state variables monitored by the Central Bank does not set out a clearer picture: the contribution of stock price fluctuations is not very distinguishable.

Table 8
Summary statistics of actual and Taylor rule target interest rates, 1982:01-2008:03

                            Mean   Median   Std. dev.   Min./Max.     ‖Actual − Target‖
Actual                      5.70   5.49     2.76        0.98/14.94    –
Target (standard)           5.17   4.78     2.29        0.31/10.50    1.34
Target (with stock price)   5.14   4.81     2.34        −0.53/10.47   1.38

In order to gain more insight into the difference between the two specifications, we present in Figure 7 the interest rate target decomposition for the augmented policy rule. This representation, used by Chadha, Sarno & Valente [2004], isolates the contribution, in percentage terms, of each explanatory variable of the Central Bank's reaction function to the target value of the interest rate. Even though the target rate level implied by the augmented rule is very close to the one implied by the standard model, Figure 7 clearly shows the relative importance of stock price developments compared with the two traditional indicators. Not surprisingly, we find that the component related to inflation dynamics is of overwhelming importance: it plays a dominant role in the setting of the Federal funds rate, accounting on average for approximately 80 percent of the target rate. Interestingly, the component corresponding to stock price movements has approximately the same magnitude as the output gap component – 8 percent compared to 12 percent for the output gap, to be precise. Moreover, Figure 7 shows that, starting from the mid-1990s, the output gap and stock price components have played a much larger role, while the inflation stabilization objective has decreased in prominence. This observation is in line with Driffill et al. [2006], who show that financial stability variables – stock prices, "basic risk" and credit spreads – have become increasingly important since the establishment of a low-inflation environment and the globalization of financial markets, at the expense of variables related to macroeconomic stability. Finally, and related to our second question, we observe that the contribution of stock prices to the determination of the interest rate target is particularly important in episodes of significant financial turmoil, where large price misalignments are easily recognizable: during the stock market bubble's formation in the mid-1980s and its subsequent crash in October 1987; throughout the period of "irrational exuberance" in the mid- to late 1990s; over the period following the burst of the dot-com bubble in 2001; and, finally, during the sharp decline in equity markets after the Lehman Brothers shock in 2008.
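One plausible way to compute such percentage contributions is to normalize each component β_k x_k by the sum of all components. The sketch below makes that assumption explicit – the exact normalization used by Chadha, Sarno & Valente is not reproduced here – and uses invented input values for a single illustrative month.

```python
def contribution_shares(coeffs, values):
    """Percentage contribution of each explanatory variable to the target rate:
    component beta_k * x_k relative to the sum of all components.
    (One plausible normalization; the one in Chadha et al. may differ.)"""
    components = [b * x for b, x in zip(coeffs, values)]
    total = sum(components)
    return [100.0 * c / total for c in components]

# Inflation, output gap and stock price components for an illustrative month,
# using the long-run coefficients of the augmented rule (Table 7, second row).
shares = contribution_shares(coeffs=[1.874, 0.364, 0.034],
                             values=[3.0, 1.0, 5.0])  # pi, gap, stock returns
```

Even with a sizeable stock return, the inflation component dominates by an order of magnitude, consistent with the roughly 80/12/8 split reported above.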


Figure 7: Interest Rate Target Decomposition

[Chart: decomposition of the U.S. Fed funds rate target (%), 1980–2010, into its Inflation, Output gap and Stock price components.]

Notes: The representation of the contribution, in percentage terms, of the explanatory variables of the central bank's reaction function was first used in the literature on interest rate rules by Chadha, Sarno & Valente [2004].

This observation has certain implications for the interpretation of our previous findings. Given the positive and significant estimated coefficient on the stock market variable, we hastily concluded that the Federal Reserve may have reacted systematically to asset price misalignments. But the estimation of a reaction function does not distinguish clearly between what comes under a discretionary practice of the Central Bank and what is part of a systematic approach [Eyssartier & Aubert, 2002]. Nevertheless, our graphical decomposition of the target interest rate offers a first element of response. It suggests that the Federal Reserve has reacted to stock price developments on only a few occasions during the estimated sample period, when stock prices appeared to deviate substantially from their fundamental equilibrium value and risked endangering the objective of financial stability. In this regard, it would thus be more accurate to interpret our findings as repeated discretionary behaviour rather than as a systematic rule pursued by the Central Bank.

In conclusion, in the light of the evidence presented in this subsection, we support the interpretation given by Chadha et al. [2004] that, although targeting stock prices has not been a key policy objective pursued systematically by the Federal Reserve, it has induced a reaction from the Central Bank when misalignments are relatively large. Indeed, one has the clear impression that the reaction of central banks to movements in asset prices is based on a pragmatic approach, where monetary


policy would respond only in exceptional situations, to preserve financial stability and to prevent any systemic developments.

If we believe that monetary policymakers respond more vigorously to asset prices when misalignments are large, then capturing this behaviour requires a more sophisticated model than the conventional linear Taylor rule. So far, the linear characterization of the monetary policy rule has assumed a constant response to stock prices, in which the Central Bank reacts to the slightest stock price development. Therefore, our estimation approach must be considered a first rough attempt, and a logical extension of this study would be to capture these asymmetries explicitly using a non-linear reaction function. This would involve a reaction function partly non-linear in stock prices and partly linear in expected inflation and the output gap [Chadha et al., 2004]. However, as remarked by Chadha et al., "[. . . ] no GMM or instrumental variables estimator exists to date for models of this kind (or indeed for any multivariate threshold model) [. . . ]" 63.

As discussed above, the augmented forward-looking reaction function, Eq. (18) in its preferred alternative specification, seems to be consistent with the actual path of the Federal funds rate, successfully capturing its major developments. Still, a few significant gaps are easily distinguishable and remain despite the inclusion of the stock price variable in the Central Bank's reaction function. This may be an indication, among other possible reasons, that the policy reaction function has changed over the full estimation period 64. In order to determine whether the response coefficients have remained stable over time, the following section presents the results of a recursive window regression on the Taylor rule augmented with stock prices.

3.3.4 A recursive window estimation

The full sample period adopted throughout the different sections covers more than 31 years of monetary policy. Three Federal Reserve Chairmen have held office over this period, and each has had to cope with a changing policy environment: from Volcker's successful battle against inflation, which set the stage for the so-called Great Moderation during Greenspan's tenure, to the recent financial crisis that Bernanke had to deal with. As Chairman Bernanke stated in his recent speech (2013) 65 on the 100-year history of the U.S. Central Bank, "The broader conclusion is what might be described as the overriding lesson of the Federal Reserve's history: that central banking doctrine and practice are never static. We and other central banks around the world will have to continue to work hard to adapt to events, new ideas, and changes in the economic and financial environment." From all of the foregoing, it appears unlikely

63 To the best of our knowledge, an econometric technique suitable for estimating both asymmetric and symmetric reactions simultaneously is not yet available in the literature.

64 It may also reflect the possibility that our monetary policy rule is misspecified, in particular the potential non-linearities discussed above or the omission of other key explanatory variables.

65 "A Century of U.S. Central Banking: Goals, Frameworks, Accountability", remarks by Ben S. Bernanke at "The First 100 Years of the Federal Reserve: The Policy Record, Lessons Learned, and Prospects for the Future", a conference sponsored by the National Bureau of Economic Research, Cambridge, Massachusetts.


that the estimated monetary policy rule has been perfectly stable over such a long sample period. It therefore seems reasonable to investigate this issue further and examine how the coefficients ρ, βπ, βy and, particularly, βs evolve over time. In addition, one might well ask whether including the recent financial crisis has any impact on our point estimates, since monetary policy reached its effective lower bound for the first time, resulting in the application of unconventional policies that are unlikely to be captured in this simple Taylor rule framework.

In order to ascertain whether the parameters are constant, it is common in the literature – at least in quarterly frequency analyses – to study sub-samples drawn according to chairman incumbencies or to test for unknown breakpoints. However, as was highlighted in Section 3.2.4, the main problem with estimating separate reaction functions, over the Volcker and Greenspan periods for example, is that the latter period may not contain enough variation in inflation, nor in the policy rate, to identify the relevant parameters properly. In view of these considerations, we adopt instead a recursive window technique. This allows us to explore the potential impact that newly available information about macroeconomic fundamentals has on our reaction function coefficients 66. In particular, we first run the GMM estimation on the augmented equation (18) – in its preferred alternative specification – using a window of 90 observations, that is, over the period 1979:10 to 1987:03, which roughly corresponds to the Volcker era. Then, keeping the starting date of the regressions fixed, we continually re-estimate the reaction function, advancing the ending date in steps of one month at a time, until the end of our observation period (2011:11) is reached. We obtain 297 regression equations, each containing the point estimates of the short-run parameters as well as their standard errors. To give a visual impression of how stable the parameters are, the time paths of the long-run estimated coefficients are displayed in Figure 8. Importantly, it should be emphasized that we did not represent the point estimates along with their corresponding confidence intervals, as is usually and judiciously done in such frameworks 67.
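The recursive scheme itself is mechanical and can be sketched as follows, with ordinary least squares standing in for the GMM estimator and synthetic data in place of the actual series. With 386 monthly observations (1979:10 to 2011:11) and an initial window of 90, the loop produces the 297 regressions mentioned above.

```python
import numpy as np

def recursive_estimates(X, y, start_window=90):
    """Expanding-window ('recursive') estimation: fix the sample start, extend
    the ending date one observation at a time, re-estimating at each step.
    OLS via lstsq stands in here for the thesis's GMM estimator."""
    paths = []
    for end in range(start_window, len(y) + 1):
        beta, *_ = np.linalg.lstsq(X[:end], y[:end], rcond=None)
        paths.append(beta)
    return np.array(paths)  # one coefficient vector per regression

# Synthetic data: 386 months, constant plus two invented regressors.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(386), rng.standard_normal((386, 2))])
y = X @ np.array([1.0, 0.5, -0.25]) + 0.1 * rng.standard_normal(386)
paths = recursive_estimates(X, y)
```

Plotting each column of `paths` against the subsample ending date reproduces the kind of coefficient trajectories shown in Figure 8.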

As regards the policy inertia coefficient (ρ), Figure 8 shows that the recursive point estimates have remained stable overall throughout the estimation period, evolving within a narrow range from 0.89 for the starting sample period to 0.94 for the full sample period. This evidence suggests that the speed of adjustment in the Federal funds rate has not moved significantly over the observation period. More specifically, however, we notice that the sum of coefficients ∑_{i=1}^{3} ρ_i increased significantly from 1987 to 1994, and then broadly stabilized close to 0.91. But, since mid-2003, the policy inertia coefficient has risen steadily until the end of our observation period. We can logically recognize

66 It also has the benefit of retaining the substantial variation provided by the early 1980s.

67 Unfortunately, we were not able to compute the standard errors of the long-run coefficients.

We made many attempts to build a "wrapper program" in the statistical software Stata to couple the necessary delta method with the rolling function, but ultimately we did not succeed. Yet, a measure of significance is essential. Hence, we decided to display, along the paths of the inflation, output gap and stock price coefficients, their z-statistics, defined, however, with respect to the short-run coefficients and their standard errors. Both tend to move in the same direction and with approximately the same magnitude. It goes without saying that we were careful to check that the interpretation of a coefficient's significance is unaltered when the z-statistic is close to conventional critical values of significance.
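The delta-method step described in this footnote is straightforward to code outside Stata. The sketch below uses illustrative numbers rather than the thesis's estimates: it computes the standard error of a long-run coefficient b/(1 − ρ) from the short-run estimates and an assumed 2×2 covariance matrix.

```python
import numpy as np

def longrun_se(b, rho, cov):
    """Delta-method standard error of the long-run coefficient b / (1 - rho):
    se = sqrt(g' V g), with gradient g = [1/(1-rho), b/(1-rho)**2]
    taken with respect to the short-run parameters (b, rho)."""
    g = np.array([1.0 / (1.0 - rho), b / (1.0 - rho) ** 2])
    return float(np.sqrt(g @ cov @ g))

# Illustrative numbers (not the thesis's estimates): short-run b = 0.05,
# smoothing rho = 0.9, independent standard errors 0.01 and 0.005.
b, rho = 0.05, 0.9
cov = np.diag([0.01 ** 2, 0.005 ** 2])
beta_long = b / (1.0 - rho)  # long-run coefficient: 0.05 / 0.1 = 0.5
se_long = longrun_se(b, rho, cov)
```

The 1/(1 − ρ) scaling explains why long-run standard errors blow up when the smoothing coefficient is close to one, and hence why the short-run z-statistics plotted in Figure 8 are only an approximate guide.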


three abrupt increases at times when the Federal Reserve held the policy rate steady: at the then all-time record low of 1 percent from July 2003 to June 2004, at 5 1/4 percent from July 2006 to July 2007 and, finally, with the instrument rate near its effective lower bound from early 2009 until the end of the estimation period. Surprisingly, however, the recursive point estimate does not show any drop as the magnitude and international scope of the recent credit crisis became apparent; it even shows a continuing surge in ρ. This result is most surprising given that the Federal Reserve quickly dropped the key rate to a historically very low level in order to shore up a financial system affected by a massive liquidity shortage, and the economy in general, as growth slowed and job losses mounted. From August 2007 to December 2008, the total contraction in the Federal funds rate reached 486 basis points. Accordingly, we would have expected declining policy inertia 68. Overall, the high degree of policy inertia has remained broadly unaltered during most of the period, showing only a slight upward trend. Hence, there is a substantial degree of gradualism in the adjustment of the Federal funds rate.

Concerning the response coefficient related to expected inflation (βπ), Figure 8 indicates a certain instability in the recursive point estimates, mainly concentrated in the period prior to 1994. In particular, we can observe in the mid-1980s a relatively low reaction coefficient βπ of about 1.3. Then, following the appointment of Alan Greenspan, there was a more aggressive commitment to inflation stabilization. Indeed, as Levieuge [2002] and Mehra [1999] also noticed, the inflation point estimate increases swiftly by the end of the 1980s. Since 1994 it has remained broadly unaltered, hovering around 1.85 for the remainder of the estimation period. Regarding the end of the sample period, it is noteworthy that the inflation coefficient remained practically unaffected by the addition of data points related to the 2008 financial turmoil. In fact, this observed behaviour appears quite puzzling, as we would have expected a downward pattern after the tipping point of the turmoil. We believe the Central Bank might have prioritized objectives other than the traditional ones, given the significant threat to the stability of the financial system. Yet, this issue will not be addressed further 69. In any case, the inclusion of the recent financial crisis period has no major impact on the estimated reaction to expected inflation, except for a small decrease in precision. Overall, even though it increases moderately from its initial value, the weight on inflation always remains above the stability threshold of one, suggesting that monetary policy has been sufficiently aggressive towards expected inflation over the whole sample period.

As regards the output gap coefficient βy, Figure 8 suggests that it remains relatively stable over the observation period. It seems to settle at roughly 0.45, with a maximum of 0.557 for the sample 1979:11-1993:4 and a minimum of 0.318 when the sample ends in 2010:2. It is noteworthy that the recursive point estimates are very close to the weight on the output gap (0.5) proposed by Taylor [1993] in his original paper.

68 It could reflect the omission of an increasingly important and persistent variable that has emerged since 2008.

69 It is beyond the scope of our simple representation of monetary policy. In addition, the level of uncertainty surrounding the coefficient estimates rises substantially, as reflected by the reduction in the z-statistics since the advent of this unprecedented period.


Figure 8: Recursive GMM results for the augmented policy rule (alternative specification: π^CPIcore, y^HP, s^L.Stock)

[Four panels, each plotting a recursive long-run coefficient against the subsample ending date (1985–2010): "Sum of lagged interest rates" (ρ, ranging from about .89 to .94); "Inflation" (βπ coefficient with z-statistics); "Output gap" (βy coefficient with z-statistics); "Stock price" (βs coefficient with z-statistics and the 5% significance threshold; selected short-run/long-run z-statistic pairs: 2.17/2.11, 2.25/2.18, 2.66/2.53).]

Notes: This figure shows the results from a recursive estimation which starts out estimating the augmented equation (18) – in its preferred alternative specification – over the period 1979:10 to 1987:03 (i.e. a window of 90 months); then, keeping the starting date of the regressions fixed, we continually re-estimate the reaction function, advancing the ending date in steps of one month at a time, until the end of our observation period (2011:11) is reached. The ending date of each subsample is shown on the x-axis. In the last chart, the dashed horizontal line indicates the 5 percent significance level for the stock price coefficients. Important note: the z-statistics are defined with respect to the short-run coefficients and their standard errors. The numbers in parentheses are defined as follows: (z-statistic for the short-run coefficient / true z-statistic for the long-run coefficient).
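The expanding-window scheme described in these notes can be sketched in a few lines. The snippet below is a simplified illustration on synthetic data, with plain OLS standing in for the GMM estimator actually used in this thesis; all variable names and data-generating values are hypothetical.

```python
import numpy as np

def recursive_estimates(X, y, min_window=90):
    """Re-estimate the rule on samples [0, t) for t = min_window..T,
    keeping the start date fixed (expanding window)."""
    T = X.shape[0]
    paths = []
    for t in range(min_window, T + 1):
        # OLS stand-in for the GMM step: fit on all data up to ending date t
        b, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        paths.append(b)
    return np.array(paths)  # one coefficient vector per ending date

# synthetic monthly data: intercept, expected inflation, output gap
rng = np.random.default_rng(0)
T = 400
X = np.column_stack([np.ones(T), rng.normal(2.0, 1.0, T), rng.normal(0.0, 1.0, T)])
beta_true = np.array([1.0, 1.85, 0.45])
y = X @ beta_true + rng.normal(0.0, 0.25, T)

paths = recursive_estimates(X, y)
print(paths.shape)         # (311, 3): 400 - 90 + 1 ending dates
print(paths[-1].round(2))  # full-sample estimate, close to beta_true
```

Plotting each column of `paths` against the ending dates reproduces the kind of chart shown in Figure 8: early estimates are noisy because they rest on few observations, and they settle as the window expands.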

Despite the relative stability of the output gap coefficient, we do observe two episodes of perturbation. First, the spike in April 1993 accounts for the increased support of monetary policy towards the real economy in the early 1990s. But this higher concern for the output gap rapidly reversed as soon as potential inflation scares emerged 70. Second, the output gap point estimates decline swiftly with the inclusion of data related to the recent financial turmoil. This implies a weaker response of the Federal Reserve to the real economy, probably indicating a higher concern about the fragility of the financial system. Despite this recent episode, the output gap point estimates remain positive and significant. Hence, results are broadly qualitatively similar before and after the outbreak of the 2008 financial crisis.

Finally, and of particular interest, the last chart in Figure 8 shows the time path of the recursive estimates of the stock price coefficient (βs). At first glance, the βs coefficients are always positive and the z-statistics never fall below the 5 percent significance threshold

70From 1990 to 1993, policy rates were cut from 8 to 3 percent while the U.S. economy was experiencing a deep recession, exacerbated by the Kuwait war and rising unemployment. But, between January 1994 and February 1995, the Federal Reserve vigorously raised its instrument rate by 300 basis points despite a still negative output gap.


(1.96). Hence, the recursive analysis confirms the static estimation results, reported in Table 7, that stock price misalignments in general have played an important role in the Federal funds rate's policy setting. Monetary policy has been stabilizing towards stock market developments: interest rates increase (are cut) in times of positive (negative) misalignments. Note that the minimum z-statistic – yet significant at the 5 percent level – is attained in 1996:2. This is not far from the ending sample date used by Bernanke & Gertler [1999], in which the authors found an insignificant reaction to stock returns. As regards the possibly changing impact of stock prices on monetary policy decision making, the dynamic analysis reveals a far from stable coefficient βs. In fact, Figure 8 suggests that the reaction to stock returns changed over time along two dimensions. First, the coefficient response looks particularly stronger at times recognizable as large misalignments. Both the magnitude and the significance level of the βs coefficient increase sharply during the stock market crash of 1987, in the early 1990s (the Japanese bubble economy burst in 1990) with a peak in 1993 71, after the burst of the high-tech bubble in 2001, and finally at the height of the financial crisis in late 2008. Thus, it seems that the reaction of the Central Bank to stock price fluctuations tends to spike when misalignments are relatively large, which is in line with the discretionary interpretation supported in Section 3.3.3. Second, it appears that the impact of stock returns on interest rate decisions was stronger in times of stock market busts than in boom phases. The Federal Reserve seems to give greater weight to stock prices when the stock market collapses. Otherwise, stock prices play a minor role.

To summarize, the recursive estimation approach reveals that policy response coefficients are reasonably stable over time, except mainly for the stock price variable. Also, the advent of the recent financial turmoil has a modest impact on our point estimates. In addition, the recursive estimation shows that the Federal Reserve was more sensitive to stock price developments in earlier episodes of large misalignments, corroborating the call for a non-linear characterization of the augmented reaction function. Finally, while the static estimation provides evidence that stock prices in general have played an important role in the interest rate setting, the dynamic analysis reveals a somewhat more sensitive reaction to stock market busts than to boom phases. Although the recursive coefficients and z-statistics suggest a higher magnitude and significance level during large negative misalignments, they do not provide systematic evidence for an asymmetric reaction function. The following section therefore analyses in more depth the potential asymmetric response of policy interest rates to stock price misalignments.

3.3.5 A presumed asymmetric response towards stock prices

This subsection explores a new perspective on the established significant link between the Federal funds rate and stock price movements. Estimations based on Eq. (18) only allow a symmetric response of monetary policy to stock prices to be captured. For example, the significant estimated coefficient found for the alternative augmented specification, reported in the first line of Table 7, tells us that monetary policy increases (decreases) the instrument rate by 34 basis points in the face of a yearly 10 percent rise (decline) in stock prices. We made the assumption that the reactions of monetary authorities to stock price increases and decreases are of the same magnitude. However, the recursive estimation in Figure 8 gave the impression that monetary policy reactions to stock prices are much stronger in periods of falling stock prices. Hence, it would be interesting to study whether the Federal Reserve has indeed reacted asymmetrically to stock prices over the period 1979-2011.

71This episode is, however, less easily recognizable as a large misalignment.

Unfortunately, the empirical literature on asymmetric monetary policy reactions to the stock market is limited. Mattesini & Becchetti [2008], for the period 1980:1-2001:4, argue that the Federal Reserve reacted asymmetrically to stock market misalignments. Using quarterly data, they estimate by GMM a forward-looking Taylor rule augmented with an "Index of Stock Price Misalignment", in which the fundamental value of stock prices is constructed on the basis of a discounted cash flow method. By differentiating the impact on interest rates of negative and positive stock price deviations, their econometric results show that the Federal Reserve tends to lower the instrument rate when stock prices are below their fundamental value, whereas there is no evidence of pre-emptive monetary tightening during episodes of overvaluation in the stock market.

Within the same econometric set-up, but in a monthly frequency analysis, Hoffmann [2012] looks for asymmetric monetary policy reactions to the stock market during the period 1979:08-2008:12 and allows this asymmetry to vary over time by means of rolling estimations. The author applies a slightly different stock market variable, in which the fundamental value is approximated by the HP filter. While the full estimation period does not yield a significant coefficient on the stock market gap, an asymmetry in the Federal Reserve's reaction function is nevertheless identified during the Greenspan era. According to his rolling window estimation, the asymmetry emerged principally after the 2001 crash of the dot-com bubble, when interest rates were cut extensively but were not raised in accordance with the asset market recovery in 2003.

To investigate this issue, following Hoffmann [2012], we include in Eq. (13) a threshold dummy variable D that distinguishes between ups and downs. The dummy D takes the value 1 when the yearly stock price growth is negative and 0 otherwise. The augmented Taylor rule now takes the form:

$$r_t = \Big(1 - \sum_{i=1}^{3}\rho_i\Big)\big(\alpha + \beta_\pi \pi_{t+12} + \beta_y y_t + s_{t-1}(\beta_s^{+} + \mu D)\big) + \sum_{i=1}^{3}\rho_i r_{t-i} + \varepsilon_t. \qquad (19)$$

which is converted into the following estimable equation,

$$r_t = \phi_0 + \phi_\pi \pi_{t+12} + \phi_y y_t + s_{t-1}(\phi_s^{+} + \phi_\mu D) + \phi_1 r_{t-1} + \phi_2 r_{t-2} + \phi_3 r_{t-3} + \varepsilon_t. \qquad (20)$$

where β+s represents the coefficient for positive stock price movements and µ captures the additional effect on the interest rate that stems from falling stock prices. The total coefficient of negative stock price growth is equal to the sum of the coefficients (β+s + µ).
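The construction of the dummy D and of the interaction regressor that carries the asymmetry in Eq. (20) can be sketched as follows. This is a static illustration on synthetic data: the three interest-rate lags are omitted for brevity, OLS stands in for the GMM estimator, and all variable names and numerical values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 386  # roughly the number of months in 1979:10-2011:11

# synthetic inputs: expected inflation, output gap, lagged yearly stock growth
pi_e = rng.normal(3.0, 1.0, T)    # stand-in for pi_{t+12}
gap  = rng.normal(0.0, 1.5, T)    # stand-in for y_t
s    = rng.normal(0.05, 0.15, T)  # stand-in for s_{t-1}

# threshold dummy: 1 when yearly stock price growth is negative, 0 otherwise
D = (s < 0).astype(float)

# regressors of Eq. (20) without the interest-rate lags:
# intercept, inflation, output gap, s_{t-1}, and the interaction s_{t-1}*D
X = np.column_stack([np.ones(T), pi_e, gap, s, s * D])
phi_true = np.array([0.5, 0.12, 0.03, 0.006, -0.010])  # illustrative values
r = X @ phi_true + rng.normal(0.0, 0.01, T)

# OLS stand-in for the GMM step; the last element is the asymmetry term phi_mu
phi_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
print(phi_hat.round(3))
```

The coefficient on s alone applies in months of rising stock prices, while the coefficient on s·D adds the extra reaction in months of falling prices, exactly the decomposition (β+s, µ) tested below.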

As Table 9 reveals, the asymmetry analysis yields counter-intuitive results: the additional impact on the policy rate that stems from negative price growth (µ) is significant at the 1 percent level, but the coefficient is negative. According to these

72

Page 79: Master's Thesis Alexandre Lauwers

results, the Federal Reserve had an asymmetric reaction to stock price drops and hikes but, contrary to what was expected, the Central Bank responded more strongly to price increases than to price drops. Also, the total coefficient of negative stock price growth is equal to −.065 and highly significant, suggesting a destabilizing monetary policy during a bust phase.

Table 9: Asymmetric monetary policy with respect to stock price
(alternative preferred specification)

                      α          βπ         βy         ρ         β+s        µ
Stock + dummy D    −2.189***   1.926***   .451***   .932***   .094***   −.158***
                    (.594)      (.093)     (.076)    (.005)    (.020)    (.036)

H0: β+s + µ = 0    Prob > χ2(1) = 0.002
Sample period: 1979:10 - 2011:11

Notes: The estimated parameters refer to the augmented equation (20). This alternative specification was estimated using the CPI core inflation rate (πCPIcore), the HP output gap (yHP) and the stock market returns. The table displays the implied long-run coefficients. β+s denotes the coefficient for positive stock price movements while µ captures the additional effect on the interest rate that stems from falling stock prices. The total coefficient of negative stock price growth is equal to the sum of the coefficients (β+s + µ). See notes in Table 4 for further explanations on the instruments and on the GMM estimation procedure. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99). The Q-test for serial correlation indicates no pattern of correlations in the error term for all specifications. HAC corrected standard errors are computed with the Delta method and reported in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1.
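The mapping from the short-run estimates of Eq. (20) to the implied long-run coefficients reported in Table 9, with delta-method standard errors as the notes indicate, can be sketched as follows. The numbers below are made up for illustration; in practice the covariance matrix V would come from the HAC-corrected GMM output, not from the diagonal guess used here.

```python
import numpy as np

def long_run(phi, rho, V):
    """Implied long-run coefficient beta = phi / (1 - sum(rho)) and its
    delta-method standard error, given a covariance matrix V for the
    parameter vector (phi, rho_1, ..., rho_p)."""
    denom = 1.0 - np.sum(rho)
    beta = phi / denom
    # gradient of beta with respect to (phi, rho_1, ..., rho_p)
    grad = np.concatenate([[1.0 / denom], np.full(len(rho), phi / denom**2)])
    se = float(np.sqrt(grad @ V @ grad))
    return beta, se

# hypothetical short-run values: stock coefficient and three lag coefficients
phi_s = 0.0064
rho = np.array([1.100, -0.050, -0.118])        # sums to 0.932, as in Table 9
V = np.diag([0.0020, 0.005, 0.005, 0.005])**2  # hypothetical covariance matrix

beta_s, se_s = long_run(phi_s, rho, V)
print(round(beta_s, 3))  # 0.094: the long-run beta_s+ reported in Table 9
```

Note that with ρ close to one the denominator 1 − Σρ is tiny, which is why small short-run coefficients translate into much larger long-run responses and why the long-run standard errors inflate accordingly.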

Given these unexpected and implausible findings, a rolling window regression is applied in order to identify whether the initial years and/or the last observations of the sample affect the estimation. The asymmetric impact of the stock market variable on U.S. interest rates may have been weak during the 1980s and may have become stronger during the 1990s and the 2000s. Thus, Eq. (20) is estimated as a rolling regression with a ten year moving window (120 months), starting with the period 1979:11 to 1989:9 and moving forward until 2001:12 to 2011:11.
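The rolling scheme differs from the recursive estimation of Section 3.3.4 only in that the start date moves along with the end date, so every subsample has the same 120-month width. A minimal sketch on synthetic data, again with OLS (and classical rather than HAC standard errors) standing in for the GMM machinery:

```python
import numpy as np

def rolling_tstats(X, y, k, window=120):
    """t-statistic of coefficient k from OLS fits over a fixed-width
    window moved forward one observation at a time."""
    T = X.shape[0]
    tstats = []
    for end in range(window, T + 1):
        Xw, yw = X[end - window:end], y[end - window:end]
        b, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        resid = yw - Xw @ b
        sigma2 = resid @ resid / (window - Xw.shape[1])
        cov = sigma2 * np.linalg.inv(Xw.T @ Xw)  # classical covariance
        tstats.append(b[k] / np.sqrt(cov[k, k]))
    return np.array(tstats)

rng = np.random.default_rng(2)
T = 386
X = np.column_stack([np.ones(T), rng.normal(size=T)])  # intercept + one regressor
y = X @ np.array([0.5, 0.2]) + rng.normal(0.0, 1.0, T)

ts = rolling_tstats(X, y, k=1)
print(ts.shape)  # (267,): one t-statistic per window ending date
```

Plotting `ts` against the window ending dates gives the kind of chart shown in Figure 9 for the asymmetric term µ.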

Figure 9 illustrates the ten year rolling t-statistics for the asymmetric term µ. As the regression rolls into the end of the nineties and the period after the burst of the Internet bubble, the results change significantly: the t-statistics turn positive and stay above the 5 percent significance level. However, any further comment on the analysis at this stage would be inappropriate. First, as already highlighted in earlier subsections, the process of the Federal funds rate approaches a near-unit root as soon as the sample rolls out of the high-inflation period, leading to uncomfortable problems of identification. Second, the previous results of the static and dynamic estimations collapse under


different specifications: they are not robust to changes in the measurement of the stock market variable, nor to the way this variable enters the augmented policy rule – contemporaneously or in a forward-looking form. Unfortunately, we are unable to validate empirically the presumed asymmetric behaviour of the Federal Reserve that we visually perceived in Section 3.3.4. Our last investigation reveals again the limits of using a single linear Taylor rule within a monthly frequency analysis. Using a distinct econometric method – an extension of the model of Rigobon & Sack [2003] – Ravn [2012] investigates whether the reaction of the Federal Reserve was asymmetric to movements in stock prices over the period 1998-2008. The author confirms that the U.S. Central Bank has indeed followed such a policy. While a 5 percent drop in the S&P 500 index increases the probability of a 25-basis-point interest rate cut by one third, no significant reaction of monetary policy to stock price increases can be identified.

Figure 9: Rolling GMM t-statistics for the Asymmetric term (µ) (alternative specification: πCPIcore, yHP, sL.Stock)

[Figure: single panel plotting the ten year rolling t-statistics of the asymmetric term µ against the ending date of each subsample (1990 to 2011), with the 5 percent significance thresholds drawn as dashed lines.]

Notes: This figure shows the ten year rolling t-statistics for the asymmetric term µ. Specifically, equation (20) was estimated as a rolling regression with a ten year moving window (120 months), starting with the period 1979:11 to 1989:9 and moving forward until 2001:12 to 2011:11. The ending date of each subsample is shown on the x-axis. We plot here the t-statistics instead of the z-statistics as we run estimations on smaller sub-samples. Unfortunately, as for the z-statistics, the t-statistics are defined on the short-run estimates of the asymmetric term. The dashed horizontal lines indicate the 5 percent significance levels for the µ coefficients.

Although it is not possible to find plausible and interpretable findings within our framework, the following discussion will be based on the prevalent conclusion found in the existing literature: an asymmetric reaction of the Federal funds rate to stock prices.


The apparent asymmetric approach pursued by monetary authorities towards stock prices is not surprising. As already discussed in Section 1, the Federal Reserve under both chairmen Alan Greenspan and Ben Bernanke followed an asymmetric policy approach to asset price bubbles, which became known as the "Jackson Hole Consensus" [Issing, 2009]. In the absence of inflationary pressures, this approach advocates no, or little, monetary policy action during a bubble's expansion stage whereas, if the bubble bursts, the central bank should follow a "mop-up strategy" by aggressively cutting the instrument rate to minimize the potential destabilizing impact of the burst on the real economy, its production, jobs, and on price stability. Therefore, while the need to inject liquidity and cut interest rates is a strategy commonly recommended and implemented to relieve a financial system experiencing serious difficulties, restrictive monetary policy action might not have been pursued during episodes of exuberance in the stock market 72.

At face value, the behaviour supported by the Jackson Hole Consensus has certainly fostered the idea of an asymmetric response of central banks to asset prices and to stock prices in particular. Nonetheless, the findings of the literature on asymmetric monetary policy should not be conflated too hastily with the asymmetric approach recommended in the aftermath of a burst bubble. Whereas the former depicts an asymmetric response towards stock price developments in a systematic way, there has never been any consideration of targeting asset prices in a systematic fashion by the head of the Federal Reserve. Conventional wisdom is clear-cut on this point: central banks should not target asset prices. Thus, on the one hand, the Federal Reserve practically endorses the role of saviour once the bubble bursts but, on the other hand, it strongly objects to any attempt to stabilize asset prices, refuting any suspicion of financial activism. As already remarked, the findings of the literature can be explained by the fact that what they depict in their estimations as an asymmetric systematic approach may in fact be driven by occasional discretionary policy aimed at injecting enough liquidity after a severe collapse of asset prices. As bubble burst episodes have certainly intensified over the last decades and the "mop-up strategy" has been implemented repeatedly, the frontier between repeated discretionary behaviour and a systematic strategy has indeed become very narrow. Hence, the role the Federal Reserve ascribes to the stock market is anything but clear.

One may therefore ask why policymakers develop, intentionally or not, this feeling of confusion. As part of the discussion of their findings, Chadha et al. [2004] argue, among other heuristic explanations, that central bankers might target asset prices but either cannot or will not admit out loud that they do so. In this regard, if we imagine a central bank perfectly transparent as to its asymmetric role on the financial markets, this could create severe problems of moral hazard since it would reasonably

72Greenspan (2002) claimed that "the notion that a well timed incremental tightening could have been calibrated to prevent the late 1990s bubble is almost surely an illusion. Instead, we...need to focus on policies to mitigate the fallout when it occurs and ease the transition to the next expansion." As Ravn [2012] underlines, the former Chairman admitted (2007) that on at least one occasion during his incumbency, more precisely in March 1997, the Federal Reserve tried to counteract a perceived bubble in the stock market by increasing the policy rate. Even though the strategy paid off initially, stock prices soon recovered and rocketed to an even higher level. To quote Greenspan: "In effect, investors were teaching the Fed a lesson".


mean that they declare themselves lender of last resort. This might be one reason among others to explain their discretion as regards the role they assign to stock price fluctuations [Pepin, 2010].

Despite the Federal Reserve's ambiguous role and what seems to be a deliberately opaque communication, the aggressive monetary easing carried out with each bursting of an asset price bubble – such as those in 1987, 2001 and 2008 – has certainly convinced more investors of the existence of the so-called "Greenspan Put", a concept that first emerged in the late 1990s. The term reflects the perception among U.S. investors that, in case of a large disruption in stock prices, the Federal Reserve will step in and cut its policy rate to ensure liquidity in the capital markets and thus, in effect, "bail out" investors. In this sense, investors hold the equivalent of a put option protecting them against a sharp market decline, since policymakers will take decisive action to prevent the market from falling but not to stop it rising [Miller, Weller & Zhang, 2002]. By submitting a questionnaire to U.S. fund managers and economists, Cecchetti, Genberg, Lipsky & Wadhwani [2000] found evidence that an overwhelming majority of respondents judge the behaviour of the Federal Reserve towards the stock market to be asymmetric, reacting more to a fall than to a rise, and believe that this type of reaction is in part responsible for the high market valuations at the end of the 1990s. Recently, Pamela [2011] addressed whether there is any evidence of the Greenspan Put over the period 1987:08 to 2008:10. Extending the Taylor rule with variables that accommodate misalignments in stock prices, the author finds that the reaction of the U.S. Central Bank to asset prices was significant and was greater at times of financial crises. She concludes that if financial markets take the historical Taylor rule as given, then their confidence that, in the face of a sharp stock price deflation, monetary authorities would act in their favour with an easing of monetary conditions was not unfounded.

The Greenspan Put, as well as a more general asymmetry of monetary policy towards stock price drops and hikes, be it confirmed or just putative, naturally creates serious moral hazard problems and distorts investors' behaviour, as demonstrated by Miller et al. [2002]. Indeed, if we consider a central bank that systematically reacts with a numerically larger factor βs to stock price drops than to stock price rises, investors will naturally perceive that the central bank is covering part of the downside risk of their investment while reducing only a small proportion of their potential gain. For a given level of risk aversion, the investor will thus be inclined to take more risks [Ravn, 2012].

As with the connection highlighted by Bernanke & Gertler [1999] between the systematic reaction of Japanese monetary authorities to stock price developments and the Japanese economy's misfortune in the late 1980s, it seems legitimate to question here whether the episodes of aggressive monetary easing operated by the Federal Reserve over the previous decades helped build up financial imbalances and ultimately laid the ground for the great recession of 2007-2009 73. Specifically, Bini Smaghi [2009] argues

73For instance, Taylor [2007] argues that the recent financial crisis can be traced back, at least partly, to the overly easy monetary policy during the period 2002-04. By means of a counterfactual exercise, the author demonstrates that the real estate boom in the U.S. would have been smaller than it actually was if the Federal Reserve had followed a path for interest rates more in line with the prescriptions of the original Taylor rule. However, on the basis of a dynamic factor model and a VAR analysis, Del Negro & Otrok [2007] argue that there is no clear-cut association between the expansionary monetary policy episode after 2001 and the swelling of the house price bubble: the


that the "pre-crisis consensus view" on the nexus between monetary policy and asset prices may hold serious intrinsic risks, namely "excessive accommodation" and "the risks of triggering new imbalances"74. The implementation of such a policy in practice is likely to encourage "a policy of tolerance" during the boom phase and "a policy of excessive accommodation" during the bust phase, as the central bank is legitimately afraid that the removal of loose policy might impair an initially still fragile economic recovery. Hence, after the bursting of an asset price bubble, policy rates will often be maintained too low for too long. In addition, this exceptional accommodation might trigger an endogenous cycle in the "balance-sheet channel": the low interest rate environment initially pushes asset prices back up and encourages the demand for credit; as the value of collateral increases with higher asset valuations, lenders are more willing to lend and charge lower risk premiums; as a result, credit expands and more assets are bought, pushing their price out of line with their fundamental value. Therefore, while asymmetric monetary policy aims to support the economy in the short term, there is a risk of triggering the build-up of new financial imbalances and ultimately planting the seeds of the next asset price bubble. As a result, Bini Smaghi [2009] stresses the importance of evaluating the inter-temporal dimension of this kind of policy. In this perspective, Jalilvand & Malliaris [2010] underline the need to reassess the risk management approach to monetary policy, as the optimality of the asymmetric approach to asset price bubbles may be questioned in a dynamic context of a "sequence of bubbles".

In conclusion, we were unable to validate empirically the presumed asymmetricbehaviour that we visually perceived in Section 3.3.4. Our econometric frameworkmight not be appropriate for this kind of exercise. Nevertheless, we showed thatexisting empirical literature, albeit limited, supports the notion that the U.S. CentralBank has indeed pursued during previous decades an asymmetric approach to stockprices over the cycle. Undoubtedly, this kind of monetary policy has reinforced theperception among financial market actors that the Federal Reserve will engage inextremely accommodating policy at times of acute financial distress. While in periodsof financial turmoil this “mop-up” seems appropriate, its asymmetric character surelycreates a moral hazard problem, encouraging excessive risk-taking behaviours, andpotentially paving the way to new financial imbalances. Hence, economists need touse the lessons learned from the recent financial crisis to investigate further about theconditions that give rise to such bubbles, in particular the responsibility of monetarypolicy, and the links that might exist between bubbles. If the asymmetric approach hasencountered its limits, it is not obvious however that the Federal Reserve should haveconducted its activist policy towards the stock market in a symmetric way. Much needsto be investigated before central banks can implement a leaning-against-the-wind-typepolicy. The next challenge would be to extend theoretical models of asset price bubblesto better assess the risks and benefits, in terms of welfare, of both the symmetric andasymmetric approaches, while taking into account both the short-term and long-termdimensions of such policies and the distortion they can have on investment behaviours.

impact of policy shocks on house prices is very small. The issue is still pending.

74The author uses the qualifier "pre-crisis" since the consensus view has been shaken so profoundly by recent events that it is unlikely to remain unaffected. The "post-crisis" view is not known yet.


Conclusion

To what extent should central banks take account of asset price developments? Over the past two decades, this crucial question has received increasing attention, and all the more so with the unusually large financial turmoil of recent years. Despite a heated debate between supporters and opponents of possible financial activism on the part of the Central Bank, there is no clear consensus in the literature. In parallel with, and as a complement to, this on-going debate, a more empirical line of research has emerged: de facto, do central banks take asset prices into account? We have focused on this second debate. The main goal of this master's thesis was to explore whether and how the U.S. Federal Reserve has reacted to asset price movements over and above their predictive power for future inflation and the output gap.

To investigate this question, we have explored the relationship between short-term nominal interest rates, macroeconomic fundamentals and asset prices, estimating a forward-looking and inertial monetary policy rule using monthly data for the United States over the period 1979:10 to 2011:11. Drawing on the theoretical framework introduced by Clarida et al. [1999] and on the GMM estimation method, the thesis has not limited its study to the estimation of the effects of inflation and output deviations on the setting of the Federal funds rate, but has explicitly considered the effects of stock price and house price fluctuations, following Bernanke & Gertler [1999], Chadha et al. [2004], Levieuge [2002], and references therein.

Our empirical results suggest that the Federal Reserve took stock price developments into account in setting interest rates over the last decades. The yearly growth rate of stock prices enters significantly into the monetary policy reaction function. Unfortunately, we were not able to properly capture the weight assigned to boom/bust cycles in house prices. The significance and magnitude of the estimated coefficient on the stock market variable remain reasonably unaffected by the choice of the horizon for the output gap, by changes in the measures of stock price disequilibria, and by different proxies of inflation and the output gap. Generally, a 10 percent increase in the S&P 500 index over the preceding twelve months would lead – in the next month – to a rise in the Federal funds rate ranging from 34 to 97 basis points across the different specifications. Hence, in contrast with Bernanke & Gertler [1999]'s results, this exercise, repeated several years later, suggests that the Federal Reserve has reacted systematically to stock price developments independently of their influence on future inflation and output; stock prices would therefore constitute a supplementary objective pursued by the Central Bank.

However, it must be emphasized that the estimations made as part of this analysis are limited to studying the average or general behaviour of the Central Bank towards stock price movements. The estimation of a reaction function does not distinguish clearly between what comes under a discretionary practice and what is part of a systematic approach. Further refinements seem to suggest that policymakers may not smooth the evolution of stock prices systematically, as is the case for inflation or output, which would coincide with the more nuanced interpretation provided by


Chadha et al. [2004]. Indeed, from our graphical decomposition of the target interest rates and our recursive estimation, it can be seen that targeting stock prices has not been a fully-fledged policy objective pursued systematically by the Federal Reserve; rather, the Central Bank has reacted to these variables on only a few occasions during the estimated sample period, that is, during episodes of acute financial turbulence when stock prices appeared to deviate substantially from their fundamental equilibrium value. At crucial times like the stock market crash of 1987, the crises of 1997-98, the burst of the dot-com bubble at the beginning of the 2000s and, more recently, during the sharp decline following the Lehman Brothers shock in 2008, the monetary authority appears to respond more forcefully to the stock market in an effort to restrain the destabilising effect that an abrupt correction in asset markets could have on economic activity. Such an asymmetric behaviour vis-a-vis the stock market – though it was not possible to provide systematic evidence for it – is consistent with the perception among financial market actors of the existence of the so-called Greenspan Put. We also provide narrative evidence on the moral hazard problems this asymmetric approach could generate.

In parallel to this investigation, the empirical evidence confirms that the ultimate goal of the Federal Reserve has been the pursuit of price level stability and the promotion of the real economy. The estimation results indicate that – in addition to stock prices – inflation and the output gap enter significantly into the monetary policy reaction function. The sizes of the coefficients on inflation and output are close to estimates in other studies and theoretically plausible. Over the full sample period, it appears that the Federal Reserve has striven to satisfy the Taylor Principle, was concerned to bring output in line with potential, and put significant effort into smoothing interest rates.

No less important than drawing sensitive interpretations, account must be taken of the background issues inherent in estimating Taylor rules. In order to avoid drawing “too big” conclusions from “too little” evidence, it would be worthwhile to pursue further extensions that address the limitations of the approach chosen in this study.

First, this study relies exclusively on monthly data, which is the natural frequency to choose since the Federal funds rate target is established by the FOMC at meetings held approximately every six weeks. However, a monthly frequency yields considerable inertia in the interest rate series and poses significant problems when studying the stability of the estimated parameters. In addition, it is unfortunate – owing to the extremely high value of ρ and the difficulty it poses for obtaining a satisfying identification of the policy rule augmented with house prices – that the role of yearly house price growth has not been thoroughly tested. Yet real estate assets represent a major share of households’ net worth, and the recent experience has taught policymakers the devastating consequences a housing market bubble can have on financial and macroeconomic stability. In this regard, moving to a quarterly frequency might provide greater opportunities for studying the stability of the parameters of interest as well as for establishing the weight house prices carried in the Federal Reserve’s reaction function.


Second, on a quarterly frequency basis, one would have the possibility to rely on real-time data and particularly on the Greenbook forecasts. Indeed, one potential drawback of our approach is the use of ex-post revised data, which were not available at the time of policy making and could consequently distort the estimation results on the behaviour of the Federal Reserve’s policy 75. To overcome this drawback, it would be worthwhile to inspect how our results would change if the approach in this study were modified to handle real-time data, and specifically to make use of the Greenbook forecasts dataset following Orphanides [2001] 76. Regarding the role of asset prices in particular, Fuhrer & Tootell [2008] argue that the inclusion of these forecasts in estimated Taylor rules clarifies the interpretation of the coefficient on equity prices, as it allows us to better disentangle the Central Bank’s reaction to macroeconomic variables from its independent reaction to movements in stock prices. The Greenbook dataset would also present the advantage of compiling the FOMC’s real-time expectations of housing prices, which can improve the interpretation of the estimated coefficient on house prices.

Third, apart from the key benefit of using Greenbook forecasts to replicate the Federal Reserve’s real-time information set, an additional appealing feature is that one could estimate a Taylor rule such as Eq. (7), or its augmented counterpart, by least squares. Coibion & Gorodnichenko [2011] provide evidence that the OLS procedure is appropriate when estimating Taylor rules using Greenbook forecasts. If the orthogonality condition is indeed satisfied, i.e. current forecasts are uncorrelated with the current monetary policy innovation, we believe that the use of OLS could greatly ease the estimation procedure and offer much more flexibility to handle this issue. It has been particularly difficult to apply common statistical hypothesis tests and to impose restrictions and dummies in a GMM estimation framework. Moreover, even though the null hypothesis of a valid instrument set was clearly not rejected in all our estimations, we have raised concerns about the potential problem of weak identification when considering instruments for the inflation variable at long forecasting horizons. These concerns would naturally disappear if the IV procedure were not necessary.
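To make the contrast with the GMM machinery concrete, a least-squares estimation of such a forecast-based rule can be sketched as follows. The forecast series and the coefficient values (1.0, 1.5, 0.5) are simulated placeholders, not data or estimates from this thesis:

```python
import numpy as np

# Illustrative only: simulated "Greenbook-style" forecasts and a policy rate
# generated from a known rule, then recovered by plain OLS. Under the
# orthogonality condition, no instruments are needed.
rng = np.random.default_rng(1)
n = 300
pi_fc = 2.0 + rng.standard_normal(n)   # inflation forecasts (placeholder)
gap_fc = rng.standard_normal(n)        # output-gap forecasts (placeholder)
i = 1.0 + 1.5 * pi_fc + 0.5 * gap_fc + 0.1 * rng.standard_normal(n)

# OLS: regress the policy rate on a constant and the two forecasts
X = np.column_stack([np.ones(n), pi_fc, gap_fc])
b, *_ = np.linalg.lstsq(X, i, rcond=None)  # b approximates (1.0, 1.5, 0.5)
```

Because the forecasts are uncorrelated with the policy innovation by construction, the OLS estimates are consistent here, which is exactly the property the orthogonality condition would deliver with real Greenbook data.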

Last but not least, the opportunity to estimate by least squares might help to take proper account of the non-linearities in the reaction to asset prices witnessed in this study. If we believe that monetary policymakers respond more vigorously to asset prices when misalignments are large and/or when they are more concerned with avoiding

75 In our estimations, the rational expectations assumption implies that expected inflation one year ahead is used as a proxy for realized inflation a year later. Yet realized inflation may differ significantly from inflation forecast one year earlier (see for instance Bernanke [2010]), and this may lead to erroneous interpretations. Similarly, the difference between revised and real-time data on the output gap series could be substantial. Bernanke [2010] insists that measuring the output gap in real time is particularly troublesome, and the revisions that are likely to happen are, of course, not known in advance.

76 These forecasts of current and future macroeconomic variables are prepared by staff members of the Federal Reserve a few days before each official FOMC meeting and become publicly available with a five-year lag. Unfortunately, the Greenbook dataset does not provide a real-time measure of the output gap. One possibility would be to construct it as the percentage deviation of GDP from its trend, the latter computed with a rolling/recursive HP filter [Cecchetti et al., 2007].


stock price drops than stock price hikes, one would need to relax the assumption of linearity we maintain throughout this study. To take explicit account of the possibility that monetary policy might not always be perfectly linear or symmetric, one would need to rely on a non-linear time series model. Assuming there are economic intuitions behind these potential asymmetries in policy behaviour towards asset prices, one may estimate the monetary policy reaction function using a non-linear specification that relies on the class of Smooth Transition Regression (STR) models 77 [Petersen, 2007]. Particularly useful for representing and capturing asymmetric behaviour, an STR model is a non-linear regression model that allows the regressors’ coefficients to change smoothly from one regime to another; once the yearly change in stock prices or a stock price gap approaches a certain threshold or range, the Federal Reserve might adjust its policy rule and begin to respond more forcefully to stock price developments.
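The regime-switching mechanism just described can be illustrated with a minimal sketch of a logistic STR policy rule. All functional forms and parameter values below (threshold, smoothness, regime weights) are illustrative assumptions, not estimates from this study:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition function G(s; gamma, c) in [0, 1]:
    gamma controls the smoothness of the switch, c is the threshold."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def str_taylor_rate(pi, gap, s, rho, i_lag,
                    alpha=1.0, beta=1.5, kappa=0.5,
                    delta_low=0.0, delta_high=0.2,
                    gamma=2.0, c=-10.0):
    """Partial-adjustment Taylor rule with a regime-dependent stock-price term.

    The weight on the yearly stock-price change s switches smoothly from
    delta_low to delta_high as s falls below the threshold c (a sharp
    stock-market decline). All parameter values are illustrative.
    """
    G = 1.0 - logistic_transition(s, gamma, c)  # -> 1 as s drops below c
    delta = delta_low + (delta_high - delta_low) * G
    target = alpha + beta * pi + kappa * gap + delta * s
    return rho * i_lag + (1.0 - rho) * target

# In a calm market (s = +5%) the stock-price term is essentially inactive;
# after a 25% crash the rule responds to it with weight near delta_high.
calm = str_taylor_rate(pi=2.0, gap=0.0, s=5.0, rho=0.8, i_lag=4.0)
crash = str_taylor_rate(pi=2.0, gap=0.0, s=-25.0, rho=0.8, i_lag=4.0)
```

In this sketch the prescribed rate is lower after the crash than in the calm regime, reproducing the asymmetric easing behaviour discussed in the text.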

Therefore, considering the strong assumptions and obstacles in the approach pursued in this study, the aforementioned lines of research might provide additional insights into the Federal Reserve’s behaviour towards asset prices. The empirical results of this investigation are thus to be regarded as an initial attempt to capture the uncertain nexus between monetary policy and asset prices.

77 Monetary policy itself might be the source of the asymmetry. The Central Bank might have asymmetric preferences in the sense that it assigns different weights to negative and positive gaps in the asset price variables included in its quadratic loss function. This can be rationalized by the asymmetric effects of asset prices: sharp asset price corrections pose a larger threat to financial stability than do similar-sized asset price booms, due in particular to the asymmetric properties of the financial accelerator mechanism. On the other hand, the non-linearities could arise from inherent asymmetries in the stock market [Ravn, 2012]. It can be argued that asset price series are inherently non-linear, with an asymmetric adjustment mechanism; for instance, over a boom/bust cycle, the stock market or the real estate market sometimes exhibits sharp and very sudden corrections but long and smooth upward trends.


References

Batini, N. & Haldane, A. (1999). “Forward-Looking Rules for Monetary Policy”. In Monetary Policy Rules, NBER Chapters (pp. 157–202). National Bureau of Economic Research, Inc.

Batini, N. & Haldane, A. (2001). “The Lag from Monetary Policy Actions to Inflation: Friedman Revisited”. International Finance, 4(3), 381–400.

Baum, C. F., Schaffer, M. E., & Stillman, S. (2003). “Instrumental variables and GMM: Estimation and testing”. Stata Journal, 3(1), 1–31.

Bernanke, B. (2010). “Monetary policy and the housing bubble”. Speech at the Annual Meeting of the American Economic Association, Atlanta, Georgia, January 3, 2010.

Bernanke, B. & Gertler, M. (1999). “Monetary policy and asset price volatility”. Economic Review, (Q IV), 17–51.

Bernanke, B. S. & Blinder, A. S. (1992). “The Federal Funds Rate and the Channels of Monetary Transmission”. American Economic Review, 82(4), 901–21.

Bernanke, B. S. & Gertler, M. (2001). “Should central banks respond to movements in asset prices?” The American Economic Review, 91(2), 253–257.

Bernanke, B. S., Gertler, M., & Gilchrist, S. (1999). “The financial accelerator in a quantitative business cycle framework”. Handbook of Macroeconomics, 1, 1341–1393.

Bini Smaghi, L. (2009). “Monetary Policy and Asset Prices”. Speech held at the University of Freiburg, Germany.

Bjørnland, H. C. & Jacobsen, D. H. (2012). “House prices and stock prices: Different roles in the U.S. monetary transmission mechanism”. (0006).

Botzen, W. W. & Marey, P. S. (2006). “Does the ECB respond to the stock market?” (0017).

Botzen, W. W. & Marey, P. S. (2010). “Did the ECB respond to the stock market before the crisis?” Journal of Policy Modeling, 32(3), 303–322.

Campbell, J. Y. & Shiller, R. J. (2001). “Valuation Ratios and the Long-run Stock Market Outlook: An Update”. Cowles Foundation Discussion Papers, (1295).

Cavaliere, G. & Xu, F. (2011). “Testing for unit roots in bounded time series”. Journal of Econometrics.

Cecchetti, S. G. (2003). “What the FOMC says and does when the stock market booms”. Asset Prices and Monetary Policy, Sydney: Reserve Bank of Australia.

Cecchetti, S. G., Genberg, H., Lipsky, J., & Wadhwani, S. (2000). “Asset prices and central bank policy”. International Center for Monetary and Banking Studies.

Cecchetti, S. G., Hooper, P., Kasman, B. C., Schoenholtz, K. L., & Watson, M. W. (2007). “Understanding the evolving inflation process”. In US Monetary Policy Forum, volume 8.

Chadha, J. S., Sarno, L., & Valente, G. (2004). “Monetary Policy Rules, Asset Prices, and Exchange Rates”. IMF Staff Papers, 51(3), 529–552.

Clarida, R., Gali, J., & Gertler, M. (1999). “The Science of Monetary Policy: A New Keynesian Perspective”. Journal of Economic Literature, 37(4), 1661–1707.

Clarida, R., Gali, J., & Gertler, M. (2000). “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory”. The Quarterly Journal of Economics, 115(1), 147–180.


Cochrane, J. H. (2011). “Determinacy and Identification with Taylor Rules”. Journal of Political Economy, 119(3), 565–615.

Coibion, O. & Gorodnichenko, Y. (2011). “Monetary Policy, Trend Inflation, and the Great Moderation: An Alternative Interpretation”. American Economic Review, 101(1), 341–70.

Consolo, A. & Favero, C. A. (2009). “Monetary policy inertia: More a fiction than a fact?” Journal of Monetary Economics, 56(6), 900–906.

Del Negro, M. & Otrok, C. (2007). “99 Luftballons: Monetary policy and the house price boom across U.S. states”. Journal of Monetary Economics, 54(7), 1962–1985.

Detken, C. & Smets, F. (2004). “Asset price booms and monetary policy”. Working Paper Series 364, European Central Bank.

Drescher, C., Erler, A., & Krizanac, D. (2010). “The Fed’s TRAP: A Taylor-type Rule with Asset Prices”. MPRA Paper 23293, University Library of Munich, Germany.

Driffill, J., Rotondi, Z., Savona, P., & Zazzara, C. (2006). “Monetary policy and financial stability: What role for the futures market?” Journal of Financial Stability, 2(1), 95–112.

Dupor, B. & Conley, T. (2004). “The Fed Response to Equity Prices and Inflation”. American Economic Review, 94(2), 24–28.

Evanoff, D. D., Kaufman, G. G., & Malliaris, A. G. (2012). “New Perspectives on Asset Price Bubbles”. Oxford University Press.

Eyssartier, D. & Aubert, L. (2002). “Commentaire de l’article de G. Levieuge”. Revue Française d’Économie, 16(4), 61–79.

Fama, E. F. (1965). “The behavior of stock-market prices”. The Journal of Business, 38(1), 34–105.

Friedman, M. (1961). “The Lag in Effect of Monetary Policy”. Journal of Political Economy, 69, 447.

Fuhrer, J. & Tootell, G. (2008). “Eyes on the prize: How did the Fed respond to the stock market?” Journal of Monetary Economics, 55(4), 796–805.

Goodfriend, M. (1991). “Interest rates and the conduct of monetary policy”. Carnegie-Rochester Conference Series on Public Policy, 34(1), 7–30.

Granger, C. W. (2010). “Some thoughts on the development of cointegration”. Journal of Econometrics, 158(1), 3–6.

Greenspan, A. (2002). “Opening Remarks”. In Rethinking Stabilization Policy (Federal Reserve Bank of Kansas City symposium, Jackson Hole, Wyo., Aug. 29–31), pp. 110.

Hahn, J. & Hausman, J. (2005). “Estimation with valid and invalid instruments”. Annales d’Économie et de Statistique, 25–57.

Hakkio, C. S. (2008). “PCE and CPI inflation differentials: converting inflation forecasts”. Federal Reserve Bank of Kansas City Economic Review, Q1, 51–68.

Hansen, L. P. (1982). “Large sample properties of generalized method of moments estimators”. Econometrica: Journal of the Econometric Society, 1029–1054.

Harney, M. & Tower, E. (2002). “Rational Pessimism: Predicting Equity Returns using Tobin’s q and Price/Earnings Ratios”. (02-29).

Hausman, J. A. (1978). “Specification tests in econometrics”. Econometrica: Journal of the Econometric Society, 1251–1271.

Hayford, M. D. & Malliaris, A. G. (2004). “Monetary Policy and the U.S. Stock Market”. Economic Inquiry, 42(3), 387–401.

Hayford, M. D. & Malliaris, A. G. (2005). “How did the Fed react to the 1990s stock market bubble? Evidence from an extended Taylor rule”. European Journal of Operational Research, 163(1), 20–29.

Hodrick, R. J. & Prescott, E. C. (1997). “Postwar U.S. Business Cycles: An Empirical Investigation”. Journal of Money, Credit and Banking, 29(1), 1–16.

Hoffmann, A. (2012). “Did the Fed and ECB react asymmetrically with respect to asset market developments?” (103).

Issing, O. (2009). “In search of monetary stability: the evolution of monetary policy”. BIS Working Papers 273, Bank for International Settlements.

Jalilvand, A. & Malliaris, A. G. (2010). “Sequence of Asset Bubbles and the Global Financial Crisis”.

Judd, J. P. & Rudebusch, G. D. (1998). “Taylor’s rule and the Fed, 1970–1997”. Economic Review, 3–16.

Kahn, G. A. (2012). “The Taylor Rule and the Practice of Central Banking”. In E. F. Koenig, R. Leeson, & G. A. Kahn (Eds.), The Taylor Rule and the Transformation of Monetary Policy, chapter 3. Hoover Institution, Stanford University.

Koenig, E. F., Leeson, R., & Kahn, G. A. (2012). “The Taylor Rule and the Transformation of Monetary Policy”. Hoover Press.

Kontonikas, A. & Montagnoli, A. (2004). “Has Monetary Policy Reacted to Asset Price Movements? Evidence from the UK”. Ekonomia, 7(1), 18–33.

Kozicki, S. (1999). “How useful are Taylor rules for monetary policy?” Economic Review, (Q II), 5–33.

Kuttner, K. N. (1992). “Monetary policy with uncertain estimates of potential output”. Federal Reserve Bank of Chicago Economic Perspectives, 16(1), 2–15.

Lee, D. J. & Son, J. C. (2011). “Nonlinearity and Structural Breaks in Monetary Policy Rules with Stock Prices”. (2011-19).

Levieuge, G. (2002). “Banques centrales et prix d’actifs : une étude empirique”. Revue Française d’Économie, 16(4), 25–59.

Lowe, P. & Borio, C. (2002). “Asset prices, financial and monetary stability: exploring the nexus”. (114).

Lowe, P. & Ellis, L. (1997). “The Smoothing of Official Interest Rates”. In Monetary Policy and Inflation Targeting, RBA Annual Conference Volume. Reserve Bank of Australia.

Mattesini, F. & Becchetti, L. (2008). “The stock market and the Fed”. CEIS Research Paper 113, Tor Vergata University, CEIS.

Mehra, Y. P. (1999). “A forward-looking monetary policy reaction function”. Economic Quarterly, (Spr), 33–54.

Mehra, Y. P. & Sawhney, B. (2010). “Inflation measure, Taylor rules, and the Greenspan-Bernanke years”. Economic Quarterly, (2Q), 123–151.

Miller, M. H., Weller, P., & Zhang, L. (2002). “Moral Hazard and the US Stockmarket: Analyzing the Greenspan Put”. Working Paper Series WP02-1, Peterson Institute for International Economics.

Mishkin, F. S. (2000). “What should central banks do?” Review, (Nov), 1–14.

Mishkin, F. S. (2001). “The Transmission Mechanism and the Role of Asset Prices in Monetary Policy”. NBER Working Papers 8617, National Bureau of Economic Research, Inc.

Mishkin, F. S. (2011). “Monetary policy strategy: lessons from the crisis”. Technical report, National Bureau of Economic Research.

Murray, M. P. (2006). “Avoiding invalid instruments and coping with weak instruments”. The Journal of Economic Perspectives, 20(4), 111–132.


Orphanides, A. (2001). “Monetary policy rules based on real-time data”. American Economic Review, 964–985.

Pamela, H. (2011). “Is there any evidence of a Greenspan put?” Working Papers 2011-06, Swiss National Bank.

Pepin, D. (2010). “La BCE réagit-elle au prix des actifs financiers ?” Économies et Sociétés, 763–794.

Petersen, K. (2007). “Does the Federal Reserve Follow a Non-Linear Taylor Rule?” Working Papers 2007-37, University of Connecticut, Department of Economics.

Posen, A. S. (2006). “Why Central Banks Should Not Burst Bubbles”. International Finance, 9(1), 109–124.

Ravn, M. O. & Uhlig, H. (2002). “On adjusting the Hodrick-Prescott filter for the frequency of observations”. The Review of Economics and Statistics, 84(2), 371–375.

Ravn, S. H. (2012). “Has the Fed Reacted Asymmetrically to Stock Prices?” The B.E. Journal of Macroeconomics, 12(1), 1–36.

Rigobon, R. & Sack, B. (2003). “Measuring the Reaction of Monetary Policy to the Stock Market”. The Quarterly Journal of Economics, 118(2), 639–669.

Rudebusch, G. D. (2002). “Term structure evidence on interest rate smoothing and monetary policy inertia”. Journal of Monetary Economics, 49(6), 1161–1187.

Rudebusch, G. D. (2005). “Monetary policy and asset price bubbles”. FRBSF Economic Letter, 18.

Rudebusch, G. D. (2006). “Monetary Policy Inertia: Fact or Fiction?” International Journal of Central Banking, 2(4).

Rudebusch, G. D. (2009). “The Fed’s monetary policy response to the current crisis”. FRBSF Economic Letter, (May 22).

Sack, B. & Wieland, V. (2000). “Interest-rate smoothing and optimal monetary policy: a review of recent empirical evidence”. Journal of Economics and Business, 52(1-2), 205–228.

Shiller, R. J. (2001). “Irrational Exuberance”.

Smith, R. T. & Van Egteren, H. (2005). “Interest rate smoothing and financial stability”. Review of Financial Economics, 14(2), 147–171.

Stock, J. H. & Watson, M. W. (2003). “Forecasting Output and Inflation: The Role of Asset Prices”. Journal of Economic Literature, 41(3), 788–829.

Stock, J. H., Wright, J. H., & Yogo, M. (2002). “A Survey of Weak Instruments and Weak Identification in Generalized Method of Moments”. Journal of Business & Economic Statistics, 20(4), 518–29.

Stock, J. H. & Yogo, M. (2002). “Testing for Weak Instruments in Linear IV Regression”. NBER Technical Working Papers 0284, National Bureau of Economic Research, Inc.

Svensson, L. E. (1997). “Inflation forecast targeting: Implementing and monitoring inflation targets”. European Economic Review, 41(6), 1111–1146.

Tauchen, G. (1986). “Statistical Properties of Generalized Method-of-Moments Estimators of Structural Parameters Obtained from Financial Market Data”. Journal of Business & Economic Statistics, 4(4), 397–416.

Taylor, J. B. (1993). “Discretion versus policy rules in practice”. Carnegie-Rochester Conference Series on Public Policy, 39(1), 195–214.

Taylor, J. B. (1999). “A historical analysis of monetary policy rules”. In Monetary Policy Rules (pp. 319–348). University of Chicago Press.


Taylor, J. B. (2007). “Housing and monetary policy”. Federal Reserve Bank of Kansas City Proceedings, 463–476.

Tobin, J. (1969). “A general equilibrium approach to monetary theory”. Journal of Money, Credit and Banking, 1(1), 15–29.

Weigand, R., Irons, R., & Washburn (2006). “Will the Market P/E Ratio Revert Back to ‘average’?”

Woodford, M. (2001). “The Taylor rule and optimal monetary policy”. The American Economic Review, 91(2), 232–237.

Woodford, M. (2003). “Optimal Interest-Rate Smoothing”. Review of Economic Studies, 70(4), 861–886.


Appendices

A Data Selection

Our dataset comprises monthly time series spanning from October 1979 to November 2011 for the United States. Following Clarida et al. [1999], we consider the year 1979 to be the starting date of our sample. They judiciously note that October 1979 marked a turning point in U.S. monetary policy, since monetary policy was out of control in the pre-1979 period and estimations over it were therefore unstable. Still, the data from 1974 to 1979 are used to produce the trends required for the estimations. The ending point of our estimation sample is fixed twelve months prior to the latest available data, thus at 2011:11. Indeed, the inflation variable enters the Taylor rules one year ahead, and it is advisable to ignore the last data points when applying an HP filter for the output gap.

In line with the choice made by Clarida et al. [1999], we base our estimations solely on monthly data. In comparison with a quarterly frequency, monthly data carry more extensive information, since it is not unusual to observe several policy rate adjustments within a quarter. Nevertheless, this study highlights that this gain in information is accompanied by several trade-offs. An overview of the data selected to estimate the Federal Reserve’s monetary policy rules is presented below.

Inflation measures

For the U.S. inflation rate, the harmonised Consumer Price Index for all urban consumers and all items (CPI) is selected as the baseline. Because the Federal Reserve has not explicitly announced a specific index by which to achieve price stability, the choice is left to the discretion of the researcher 78. Nevertheless, an extensive literature points out the high sensitivity of the estimated target interest rate to different measures of inflation (see for example Kozicki [1999] or Mehra & Sawhney [2010]). Although these measures are closely related in the long run, they can differ significantly over shorter time periods – due to factors such as energy prices – inevitably affecting the desired policy setting. To explore the robustness of the baseline estimated rule, we also consider three alternative measures of inflation: the core CPI that excludes food and energy prices (CPIcore), the personal consumption expenditure price index (PCE) and the core PCE price index that excludes food and energy prices (PCEcore).

As the baseline measure, the CPI index is used to compute the year-over-year inflation rate in percentage points, evaluated as the natural log-difference of the price index over twelve months:

π_t = 100 [ log(CPI_t) − log(CPI_{t−12}) ]

Note that the alternative measures of inflation are computed similarly as yearly percent changes.
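For concreteness, the twelve-month log-difference can be computed as in the following sketch; `cpi` is a hypothetical stand-in index growing at a constant 0.2% per month, not actual CPI data:

```python
import numpy as np

# Year-over-year inflation from a monthly price index:
#   pi_t = 100 * (log(CPI_t) - log(CPI_{t-12}))
cpi = np.array([100.0 * 1.002**i for i in range(24)])  # placeholder index
log_cpi = np.log(cpi)
pi = 100.0 * (log_cpi[12:] - log_cpi[:-12])  # 12-month log difference, in %
```

With constant monthly growth of 0.2%, every element of `pi` equals 1200·log(1.002), roughly 2.4% annual inflation, which is a quick sanity check on the formula.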

78Actually, in January 2012, the FOMC announced that its long-run objective for inflation was2 percent as measured by the Personal Consumption Expenditures price index (PCE).


Figure A.1: Inflation rate measures (12 month percent change)

[Two panels of year-over-year inflation rates (in %), 1985–2010: left panel, CPI vs. CPIcore; right panel, PCE vs. PCEcore.]

Note: Data points from 1979:10–1982:12 are voluntarily removed from this chart.

Briefly speaking, core inflation measures that exclude raw food and energy components are constructed to identify and eliminate temporary and volatile fluctuations from the overall measure of the price level. This can help monetary policymakers determine whether observed price level changes are long-lasting or transitory. Figure A.1 charts the yearly percent change of ex-post realized headline and core CPI and PCE inflation rates from 1983 to 2011. As can be seen, the headline and core CPI inflation series are closely related for most of the period before 2000. However, from 2002 to mid-2008, headline inflation roughly remained above core inflation, reflecting in part the effect of large fluctuations in oil and food prices on headline inflation. This would imply, for instance, that a policy rule relating the policy rate to headline CPI inflation is likely to prescribe a higher federal funds rate target than a policy rule relating it to core CPI inflation, ceteris paribus – similarly for PCE vs. PCEcore. Thus, as highlighted in Section 3.2.3, the measure of inflation used in the estimated Taylor rules will matter for predicting the target policy interest rate. Note also that the CPI and the PCE give different estimates of inflation. Although both are designed to capture changes in consumer prices, three major factors could explain these differences: differences in their respective formulas, the weights attached to their price components, and the breadth or scope of coverage [Hakkio, 2008]. Inflation rates are generally higher for the CPI than for the PCE index, but the differentials vary over time.


Output gap measures

The output gap, as a measure of economic activity or excess demand, is defined as the difference between the actual output of the economy and its potential output. The latter characterizes the optimal level of goods and services an economy can attain with existing resources and without causing inflationary pressures; it can also be seen as the output capacity of an economy. As a proxy for economic activity, the standard measure relies on real Gross Domestic Product. However, for the sake of obtaining a larger sample and thereby additional information, one needs a measure of economic activity available monthly. We therefore follow Clarida et al. [1999] and use the overall industrial production index (IP) for the U.S. as a substitute for the output measure. Despite the increasing share of services in the overall economy, it is still commonly assumed that the industrial sector is the “cycle maker” of the economy, and the IP index usually displays a strong co-movement with GDP. Furthermore, constructing the output gap is difficult because potential output is an unobserved variable and must therefore be estimated. Among the many approaches to estimating potential output, we follow Clarida et al., who estimate the output gap as the residuals from a regression of the natural logarithm of industrial production on a constant, time and time squared for the period 1974:01–2012:11. This technique, which serves as our main measure of the output gap, allows for a slowly changing tendency by employing a linear trend as well as a quadratic trend – we occasionally refer to this trend as the quadratic trend:

y_t = a + b·t + c·t² + v_t

where y_t is the log of industrial production, t is a time trend, and v_t is an error term. The residuals from this regression are thus a quadratically detrended time series and are interpreted as the cyclical component of output, i.e. the output gap y_t^Q.
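A minimal sketch of this quadratic detrending, with a simulated stand-in for the log industrial production series (the actual data are not reproduced here), might look as follows:

```python
import numpy as np

# Placeholder log-IP series: smooth quadratic trend plus noise,
# 468 monthly observations as in 1974:01-2012:11.
rng = np.random.default_rng(0)
T = 468
t = np.arange(T, dtype=float)
log_ip = 4.0 + 0.002 * t - 1e-6 * t**2 + 0.01 * rng.standard_normal(T)

# OLS regression y_t = a + b*t + c*t^2 + v_t;
# the residuals are the fitted output gap y^Q.
X = np.column_stack([np.ones(T), t, t**2])
coef, *_ = np.linalg.lstsq(X, log_ip, rcond=None)
output_gap_q = log_ip - X @ coef  # cyclical component (residuals)
```

Since the regression includes a constant, the resulting gap series averages zero over the sample by construction, matching its interpretation as the cyclical component.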

As an alternative measure, we also rely on the HP output gap y_t^HP. Specifically, we apply to the industrial production index a standard Hodrick-Prescott filter with a smoothing parameter of λ = 129600 79. This alternative output gap measure is

79 The Hodrick-Prescott (HP) filter has become the most commonly used statistical method to track the fluctuations of potential output. This statistical method separates the cyclical component of a time series – here, the IP series – from its growth or trend component: Y_t = Y_t^g + Y_t^c, where Y_t is the natural logarithm of industrial production and Y_t^g and Y_t^c are the growth/trend and cyclical components respectively. The aim of the HP filter is to remove the smooth trend component – i.e. potential output – from the actual output series Y_t by minimizing the fluctuations of output around this trend. Formally, the problem is addressed by minimizing the variance of the cyclical term Y_t^c, that is, the sum of squared distances between actual and growth/trend output at each point in time, subject to a penalty for variations in the second difference of the growth/trend term:

min_{Y_t^g} Σ_{t=1}^{T} (Y_t − Y_t^g)² + λ Σ_{t=2}^{T−1} [ (Y_{t+1}^g − Y_t^g) − (Y_t^g − Y_{t−1}^g) ]²

where the restriction parameter λ is referred to as the smoothing parameter, since it captures the importance of cyclical shocks to output relative to trend output shocks. Thus, the smoothness of potential output Y_t^g depends on the choice of λ: a larger value of λ suggests a larger importance of cyclical shocks and yields a less volatile series of potential output. Following Hodrick & Prescott [1997], researchers typically use λ = 1600 with quarterly data, but there is less agreement in the literature when moving to other frequencies. Ravn & Uhlig [2002] argue that one should adjust the HP parameter λ by the fourth power of the frequency change – from quarterly to monthly, 3 times more data. Thus, λ = 1600 · 3⁴ = 129600 for monthly data.
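The penalized least-squares problem in footnote 79 has a closed-form solution, g = (I + λD′D)⁻¹ y, where D is the second-difference operator; a direct (dense) implementation makes this explicit. This is a sketch for exposition only; in practice one would use a sparse solver or an existing routine such as statsmodels’ `hpfilter`:

```python
import numpy as np

def hp_filter(y, lamb=129600.0):
    """Hodrick-Prescott filter: split a series into trend and cycle.

    Solves min_g sum (y_t - g_t)^2 + lamb * sum (second difference of g)^2,
    whose closed-form solution is g = (I + lamb * D'D)^{-1} y,
    with D the (T-2) x T second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lamb * D.T @ D, y)
    cycle = y - trend
    return trend, cycle

# Sanity check: an exactly linear series has a zero second difference,
# so the filter returns it unchanged and the cycle is (numerically) zero.
trend, cycle = hp_filter(2.0 + 0.01 * np.arange(120), lamb=129600.0)
```

A larger `lamb` stiffens the trend, which is exactly the monthly-data adjustment λ = 1600·3⁴ discussed above.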


defined as the percentage deviation of the logarithm of actual industrial production from its HP-filtered trend, as follows:

y_t^HP = y_t − ȳ_t ≈ 100 [ log(IP_t) − log(IP_t^trend) ]

It is important to stress that this technique, commonly used for its great flexibility, suffers from certain limitations. It is difficult to identify the appropriate value of the smoothing parameter λ, and this arbitrary choice might lead to underestimating the output gap series or to drawing erroneously long business cycles. Also, this approach is susceptible to what is usually referred to as “end-point bias”, since it produces volatile results at the extreme points of the time series. This end-sample bias stems from the symmetric property of the HP filter, which requires that output gaps sum to zero over the estimation period, even though the sample period rarely covers an exact number of business cycles. This problem is partially corrected by applying the HP filter to the IP series over 1974:01–2012:11. Finally, the HP filter, like the first approach used, does not take into account any information from other time series nor any economic theory, in contrast to another conventional method that relies on the evaluation of a production function, which models potential output as a function of potential labor and capital inputs as well as potential total factor productivity.

Figure A.2 plots the monthly measures of the output gap based on the quadratic trend and on the HP-filtered trend, along with the National Bureau of Economic Research (NBER) recession dates (shaded). As can be seen, there is a clear relationship between the output gap series and the business cycle. The largest drops in the output gap occur during these recession periods; both measures of the output gap begin to fall before and throughout every recession, and when a recession ends, the output gap rises again. Moreover, there is a substantial difference between the two measures: the HP output gap displays much less volatility than the fitted output gap, even though the chosen smoothing parameter is set to a high value, which tends to smooth the volatility in the potential output series and thereby to accentuate the volatility in the output gap series.

Finally, in the robustness analyses, we also introduce the unemployment gap as a further measure of economic slack. As with potential output, the natural unemployment rate is not observable. We applied the two aforementioned statistical methods to estimate the natural unemployment rate: the “Fitted unemployment gap” series is defined as the residuals from a regression of the natural logarithm of the monthly unemployment rate on a constant, time and time squared; the “HP unemployment gap” is defined as the percentage deviation of the logarithm of the unemployment rate from its HP-filter trend, with a smoothing parameter of λ = 129600.



Figure A.2: Output gap measures

[Line chart: Output gap measures based on the Industrial production series (in %), 1980–2011, showing the "Fitted output gap y^Q" and the "HP output gap y^HP".]

Notes: Gray shaded areas denote NBER recession dates. The "Fitted output gap" series represents the residuals from a regression of the natural logarithm of industrial production on a constant, time and time squared. The "HP output gap" is defined as the percentage deviation of the logarithm of actual industrial production from its HP-filter trend, with a smoothing parameter of λ = 129600.

Policy instrument rate

The appropriate short-term nominal interest rate instrument of the Federal Reserve, which enters the left-hand side of the monetary policy rule, is proxied by the effective Federal funds rate. Figure A.3 shows the Federal funds rate series over 1979:10–2011:11, and the box below reports a basic timeline of the main historical events in the policy instrument rate since 1987.

Figure A.3: The Federal funds rate at a monthly frequency

[Line chart: The actual Federal funds rate (in %), 1980–2011, with markers 1–9 referring to the episodes in the timeline below.]


1. 1987 to 1989: Sharp increase in the Federal funds rate (FF) from 6 to 9.75%.

• To stop further depreciation of the dollar (the Louvre Accord).

• Increase of the output gap; the unemployment rate reached a minimum of 5% in 1989:03.

• Risk of stronger inflation after a long decline over the period 1980–1987.

• Exception: a short decrease in the FF rate after the stock market crash of 1987:10.

2. Mid-1990 to 1992: Severe decrease in the FF rate from 8 to 3%.

• Decrease of the output gap. Recession from 1990:07 to 1991:03, with a maximum decline of GDP of −1.4%; the unemployment rate peaked at 7.8% in 1992:06.

• After the invasion of Kuwait and the subsequent oil price shock, CPI inflation declined from a peak of 6.3% in 1990:09 to 3% in 1992:12.

3. 1993: Stable FF rate at a low level of 3%.

• Interest rates were kept low to reinforce an uncertain economic expansion.

• Low and stable output gap.

• CPI inflation was low and still decreasing.

4. 1994: Strong increase of the FF rate from 3 to 6%.

• Low but rising output gap. The economic expansion was clearly under way and solidly based; still, unemployment stood at a high 6.6% in 1994:02.

• CPI inflation was stable at a low level, but policymakers were worried about potential inflationary pressures.

5. 1995 to 2001:03: Stable FF rate between 5 and 6.5%.

• Period of financial turbulence: the Asian financial crisis of 1997–1998, the Russian debt crisis of 1998 and finally the dot-com bubble and its burst in 2000–01.

• The output gap increased and became positive. Years of economic growth, with a decline of the unemployment rate to its lowest level in 30 years, 3.8% in 2000:04.

6. 2001:03 to mid-2004: Sharp decline in the FF rate from 6.5 to 1%.

• Fall of the output gap. Recession from 2001:03 to 2001:11. The easing also aimed to depreciate the currency in order to support exports and the economy. Although GDP increased from 2002 onwards, unemployment was still high at 6.3% in 2003:06.

• Fall of the inflation rate, although CPI inflation rose during the oil crisis due to the Iraq war.

7. Mid-2004 to mid-2006: Strong but gradual rise of the FF rate from 1 to 5.25%.

• The output gap increased and became positive.

• CPI inflation worsened as the pending Iraq war put increased pressure on oil prices. Policymakers worried about a resurgence of inflation, as in 1994.

8. Mid-2006 to mid-2007: Stable FF rate at 5.25%.

• The output gap sharply increased.

• CPI inflation was moderate, but there were worries about upward movements in the core rate.

9. Mid-2007 to 2011:11: Abrupt decline of the FF rate from 5.25% to a level approaching 0%.

• The fall in house prices and the difficulty of refinancing sub-prime mortgage loans became a matter of concern in late 2007.

• Strong fall of output below its potential level. Severe recession from 2007:12 to 2009:06, with a maximum decrease of 5.1% in GDP; the unemployment rate reached 10% in 2009:10. Policymakers decided to maintain exceptionally low rates as long as unemployment is above 6.5% and inflation is low.

• The oil price shock in 2007 pushed CPI inflation to high levels; the subsequent crash in oil prices in 2008 caused a sharp decline in CPI inflation.

A basic narrative timeline of the Federal funds rate since 1987


Asset price measures

Concerning the baseline measure of stock price misalignment, we follow Bernanke & Gertler [1999] and simply add the returns of a representative stock market index to the policy rule. Stock market returns are computed as the yearly percentage change of the S&P 500 index, evaluated as the natural log-difference of the S&P 500 index over the prior twelve months, i.e.

s_t = 100 [log(SPI_t) − log(SPI_{t−12})]

As for the real estate variable, we favour the yearly percentage change of the S&P Case-Shiller Home Price Index in real terms (adjusted for inflation using CPI inflation), evaluated as the natural log-difference of the home price index over the prior twelve months, i.e.

h_t = 100 [log(HPI_t) − log(HPI_{t−12})]

Note that the real estate series starts with the Case-Shiller 10-City Home Price Index in 1988:01 and continues with the 20-City Home Price Index from 2000:01 onwards.
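Both misalignment measures are plain year-over-year log-differences; a minimal sketch of the computation (illustrative data, not the actual S&P 500 series):

```python
import numpy as np

def yearly_log_growth(index, months=12):
    """100*[log(x_t) - log(x_{t-12})]: year-over-year growth at a monthly frequency.
    The first `months` observations are lost."""
    x = np.asarray(index, dtype=float)
    return 100.0 * (np.log(x[months:]) - np.log(x[:-months]))

spi = np.array([100.0 * 1.005 ** k for k in range(25)])  # steady 0.5% monthly growth
s = yearly_log_growth(spi)  # each entry equals 1200*log(1.005), about 5.99
```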

Figure A.4 shows the year-over-year growth of the S&P 500 index over the period of investigation, as well as the year-over-year real growth of the Case-Shiller composite indices.

Figure A.4: Yearly change in the Stock and House price indexes


In the robustness analysis on the stock price augmented rule in Section 3.3.2, we introduce alternative measures of stock price misalignment:

• A "stock price gap", computed as the percentage deviation of the year-on-year log-differences of the S&P 500 index from its trend, which is approximated by the HP filter with a smoothing parameter of λ = 129600.

• The normalized, or cyclically adjusted, price-earnings ratio, taken from Robert Shiller's database and calculated as the inflation-adjusted S&P 500 index at the start of each year divided by an average of the most recent ten years of S&P 500 real earnings. The S&P 500 P/E ratio is then compared with its historical average computed over the period 1948–2013 (18.47).

• The yearly growth rate of the P/E ratio, computed as the year-over-year log-difference of the S&P 500 P/E ratio.

• The yearly growth rate of the dividend-price ratio, computed as the year-over-year log-difference of the S&P 500 D/P ratio. The normalized price-dividend ratio also comes from Robert Shiller's database and is computed using an average of the most recent ten years of S&P 500 dividends.

Figure A.5 compiles the developments of these different measures for the period 1979:11 to 2011:11.

Figure A.5: Alternative measures of stock price misalignments

[Four panels, 1980–2011: "Yearly growth in the stock price gap (in %)"; "P/E ratio and its historical average" (historical average = 18.47); "Yearly growth in the P/E ratio (in %)"; "Yearly growth in D/P ratio (in %)".]


Summary statistics

All time series, except for the policy rate, are seasonally adjusted at the source. The descriptive statistics of the relevant variables are presented in the table below. In short, the variables vary enough that one can apprehend relevant correlations between the dependent variable and the explanatory variables in the Taylor rule.

Summary statistics for the period 1979:10–2011:11

Variables | Mean (Std. dev.) | Min./Max. | Skew./Kurtosis | Obs.
Policy rate (in %) | 5.73 (3.976) | 0.07/19.1 | .939/4.156 | 386
Inflation rate (CPI) (in %) | 3.603 (2.533) | −1.981/13.621 | 2.037/8.015 | 386
Inflation rate (CPI core) (in %) | 3.623 (2.386) | 0.596/12.755 | 1.997/6.843 | 386
Output gap (quadratic) (in %) | −0.518 (6.421) | −14.798/12.992 | .107/2.082 | 386
Output gap (HP) (in %) | −0.243 (2.923) | −10.678/6.118 | −.656/4.062 | 386
Stock prices growth (in %) | 7.867 (16.854) | −55.353/42.293 | −1.08/4.567 | 386
House prices growth (in %) | 0.651 (8.82) | −25.575/16.046 | −.711/3.381 | 287
Stock price gap (in %) | 0.108 (14.565) | −53.096/41.268 | −.802/4.178 | 386
P/E ratio | 21.059 (9.086) | 6.639/44.199 | .548/2.881 | 386
P/E ratio growth (in %) | 2.741 (16.975) | −52.872/45.518 | −.721/3.702 | 386
D/P ratio growth (in %) | −3.108 (17.171) | −63.787/54.565 | .305/4.219 | 386

Variables: dependent variable, regressors, instruments

Variable | Symbol | Formula
Federal funds rate | r_t | -
Industrial production | IP | -
Unemployment rate | UN | -
CPI inflation rate | π_t^CPI | 100[log(CPI_t) − log(CPI_{t−12})]
CPI core inflation rate | π_t^CPIcore | 100[log(CPIcore_t) − log(CPIcore_{t−12})]
PCE inflation rate | π_t^PCE | 100[log(PCE_t) − log(PCE_{t−12})]
PCE core inflation rate | π_t^PCEcore | 100[log(PCEcore_t) − log(PCEcore_{t−12})]
Quadratic output gap | y_t^Q | Res. from a regression of log(IP) on a quadratic time trend
HP output gap | y_t^HP | 100[log(IP_t) − log(IPtrend_t)]
Quadratic unemployment gap | y_t^UnQ | Res. from a regression of log(UN) on a quadratic time trend
HP unemployment gap | y_t^UnHP | 100[log(UN_t) − log(UNtrend_t)]
S&P 500 stock price index | SPI | -
S&P Case-Shiller Real Home Price Index | HPI | -
Stock prices growth (stock returns) | s_t | 100[log(SPI_t) − log(SPI_{t−12})]
HP stock price gap | - | 100[log(SPI_t) − log(SPItrend_t)]
House prices growth | h_t | 100[log(HPI_t) − log(HPI_{t−12})]
Cyclically adjusted price-earnings ratio | P/E | -
Cyclically adjusted price-dividend ratio | D/P | -
P/E ratio growth | - | 100[log(P/E_t) − log(P/E_{t−12})]
D/P ratio growth | - | 100[log(D/P_t) − log(D/P_{t−12})]
World Commodity Price Index | - | -
US Treasury Bill rate, 3 months | sr | -
US 10-year constant Treasury maturity rate | lr | -
Spread | - | (lr − sr)
Growth in M2 | - | 100[log(M2_t) − log(M2_{t−1})]
NAPM business cycle index | - | 100[log(NAPM_t) − log(NAPM_{t−1})]

Sources: This study used the FRED (Federal Reserve Economic Data) database of the Federal Reserve Bank of St. Louis, as well as Datastream and Robert Shiller's database (http://www.econ.yale.edu/~shiller/data.htm), to assemble the data used herein.


B Time series properties of the data

Before the econometric modelling of time series, one should investigate the integration properties of the variables. Indeed, econometricians should avoid regressions that have the appearance of a good fit and of a statistically significant relationship between variables when in reality no such relationship exists. Regressions that involve non-stationary variables produce biased standard errors, thus implying misleading parameter estimates. This major problem, arising in time series modelling, is known as the "spurious regression problem". In this context, the estimated parameters would be inconsistent, which renders inference and economic interpretation of the estimated parameters unreliable. Given the negative implications for econometric modelling, it is therefore of considerable importance to determine whether the time series used in this study are stationary or not.
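The spurious regression problem is easy to reproduce: regressing one pure random walk on an independent one yields "significant" naive t-statistics far more often than the nominal 5%. A small Monte Carlo sketch:

```python
import numpy as np

def spurious_t(T, rng):
    """Naive OLS t-statistic on the slope from regressing one pure random walk
    on another, independent one (no true relationship exists)."""
    x = np.cumsum(rng.normal(size=T))
    y = np.cumsum(rng.normal(size=T))
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(42)
tstats = np.array([spurious_t(400, rng) for _ in range(200)])
reject_rate = float(np.mean(np.abs(tstats) > 1.96))  # far above the nominal 5%
```

The rejection rate grows with the sample size, which is why the integration properties of the series must be checked before estimation.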

B.1 A rapid inspection

The obvious first action is to plot the series and visually inspect their behaviour. Recall that a time series is said to be stationary if its mean, variance and autocovariances remain constant over time. Here, only a few variables are presented, but in the next subsection all the variables used in the estimation of the Taylor rules are investigated with formal tests. The time paths and correlograms of the policy rate, the CPI inflation rate and the HP output gap are displayed in Figure A.6 for the full sample period. The plots for r and π^CPI suggest that these series are not stationary: both appear to have a downward trend, with different mean values over distinct time intervals, and their variances seem to be decreasing over time. Also, the magnitude of the autocorrelations falls off approximately linearly and does not collapse to insignificance, indicating non-stationarity, at least in this sample. Concerning y_t^HP, the time path suggests no clear trend and seems to follow a cyclical pattern with short-lived waves; its correlogram dies out more rapidly than those of the two other variables. To provide formal support for the hypothesis of non-stationarity in r and π, and to inspect the stationarity properties of the other variables, it is useful to implement so-called "unit-root tests".
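The correlogram diagnostic can be sketched as follows: the sample autocorrelations of a simulated random walk decay roughly linearly, while those of white noise collapse inside the approximate ±1.96/√T band (the Bartlett band under a white-noise null; the MA(q) version used in Figure A.6 widens the band using the estimated low-order autocorrelations). The data below are simulated, not the actual series:

```python
import numpy as np

def sample_acf(y, max_lag=40):
    """Sample autocorrelations of order k = 1..max_lag (demeaned, biased estimator)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    denom = y @ y
    return np.array([y[k:] @ y[:-k] / denom for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
T = 386                                               # same length as the monthly sample
walk_acf = sample_acf(np.cumsum(rng.normal(size=T)))  # persistent: decays slowly
noise_acf = sample_acf(rng.normal(size=T))            # white noise: near zero
band = 1.96 / np.sqrt(T)                              # approximate 95% band, white-noise null
```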

B.2 Univariate unit-root and stationary testing

To investigate the stationarity properties of each time series considered in this study, three tests are performed: the Augmented Dickey-Fuller (ADF) test, the Phillips-Perron (PP) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. The standard ADF and PP tests ask whether a series is non-stationary, while the KPSS test asks the opposite question, namely whether a series is stationary, i.e. it has stationarity as the null hypothesis and a unit root as the alternative. As an alternative to the ADF test, the PP test proposes a non-parametric procedure in which no additional lags of the explanatory variable are needed in the presence of serial correlation in the residuals. An advantage of this test is that it assumes no functional form for the error process. It is often argued that unit-root tests generally have low power in small samples; thus, the KPSS stationarity test is performed to provide a further view on the time series behaviour of our variables. The latter


Figure A.6: (a) Time paths and (b) correlograms for r, π^CPI and y_t^HP.

[Three-panel figure, 1980–2011: for each of the Federal funds rate (r), the inflation rate (π^CPI) and the output gap (y^HP), panel (a) shows the time path in % and panel (b) the sample autocorrelations of order k, up to k = 40.]

Notes: (b) Bartlett’s formula for MA(q) 95% confidence bands

is implemented using the Bartlett kernel and the automatic bandwidth selection procedure proposed by Newey and West. For all the variables in levels, we perform the ADF, PP and KPSS tests using an intercept and a time trend, except for the output gap and the stock price gap, where neither is used. However, the results are found to be robust irrespective of the choice of deterministic components.
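A minimal sketch of the (non-augmented) Dickey-Fuller regression underlying the ADF test, with an intercept only; the 5% critical value of about −2.86 for this case comes from the Dickey-Fuller distribution, not the usual t tables. The series are simulated for illustration:

```python
import numpy as np

def df_t_stat(y):
    """t-statistic on gamma in the Dickey-Fuller regression
    dy_t = c + gamma*y_{t-1} + e_t (intercept, no augmentation lags)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(7)
t_rw = df_t_stat(np.cumsum(rng.normal(size=500)))  # random walk: typically above -2.86
ar = np.zeros(500)                                 # stationary AR(1) with phi = 0.5
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + rng.normal()
t_ar = df_t_stat(ar)                               # strongly negative: unit root rejected
```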

Table B.1 displays the unit-root tests for most of the variables used in the estimation of the Taylor rules. The results indicate that the output gap (y_t^HP), the yearly stock price, P/E and D/P growth rates and the yearly stock price gap series appear to be stationary over the full sample period in light of the evidence from the three tests. The ADF and PP null hypothesis that these series are I(1) is rejected in favour of the alternative of stationarity at the 1% significance level, while the KPSS test shows no evidence against the null hypothesis of stationarity. The latter result, for the gaps calculated against the HP filtered trend, is not surprising, as these two series are stationary by construction: applying the HP filter to integrated or highly persistent time series, such as actual output or a stock price index, amounts to detrending a random walk.

The three tests are, however, unanimous on the non-stationarity of the quadratic trend based output gap (y_t^Q), the yearly house price growth, and the gross S&P 500 stock price index and P/E ratio series. Even though the output gap y_t^Q has a cyclical shape, its waves are particularly long, which inevitably "hides" the stationarity of the series in a short sample. Still, this measure is used in the benchmark specification, but the estimation results are also confronted with the HP output gap.

The usual unit-root and stationarity tests yield ambiguous results with respect to the interest rate and inflation rate series.

Regarding the policy rate, the different tests reach conflicting conclusions for the full sample; thus, the order of integration of the policy rate is not as straightforward as the visual inspection would suggest. While the ADF and PP tests are able to reject the null of a unit root at the 5% and 10% significance levels respectively, the KPSS test strongly rejects its null hypothesis of trend stationarity. However, all three tests unanimously support stationarity of the policy rate in first difference.[80] Despite the indecision among the three tests, we will further treat the policy rate as a trend stationary process, modelled as a pure I(0) process. This choice is motivated by Clarida et al. [1999], who treat interest rates as stationary series, "an assumption that we view as reasonable for the postwar U.S., even though the null of a unit-root in either variables is often hard to reject". It is also based on the argument of identification of monetary policy. Indeed, the monetary authority has close, but imperfect, control of the short-term nominal interest rate, as highlighted in Eq. (6) by the presence of the interest rate shock υ_t. Thus, using first differences of the policy rate might attribute too much importance to the identification of short-term deviations in the interest rate and relatively less to the identification of systematic monetary policy actions. In this respect, Bernanke & Blinder [1992] argue that the interest rate should not be included in first differences, otherwise the predictive power of the Federal funds rate as an indicator of monetary policy would be greatly reduced. Finally, note that the near unit-root behaviour of the policy rate might have consequences for the estimates of the smoothing version of the Taylor rule assumed in this study. Specifically, if the policy rate is close to a unit root, shocks to the policy rate can have permanent effects on the series. But in the smoothing Taylor rule, part of the effect of the permanent component in the interest rate is taken into account by introducing lagged values of the interest rate on the right-hand side of the Taylor rule regression equation. Thus, the more persistent the interest rate series, the closer the coefficient on the lagged interest rate will be to one.
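The mechanics can be illustrated by simulating a smoothed rule of the partial-adjustment form assumed here, i_t = ρ i_{t−1} + (1 − ρ)(α + βπ π_t + βy y_t) + ε_t, and recovering the implied long-run responses by dividing the short-run coefficients by (1 − ρ̂). The parameter values below are illustrative, not the thesis estimates:

```python
import numpy as np

# Simulated smoothed Taylor rule (illustrative parameters, not the thesis estimates):
#   i_t = rho*i_{t-1} + (1 - rho)*(alpha + bpi*pi_t + by*y_t) + eps_t
rng = np.random.default_rng(11)
T, rho, alpha, bpi, by = 2000, 0.9, -3.0, 2.7, 0.12
pi = 3.0 + rng.normal(size=T)        # stand-in inflation series
y = rng.normal(size=T)               # stand-in output gap
i = np.zeros(T)
for t in range(1, T):
    i[t] = rho * i[t - 1] + (1 - rho) * (alpha + bpi * pi[t] + by * y[t]) + 0.05 * rng.normal()

# Regress i_t on (1, i_{t-1}, pi_t, y_t): short-run coefficients, then long-run ones.
X = np.column_stack([np.ones(T - 1), i[:-1], pi[1:], y[1:]])
b, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
rho_hat = b[1]
bpi_lr = b[2] / (1.0 - rho_hat)      # implied long-run inflation response
by_lr = b[3] / (1.0 - rho_hat)       # implied long-run output gap response
```

As ρ̂ approaches one, the denominator (1 − ρ̂) shrinks and the implied long-run coefficients become increasingly imprecise, which is exactly the concern raised above.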

As regards the inflation rate, the ADF and PP tests are able to reject the unit-root null hypothesis at the 10% significance level, while the KPSS test rejects its null of stationarity at the 1% significance level. The three tests, however, all agree on stationarity of the first difference of the inflation rate. Since the results are inconclusive, we decided to treat the inflation rate as a trend stationary process.

[80] It is noted that integrated processes are by definition unbounded, whereas the Federal funds rate, like any interest rate or price, is what Granger [2010] calls a "limited process", one that "has bounds either below (at zero, say) or above (full capacity) or both". Therefore, the policy rate cannot be interpreted as I(1) in the usual sense, and one should be cautious when formally testing limited processes for a unit root [Cavaliere & Xu, 2011]. According to Cavaliere & Xu, standard unit-root tests are unreliable in the presence of bounded-below time series: they tend to over-reject the unit-root null hypothesis. Thus, on the basis of our ADF and PP results, it is unclear whether the rejection of the null hypothesis should be credited to the absence of a unit root or to the presence of the bound. Given the very limited literature on this matter, and since it lies beyond the scope of this thesis, we stick to conventional unit-root tests, though the caveat is worth mentioning.

Table B.1
Unit-root tests on the full sample period

Variables | ADF Z(t) | PP Z(t) | KPSS | Results

In level:
Federal funds rate | −3.713** | −3.255* | .200** | mixed
CPI inflation rate | −3.299* | −3.135* | .250*** | mixed
Output gap y_t^Q | −2.328 | −1.848 | .304*** | non-stationary
Output gap y_t^HP | −4.641*** | −3.344*** | .039 | stationary
S&P 500 index | −2.353 | −2.197 | .213*** | non-stationary
Yearly stock price growth | −3.639*** | −4.133*** | .285 | stationary
Yearly house price growth | −2.038 | −1.160 | .266*** | non-stationary
Yearly stock price gap | −4.765*** | −4.756*** | .021 | stationary
P/E ratio | −1.335 | −1.086 | .499*** | non-stationary
Yearly growth in P/E | −5.614*** | −4.251*** | .083 | stationary
Yearly growth in D/P | −4.067*** | −4.388*** | .077 | stationary

In first difference:
Federal funds rate | −5.191*** | −12.73*** | .032 | stationary
CPI inflation rate | −7.285*** | −12.09*** | .133 | stationary
Output gap y_t^Q | −4.778*** | −15.836*** | .093 | stationary
S&P 500 index | −5.002*** | −15.854*** | .054 | stationary
P/E ratio | −10.312*** | −15.990*** | .085 | stationary

Notes: The ADF Z(t) and PP Z(t) refer to the Augmented Dickey-Fuller and Phillips-Perron tests for a unit root in the variables. A statistically significant test shows evidence against the null hypothesis of a unit root. To determine the number of lags of the differenced variable in the ADF test, a testing-down strategy has been applied, selecting the smallest lag length that ensures the whiteness of the residuals. Choosing the appropriate number of lags involves as much art as science; one approach consists of an informal stepwise procedure. Based on the frequency of the data, an initial guess of thirteen lags is chosen, and the highest lags are then eliminated until a significant t-value is encountered. Since the choice depends largely on guesswork, several alternatives were tested; only in the case of the interest rate and the output gap y_t^Q in levels was the test statistic sensitive to small changes in lag length. But while selecting a large number of lags addresses the issue of serial correlation in the residuals, widening the lag length, and thus the number of regressors, significantly reduces the power of the test. The number of lags in the PP test is determined automatically based on the Newey-West bandwidth selection. The Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test reports the statistic for testing the null hypothesis of level stationarity based on the Newey-West automatic bandwidth selection. A statistically significant test shows evidence against the null hypothesis of stationarity. The column "Results" is determined on the basis of the ADF, PP and KPSS statistics. *** p < 0.01, ** p < 0.05, * p < 0.1.


To conclude, on the basis of the univariate unit-root and stationarity tests performed above, we treat the policy rate, inflation rate, output gaps and asset price variables as stationary – I(0) processes – in the subsequent econometric modelling. Besides the argument of the low power of unit-root tests and the fairly mixed evidence regarding the time series properties of these variables in the empirical literature, a meaningful rationale is to maintain comparability with the studies of Clarida et al. [1999] and Bernanke & Gertler [1999], among others.


C Further results in reference to Section 3.2

C.1 Standard Taylor rule with alternative sample periods

Table C.1
GMM Baseline Estimates: Standard Forward-Looking Taylor Rule (alternative samples)

Period | α | βπ | βy | ρ | π*
1979:10–1994:12 (1) | −1.675* (.886) | 1.770*** (.152) | .283*** (.074) | .921*** (.005) | 2.35
1979:10–2008:05 (2) | −2.206** (.891) | 2.461*** (.227) | .054 (.051) | .947*** (.007) | 3.27

Notes: The estimated parameters refer to the standard baseline equation (16), with the Federal funds rate as the dependent variable and, as explanatory variables, the CPI inflation rate (π_t^CPI) and the quadratic trend based output gap (y_t^Q). The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99). The Q-test for serial correlation indicates no pattern of correlation in the error term. HAC corrected standard errors are computed with the Delta method and reported in parentheses. See notes in Table 2 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.
(1) Sample period as in Clarida et al. [1999].
(2) Sample period without the recent financial crisis.

Table C.1 shows, in its second row, the estimates of the baseline standard policy rule excluding the recent financial crisis, i.e. over 1979:10 to 2008:05. While the inflation coefficient remains quantitatively and qualitatively similar to its full-sample counterpart, the estimate of βy is not statistically different from zero at conventional significance levels. Over this reduced sample, it seems that the Federal Reserve responded only to expected inflation, not to the expected output gap.
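The implied long-run coefficients reported throughout these tables come with Delta-method standard errors. Assuming the usual ratio form θ = b/(1 − ρ) for a long-run response, with short-run coefficient b and smoothing parameter ρ, the computation can be sketched as follows; the numerical inputs below are hypothetical, not taken from the tables:

```python
import numpy as np

def long_run_delta(b, rho, cov):
    """Delta-method SE for the implied long-run coefficient theta = b / (1 - rho),
    given the 2x2 covariance of the short-run estimates (b, rho).
    Gradient: d(theta)/db = 1/(1-rho); d(theta)/drho = b/(1-rho)**2."""
    theta = b / (1.0 - rho)
    g = np.array([1.0 / (1.0 - rho), b / (1.0 - rho) ** 2])
    return theta, float(np.sqrt(g @ cov @ g))

# Hypothetical short-run estimates b = 0.11, rho = 0.95 and HAC covariance:
cov = np.array([[4.0e-4, 1.0e-5],
                [1.0e-5, 3.6e-5]])
theta, se = long_run_delta(0.11, 0.95, cov)  # theta = 0.11 / 0.05 = 2.2
```

Note how the gradient term b/(1 − ρ)² inflates the standard error when ρ is close to one, echoing the near unit-root discussion in Appendix B.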


C.2 Standard Taylor rule with alternative horizons for the output gap

Table C.2
GMM Baseline Estimates: Standard Forward-Looking Taylor Rule (alternative horizons for the output gap)

Horizon | α | βπ | βy | ρ | π* | RMSE | AP-F

Sample period: 1979:10–2011:11
k=12, q=3 | −3.316*** (.833) | 2.739*** (.231) | .180*** (.046) | .956*** (.006) | 3.203 | .545 | 214.73
k=12, q=6 | −3.524*** (.847) | 2.804*** (.237) | .191*** (.048) | .956*** (.006) | 3.205 | .545 | 52.34
k=12, q=9 | −3.795*** (.910) | 2.892*** (.254) | .225*** (.055) | .956*** (.006) | 3.226 | .552 | 24.91

Sample period: 1979:10–2008:05
k=12, q=3 | −2.537*** (.880) | 2.547*** (.231) | .113** (.052) | .949*** (.007) | 3.296 | .574 | 184.67

Notes: The estimated parameters refer to the standard baseline equation (16), with the Federal funds rate as the dependent variable and, as explanatory variables, the CPI inflation rate (π_t^CPI) and the quadratic trend based output gap (y_t^Q). The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99). The Q-test for serial correlation indicates no pattern of correlation in the error term. HAC corrected standard errors are computed with the Delta method and reported in parentheses. The AP-F statistic, as opposed to the K-P statistic, is a first-stage F statistic that can be used as a diagnostic for whether a particular endogenous regressor (here, the output gap) is weakly identified. The test statistic can be compared to the Stock-Yogo IV critical values reported below. See notes in Table 2 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.

Stock-Yogo weak ID test critical values (source: Stock & Yogo [2002]):
5% maximal IV relative bias: 21.34 | 10% maximal IV size: 119.59
10% maximal IV relative bias: 11.17 | 15% maximal IV size: 61.61
20% maximal IV relative bias: 5.93 | 20% maximal IV size: 42.00
30% maximal IV relative bias: 4.14 | 25% maximal IV size: 32.16
NB: Critical values are normally intended for the Cragg-Donald F statistic with K = 1 and i.i.d. errors.

Table C.2 considers alternative, and arguably more realistic, target horizons for the output gap. To sum up, while in both cases the results are qualitatively very similar to those reported in Table 2, the central bank's responsiveness coefficient for the output gap increases with the forecasting horizon q. This could be interpreted as monetary policymakers being more concerned with long-term developments in the real economy than with short-term ones, as the latter might be seen as only transitory. In addition, it is worth noting in the last row that the significance of the output gap is intact in the pre-crisis sample, in contrast with the evidence illustrated in Table C.1, once one considers longer forecasting horizons for the output gap.


C.3 Robustness analysis: alternative measures of inflation and economic slack

Table C.3
Standard Forward-Looking Taylor Rule
Robustness analysis: alternative measures of inflation and economic slack

Sample period: 1979:10–2011:11

Specification | α | βπ | βy | ρ | π* | RMSE
1. π_t = π_t^CPI, y_t = y_t^Q (baseline) | −3.193*** (.792) | 2.679*** (.222) | .120*** (.043) | .956*** (.006) | 3.170 | .544
2. π_t = π_t^CPI, y_t = y_t^HP | −2.473*** (.765) | 2.464*** (.209) | .423*** (.107) | .956*** (.006) | 3.142 | .540
3. π_t = π_t^CPI, y_t = y_t^UnQ | −2.955*** (.786) | 2.603*** (.222) | .045*** (.015) | .957*** (.007) | 3.170 | .543
4. π_t = π_t^CPI, y_t = y_t^UnHP | −1.537* (.805) | 2.162*** (.226) | .110*** (.028) | .956*** (.007) | 3.155 | .540
5. π_t = π_t^CPIcore, y_t = y_t^Q | −1.734*** (.416) | 2.199*** (.103) | .178*** (.031) | .930*** (.006) | 3.204 | .534
6. π_t = π_t^CPIcore, y_t = y_t^HP | −1.974** (.396) | 1.981*** (.093) | .378*** (.081) | .929*** (.007) | 3.141 | .532
7. π_t = π_t^CPIcore, y_t = y_t^UnQ | −1.783*** (.371) | 2.270*** (.094) | .053*** (.010) | .928*** (.006) | 3.171 | .535
8. π_t = π_t^CPIcore, y_t = y_t^UnHP | −.979*** (.374) | 1.944*** (.089) | .082*** (.017) | .926*** (.006) | 3.268 | .533
9. π_t = π_t^PCE, y_t = y_t^Q | −2.022** (.989) | 2.641*** (.289) | .192*** (.052) | .962*** (.007) | 2.801 | .547
10. π_t = π_t^PCE, y_t = y_t^HP | −1.333 (.817) | 2.439*** (.552) | .556*** (.128) | .957*** (.008) | 2.715 | .544
11. π_t = π_t^PCE, y_t = y_t^UnQ | −2.293*** (.821) | 2.997*** (.264) | .061*** (.017) | .956*** (.008) | 2.708 | .549
12. π_t = π_t^PCE, y_t = y_t^UnHP | −1.049 (.821) | 2.323*** (.260) | .128*** (.031) | .955*** (.008) | 2.737 | .545
13. π_t = π_t^PCEcore, y_t = y_t^Q | −1.327*** (.474) | 2.894*** (.179) | .193*** (.035) | .940*** (.007) | 2.374 | .544
14. π_t = π_t^PCEcore, y_t = y_t^HP | −.712 (.542) | 2.617*** (.183) | .496*** (.097) | .937*** (.008) | 2.377 | .541
15. π_t = π_t^PCEcore, y_t = y_t^UnQ | −1.562*** (.521) | 3.028*** (.189) | .058*** (.011) | .934*** (.008) | 2.315 | .545
16. π_t = π_t^PCEcore, y_t = y_t^UnHP | −1.564 (.566) | 2.558*** (.197) | .108*** (.022) | .935*** (.008) | 2.373 | .543

Notes: The estimated parameters refer to the standard baseline equation (16), with the Federal funds rate as the dependent variable, but with different measures of inflation and of economic slack: y_t^Q ≡ output gap (quadratic time trend); y_t^HP ≡ output gap (HP trend); y_t^UnQ ≡ unemployment gap (quadratic time trend); y_t^UnHP ≡ unemployment gap (HP trend). The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99) for all specifications. The Q-test for serial correlation indicates no pattern of correlation in the error term. HAC corrected standard errors are computed with the Delta method and reported in parentheses. The AP-F statistic, as opposed to the K-P statistic, is a first-stage F statistic that can be used as a diagnostic for whether a particular endogenous regressor (here, the output gap) is weakly identified; it can be compared to the Stock-Yogo IV critical values. See notes in Table 2 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.

Summarizing the estimated policy rule parameters:

Notes: The shaded area represents the confidence intervals implied by the standard errors of the coefficients estimated in the baseline regression. The circled numbers refer to the numbered specifications in the table above.

Across the different measures of inflation, the estimates of the expected inflation weight (βπ) are, in most cases, insignificantly different from the 2.67 value reported in the baseline case. However, for each measure of the output gap, the aggressiveness of the central bank towards inflation is lower in specifications that use core CPI. This is a surprising finding, as it suggests that the central bank may have considered external supply pressure from oil and farm prices as permanent. In addition, whenever the HP trend is used as a proxy for potential output or natural unemployment, it implies a much higher output gap coefficient. It is also worth noting that the central bank's response to the unemployment gap is smaller than its response to the output gap, whatever the inflation measure used. Moreover, target inflation varies across the inflation gauges, as real rates depend on the inflation measure used. Finally, specifications using core CPI inflation achieve a much smaller root mean square error of the deviations from the actual rate.


C.4 Subsample stability analysis

Table C.4
Subsample stability analysis: Volcker vs. Greenspan eras

                          α           βπ          βy         ρ          π*       RMSE
1979:10 - 1987:08 (1)     8.490***    .733***     .478***    .888***    14.055   1.009
                          (.376)      (.057)      (.052)     (.007)
1987:09 - 2008:05 (2)    −1.786       2.066***    .057       .986***     3.300    .161
                          (1.181)     (.436)      (.052)     (.003)

Notes: The estimated parameters refer to the standard baseline equation (16), with the Federal funds rate as the dependent variable and, as explanatory variables, the CPI inflation rate (π^CPI_t) and the quadratic-trend-based output gap (y^Q_t). The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99). The Q-test for serial correlation indicates no pattern of correlation in the error term. HAC-corrected standard errors are computed with the Delta method and reported in parentheses. See notes in Table 2 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.
(1) Sample period under Volcker's tenure.
(2) Sample period under Greenspan's and the beginning of Bernanke's tenures.


D Further results in reference to Section 3.3.1

D.1 Redundant information?

Table D.1
Augmented with stock prices and the business cycle index NAPM

                  α           βπ          βy         ρ          βs         βnapm
Stock + ΔNAPM    −3.45***     2.571***    .179***    .961***    .054***    .604***
                  (.614)      (.190)      (.038)     (.004)     (.012)     (.088)

Sample period: 1979:10 - 2011:11

Notes: The estimated parameters refer to the augmented baseline equation (18), with the Federal funds rate as the dependent variable. These specifications were estimated using the CPI inflation rate (π^CPI_t), the quadratic-trend-based output gap (y^Q_t) and stock market returns. The table displays the implied long-run coefficients. βnapm denotes the coefficient on the log-differenced NAPM business cycle index. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99). The Q-test for serial correlation indicates no pattern of correlation in the error term. HAC-corrected standard errors are computed with the Delta method and reported in parentheses. The instrument set is extended to include lags 1-6 of the log-differenced NAPM. See notes in Table 4 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.

The NAPM index is supposed to capture omitted information that is likely to be redundant with the information content of stock prices. The results suggest, however, that the information obtained through the NAPM index is not truly redundant with that contained in the stock-price variable: the response coefficient on stock returns is slightly reduced by the introduction of the NAPM index (βs decreases from 0.067 to 0.054), but its statistical significance remains largely intact.


D.2 Different ways to enter the augmented rule

Table D.2
Stock returns: Different ways to enter the augmented rule

                              α           βπ          βy         ρ          βs        π*      RMSE
s_{t-1}                      −3.250***    2.494***    .109**     .959***    .067***   3.599   .540
                              (.762)      (.209)      (.048)     (.006)     (.015)
Σ_{i=1..6} s_{t-i}           −2.906***    2.381***    .113**     .958***    .070**    3.643   .533
                              (.810)      (.244)      (.050)     (.006)     (.018)
s_t                          −3.315***    2.474***    .112*      .959***    .083      3.691   .540
                              (.761)      (.208)      (.048)     (.006)     (.017)
s_t + Σ_{i=1..6} s_{t-i}     −3.598***    2.591***    .168*      .960***    .079      3.600   .547
                              (1.011)     (.292)      (.051)     (.006)     (.027)
s_{t+3}                      −3.595***    2.538***    .136***    .958***    .091***   3.721   .545
                              (.758)      (.206)      (.043)     (.006)     (.019)
s_{t+6}                      −4.136***    2.557***    .199***    .958***    .150***   4.022   .551
                              (.826)      (.222)      (.042)     (.006)     (.029)

Sample period: 1979:10 - 2011:11

Notes: The estimated parameters refer to the augmented baseline equation (18), with the Federal funds rate as the dependent variable. These specifications were estimated using the CPI inflation rate (π^CPI_t), the quadratic-trend-based output gap (y^Q_t) and stock market returns. Contrary to the augmented baseline equation (18), the stock market variable enters in different ways. The first line (s_{t-1}) corresponds to the baseline results we previously found, repeated for ease of comparison. The table displays the implied long-run coefficients. The J-test for overidentifying restrictions [Hansen, 1982] is easily passed (p > 0.99) for all specifications. The Q-test for serial correlation indicates no pattern of correlation in the error term in all specifications. HAC-corrected standard errors are computed with the Delta method and reported in parentheses. See notes in Table 4 for further explanations on the instruments and on the GMM estimation procedure. *** p < 0.01, ** p < 0.05, * p < 0.1.

We check here whether having stock market returns enter the augmented policy rule contemporaneously or in a forward-looking form makes a difference. In particular, as in Chadha, Sarno & Valente [2004], six lags of the log-differences in stock prices have been used. Unsurprisingly, whether one or six lags of stock returns are used, the implied coefficient remains stable and is qualitatively equivalent to the estimates of Chadha et al.^81 We then tried to incorporate stock returns in contemporaneous form or in a nested model, as in Bernanke & Gertler [1999]. Dealing with the endogeneity issue through instrumentation, the results are again fairly close to our baseline estimates. Finally, having stock returns enter the rule as a forward-looking variable, with forecasting horizons of either 3 or 6 months ahead, does not alter our baseline results.

81 They found a smaller coefficient, 0.036, with a standard error of 0.019; note, however, that the authors use quarterly data and a sample period spanning from 1979 to 2000.
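The alternative timings in Table D.2 amount to simple transformations of the monthly log-return series. A pure-Python sketch of how each regressor variant would be built from a return series s (the series below is made up for illustration; the thesis's 3- and 6-month-ahead variants work the same way as the one-month lead shown):

```python
def lagged(s, t, k):
    """k-period lag s_{t-k}."""
    return s[t - k]

def sum_of_lags(s, t, n=6):
    """Accumulated lags sum over i = 1..n of s_{t-i}, as in Chadha, Sarno & Valente."""
    return sum(s[t - i] for i in range(1, n + 1))

# Made-up monthly log returns, with t indexing the current month:
s = [0.01, -0.02, 0.03, 0.00, 0.01, -0.01, 0.02, 0.04]
t = 6
variants = {
    "s_{t-1}": lagged(s, t, 1),                    # baseline: one-month lag
    "sum_{i=1..6} s_{t-i}": sum_of_lags(s, t),     # six accumulated lags
    "s_t": s[t],                                   # contemporaneous (needs instrumenting)
    "s_{t+1}": s[t + 1],                           # forward-looking, one month ahead
}
```

The contemporaneous and forward-looking variants are the ones that raise the endogeneity issue discussed in the text, which is why they are instrumented in the GMM estimation rather than entered directly.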


E The normative debate: a decision tree

Decision tree considering the monetary policy response to asset-price misalignments. (Source: Rudebusch [2002])
