
BEC2044 ECONOMETRICS 1

LECTURE 1:

INTRODUCTION

© 2011 Pearson Addison-Wesley. All rights reserved.

What is Econometrics?

• Econometrics literally means “economic measurement”

• It is the quantitative measurement and analysis of actual economic and business phenomena, and so involves:

– economic theory

– statistics

– math

– observation/data collection

What is Econometrics? (cont.)

• Three major uses of econometrics:

– Describing economic reality

– Testing hypotheses about economic theory


– Forecasting future economic activity

• So econometrics is all about questions: the researcher (YOU!) first asks questions and then uses econometrics to answer them

Example

• Consider the general and purely theoretical relationship:

Q = f(P, Ps, Yd) (1.1)


• Econometrics allows this general and purely theoretical relationship to become explicit:

Q = 27.7 – 0.11P + 0.03Ps + 0.23Yd (1.2)
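To see what an estimated equation like (1.2) buys you, evaluate it at any combination of inputs. A minimal sketch (the input values for P, Ps, and Yd below are made up purely for illustration):

```python
def demand_q(p, ps, yd):
    """Predicted quantity demanded from the estimated equation (1.2)."""
    return 27.7 - 0.11 * p + 0.03 * ps + 0.23 * yd

# Made-up inputs: own price 100, substitute price 50, disposable income 200
q = demand_q(p=100, ps=50, yd=200)
print(round(q, 2))  # 64.2
```

Changing one input while holding the others fixed traces out exactly the partial effect its coefficient describes.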

What is Regression Analysis?

• Economic theory can give us the direction of a change, e.g. the change in the demand for DVDs following a price decrease (or price increase)

• But what if we want to know not just “how?” but also “how much?”


• Then we need:

– A sample of data

– A way to estimate such a relationship

• one of the most frequently used is regression analysis

What is Regression Analysis? (cont.)

• Formally, regression analysis is a statistical technique that attempts to “explain” movements in one variable, the dependent variable, as a function of movements in a set of other variables, the independent (or explanatory) variables, through the quantification of a single equation

Example

• Return to the example from before:

Q = f(P, Ps, Yd) (1.1)

• Here, Q is the dependent variable and P, Ps, Yd are the independent variables


• Don’t be deceived by the words dependent and independent, however

– A statistically significant regression result does not necessarily imply causality

– We also need:

• Economic theory

• Common sense

Single-Equation Linear Models

• The simplest example is:

Y = β0 + β1X (1.3)

• The βs are denoted “coefficients”


– β0 is the “constant” or “intercept” term

– β1 is the “slope coefficient”: the amount that Y will change when X increases by one unit; for a linear model, β1 is constant over the entire function
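A quick numerical check of the claim that β1 is constant over the entire function, using made-up coefficient values:

```python
beta0, beta1 = 2.0, 0.5  # illustrative values only, not from the text

def y(x):
    """The simple linear model (1.3): Y = beta0 + beta1 * X."""
    return beta0 + beta1 * x

# A one-unit increase in X changes Y by beta1 no matter where you start
for start in (0, 10, -3, 100):
    assert abs((y(start + 1) - y(start)) - beta1) < 1e-12
print("slope per one-unit increase in X:", beta1)
```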

Figure 1.1 Graphical Representation of the Coefficients of the Regression Line


Single-Equation Linear Models (cont.)

• Application of linear regression techniques requires that the equation be linear—such as (1.3)

• By contrast, the equation

Y = β0 + β1X² (1.4)

is not linear


• What to do? First define

Z = X² (1.5)

• Substituting into (1.4) yields:

Y = β0 + β1Z (1.6)

• This redefined equation is now linear (in the coefficients β0 and β1 and in the variables Y and Z)
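The substitution can be checked numerically: generate data from the nonlinear equation (1.4), redefine Z = X², and run an ordinary least squares fit of Y on Z. A self-contained sketch with made-up coefficient values (the OLS formulas used are the standard textbook ones):

```python
# Made-up "true" coefficients for the nonlinear equation (1.4)
beta0, beta1 = 1.0, 3.0

xs = [i * 0.1 for i in range(50)]
ys = [beta0 + beta1 * x ** 2 for x in xs]  # data from Y = beta0 + beta1 * X^2

zs = [x ** 2 for x in xs]                  # the redefinition (1.5): Z = X^2

# Standard OLS slope and intercept for the linear equation (1.6): Y on Z
n = len(zs)
zbar, ybar = sum(zs) / n, sum(ys) / n
b1 = sum((z - zbar) * (yv - ybar) for z, yv in zip(zs, ys)) / sum(
    (z - zbar) ** 2 for z in zs
)
b0 = ybar - b1 * zbar
print(round(b0, 6), round(b1, 6))  # recovers beta0 = 1.0 and beta1 = 3.0
```

Because the data contain no noise, the fit recovers the original coefficients essentially exactly.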

Single-Equation Linear Models (cont.)

• Is (1.3) a complete description of the origins of variation in Y?

• No; there are at least four sources of variation in Y other than the variation in the included Xs:

• Other potentially important explanatory variables may be missing (e.g., X2 and X3)


• Measurement error

• Incorrect functional form

• Purely random and totally unpredictable occurrences

• Inclusion of a “stochastic error term” (ε) effectively “takes care” of all these other sources of variation in Y that are NOT captured by X, so that (1.3) becomes:

Y = β0 + β1X + ε (1.7)

Single-Equation Linear Models (cont.)

• Two components in (1.7):

– deterministic component (β0 + β1X)

– stochastic/random component (ε)

• Why “deterministic”?


– Indicates the value of Y that is determined by a given value of X (which is assumed to be non-stochastic)

– Alternatively, the deterministic component can be thought of as the expected value of Y given X—namely E(Y|X)—i.e. the mean (or average) value of the Ys associated with a particular value of X

– This is also denoted the conditional expectation (that is, expectation of Y conditional on X)
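The conditional-expectation reading can be illustrated by simulation: fix X at one value, draw many Ys with random errors, and the average of those Ys settles near the deterministic component β0 + β1X. A sketch with made-up parameter values:

```python
import random

random.seed(42)
beta0, beta1, x0 = 2.0, 0.5, 10.0  # illustrative values only

# Many draws of Y at the same X = x0: Y = beta0 + beta1*x0 + epsilon
draws = [beta0 + beta1 * x0 + random.gauss(0, 1) for _ in range(100_000)]
avg = sum(draws) / len(draws)

print(beta0 + beta1 * x0)  # deterministic component E(Y|X=10) = 7.0
print(round(avg, 1))       # sample average of the draws is close to 7.0
```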

Extending the Notation

• Include reference to the number of observations

– Single-equation linear case:


Yi = β0 + β1Xi + εi (i = 1,2,…,N) (1.10)

• So there are really N equations, one for each observation

• the coefficients, β0 and β1, are the same

• the values of Y, X, and ε differ across observations

Extending the Notation (cont.)

• The general case: multivariate regression

Yi = β0 + β1X1i + β2X2i + β3X3i + εi (i = 1,2,…,N) (1.11)

• Each of the slope coefficients gives the impact of a one-unit increase in the corresponding X variable on Y, holding the other included independent variables constant (i.e., ceteris paribus)

• As an (implicit) consequence of this, the impact of variables that are not included in the regression is not held constant (we return to this in Ch. 6)
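Ceteris paribus can be seen directly from the equation: with all coefficients fixed (made-up values below), raising X1 by one unit while X2 and X3 are held at the same values changes Y by exactly β1:

```python
beta0, beta1, beta2, beta3 = 5.0, 2.0, -1.0, 0.5  # illustrative values only

def y(x1, x2, x3):
    """Deterministic part of the multivariate model (1.11)."""
    return beta0 + beta1 * x1 + beta2 * x2 + beta3 * x3

# One-unit increase in X1, holding X2 and X3 constant (ceteris paribus)
effect = y(4, 7, 3) - y(3, 7, 3)
print(effect)  # 2.0, i.e. exactly beta1
```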

Example: Wage Regression

• Let wages (WAGE) depend on:

– years of work experience (EXP)

– years of education (EDU)


– gender of the worker (GEND: 1 if male, 0 if female)

• Substituting into equation (1.11) yields:

WAGEi = β0 + β1EXPi + β2EDUi + β3GENDi + εi (1.12)
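To make (1.12) concrete, the sketch below plugs purely hypothetical coefficient values (not estimates from any data set) into the deterministic part of the wage equation:

```python
# Hypothetical coefficients, chosen only for illustration
b0, b_exp, b_edu, b_gend = 5.0, 0.3, 1.2, 2.0

def predicted_wage(exp_years, edu_years, male):
    """Deterministic part of (1.12): b0 + b1*EXP + b2*EDU + b3*GEND."""
    return b0 + b_exp * exp_years + b_edu * edu_years + b_gend * (1 if male else 0)

# A female worker with 10 years of experience and 16 years of education
print(round(predicted_wage(10, 16, male=False), 2))  # 5 + 3 + 19.2 = 27.2
```

The dummy variable GEND works like any other regressor; it just happens to take only the values 0 and 1.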

Indexing Conventions

• Subscript “i” for data on individuals (so called “cross section” data)

• Subscript “t” for time series data (e.g., series of years, months, or days—daily exchange rates, for example)

• Subscript “it” when we have both (for example, “panel data”)

The Estimated Regression Equation

• The regression equation considered so far is the “true”—but unknown—theoretical regression equation

• Instead of “true,” we might think of this as the population regression vs. the sample/estimated regression

• How do we obtain the empirical counterpart of the theoretical regression model (1.14)?

• It has to be estimated

• The empirical counterpart to (1.14) is:

Ŷi = β̂0 + β̂1Xi (1.16)

• The signs on top of the estimates are denoted “hat,” so that we have “Y-hat,” for example

The Estimated Regression Equation (cont.)

• For each sample we get a different set of estimated regression coefficients

• Ŷi is the estimated value of Yi (i.e. the dependent variable for observation i); similarly, it is the prediction of E(Yi|Xi) from the regression equation

• The closer Ŷi is to the observed value of Yi, the better is the “fit” of the equation

• Similarly, the smaller is the estimated error term, ei, often denoted the “residual,” the better is the fit
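Residuals are easy to compute once you have a fitted line, and comparing their squared sum across candidate lines shows what “better fit” means. A sketch on a tiny made-up sample:

```python
# Tiny made-up sample
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def residuals(b0, b1):
    """Residuals e_i = Y_i - Y_hat_i for the fitted line Y_hat = b0 + b1*X."""
    return [yv - (b0 + b1 * x) for x, yv in zip(xs, ys)]

def sum_sq(es):
    return sum(e * e for e in es)

good = sum_sq(residuals(0.2, 1.95))  # a line that tracks the data closely
bad = sum_sq(residuals(5.0, 0.0))    # a flat line far from the data
print(good < bad)  # True: smaller residuals mean a better fit
```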

The Estimated Regression Equation (cont.)

• This can also be seen from the fact that

ei = Yi − Ŷi (1.17)

• Note difference with the error term, εi, given as

εi = Yi − E(Yi|Xi) (1.18)

• This all comes together in Figure 1.3

Figure 1.3 True and Estimated Regression Lines


Example: Using Regression to Explain Housing Prices

• Houses are not homogeneous products, like corn or gold, that have generally known market prices

• So, how to appraise a house against a given asking price?


• Yes, it’s true: many real estate appraisers actually use regression analysis for this!

• Consider a specific case: suppose the asking price was $230,000

Example: Using Regression to Explain Housing Prices (cont.)

• Is this fair, too much, or too little?

• It depends on the size of the house (the larger the house, the higher the price)

• So, collect cross-sectional data on prices (in thousands of $) and sizes (in square feet) for, say, 43 houses

• Then say this yields the following estimated regression line:

P̂RICEi = 40.0 + 0.138 SIZEi (1.23)
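Evaluating the estimated line (1.23) at any size is one line of code; at 1,600 square feet it gives the figure used below:

```python
def predicted_price(size_sqft):
    """Estimated price in thousands of dollars, from equation (1.23)."""
    return 40.0 + 0.138 * size_sqft

print(round(predicted_price(1600), 1))  # 260.8, i.e. $260.8 thousand
```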

Figure 1.5 A Cross-Sectional Model of Housing Prices


Example: Using Regression to Explain Housing Prices (cont.)

• Note that the interpretation of the intercept term is problematic in this case (we’ll get back to this later, in Section 7.1.2)


• The literal interpretation of the intercept here is the price of a house with a size of zero square feet…

Example: Using Regression to Explain Housing Prices (cont.)

• How to use the estimated regression line / estimated regression coefficients to answer the question?

– Just plug the particular size of house you are interested in (here, 1,600 square feet) into (1.23)

– Alternatively, read off the estimated price using Figure 1.5


• Either way, we get an estimated price of $260.8 (thousand, remember!)

• So, in terms of our original question, it’s a good deal—go ahead and purchase!!

• Note that we simplified a lot in this example by assuming that only size matters for housing prices

Table 1.1a Data for and Results of the Weight-Guessing Equation


Table 1.1b Data for and Results of the Weight-Guessing Equation


Figure 1.4 A Weight-Guessing Equation


Key Terms from Chapter 1

• Regression analysis

• Dependent variable

• Independent (or explanatory) variable(s)

• Slope coefficient

• Multivariate regression model

• Expected value

• Causality

• Stochastic error term

• Linear

• Intercept term

• Residual

• Time series

• Cross-sectional data set