The Simple Regression Model


TRANSCRIPT

Page 1: The Simple Regression Model


The Simple Regression Model

y = β₀ + β₁x + u

Page 2: The Simple Regression Model


Some Terminology

In the simple linear regression model, where y = β₀ + β₁x + u, we typically refer to y as the Dependent Variable, or Left-Hand Side Variable, or Explained Variable, or Regressand

Page 3: The Simple Regression Model


Some Terminology

We typically refer to x as the Independent Variable, or Right-Hand Side Variable, or Explanatory Variable, or Regressor, or Control Variable

Page 4: The Simple Regression Model


A Simple Assumption

The average value of u, the error term, in the population is 0. That is,

E(u) = 0

Page 5: The Simple Regression Model


We also assume

E(u|x) = 0

Taking conditional expectations of y = β₀ + β₁x + u then gives

E(y|x) = β₀ + β₁x

Page 6: The Simple Regression Model


[Figure: E(y|x) = β₀ + β₁x as a linear function of x; for any value of x (e.g., x₁, x₂), the conditional distribution f(y) is centered about E(y|x).]

Page 7: The Simple Regression Model


Ordinary Least Squares (OLS)

Let {(xᵢ, yᵢ): i = 1, …, n} denote a random sample of size n from the population

For each observation in this sample, it will be the case that

yᵢ = β₀ + β₁xᵢ + uᵢ
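For instance, here is a minimal NumPy sketch of drawing such a sample from the population model; the parameter values, sample size, and distributions are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population parameters and sample size (values assumed for illustration)
beta0, beta1 = 1.0, 0.5
n = 100

# Random sample {(x_i, y_i): i = 1, ..., n} generated from the population model
x = rng.normal(loc=5.0, scale=2.0, size=n)   # regressor
u = rng.normal(loc=0.0, scale=1.0, size=n)   # error term; E(u|x) = 0 by construction
y = beta0 + beta1 * x + u                    # y_i = beta0 + beta1 * x_i + u_i
```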

Page 8: The Simple Regression Model


[Figure: Population regression line E(y|x) = β₀ + β₁x, sample data points (x₁, y₁), …, (x₄, y₄), and the associated error terms u₁, …, u₄.]

Page 9: The Simple Regression Model


Basic idea of regression is to estimate the population parameters from a sample

Intuitively, OLS is fitting a line through the sample points such that the sum of squared residuals is as small as possible.

The residual, û, is an estimate of the error term, u, and is the difference between the fitted line (sample regression function) and the sample point

Page 10: The Simple Regression Model


[Figure: Sample regression line ŷ = β̂₀ + β̂₁x, sample data points, and the associated estimated error terms (residuals) û₁, …, û₄.]

Page 11: The Simple Regression Model


One approach to estimating the coefficients

Given the intuitive idea of fitting a line, we can set up a formal minimization problem

That is, we want to choose our parameters such that we minimize the following:

∑ᵢ₌₁ⁿ ûᵢ² = ∑ᵢ₌₁ⁿ (yᵢ − β̂₀ − β̂₁xᵢ)²
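To make the minimization concrete, here is a short sketch that feeds this objective to a generic numerical optimizer (SciPy's minimize, assumed available); the toy data values are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up toy sample
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

def ssr(b):
    """Sum of squared residuals for a candidate intercept b[0] and slope b[1]."""
    residuals = y - b[0] - b[1] * x
    return np.sum(residuals ** 2)

# Minimize the objective numerically over (beta0_hat, beta1_hat)
result = minimize(ssr, x0=np.zeros(2))
print(result.x)   # numerical minimizer, essentially the OLS estimates
```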

Page 12: The Simple Regression Model


It can be shown that the estimated slope coefficient is

β̂₁ = ∑ᵢ₌₁ⁿ (xᵢ − x̄)(yᵢ − ȳ) / ∑ᵢ₌₁ⁿ (xᵢ − x̄)²

Page 13: The Simple Regression Model


Summary of OLS slope estimate

The slope estimate is the sample covariance between x and y divided by the sample variance of x

If x and y are positively correlated, the slope will be positive

If x and y are negatively correlated, the slope will be negative
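In code, the closed-form estimates can be computed directly from this covariance/variance form; the toy data below are assumed for illustration:

```python
import numpy as np

# Made-up toy sample
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

# Slope: sample covariance of x and y divided by sample variance of x
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Intercept: the fitted line passes through the point of sample means
beta0_hat = y.mean() - beta1_hat * x.mean()

print(beta0_hat, beta1_hat)
```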

Page 14: The Simple Regression Model


Algebraic Properties of OLS: in English

The sum of the OLS residuals is zero

Thus, the sample average of the OLS residuals is zero as well

The sample covariance between the regressors and the OLS residuals is zero

The OLS regression line always passes through the point of sample means, (x̄, ȳ)

Page 15: The Simple Regression Model


Algebraic Properties of OLS: in mathematics

∑ᵢ₌₁ⁿ ûᵢ = 0, and thus (1/n) ∑ᵢ₌₁ⁿ ûᵢ = 0

∑ᵢ₌₁ⁿ xᵢ ûᵢ = 0

ȳ = β̂₀ + β̂₁x̄
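These properties are easy to verify numerically; a sketch using the same made-up toy sample as above:

```python
import numpy as np

# Same made-up toy sample and OLS estimates as before
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0_hat = y.mean() - beta1_hat * x.mean()

u_hat = y - (beta0_hat + beta1_hat * x)   # OLS residuals

print(np.isclose(u_hat.sum(), 0.0))                              # residuals sum to zero
print(np.isclose(np.sum(x * u_hat), 0.0))                        # zero sample covariance with x
print(np.isclose(y.mean(), beta0_hat + beta1_hat * x.mean()))    # line passes through (x_bar, y_bar)
```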

Page 16: The Simple Regression Model


More terminology

We can think of each observation as being made up of an explained part and an unexplained part, yᵢ = ŷᵢ + ûᵢ. We then define the following:

∑ (yᵢ − ȳ)² is the total sum of squares (SST)

∑ (ŷᵢ − ȳ)² is the explained sum of squares (SSE)

∑ ûᵢ² is the residual sum of squares (SSR)

Then SST = SSE + SSR

Page 17: The Simple Regression Model


Notation Alert

The notation SSR (Sum of Squared Residuals) in this handout and my other lecture slides = ESS (Error Sum of Squares) in our textbook.

The notation SSE (Explained Sum of Squares) in this handout and my other lecture slides = RSS (Regression Sum of Squares) in our textbook.

Page 18: The Simple Regression Model


Goodness-of-Fit

How do we think about how well our sample regression line fits our sample data?

We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression

R² = SSE/SST = 1 − SSR/SST
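A sketch that computes the decomposition and the R-squared on the made-up toy sample (SST = SSE + SSR should hold up to floating-point error):

```python
import numpy as np

# Same made-up toy sample and OLS fit as before
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0_hat = y.mean() - beta1_hat * x.mean()
y_hat = beta0_hat + beta1_hat * x
u_hat = y - y_hat

SST = np.sum((y - y.mean()) ** 2)        # total sum of squares
SSE = np.sum((y_hat - y.mean()) ** 2)    # explained sum of squares
SSR = np.sum(u_hat ** 2)                 # residual sum of squares

R2 = SSE / SST                           # equivalently 1 - SSR / SST
print(np.isclose(SST, SSE + SSR), R2)
```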

Page 19: The Simple Regression Model


OLS regressions

Now that we’ve derived the formula for calculating the OLS estimates of our parameters, you’ll be happy to know you don’t have to compute them by hand

Regressions in GRETL are very simple.

Have you installed the software yet?

Page 20: The Simple Regression Model


Under some conditions, OLS estimated coefficients are unbiased.

Define Sₓ = ∑ᵢ₌₁ⁿ (xᵢ − x̄)². It can be shown that

β̂₁ = β₁ + (1/Sₓ) ∑ᵢ₌₁ⁿ (xᵢ − x̄) uᵢ

so that

E(β̂₁) = β₁ + (1/Sₓ) ∑ᵢ₌₁ⁿ (xᵢ − x̄) E(uᵢ) = β₁

Page 21: The Simple Regression Model


Unbiasedness Summary

The OLS estimates of β₁ and β₀ are unbiased

Remember unbiasedness is a description of the estimator – in a given sample we may be “near” or “far” from the true parameter
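A small Monte Carlo sketch illustrates this: each replication yields a different slope estimate, but their average is very close to the assumed true β₁ (all parameter values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 0.5          # assumed true population parameters
n, reps = 50, 5000               # sample size and number of replications

slopes = np.empty(reps)
for r in range(reps):
    x = rng.normal(5.0, 2.0, size=n)
    u = rng.normal(0.0, 1.0, size=n)
    y = beta0 + beta1 * x + u
    slopes[r] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Individual estimates are "near" or "far" from 0.5, but their average is very close to it
print(slopes.mean(), slopes.std())
```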

Page 22: The Simple Regression Model


Variance of the OLS Estimators

Now we know that the sampling distribution of our estimated coefficient is centered around the true parameter

Want to think about how spread out this distribution is

Assume Var(u|x) = Var(u) = σ²

Page 23: The Simple Regression Model


Variance of OLS estimators

σ² is called the error variance

σ, the square root of the error variance, is called the standard deviation of the error

E(y|x) = β₀ + β₁x and Var(y|x) = σ²

Page 24: The Simple Regression Model


Variance of OLS estimator

Var(β̂₁) = σ² / Sₓ
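As a sketch, with made-up x values and an assumed error variance:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # made-up regressor values
sigma2 = 1.0                               # assumed error variance

S_x = np.sum((x - x.mean()) ** 2)          # total variation in x
var_beta1_hat = sigma2 / S_x               # Var(beta1_hat) = sigma^2 / S_x
print(var_beta1_hat)
```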

Page 25: The Simple Regression Model


Variance of OLS Summary

The larger the error variance, σ², the larger the variance of the slope estimate

The larger the variability in the xᵢ, the smaller the variance of the slope estimate

A problem is that the error variance is unknown

Page 26: The Simple Regression Model


Estimating the Error Variance

We don’t know what the error variance, σ², is, because we don’t observe the errors, uᵢ

What we observe are the residuals, ûᵢ

We can use the residuals to form an estimate of the error variance

Page 27: The Simple Regression Model


Estimating the Error Variance

ûᵢ = yᵢ − β̂₀ − β̂₁xᵢ

An unbiased estimator of σ² is

σ̂² = (1/(n − 2)) ∑ᵢ₌₁ⁿ ûᵢ² = SSR/(n − 2)
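A sketch of this estimator on the made-up toy sample:

```python
import numpy as np

# Same made-up toy sample and OLS residuals as before
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0_hat = y.mean() - beta1_hat * x.mean()
u_hat = y - beta0_hat - beta1_hat * x

SSR = np.sum(u_hat ** 2)
sigma2_hat = SSR / (n - 2)     # unbiased estimator of the error variance
print(sigma2_hat)
```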

Page 28: The Simple Regression Model


Estimating the Standard Error of the Coefficient Estimate

se(β̂₁) = σ̂ / √( ∑ᵢ₌₁ⁿ (xᵢ − x̄)² ) = σ̂ / √Sₓ
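Putting the pieces together on the toy sample (values assumed for illustration):

```python
import numpy as np

# Same made-up toy sample as before
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)

S_x = np.sum((x - x.mean()) ** 2)
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / S_x
beta0_hat = y.mean() - beta1_hat * x.mean()
u_hat = y - beta0_hat - beta1_hat * x

sigma_hat = np.sqrt(np.sum(u_hat ** 2) / (n - 2))   # estimated standard deviation of the error
se_beta1_hat = sigma_hat / np.sqrt(S_x)             # standard error of the slope estimate
print(se_beta1_hat)
```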