Source: course1.winona.edu/bdeppa/FIN 335/Handouts/Exponential...

Ch 7 - Exponential Smoothing

Rob Hyndman (with Deppa edits/additions)

March 10, 2021

Table of Contents

7.0 - Introduction to Exponential Smoothing
7.1 - Simple Exponential Smoothing
    Example 7.1 - Saudi Arabia Oil Production (1996 - 2013)
    Weighted Average Form
    Component Form
    Flat Forecasts
    More on Optimization
    Example 7.2 - Weekly Closing Value - Fastenal Stock (3/13/2013 - 3/12/2018)

7.0 - Introduction to Exponential Smoothing

Exponential smoothing was proposed in the late 1950s (Brown 1959; Holt 1957; Winters 1960), and has motivated some of the most successful forecasting methods. Forecasts produced using exponential smoothing methods are weighted averages of past observations, with the weights decaying exponentially as the observations get older. In other words, the more recent the observation the higher the associated weight. This framework generates reliable forecasts quickly and for a wide range of time series, which is a great advantage and of major importance to applications in industry.

This chapter is divided into two parts. In the first part (Sections 7.1-7.4) we present the mechanics of the most important exponential smoothing methods, and their application in forecasting time series with various characteristics. This helps us develop an intuition to how these methods work. In this setting, selecting and using a forecasting method may appear to be somewhat ad hoc. The selection of the method is generally based on recognising key components of the time series (trend and seasonal) and the way in which these enter the smoothing method (e.g., in an additive, damped or multiplicative manner).

In the second part of the chapter (Sections 7.5-7.7) we present the statistical models that underlie exponential smoothing methods. These models generate identical point forecasts to the methods discussed in the first part of the chapter, but also generate prediction intervals. Furthermore, this statistical framework allows for genuine model selection between competing models.


7.1 - Simple Exponential Smoothing

The simplest of the exponentially smoothing methods is naturally called simple or single exponential smoothing (SES). This method is suitable for forecasting data with no clear trend or seasonal pattern. For example, the plot of oil production in Saudi Arabia (1996-2013) does not display any clear trending behaviour or any seasonality. (There is a rise in the last few years, which might suggest a trend. We will consider whether a trended method would be better for this series later in this chapter.) We have already considered the naïve and the average as possible methods for forecasting such data (Section 3.1).

Example 7.1 - Saudi Arabia Oil Production (1996 - 2013)

require(fpp2)

## Loading required package: fpp2

## Loading required package: ggplot2

## Loading required package: forecast

## Loading required package: fma

## Loading required package: expsmooth

oildata = window(oil, start=1996)
autoplot(oildata) + xlab("Year") + ylab("Oil (millions of tons)")

Using the naïve method, all forecasts for the future are equal to the last observed value of the series,

ŷ_{T+h|T} = y_T ,

for h = 1, 2, …. Hence, the naïve method assumes that the most recent observation is the only important one, and all previous observations provide no information for the future. This can be thought of as a weighted average where all of the weight is given to the last observation.


Using the average method, all future forecasts are equal to a simple average of the observed data,

ŷ_{T+h|T} = (1/T) ∑_{t=1}^{T} y_t ,

for h = 1, 2, …. Hence, the average method assumes that all observations are of equal importance, and gives them equal weights when generating forecasts.

We often want something between these two extremes. For example, it may be sensible to attach larger weights to more recent observations than to observations from the distant past. This is exactly the concept behind simple exponential smoothing. Forecasts are calculated using weighted averages, where the weights decrease exponentially as observations come from further in the past - the smallest weights are associated with the oldest observations:

ŷ_{T+1|T} = α y_T + α(1−α) y_{T−1} + α(1−α)² y_{T−2} + α(1−α)³ y_{T−3} + … ,

where 0 ≤ α ≤ 1 is the smoothing parameter. The one-step-ahead forecast for time T+1 is a weighted average of all of the observations in the series y_1, y_2, …, y_T. The rate at which the weights decrease is controlled by the parameter α.

The table below shows the weights attached to observations for four different values of α when forecasting using simple exponential smoothing. Note that the sum of the weights even for a small value of α will be approximately one for any reasonable sample size.

            α = 0.2   α = 0.4   α = 0.6   α = 0.8
y_T          0.2000    0.4000    0.6000    0.8000
y_{T−1}      0.1600    0.2400    0.2400    0.1600
y_{T−2}      0.1280    0.1440    0.0960    0.0320
y_{T−3}      0.1024    0.0864    0.0384    0.0064
y_{T−4}      0.0819    0.0518    0.0154    0.0013
y_{T−5}      0.0655    0.0311    0.0061    0.0003

For any α between 0 and 1, the weights attached to the observations decrease exponentially as we go back in time, hence the name “exponential smoothing”. If α is small (i.e., close to 0), more weight is given to observations from the more distant past. If α is large (i.e., close to 1), more weight is


given to the more recent observations. For the extreme case where α = 1, ŷ_{T+1|T} = y_T, and the forecasts are equal to the naïve forecasts.
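The weight pattern in the table above is easy to reproduce directly. Here is a short sketch (in Python purely for illustration; the handout's worked examples use R) of the SES weights α(1−α)^j attached to y_{T−j}:

```python
def ses_weights(alpha, n):
    """Weights that SES attaches to y_T, y_{T-1}, ..., y_{T-n+1}."""
    return [alpha * (1 - alpha) ** j for j in range(n)]

# Reproduce the four columns of the table above.
for alpha in (0.2, 0.4, 0.6, 0.8):
    print(alpha, [round(w, 4) for w in ses_weights(alpha, 6)])

# The first n weights sum to 1 - (1 - alpha)^n, which approaches 1 as n
# grows -- this is why the weights are "approximately one" in the text.
```

The geometric decay is visible immediately: each column is the previous weight multiplied by (1 − α).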

Weighted Average Form

The forecast at time T+1 is equal to a weighted average between the most recent observation y_T and the previous forecast ŷ_{T|T−1}:

ŷ_{T+1|T} = α y_T + (1−α) ŷ_{T|T−1} ,

where 0 ≤ α ≤ 1 is the smoothing parameter. Similarly, we can write the fitted values as

ŷ_{t+1|t} = α y_t + (1−α) ŷ_{t|t−1} ,

for t = 1, …, T. Recall that the fitted values (ŷ_{t|t−1}) are simply one-step forecasts of the training data.
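To see the recursion in action, here is a minimal sketch (in Python for illustration, with made-up numbers rather than the oil series) of how the fitted values are generated one step at a time:

```python
def ses_fitted(y, alpha, l0):
    """One-step-ahead fitted values: fits[t] forecasts y[t], starting from l0."""
    fits = [l0]                      # y-hat_{1|0} is the initial level l0
    for obs in y[:-1]:
        # y-hat_{t+1|t} = alpha * y_t + (1 - alpha) * y-hat_{t|t-1}
        fits.append(alpha * obs + (1 - alpha) * fits[-1])
    return fits

y = [445.4, 453.2, 454.4, 422.4]     # toy values, not the actual series
fits = ses_fitted(y, alpha=0.7, l0=447.0)
# The forecast for the next period would be alpha*y[-1] + (1-alpha)*fits[-1].
```

Each fitted value uses only data observed before its time point, which is what makes these genuine one-step forecasts.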

Example 7.1 - Saudi Oil Production (cont’d)

length(oildata)

## [1] 18

oildata

## Time Series:
## Start = 1996 
## End = 2013 
## Frequency = 1 
##  [1] 445.3641 453.1950 454.4096 422.3789 456.0371 440.3866 425.1944
##  [8] 486.2052 500.4291 521.2759 508.9476 488.8889 509.8706 456.7229
## [15] 473.8166 525.9509 549.8338 542.3405

oiltrain = head(oildata, 15)
oiltest = tail(oildata, 3)
fc = ses(oiltrain, h=3)
checkresiduals(fc)


## 
## 	Ljung-Box test
## 
## data:  Residuals from Simple exponential smoothing
## Q* = 3.8387, df = 3, p-value = 0.2794
## 
## Model df: 2.   Total lags used: 5

fc

##      Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2011       473.3383 437.7605 508.9162 418.9267 527.7499
## 2012       473.3383 430.3424 516.3342 407.5818 539.0949
## 2013       473.3383 424.0280 522.6487 397.9246 548.7520

summary(fc)

## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:
##  ses(y = oiltrain, h = 3) 
## 
##   Smoothing parameters:
##     alpha = 0.6786 
## 
##   Initial states:
##     l = 447.2459 
## 
##   sigma:  27.7615
## 
##      AIC     AICc      BIC 
## 144.1838 146.3656 146.3079 
## 
## Error measures:
##                    ME     RMSE      MAE       MPE     MAPE      MASE
## Training set 2.563418 25.84457 19.93024 0.3568356 4.234298 0.8578237
##                     ACF1
## Training set -0.02914761
## 
## Forecasts:
##      Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2011       473.3383 437.7605 508.9162 418.9267 527.7499
## 2012       473.3383 430.3424 516.3342 407.5818 539.0949
## 2013       473.3383 424.0280 522.6487 397.9246 548.7520

accuracy(fc,oiltest)

##                      ME     RMSE      MAE        MPE      MAPE      MASE
## Training set   2.563418 25.84457 19.93024  0.3568356  4.234298 0.8578237
## Test set      66.036715 66.78556 66.03672 12.2129395 12.212940 2.8423068
##                     ACF1 Theil's U


## Training set -0.02914761        NA
## Test set     -0.36658865  4.051901


autoplot(fc) + ylab("Oil (millions of tonnes)") + xlab("Year") +
  autolayer(fitted(fc), series="Fitted") +
  autolayer(oiltest, series="Test Actual") +
  guides(colour=guide_legend(title="Series"))


Two-parameter model (α, l_0)

The process has to start somewhere, so we let the first fitted value at time 1 be denoted by l_0 (which we will have to estimate along with α). Then

ŷ_{2|1} = α y_1 + (1−α) l_0
ŷ_{3|2} = α y_2 + (1−α) ŷ_{2|1}
ŷ_{4|3} = α y_3 + (1−α) ŷ_{3|2}
    ⋮
ŷ_{T|T−1} = α y_{T−1} + (1−α) ŷ_{T−1|T−2}
ŷ_{T+1|T} = α y_T + (1−α) ŷ_{T|T−1} .

Substituting each equation into the following equation, we obtain

ŷ_{3|2} = α y_2 + (1−α)[α y_1 + (1−α) l_0]
        = α y_2 + α(1−α) y_1 + (1−α)² l_0
    ⋮
ŷ_{T+1|T} = ∑_{j=0}^{T−1} α(1−α)^j y_{T−j} + (1−α)^T l_0 .

The last term becomes tiny for large T. So, the weighted average form leads to the same forecast equation as above.
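The substitution argument can also be checked numerically. The sketch below (Python, toy numbers; illustration only) computes the forecast both ways, via the recursion and via the fully expanded weighted sum, and gets the same answer:

```python
def ses_recursive(y, alpha, l0):
    """y-hat_{T+1|T} via the recursion: alpha*y_t + (1-alpha)*previous forecast."""
    f = l0
    for obs in y:
        f = alpha * obs + (1 - alpha) * f
    return f

def ses_expanded(y, alpha, l0):
    """Same forecast via sum_j alpha*(1-alpha)^j * y_{T-j} + (1-alpha)^T * l0."""
    T = len(y)
    weighted = sum(alpha * (1 - alpha) ** j * y[T - 1 - j] for j in range(T))
    return weighted + (1 - alpha) ** T * l0

y = [445.4, 453.2, 454.4, 422.4, 456.0]   # toy numbers, not the oil series
assert abs(ses_recursive(y, 0.8, 447.0) - ses_expanded(y, 0.8, 447.0)) < 1e-9
```

The `(1 - alpha) ** T * l0` term is the "last term" that becomes tiny as T grows.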

Example 7.1 - Saudi Oil Production (cont’d)

fc = ses(oildata, h=5)
summary(fc)

## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:
##  ses(y = oildata, h = 5) 
## 
##   Smoothing parameters:
##     alpha = 0.8339 
## 
##   Initial states:
##     l = 446.5868 
## 
##   sigma:  29.8282
## 
##      AIC     AICc      BIC 
## 178.1430 179.8573 180.8141 
## 
## Error measures:
##                    ME     RMSE     MAE      MPE     MAPE      MASE
## Training set 6.401975 28.12234 22.2587 1.097574 4.610635 0.9256774
##                     ACF1
## Training set -0.03377748
## 
## Forecasts:
##      Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2014       542.6806 504.4541 580.9070 484.2183 601.1429
## 2015       542.6806 492.9073 592.4539 466.5589 618.8023
## 2016       542.6806 483.5747 601.7864 452.2860 633.0752


## 2017       542.6806 475.5269 609.8343 439.9778 645.3834
## 2018       542.6806 468.3452 617.0159 428.9945 656.3667

Notice that the optimized values are α = 0.83 and l_0 = 446.59, obtained by minimizing the SSE over periods t = 1, 2, …, 18, subject to the restriction that 0 ≤ α ≤ 1. That is, α and l_0 are chosen so that

SSE = ∑_{t=1}^{T} (y_t − ŷ_{t|t−1})² = ∑_{t=1}^{T} e_t²

is minimized as a function of α ∈ [0, 1] and l_0.

Component Form

An alternative representation of SES is the component form. For SES, the only component included is the level at time t, l_t. Other methods we will be considering in subsequent sections may also include a trend b_t and a seasonal component s_t. Component form representations of exponential smoothing methods comprise a forecast equation and a smoothing equation for each of the components included in the model. The component form of SES is given by

Forecast equation:   ŷ_{t+h|t} = l_t
Smoothing equation:  l_t = α y_t + (1−α) l_{t−1} ,

where l_t is the level (or the smoothed value) of the series at time t. Setting h = 1 gives the fitted values, while setting t = T gives the true forecasts beyond the training data.

The forecast equation shows that the forecast value at time t+1 is the estimated level at time t. The smoothing equation for the level (usually referred to as the level equation) gives the estimated level of the series at each period t.

If we replace l_t with ŷ_{t+1|t} and l_{t−1} with ŷ_{t|t−1} in the smoothing equation, we recover the weighted average form of simple exponential smoothing (SES).

The component form of SES is not particularly useful on its own, but it will be the easiest form to use when we start adding other components (i.e., b_t and s_t) to the model.


Flat Forecasts

Simple exponential smoothing (SES) has a “flat” forecast function:

ŷ_{T+h|T} = ŷ_{T+1|T} = l_T ,  h = 2, 3, ….

That is, all forecasts take the same value, equal to the last level component l_T. Remember that these forecasts will only be suitable if the time series has no trend or seasonal component/structure. It is also important to note that this flat forecast is not, in general, the same as the naïve forecast,

ŷ_{T+h|T} = ŷ_{T+1|T} = y_T ,  h = 2, 3, …,

since the last level l_T is a weighted average of all of the observations, not just the last one.

More on Optimization

The application of every exponential smoothing method requires the smoothing parameters and the initial values to be chosen. In particular, for simple exponential smoothing, we need to select the values of α and l_0. All forecasts can be computed from the data once we know those values. For the methods that follow there is usually more than one smoothing parameter and more than one initial component to be chosen.

In some cases, the smoothing parameters may be chosen in a subjective manner - the forecaster specifies the value of the smoothing parameters based on previous experience. However, a more reliable and objective way to obtain values for the unknown parameters is to estimate them from the observed data.

In Section 5.2, we estimated the coefficients of a regression model by minimizing the sum of the squared residuals (usually known as SSE or “sum of squared errors”). Similarly, the unknown smoothing parameters and initial values for any exponential smoothing method can be estimated by minimizing the SSE. The residuals are specified as e_t = y_t − ŷ_{t|t−1} for t = 1, 2, …, T. Hence, we find the values of the unknown parameters (l_0, α) that minimize

SSE = ∑_{t=1}^{T} (y_t − ŷ_{t|t−1})² = ∑_{t=1}^{T} e_t².

Unlike the regression case (where we have formulas which return the values of the regression coefficients that minimize the SSE), this involves a non-linear minimization problem, and we need to use an optimization tool to solve it.
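As a rough illustration of what the optimizer is doing (Python, and a crude grid search rather than the algorithm ses() actually uses), one can evaluate the SSE over a grid of candidate (α, l_0) pairs and keep the best:

```python
def sse(y, alpha, l0):
    """Sum of squared one-step errors e_t = y_t - y-hat_{t|t-1}."""
    f, total = l0, 0.0
    for obs in y:
        total += (obs - f) ** 2
        f = alpha * obs + (1 - alpha) * f   # update the one-step forecast
    return total

def fit_ses_grid(y, steps=50):
    """Crude grid search over alpha in [0, 1] and l0 in [min(y), max(y)]."""
    lo, hi = min(y), max(y)
    return min(
        ((a / steps, lo + k * (hi - lo) / steps)
         for a in range(steps + 1) for k in range(steps + 1)),
        key=lambda p: sse(y, p[0], p[1]),
    )
```

A real implementation uses a proper numerical optimizer rather than a grid, but the objective being minimized is exactly this SSE.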

Example 7.1 - Saudi Oil Production (cont’d)

oildata = window(oil, start=1996)
# Estimate parameters in the SES model
fc = ses(oildata, h=3)
summary(fc)

## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 


## Call:
##  ses(y = oildata, h = 3) 
## 
##   Smoothing parameters:
##     alpha = 0.8339 
## 
##   Initial states:
##     l = 446.5868 
## 
##   sigma:  29.8282
## 
##      AIC     AICc      BIC 
## 178.1430 179.8573 180.8141 
## 
## Error measures:
##                    ME     RMSE     MAE      MPE     MAPE      MASE
## Training set 6.401975 28.12234 22.2587 1.097574 4.610635 0.9256774
##                     ACF1
## Training set -0.03377748
## 
## Forecasts:
##      Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2014       542.6806 504.4541 580.9070 484.2183 601.1429
## 2015       542.6806 492.9073 592.4539 466.5589 618.8023
## 2016       542.6806 483.5747 601.7864 452.2860 633.0752

# Accuracy of one-step-ahead training errors over periods t = 1,2,...,18.
round(accuracy(fc), 2)

##               ME  RMSE   MAE MPE MAPE MASE  ACF1
## Training set 6.4 28.12 22.26 1.1 4.61 0.93 -0.03

res = residuals(fc)
mean(abs(res))

## [1] 22.2587

sqrt(mean(res^2))

## [1] 28.12234

mean(100*abs(res)/oildata)

## [1] 4.610635


In Table 7.2 we demonstrate the calculation using these parameters. The second-to-last column shows the estimated level for times t = 0 to t = 18; the last few rows of the last column show the forecasts for h = 1, 2, 3.

fc2 = naive(oildata, h=3)
autoplot(fc) + ggtitle("Forecasts from SES for Saudi Oil Production") +
  autolayer(fitted(fc), series="SES Fitted") +
  autolayer(fitted(fc2), series="Naive Fitted") +
  ylab("Oil (millions of tons)") + xlab("Year")

## Warning: Removed 1 rows containing missing values (geom_path).


The forecasts for the period 2014-2016 are plotted above. Also plotted are the one-step-ahead fitted values (ŷ_{t|t−1}) alongside the data over the period 1996-2013. The large value of α in this example (α = 0.83) is reflected in the large adjustment that takes place in the estimated level l_t at each time. A smaller value of α would lead to smaller changes over time, and so the series of fitted values would be smoother. For comparison, the fitted values from a naïve fit have been added to the plot. We can see very little difference between the two, which is to be expected for an α this large, as most of the weight is placed on the previous observation, i.e.,

ŷ_{t|t−1} = α y_{t−1} + (1−α) ŷ_{t−1|t−2} ≈ y_{t−1}

when α is close to 1.

The prediction intervals shown here are calculated using the methods described in Section 7.5. The prediction intervals show that there is considerable uncertainty in the future values of oil production over the three-year forecast period. So interpreting the point forecasts without accounting for the large uncertainty can be very misleading.

For purposes of comparison consider the naive and SES forecasts (h=3) for these data.

# Compare forecasts for SES and Naive for these data
fc

##      Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2014       542.6806 504.4541 580.9070 484.2183 601.1429
## 2015       542.6806 492.9073 592.4539 466.5589 618.8023
## 2016       542.6806 483.5747 601.7864 452.2860 633.0752

fc2

##      Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2014       542.3405 504.8963 579.7846 485.0746 599.6064
## 2015       542.3405 489.3864 595.2945 461.3543 623.3267
## 2016       542.3405 477.4853 607.1956 443.1531 641.5279

autoplot(fc2) + xlab("Year") + ylab("Oil (millions of tons)") +
  ggtitle("Naive Forecasts for Saudi Oil Production")


Again we can see that the naive and SES forecasts (h=3) differ only slightly for these data.

Example 7.2 - Weekly Closing Value - Fastenal Stock (3/13/2013-3/12/2018)

This time series contains the average weekly closing value of Fastenal stock.

FastStock = read.csv(file="http://course1.winona.edu/bdeppa/FIN%20335/Datasets/Weekly%20Fastenal%20Stock.csv")
Close = ts(FastStock$WeekClose, start=c(2013,10), frequency=52)
head(Close)

## Time Series:
## Start = c(2013, 10) 
## End = c(2013, 15) 
## Frequency = 52 
## [1] 50.92 50.93 50.68 49.32 49.72 48.13

autoplot(Close) + xlab("Year") + ylab("Weekly Close ($)") + ggtitle("Fastenal Weekly Closing Value")


Let’s fit an SES model to these data and forecast the weekly closing stock price for the next month, i.e., h = 4.

fc = ses(Close, h=4)
summary(fc)

## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:
##  ses(y = Close, h = 4) 
## 
##   Smoothing parameters:
##     alpha = 0.9999 
## 
##   Initial states:
##     l = 50.9173 
## 
##   sigma:  1.1558
## 
##      AIC     AICc      BIC 
## 1538.769 1538.862 1549.474 
## 
## Error measures:
##                      ME     RMSE       MAE        MPE     MAPE      MASE
## Training set 0.02631002 1.151378 0.8683093 0.01633136 1.910426 0.2073822


##                   ACF1
## Training set 0.1804845
## 
## Forecasts:
##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2018.212       57.80988 56.32866 59.29109 55.54455 60.07520
## 2018.231       57.80988 55.71523 59.90452 54.60639 61.01336
## 2018.250       57.80988 55.24451 60.37524 53.88648 61.73327
## 2018.269       57.80988 54.84767 60.77208 53.27957 62.34018

autoplot(fc) + xlab("Year") + ylab("Weekly Close ($)") + ggtitle("Fastenal Weekly Closing Value - SES with *optimal* values")

fc

##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2018.212       57.80988 56.32866 59.29109 55.54455 60.07520
## 2018.231       57.80988 55.71523 59.90452 54.60639 61.01336
## 2018.250       57.80988 55.24451 60.37524 53.88648 61.73327
## 2018.269       57.80988 54.84767 60.77208 53.27957 62.34018

Notice that α = 0.9999 ≈ 1.0, so SES here is virtually identical to naïve forecasting. This is because the naïve forecast is the special case of SES with α = 1.0.
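This claim follows directly from the level equation, as a tiny sketch shows (Python, toy prices; for illustration only):

```python
def ses_forecast(y, alpha, l0):
    """Flat SES forecast: the last smoothed level l_T."""
    level = l0
    for obs in y:
        level = alpha * obs + (1 - alpha) * level
    return level

prices = [50.92, 50.93, 50.68, 49.32]    # toy values
# With alpha = 1 the update l_t = 1*y_t + 0*l_{t-1} keeps only the latest
# observation, so the SES forecast equals the naive forecast y_T.
assert ses_forecast(prices, 1.0, 45.0) == prices[-1]
```

The initial level l_0 has no influence at all when α = 1, which is why ses() reports an essentially naïve fit here.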

We can force our own choice for α and visualize the results. We will explore this using the weekly close values for Fastenal stock.

fc.1 = ses(Close, alpha=.1, initial="simple", h=4)
fc.2 = ses(Close, alpha=.2, initial="simple", h=4)
fc.4 = ses(Close, alpha=.4, initial="simple", h=4)
fc.6 = ses(Close, alpha=.6, initial="simple", h=4)
autoplot(Close) + xlab("Year") + ylab("Weekly Close ($)") +
  autolayer(fitted(fc.1), series="Alpha = 0.1") +
  autolayer(fitted(fc.2), series="Alpha = 0.2") +
  autolayer(fitted(fc.4), series="Alpha = 0.4") +
  autolayer(fitted(fc.6), series="Alpha = 0.6")


Comparing forecasts…

# Alpha = 0.1
fc.1

##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2018.212       53.10981 49.72119 56.49843 47.92736 58.29226
## 2018.231       53.10981 49.70429 56.51533 47.90151 58.31811
## 2018.250       53.10981 49.68747 56.53215 47.87579 58.34383
## 2018.269       53.10981 49.67073 56.54889 47.85020 58.36942

# Alpha = 0.2
fc.2

##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2018.212       55.16948 52.53882 57.80013 51.14623 59.19272
## 2018.231       55.16948 52.48672 57.85223 51.06656 59.27239
## 2018.250       55.16948 52.43562 57.90333 50.98840 59.35055
## 2018.269       55.16948 52.38545 57.95350 50.91168 59.42728

# Alpha = 0.4
fc.4

##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2018.212       56.32953 54.33874 58.32032 53.28488 59.37418
## 2018.231       56.32953 54.18538 58.47367 53.05034 59.60872
## 2018.250       56.32953 54.04228 58.61677 52.83149 59.82756
## 2018.269       56.32953 53.90763 58.75143 52.62555 60.03350

# Alpha = 0.6
fc.6

##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2018.212       57.00742 55.29786 58.71697 54.39288 59.62195
## 2018.231       57.00742 55.01375 59.00108 53.95836 60.05647
## 2018.250       57.00742 54.76535 59.24948 53.57848 60.43635
## 2018.269       57.00742 54.54186 59.47297 53.23667 60.77816

We can use split-sample validation (a single training/test split) to compare forecast performance. Working with the portion of the series from 2016 onward, we will use the last 26 weeks as a test set and the first 89 weeks to train the model.

Close.sub = window(Close, start=2016)
Close.test = tail(Close.sub, 26)


Close.train = head(Close.sub, 89)
fc.0 = ses(Close.train, alpha=0, initial="simple", h=26)
fc.05 = ses(Close.train, alpha=.05, initial="simple", h=26)
fc.1 = ses(Close.train, alpha=.1, initial="simple", h=26)
fc.2 = ses(Close.train, alpha=.2, initial="simple", h=26)
fc.4 = ses(Close.train, alpha=.4, initial="simple", h=26)
fc.6 = ses(Close.train, alpha=.6, initial="simple", h=26)
fc.8 = ses(Close.train, alpha=.8, initial="simple", h=26)
fc.opt = ses(Close.train, h=26)

summary(fc.opt)

## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:
##  ses(y = Close.train, h = 26) 
## 
##   Smoothing parameters:
##     alpha = 0.9999 
## 
##   Initial states:
##     l = 39.6102 
## 
##   sigma:  1.2134
## 
##      AIC     AICc      BIC 
## 437.8903 438.1726 445.3562 
## 
## Error measures:
##                      ME     RMSE       MAE        MPE     MAPE      MASE
## Training set 0.02977534 1.199652 0.9136153 0.03497786 2.074689 0.2752526
##                   ACF1
## Training set 0.2754556
## 
## Forecasts:
##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2017.712       42.25998 40.70499 43.81497 39.88183 44.63813
## 2017.731       42.25998 40.06100 44.45895 38.89694 45.62302
## 2017.750       42.25998 39.56684 44.95312 38.14118 46.37878
## 2017.769       42.25998 39.15024 45.36972 37.50404 47.01592
## 2017.788       42.25998 38.78320 45.73676 36.94270 47.57725
## 2017.808       42.25998 38.45137 46.06859 36.43521 48.08474
## 2017.827       42.25998 38.14622 46.37373 35.96853 48.55143
## 2017.846       42.25998 37.86219 46.65776 35.53415 48.98581
## 2017.865       42.25998 37.59543 46.92453 35.12617 49.39379
## 2017.885       42.25998 37.34312 47.17684 34.74029 49.77966
## 2017.904       42.25998 37.10314 47.41682 34.37327 50.14668
## 2017.923       42.25998 36.87384 47.64612 34.02259 50.49737
## 2017.942       42.25998 36.65391 47.86605 33.68624 50.83372
## 2017.962       42.25998 36.44229 48.07767 33.36259 51.15737
## 2017.981       42.25998 36.23810 48.28186 33.05031 51.46965
## 2018.000       42.25998 36.04061 48.47934 32.74828 51.77168
## 2018.019       42.25998 35.84920 48.67075 32.45555 52.06441
## 2018.038       42.25998 35.66335 48.85661 32.17130 52.34865
## 2018.058       42.25998 35.48259 49.03737 31.89485 52.62510
## 2018.077       42.25998 35.30652 49.21343 31.62559 52.89437
## 2018.096       42.25998 35.13481 49.38515 31.36297 53.15698
## 2018.115       42.25998 34.96714 49.55282 31.10654 53.41342
## 2018.135       42.25998 34.80323 49.71672 30.85587 53.66408
## 2018.154       42.25998 34.64286 49.87710 30.61060 53.90936
## 2018.173       42.25998 34.48579 50.03417 30.37038 54.14958
## 2018.192       42.25998 34.33183 50.18813 30.13492 54.38503

# Alpha = 0
round(accuracy(fc.0, Close.test), 2)

##                ME  RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 3.47  4.90 4.01  7.18  8.58 1.21 0.93        NA
## Test set     9.78 10.66 9.78 18.55 18.55 2.95 0.84      7.52


# Alpha = 0.05
round(accuracy(fc.05, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.72 3.41 2.98  1.12  6.55 0.90 0.93        NA
## Test set     6.58 7.83 6.67 12.25 12.46 2.01 0.84      5.46

# Alpha = 0.1
round(accuracy(fc.1, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.25 2.99 2.62  0.16  5.80 0.79 0.92        NA
## Test set     7.57 8.68 7.58 14.21 14.21 2.28 0.84      6.08

# Alpha = 0.2
round(accuracy(fc.2, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.07 2.36 1.92 -0.09  4.28 0.58 0.87        NA
## Test set     8.58 9.58 8.58 16.20 16.20 2.59 0.84      6.73

# Alpha = 0.4
round(accuracy(fc.4, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.02 1.74 1.32 -0.07  2.98 0.40 0.75        NA
## Test set     8.98 9.94 8.98 16.99 16.99 2.71 0.84         7

# Alpha = 0.6
round(accuracy(fc.6, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.01 1.45 1.09 -0.04  2.47 0.33 0.61        NA
## Test set     9.01 9.96 9.01 17.04 17.04 2.71 0.84      7.01

# Alpha = 0.8
round(accuracy(fc.8, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.01 1.28 0.96 -0.02  2.18 0.29 0.47        NA
## Test set     8.93 9.89 8.93 16.88 16.88 2.69 0.84      6.96

# Alpha = 0.9999
round(accuracy(fc.opt, Close.test), 2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.03 1.20 0.91  0.03  2.07 0.28 0.28        NA
## Test set     8.86 9.82 8.86 16.73 16.73 2.67 0.84      6.91

# Plot "best" SES results (alpha = 0.05)

summary(fc.05)

## 
## Forecast method: Simple exponential smoothing
## 
## Model Information:
## Simple exponential smoothing 
## 
## Call:


##  ses(y = Close.train, h = 26, initial = "simple", alpha = 0.05) 
## 
##   Smoothing parameters:
##     alpha = 0.05 
## 
##   Initial states:
##     l = 41.34 
## 
##   sigma:  3.4108
## Error measures:
##                    ME     RMSE      MAE      MPE     MAPE      MASE
## Training set 0.718573 3.410761 2.976965 1.116049 6.550505 0.8968953
##                   ACF1
## Training set 0.9298144
## 
## Forecasts:
##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2017.712       44.53765 40.16658 48.90872 37.85268 51.22262
## 2017.731       44.53765 40.16112 48.91418 37.84433 51.23097
## 2017.750       44.53765 40.15567 48.91963 37.83599 51.23931
## 2017.769       44.53765 40.15022 48.92508 37.82766 51.24764
## 2017.788       44.53765 40.14478 48.93052 37.81934 51.25596
## 2017.808       44.53765 40.13935 48.93595 37.81103 51.26427
## 2017.827       44.53765 40.13392 48.94138 37.80273 51.27257
## 2017.846       44.53765 40.12850 48.94680 37.79444 51.28086
## 2017.865       44.53765 40.12309 48.95221 37.78616 51.28914
## 2017.885       44.53765 40.11768 48.95762 37.77789 51.29741
## 2017.904       44.53765 40.11228 48.96302 37.76963 51.30567
## 2017.923       44.53765 40.10689 48.96841 37.76139 51.31391
## 2017.942       44.53765 40.10150 48.97380 37.75315 51.32215
## 2017.962       44.53765 40.09612 48.97918 37.74492 51.33038
## 2017.981       44.53765 40.09075 48.98455 37.73670 51.33860
## 2018.000       44.53765 40.08538 48.98992 37.72849 51.34681
## 2018.019       44.53765 40.08002 48.99528 37.72029 51.35501
## 2018.038       44.53765 40.07466 49.00064 37.71210 51.36320
## 2018.058       44.53765 40.06932 49.00598 37.70392 51.37138
## 2018.077       44.53765 40.06397 49.01132 37.69575 51.37955
## 2018.096       44.53765 40.05864 49.01666 37.68759 51.38771
## 2018.115       44.53765 40.05331 49.02199 37.67944 51.39586
## 2018.135       44.53765 40.04799 49.02731 37.67130 51.40400
## 2018.154       44.53765 40.04267 49.03263 37.66317 51.41213
## 2018.173       44.53765 40.03736 49.03794 37.65505 51.42025
## 2018.192       44.53765 40.03206 49.04324 37.64694 51.42836

autoplot(Close.train) + xlab("Year") + ylab("Weekly Close ($)") +
  ggtitle("SES (alpha = 0.05)") +
  autolayer(Close.test, series = "Actual") +
  autolayer(fc.05$mean, series = "Forecast (alpha = .05)")

# Compare to Mean and Naive Forecasts
fc.mean = meanf(Close.train, h = 26)
fc.mean

##          Point Forecast    Lo 80    Hi 80   Lo 95    Hi 95
## 2017.712       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.731       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.750       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.769       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.788       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.808       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.827       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.846       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.865       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.885       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.904       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.923       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.942       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.962       44.80618 40.28665 49.32571 37.8504 51.76195
## 2017.981       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.000       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.019       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.038       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.058       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.077       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.096       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.115       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.135       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.154       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.173       44.80618 40.28665 49.32571 37.8504 51.76195
## 2018.192       44.80618 40.28665 49.32571 37.8504 51.76195

round(accuracy(fc.mean,Close.test),2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.00 3.46 2.82 -0.60  6.32 0.85 0.93        NA
## Test set     6.31 7.61 6.45 11.72 12.03 1.94 0.84       5.3

# Plot the mean forecast which looks competitive
autoplot(Close.train) + xlab("Year") + ylab("Weekly Close ($)") +
  ggtitle("Mean Forecast") +
  autolayer(Close.test, series = "Actual") +
  autolayer(fc.mean$mean, series = "Mean Forecast")

fc.naive = naive(Close.train, h = 26)
fc.naive

##          Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
## 2017.712          42.26 40.73208 43.78792 39.92325 44.59675
## 2017.731          42.26 40.09920 44.42080 38.95534 45.56466
## 2017.750          42.26 39.61357 44.90643 38.21263 46.30737
## 2017.769          42.26 39.20416 45.31584 37.58650 46.93350
## 2017.788          42.26 38.84347 45.67653 37.03487 47.48513
## 2017.808          42.26 38.51738 46.00262 36.53616 47.98384
## 2017.827          42.26 38.21751 46.30249 36.07754 48.44246
## 2017.846          42.26 37.93839 46.58161 35.65067 48.86933
## 2017.865          42.26 37.67624 46.84376 35.24975 49.27025
## 2017.885          42.26 37.42830 47.09170 34.87055 49.64945
## 2017.904          42.26 37.19247 47.32753 34.50988 50.01012
## 2017.923          42.26 36.96714 47.55286 34.16526 50.35474
## 2017.942          42.26 36.75101 47.76899 33.83473 50.68527
## 2017.962          42.26 36.54305 47.97695 33.51668 51.00332
## 2017.981          42.26 36.34240 48.17760 33.20981 51.31019
## 2018.000          42.26 36.14833 48.37167 32.91300 51.60700
## 2018.019          42.26 35.96023 48.55977 32.62534 51.89466
## 2018.038          42.26 35.77759 48.74241 32.34601 52.17399
## 2018.058          42.26 35.59996 48.92004 32.07435 52.44565
## 2018.077          42.26 35.42694 49.09306 31.80974 52.71026
## 2018.096          42.26 35.25820 49.26180 31.55167 52.96833
## 2018.115          42.26 35.09343 49.42657 31.29967 53.22033
## 2018.135          42.26 34.93236 49.58764 31.05334 53.46666
## 2018.154          42.26 34.77476 49.74524 30.81231 53.70769
## 2018.173          42.26 34.62041 49.89959 30.57625 53.94375
## 2018.192          42.26 34.46911 50.05089 30.34487 54.17513

round(accuracy(fc.naive,Close.test),2)

##                ME RMSE  MAE   MPE  MAPE MASE ACF1 Theil's U
## Training set 0.01 1.19 0.90 -0.01  2.05 0.27 0.31        NA
## Test set     8.86 9.82 8.86 16.73 16.73 2.67 0.84      6.91

The mean forecast, perhaps surprisingly, performs the "best" of these methods, with the lowest test-set error measures (e.g. RMSE = 7.61 vs. 9.82 for the naive forecast). However, because the weekly stock price continued to rise over the test period, none of these flat forecasts comes close to the actual weekly closing prices.
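To see exactly what is being compared, here is a minimal base-R sketch of the three flat forecasts and their test-set RMSE. The series `y` is a made-up toy example (not the Fastenal data), and the SES recursion is written out by hand rather than calling `ses()`:

```r
# Hand-rolled comparison of SES, mean, and naive forecasts on a toy series.
# The series `y` and the helper names are illustrative, not from the handout.

# SES level recursion: l_t = alpha*y_t + (1 - alpha)*l_{t-1};
# the SES point forecast is flat, equal to the final smoothed level.
ses_level <- function(y, alpha, l0 = y[1]) {
  l <- l0
  for (t in seq_along(y)) l <- alpha * y[t] + (1 - alpha) * l
  l
}

rmse <- function(actual, fc) sqrt(mean((actual - fc)^2))

y     <- c(40, 41, 43, 42, 44, 45, 44, 46)  # toy weekly closes
train <- y[1:6]
test  <- y[7:8]

fc.ses   <- ses_level(train, alpha = 0.05)  # final level, ~40.70
fc.mean  <- mean(train)                     # 42.5
fc.naive <- tail(train, 1)                  # 45 (last observed value)

round(c(SES   = rmse(test, fc.ses),
        Mean  = rmse(test, fc.mean),
        Naive = rmse(test, fc.naive)), 2)
```

On this short rising toy series the naive forecast happens to win; which of the three flat forecasts does best depends entirely on how the series behaves over the test window, which is why the mean forecast can come out ahead for the Fastenal data above.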
