Big Data: New Tricks for Econometrics

Hal R. Varian

June 2013. Revised: February 2, 2014

Abstract
Nowadays computers are in the middle of most economic transactions. These computer-mediated transactions generate huge amounts of data, and new tools can be used to manipulate and analyze this data. This essay offers a brief introduction to some of these tools and methods.
Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big data sets that may require different tools.
First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large data sets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships.

(Hal Varian is Chief Economist, Google Inc., Mountain View, California, and Emeritus Professor of Economics, University of California, Berkeley, California. Thanks to Jeffrey Oldham, Tom Zhang, Rob On, Pierre Grinspan, Jerry Friedman, Art Owen, Steve Scott, Bo Cowgill, Brock Noland, Daniel Stonehill, Robert Snedegar, Gary King, and the editors of this journal for comments on earlier versions of this paper.)
In this essay I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists. In fact, my standard advice to graduate students these days is "go to the computer science department and take a class in machine learning." There have been very fruitful collaborations between computer scientists and statisticians in the last decade or so, and I expect collaborations between computer scientists and econometricians will also be productive in the future.
1 Tools to manipulate big data
Economists have historically dealt with data that fits in a spreadsheet, but that is changing as new, more detailed data becomes available; see Einav and Levin [2013] for several examples and discussion. If you have more than a million or so rows in a spreadsheet, you probably want to store it in a relational database, such as MySQL. Relational databases offer a flexible way to store, manipulate, and retrieve data using a Structured Query Language (SQL), which is easy to learn and very useful for dealing with medium-sized data sets.
However, if you have several gigabytes of data or several million observations, standard relational databases become unwieldy. Databases to manage data of this size are generically known as "NoSQL" databases. The term is used rather loosely, but is sometimes interpreted as meaning "not only SQL." NoSQL databases are more primitive than SQL databases in terms of data manipulation capabilities but can handle larger amounts of data.

Due to the rise of computer-mediated transactions, many companies have found it necessary to develop systems to process billions of transactions per day. For example, according to Sullivan [2012], Google has seen 30 trillion URLs, crawls over 20 billion of those a day, and answers 100 billion search queries a month. Analyzing even one day's worth of data of this size is virtually impossible with conventional databases. The challenge of dealing with data sets of this size led to the development of several tools to manage and analyze big data.
A number of these tools are proprietary to Google, but have been described in academic publications in sufficient detail that open-source implementations have been developed. Table 1 contains both the Google name and the name of related open-source tools. Further details can be found in the Wikipedia entries associated with the tool names.
Though these tools can be run on a single computer for learning purposes, real applications use large clusters of computers such as those provided by Amazon, Google, Microsoft, and other cloud-computing providers. The ability to rent rather than buy data storage and processing has turned what was previously a fixed cost of computing into a variable cost and has lowered the barriers to entry for working with big data.
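To make the map/reduce idea described in Table 1 concrete, here is a toy, single-machine sketch in R. Everything in it (the sample documents and the map_fn and reduce_fn helpers) is invented for illustration; a real MapReduce system distributes the shards across many machines, whereas here lapply() stands in for the parallel map phase and Reduce() for the combine phase.

    # Toy illustration of MapReduce: count word frequencies across
    # "shards" of documents on a single machine.
    docs <- c("big data new tricks", "new tools for big data", "data data")
    shards <- split(docs, seq_along(docs))   # pretend each document is a shard

    map_fn <- function(shard) {              # map: per-shard word counts
      table(unlist(strsplit(shard, " ")))
    }

    reduce_fn <- function(a, b) {            # reduce: merge partial counts
      words <- union(names(a), names(b))
      sapply(words, function(w) sum(a[w], b[w], na.rm = TRUE))
    }

    partial <- lapply(shards, map_fn)        # query "mapped" to the shards
    sort(Reduce(reduce_fn, partial), decreasing = TRUE)  # "reduced" summary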
2 Tools to analyze data
The outcome of the big data processing described above is often a "small" table of data that may be directly human readable or can be loaded into an SQL database, a statistics package, or a spreadsheet. If the extracted data is still inconveniently large, it is often possible to select a subsample for statistical analysis. At Google, for example, I have found that random samples on the order of 0.1 percent work fine for analysis of business data.
Once a dataset has been extracted, it is often necessary to do some exploratory data analysis along with consistency and data-cleaning tasks. This is something of an art which can be learned only by practice, but data-cleaning tools such as OpenRefine and DataWrangler can be used to assist in data cleansing.
Google name         Analog               Description
Google File System  Hadoop File System   This system supports files so large that
                                         they must be distributed across hundreds
                                         or even thousands of computers.
Bigtable            Cassandra            This is a table of data that lives in
                                         the Google File System. It too can
                                         stretch over many computers.
MapReduce           Hadoop               This is a system for accessing and
                                         manipulating data in large data
                                         structures such as Bigtables. MapReduce
                                         allows you to access the data in
                                         parallel, using hundreds or thousands of
                                         machines to extract the data you are
                                         interested in. The query is "mapped" to
                                         the machines and is then applied in
                                         parallel to different shards of the
                                         data. The partial calculations are then
                                         combined ("reduced") to create the
                                         summary table you are interested in.
Sawzall             Pig                  This is a language for creating
                                         MapReduce jobs.
Go                  None                 Go is a flexible open-source
                                         general-purpose computer language that
                                         makes it easier to do parallel data
                                         processing.
Dremel, BigQuery    Hive, Drill, Impala  This is a tool that allows data queries
                                         to be written in a simplified form of
                                         SQL. With Dremel it is possible to run
                                         an SQL query on a petabyte of data
                                         (1,000 terabytes) in a few seconds.

Table 1: Tools for manipulating big data.
Data analysis in statistics and econometrics can be broken down into four categories: 1) prediction, 2) summarization, 3) estimation, and 4) hypothesis testing. Machine learning is concerned primarily with prediction; the closely related field of data mining is also concerned with summarization, and particularly in finding interesting patterns in the data. Econometricians, statisticians, and data-mining specialists are generally looking for insights that can be extracted from the data. Machine learning specialists are often primarily concerned with developing high-performance computer systems that can provide useful predictions in the presence of challenging computational constraints. Data science, a somewhat newer term, is concerned with both prediction and summarization, but also with data manipulation, visualization, and other similar tasks. Note that terminology is not standardized in these areas, so these descriptions reflect general usage, not hard-and-fast definitions. Other terms used to describe computer-assisted data analysis include knowledge extraction, information discovery, information harvesting, data archaeology, data pattern processing, and exploratory data analysis.
Much of applied econometrics is concerned with detecting and summarizing relationships in the data. The most common tool used for summarization is (linear) regression analysis. As we shall see, machine learning offers a set of tools that can usefully summarize various sorts of nonlinear relationships in the data. We will focus on these regression-like tools because they are the most natural for economic applications.
In the most general formulation of a statistical prediction problem, we are interested in understanding the conditional distribution of some variable y given some other variables x = (x_1, ..., x_P). If we want a point prediction, we could use the mean or median of the conditional distribution.
In machine learning, the x-variables are usually called "predictors" or "features." The focus of machine learning is to find some function that provides a good prediction of y as a function of x. Historically, most work in machine learning has involved cross-section data where it is natural to think of the data being independent and identically distributed (IID) or at least independently distributed. The data may be "fat," which means lots of predictors relative to the number of observations, or "tall," which means lots of observations relative to the number of predictors.
We typically have some observed data on y and x and we want to compute a good prediction of y given new values of x. Usually "good" means it minimizes some loss function such as the sum of squared residuals, mean of absolute value of residuals, and so on. Of course, the relevant loss is that associated with new out-of-sample observations of x, not the observations used to fit the model.
When confronted with a prediction problem of this sort an economist would think immediately of a linear or logistic regression. However, there may be better choices, particularly if a lot of data is available. These include nonlinear methods such as 1) classification and regression trees (CART), 2) random forests, and 3) penalized regression such as LASSO, LARS, and elastic nets. (There are also other techniques such as neural nets, deep learning, and support vector machines which I do not cover in this review.) Much more detail about these methods can be found in machine learning texts; an excellent treatment is available in Hastie et al. [2009], which can be freely downloaded. Additional suggestions for further reading are given at the end of this article.
3 General considerations for prediction
Our goal with prediction is typically to get good out-of-sample predictions. Most of us know from experience that it is all too easy to construct a predictor that works well in-sample but fails miserably out-of-sample. To take a trivial example, n linearly independent regressors will fit n observations perfectly but will usually have poor out-of-sample performance. Machine learning specialists refer to this phenomenon as the "overfitting problem." They use three different methods for dealing with this problem.
First, since simpler models tend to work better for out-of-sample forecasts, machine learning experts have come up with various ways to penalize models for excessive complexity. In the machine learning world, this is known as "regularization," and we will encounter some examples later on. Economists tend to prefer simpler models for the same reason, but have not been as explicit about quantifying complexity costs.
Second, it is conventional to divide the data into separate sets for the purposes of training, testing, and validation. You use the training data to estimate a model, the validation data to choose your model, and the testing data to evaluate how well your chosen model performs. (Often the validation and testing sets are combined.)
Third, simpler models tend to generate better out-of-sample forecasts, so one way to control overfitting is to impose a cost for excessive complexity; in machine learning such a complexity cost is known as a "tuning parameter." The conventional way to choose a tuning parameter is to use k-fold cross-validation:
1. Divide the data into k roughly equal subsets and label them by s = 1, ..., k. Start with subset s = 1.

2. Pick a value for the tuning parameter.

3. Fit your model using the k - 1 subsets other than subset s.

4. Predict for subset s and measure the associated loss.

5. Stop if s = k; otherwise increment s by 1 and go to step 2.
Common choices for k are 10, 5, and the sample size minus 1 ("leave one out"). After cross-validation, you end up with k values of the tuning parameter and the associated loss, which you can then examine to choose an appropriate value for the tuning parameter. Even if there is no tuning parameter, it is useful to use cross-validation to report goodness-of-fit measures, since it measures out-of-sample performance, which is generally more meaningful than in-sample performance.
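As a concrete illustration, here is a minimal sketch of this loop in R on simulated data; the "tuning parameter" is a polynomial degree, and both the data-generating process and the candidate degrees are invented for the example.

    # 10-fold cross-validation to choose a polynomial degree.
    set.seed(123)
    n <- 200
    d <- data.frame(x = runif(n))
    d$y <- sin(2 * pi * d$x) + rnorm(n, sd = 0.3)   # simulated data
    k <- 10
    fold <- sample(rep(1:k, length.out = n))        # assign rows to folds

    cv_loss <- function(degree) {
      mean(sapply(1:k, function(s) {
        fit <- lm(y ~ poly(x, degree), data = d[fold != s, ])
        pred <- predict(fit, newdata = d[fold == s, ])
        mean((d$y[fold == s] - pred)^2)             # out-of-sample loss
      }))
    }

    degrees <- 1:8
    loss <- sapply(degrees, cv_loss)
    degrees[which.min(loss)]            # degree with the lowest CV loss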
The test-train cycle and cross-validation are very commonly used in machine learning and, in my view, should be used much more in economics, particularly when working with large datasets. For many years, economists have reported in-sample goodness-of-fit measures using the excuse that we had small datasets. But now that larger datasets have become available, there is no reason not to use separate training and testing sets. Cross-validation also turns out to be a very useful technique, particularly when working with reasonably large data. It is also a much more realistic measure of prediction performance than measures commonly used in economics.
4 Classification and regression trees
Let us start by considering a discrete-variable regression where our goal is to predict a 0-1 outcome based on some set of features (what economists would call explanatory variables or predictors). In machine learning this is known as a classification problem. A common example would be classifying email into "spam" or "not spam" based on characteristics of the email. Economists would typically use a generalized linear model like a logit or probit for a classification problem.
A quite different way to build a classifier is to use a decision tree. Most economists are familiar with decision trees that describe a sequence of decisions that results in some outcome. A tree classifier has the same general form, but the decision at the end of the process is a choice about how to classify the observation. The goal is to construct (or "grow") a decision tree that leads to good out-of-sample predictions.
Ironically, one of the earliest papers on the automatic construction of decision trees was co-authored by an economist (Morgan and Sonquist [1963]). However, the technique did not really gain much traction until 20 years later in the work of Breiman et al. [1984] and his colleagues. Nowadays this prediction technique is known as "classification and regression trees," or "CART."
To illustrate the use of tree models, I used the R package rpart to find a tree that predicts Titanic survivors using just two variables, age and class of travel.[1] The resulting tree is shown in Figure 1, and the rules depicted in the tree are shown in Table 2. The rules fit the data reasonably well, misclassifying about 30% of the observations in the testing set.

features                     predicted   actual/total
class 3                      died        370/501
class 1-2, younger than 16   lived       34/36
class 2, older than 16       died        145/233
class 1, older than 16       lived       174/276

Table 2: Tree model in rule form.
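A minimal sketch of this fit in R, assuming a data frame titanic with columns survived, age, and class (the data and code actually used in the paper are in the online supplement):

    library(rpart)

    # Grow a classification tree predicting survival from age and class.
    # `titanic` is an assumed data frame; see the online supplement for
    # the data used in the paper.
    fit <- rpart(survived ~ age + class, data = titanic, method = "class")

    plot(fit); text(fit)        # draw the tree and its split rules
    pred <- predict(fit, newdata = titanic, type = "class")
    mean(pred != titanic$survived)   # misclassification rate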
This classification can also be depicted in the "partition plot" shown in Figure 2, which shows how the tree divides up the space of age and class pairs into rectangular regions. Of course, the partition plot can only be used for two variables, while a tree representation can handle an arbitrarily large number.
It turns out that there are computationally efficient ways to construct classification trees of this sort. These methods generally are restricted to binary trees (two branches at each node). They can be used for classification with multiple outcomes ("classification trees"), or with continuous dependent variables ("regression trees").
Trees tend to work well for problems where there are important nonlinearities and interactions. As an example, let us continue with the Titanic data and create a tree that relates survival to age. In this case, the rule generated by the tree is very simple: predict "survive" if age < 8.5 years.

[1] All data and code used in this paper can be found in the online supplement.
[Figure 1: A classification tree for survivors of the Titanic. See text for interpretation.]
[Figure 2: The simple tree model predicts death in the shaded region of the age (0-80) by class (1st, 2nd, 3rd) plane. White circles indicate survival, black crosses indicate death.]
Coefficient   Estimate   Std Error   t value   p value
Intercept      0.465     0.0350      13.291    0.000
age           -0.002     0.001       -1.796    0.072

Table 3: Logistic regression of survival vs age.
We can examine the same data with a logistic regression to estimate the probability of survival as a function of age, with results reported in Table 3. The tree model suggests that age is an important predictor of survival, while the logistic model says it is barely important. This discrepancy is explained in Figure 3, where we plot survival rates by age bins. Here we see that survival rates for those under 10 years old were elevated compared to older passengers, except for the very oldest group. So what mattered for survival is not so much age, but whether the passenger was a child or elderly. It would be difficult to discover this pattern from a logistic regression alone.[2]
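For comparison, the logistic regression in Table 3 and the binned survival rates in Figure 3 can be computed in a few lines of R, again using the assumed titanic data frame:

    # Logistic regression of survival on age (compare Table 3).
    logit_fit <- glm(I(survived == "lived") ~ age, data = titanic,
                     family = binomial)
    summary(logit_fit)

    # Mean survival rate by 10-year age bin, as plotted in Figure 3.
    bins <- cut(titanic$age, breaks = seq(0, 80, by = 10))
    tapply(titanic$survived == "lived", bins, mean)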
Trees also handle missing data well. Perlich et al. [2003] examined several standard data sets and found that "logistic regression is better for smaller data sets and tree induction for larger data sets." Interestingly enough, trees tend not to work very well if the underlying relationship really is linear, but there are hybrid models such as RuleFit (Friedman and Popescu [2005]) which can incorporate both tree and linear relationships among variables. However, even if trees may not improve on predictive accuracy compared to linear models, the age example shows that they may reveal aspects of the data that are not apparent from a traditional linear modeling approach.
[2] It is true that if you knew that there was a nonlinearity in age, you could use age dummies in the logit model to capture this effect. However, the tree formulation made this nonlinearity immediately apparent.
[Figure 3: Mean survival rates for different age groups, along with confidence intervals. The horizontal axis is the age bin (roughly 10-70); the vertical axis is the fraction that survived (roughly 0.1-0.7).]
4.1 Pruning trees
One problem with trees is that they tend to overfit the data. Just as a regression with n observations and n variables will give you a good fit in sample, a tree with many branches will also fit the training data well. In either case, predictions using new data, such as the test set, could be very poor.
The most common solution to this problem is to "prune" the tree by imposing a cost for complexity. There are various measures of complexity, but a common one is the number of terminal nodes (also known as "leaves"). The cost of complexity is a tuning parameter that is chosen to provide the best out-of-sample predictions, which is typically measured using the 10-fold cross-validation procedure mentioned earlier.
A typical tree-estimation session might involve dividing your data into ten folds, using nine of the folds to grow a tree with a particular complexity, and then predicting on the excluded fold. Repeat the estimation with different values of the complexity parameter using other folds and choose the value of the complexity parameter that minimizes the out-of-sample classification error. (Some researchers recommend being a bit more aggressive and advocate choosing the complexity parameter that is one standard deviation lower than the loss-minimizing value.)
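In rpart this workflow is largely automatic: the package records a cross-validated error for a range of complexity values as the tree is grown. A sketch, continuing with the assumed titanic data frame:

    library(rpart)

    # Grow a deliberately large tree (a tiny cp allows many branches),
    # then prune it back using the cross-validated complexity table.
    big_tree <- rpart(survived ~ age + class, data = titanic,
                      method = "class", cp = 0.001)
    printcp(big_tree)   # the xerror column is 10-fold cross-validated error

    # Prune at the complexity value with the smallest cross-validated
    # error; the "one standard deviation" rule would pick a larger cp.
    cp_table <- big_tree$cptable
    best_cp <- cp_table[which.min(cp_table[, "xerror"]), "CP"]
    pruned <- prune(big_tree, cp = best_cp)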
Of course, in practice, the computer program handles most of these details for you. In the examples in this paper I mostly use default choices to keep things simple, but in practice these defaults will often be adjusted by the analyst. As with any other statistical procedure, skill, experience, and intuition are helpful in coming up with a good answer. Diagnostics, exploration, and experimentation are just as useful with these methods as with regression techniques.
There are many other approaches to creating trees, including some that are explicitly statistical in nature. For example, a "conditional inference tree," or ctree for short, chooses the structure of the tree using a sequence of hypothesis tests; the resulting trees tend to need very little pruning (Hothorn et al. [2006]). An example for the Titanic data is shown in Figure 4. The first node divides by gender. The second node then divides by class. In the right-hand branches, the third node divides by age, and a fourth node divides by the number of siblings and spouses aboard. The bins at the bottom of the figure show the total number of people in that leaf and a graphical depiction of their survival rate. One might summarize this tree by the following principle: "women and children first . . . particularly if they were traveling first class." This simple example again illustrates that classification trees can be helpful in summarizing relationships in data, as well as predicting outcomes.
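A conditional inference tree like the one in Figure 4 can be grown with the party package. The column names here (sex, pclass, age, sibsp) follow common Titanic datasets and are assumptions; the paper's own data is in the supplement:

    library(party)

    # Conditional inference tree: splits are chosen by permutation-based
    # hypothesis tests, so little or no pruning is needed.
    ct <- ctree(survived ~ sex + pclass + age + sibsp, data = titanic)
    plot(ct)                         # a figure like Figure 4

    pred <- predict(ct, newdata = titanic)
    table(pred, titanic$survived)    # confusion matrix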
[Figure 4: A ctree for survivors of the Titanic. The black bars indicate the fraction of the group that survived.]

4.2 Economic example: HMDA data

Munnell et al. [1996] examined mortgage lending in Boston to see if race played a significant role in determining who was approved for a mortgage.
The primary econometric technique was a logistic regression where race was included as one of the predictors. The coefficient on race showed a statistically significant negative impact on probability of getting a mortgage for black applicants. This finding prompted considerable subsequent debate and discussion; see Ladd [1998] for an overview.

Here I examine this question using the tree-based estimators described in the previous section. The data consists of 2,380 observations of 12 predictors, one of which was race. Figure 5 shows a conditional tree estimated using the R package party. (For reasons of space, I have omitted variable descriptions, which are readily available in the online supplement.)
The tree fits pretty well, misclassifying 228 of the 2,380 observations for an error rate of 9.6%. By comparison, a simple logistic regression does slightly better, misclassifying 225 of the 2,380 observations, leading to an error rate of 9.5%. As you can see in Figure 5, the most important variable is dmi = "denied mortgage insurance." This variable alone explains much of the variation in the data. The race variable (black) shows up far down the tree and seems to be relatively unimportant.
[Figure 5: HMDA tree. The black bars indicate the fraction of each group that were denied mortgages. The most important determinant of this is the variable dmi, denied mortgage insurance.]
One way to gauge whether a variable is important is to exclude it from the prediction and see what happens. When this is done, it turns out that the accuracy of the tree-based model doesn't change at all: exactly the same cases are misclassified. Of course, it is perfectly possible that there was racial discrimination elsewhere in the mortgage process, or that some of the variables included are highly correlated with race. But it is noteworthy that the tree model produced by standard procedures that omits race fits the observed data just as well as a model that includes race.
5 Boosting, bagging, and bootstrap
There are several useful ways to improve classifier performance. Interestingly enough, some of these methods work by adding randomness to the data. This seems paradoxical at first, but adding randomness turns out to be a helpful way of dealing with the overfitting problem.
Bootstrap involves choosing (with replacement) a sample of size n from a data set of size n to estimate the sampling distribution of some statistic. A variation is the "m out of n" bootstrap which draws a sample of size m from a dataset of size n > m.

Bagging involves averaging across models estimated with several different bootstrap samples in order to improve the performance of an estimator.

Boosting involves repeated estimation where misclassified observations are given increasing weight in each repetition. The final estimate is then a vote or an average across the repeated estimates.[3]
Econometricians are well-acquainted with the bootstrap but rarely use the other two methods. Bagging is primarily useful for nonlinear models such as trees (Friedman and Hall [2005]). Boosting tends to improve the predictive performance of an estimator significantly and can be used for pretty much any kind of classifier or regression model, including logits, probits, trees, and so on.
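To show how simple bagging is, here is a hand-rolled sketch that bags classification trees on the assumed titanic data frame; the number of bootstrap samples B and the formula are arbitrary choices for illustration:

    library(rpart)

    # Bagging by hand: fit a tree on each of B bootstrap samples and
    # predict by majority vote across the trees.
    set.seed(42)
    B <- 100
    n <- nrow(titanic)

    votes <- sapply(1:B, function(b) {
      rows <- sample(n, n, replace = TRUE)      # bootstrap sample
      fit <- rpart(survived ~ age + class, data = titanic[rows, ],
                   method = "class")
      as.character(predict(fit, newdata = titanic, type = "class"))
    })

    # Majority vote for each observation across the B trees:
    bagged <- apply(votes, 1, function(v) names(which.max(table(v))))
    mean(bagged != titanic$survived)            # bagged error rate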
It is also possible to combine these techniques and create a "forest" of trees that can often significantly improve on single-tree methods. Here is a rough description of how such "random forests" work. Random forests refers to a technique that uses multiple trees. A typical procedure uses the following steps.

1. Choose a bootstrap sample of the observations and start to grow a tree.

2. At each node of the tree, choose a random sample of the predictors to make the next decision. Do not prune the trees.

3. Repeat this process many times to grow a forest of trees.

4. In order to determine the classification of a new observation, have each tree make a classification and use a majority vote for the final prediction.

[3] Boosting is often used with decision trees, where it can dramatically improve their predictive performance.
This method produces surprisingly good out-of-sample fits, particularly with highly nonlinear data. In fact, Howard [2013] claims "ensembles of decision trees (often known as 'Random Forests') have been the most successful general-purpose algorithm in modern times." He goes on to indicate that "the algorithm is very simple to understand, and is fast and easy to apply." See also Caruana and Niculescu-Mizil [2006], who compare several different machine learning algorithms and find that ensembles of trees perform quite well. There are a number of variations and extensions of the basic "ensemble of trees" model such as Friedman's "Stochastic Gradient Boosting" (Friedman [1999]).
One defect of random forests is that they are a bit of a black box: they don't offer simple summaries of relationships in the data. As we have seen earlier, a single tree can offer some insight about how predictors interact. But a forest of a thousand trees cannot be easily interpreted. However, random forests can determine which variables are "important" in predictions in the sense of contributing the biggest improvements in prediction accuracy.

Note that random forests involve quite a bit of randomization; if you want to try them out on some data, I strongly suggest choosing a particular seed for the random number generator so that your results can be reproduced. (See the online supplement for examples.)
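A sketch with the randomForest package, assuming a data frame hmda whose deny column is the outcome factor (the paper's actual data and code are in the supplement):

    library(randomForest)

    # Random forest with variable importance; fixing the seed makes the
    # randomization reproducible. `hmda` and its `deny` outcome are
    # assumed stand-ins for the supplement's data.
    set.seed(3465)
    rf <- randomForest(deny ~ ., data = hmda, importance = TRUE)

    print(rf)        # out-of-bag confusion matrix and error rate
    importance(rf)   # per-variable importance measures
    varImpPlot(rf)   # in the text's analysis, dmi ranks at the top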
I ran the random forest method on the HMDA data and found that it misclassified 223 of the 2,380 cases, a small improvement over the logit and the ctree. I also used the importance option in random forests to see how the predictors compared. It turned out that dmi was the most important predictor and race was second from the bottom, which is consistent with the ctree analysis.
6 Variable selection
Let us return to the familiar world of linear regression and consider the problem of variable selection. There are many such methods available, including stepwise regression, principal component regression, partial least squares, AIC and BIC complexity measures, and so on. Castle et al. [2009] describes and compares 21 different methods.
6.1 Lasso and friends
Here we consider a class of estimators that involves penalized regression. Consider a standard multivariate regression model where we predict $y_t$ as a linear function of a constant, $b_0$, and $P$ predictor variables. We suppose that we have standardized all the (non-constant) predictors so they have mean zero and variance one.

Consider choosing the coefficients $(b_1, \ldots, b_P)$ for these predictor variables by minimizing the sum of squared residuals plus a penalty term of the form

    $\lambda \sum_{p=1}^{P} \left[ (1 - \alpha)\,|b_p| + \alpha\,|b_p|^2 \right]$

This estimation method is called elastic net regression; it contains three other methods as special cases. If there is no penalty term ($\lambda = 0$), this is ordinary least squares. If $\alpha = 1$, so that there is only the quadratic constraint, this is ridge regression. If $\alpha = 0$, this is called the lasso, an acronym for "least absolute shrinkage and selection operator."
These penalized regressions are classic examples of regularization. In this case, the complexity is the number and size of predictors in the model. All of these methods tend to shrink the least-squares regression coefficients towards zero. The lasso and elastic net typically produce regressions where some of the variables are set to be exactly zero. Hence this is a relatively straightforward way to do variable selection.

It turns out that these estimators can be computed quite efficiently, so doing variable selection on reasonably large problems is computationally feasible. They also seem to provide good predictions in practice.
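A sketch with the glmnet package on simulated data. One caution: glmnet's alpha runs in the opposite direction from the $\alpha$ above (glmnet's alpha = 1 is the lasso and alpha = 0 is ridge):

    library(glmnet)

    # Elastic net on toy data: 20 predictors, only two of which matter.
    set.seed(1)
    X <- matrix(rnorm(100 * 20), 100, 20)
    y <- X[, 1] - 2 * X[, 2] + rnorm(100)

    # cv.glmnet picks the penalty weight lambda by 10-fold CV.
    cv_fit <- cv.glmnet(X, y, alpha = 0.5)   # alpha = 0.5: elastic net
    coef(cv_fit, s = "lambda.min")           # most coefficients exactly zero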
6.2 Spike and slab regression
Another approach to variable selection that is novel to most economists is spike-and-slab regression, a Bayesian technique. Suppose that you have $P$ possible predictors in some linear model. Let $\gamma$ be a vector of length $P$ composed of zeros and ones that indicate whether or not a particular variable is included in the regression.

We start with a Bernoulli prior distribution on $\gamma$; for example, initially we might think that all variables have an equally likely chance of being in the regression. Conditional on a variable being in the regression, we specify a prior distribution for the regression coefficient associated with that variable. For example, we might use a Normal prior with mean 0 and a large variance. These two priors are the source of the method's name: the "spike" is the probability of a coefficient being non-zero; the "slab" is the (diffuse) prior describing the values that the coefficient can take on.
Now we take a draw of $\gamma$ from its prior distribution, which will just be a list of variables in the regression. Conditional on this list of included variables, we take a draw from the prior distribution for the coefficients. We combine these two draws with the likelihood in the usual way, which gives us a draw from the posterior distribution on both probability of inclusion and the coefficients. We repeat this process thousands of times using a Markov Chain Monte Carlo (MCMC) technique, which gives us a table summarizing the posterior distribution for $\gamma$ (indicating variable inclusion), $\beta$ (the coefficients), and the associated prediction of $y$. We can summarize this table in a variety of ways. For example, we can compute the average value of $\gamma_p$, which shows the posterior probability that the variable $p$ is included in the regressions.
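One implementation is lm.spike in the BoomSpikeSlab package; a sketch reusing the toy X and y from the elastic net example above:

    library(BoomSpikeSlab)

    # Spike-and-slab regression: each MCMC draw carries an inclusion
    # vector gamma and coefficients for the included variables.
    spike_fit <- lm.spike(y ~ X, niter = 5000)

    # The average of the inclusion indicators across draws estimates
    # the posterior probability that each variable is in the model.
    colMeans(spike_fit$beta != 0)
    summary(spike_fit)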
6.3 Economic example: growth regressions
We illustrate these different methods of variable selection using data from Sala-i-Martin [1997]. This exercise involved examining a data set of 72 countries and 42 variables in order to see which variables appeared to be important predictors of economic growth. Sala-i-Martin [1997] computed all possible subsets of regressors of manageable size and used the results to construct an importance measure he called CDF(0). Ley and Steel [2009] investigated the same question using Bayesian techniques related to, but not identical with, spike-and-slab, while Hendry and Krolzig [2004] examined an iterative significance test selection method.
Table 4 shows 10 predictors that were chosen by Sala-i-Martin [1997], Ley and Steel [2009], lasso, and spike-and-slab. The table is based on that in Ley and Steel [2009], but the metrics used are not strictly comparable across the various models. The "BMA" and "spike-slab" columns are posterior probabilities of inclusion; the "lasso" column is just the ordinal importance of the variable, with a dash indicating that it was not included in the chosen model; and the CDF(0) measure is defined in Sala-i-Martin [1997].

The lasso and the Bayesian techniques are very computationally efficient and on this ground would likely be preferred to exhaustive search. All four of these variable selection methods give similar results for the first four or five variables, after which they diverge. In this particular case, the data set appears to be too small to resolve the question of what is important for economic growth.
predictor               BMA     CDF(0)   lasso   spike-slab
GDP level 1960          1.000   1.000    -       0.9992
Fraction Confucian      0.995   1.000    6       0.9730
Life expectancy         0.946   0.942    5       0.9610
Equipment investment    0.757   0.997    1       0.9532
Sub-Saharan dummy       0.656   1.000    -       0.5834
Fraction Muslim         0.656   1.000    -       0.6590
Rule of law             0.516   1.000    -       0.4532
Open economy            0.502   1.000    3       0.5736
Degree of Capitalism    0.471   0.987    -       0.4230
Fraction Protestant     0.461   0.966    -       0.3798

Table 4: Comparing variable selection algorithms. See text for discussion.
7 Time series
The machine learning techniques described up until now are generally applied to cross-sectional data where independently distributed data is a plausible assumption. However, there are also techniques that work with time series. Here we describe an estimation method which we call Bayesian Structural Time Series (BSTS) that seems to work well for variable selection problems in time series applications.

Our research in this area was motivated by Google Trends data, which provides an index of the volume of Google queries on specific terms. One might expect that queries on [file for unemployment] might be predictive of the actual rate of filings for initial claims, or that queries on [Orlando vacation] might be predictive of actual visits to Orlando. Indeed, Choi and Varian [2009, 2012], Goel et al. [2010], Carriere-Swallow and Labbe [2011], McLaren and Shanbhoge [2011], Arola and Galan [2012], Hellerstein and Middeldorp [2012] and many others have shown that Google queries do have significant short-term predictive power for various economic metrics.
The challenge is that there are billions of queries, so it is hard to determine exactly which queries are the most predictive for a particular purpose. Google Trends classifies the queries into categories, which helps a little, but even then we have hundreds of categories as possible predictors, so that overfitting and spurious correlation are a serious concern. BSTS is designed to address these issues. We offer a very brief description here; more details are available in Scott and Varian [2012a,b].
Consider a classic time series model with constant level, linear time trend, and regressor components:

    $y_t = \mu + b t + \beta x_t + e_t.$

The "local linear trend" is a stochastic generalization of this model where the level and time trend can vary through time:

    Observation:  $y_t = \mu_t + z_t + e_{1t}$              (level + regression)
    State 1:      $\mu_t = \mu_{t-1} + b_{t-1} + e_{2t}$    (random walk + trend)
    State 2:      $z_t = \beta x_t$                         (regression)
    State 3:      $b_t = b_{t-1} + e_{3t}$                  (random walk for trend)

It is easy to add an additional state variable for seasonality if that is appropriate. The parameters to estimate are the regression coefficients $\beta$ and the variances of $(e_{it})$ for $i = 1, \ldots, 3$. We can then use these estimates to construct the optimal forecast based on techniques drawn from the literature on Kalman filters.
For the regression we use the spike-and-slab variable choice mechanism described above. A draw from the posterior distribution now involves a draw of the variances of $(e_{1t}, e_{2t})$, a draw of the vector $\gamma$ that indicates which variables are in the regression, and a draw of the regression coefficients $\beta$ for the included variables. The draws of $\mu_t$, $b_t$, and $\beta$ can be used to construct estimates of $y_t$ and forecasts for $y_{t+1}$. We end up with an (estimated) posterior distribution for each parameter of interest. If we seek a point prediction, we can average over these draws, which is essentially a form of Bayesian model averaging.
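A sketch of this setup using the bsts R package that accompanies Scott and Varian [2012a,b]. Here y, predictors, and future_predictors are assumed placeholders for a monthly series and its query-volume regressors, like the housing example that follows:

    library(bsts)

    # Local linear trend + seasonal state, with a spike-and-slab prior
    # over the regressors. `y`, `predictors`, and `future_predictors`
    # are assumed placeholders.
    ss <- AddLocalLinearTrend(list(), y)
    ss <- AddSeasonal(ss, y, nseasons = 12)

    model <- bsts(y ~ ., state.specification = ss, data = predictors,
                  niter = 5000, expected.model.size = 3)

    plot(model, "coefficients")   # inclusion probabilities, as in Figure 6
    pred <- predict(model, newdata = future_predictors)  # out-of-sample forecasts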
As an example, consider the non-seasonally-adjusted data for new homes sold in the U.S. (HSN1FNSA) from the St. Louis Federal Reserve Economic Data. This time series can be submitted to Google Correlate, which then returns the 100 queries that are the most highly correlated with the series. We feed that data into the BSTS system; the predictors with the largest posterior probabilities of appearing in the housing regression are shown in Figure 6. In these figures, black bars indicate a negative relationship and white bars indicate a positive relationship. Two predictors, [oldies lyrics] and [www.mail2web], appear to be spurious, so we remove them and re-estimate, yielding the results in Figure 7.

The fit is shown in Figure 8, which shows the incremental contribution of the trend, seasonal, and two of the regressors. Even with only two predictors, queries on [appreciation rate] and queries on [irs 1031], we get a pretty good fit.[4]
[Figure 6: Initial predictors, with inclusion probabilities from 0.0 to 1.0: real.estate.appraisal, www.mail2web.com, irs.1031, real.estate.purchase, oldies.lyrics, appreciation.rate.]

[Figure 7: Final predictors, with inclusion probabilities from 0.0 to 1.0: selling.real.estate, real.estate.appraisal, X80.20.mortgage, estate.appraisal, real.estate.purchase, century.21.realtors, irs.1031, appreciation.rate.]
[4] IRS section 1031 has to do with deferring capital gains on certain sorts of property exchange.
[Figure 8: Incremental plots. The plots show the impact of the trend, seasonal, and a few individual regressors. The residuals are shown on the bottom.]
8 Econometrics and machine learning
There are a number of areas where there are opportunities for fruitful collaboration between econometrics and machine learning. I mentioned above that most machine learning uses IID data. However, the BSTS model shows that some of these techniques can be adopted for time series models. It is also possible to use machine learning techniques to look at panel data, and there has been some work in this direction.

However, the most important area for collaboration involves causal inference. Econometricians have developed several tools for causal inference, such as instrumental variables, regression discontinuity, difference-in-differences, and various forms of natural and designed experiments (Angrist and Krueger [2001]). Machine learning work has, for the most part, dealt with pure prediction. In a way this is ironic, since theoretical computer scientists, such as Pearl [2009a,b], have made significant contributions to causal modeling. However, it appears that these theoretical advances have not as yet been incorporated into machine learning practice to a significant degree.
8.1 Causality and prediction
As economists know well, there is a big difference between correlation and causation. A classic example: there are often more police in precincts with high crime, but that does not imply that increasing the number of police in a precinct would increase crime.

The machine learning models we have described so far have been entirely about prediction. If our data were generated by policymakers who assigned police to areas with high crime, then the observed relationship between police and crime rates could be highly predictive for the historical data, but not useful in predicting the causal impact of explicitly assigning additional police to a precinct.
To enlarge on this point, let us consider an experiment (natural or designed) that attempts to estimate the impact of some policy, such as adding police to precincts. There are two critical questions.
First, how will police be assigned to precincts in both the experiment and the policy implementation? Possible assignment rules could be 1) random, 2) based on perceived need, 3) based on cost of providing service, 4) based on resident requests, 5) based on a formula or set of rules, 6) based on asking for volunteers, and so on. Ideally the assignment procedure in the experiment will be similar to that used in the policy. Developing accurate predictions about which precincts will receive additional police under the proposed policy based on the experimental data can clearly be helpful in predicting the expected impact of the policy.
Second, what will be the impact of these additional police in both the experiment and the policy? As Rubin [1974] and many subsequent authors have emphasized, when we want to estimate the causal impact of some treatment, we need to compare the outcome with the intervention to what would have happened without the intervention. But this counterfactual cannot be observed, so it must be predicted by some model. The better predictive model you have for the counterfactual, the better you will be able to estimate the causal effect, an observation that is true for both pure experiments and natural experiments.

So even though a predictive model will not necessarily allow one to conclude anything about causality by itself, such models may help in estimating the causal impact of an intervention when it occurs.
To state this in a slightly more formal way, consider the identity from Angrist and Pischke [2008], page 11:

    observed difference in outcome = average treatment effect on the treated + selection bias

If you want to model the average treatment effect as a function of other variables, you will usually need to model both the observed difference in outcome and the selection bias. The better your predictive model for those components, the better predictions you can make about the average treatment effect. Of course, if you have a true randomized treatment-control experiment, selection bias goes away, and those treated are an unbiased random sample of the population.
To illustrate these points, let us consider the thorny problem of estimating the causal effect of advertising on sales (Lewis and Rao [2013]). The difficulty is that there are many confounding variables, such as seasonality or weather, that cause both increased ad exposures and increased purchases by consumers. For example, consider the (probably apocryphal) story about an advertising manager who was asked why he thought his ads were effective. "Look at this chart," he said. "Every December I increase my ad spend and, sure enough, purchases go up." Of course, in this case seasonality can be included in the model. However, generally there will be other confounding variables that affect both exposure to ads and the propensity to purchase, which makes causal interpretations of observed relationships problematic.
The ideal way to estimate advertising effectiveness is, of course, to run a controlled experiment. In this case the control group provides an estimate of the counterfactual: what would have happened without ad exposures. But this ideal approach can be quite expensive, so it is worth looking for alternative ways to predict the counterfactual. One way to do this is to use the Bayesian Structural Time Series method described earlier.
Suppose a given company wants to determine the impact of an advertising campaign on its sales. It first uses BSTS (or some other technique) to build a model predicting the time series of sales as a function of its past history, seasonal effects, and other possible predictors such as Google queries on its company name, its competitors' names, or products that it produces. Since there are many possible choices for predictors, it is important to use some variable selection mechanism such as those described earlier.

It next runs an ad campaign for a few weeks and records sales during this period. Finally, it makes a forecast of what sales would have been in the absence of the ad campaign using the model developed in the first stage. Comparing the actual outcome to the counterfactual outcome gives us an estimate of the causal effect of advertising.
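This counterfactual-forecasting procedure is implemented in the CausalImpact R package that grew out of Brodersen et al. [2013]. A sketch on simulated data, where the control series, the sales series, and the campaign dates are all invented:

    library(CausalImpact)

    # Simulated sales that get a lift of 10 units during a "campaign"
    # in periods 71-100; x is a control series unaffected by the ads.
    set.seed(1)
    x <- 100 + arima.sim(model = list(ar = 0.8), n = 100)
    y <- 1.2 * x + rnorm(100)
    y[71:100] <- y[71:100] + 10

    impact <- CausalImpact(cbind(y, x),
                           pre.period = c(1, 70),     # model fit here
                           post.period = c(71, 100))  # effect estimated here
    plot(impact)      # panels like Figure 9: actual vs. counterfactual,
    summary(impact)   # pointwise difference, and cumulative difference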
Figure 9, taken from Brodersen et al. [2013], shows the outcome of this procedure. Panel a shows the actual sales and the prediction of what the sales would have been without the campaign, based on the BSTS forecasting model. Panel b shows the difference between actual and predicted sales, and Panel c shows the cumulative difference. It is clear from this figure that there was a significant causal impact of advertising, which can then be compared to the cost of the advertising to evaluate the campaign.

[Figure 9: Actual and predicted sales.]
Note that this procedure does not require a control group in the conventional sense. In fact, a good predictive model can be better than a random control group, which is usually thought to be the gold standard. To see this, suppose that you run an ad campaign in 100 cities and retain 100 cities as a control. After the experiment is over, you discover the weather was dramatically different across the cities in the study. Should you use weather as a predictor of the counterfactual? Of course! If weather affects sales (which it does), then you will get a more accurate prediction of the counterfactual and thus a better estimate of the causal effect of advertising.
9 Model uncertainty
An important insight from machine learning is that averaging over many small models tends to give better out-of-sample prediction than choosing a single model.

In 2006, Netflix offered a million-dollar prize to researchers who could provide the largest improvement to their existing movie recommendation system. The winning submission involved a "complex blending of no fewer than 800 models," though they also point out that "predictions of good quality can usually be obtained by combining a small number of judiciously chosen methods" (Feuerverger et al. [2012]). It also turned out that a blend of the best and second-best models outperformed both of them.
Ironically, it was recognized many years ago that averages of macroeconomic model forecasts outperformed individual models, but somehow this idea was rarely exploited in traditional econometrics. The exception is the literature on Bayesian model averaging, which has seen a steady flow of work; see Steel [2011] for a survey.
However, I think that model uncertainty has crept into applied econometrics through the back door. Many papers in applied econometrics present regression results in a table with several different specifications: which variables are included in the controls, which variables are used as instruments, and so on. The goal is usually to show that the estimate of some interesting parameter is not very sensitive to the exact specification used.

One way to think about it is that these tables illustrate a simple form of model uncertainty: how an estimated parameter varies as different models are used. In these papers the authors tend to examine only a few representative specifications, but there is no reason why they couldn't examine many more if the data were available.
In this period of "big data," it seems strange to focus on sampling uncertainty, which tends to be small with large data sets, while completely ignoring model uncertainty, which may be quite large. One way to address this is to be explicit about examining how parameter estimates vary with respect to choices of control variables and instruments.
10 Summary and further reading
Since computers are now involved in many economic transactions, big data will only get bigger. Data manipulation tools and techniques developed for small datasets will become increasingly inadequate to deal with new problems. Researchers in machine learning have developed ways to deal with large data sets, and economists interested in dealing with such data would be well advised to invest in learning these techniques.

I have already mentioned Hastie et al. [2009], which has detailed descriptions of all the methods discussed here, but at a relatively advanced level. James et al. [2013] describes many of the same topics at an undergraduate level, along with R code and many examples.[5]
Venables and Ripley [2002] contains good discussions of these topics with emphasis on applied examples. Leek [2013] presents a number of YouTube videos with gentle and accessible introductions to several tools of data analysis. Howe [2013] provides a somewhat more advanced introduction to data science that also includes discussions of SQL and NoSQL databases. Wu and Kumar [2009] gives detailed descriptions and examples of the major algorithms in data mining, while Williams [2011] provides a unified toolkit. Domingos [2012] summarizes some important lessons, which include pitfalls to avoid, important issues to focus on, and answers to common questions.

[5] There are several economic examples in the book where the tension between predictive modeling and causal modeling is apparent.
References
Joshua D. Angrist and Alan B. Krueger. Instrumental variables and the search for identification: From supply and demand to natural experiments. Journal of Economic Perspectives, 15(4):69-85, 2001. URL http://www.aeaweb.org/articles.php?doi=10.1257/jep.15.4.69.

Joshua D. Angrist and Jorn-Steffen Pischke. Mostly Harmless Econometrics. Princeton University Press, 2008.

Concha Arola and Enrique Galan. Tracking the future on the web: Construction of leading indicators using internet searches. Technical report, Bank of Spain, 2012. URL http://www.bde.es/webbde/SES/Secciones/Publicaciones/PublicacionesSeriadas/DocumentosOcasionales/12/Fich/do1203e.pdf.

L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth and Brooks/Cole, Monterey, 1984.

Kay H. Brodersen, Nicolas Remy, Fabian Gallusser, Steven L. Scott, Jim Koehler, and Penny Chu. Inferring causal impact using Bayesian structural time series models. Technical report, Google, Inc., 2013. URL http://research.google.com/pubs/pub41854.html.

Yan Carriere-Swallow and Felipe Labbe. Nowcasting with Google Trends in an emerging market. Journal of Forecasting, 2011. doi: 10.1002/for.1252. URL http://ideas.repec.org/p/chb/bcchwp/588.html. Working Papers Central Bank of Chile 588.
Rich Caruana and Alexandru Niculescu-Mizil. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006.

Jennifer L. Castle, Xiaochuan Qin, and W. Robert Reed. How to pick the best regression equation: A review and comparison of model selection algorithms. Technical Report 13/2009, Department of Economics, University of Canterbury, 2009. URL http://www.econ.canterbury.ac.nz/RePEc/cbt/econwp/0913.pdf.

Hyunyoung Choi and Hal Varian. Predicting the present with Google Trends. Technical report, Google, 2009. URL http://google.com/googleblogs/pdfs/google_predicting_the_present.pdf.

Hyunyoung Choi and Hal Varian. Predicting the present with Google Trends. Economic Record, 2012. URL http://people.ischool.berkeley.edu/~hal/Papers/2011/ptp.pdf.

Pedro Domingos. A few useful things to know about machine learning. Communications of the ACM, 55(10), October 2012. URL http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf.

Liran Einav and Jonathan Levin. The data revolution and economic analysis. Technical report, NBER Innovation Policy and the Economy Conference, 2013.

Andrey Feuerverger, Yu He, and Shashi Khatri. Statistical significance of the Netflix challenge. Statistical Science, 27(2):202-231, 2012. URL http://arxiv.org/abs/1207.5649.

Jerome Friedman. Stochastic gradient boosting. Technical report, Stanford University, 1999. URL http://www-stat.stanford.edu/~jhf/ftp/stobst.pdf.
Jerome Friedman and Peter Hall. On bagging and nonlinear estimation. Technical report, Stanford University, 2005. URL http://www-stat.stanford.edu/~jhf/ftp/bag.pdf.

Jerome Friedman and Bogdan E. Popescu. Predictive learning via rule ensembles. Technical report, Stanford University, 2005. URL http://www-stat.stanford.edu/~jhf/R-RuleFit.html.

Sharad Goel, Jake M. Hofman, Sebastien Lahaie, David M. Pennock, and Duncan J. Watts. Predicting consumer behavior with web search. Proceedings of the National Academy of Sciences, 2010. URL http://www.pnas.org/content/107/41/17486.full.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2nd edition, 2009. URL http://www-stat.stanford.edu/~tibs/ElemStatLearn/download.html.

Rebecca Hellerstein and Menno Middeldorp. Forecasting with internet search data. Liberty Street Economics Blog of the Federal Reserve Bank of New York, January 2012. URL http://libertystreeteconomics.newyorkfed.org/2012/01/forecasting-with-internet-search-data.html.

David F. Hendry and Hans-Martin Krolzig. We ran one regression. Oxford Bulletin of Economics and Statistics, 66(5):799-810, 2004.

Torsten Hothorn, Kurt Hornik, and Achim Zeileis. Unbiased recursive partitioning: A conditional inference framework. Journal of Computational and Graphical Statistics, 15(3):651-674, 2006.

Jeremy Howard. The two most important algorithms in predictive modeling today. Conference presentation, February 2013. URL http://strataconf.com/strata2012/public/schedule/detail/22658.
Bill Howe. Introduction to data science. Technical report, University of Washington, 2013. URL https://class.coursera.org/datasci-001/lecture/index.

Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning with Applications in R. Springer, New York, 2013.

Helen F. Ladd. Evidence on discrimination in mortgage lending. Journal of Economic Perspectives, 12(2):41-62, 1998.

Jeff Leek. Data analysis, 2013. URL http://blog.revolutionanalytics.com/2013/04/coursera-data-analysis-course-videos.html.

Randall A. Lewis and Justin M. Rao. On the near impossibility of measuring the returns to advertising. Technical report, Google, Inc. and Microsoft Research, 2013. URL http://justinmrao.com/lewis_rao_nearimpossibility.pdf.

Eduardo Ley and Mark F. J. Steel. On the effect of prior assumptions in Bayesian model averaging with applications to growth regression. Journal of Applied Econometrics, 24(4):651-674, 2009. URL http://ideas.repec.org/a/jae/japmet/v24y2009i4p651-674.html.

Nick McLaren and Rachana Shanbhoge. Using internet search data as economic indicators. Bank of England Quarterly Bulletin, June 2011. URL http://www.bankofengland.co.uk/publications/quarterlybulletin/qb110206.pdf.

James N. Morgan and John A. Sonquist. Problems in the analysis of survey data, and a proposal. Journal of the American Statistical Association, 58(302):415-434, 1963. URL http://www.jstor.org/stable/2283276.
Alicia H. Munnell, Geoffrey M. B. Tootell, Lynne E. Browne, and James McEneaney. Mortgage lending in Boston: Interpreting HMDA data. American Economic Review, pages 25-53, 1996.

Judea Pearl. Causality. Cambridge University Press, 2009a.

Judea Pearl. Causal inference in statistics: An overview. Statistics Surveys, 4:96-146, 2009b.

Claudia Perlich, Foster Provost, and Jeffrey S. Simonoff. Tree induction vs. logistic regression: A learning-curve analysis. Journal of Machine Learning Research, 4:211-255, 2003. URL http://machinelearning.wustl.edu/mlpapers/paper_files/PerlichPS03.pdf.

Donald Rubin. Estimating causal effects of treatment in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):689, 1974.

Xavier Sala-i-Martin. I just ran two million regressions. American Economic Review, 87(2):178-83, 1997.

Steve Scott and Hal Varian. Bayesian variable selection for nowcasting economic time series. Technical report, Google, 2012a. URL http://www.ischool.berkeley.edu/~hal/Papers/2012/fat.pdf. Presented at JSM, San Diego.

Steve Scott and Hal Varian. Predicting the present with Bayesian structural time series. Technical report, Google, 2012b. URL http://www.ischool.berkeley.edu/~hal/Papers/2013/pred-present-with-bsts.pdf.

Mark F. J. Steel. Bayesian model averaging and forecasting. Bulletin of E.U. and U.S. Inflation and Macroeconomic Analysis, 200:30-41, 2011. URL http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/steel/steel_homepage/publ/bma_forecast.pdf.
Danny Sullivan. Google: 100 billion searches per month, search to integrate Gmail, launching enhanced search app for iOS. Search Engine Land, 2012. URL http://searchengineland.com/google-search-press-129925.

W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Springer-Verlag, New York, 4th edition, 2002.

Graham Williams. Data Mining with Rattle and R. Springer, New York, 2011.

Xindong Wu and Vipin Kumar, editors. The Top Ten Algorithms in Data Mining. CRC Press, 2009. URL http://www.cs.uvm.edu/~icdm/algorithms/index.shtml.