
A Growth Model of the Data Economy

Maryam Farboodi∗ and Laura Veldkamp†

April 13, 2020

Abstract

The rise of information technology and big data analytics has given rise to “the new econ-

omy.” But are its economics new? This article constructs a classic growth model with data

accumulation. Data has three key features: 1) Data is a by-product of economic activity; 2)

data enhances firm productivity; and 3) data is information used for resolving uncertainty. The

model can explain why data-intensive goods or services, like apps, are given away for free, why

firm size is diverging, and why many big data firms are unprofitable for a long time. While

these transition dynamics differ from those of traditional growth models, the long run features

diminishing returns. Just like capital accumulation, data accumulation alone cannot sustain

growth. Without other improvements in productivity, data-driven growth will grind to a halt.

Does the new information economy have new economics, in the long run? When the economy

shifted from agrarian to industrial, economists focused on capital accumulation and removed land

from production functions. As we shift from an industrial to a knowledge economy, the nature of

inputs is changing again. In the information age, production increasingly revolves around informa-

tion and, specifically, data. Many firms are valued primarily for the data they have accumulated.

As of 2015, global production of information and communications technology (ICT) goods and

services was responsible for 6.5% of global GDP, and 100 million jobs (United Nations, 2017).

Collection and use of data is as old as book-keeping. But recent innovations in computing and arti-

ficial intelligence (AI) allow us to use more data more efficiently. How will this new data economy

∗ MIT Sloan School of Business and NBER; [email protected]. † Columbia Graduate School of Business, NBER, and CEPR, 3022 Broadway, New York, NY 10027; [email protected].

Thanks to Rebekah Dix for invaluable research assistance and to participants at the 2019 SED plenary, Kansas City Federal Reserve, 2020 ASSA and Bank of England annual research conference for helpful comments and suggestions. Preliminary and incomplete draft. Keywords: Data, growth, digital economy, data barter.


evolve? Because data is non-rival, increases productivity and is freely replicable (has returns to

scale), current thinking equates data growth with idea or technological growth. This article uses a

simple framework to argue the contrary, that data accumulation is more like capital accumulation,

which, by itself, cannot propel growth in the long run.

Data is information that can be encoded as a binary sequence of zeroes and ones. That broad

definition includes literature, visual art and technological breakthroughs. We are focusing more

narrowly on big data because that is where the technological breakthroughs have taken place that

have spawned talk of a new information age or economy. Machine learning or artificial intelligence

are prediction algorithms. Such algorithms predict the probability of high demand for a good on a given day, of a picture being a cat, or of an advertisement resulting in a sale. Much of the data that big data firms use for these predictions is transactions data. It is personal information about online buyers, satellite

images of traffic patterns near stores, textual analysis of user reviews, click through data, and other

evidence of economic activity. Such data is used to forecast sales, earnings and the future value

of firms and their product lines. Data is also used to advertise, which may create social value or

might simply steal business from other firms. We will consider both possibilities. But the essential

features of the data we consider are that it is user-generated and that it is used to predict uncertain

outcomes.

Data and technology are not the same. They entail different benefits and different costs. Trans-

actions data is information used to forecast unknown outcomes. The benefits of forecasting and

inventing technologies differ: Because perfect foresight models never feature infinite profits, perfect

forecasts must yield finite gains. If the gain to a perfect forecast is finite, the return to better and

better forecasts must be diminishing. Data and technology also have different production costs.

Producing new technology requires resources: skilled labor, a laboratory and prototypes. In con-

trast, data is a by-product of economic activity. Producing and selling generates data about the

volume of sales, the means of payment, and characteristics of buyers. Of course, collecting and

processing the data to extract knowledge is costly. But data itself is not produced in a lab. More

data comes from more economic activity. This difference in production matters. One of the fun-

damental insights of Romer (1990) is that monopolies are necessary to incentivize idea production.

This is not true of data production. Because data is a by-product of economic transactions and

data is less prone to leakage, no extra incentives are needed for its production.


The key features of the model reflect these differences. In the model, data is information,

generated by past economic activity, used to forecast future unknown states. Like all information,

it has returns to scale. Data can be used to make one unit of capital more valuable, or one thousand

units more productive. The more productive capacity the data is matched with, the greater are

the gains in output. In that sense, data is similar to technology. At the same time, the key insight

of the model is that data has diminishing returns. The diminishing returns arise from the fact that forecast errors give rise to profit losses. Data can mitigate those losses. But the best possible outcome is that data reduces forecast errors to zero. With perfect forecasting and zero operational

mistakes, profits are large, but not infinite. In fact, many macro models have no uncertainty. Such

environments are infinite-data limit economies.

Section 2 explores the logical conditions under which data cannot sustain long-run growth. The

key is that data, like all information, is a means of reducing uncertainty. Uncertainty is bounded

below by zero. Unless a perfect forecast gives a firm access to a pure, real, limitless arbitrage, the

perfect forecast generates finite payoff. If the payoff to data is bounded above, the returns, at some

point, must diminish, so as not to exceed that upper bound. Furthermore, perfect forecasts imply

that the future is not random. It must be a perfectly predictable function of past events, otherwise,

a data set could not predict it perfectly. While both of these are mathematical possibilities, they

go well beyond the fictions typically used in economic modelling.

At the same time, the way in which data is produced means that, at low levels, data has

increasing returns. Our model features what is referred to as a “data feedback loop.” More data makes a firm more productive, which results in more production and more transactions, which generates more data, and further increases productivity and data generation. This feedback loop is the dominant force when data is scarce, before the diminishing returns

to forecasting set in and overwhelm it. This increasing returns force can generate a growth poverty

trap. Firms, industries, or countries may have low levels of data, which confine them to low

levels of production and transactions, which make adopting new data technologies not worth the

cost. Thus, data could explain some of the increase in inequality, across firms, industries, or even

countries that adopt data technologies at different rates. While the increasing returns to data can

generate firm size inequality, eventually the decreasing returns to data cause firms to converge.

Thus, the properties of data suggest that the part of firm size dispersion that is due to data is a


transitory phenomenon.

To understand these trends, a theoretical framework is essential. Therefore, Section 1 constructs

a model to guide our thinking about which changes are logical outcomes, and which are not. The

model also offers guidance for measurement. Measuring and valuing data is complicated by the

fact that frequently, data is given away, in exchange for a free digital service. Our model makes

sense of this pricing behavior and assigns a value to goods and data that have a zero transactions

price. In so doing, it moves beyond price-based valuation, which often delivers misleading answers

when valuing digital assets.

The model looks much like a simple Solow (1956) model. There are inflows of data from new

economic activity and outflows, as data depreciates. The depreciation comes from the fact that

the state is constantly evolving. Firms are forecasting a moving target. Economic activity many

periods ago was quite informative about the state at the time. However, since the state has random

drift, such old data is less informative about what the state is today. When data is scarce, little is

lost due to depreciation. As data stocks grow large, depreciation losses are substantial. The point

at which data depreciation equals the inflow of new data is a data steady state. Firms with less

data than their steady state grow in data, and therefore in productivity and investment. If a firm

ever had more data than its steady state level, it should shrink. But without any other source of

growth in the model, data-driven growth, like capital-driven growth, eventually grinds to a halt.

Our result should not be interpreted to mean that data does not contribute to growth. It

absolutely does, in the same way that capital investment does. If non-data-technology (referred

to hereafter as “productivity” or “technology”) continues to improve, data helps us find the most

efficient uses of these new technologies. The accumulation of data may even reduce the costs of

technological innovation by reducing its uncertainty, or increase the incentives for innovation by

increasing the payoffs. The point is that being non-rival, freely replicable and productive is not

enough for data to sustain growth. If it is used for forecasting, as most big data is, the data

functions like capital. It increases output, but cannot sustain infinite growth all by itself. We still

need innovation for that.

Related literature. Work on information frictions in business cycles (Veldkamp (2005), Ordonez (2013) and Fajgelbaum et al. (2017)) has early versions of a data-feedback loop whereby


more data enables more production, which in turn, produces more data. In each of these models,

information is a by-product of economic activity; firms use this information to reduce uncertainty

and guide their decision-making. These authors did not call the information data. But it has all

the hallmarks of modern transactions data. The data produced was used by firms to forecast the

state of the business cycle. Better forecasting enabled the firms to invest more wisely and be more

profitable. These models restricted attention to data about aggregate productivity. In the data

economy, that is not primarily what firms are using data for. But such modeling structures can be

adapted so that production can also generate firm- or industry-specific information. As such, they

provide useful equilibrium frameworks on which to build a data economy.

In the growth literature, our model builds on Jones and Tonetti (2018). They explore how

different data ownership models affect the rate of growth of the economy. In both models, data is

a by-product of economic activity and therefore grows endogenously over time. What is different

here is that data is information, used to forecast a random variable. In Jones and Tonetti (2018),

data contributes directly to productivity. It is not information. A fundamental characteristic of

information is that it reduces uncertainty about something. When we model data as information,

not technology, Jones and Tonetti (2018)’s conclusions about the benefits of data privacy may still

hold. But instead of long-run growth, there is long-run stagnation.

Other authors consider the interaction of artificial intelligence (AI) and innovation. Agrawal et

al. (2018) develops a combinatorial-based knowledge production function and embeds it in the clas-

sic Jones (1995) growth model to explore how breakthroughs in AI could enhance discovery rates

and economic growth. Lu (2019) embeds self-accumulating AI in a Lucas (1988) growth model

and examines growth transition paths from an economy without AI to an economy with AI and

how employment and welfare evolve. Aghion et al. (2017) explore the role of AI for the growth

process and its reallocative effects. The authors argue that Baumol (1967)’s cost disease leads to

the declining GDP share of traditional industries as they become automated. This decline is offset

by the growing fraction of automated industry. In such an environment, AI may discourage future

innovation for fear of imitation, undermining incentives to innovate in the first place.

While some big data is used to facilitate innovation, most of the “new economy” data is web

searches, shopping behavior and other evidence of economic transactions. While the existence of


such data has inspired innovations such as the sharing economy and recommendations engines,

those new ideas are distinct from the data itself. The content of transactions data is not likely,

by itself, to reveal a breakthrough technology. Whether data and innovation are complements is

a separate question that these studies shed light on. The accumulation of data used solely for

prediction is driving a large and growing sector of the economy. Our contribution is to understand

the consequence of big data and the new prediction algorithms alone, for economic growth.

In the finance literature, Begenau et al. (2018) explore growth in the data processing capacity

of financial investors and how data affects firm size through financial markets. They do not model

firms’ use of their own data. Such studies complement this work by illustrating other ways in which

abundant data is re-shaping the economy.

Finally, the model builds on the five-equation toy model in Farboodi et al. (2019), which was

designed to explore the size distribution of heterogeneous firms. In this paper, we add features such

as adjustment costs and non-rival, tradeable data. These features make the model more suitable

for answering questions about time-series dynamics and the long run.

1 A Data Economy Growth Model

Real Goods Production Time is discrete and infinite. There is a continuum of competitive

firms indexed by i. Each firm can produce k_{i,t}^α units of goods with k_{i,t} units of capital. These goods have quality A_{i,t}. Thus firm i's quality-adjusted output is

y_{i,t} = A_{i,t} k_{i,t}^α    (1)

The quality of a good depends on a firm’s choice of a production technique ai,t. Each period

firm i has one optimal technique, with a persistent and a transitory component: θ_{i,t} + ε_{a,i,t}. Neither component is separately observed. The persistent component θ_{i,t} follows an AR(1) process: θ_{i,t} = θ + ρ(θ_{i,t−1} − θ) + η_{i,t}. The AR(1) innovation η_{i,t} is i.i.d. across time. We explore two possible correlations of η_{i,t} across firms. First, we consider independent θ processes (corr(η_{i,t}, η_{j,t}) = 0, ∀ i ≠ j). Then we consider an aggregate θ process (corr(η_{i,t}, η_{j,t}) = 1, ∀ i, j). The transitory shock ε_{a,i,t} is i.i.d. across time and firms and is unlearnable.


The optimal technique is important for a firm because the quality of a firm's good, A_{i,t}, depends on the squared distance between the firm's production technique choice a_{i,t} and the optimal technique θ_{i,t}:

A_{i,t} = A − (a_{i,t} − θ_{i,t} − ε_{a,i,t})^2    (2)
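To fix ideas, the following is a minimal simulation sketch of equations (1) and (2) under purely hypothetical parameter values (the variable names and numbers are ours, not a calibration). It traces the drifting optimal technique and the quality losses of a firm that, having no data, simply sets its technique to the unconditional mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, for illustration only
rho, theta_bar = 0.9, 1.0          # persistence and unconditional mean of the optimal technique
sigma_theta, sigma_a = 0.3, 0.2    # std. dev. of the AR(1) innovation and of the unlearnable shock
A_const, alpha, k = 2.0, 0.4, 1.0  # quality intercept A, capital share, and a fixed capital stock
T = 50

theta = np.full(T, theta_bar)
quality = np.zeros(T)
output = np.zeros(T)

for t in range(1, T):
    # Persistent component: theta_t = theta + rho*(theta_{t-1} - theta) + eta_t
    theta[t] = theta_bar + rho * (theta[t - 1] - theta_bar) + sigma_theta * rng.standard_normal()
    eps_a = sigma_a * rng.standard_normal()     # unlearnable transitory shock
    a_choice = theta_bar                        # a data-poor firm just uses the unconditional mean
    quality[t] = A_const - (a_choice - theta[t] - eps_a) ** 2   # equation (2)
    output[t] = quality[t] * k ** alpha                          # equation (1)

print(f"average quality with no data: {quality[1:].mean():.3f}")
```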

Data The role of data is that it helps firms to choose better production techniques. One in-

terpretation is that data can inform a firm whether blue or green cars or white or brown kitchens

will be more valued by their consumers, and produce accordingly. Transactions help to reveal

these preferences but they are constantly changing and firms must continually learn to catch up.

Another interpretation is that the technique is inventory management, or other cost-saving activi-

ties. Observing many establishments with a range of practices and many customers provides useful

information for optimizing business practices.

Specifically, data is informative about θi,t. The role of the temporary shock εa is that it prevents

firms, whose payoffs reveal their productivity Ai,t, from inferring θi,t at the end of each period.

Without it, the accumulation of past data would not be a valuable asset. If a firm knew the value

of θi,t−1 at the start of time t, it would maximize quality by conditioning its action ai,t on period-t

data ni,t and θi,t−1, but not on any data from before t. All past data is just a noisy signal about

θi,t−1, which the firm now knows. Thus preventing the revelation of θi,t−1 keeps old data relevant

and valuable.

The next assumption captures the idea that data is a by-product of economic activity. The

number of data points n observed by firm i at the end of period t depends on its production k_{i,t}^α:

n_{i,t} = z_i k_{i,t}^α,    (3)

where z_i is the parameter that governs how much data a firm can mine from its customers. A data mining firm is one that harvests lots of data per unit of output.

Each data point m ∈ [1 : n_{i,t}] reveals

s_{i,t,m} = θ_{i,t} + ε_{i,t,m},    (4)


where εi,t,m is i.i.d. across firms, time, and signals. For tractability, we assume that all the shocks

in the model are normally distributed: fundamental uncertainty is η_{i,t} ∼ N(µ, σ_θ^2), signal noise is ε_{i,t,m} ∼ N(0, σ_ε^2), and the unlearnable quality shock is ε_{a,i,t} ∼ N(0, σ_a^2).
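Because all shocks are normal, Bayes' law takes a simple additive form in precisions: observing the n_{i,t} conditionally independent signals in (4) raises the precision of beliefs about θ_{i,t} by n_{i,t} σ_ε^{−2},

Var[θ_{i,t} | prior, signals]^{−1} = Var[θ_{i,t} | prior]^{−1} + n_{i,t} σ_ε^{−2}.

This is the sense, used repeatedly below, in which each data point contributes precision σ_ε^{−2} to a firm's stock of knowledge.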

Data Trading and Non-Rivalry In order to model data as a tradeable asset, we need to allow

for the possibility that the amount of data a firm produces is not the same as the amount of data

they use. The difference between the two is the amount of data purchased from or effectively lost

due to its sale to other firms.1 Let δit be the amount of data traded by firm i at time t. If δit < 0,

the firm is selling data. If δit > 0, the firm purchased data. We restrict δit > −nit so that a firm

cannot sell more data than it produces. Let the price of one piece of data be denoted πt.

Of course, data is non-rival. Some firms use data and then sell that same data to others. If there

were no cost to selling one’s data, then every firm in this competitive, price-taking environment

would sell all its data to all other firms. In reality, that does not happen. Instead, we assume that

when a firm sells its data, it loses a fraction ι of the amount of data that it sells to each other firm.

Thus if a firm sells an amount of data δit < 0 to other firms, then the firm has nit+ ιδit data points

left to add to its own stock of knowledge. (Note that ιδ < 0 so that the firm has less data than the

nit points it produced.) This loss of data ι is a stand-in for the loss of market power that comes

from sharing one’s data. If the firm buys δit > 0 units of data, it adds nit + δit units of data to its

stock of knowledge. Another interpretation of this assumption is that there is a transaction cost of

trading data, proportional to the data value.
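As a concrete numerical illustration (the numbers are hypothetical): a firm that produces n_{i,t} = 100 data points and faces ι = 0.5 keeps

n_{i,t} + ι δ_{i,t} = 100 + 0.5 × (−40) = 80

data points if it sells δ_{i,t} = −40, but adds n_{i,t} + δ_{i,t} = 100 + 40 = 140 data points to its stock of knowledge if it instead buys δ_{i,t} = 40.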

Data Adjustment and the Stock of Knowledge The information set of firm i when it

chooses its technique a_{i,t} is I_{i,t} = [{A_{i,τ}}_{τ=0}^{t−1}; {{s_{i,τ,m}}_{m=1}^{n_{i,τ}}}_{τ=0}^{t}]. To make the problem recursive

and to define data adjustment costs, we construct a helpful summary statistic for this information,

called the “stock of knowledge.”

Each firm's flow of n_{i,t} new data points allows it to build up a stock of knowledge Ω_{i,t} that it uses to forecast future economic outcomes. We define the stock of knowledge of firm i at time t, Ω_{i,t}, to be the precision of firm i's forecast of θ_{i,t},

1 This formulation prohibits firms from both buying and selling data in the same period. See Appendix ?? for a discussion of this assumption. To micro-found such a cost with non-exclusive use of data requires moving to an imperfect competition model. See Jones and Tonetti (2018) for an example. Such a model involves far more complexity than is required to make the simple points made here.


which is formally:

Ω_{i,t} := E_i[(E_i[θ_{i,t} | I_{i,t}] − θ_{i,t})^2]^{−1}.    (5)

Note that the conditional expectation on the inside of the expression is a forecast. It is the firm’s

best estimate of θit. The difference between the forecast and the realized value, Ei[θi,t|Ii,t] − θi,t,

is therefore a forecast error. An expected squared forecast error is the variance of the forecast. It’s

also called the variance of θ, conditional on the information set Ii,t, or the posterior variance. The

inverse of a variance is a precision. Thus, this is the precision of firm i's forecast of θ_{i,t}.

Data adjustment costs capture the idea that a firm that does not store or analyze any data

cannot freely transform itself into a big-data machine-learning powerhouse. That transformation

requires new computer systems, new workers with different skills, and learning by the management

team. As a practical matter, data adjustment costs are important because they make dynamics

gradual. If data is tradeable and there is no adjustment cost, a firm would immediately purchase

the optimal amount of data, just as in models of capital investment without capital adjustment

costs. Of course, the optimal amount of data might change as the price of data changes. But such

immediate adjustment would mute some of the cross-firm heterogeneity we are interested in.

We assume that, if a firm’s data stock was Ωi,t and becomes Ωi,t+1, the firm’s period-t output

is diminished by Ψ(ΔΩ_{i,t+1}) = ψ(ΔΩ_{i,t+1})^2, where ψ is a constant parameter and Δ represents the percentage change: ΔΩ_{i,t+1} = (Ω_{i,t+1} − Ω_{i,t})/Ω_{i,t}. The percentage change formulation is helpful

because it makes doubling one’s stock of knowledge equally costly, no matter what units data is

measured in.
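For instance, doubling the stock of knowledge costs ψ units of output regardless of the units in which Ω is measured:

Ω_{i,t+1} = 2 Ω_{i,t}  ⇒  ΔΩ_{i,t+1} = (Ω_{i,t+1} − Ω_{i,t})/Ω_{i,t} = 1  ⇒  Ψ(ΔΩ_{i,t+1}) = ψ · 1^2 = ψ.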

Firm’s Problem. A firm chooses a sequence of production, quality and data-use decisions

k_{i,t}, a_{i,t}, ω_{i,t} to maximize

E_0 Σ_{t=0}^{∞} (1/(1+r))^t [ P_t A_{i,t} k_{i,t}^α − Ψ(ΔΩ_{i,t+1}) − π_t δ_{i,t} − r k_{i,t} ]    (6)

Firms update beliefs about θi,t using Bayes’ law. Each period, firms observe last period’s

revenues and data, and then choose capital level k and production technique a. The information

set of firm i when it chooses its technique ai,t and its investment kit is Ii,t.

Here, we take the rental rate of capital as given, just to show the data-relevant mechanisms as


clearly as possible. It could be that this is a small open economy facing a world rate of interest

r. But, of course, one should embed this problem in an equilibrium context where capital markets

clear. Similarly, one can add labor markets and endogenize the demand for goods. The model here

is only a sketch of an idea that should be explored in a fuller economic context.

Equilibrium

Pt denotes the equilibrium price per quality unit of goods. In other words, the price of a good

with quality A is APt. The inverse demand function and the industry quality-adjusted supply are:

P_t = P Y_t^{−γ},    (7)

Y_t = ∫_i A_{i,t} k_{i,t}^α di.    (8)

Firms take the industry price Pt as given and their quality-adjusted outputs are perfect substitutes.

Solution The state variables of the recursive problem are the prior mean and variance of beliefs

about θi,t−1, last period’s revenues, and the new data points. Taking a first order condition with

respect to the technique choice, we find that the optimal technique is a^*_{i,t} = E_i[θ_{i,t} | I_{i,t}].

Lemma 1

Ω_{i,t+1} = [ρ^2 (Ω_{i,t} + σ_a^{−2})^{−1} + σ_θ^2]^{−1} + (n_{i,t} + δ_{i,t}(1_{δ_{i,t}>0} + ι 1_{δ_{i,t}<0})) σ_ε^{−2}    (9)

Recall that ι = 1 is exclusive use of data. All data sold (δit < 0) is subtracted unit by unit from

the seller's data set. They keep none of the data they sell. If 0 < ι < 1, then when the firm sells

data, its own data set is depreciated, in proportion to the amount of the sale. If ι = 0, then data

can be shared costlessly.
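The law of motion (9) translates directly into code. The sketch below is ours: the default parameter values are placeholders borrowed from the later figure captions, and the no-trade case is recovered by setting delta = 0.

```python
def next_stock_of_knowledge(omega, n, delta, rho=0.9, sigma_a2=0.1,
                            sigma_theta2=0.1, sigma_eps2=0.5, iota=1.0):
    """Law of motion (9) for the stock of knowledge Omega_{i,t+1}.

    omega : current precision Omega_{i,t}
    n     : new data points n_{i,t} = z_i * k_{i,t}**alpha
    delta : data traded; delta > 0 is a purchase, delta < 0 is a sale
    """
    # Carried-forward stock: old knowledge depreciates because theta drifts.
    carried = 1.0 / (rho**2 / (omega + 1.0 / sigma_a2) + sigma_theta2)
    # Data added to the firm's own stock: all of a purchase, but a seller
    # loses a fraction iota of what it sells.
    used = n + delta if delta > 0 else n + iota * delta
    return carried + used / sigma_eps2

# Example: a firm with Omega = 5 that produces 10 data points and sells 4 of them
print(next_stock_of_knowledge(5.0, 10.0, -4.0))
```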

Given this law of motion, the expected quality of firm i's good at time t is E_i[A_{i,t}] = A − Ω_{i,t}^{−1} − σ_a^2. We can thus

express expected firm value recursively.

Lemma 2 The optimal sequence of capital investment choices ki,t and data use choices ωi,t ≥ 0


solve the following recursive problem:

V(Ω_{i,t}) = max_{k_{i,t}, ω_{i,t}}  P_t (A − Ω_{i,t}^{−1} − σ_a^2) k_{i,t}^α − Ψ(ΔΩ_{i,t+1}) − π_t δ_{i,t} − r k_{i,t} + (1/(1+r)) V(Ω_{i,t+1})    (10)

where n_{i,t} = z_i k_{i,t}^α and the law of motion for Ω_{i,t} is given by (9).

See Appendix for the proof. This result greatly simplifies the problem by collapsing it to a

deterministic problem with only one state variable, Ωi,t. The reason we can do this is that quality

A_{i,t} depends on the conditional variance of θ_{i,t}, and that the information structure is similar to that of a Kalman filter, where the sequence of conditional variances is generally deterministic.
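To make the recursive problem concrete, here is a minimal value-function-iteration sketch for the special case with no data trading (δ_{i,t} = 0) and a fixed goods price P; the parameter values and variable names are ours and purely illustrative, not the authors' computation or calibration.

```python
import numpy as np

# Illustrative parameters (loosely in the spirit of the later figure captions)
rho, sigma_a2, sigma_theta2, sigma_eps2 = 0.9, 0.1, 0.1, 0.5
A_const, alpha, r, P, z, psi = 1.0, 0.5, 0.2, 0.75, 1.0, 0.5
beta = 1.0 / (1.0 + r)

omega_grid = np.linspace(0.5, 30.0, 200)   # grid for the stock of knowledge Omega
k_grid = np.linspace(1e-3, 5.0, 200)       # grid for capital k

def law_of_motion(omega, k):
    # Equation (9) with no data trading (delta = 0)
    carried = 1.0 / (rho**2 / (omega + 1.0 / sigma_a2) + sigma_theta2)
    return carried + z * k**alpha / sigma_eps2

V = np.zeros_like(omega_grid)
for _ in range(1000):                      # iterate the Bellman operator in (10)
    V_new = np.empty_like(V)
    for i, omega in enumerate(omega_grid):
        quality = A_const - 1.0 / omega - sigma_a2       # expected quality A - Omega^{-1} - sigma_a^2
        omega_next = law_of_motion(omega, k_grid)
        adj_cost = psi * ((omega_next - omega) / omega) ** 2
        flow = P * quality * k_grid**alpha - adj_cost - r * k_grid
        V_next = np.interp(omega_next, omega_grid, V)    # continuation value by interpolation
        V_new[i] = np.max(flow + beta * V_next)
    diff = np.max(np.abs(V_new - V))
    V = V_new
    if diff < 1e-8:
        break

# Marginal willingness to pay for one more unit of precision: dV/dOmega
marginal_value_of_data = np.gradient(V, omega_grid)
print(marginal_value_of_data[:3])
```

The last line computes the object discussed under "Valuing Data" below: the derivative ∂V/∂Ω as a firm's marginal willingness to pay for data.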

Notice here that the non-rivalry of data does not change our problem substantially. It is

equivalent to a kinked price of data, or to a transactions cost. We could redefine the choice variable

to be ω, the amount of data added to a firm’s stock of knowledge Ω. Then, ω = nit + δit for data

purchases (δit > 0) and ω = nit + ιδit for data sales when δit < 0. Then, we could re-express this

problem as a choice of ω and a corresponding price that depends on whether ω ≥ nit or ω < nit.

For now, we continue with ι = 1, understanding that this changes the interpretation of the price of

data.

Valuing Data In this formulation of the problem, Ωi,t can be interpreted as the amount of data

a firm has. Technically, it is the precision of the firm’s posterior belief. But according to Bayes’ rule

for normal variables, posterior precision is the discounted precision of prior beliefs plus the precision

of each signal observed. In other words, the precision of beliefs, Ωit is a linear transformation of

the number of all past used data points, {ω_{i,s}}_{s=1}^{t}. Ω_{i,t} captures the value of past observed data

through the term for the discounted prior precision, Ωi,t−1.

The marginal value of one additional piece of data, of precision 1, is simply ∂Vt/∂Ωit. When we

consider markets for buying and selling data, ∂Vt/∂Ωit represents the firm’s demand, its marginal

willingness to pay for data.


2 Analytical Results

We begin by establishing a few key properties of this data economy. The first three results are not

model-specific. They describe conditions under which data can sustain infinite growth. They do

not prove that data-driven growth is not possible. Rather, they show that anyone who believes that the accumulation of data for forecasting can sustain growth forever must also accept some logically equivalent statements.

The remainder of the results describe comparative statics and dynamic properties: Who buys

or sells data, at what price, how does the behavior of one firm affect the incentives of others, and

finally, what are the dynamics of data accumulation?

2.1 Long Run Growth Impossibility Results

Consider an economy, where productivity A is fixed. Only data nit grows. Data is used to forecast

random outcomes. Call that a “data economy.” Then the following results must hold for data

accumulation to sustain a minimal positive rate of aggregate output growth.

Proposition 1 Data and Infinite Output. A data economy can sustain an aggregate growth

rate of output ln(Yt+1)− ln(Yt) that is greater than any lower bound g > 0, in each period t, only

if infinite data nit →∞ for some firm i implies infinite output yit →∞.

Proof: Suppose not. Then, for every firm i, producing infinite data implies finite firm output

yit. If each firm’s output is finite and the measure of all firms is finite, aggregate output, an integral

of these individual firm outputs, is finite: Yt has a finite upper bound in each period t. If Yt has

a finite upper bound in each period t and ln(Yt+1) − ln(Yt) > g > 0 in every period t, then there

exists a finite time at which Yt will surpass its upper bound. Since only data is growing and not

the measure of firms, aggregate output can only be infinite if some firm’s output is infinite, yit =∞

for some i, t. That is a contradiction.

Let Γt represent the distribution of firms’ stock of knowledge Ωit at time t. For a continuum

of firms, Γt encapsulates exactly what measure of firms have how much data, or more specifically,

what forecast precision, when they forecast θt.

Proposition 2 Data and Infinite Precision. Suppose aggregate output is a finite-valued


function of each firm’s forecast precision: Yt = f(Γt). A data economy can sustain an aggregate

growth rate of output ln(Y_{t+1}) − ln(Y_t) that is greater than any lower bound g > 0, in each

period t, only if infinite data nit →∞ for some firm i implies infinite precision Ωit →∞.

Proof : From proposition 1, we know that sustaining aggregate growth above any lower bound

g > 0 arises only if a data economy achieves infinite output Yt → ∞ when some firm has infinite

data nit →∞. Since Yt is a finite-valued function of Γt, it can only be that Yt →∞ if some moment

of Γ_t is also becoming infinite, Γ_t → ±∞. Moments of Γ_t cannot go to negative infinity because Γ is a distribution over Ω_t, which is a precision, defined to be non-negative. Thus some moment of Γ_t → ∞. If some amount of probability mass is being placed on Ω's that are approaching infinity,

that means there is some measure of firms that are achieving perfect forecast precision: Ωit →∞.

The next result says that if signals are derived from the observations of past events, then infinite

precision implies that the future is deterministic. There cannot be any fundamental randomness,

any unlearnable risk, because that would cause forecasts to be imperfect. Infinite precision means

zero forecast error with certainty. Such perfect forecasts can only exist if future events are perfectly

forecastable with past data. Perfectly forecastable means that, conditional on past events, the

future is not random. Thus, future events are deterministic.

Proposition 3 Infinite Precision Implies a Deterministic Future. Suppose all data points

si,t,m observed at the start of time t are t − 1 measurable signals about some future event θt. If

infinite data nit →∞ for some firm i implies infinite precision Ωit →∞, then future events θt are

deterministic: θt is a deterministic function of the sigma algebra of past events.

Proof : Suppose θt is not a deterministic function of the sigma algebra of past events. Then θt

is random with respect to the sigma algebra of past events. If signals are measurable with respect

to all past events, then they are functions of the sigma algebra of past events. Specifically, t − 1

measurable signals cannot contain information about the future event θt that is not already present

in past events. Thus, if θt is random with respect to past events, it must be random with respect

to all possible signals si,t,m . If θt is random with respect to the signals, there is strictly positive

forecast variance. If forecast variance cannot be zero, then forecast precision Ω_{i,t} cannot be infinite.


Taken together, these results suggest two distinct reasons why one might be skeptical of long-run, data-driven growth. The first reason is that perfect forecasts might not create infinite output.

The second reason is that a perfect forecast implies that future events are deterministic functions

of past events. Proving whether the universe is random or not is well beyond the scope of this

paper, and is perhaps better left to philosophers.

Data Barter and the Emergence of Data Platforms Next, we turn to results that are

more specific to the assumptions of our model. We begin by showing how data may be bartered

for goods (or services). Then we turn to the firm's decision to buy, sell, or keep to itself the data that it collects.

In the model, barter arises when the goods are exchanged for customer data, at a zero price.

This is a possibility result and a knife-edge case. But it is an interesting case because it illustrates

a phenomenon we see in reality. In many cases, digital products, like apps, are being developed

at great cost to a company and then given away “for free.” Free here means zero monetary price.

But obtaining the app does involve giving one’s data in return. That sort of exchange, with no

monetary price attached, is a classic barter trade.

Proposition 4 Firms barter goods for data. If π_t z_i α/(r ι) > 0, then it is possible that a firm will optimally choose positive production k_{i,t}^α > 0, even if its price per unit of output is zero: P_t A_{i,t} = 0.

Proof: Proving this possibility requires a proof by example. Suppose the price of goods P_t is zero and the price of data π_t is such that firm i finds it optimal to sell all its data produced in period t: δ_{i,t} = −n_{i,t}. In this case, differentiating the value function (10) with respect to k yields (π_t/ι) z_i α k^{α−1} = r. Can this optimality condition hold for positive k? If k^{1−α} = π_t z_i α/(r ι) > 0, then the firm optimally chooses k_{i,t} > 0, at price P_t = 0.

The idea of this argument is that costly investment k_{i,t} > 0 has a marginal benefit, more data that can be sold at price π_t, and a marginal cost r. If the price of data is sufficiently high, and/or

the firm is a sufficiently productive data producer (high zi), then the firm should engage in costly

production, even at a zero goods price because it also produces data, which has a positive price.
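To spell out the algebra behind this first-order condition (reading the proof as giving the firm an effective revenue of π_t/ι per data point sold; with the maintained ι = 1 this is simply π_t), the firm's period problem when P_t A_{i,t} = 0 and all data is sold is

max_k  (π_t/ι) z_i k^α − r k   ⇒   (π_t/ι) z_i α k^{α−1} = r   ⇒   k^{1−α} = π_t z_i α/(r ι) > 0,

so a strictly positive data price and data savviness deliver a strictly positive capital choice even when the good itself sells for nothing.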

When a firm produces lots of data, it can either sell it or keep it entirely to itself. A


firm that produces lots of data and sells that data looks like a data platform. This is an Amazon-

like business. Amazon does not primarily use the data for its own goods production. It sells data and data services

that allow other firms to produce more efficiently or to get more value out of the goods they have

produced. A firm that produces lots of data and then uses it is more like Uber. Uber mostly uses

its own data to offer its service more effectively and price it more profitably.

Different firms will make different choices, depending on how effective they are at producing

data (their z_i) and on the distribution of other firms' data productivity (the z_j's). The following

results guide our thinking about the emergence of data users and data platforms.

In some circumstances, firms that produce more data retain more data and use it to improve

their products. We call this the Uber model, named after the ride-sharing service that uses users'

demand for rides to direct drivers to high-demand areas and price the rides to equate supply and

demand. Uber’s main business is not the sale of the data they collect.

In other circumstances, firms that produce more data find that their comparative advantage

is in data production, not in goods production. Therefore, they retain less data and sacrifice the

quality of the good they produce. Such firms derive most of their revenue from selling data, not

from using the data. Because this type of firm looks like a data platform, we call this the Amazon

model, after the online digital platform, Amazon. Of course, Amazon does use its data to optimize

its operations. But most of its valuation comes not from the profit margins on the goods it sells.

For years, Amazon made losses on its goods sales. Rather, Amazon’s revenue and valuation come

mostly from the value of the data. Amazon’s data is what allows it to sell advertising, which is a

data service. The next two results prove that which outcome prevails depends on parameter values.

Proposition 5 More efficient data producers keep more data for themselves (Uber

model). Suppose the economy is in steady state and only one firm’s data savviness zit changes.

If condition x on parameters holds, then ∂Ωit/∂zit > 0.

Proposition 6 More efficient data producers sell their data (Amazon model). Suppose

the economy is in steady state and only one firm’s data savviness zit changes. If condition y on

parameters holds, then ∂Ωit/∂zit < 0.

All remaining proofs are (will be) in Appendix A. The next result is also about this same margin

of whether to use or sell data. But it explores the strategic dimension of this choice. How does a


firm’s choice of whether to use data to improve its product or sell data to others depend on the

amount of data collected by other firms? To explore this question, we consider two types of firms:

high data and low data.

Suppose there are two types of firms, data-savvy firms with z_H and less-data-savvy firms with

zL < zH . Let λ be the fraction of less-data-savvy firms.

Proposition 7 Data-savvy firms retain more data when others are less savvy. If the

parameters satisfy condition z, then ∂Ωit/∂λ > 0.

The previous results have been about a choice at a particular point in time: whether to use

data in a proprietary way or sell it to others. It is not quite a static choice because that decision

to sell or not has long-run consequences for the evolution of the firm’s stock of knowledge. But the

last result characterizes the dynamic evolution of the aggregate economy.

Increasing and Decreasing Returns The final result highlights the two main competing forces

in this model. The first force is one for increasing returns. This is the data feedback loop: Firms

with more data produce higher-quality goods, which induces them to invest more, produce more,

and sell more, which, in turn, generates more data for them. That is a force that causes aggregate

knowledge accumulation to accelerate (left side of the S-curve). The second force is the diminishing returns that set in when data is already abundant and additional data does not reduce forecast

errors by very much. That is the right side of the S-curve.

Proposition 8 S-shaped accumulation of aggregate stock of knowledge. ∂^2Ω_t/∂t^2 is positive for low levels of Ω_t and negative for high levels of Ω_t.

Having described the qualitative properties of this economy, it is helpful to see what these properties

look like, and what they imply for the dynamics of the aggregate economy and the dispersion of

individual firms. To illustrate this, we need to turn to numerical analysis.

3 Numerical Results: Growth and the Distribution of Firms

When data results from economic activity, reduces forecast errors and improves the value of firms’

goods, there is both a force for increasing and a force for decreasing returns. This section explores


how these two forces interact and what long-run economic trajectories are possible.

Our goal is not to make precise quantitative forecasts of future growth. The model is far too

simple for that. Rather, the goal is to illustrate the forces at work and to compare and contrast

them with more traditional growth models. Illustrating with figures is helpful. Figures require

putting numbers on parameters. Each figure lists the parameters it uses. These parameters were chosen to be simple, to conform with standard economic intuition, or because they delivered the

trade-off we sought to illustrate.

Measuring data. The model suggests two possible ways of measuring data. One is to measure

output or transactions. If we think data is a by-product of economic activity, then a measure of that

activity should be a good indicator of aggregate data production. At the firm level, a firm’s data

use could differ from its data production, if data is traded. But one can adjust data production

for data sales and purchases, to get a firm-level flow measure of data. Then, a stock of data is

a discounted sum of data flows. The discount rate depends on the persistence of the market. If

the data is about demand for fashion, then rapidly changing tastes imply that data has a short

longevity and a high discount rate. If the data is mailing addresses, that is quite persistent, with

small innovations. The AR(1) coefficient and innovation variance of the variable being forecasted

are sufficient to determine the discount rate.
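One way to operationalize this suggestion (our illustration, not a formula from the paper) is to write the measured stock as a geometrically discounted sum of past data flows,

D_{i,t} = Σ_{s=0}^{∞} d^s n_{i,t−s},    0 ≤ d < 1,

where the survival factor d is larger the more persistent the forecasted variable (ρ close to one) and smaller the larger its innovation variance σ_θ^2, mirroring the carry-forward term in the law of motion (9).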

The second means of measuring data is to look at what actions it allows firms to choose. A

firm with more data can respond more quickly to market conditions than a firm with little data

to guide them. To use this measurement approach, one needs to take a stand on what actions

firms are using data to inform, what variable firms are using the data to forecast, and to measure

both the variable and the action. One example is portfolio choice in financial markets Farboodi

et al. (2018). Another example is firms’ real investment David et al. (2016). Both measure the

covariance between investment choices and future returns. That covariance between choices and

unknown states reveals how much data investors have about the future unknown state. A similar

approach could be to use the correlation between consumer demand and firm production, across a

portfolio of goods, to infer firms’ data about demand.

Which approach is better depends on what data is available. One difference between the two

is the units. Measuring correlation gives rise to natural units, in terms of the precision of the


information contained in the total data set. The first approach, counting data points, measures the data more directly. But not all data is equal. Some data is more useful for forecasting a

particular variable. The usefulness or relevance of the data is captured in how it is used to correlate

decisions and uncertain states.

Calibration The results below are not yet calibrated. However, to calibrate the model, we

propose to match the following moments of the data. The capital-output ratio tells us something

about the average productivity, which would be governed by a parameter like A, among others.

The variance of GDP and the capital stock, each relative to its mean, var(Kt)/mean(Kt) and

var(Yt)/mean(Yt), are each informative about the variance of the shocks to the model, such as σθ and

σa. The share of aggregate income paid to capital is commonly thought to be about 0.4. Since this

is governed by the exponent α, we set α = 0.4. For the rental rate on capital, we use a riskless

rate of 3%, which is the average 3-month Treasury rate over the last 40 years. The inverse demand curve parameters determine the price elasticity of demand. We take γ and P from the literature. Finally, we model the adjustment cost for data, ψ, in the same way as others have modeled the adjustment cost of capital. We think this is a reasonable approach because adjusting one's process to use more data

typically involves the purchase of new capital, like new computing and recording equipment and

involves disruptive changes in firm practice, similar to the disruption of working with new physical

machinery.

Finally, we normalize the noise in each data point to σ_ε = 1. We can do this without loss of generality because it is effectively a re-normalization of the data-savviness parameters z_i of all firms. This is because for normal variables, having twice as many signals, each with twice the

variance, makes no difference to the mean or variance of the agent’s forecast. As long as we ignore

any integer problems with the number of signals, the amount of information conveyed per signal is

irrelevant. What matters is the total amount of information conveyed.
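In precision terms the normalization is immediate: n signals with noise variance σ_ε^2 contribute total precision n/σ_ε^2, and doubling both leaves that ratio unchanged,

n_{i,t}/σ_ε^2 = (2 n_{i,t})/(2 σ_ε^2),    with n_{i,t} = z_i k_{i,t}^α,

so any rescaling of σ_ε^2 can be absorbed into the data-savviness parameter z_i.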

Aggregate data inflows and outflows. Just as we typically teach the Solow (1956) model by

examining the inflows and outflows of capital, we can gain insight into our data economy growth

model by exploring the inflows and outflows of data. The inflows of data are new pieces of data

that are generated by economic activity. The number of new data points ni,t was assumed to be


data mining ability times end-of-period physical output: z_i k_{i,t}^α. By Bayes' law for normal variables, the total precision of that information is the sum of the precisions of all the data points: n_{i,t} σ_ε^{−2}.

That is the quantity represented by the line inflows in Figure 1. How can data flow out? It is not

really leaving. It is just depreciating. Data depreciates because data generated at time t is about

next period’s optimal technique θt+1. But that means that data generated s periods ago is about

θt−s+1. Since θ is an AR(1) process, it is constantly evolving. Data from many periods ago, about

a θ realized many periods ago is not as relevant as more recent data. So, just like capital, data

depreciates.

For this exercise where we examine the aggregate amount of data in the economy, we ignore

data purchases and sales. Sales just move data around, rather than changing the aggregate amount.

The extent of data depreciation can be seen in (9), the law of motion for the stock of knowledge

Ωi,t. The second term of that law of motion is exactly the inflows described above. The first term of

the law of motion is the amount of data carried forward from period t: [ρ^2 (Ω_{i,t} + σ_a^{−2})^{−1} + σ_θ^2]^{−1}. The Ω_{i,t} + σ_a^{−2} term represents the stock of knowledge at the start of time t plus the information about the period-t technique revealed to a firm by observing its own output. This precision is inverted to get a variance, which is multiplied by the squared persistence of the AR(1) process, ρ^2. If the process for the optimal technique θ_t were perfectly persistent, then ρ = 1 and this term would not discount old data. If the θ process is i.i.d., ρ = 0, then old data is irrelevant for the future. Next, the formula adds the variance of the AR(1) process innovation, σ_θ^2. This

represents the idea that volatile θ innovations add noise to old data and make it less valuable in

the future. Finally, the whole expression is inverted again so that the variance is transformed back

into a precision and once again, represents a (discounted) stock of knowledge. The depreciation of

knowledge is the end-of-period-t stock of knowledge, minus the discounted stock:

Outflows = Ω_{i,t} + σ_a^{−2} − [ρ^2 (Ω_{i,t} + σ_a^{−2})^{−1} + σ_θ^2]^{−1}    (11)
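The following is a minimal numerical sketch in the spirit of Figure 1 (our construction, not the authors' code): placeholder parameters are borrowed from the later figure captions, capital is chosen by a myopic static rule, and the data steady state is found by iterating the law of motion (9) with δ = 0 to its fixed point.

```python
import numpy as np

# Placeholder parameters in the spirit of the later figure captions; illustrative only
rho, sigma_a2, sigma_theta2, sigma_eps2 = 0.9, 0.1, 0.1, 0.5
A_const, alpha, r, P, z = 1.0, 0.5, 0.2, 0.75, 1.0

def carried(omega):
    # Stock carried forward: [rho^2 (Omega + sigma_a^{-2})^{-1} + sigma_theta^2]^{-1}
    return 1.0 / (rho**2 / (omega + 1.0 / sigma_a2) + sigma_theta2)

def inflow(omega):
    # New precision generated by production, z * k^alpha * sigma_eps^{-2},
    # using a myopic capital rule (static FOC, ignoring the dynamic value of data)
    quality = np.maximum(A_const - 1.0 / omega - sigma_a2, 0.0)
    k_star = (alpha * P * quality / r) ** (1.0 / (1.0 - alpha))
    return z * k_star**alpha / sigma_eps2

def outflow(omega):
    # Depreciation of knowledge, equation (11)
    return omega + 1.0 / sigma_a2 - carried(omega)

grid = np.linspace(1.2, 20.0, 200)          # knowledge stocks for a Figure-1-style plot
in_curve, out_curve = inflow(grid), outflow(grid)

# The data steady state: iterate the law of motion (9) with delta = 0 to its fixed point
omega = 2.0
for _ in range(1000):
    omega = carried(omega) + inflow(omega)
print(f"data steady state: Omega* approx {omega:.2f}")
```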

Figure 1 illustrates the inflows and outflows, in a form that looks just like the traditional Solow

model with capital accumulation. Depreciation is not linear, but is very close to linear. For other

parameter values, sometimes it has a noticeable concavity at low levels of knowledge Ω. The

inflows have diminishing returns. There are two reasons for this. The first is that more data raises


Figure 1: Economy converges to a data steady state: Aggregate inflows and outflows of data. The line labeled inflows plots z_i (k^*_{i,t})^α σ_ε^{−2} for a firm i that makes an optimal capital decision k^*_{i,t}, at different levels of the initial data stock. The line labeled outflows plots the quantity in (11).

efficiency, which incentivizes more capital investment. But capital has diminishing returns because

the exponent in the production function is α < 1. But that is not the only reason. Even if capital

did not have diminishing marginal returns, inflows would still exhibit concavity. The reason is

that the returns to data are bounded. With infinite data, all learnable uncertainty about θ can be

resolved. With a perfect forecast of θ, the expected good quality is (A − σ_a^2), which is finite. Thus,

the optimal capital investment is finite. Since a continuous, increasing function that never becomes concave would eventually cross any finite upper bound, productivity, investment and data inflows must eventually be concave in the stock of knowledge Ω.

Diminishing returns arise because we model data as information, not directly as an addition to

productivity. Information is used to forecast random variables. With infinite information, the best

forecast has perfect foresight. But perfect foresight does not typically mean infinite investment or

profits. Information has diminishing returns because its ability to reduce variance gets smaller and

smaller as beliefs become more precise.

Data poverty traps. The aggregate accumulation of data exhibits both increasing and decreas-

ing returns. At low levels of data, more data would incentivize more output, which would create


Figure 2: Aggregate growth dynamics without data sales: Data accumulation grows knowledge and output over time, with diminishing returns.

more data. This is the convex region of the inflows graph in Figure 1. This initial increasing returns

force creates the possibility of a data poverty trap. On the left side of Figure 1, data outflows exceed

inflows. For such an economy, the stock of data does not grow. The optimal capital investment

is not high enough to generate enough transactions to replace the number of data points lost due

to data depreciation. This no-growth phase, in which the firm produces low-quality goods, can be

interpreted as a barrier to entry in global data markets. At high levels of data, diminishing returns

overwhelm this force. Thus, if a firm can manage to accumulate enough data, it can escape the

trap and then sustain a high stock of knowledge.

These results suggest that financially constrained young firms may never enter or make the

investment in data systems. By not paying the data adjustment costs, they can remain financially

viable. But by not collecting and using data, they never escape the trap of poor data and low

value-added.

In a country where data science skills are scarce, the labor cost of hiring a data analyst may

make this data adjustment cost very high. Thus scarce data-skilled labor might condemn an entire


economy of firms to this data poverty trap.

Growth without data sales. To build intuition about the mechanisms at work, we begin

by looking at a simple version of the model where firms’ optimal techniques are independent

(corr(ηi, ηj) = 0 ∀i 6= j). This implies that firms can only learn from data that they them-

selves produce: δit = 0, ∀i, t. They are allowed to buy data. But since such data is not informative

about their optimal action, it has no value to them.

Consider an economy of symmetric firms, all producing and accumulating data at the same

rate. Figure 2 shows how firms accumulate data rapidly. This accumulation helps them to pro-

duce higher-quality goods. Raw, non-quality-adjusted output in this example grows in proportion to Yt. When goods are higher quality, firms produce more units of them.

More production generates more transactions data. The accumulation of data makes each unit of

production generate more value.

The price per quality unit, Pt, falls monotonically as output rises. In this example, Pt falls fast enough that the quality-adjusted price of a unit of the good, PtAit, falls even as the good's quality, Ait, improves. That is possible, but not always true.

Growth with data sales. Next, consider an economy with perfectly correlated optimal tech-

niques (corr(ηi, ηj) = 1 ∀i, j). In this economy, purchased data is just as relevant as the firms’ own

data. Now, firms will want to buy or sell data. For now, the price of data π is fixed. This could

be seen as a small open economy buying and selling into a large international data market. When

(corr(ηi, ηj) = 1 ∀i, j), one piece of a firm’s own data is perfectly substitutable for a piece of data

generated by another firm. For two firms selling an identical good to the same market, this might

be close to true. In many cases, a firm's own data is more relevant for forecasting the variables of interest to that firm: its demand, its future cost, or its own future productivity. But external data is certainly relevant and is frequently purchased. Thus the truth likely lies between this model and the previous, non-tradeable (zero-correlation) case.

There are two scenarios of interest here. The first is where the industry is not very good at

data mining (z = 1). In this case, the industry will be a net purchaser of data. Figure 3 illustrates

that firms in such an industry will accumulate data (a rise in Ωit in the top panel), but will do so by purchasing data, δit > 0. Firms that are poor data miners need to buy data from others to raise the quality of their production.

Figure 3: Dynamics for a firm that produces little data (z = 1). Parameters: ρ = 0.9, r = 0.2, β = 0.9, α = 0.5, ψ = 0.5, γ = 2, A = 1, P = 0.75, σ²a = 0.1, σ²θ = 0.1, σ²ε = 0.5, z = 1.

Figure 4: A data specialist: the productive data miner (z = 25) sells more data, but retains less knowledge and sells a high volume of cheaper, lower-quality goods. Parameters: ρ = 0.9, r = 0.2, β = 0.9, α = 0.5, ψ = 0.5, γ = 2, A = 1, P = 0.75, σ²a = 0.1, σ²θ = 0.1, σ²ε = 0.5, z = 25.

With a data market, an unproductive data producer is not condemned to produce low-quality goods. Instead, it can purchase data from others. The firms in the example continue to purchase data throughout their lifespan. Early on, data purchases increase the stock of knowledge. Once the stock is sufficiently large, the same data purchases are just enough to offset the depreciation of old data. As net inflows of data slow, price and output level off as well.

The opposite case is a highly productive data miner. In Figure 4, the firms in the economy

all have z = 25, meaning that every unit of production generates 25 signals. These firms do not

use most of their data. Instead, they sell most of their data. In the top panel of Figure 4, firms’

data sales are positive and far exceed their data stock. These firms do not produce higher quality

goods. Instead, they produce a larger volume of low-quality goods. They do this because their

main source of revenue is not their goods sales but their data sales. In period 1, these firms set a price below marginal cost. They sell goods at a loss, like free apps for download, in order to mine data from their customers and sell that data. This is optimal because the price of data is sufficiently high. These firms lose money on goods, but profit on their data sales.

Data barter. One phenomenon observed in digital goods is that they are often given away for

a low or zero cost. These goods or services were costly to produce. The reason that they are given

away is not that they have no value. Instead, this should be thought of as a barter trade. The

digital good is being bartered for the user’s data. This barter-like trade arises in this model as well.

In some examples, the optimal price of a firm's goods is initially well below marginal cost, and even close to zero. Figure 5 illustrates an example of this, where the firm makes negative profits for the first ten periods because it prices its goods below marginal cost. The reason for

the near-zero price is that the firm wants to sell many units, in order to accumulate data. Data

will boost the productivity of future production and enable future profitable goods sales. It could

also be that the firm wants to accumulate data, in order to sell it. Lowering the price is a costly

investment in a productive asset that yields future value. This is what makes a zero optimal price,

or data barter, possible.

In the previous results, all firms were growing together. In this example, only one firm is accumulating data, while the rest of the firms are in steady state. Since firms are competitive, this one firm's output does not affect the equilibrium goods price.

Figure 5: Costly investment in data: when one firm alone accumulates data, it initially sells at a near-zero price and makes zero profit. This is a form of bartering goods for customers' data.

This negative value production is a costly investment in data, which enables future profitable

production. In this example, that strategy works for the firm because it faces no financing con-

straint. In reality, many firms that make losses for years on end lose their financing and exit.

Firm Size Dispersion: Bigger, then Smaller. One of the biggest questions in macroeconomics

and industrial organization is: What is the source of the changes in the distribution of firm size?

One possible source is the accumulation of data. Big firms do more transactions, which allows

them to be more productive and grow bigger. This force is present in our model and does generate

dispersion in firm size. However, that dispersion is a transitory phenomenon.

The S-shaped dynamic of firm growth implies that firm size first becomes more heterogeneous and then converges.

Figure 6: Initial differences in stocks of data generate firm size dispersion, but eventual convergence.

During the convex, increasing-returns portion of the growth trajectory, small differences in firms' initial data stocks get amplified. Increasing returns mean that a firm with a larger stock of knowledge accumulates additional data at a faster pace. That drives divergence. Later in the growth path, the trajectory becomes concave. This is where diminishing returns set in. In this region, firms with a larger stock of knowledge accumulate more data, but less additional knowledge. As diminishing returns to data set in, firms with varying degrees of abundance in their stocks of knowledge become similar in their productivity and their output. This is the region of convergence. Small initial differences in the stock of knowledge fade in importance.

Figure 6 illustrates the convergence in firm size at the end of the growth trajectory. Holding

all other parameters fixed, firms are accumulating more data, which results in similar stocks of

knowledge because of the diminishing returns to data. This by no means proves that data is

responsible for any change in firm size. It simply points out that, if some part of the increase in

firm size dispersion came from data, then that effect should be transitory. It may be long-lived.

But eventually, the initial advantage conferred by a large initial data stock should fade.

Data Dispersion with Data Trading. One might think that allowing firms to trade data would eliminate data dispersion. Presumably, firms that do not produce much data could simply pay to acquire it, eliminating the dispersion. But that is not what happens.

Figure 7: Data sales do not eliminate data dispersion. When few firms have the ability to generate large amounts of data (λ = 0.99, lightest lines), the difference in the data stocks of data-savvy and unsavvy firms is large.

Figure 7 shows that when few firms have the ability to generate large amounts of data (λ = 0.99,

lightest lines), the difference in the data stocks of data savvy and unsavvy firms is large. When

many firms can generate lots of data (λ = 0.01, darkest lines), even the few that cannot generate

much data end up with similar size data stocks. This is the numerical counterpart to Proposition 7.

It also helps to explain why data can generate firm size dispersion, even in a world where data-poor

firms can buy data.

4 Extending the Data Growth Framework

The benefit of a simple framework is that it can be extended in many directions to answer other

questions. Below are three extensions that could allow the model to address a variety of interesting

data-related phenomena.

Data for Business Stealing Data is not always used for a socially productive purpose. One

might argue that many firms use data simply to steal customers away from other firms. So far,

we’ve modeled data as something that enhances a firm’s productivity. But what if it increases

profitability, in a way that detracts from the profitability of other firms? Using an idea from Morris


and Shin (2002), we can model such business-stealing activity as an externality that works through

productivity:

\[
A_{i,t} = \bar A - \left(a_{i,t} - \theta_{i,t} - \varepsilon_{a,i,t}\right)^2 + \int_{j=0}^{1}\left(a_{j,t} - \theta_{j,t} - \varepsilon_{a,j,t}\right)^2 dj \tag{12}
\]

This captures the idea that when one firm uses data to reduce the distance between its chosen technique $a_{it}$ and the optimal technique $\theta_{it} + \varepsilon_{a,it}$, that firm benefits, but all other firms lose a little bit. These gains and losses are such that, when added up to compute aggregate productivity, they cancel out: $\int_0^1 A_{it}\,di = \bar A$. This represents an extreme view that data processing contributes absolutely

nothing to social welfare. While that is unlikely, examining the two extreme cases is illuminating.

What we find is that reformulating the problem this way makes very little difference for most

of our conclusions. The externality does reduce the productivity of firms and does reduce welfare,

relative to the case without the externality. But it does not change firms’ choices. Therefore, it

does not change data inflows, outflows or accumulation. It does not change firm dynamics. The

reason there is so little change is that the externality does not enter in a firm’s first order condition.

It does not change its optimal choice of anything. Firm i’s actions have an infinitesimal, negligible

effect on the average productivity term $\int_0^1\left(a_{j,t} - \theta_{j,t} - \varepsilon_{a,j,t}\right)^2 dj$. Because firm $i$ is massless in a

competitive industry, its actions do not affect that aggregate term. So the derivative of that term

with respect to i’s choice variables is zero. If the term is zero in the first order condition, it means

it has no effect on choices of the firm.
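In symbols (restating the argument just given, not an additional assumption), atomism implies
\[
\frac{\partial}{\partial a_{i,t}} \int_0^1 \left(a_{j,t} - \theta_{j,t} - \varepsilon_{a,j,t}\right)^2 dj = 0,
\]
because firm $i$ has measure zero in the integral, so the externality term never enters any first order condition.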

Whether data is productivity-enhancing or not matters for welfare and the price per good, but

does not change our conclusions that a firm’s growth from data alone is bounded, that firms can

be stuck in data poverty traps, or that markets for data will arise to partially mitigate the unequal

effects of data production.

Investing in Data-Savviness So far, the data savviness parameter zi has been fixed. To some

extent, this represents the idea that certain industries will spin off more data than others. Credit

card companies learn an enormous amount about their clients, per dollar value of service they

provide. Barber shops or other service providers may be learning soft information from their

clients, but accumulate little hard data. At the same time, every firm can do more to collect,

structure and analyze the data that its transactions produce. We could allow a firm to increase


its data-savviness zi, at a cost. Our previous results still hold for a given set of z choices. Put

differently, for any exogenously assumed set of z’s, there exists a cost structure on the choice of z

that would give rise to that set of z as an optimal outcome. However, endogenizing these choices,

with a fixed cost structure, might produce changes in the cross-section of firms’ data, over time.

Data allocation choice. A useful extension of the model would be to add a choice about what

type of data to purchase or process. To do that, one needs to make the relevant state θit a vector

of variables. Then, use rational inattention.

Why is rational inattention a natural complement to this model? Following Sims (2003), rational inattention problems consider what types of information or data are most valuable to process, subject to a constraint on the mutual information between the processed information and the underlying

uncertain economic variables. The idea of using mutual information as a constraint, or the basis

of a cost function, comes from the computer science literature on information theory. The mutual

information of a signal and a state is an approximation to the length of the bit string or binary

code necessary to transmit that information (Cover and Thomas (1991)). While the interpreta-

tion of rational inattention in economics has been mostly as a cognitive limitation on processing

information, the tool was originally designed to model computers’ processing of data. Therefore, it

would be well suited to explore the data processing choices of firms.
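For the Gaussian signals used throughout the model, mutual information takes a simple form. The identity below is standard information theory (Cover and Thomas (1991)), stated here only to illustrate how such a processing constraint would attach to the model's signals; it is not part of the model as written:
\[
I(\theta_{it}; s_{it}) = \frac{1}{2}\log_2\!\left(\frac{Var(\theta_{it})}{Var(\theta_{it}\mid s_{it})}\right) \ \text{bits},
\]
so a capacity of $\kappa$ bits per period caps the ratio of prior to posterior variance at $2^{2\kappa}$, and the firm's data allocation problem becomes how to divide that variance reduction across the elements of the vector $\theta_{it}$.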

5 Conclusions

The economics of transactions data bears some resemblance to technology and some to capital.

It is not identical to either. Yet, when economies accumulate data alone, the aggregate growth

economics are similar to an economy that accumulates capital alone. Diminishing returns set in

and the gains are bounded. Yet, the transition paths differ. There can be regions of increasing

returns that create possible poverty traps. Such traps arise with capital externalities as well.

Data’s production process, with its feedback loop from data to production and back to data, makes

such increasing returns a natural outcome. When markets for data exist, some of the effects are

mitigated, but the diminishing returns persist. Even if data does not increase output at all, but is

only a form of business stealing, the dynamics are unchanged. Thus, while the accumulation and


analysis of data may be the hallmark of the “new economy,” this new economy has many economic

forces at work that are old and familiar.


References

Aghion, Philippe, Benjamin F. Jones, and Charles I. Jones, "Artificial Intelligence and Economic Growth," 2017. Stanford GSB Working Paper.

Agrawal, Ajay, John McHale, and Alexander Oettl, "Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth," in "The Economics of Artificial Intelligence: An Agenda," National Bureau of Economic Research, Inc, 2018.

Baumol, William J., "Macroeconomics of Unbalanced Growth: The Anatomy of Urban Crisis," The American Economic Review, 1967, 57 (3), 415–426.

Begenau, Juliane, Maryam Farboodi, and Laura Veldkamp, "Big Data in Finance and the Growth of Large Firms," Working Paper 24550, National Bureau of Economic Research, April 2018.

Cover, Thomas and Joy Thomas, Elements of Information Theory, first ed., John Wiley and Sons, New York, New York, 1991.

David, Joel M., Hugo A. Hopenhayn, and Venky Venkateswaran, "Information, Misallocation, and Aggregate Productivity," The Quarterly Journal of Economics, 2016, 131 (2), 943–1005.

Fajgelbaum, Pablo D., Edouard Schaal, and Mathieu Taschereau-Dumouchel, "Uncertainty Traps," The Quarterly Journal of Economics, 2017, 132 (4), 1641–1692.

Farboodi, Maryam, Adrien Matray, and Laura Veldkamp, "Where Has All the Big Data Gone?," 2018.

Farboodi, Maryam, Roxana Mihet, Thomas Philippon, and Laura Veldkamp, "Big Data and Firm Dynamics," American Economic Association Papers and Proceedings, May 2019.

Jones, Chad and Chris Tonetti, "Nonrivalry and the Economics of Data," 2018. Stanford GSB Working Paper.

Jones, Charles I., "R&D-Based Models of Economic Growth," Journal of Political Economy, 1995, 103 (4), 759–784.

Lu, Chia-Hui, "The Impact of Artificial Intelligence on Economic Growth and Welfare," 2019. National Taipei University Working Paper.

Lucas, Robert, "On the Mechanics of Economic Development," Journal of Monetary Economics, 1988, 22 (1), 3–42.

Morris, Stephen and Hyun Song Shin, "Social Value of Public Information," The American Economic Review, 2002, 92 (5), 1521–1534.

Ordonez, Guillermo, "The Asymmetric Effects of Financial Frictions," Journal of Political Economy, 2013, 121 (5), 844–895.

Romer, Paul, "Endogenous Technological Change," Journal of Political Economy, 1990, 98 (5), S71–S102.

Sims, Christopher, "Implications of Rational Inattention," Journal of Monetary Economics, 2003, 50 (3), 665–690.

Solow, Robert M., "A Contribution to the Theory of Economic Growth," The Quarterly Journal of Economics, February 1956, 70 (1), 65–94.

United Nations, "Information Economy Report: Digitalization, Trade and Development," Technical Report, United Nations Conference on Trade and Development, 2017.

Veldkamp, Laura, "Slow Boom, Sudden Crash," Journal of Economic Theory, 2005, 124 (2), 230–257.


A Appendix: Derivations and Proofs

A.1 Belief updating

The information problem of firm $i$ about its optimal technique $\theta_{i,t}$ can be expressed as a Kalman filtering system with a 2-by-1 observation equation; posterior beliefs are summarized by the conditional mean and variance $(\mu_{i,t}, \Sigma_{i,t})$.

We start by describing the Kalman system, and show that the sequence of conditional variances

is deterministic. Note that all the variables are firm specific, but since the information problem is

solved firm-by-firm, for brevity we suppress the dependence on firm index i.

At time $t$, each firm observes two types of signals. First, date $t-1$ output provides a noisy signal about $\theta_{t-1}$:
\[
y_{t-1} = \theta_{t-1} + \varepsilon_{a,t-1}, \tag{13}
\]
where $\varepsilon_{a,t} \sim \mathcal{N}(0, \sigma_a^2)$. Second, the firm observes data points, as a by-product of economic activity. For firms that do not trade data, the number of new data points added to the firm's data set is $\omega_{it} = n_{it} = z_i k_{it}^{\alpha}$. For firms that do trade data, $\omega_{it} = n_{it} + \delta_{it}\left(\mathbb{1}_{\delta_{it}>0} + \iota\,\mathbb{1}_{\delta_{it}<0}\right)$. The set of signals $\{s_{t,m}\}_{m\in[1:\omega_{i,t}]}$ is equivalent to an aggregate (average) signal $s_t$ such that
\[
s_t = \theta_t + \varepsilon_{s,t}, \tag{14}
\]
where $\varepsilon_{s,t} \sim \mathcal{N}\left(0, \sigma_\varepsilon^2/\omega_{it}\right)$. The state equation is

\[
\theta_t - \bar\theta = \rho\left(\theta_{t-1} - \bar\theta\right) + \eta_t,
\]
where $\eta_t \sim \mathcal{N}(0, \sigma_\theta^2)$.

At time $t$, the firm takes as given
\[
\mu_{t-1} = E\left[\theta_t \mid s^{t-1}, y^{t-2}\right], \qquad \Sigma_{t-1} = Var\left[\theta_t \mid s^{t-1}, y^{t-2}\right],
\]
where $s^{t-1} = \{s_{t-1}, s_{t-2}, \ldots\}$ and $y^{t-2} = \{y_{t-2}, y_{t-3}, \ldots\}$ denote the histories of the observed variables, and $s_t = \{s_{t,m}\}_{m\in[1:n_{i,t}]}$.

We update the state variable sequentially, using the two signals. First, combine the priors with $y_{t-1}$:
\begin{align*}
E\left[\theta_{t-1}\mid \mathcal{I}_{t-1}, y_{t-1}\right] &= \frac{\Sigma_{t-1}^{-1}\mu_{t-1} + \sigma_a^{-2} y_{t-1}}{\Sigma_{t-1}^{-1} + \sigma_a^{-2}} \\
Var\left[\theta_{t-1}\mid \mathcal{I}_{t-1}, y_{t-1}\right] &= \left[\Sigma_{t-1}^{-1} + \sigma_a^{-2}\right]^{-1} \\
E\left[\theta_t\mid \mathcal{I}_{t-1}, y_{t-1}\right] &= \bar\theta + \rho\left(E\left[\theta_{t-1}\mid \mathcal{I}_{t-1}, y_{t-1}\right] - \bar\theta\right) \\
Var\left[\theta_t\mid \mathcal{I}_{t-1}, y_{t-1}\right] &= \rho^2\left[\Sigma_{t-1}^{-1} + \sigma_a^{-2}\right]^{-1} + \sigma_\theta^2
\end{align*}

Then, use these as priors and update them with $s_t$:
\begin{equation}
\mu_t = E\left[\theta_t \mid \mathcal{I}_t\right] = \frac{\left[\rho^2\left[\Sigma_{t-1}^{-1}+\sigma_a^{-2}\right]^{-1}+\sigma_\theta^2\right]^{-1} E\left[\theta_t\mid \mathcal{I}_{t-1},y_{t-1}\right] + \omega_t\sigma_\varepsilon^{-2} s_t}{\left[\rho^2\left[\Sigma_{t-1}^{-1}+\sigma_a^{-2}\right]^{-1}+\sigma_\theta^2\right]^{-1} + \omega_t\sigma_\varepsilon^{-2}} \tag{15}
\end{equation}
\begin{equation}
\Sigma_t = Var\left[\theta_t \mid \mathcal{I}_t\right] = \left[\left[\rho^2\left[\Sigma_{t-1}^{-1}+\sigma_a^{-2}\right]^{-1}+\sigma_\theta^2\right]^{-1} + \omega_t\sigma_\varepsilon^{-2}\right]^{-1} \tag{16}
\end{equation}

Multiply and divide equation (15) by $\Sigma_t$ as defined in equation (16) to get
\begin{equation}
\mu_t = \left(1 - \omega_t\sigma_\varepsilon^{-2}\Sigma_t\right)\left[\bar\theta(1-\rho) + \rho\left((1-M_t)\mu_{t-1} + M_t y_{t-1}\right)\right] + \omega_t\sigma_\varepsilon^{-2}\Sigma_t s_t, \tag{17}
\end{equation}
where $M_t = \sigma_a^{-2}\left(\Sigma_{t-1}^{-1} + \sigma_a^{-2}\right)^{-1}$.

Equations (16) and (17) constitute the Kalman filter describing the firm's dynamic information problem. Importantly, note that $\Sigma_t$ is deterministic.
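A compact way to see that the conditional-variance sequence is deterministic is to iterate equation (16) directly. The snippet below is an illustration only (not the authors' code); the parameter values and the constant data inflow are arbitrary choices.

```python
# Iterating the posterior-variance recursion in equation (16).
rho, sig_a2, sig_th2, sig_e2 = 0.9, 0.1, 0.1, 0.5

def next_variance(Sigma_prev, omega):
    """One step of eq. (16): variance of theta_t after seeing last period's
    output signal (precision 1/sig_a2) and omega new data points."""
    prior_var = rho**2 / (1.0 / Sigma_prev + 1.0 / sig_a2) + sig_th2
    post_prec = 1.0 / prior_var + omega / sig_e2
    return 1.0 / post_prec

Sigma = 1.0                      # initial conditional variance (arbitrary)
for t in range(25):
    Sigma = next_variance(Sigma, omega=2.0)   # no data realizations are needed
print(Sigma, 1.0 / Sigma)        # converges; 1/Sigma is the stock of knowledge Omega
```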

A.2 Making the Problem Recursive: Proof of Lemmas 1 and 2

Lemma. The sequence problem of the firm can be solved as a non-stochastic recursive problem with one state variable.

Consider the firm's sequential problem:
\[
\max \; E_0 \sum_{t=0}^{\infty}\left(\frac{1}{1+r}\right)^t\left(P_t A_t k_t^{\alpha} - r k_t\right).
\]


We can take a first order condition with respect to $a_t$ and get that at any date $t$ and for any level of $k_t$, the optimal choice of technique is
\[
a_t^* = E[\theta_t \mid \mathcal{I}_t].
\]
Given the choice of $a_t$, using the law of iterated expectations, we have
\[
E\left[(a_t - \theta_t - \varepsilon_{a,t})^2 \mid \mathcal{I}_s\right] = E\left[Var[\theta_t \mid \mathcal{I}_t] \mid \mathcal{I}_s\right]
\]
for any date $s \leq t$. We will show that this object is not stochastic and therefore is the same for any information set that does not contain the realization of $\theta_t$.

We can restate the sequence problem recursively. Let us define the value function as
\[
V\left(\{s_{t,m}\}_{m\in[1:n_t]}, y_{t-1}, \mu_{t-1}, \Sigma_{t-1}\right) = \max_{k_t, a_t} \; E\left[A_t k_t^{\alpha} - r k_t + \left(\frac{1}{1+r}\right) V\left(\{s_{t+1,m}\}_{m\in[1:\omega_{t+1}]}, y_t, \mu_t, \Sigma_t\right) \,\Big|\, \mathcal{I}_{t-1}\right],
\]

with $\omega_{it}$ being the net amount of data added to the data stock. Taking a first order condition with respect to the technique choice conditional on $\mathcal{I}_t$ reveals that the optimal technique is $a_t^* = E[\theta_t\mid\mathcal{I}_t]$. We can substitute the optimal choice of $a_t$ into $A_t$ and rewrite the value function as

\[
V\left(\{s_{t,m}\}_{m\in[1:n_t]}, y_{t-1}, \mu_{t-1}, \Sigma_{t-1}\right) = \max_{k_t} \; E\bigg[\left(\bar A - \left(E[\theta_t\mid\mathcal{I}_t] - \theta_t - \varepsilon_{a,t}\right)^2\right)k_t^{\alpha} - r k_t + \left(\frac{1}{1+r}\right)V\left(\{s_{t+1,m}\}_{m\in[1:\omega_{t+1}]}, y_t, \mu_t, \Sigma_t\right)\,\Big|\,\mathcal{I}_{t-1}\bigg].
\]
Note that $\varepsilon_{a,t}$ is orthogonal to all other signals and shocks and has a zero mean. Thus,
\[
V\left(\{s_{t,m}\}_{m\in[1:n_t]}, y_{t-1}, \mu_{t-1}, \Sigma_{t-1}\right) = \max_{k_t} \; E\bigg[\left(\bar A - \left(\left(E[\theta_t\mid\mathcal{I}_t] - \theta_t\right)^2 + \sigma_a^2\right)\right)k_t^{\alpha} - r k_t + \left(\frac{1}{1+r}\right)V\left(\{s_{t+1,m}\}_{m\in[1:\omega_{t+1}]}, y_t, \mu_t, \Sigma_t\right)\,\Big|\,\mathcal{I}_{t-1}\bigg].
\]

Notice that $E\left[\left(E[\theta_t\mid\mathcal{I}_t] - \theta_t\right)^2\mid\mathcal{I}_{t-1}\right]$ is the time-$t$ conditional (posterior) variance of $\theta_t$, and the posterior variance of beliefs is $E\left[\left(E[\theta_t\mid\mathcal{I}_t] - \theta_t\right)^2\right] := \Sigma_t$. Thus, expected productivity is $E[A_t] = \bar A - \Sigma_t - \sigma_a^2$, which determines the within-period expected payoff. Additionally, using the Kalman system equation (16), this posterior variance is
\[
\Sigma_t = \left[\left[\rho^2\left(\Sigma_{t-1}^{-1} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1} + \omega_t\sigma_\varepsilon^{-2}\right]^{-1},
\]

which depends only on $\Sigma_{t-1}$, $n_t$, and other known parameters. It does not depend on the realization of the data. Thus, $\{s_{t,m}\}_{m\in[1:n_t]}$, $y_{t-1}$, and $\mu_t$ do not appear on the right side of the value function equation; they are only relevant for determining the optimal action $a_t$. Therefore, we can rewrite the value function as:

\[
V(\Sigma_t) = \max_{k_t}\left[\left(\bar A - \Sigma_t - \sigma_a^2\right)k_t^{\alpha} - \pi\delta_{it} - \Psi\left(\Delta\Omega_{i,t+1}\right) - r k_t + \left(\frac{1}{1+r}\right)V(\Sigma_{t+1})\right]
\]
\[
\text{s.t.}\quad \Sigma_{t+1} = \left[\left[\rho^2\left(\Sigma_t^{-1} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1} + \omega_{it}\sigma_\varepsilon^{-2}\right]^{-1}
\]

Data use is incorporated in the stock of knowledge through (9), which still represents one state

variable.

A.3 Equilibrium and Steady State Without Trade in Data

Capital choice. The first order condition for the optimal capital choice is
\[
\alpha P_t A_{it} k_t^{\alpha-1} - \Psi'(\cdot)\frac{\partial \Omega_{t+1}}{\partial k_{it}} - r + \left(\frac{1}{1+r}\right)V'(\cdot)\frac{\partial \Omega_{t+1}}{\partial k_{it}} = 0,
\]
where $\partial \Omega_{t+1}/\partial k_{it} = \alpha z_i k_{it}^{\alpha-1}\sigma_\varepsilon^{-2}$ and $\Psi'(\cdot) = 2\psi\left(\Omega_{i,t+1} - \Omega_{it}\right)$. Substituting in the partial derivatives and for $\Omega_{i,t+1}$, we get
\[
k_{it} = \left[\frac{\alpha}{r}\left(P_t A_{it} + z_i\sigma_\varepsilon^{-2}\left(\left(\frac{1}{1+r}\right)V'(\cdot) - 2\psi(\cdot)\right)\right)\right]^{1/(1-\alpha)} \tag{18}
\]

Differentiating the value function in Lemma 1 reveals that the marginal value of data is
\[
V'(\Omega_{it}) = P_t k_{it}^{\alpha}\frac{\partial A_{it}}{\partial \Omega_{it}} - \Psi'(\cdot)\left(\frac{\partial \Omega_{t+1}}{\partial \Omega_t} - 1\right) + \left(\frac{1}{1+r}\right)V'(\cdot)\frac{\partial \Omega_{t+1}}{\partial \Omega_t},
\]
where $\partial A_{it}/\partial \Omega_{it} = \Omega_{it}^{-2}$ and $\partial \Omega_{t+1}/\partial \Omega_t = \rho^2\left[\rho^2 + \sigma_\theta^2\left(\Omega_{it} + \sigma_a^{-2}\right)\right]^{-2}$.
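As a quick sanity check on these derivatives (an illustration only, not part of the derivation), the analytic expression for $\partial\Omega_{t+1}/\partial\Omega_t$ can be compared with a finite difference of the law of motion:

```python
# Check the analytic derivative of the law of motion against a finite difference.
rho, sig_a2, sig_th2, sig_e2 = 0.9, 0.1, 0.1, 0.5

def law_of_motion(Om, n=2.0):
    # depreciated old knowledge plus the precision of n new data points
    return 1.0 / (rho**2 / (Om + 1.0 / sig_a2) + sig_th2) + n / sig_e2

Om = 8.0
analytic = rho**2 / (rho**2 + sig_th2 * (Om + 1.0 / sig_a2)) ** 2
numeric = (law_of_motion(Om + 1e-6) - law_of_motion(Om - 1e-6)) / 2e-6
print(analytic, numeric)   # the two agree to several decimal places
```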

To solve this, we start with a guess of V ′ and then solve the non-linear equation above for kit.


Then, update our guess of V .

Steady state. The steady state is where capital and data are constant. For data to be constant, we need $\Omega_{i,t+1} = \Omega_{it}$. Using the law of motion for $\Omega$ (eq. 9), we can rewrite this as
\[
\omega_{ss}\sigma_\varepsilon^{-2} + \left[\rho^2\left(\Omega_{ss} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1} = \Omega_{ss} \tag{19}
\]
This equates the inflows of data, $\omega_{it}\sigma_\varepsilon^{-2}$, with the outflows of data, $\Omega_{it} - \left[\rho^2\left(\Omega_{i,t} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1}$. Given a number of new data points $\omega_{ss}$, this pins down the steady-state stock of data. The number of data points depends on the steady-state level of capital. The steady-state level of capital is given by (18), with $A_{ss}$ depending on $\Omega_{ss}$ and on a steady-state level of $V'_{ss}$. We solve for that steady-state marginal value of data next.

If data is constant, then the level and derivative of the value function are also constant. Equating $V'(\Omega_{it}) = V'(\Omega_{i,t+1})$ allows us to solve for the marginal value of data analytically, in terms of $k_{ss}$, which in turn depends on $\Omega_{ss}$:
\[
V'_{ss} = \left[1 - \left(\frac{1}{1+r}\right)\frac{\partial \Omega_{t+1}}{\partial \Omega_t}\Big|_{ss}\right]^{-1} P_{ss}\, k_{ss}^{\alpha}\,\Omega_{ss}^{-2} \tag{20}
\]
Note that the data adjustment term $\Psi'(\cdot)$ dropped out because in steady state $\Delta\Omega = 0$ and we assumed that $\Psi'(0) = 0$.

Equations (18), (19) and (20) form a system of three equations in three unknowns. The solution to this system delivers the steady-state levels of data, its marginal value, and capital.
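For concreteness, this three-equation steady-state system can be solved numerically. The sketch below is not the authors' code: it treats the goods price as fixed at $P$, uses the illustrative parameter values from the figures, and starts the root finder from a hand-chosen guess.

```python
import numpy as np
from scipy.optimize import fsolve

rho, r, alpha = 0.9, 0.2, 0.5
A_bar, P, z = 1.0, 0.75, 1.0
sig_a2, sig_th2, sig_e2 = 0.1, 0.1, 0.5

def residuals(x):
    Omega, k, Vprime = x
    A = A_bar - 1.0 / Omega - sig_a2                               # expected quality
    dOm = rho**2 / (rho**2 + sig_th2 * (Omega + 1.0 / sig_a2))**2  # dOmega'/dOmega
    # eq (18), with the adjustment term Psi'(0) = 0 in steady state
    eq_k = (alpha / r * (P * A + z / sig_e2 * Vprime / (1 + r)))**(1 / (1 - alpha)) - k
    # eq (19): new data (omega = z*k^alpha, no trade) replaces depreciated data
    eq_Om = z * k**alpha / sig_e2 + 1.0 / (rho**2 / (Omega + 1.0 / sig_a2) + sig_th2) - Omega
    # eq (20): steady-state marginal value of data
    eq_V = P * k**alpha / Omega**2 / (1 - dOm / (1 + r)) - Vprime
    return [eq_k, eq_Om, eq_V]

Omega_ss, k_ss, Vp_ss = fsolve(residuals, x0=[10.0, 2.0, 0.01])
print(Omega_ss, k_ss, Vp_ss)
```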

A.4 Equilibrium With Trade in Data

To simplify our solutions, it is helpful to do a change of variables and optimize not over the amount of data purchased or sold, $\delta_{it}$, but rather over the closely related net addition to the data stock, $\omega_{it}$. We also substitute in $n_{it} = z_i k_{it}^{\alpha}$ and the optimal choice of technique $a_{it}$. The equivalent


problem becomes
\[
V(\Omega_{i,t}) = \max_{k_{i,t},\,\omega_{i,t}} \; P_t\left(\bar A - \Omega_{i,t}^{-1} - \sigma_a^2\right)k_{i,t}^{\alpha} - \pi\left(\frac{\omega_{it} - z_i k_{it}^{\alpha}}{\mathbb{1}_{\omega_{it}>n_{it}} + \iota\,\mathbb{1}_{\omega_{it}<n_{it}}}\right) - r k_{i,t} - \Psi\left(\Delta\Omega_{i,t+1}\right) + \left(\frac{1}{1+r}\right)V(\Omega_{i,t+1}) \tag{21}
\]
\[
\text{where}\quad \Omega_{i,t+1} = \left[\rho^2\left(\Omega_{i,t} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1} + \omega_{it}\sigma_\varepsilon^{-2} \tag{22}
\]

Capital choice. The first order condition for the optimal capital choice is
\[
\mathrm{FOC}[k_{it}]: \quad \alpha P_t A_{it} k_{it}^{\alpha-1} + \frac{\pi\alpha z_i k_{it}^{\alpha-1}}{\mathbb{1}_{\omega_{it}>n_{it}} + \iota\,\mathbb{1}_{\omega_{it}<n_{it}}} - r = 0 \tag{23}
\]
Solving for $k_{it}$ gives
\[
k_{it} = \left(\frac{1}{r}\left(\alpha P_t A_{it} + \bar\pi\alpha z_i\right)\right)^{\frac{1}{1-\alpha}} \tag{24}
\]
where $\bar\pi \equiv \pi/\left(\mathbb{1}_{\omega_{it}>n_{it}} + \iota\,\mathbb{1}_{\omega_{it}<n_{it}}\right)$.

• Note that a firm’s capital decision is optimally static. It does not depend on the future

marginal value of data (i.e., V ′(Ωi,t+1)) explicitly.

Data use choice. The first order condition for the optimal $\omega_{it}$ is
\[
\mathrm{FOC}[\omega_{it}]: \quad -\Psi'(\cdot)\frac{\partial \Delta\Omega_{i,t+1}}{\partial \omega_{it}} - \pi + \left(\frac{1}{1+r}\right)V'(\Omega_{i,t+1})\frac{\partial \Omega_{i,t+1}}{\partial \omega_{it}} = 0 \tag{25}
\]
where $\partial \Omega_{i,t+1}/\partial \omega_{it} = \sigma_\varepsilon^{-2}$.

Steady state. The steady state is where capital and data are constant. For data to be constant, we need $\Omega_{i,t+1} = \Omega_{it}$. Using the law of motion for $\Omega$ (eq. 9), we can rewrite this as
\[
\omega_{ss}\sigma_\varepsilon^{-2} + \left[\rho^2\left(\Omega_{ss} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1} = \Omega_{ss} \tag{26}
\]
This equates the inflows of data, $\omega_{it}\sigma_\varepsilon^{-2}$, with the outflows of data, $\Omega_{it} - \left[\rho^2\left(\Omega_{i,t} + \sigma_a^{-2}\right)^{-1} + \sigma_\theta^2\right]^{-1}$. Given a number of new data points $\omega_{ss}$, this pins down the steady-state stock of data. The number of data points depends on the steady-state level of capital. The steady-state level of capital is given


by equation (24), with $A_{ss}$ depending on $\Omega_{ss}$ and on a steady-state level of $V'_{ss}$. We solve for that steady-state marginal value of data next.

If data is constant, then the level and derivative of the value function are also constant. Equating $V'(\Omega_{it}) = V'(\Omega_{i,t+1})$ allows us to solve for the marginal value of data analytically, in terms of $k_{ss}$, which in turn depends on $\Omega_{ss}$:
\[
V'_{ss} = \left[1 - \left(\frac{1}{1+r}\right)\frac{\partial \Omega_{t+1}}{\partial \Omega_t}\Big|_{ss}\right]^{-1} P_{ss}\, k_{ss}^{\alpha}\,\Omega_{ss}^{-2} \tag{27}
\]
Note that the data adjustment term $\Psi'(\cdot)$ dropped out because in steady state $\Delta\Omega = 0$ and we assumed that $\Psi'(0) = 0$.

From the first order condition for $\omega_{it}$ (eq. 25), with $\Psi'(0) = 0$ in steady state, the steady-state marginal value of data is
\[
V'_{ss} = (1+r)\pi\sigma_\varepsilon^2 \tag{28}
\]
Equations (24), (26), (27) and (28) form a system of four equations in four unknowns. The solution to this system delivers the steady-state levels of capital, knowledge, data, and the marginal value of data.
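Because the data price pins down $V'_{ss}$ directly through (28), the steady state with data trade can be found by solving one equation in $\Omega_{ss}$. The sketch below is not the authors' code: the data price $\pi$ is a hypothetical value chosen only for illustration, and it assumes the firm ends up a net data buyer (so $\bar\pi = \pi$), which should be verified from the sign of $\delta_{ss}$ in the output.

```python
from scipy.optimize import brentq

rho, r, alpha = 0.9, 0.2, 0.5
A_bar, P, z = 1.0, 0.75, 1.0
sig_a2, sig_th2, sig_e2 = 0.1, 0.1, 0.5
pi = 0.015                                   # hypothetical data price

Vp_ss = (1 + r) * pi * sig_e2                # eq (28)

def k_of(Omega):                             # eq (24), assuming pi_bar = pi (net buyer)
    A = A_bar - 1.0 / Omega - sig_a2
    return ((alpha * P * A + pi * alpha * z) / r) ** (1 / (1 - alpha))

def gap(Omega):                              # eq (27) minus eq (28)
    dOm = rho**2 / (rho**2 + sig_th2 * (Omega + 1.0 / sig_a2)) ** 2
    return P * k_of(Omega) ** alpha / Omega**2 / (1 - dOm / (1 + r)) - Vp_ss

Omega_ss = brentq(gap, 2.0, 200.0)           # bracket chosen for these parameters
k_ss = k_of(Omega_ss)
omega_ss = sig_e2 * (Omega_ss - 1.0 / (rho**2 / (Omega_ss + 1.0 / sig_a2) + sig_th2))  # eq (26)
delta_ss = omega_ss - z * k_ss ** alpha      # net data purchases; should be > 0 here
print(Omega_ss, k_ss, omega_ss, delta_ss)
```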

A.5 Proofs of Propositions 5, 6, 7 and 8

In progress.

Proposition 8 Proof: We proceed in two parts: concavity then convexity. In Part 1 of the proof,

we prove that net data inflows must be concave. In part 2 of the proof, we work backwards in

time to a period when data is sufficiently scarce and show that net data inflow must have a convex

region.

Part 1: Concavity. Firm productivity has an upper bound. Because the square of a real number is always non-negative, $A_{it} \leq \bar A$. Since the optimal capital choice $k_{it}$ is a continuous, finite function of $A_{it}$, each firm has a finite optimal investment, $k_{it} \leq \bar k$. Thus, each firm generates a finite maximum number of data points, $z_i k_{it}^{\alpha}\sigma_\varepsilon^{-2} \leq \bar n_i$. Since the inflow of data per firm is bounded above by a finite number, the integral on the unit interval of all firms' data production is bounded above by a finite number, $\int_0^1 \bar n_i\, di$. For $\rho \leq 1$ and $\sigma_\theta^2 \geq 0$, data outflows (11) are bounded below by zero. But note that outflows are not bounded above. As the stock of knowledge $\Omega_{it} \to \infty$, outflows approach infinity. Thus, there must exist a crossing point after which outflows exceed inflows and $\Omega_{it}$ no longer has a positive growth rate. If $\Omega_{it}$ no longer grows for any firm $i$, optimal investment $k_{it}$ and output do not grow for any firm $i$. Thus, aggregate output has a finite upper bound. Any continuous process $Y_t$ with a finite starting point and a finite upper bound must have some region where $\partial^2 Y_t/\partial t^2 < 0$.

Part 2: Convexity. Data outflows are approximately linear (prove). Thus it suffices to look at

the convexity of data inflows, as a function of the current stock of knowledge Ωt. Rest of proof in

progress.

B Properties of the Model: Numerical Analysis

This section contains computational details, additional comparative statics, and steady-state numerical analyses that illustrate how our data economy responds to changes in parameter values for one or more firms.

B.1 Computational procedure.

Lemma 2 casts this problem as a one-state-variable problem, which suggests a simple functional approximation. However, that result presupposed that the agent knew the future path of prices. The difficulty in computing equilibria numerically is determining the equilibrium path of prices along the transition. We propose two possible recursive procedures.

Solving for equilibrium with two types. Suppose there is a continuum of firms. All firms are identical, except for one characteristic, which takes on one of two values. Suppose for now that the characteristic is data savviness, $z_i \in \{z_H, z_L\}$ for all $i$. Let $\lambda$ be the fraction of less data-savvy firms, $\lambda = \Pr(z_i = z_L)$. This is a known and fixed parameter for now. Then, there is a value function for each of the two types, and a data stock for each of the two types.

\[
V^H\!\left(\Omega^H_t, \Omega^L_t\right) = \max_{k^H_t,\,\omega^H_{t+1}} \; P\!\left(\bar\Omega^H_t, \bar\Omega^L_t\right)\left(\bar A - \left(\Omega^H_t\right)^{-1} - \sigma_a^2\right)\left(k^H_t\right)^{\alpha} - \Psi\!\left(\Delta\Omega^H_{t+1}\right) + (\pi/\iota)\left(n^H_{t+1} - \omega^H_{t+1}\right) - r k^H_t + \left(\frac{1}{1+r}\right)V\!\left(\Omega^H_{t+1}\right) \tag{29}
\]


where the bar over $\bar\Omega^H_t$ is there simply to remind us that the price depends on all other agents' stock of knowledge, not this individual firm's. In other words, when we take first order conditions, we need to take prices as given. But to be clear, in equilibrium $\bar\Omega^H_t = \Omega^H_t$.

\[
V^H\!\left(\Omega^H_t, \Omega^L_t\right) = \max_{k^H_t,\,\omega^H_{t+1}} \; P\!\left(\bar\Omega^H_t, \bar\Omega^L_t\right)\left(\bar A - \left(\Omega^H_t\right)^{-1} - \sigma_a^2\right)\left(k^H_t\right)^{\alpha} - \Psi\!\left(\Delta\Omega^H_{t+1}\right) + (\pi/\iota)\left(n^H_{t+1} - \omega^H_{t+1}\right) - r k^H_t + \left(\frac{1}{1+r}\right)V\!\left(\Omega^H_{t+1}\right) \tag{30}
\]
\[
V^L\!\left(\Omega^H_t, \Omega^L_t\right) = \max_{k^L_t,\,\omega^L_{t+1}} \; P\!\left(\bar\Omega^H_t, \bar\Omega^L_t\right)\left(\bar A - \left(\Omega^L_t\right)^{-1} - \sigma_a^2\right)\left(k^L_t\right)^{\alpha} - \Psi\!\left(\Delta\Omega^L_{t+1}\right) + (\pi/\iota)\left(n^L_{t+1} - \omega^L_{t+1}\right) - r k^L_t + \left(\frac{1}{1+r}\right)V\!\left(\Omega^L_{t+1}\right) \tag{31}
\]

where each type’s stock of knowledge ΩH ,ΩL evolves according to the law of motion in 9.

Aside from replacing the firm-specific $i$ subscript with the type $H$ or $L$, the other difference in this formulation of the problem is that we have replaced the known price $P_t$ with a function $P$ of $(\bar\Omega^H_t, \bar\Omega^L_t)$. We will need to approximate that function. But once we do, knowing the data choices is as good as knowing tomorrow's price.

Create a grid of initial data stocks for H and L types. Guess a price function and a value

function. For both types, choose the optimal k, ω. Clear the goods market to get an equilibrium

price. Compute the expected utility / value function. Update the price and value function guesses.

Iterate until convergence.

More general data heterogeneity. Let Γt represent the distribution of firms’ stock of knowl-

edge Ωit at time t. For a continuum of firms, Γt encapsulates exactly what measure of firms have

how much data. More specifically, Γ is the distribution of forecast precisions when firms forecast

θt.

In principle, one needs to keep track of the entire distribution Γ in order to update the aggregate

data stock. Having a distribution as a state variable is problematic. However, we find that the law

of motion for the aggregate stock of knowledge is close to linear. That means that keeping track only


of the average stock of knowledge yields a good approximation to the true problem. This is similar to the type of approximation Krusell and Smith use to keep track of their heterogeneous-agent economy.

B.2 Comparative Statics

Efficiency of Data Processing (z) Suppose that only one firm i improves its data processing

(zi). All other firms are in steady state. Firm i’s data profits are increasing in zi. But production

profits are relatively flat over z since the price of the output good is unchanged. See Figure 8.

This firm is decreasing its price to get more customers and more data. This result is important

because regulators are struggling to figure out how to measure market power in data markets. It suggests that markups, the traditional approach, or high primary-market profits are not where to look for evidence of data-market dominance.

When all firms are symmetric and grow together, the steady-state stock of knowledge is de-

creasing in z.

Steady-state profits (equivalently, the steady-state level of the value function) are increasing in $z$, but the increase is due to a large increase in data profits, $\pi(n_{ss} - \omega_{ss})$. Production profits, $P_{ss}A_{ss}k_{ss}^{\alpha} - rk_{ss}$, can actually become negative for sufficiently large $z$. Productive data miners are willing to make a loss on their final good in order to mine and sell data. This relates to the example mentioned in the abstract and introduction about firms giving away apps for free: firms may be willing to distribute an app and make losses on its production in order to harvest and sell the data it generates.

Negative production profits do not always arise. The price of data needs to be sufficiently high, and the variances ($\sigma_a^2$, etc.) also need to be large. Figure 8 illustrates these differences.

Key assumptions needed to observe this are: (1) symmetric firms, (2) exogenous data price,

and (3) the final good sells at its market-clearing price.

In terms of observable economic predictions, this means the following: an industry that is relatively more productive at mining data will accumulate a smaller stock of knowledge in the steady state. This industry has a comparative advantage in mining data and does not need to use a

lot of data in order to produce more data. Thus, the industry chooses to sell most of its data and

maintains a relatively small stock of knowledge.


Figure 8: Becoming a data specialist: one firm improves its data processing. It gets more data, but a small decline in goods market profits.


Figure 9: Higher data adjustment costs (ψ) slow convergence to steady state. The blue line is without any adjustment cost.

Adjustment Costs

Changes in adjustment cost slow or speed a firm’s convergence to its data steady-state. As such,

when one firm’s adjustment cost changes, the aggregate dynamic is unchanged, but this cost change

interacts with the S-shaped dynamics of that firm's growth. Figure 9 shows that with very low

adjustment costs (small ψ), it is still possible for a firm’s transition path (of Ω) to have an S-shape.

As the adjustment cost increases, a firm’s transition path becomes linear.

Heterogeneous Firms

This is an analysis of how the steady state changes as firms become more or less heterogeneous.

This is a first step in thinking about the consequences of market competition.

Consider an industry with mass 1 of firms in which mass λ have data mining parameter z1

(fixed at 1) and mass $(1-\lambda)$ have data mining parameter $z_2$. With asymmetric firms, when the mass of the more productive firms is sufficiently small, the steady-state stock of knowledge of these firms is increasing in their own data mining parameter.

Figure 10 shows that the profits of the z1 type firms decrease as the z2 firms become more

productive (crowding out). The two types of firms make similar profits from production (selling the final good). The $z_2$-type firms make large positive profits from selling their data, and the $z_1$-type firms are net buyers of data.

Figure 10: Data processing asymmetry grows: firm size inequality, profits and knowledge diverge.
